@atlas040: what's with this kid, seriously #Ella #JKTNewera #JKT48 #memberJKT #fyp

atlas
Region: ID
Friday 24 January 2025 13:10:03 GMT
Plays: 235539
Likes: 44965
Comments: 250
Shares: 1656

Comments

kyyboyop11
Kyy :
1 Show 1 Meme🫵😭
2025-01-24 17:03:21
7
.kyzshna13
keyyiiì. :
ellaa😭😭😭
2025-01-27 06:54:39
0
yanzyanz65
Yanz :
catcat is traumatized 🤣
2025-01-28 03:53:17
0
dnsptra29
dens project 🌀 :
HEH🫵🏻😭
2025-01-24 16:11:28
0
orang.biasa56304
........... :
the members are instantly traumatized 😔
2025-01-27 02:59:26
3
sandri_0707
Andry Julianto :
ella 🤣🤣🤣🤣🤣🤣
2025-01-24 16:39:34
0
marshacynthiaerinelia
marshawithnazwa—Ⳋ९ :
first
2025-01-24 13:29:48
0
steven.chandra07
steven Chandra :
the fridge is cracking up in the back hahaha
2025-01-24 14:42:19
2465
caatelannadii
🪄 Catelyn RAD46 / ラッド 🎀 :
Cathy is scared 😭
2025-01-25 02:45:20
439
glasess22
kopiitemmangga76 :
the longer this goes on, how come
2025-01-24 14:38:34
255
lalalo113
only_JKT48 and MAGIC 5 :
Muthe and Daysi are definitely traumatized, Daysi even kept her distance out of fear, let alone Muthe, she was genuinely startled
2025-01-24 22:29:13
228
adammm69_
Adamm⚡ :
the battery is always at 100%, damn 😹
2025-01-25 01:24:37
175
callmfvc1ss
zee asadel :
Cathy: why have my seniors turned into this 😭
2025-01-25 01:07:41
114
ndewww15
dewi Sintia :
Muthe keeps getting startled while boss lady Gita laughs her head off 😭🤣 there's always some antics
2025-01-25 02:34:24
50
firamagfira144
Fira Magfira :
watching it closely, the fridge lady is really cracking up hahaha 🤣
2025-01-24 17:09:40
94
maya_nya_muthe
May🍅🪡#SewingADream :
it's so funny when sis Muthe gets startled 🤣
2025-01-24 17:18:43
78
fauzanrroy__
RisolesMayoo :
even the fridge melts at Ella's cuteness 🫰🏼🗿
2025-01-24 22:39:57
74
bayungab.b
GAREE :
Cathy is under pressure 🗿
2025-01-24 19:37:42
51
caralimeu_07
Aisu Remonade :
Cathy looks traumatized 😭😭😭
2025-01-24 22:50:15
35
congsaiba
Cong SaiBa :
even someone like Gita is cracking up like that, whoa 🤣🤣
2025-01-25 07:24:55
33
mh_.lfn
paan :
most of Ella's life is in the comedy genre 😂
2025-01-24 21:52:48
28
sepitanpamulukaditinggal
⭐KekasihBayangan⭐ :
boss lady laughs once in a century, don't miss it, there may never be a moment like this again 🤭
2025-01-24 16:53:52
23
rikopr_id
ik :
boss lady is really cracking up 😭
2025-01-24 16:50:14
21
30830031193
katak :
Gita laughed to her heart's content 🤣
2025-01-24 22:35:12
15
epetuweetum
. :
muthe 😭😭
2025-01-24 15:49:39
15

Other Videos

Cursor composer with a local LLM, the DeepSeek R1 70B distillation? I will show you how!

Prerequisites: Git installed. Python 3 installed. Ngrok installed, with an API key from their website. Requires a Cursor Pro subscription.

Install and set up oobabooga/text-generation-webui: `git clone https://github.com/oobabooga/text-generation-webui.git`

Start the Text Generation WebUI with the API enabled. Linux: `./start_linux.sh --api` Mac: `./start_macos.sh --api` Windows: `.\start_windows.bat --api` WSL: `.\start_wsl.bat --api`

Grab the model you want off Hugging Face. You need ~46 GB of RAM to run the model in this video. I recommend running it on GPUs, but unified-memory Macs should be able to run it as well. If you're not sure what models your computer can run, figure out how much RAM or video RAM you have available and google the best R1 model/distillation for that amount. There are also different builds for GPUs and CPUs: be sure to get one that runs on GPU hardware if you have VRAM, or on CPU if you have a unified-memory Mac or regular RAM. I do not advise running the model on an Intel or AMD CPU, as they are not very quick or efficient at inference. The model I am using is the DeepSeek R1 70B distillation at 4-bit quantization.

Find the model you want to try on Hugging Face and copy its name, the part after `huggingface.co/` in the URL: `unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit` in my case.

Open your browser and navigate to the Text Generation WebUI at `http://127.0.0.1:7860`. Click Model in the left-side navigation and find the "Download model or LoRA" section. In the blank text field below, paste the name of your model from Hugging Face (again, for me it's `unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit`), then click Download. Depending on your connection speed this might take a while.

Once your model is downloaded, go to the model dropdown in the upper-left corner and select your model. Select the device(s) your memory is on and slide the slider to the amount of memory per device; in my case I have two GPUs with 23500 selected for each. Then hit Load next to the model dropdown.

Next, head over to a terminal and open an ngrok tunnel to port 5000: `ngrok http 5000`. This gives you a publicly accessible URL; copy the URL ngrok provides.

Once the model is loaded, jump over to Cursor. Go to Settings, select OpenAI API Key and enable it, then select Override OpenAI Base URL. Paste in the URL from the ngrok command and append `/v1` to the end. So if your URL is `https://xxx-xxx-x-xxx-xxx.ngrok-free.app`, you would put `https://xxx-xxx-x-xxx-xxx.ngrok-free.app/v1` into the base URL. Put in a fake OpenAI API key: `sk-test`. Click Verify.

You are now ready to open composer or chat. Select any model from OpenAI, and text-generation-webui will use the model it currently has loaded to respond.

#nvda #deepseek #llm #cursor #windsurf #development #strawberry #aidevelopment #softwaredevelopment #ide #vscode #localllm
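Before tunneling anything, it can help to confirm that the OpenAI-compatible endpoint `--api` exposes is actually serving. A minimal sketch in Python, assuming the WebUI's default API port of 5000 and the `openai` package (v1+); the model name is a placeholder, since text-generation-webui answers with whatever model is currently loaded:

```python
# Sanity-check the local OpenAI-compatible API started with --api.
# Assumes text-generation-webui's default API port, 5000.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:5000/v1",
    api_key="sk-test",  # dummy value; the local server does not validate it
)

resp = client.chat.completions.create(
    model="local",  # placeholder; the loaded model responds regardless
    messages=[{"role": "user", "content": "Reply with one short sentence."}],
)
print(resp.choices[0].message.content)
```

If this prints a completion, the server side is ready for the tunnel.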
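Once `ngrok http 5000` is running, the same check can be pointed at the public URL, using exactly the base URL and dummy key Cursor will use. The hostname below is the placeholder from the description; substitute the URL ngrok printed:

```python
# End-to-end check through the ngrok tunnel, mirroring Cursor's settings:
# base URL = ngrok URL + /v1, API key = the fake sk-test.
from openai import OpenAI

client = OpenAI(
    base_url="https://xxx-xxx-x-xxx-xxx.ngrok-free.app/v1",  # placeholder ngrok URL
    api_key="sk-test",
)

# Listing models proves the tunnel reaches the WebUI before involving Cursor.
for model in client.models.list():
    print(model.id)
```

If the model list comes back over the public URL, Cursor's Verify step should pass with the same settings.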
