@amandaxcharlotte: Can you recommend some Spanish creators to me? 🥹
amandaxcharlotte
Region: DE
Wednesday 18 December 2024 09:32:04 GMT
308309
29424
311
101
Music
Download
No Watermark .mp4 (5.5MB)
No Watermark (HD) .mp4 (29.27MB)
Watermark .mp4 (0MB)
Music .mp3
Comments
Chelly :
Ash Trevino ❤️
2024-12-18 10:40:09
2128
:) :
Flavia von THTH
2024-12-18 10:58:17
28
Ale🦖 :
laurabrunet
2024-12-20 23:07:06
0
𝒞✞ :
Ash Trevino
2024-12-18 13:09:39
4
Spanisch lernen in 3 Monaten :
Yes, follow along (I just don't do OOTDs)
2024-12-18 10:39:32
179
🌸 :
ericaavellanedaruiz
2024-12-18 09:42:21
136
𝓜𝓲𝓵𝓲𝓬𝓪‘🌺 :
how beautiful you look 💗
2024-12-18 09:36:27
60
🍓🍫 :
Ericaavellanedaruzi is amaaaazing
2024-12-18 10:07:24
78
Sarah :
Teresa Seco 😌
2024-12-18 11:18:11
57
Zoë🇩🇴 :
There are lots of good Dominican🇩🇴 TikTokers
2024-12-18 15:46:25
19
User :
Does anyone know Italian creators?
2024-12-18 13:44:32
7
Iced matcha latte🍵 :
Tati and that Apollo 😂
2024-12-18 13:27:25
12
Jess_Carmelita 🦋 :
Maeb 😭🤣
2024-12-18 20:53:32
12
HeSheItDasSmussMit :
You are so cute ❤️
2024-12-18 10:06:55
36
Kim🇮🇹🇩🇴💘 :
Yanerys🇩🇴 but she speaks 🇩🇴 Spanish
2024-12-18 14:51:47
10
ramona.5zwei :
ericaavellanedaruiz
2024-12-18 12:15:35
8
To see more videos from user @amandaxcharlotte, please go to the Tikwm homepage.
Other Videos
Does anyone run into this issue? #hoytarchery
ลืมแทบไม่ไหว - SARAN. Thanks to the Jay XD channel for the clip! #รับโปรโมทเพลง #Influencer_music
She's tiny and broken …
Just trust your heart #lonely #sadness #emptiness #sadcore #emotions #vibe #fyp #deepthoughts #cinematography #nostalgia
Cursor Composer with a local LLM, the DeepSeek R1 70B distillation? I will show you how!

Prerequisites: Git installed. Python 3 installed. Ngrok installed, with an API key from their website. Requires a Cursor Pro subscription.

Install and set up oobabooga/text-generation-webui: `git clone https://github.com/oobabooga/text-generation-webui.git`

Start the Text Generation WebUI with the API enabled. Linux: `./start_linux.sh --api` Mac: `./start_macos.sh --api` Windows: `.\start_windows.bat --api` WSL: `.\start_wsl.bat --api`

Grab the model you want from Hugging Face. You need ~46 GB of RAM to run the model in this video. I recommend running it on GPUs, but unified-memory Macs should be able to run it as well. If you're not sure which models your computer can run, figure out how much RAM or video RAM you have available and search for the best R1 model/distillation for that amount. There are also different builds for GPUs and CPUs: be sure to get one that runs on GPU hardware if you have VRAM, or on CPU if you have a unified-memory Mac or regular RAM. I do not advise running the model on Intel or AMD CPUs, as they are not very quick or efficient at inference.

The model I am using is the DeepSeek R1 70B distillation at 4-bit quantization. Find the model you want to try on Hugging Face, then copy its name: the part after `huggingface.co/` in the URL, `unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit` in my case.

Open your browser and navigate to the Text Generation WebUI at `http://127.0.0.1:7860`. Click Model in the left-side navigation and find the "Download model or LoRA" section. In the blank text field below, paste the name of your model from Hugging Face (again, for me it's `unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit`), then click Download. Depending on your connection speed, this might take a while.

Once your model is downloaded, go to the model dropdown in the upper-left corner and select your model.
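The ~46 GB figure above can be sanity-checked with back-of-the-envelope arithmetic. This is only a sketch: the 30% overhead factor for KV cache, activations, and quantization metadata is my own assumption, not a number from the video.

```python
# Rough memory estimate for a 4-bit quantized 70B-parameter model.
params = 70e9
bytes_per_param = 0.5   # 4-bit quantization = half a byte per weight
overhead = 1.30         # assumed fudge factor for KV cache, activations, metadata

weights_gb = params * bytes_per_param / 1e9
total_gb = weights_gb * overhead
print(f"weights: {weights_gb:.0f} GB, with overhead: ~{total_gb:.0f} GB")
```

The weights alone come to 35 GB, and with runtime overhead the total lands in the neighborhood of the ~46 GB quoted above.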
Select the device your RAM is on and slide the slider to the amount of RAM per device. In my case I have 2 GPUs with 23500 selected. Next, hit Load next to the model dropdown.

Then head over to a terminal and open an Ngrok tunnel to port 5000: `ngrok http 5000` This gives you a publicly accessible URL; copy the URL ngrok provides.

Once the model is loaded, jump over to Cursor. Go to Settings, select "OpenAI API Key" and enable it, then select "Override OpenAI Base URL". Paste in your URL from the ngrok command and append `/v1` to the end. So if your URL is `https://xxx-xxx-x-xxx-xxx.ngrok-free.app`, you would put `https://xxx-xxx-x-xxx-xxx.ngrok-free.app/v1` into the base URL field. Put in a fake OpenAI API key (`sk-test`) and click Verify.

You are now ready to open Composer or Chat. Select any model from OpenAI, and text-generation-webui will use the model it currently has loaded to respond.

#nvda #deepseek #llm #cursor #windsurf #development #strawberry #aidevelopment #softwaredevelopment #ide #vscode #localllm
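The base-URL override above can be exercised outside Cursor with a small script. This is a sketch under assumptions: the ngrok hostname is the placeholder from the text, the `"local"` model name is arbitrary (text-generation-webui responds with whatever model it has loaded), and it targets the standard OpenAI-style `/v1/chat/completions` path.

```python
import json
import urllib.request


def openai_base_url(ngrok_url: str) -> str:
    """Append the /v1 suffix Cursor expects to the ngrok tunnel URL."""
    return ngrok_url.rstrip("/") + "/v1"


def build_chat_request(base_url: str, prompt: str) -> tuple[str, dict]:
    """Build the endpoint URL and payload for an OpenAI-style chat call."""
    url = base_url + "/chat/completions"
    payload = {
        "model": "local",  # ignored by text-generation-webui; the loaded model answers
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload


base = openai_base_url("https://xxx-xxx-x-xxx-xxx.ngrok-free.app")
url, payload = build_chat_request(base, "Say hello")
print(url)

# Uncomment to actually hit the tunnel once the model is loaded
# (the fake sk-test key mirrors the Cursor setup above):
# req = urllib.request.Request(
#     url,
#     json.dumps(payload).encode(),
#     {"Content-Type": "application/json", "Authorization": "Bearer sk-test"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

If the printed URL matches what you pasted into Cursor plus `/v1/chat/completions`, the override is wired up correctly.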