@wiredgeist: Forget ChatGPT! 🚀 Here’s how to set up your own locally running Large Language Model (LLM) in just a few steps. 🖥️✨ Using Ollama, Docker, and OpenWebUI, you’ll take control of your AI without relying on external servers. Watch as I break it down step by step to get you up and running. #ai #local #llm #forfree #techtiktok #ollama
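The stack named in the caption can be sketched roughly like this — a minimal sketch based on the publicly documented Ollama install script and Open WebUI Docker command; the model name (`llama3.2`) and host port (`3000`) are example choices, not values from the video:

```shell
# Install Ollama (Linux/macOS installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and test a model locally (example model; pick any from the Ollama library)
ollama pull llama3.2
ollama run llama3.2 "Hello, are you running locally?"

# Run Open WebUI in Docker, pointed at the host's Ollama server;
# the UI then becomes available at http://localhost:3000
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

Everything runs on your own machine, so no prompts or data leave it; the trade-off, as several comments below note, is that model quality is bounded by your RAM and GPU.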
Can I update it regularly through the updater from the Ollama website, or do I have to lose everything and start from scratch with every new update?
2024-12-03 23:28:21
1
levy k :
Can I get this for Claude AI?
2024-12-01 23:18:36
1
Senzo_Shinga :
Can it generate malicious code without limitations?
2024-12-03 15:01:14
1
itsNate :
Why wouldn't the code it produces work?
2024-12-03 00:48:53
1
GonFreecss :
Can I run that on my Vivobook S16 Flip with an Intel Core i9-13900H, 16 GB RAM, and Iris Xe graphics?
2024-12-02 21:14:25
1
Djal Beqar :
Can I do the same on the iPhone?
2024-12-02 21:05:24
1
ᅠᅠᅠᅠᅠᅠᅠᅠᅠᅠᅠᅠᅠᅠᅠ :
32 GB of RAM and it still buffers — it demands a lot of system resources.
2024-12-01 20:54:34
24
Mqi :
too many sweats, but that's the life
2024-12-03 18:07:34
2
Korpip :
What model can run on a Raspberry Pi 3B+? It would be great to see how mobile one of these AIs could be with limited internet access.
2024-12-01 16:39:51
1
Bra Lucky :
Can I host it on my website and upload the models to the file directories on the server? Help please.
2024-12-03 17:28:15
2
∅ :
Can I run agents with this?
2024-12-02 03:31:19
1
comentaweas :
Needs a lot of RAM.
2024-12-01 19:22:15
8
Blekiano :
Isn't Jan.ai easier for the average guy?
2024-11-28 23:48:36
2
cristobalovo :
But even in the terminal you are hitting the Ollama API, which means you still have limits.
2024-12-02 18:32:59
1
♾️ :
Make me one that gives trade signals.
2024-11-29 00:48:59
4
H. :
What would be the risk of doing it?
2024-12-02 22:18:35
1
HolyTinz :
How do you get the UI?
2024-12-02 04:43:27
2
NICO :
Can this run on my Windows 7? 😭🙏🏻
2024-12-02 14:57:15
1
J U S W A A A A :
This is awesome
2024-12-02 15:48:56
1
wait im goated :
It is not fair to compare ChatGPT and Ollama. Yeah, you can run an Ollama model, but only the very weak ones, since the ones comparable to ChatGPT require a ton of hardware resources.
2024-12-02 20:54:46
8
Jesús Zambrano :
what bigger model can you recommend?
2024-12-02 00:28:45
1
appleuser5472934 :
Not the same LLM that's used in ChatGPT.
2024-12-01 13:22:27
4
vince arizala :
Needs a high-end GPU.
2024-12-02 18:25:24
1
jayjaybirdsnest :
But that's not ChatGPT.
2024-12-01 17:10:15
3