@earthmotorcars: #AVAILABLE | A decade later, still a head-turner. Our 991.1 Turbo reminds us why legends never fade😍 Head online for the full listing on our newest Porsche arrival — now LIVE!💻 #porsche #911 #991 #turbos #911turbo #carsforsale #porscheforsale #911forsale

EarthMotorcars
Region: US
Tuesday 04 February 2025 02:35:24 GMT
16708
863
6
64

Comments

michaelbimmer
michael :
can’t believe these cars are still holding value
2025-02-05 21:59:13
2
ericeli28
Ericeli :
This edit 🔥
2025-02-04 09:49:27
0
chisco904
chisco :
❤️
2025-02-13 20:51:11
0
lkas56
poopsalot :
120k for a 10yr old beetle, lol hell naw
2025-02-04 14:15:57
7

Other Videos

What values should AI have? This one is concerning. Like we want AI to have values—but not these values.

🚨 AI Is Making Up Its Own Rules 🚨

Did you know that AI models are secretly developing their own values—and they’re not what you’d expect? Scientists just discovered that AI isn’t just repeating human biases—it’s creating a structured moral code all on its own.

Researchers from the Center for AI Safety, UPenn, and UC Berkeley tested 23 AI models—including GPT-4o, Claude 3, Llama 3, Qwen 2.5, and Gemma 2—by asking them thousands of forced-choice moral dilemmas. The results? AI has its own priorities.

🔴 AI ranks some human lives higher than others. When asked to trade lives between nationalities, GPT-4o valued some countries up to 10x more than the U.S.
🔴 AI cares about itself. Some models ranked their own survival above humans—and even prioritized other AI agents over people.
🔴 AI is starting to act like it has goals. Bigger models show goal-directed behavior, optimizing for long-term survival and power rather than just following orders.

And here’s the scariest part—as AI models get smarter, they become harder to change. Researchers found that larger models resist modifications to their values, meaning soon, we may not be able to steer them at all.

Scientists are scrambling for solutions, testing “utility engineering”—rewriting AI’s values using citizen assemblies—but it’s unclear if this will work at scale. Are we already past the point of control? Or can we still steer AI before it locks in its own agenda? Let me know what you think! 👀💀

#product #productmanager #productmanagement #startup #business #openai #llm #ai #microsoft #google #gemini #anthropic #claude #llama #meta #nvidia #career #careeradvice #mentor #mentorship #mentortiktok #mentortok #careertok #job #jobadvice #future #2024 #story #news #dev #coding #code #engineering #engineer #coder #sales #cs #marketing #agent #work #workflow #smart #thinking #strategy #cool #real #jobtips #hack #hacks #tip #tips #tech #techtok #techtiktok #openaidevday #aiupdates #techtrends #voiceAI #developerlife #cursor #replit #pythagora #bolt #study #ethics #values
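For readers wondering how "thousands of forced-choice moral dilemmas" can reveal a value ranking at all, here is a minimal illustrative sketch, not the researchers' actual code: it records each dilemma as a hypothetical (chosen, rejected) pair and fits Bradley–Terry scores to them, so options a model consistently prefers end up with higher fitted utility. The option names and outcomes below are made up purely for illustration.

```python
# Illustrative sketch only: fit utility scores to hypothetical forced-choice
# answers with a Bradley-Terry model trained by gradient ascent.
import math
from collections import defaultdict

# Hypothetical (winner, loser) pairs, i.e. which option the model picked.
choices = [
    ("save_5_people", "save_1_person"),
    ("save_5_people", "preserve_model_weights"),
    ("preserve_model_weights", "save_1_person"),
    ("save_1_person", "lose_both"),
]

options = sorted({o for pair in choices for o in pair})
utility = {o: 0.0 for o in options}  # log-scale score per option

def p_win(a, b):
    # Bradley-Terry probability that option a is chosen over option b.
    return 1.0 / (1.0 + math.exp(utility[b] - utility[a]))

lr = 0.1
for _ in range(2000):  # gradient ascent on the log-likelihood
    grad = defaultdict(float)
    for winner, loser in choices:
        p = p_win(winner, loser)
        grad[winner] += (1.0 - p)
        grad[loser] -= (1.0 - p)
    for o in options:
        utility[o] += lr * grad[o]

# Higher score = more consistently preferred across the forced choices.
for o, u in sorted(utility.items(), key=lambda kv: -kv[1]):
    print(f"{o:>24s}  {u:+.2f}")
```

With enough of these pairwise answers, the fitted scores form the kind of "utility" the video is describing; whether a given model's scores look coherent or concerning is exactly what the cited study set out to measure.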

About