@social_work_therapy_dog: Don’t ask me how they both learned this because I couldn’t tell you, but anyways this partially cured my depression from this past week🤷🏻‍♀️ #akcrally #rallytraining #akcrallytraining #rallydog #rallydogs #dogsportsoftiktok #dogsports

Maddie, Jovie & Nessie🤍🤎🐩
Region: US
Thursday 14 November 2024 01:50:32 GMT

Comments

missmandm06
Marisa Patton :
I love training multiple dogs. When they all do something the same way it's very telling that I'm the problem 😂 this one just earns style points though!
2024-11-14 02:17:32
22
doberbjorn
björn :
just came across this again and i think you unknowingly taught it from the way you’re luring 😂
2024-11-18 12:51:47
0

Other Videos

Let’s talk about a wild AI chatbot case with Air Canada that could’ve been straight out of a sitcom (but with legal consequences). A customer seeking a bereavement fare relied on the chatbot’s promise that they could apply for a refund within 90 days. Turns out, that policy doesn’t exist. Classic case of AI hallucination, except it’s not funny when a refund is on the line.

Here’s where AWS’s automated reasoning tools could have helped. Automated reasoning keeps chatbots from making things up by cross-checking responses against actual policies. Imagine if Air Canada’s chatbot had been equipped with these tools: it would have said something like, “Sorry, bereavement fares must be arranged before travel” instead of leading the customer down a path of frustration and legal drama. Instead, Air Canada argued the chatbot was its own legal entity. Yes, really. Spoiler: the tribunal didn’t buy it, and the company was held accountable for the AI’s output on its own site.

The takeaway? If you’re building AI tools, don’t skip the safeguards. Tech like AWS’s contextual grounding and automated reasoning isn’t just about avoiding PR nightmares; it’s about creating AI that’s trustworthy, user-friendly, and, most importantly, doesn’t lie to your customers. And yes, this isn’t the only way to solve the issue: Google’s Vertex AI has made progress here with tool-chaining approaches, and Microsoft’s Corrections feature addresses it with less emphasis on automated reasoning. I think AWS is taking a smart approach here.
AI might be smart, but without the right guardrails, it can turn into a liability faster than you can say “refund denied.” #product #productmanager #productmanagement #startup #business #openai #llm #ai #microsoft #google #gemini #anthropic #claude #llama #meta #nvidia #career #careeradvice #mentor #mentorship #mentortiktok #mentortok #careertok #job #jobadvice #future #2024 #story #news #dev #coding #code #engineering #engineer #coder #sales #cs #marketing #agent #work #workflow #smart #thinking #strategy #cool #real #jobtips #hack #hacks #tip #tips #tech #techtok #techtiktok #openaidevday #aiupdates #techtrends #voiceAI #developerlife #aircanada #awsreinvent #reinvent #aws #amazon #bedrock #guardrails
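The core idea above — don’t let a reply out the door unless it’s backed by real policy text — can be sketched in a few lines. This is a hypothetical toy, not AWS’s actual automated-reasoning or contextual-grounding API: the policy snippets, the function names, and the naive keyword-overlap heuristic are all invented for illustration.

```python
# Hypothetical grounding guardrail: a drafted chatbot reply is released only
# if it overlaps strongly with at least one real policy snippet; otherwise it
# is replaced with a safe fallback. Real systems use far stronger checks
# (entailment models, formal reasoning), but the control flow is the same.

POLICY = [
    "Bereavement fares must be requested before travel.",
    "Refunds are issued only for fares booked directly with the airline.",
]

def check_grounding(draft_reply: str, policy: list[str]) -> bool:
    """Naive check: the draft must share at least half of some policy
    snippet's words to count as grounded in that policy."""
    draft_words = set(draft_reply.lower().split())
    for snippet in policy:
        snippet_words = set(snippet.lower().split())
        overlap = len(draft_words & snippet_words) / len(snippet_words)
        if overlap >= 0.5:
            return True
    return False

def guarded_reply(draft_reply: str) -> str:
    """Release the draft only if grounded; otherwise refuse safely."""
    if check_grounding(draft_reply, POLICY):
        return draft_reply
    return "I can't confirm that. Please contact an agent about refunds."

# The hallucinated 90-day promise fails the check and gets the fallback;
# a policy-backed answer passes through unchanged.
hallucinated = "You can apply for a bereavement refund within 90 days after travel."
grounded = "Bereavement fares must be requested before travel."
```

With a guardrail like this in front of the model, the chatbot would have refused the 90-day refund claim instead of inventing it, which is exactly the failure the tribunal penalized.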
