But if the AI knows it's hallucinating then why would hallucinations even exist?
2024-08-08 01:12:17
1830
King Dong 333 :
say please as well.
2024-10-03 01:21:30
0
Alexander Kelso :
Or, or, and this is a wild concept, they just don't include ai in everything.
2024-08-08 02:53:11
723
Dan Jones 🥥🌴 :
This reminds me of OpenAI announcing that they would figure out their monetization strategy by asking ChatGPT.
2024-08-08 01:29:13
432
mari 🦪 :
“do not hallucinate” is so funny to me 😭 what do they MEAN??? that’s what AI does AFAIK
2024-08-08 01:43:42
196
hannele :
there are papers that suggest that LLMs "know" when they're hallucinating
2024-08-10 01:22:55
4
Gabo Rmz :
Because GenAI is trained to give an answer regardless of it being true or not. Write a prompt instructing GenAI “it’s ok to not know the answer” and see how the output improves.
2024-08-08 07:20:00
59
Bored Otechu :
it's the same way apple stuff works... a hope and a prayer...
2024-10-01 02:13:28
0
decaf2rtl :
If they don't want it to lie, it'll just say "I don't know" which is an awful search result
2024-08-08 00:37:17
71
Mark :
Is this an argument that generative AI chat has theory of mind? How else would it know to stop lying?
2024-08-08 04:51:09
1
ni :
This is standard practice at the moment. Best way to mitigate hallucination is to prompt it not to hallucinate, and then ground it with retrieval-augmented context
2024-08-08 12:29:49
24
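The mitigation described in the comment above (an anti-hallucination instruction plus retrieval-augmented grounding) can be sketched as a simple prompt template. This is a hypothetical illustration, not any particular product's actual prompt; the function name and snippets are made up:

```python
def build_grounded_prompt(question, retrieved_snippets):
    """Assemble a prompt that discourages hallucination by permitting
    "I don't know" and grounding the model in retrieved context.
    Hypothetical sketch, not a specific vendor's API."""
    context = "\n".join(f"- {s}" for s in retrieved_snippets)
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say \"I don't know\" "
        "instead of guessing.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Example: the retrieved snippet is a stand-in for real search results.
prompt = build_grounded_prompt(
    "What year was the city library founded?",
    ["The city library opened its doors in 1912."],
)
print(prompt)
```

The resulting string would be sent as the model input; the explicit "I don't know" escape hatch is what the commenters above claim reduces made-up answers.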
Jesus Christ • Creator :
Telling it not to make things up has been proven to produce less hallucination on some models.
2024-08-08 07:08:25
1
bumbletoon :
AI hallucinate because they are forced to answer even if nothing in training data has a high enough probability of being correct
2024-08-08 14:02:30
2
Andres Montoya9048 :
we do the same. we don't lie because we agree with each other that we won't
2024-08-08 00:35:17
4
Tiago Marquezine :
Those prompts are widely used now for LLMs. It’s basically a prompt telling the AI to take longer if needed to validate its answers. It DOES work.
2024-08-15 20:22:03
0
user5988173254078 :
Nah this makes sense, a small but significant proportion of hallucinations are things the AI 'knows' are wrong but 'thinks' are more likely to follow the previous character - e.g. Common misconception
2024-08-08 10:01:54
1
dzez_zeka :
I mean AI is taught to always have an answer because AI hallucinations can be creative and correct some of the times and people don't like hearing 'I don't know', imagine asking Google and it says idk
2024-08-12 22:07:27
1
TBDG :
Google is doing this too, shockingly for their Weather app? 😭 they added things like "you're an expert..." and "make sure you don't include any info which is not provided in the weather data"
2024-08-10 03:04:16
1
⛈Boi, Interrupted⛈ :
AI is more artificial tism tbh
2024-08-10 06:14:00
0
Max Miller :
I actually have found better results by misspelling words and stuff
2024-08-08 00:45:40
12
Craig Dennis :
The main reason for hallucinations is that the LLM does not have an answer. Saying “it’s ok to say you don’t know” also reduces hallucinations.
2024-08-08 09:44:35
15
Riley ☆ :
Maybe it works better on those grammatical errors because it was trained on a whole bunch of people on the internet making typos and grammar mistakes
2024-08-08 00:42:47
16
Oninji :
It could be prompting the AI to do a second pass on its answer and re-generate it if it strays too far from the initial assumption. 1/2
2024-08-08 13:12:11
0