@alberta.nyc: Has prompt engineering gone too far? (yes) #coding #promptengineering #appleintelligence

Alberta Tech
Region: US
Thursday 08 August 2024 00:13:51 GMT

Comments

matanjuggles
matanjuggles :
next up "make sure all code has no bugs"
2024-08-08 02:11:53
3506
jconnor850
El Mero Mero :
But if the AI knows it's hallucinating then why would hallucinations even exist?
2024-08-08 01:12:17
1830
kingdong333
King Dong 333 :
say please as well.
2024-10-03 01:21:30
0
alexanderkelso
Alexander Kelso :
Or, or, and this is a wild concept, they just don't include AI in everything.
2024-08-08 02:53:11
723
danjones000
Dan Jones 🥥🌴 :
This reminds me of OpenAI announcing that they would figure out their monetization strategy by asking ChatGPT.
2024-08-08 01:29:13
432
2rant2delish
mari 🦪 :
“do not hallucinate” is so funny to me 😭 what do they MEAN??? that’s what AI does AFAIK
2024-08-08 01:43:42
196
basic_hannele
hannele :
there are papers that suggest that LLMs "know" when they're hallucinating
2024-08-10 01:22:55
4
gabermz
Gabo Rmz :
Because GenAI is trained to give an answer regardless of whether it's true or not. Write a prompt instructing GenAI “it’s ok to not know the answer” and see how the output improves.
2024-08-08 07:20:00
59
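
A minimal sketch of the prompt pattern this comment (and Craig Dennis's further down) describes: a system prompt that makes "I don't know" an explicitly acceptable answer. It assumes the OpenAI Python client with an OPENAI_API_KEY in the environment; the model name is a placeholder, and nothing here is Apple's actual pipeline.

```python
# A minimal sketch, not Apple's actual pipeline: a system prompt that makes
# "I don't know" an acceptable answer. Assumes the OpenAI Python client and
# an OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Answer the user's question. If you are not confident the answer is "
    "correct, reply 'I don't know' instead of guessing."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# With the system prompt, an unanswerable question should get "I don't know"
# rather than a confident fabrication.
print(ask("Who won the 1897 Antarctic chess championship?"))
```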
bored_otechu
Bored Otechu :
it's the same way apple stuff works... a hope and a prayer...
2024-10-01 02:13:28
0
caldq01061
decaf2rtl :
If they don't want it to lie, it'll just say "I don't know" which is an awful search result
2024-08-08 00:37:17
71
emmieaych
Mark :
Is this an argument that generative AI chat has theory of mind? How else would it know to stop lying?
2024-08-08 04:51:09
1
nickhim94
ni :
This is standard practice at the moment. The best way to mitigate hallucination is to prompt the model not to hallucinate, and then ground it with retrieval-augmented context
2024-08-08 12:29:49
24
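
A rough sketch of the grounding half of what this comment describes: retrieved text goes into the prompt, and the model is instructed to answer only from it. The two-entry document store and keyword retriever below are hypothetical stand-ins; real pipelines use a vector store with embedding search. Same OpenAI-client assumptions as the sketch above.

```python
# A rough sketch of "prompt it not to hallucinate, then ground it": retrieved
# text goes into the prompt and the model is told to stay inside it. The
# document store and keyword retriever are hypothetical stand-ins; real
# systems use a vector store with embedding search.
from openai import OpenAI

client = OpenAI()

DOCS = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> str:
    # Toy keyword match standing in for embedding-based retrieval.
    hits = [text for key, text in DOCS.items() if key in question.lower()]
    return "\n".join(hits) or "(no relevant documents found)"

def grounded_answer(question: str) -> str:
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using ONLY the provided context. If the context "
                    "does not contain the answer, say you don't know. "
                    "Do not invent facts."
                ),
            },
            {
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("What is your returns policy?"))
```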
flambruciofilliba
Jesus Christ • Creator :
Telling it not to make things up has been proven to produce less hallucination on some models.
2024-08-08 07:08:25
1
bumbletoons
bumbletoon :
AIs hallucinate because they are forced to answer even if nothing in the training data has a high enough probability of being correct
2024-08-08 14:02:30
2
vantta_pendragon
Andres Montoya9048 :
we do the same. we don't lie because we agree with each other that we will not
2024-08-08 00:35:17
4
tiagomaqz
Tiago Marquezine :
Those prompts are widely used now for LLMs. It’s basically a prompt telling the AI to take longer if needed to validate its answers. It DOES work.
2024-08-15 20:22:03
0
shaadowofamaster
user5988173254078 :
Nah this makes sense, a small but significant proportion of hallucinations are things the AI 'knows' are wrong but 'thinks' are more likely to follow the previous characters - e.g. a common misconception
2024-08-08 10:01:54
1
dzezeka
dzez_zeka :
I mean, AI is taught to always have an answer because AI hallucinations can be creative and correct some of the time, and people don't like hearing 'I don't know'. Imagine asking Google and it says idk
2024-08-12 22:07:27
1
scaredbeing
TBDG :
Google is doing this too, shockingly for their Weather app? 😭 they added things like "you're an expert..." and "make sure you don't include any info which is not provided in the weather data"
2024-08-10 03:04:16
1
dissociatedinchrist
⛈Boi, Interrupted⛈ :
AI is more artificial tism tbh
2024-08-10 06:14:00
0
m0stlysapien
Max Miller :
I actually have found better results by misspelling words and stuff
2024-08-08 00:45:40
12
craigmdennis
Craig Dennis :
The main reason for hallucinations is that the LLM does not have an answer. Saying “it’s ok to say you don’t know” also reduces hallucinations.
2024-08-08 09:44:35
15
xx_riley0123_xx
Riley ☆ :
Maybe it works better on those grammatical errors because it was trained on a whole bunch of people on the internet making typos and grammar mistakes
2024-08-08 00:42:47
16
oninji
Oninji :
It could be prompting the AI to do a second pass on its answer and re-generate it if it distances itself too much from the initial assumption. 1/2
2024-08-08 13:12:11
0
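
A speculative sketch of the two-pass flow this comment hints at: generate a draft, then have the model review its own answer and revise anything that looks fabricated. Nothing confirms Apple does this; the prompts and model name are placeholders, with the same OpenAI-client assumptions as the sketches above.

```python
# A speculative sketch of a two-pass answer: draft, then self-review. This is
# a guess at the flow described in the comment, not a documented Apple
# technique; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def chat(system: str, user: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

def answer_with_self_check(question: str) -> str:
    draft = chat("Answer the question concisely.", question)
    # Second pass: the model reviews its own draft and revises anything
    # that looks fabricated or unsupported.
    return chat(
        "You are reviewing a draft answer for made-up claims. Correct or "
        "remove anything that looks fabricated; if unsure, say so.",
        f"Question: {question}\n\nDraft answer: {draft}\n\nRevised answer:",
    )

print(answer_with_self_check("When was the first iPhone released?"))
```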