“But that’s not what I wrote!”
Imagine a situation where I write a piece of content: a blog post, an article, or anything else.
You then ask an LLM a certain question, and it uses my article, repurposes it somehow, and presents an altered version of my content.
I object, “But that’s not what I wrote!”
You respond, “It’s OK. I don’t really care what you actually wrote. I care about what you actually meant, and the AI understands you much better than you understand yourself.”
Trusting the machine
We currently, and rightly, trust machines with things they are clearly better at than we are: keeping track of massive amounts of data, processing it accurately, and similar tasks. We still don’t trust machines with strategic thinking, emotional processing, and pretty much everything related to meaning.
But we might soon enter a phase where people trust LLMs more than humans at human skills. Here are some scenarios that might develop:
- If Google can correctly check with me, “Did you mean…?”, then LLMs can clearly do that much better with longer prompts.
- Search engines have a tiny context (a few words), while LLMs handle context windows of more than a million tokens as of July 2024.
- LLMs will probably use all the content on our phones/computers as context for the prompt. In that case, they’ll have a (lifelong) perspective on how/why/when the prompt is being asked.
- LLMs will create prompts for us.
- With a perfect display of multisensory content might come complete trust in the LLM.
- The LLM can make itself more believable: knowing us so well, it knows how to appear believable and can use whatever resonates with (against?) us.
- Efficiency/laziness. We don’t expect to live long enough to watch all the videos we’d like to watch. So: “Just give me the summary!”