This Is Why GPT Models Sound Like Humans

Thomas Cherickal
7 min read · Mar 21, 2024

Understand why GPTs sound so human-like in natural language, and how much their operation has in common with ours. Also a bonus:

How ChatGPT would go about creating AGI.

Originally published at https://hackernoon.com.

‘Attention’ Was Not All You Needed.

So, Nobody Knows Why AI Works, Right?

Yes and No.

The fundamental problem is that we still know very little about the mystery that is the human brain.

Yet transformers and humans respond to natural language in eerily similar ways.

Let’s explain why.

Storing Information Representations In Transformers

A transformer begins by converting its input, a sequence of tokens (words or word fragments), into vectors called embeddings.

In other words, information is stored as a vector.
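
To make that concrete, here is a minimal sketch of an embedding lookup in PyTorch. The three-word vocabulary and the embedding dimension of 8 are toy assumptions for illustration; GPT-scale models use vocabularies of tens of thousands of tokens and vectors with thousands of dimensions.

```python
# Minimal sketch: mapping tokens to embedding vectors.
# The vocabulary, IDs, and dimensions below are illustrative, not GPT's.
import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "sat": 2}  # toy vocabulary (assumption)
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)

token_ids = torch.tensor([vocab["the"], vocab["cat"], vocab["sat"]])
vectors = embedding(token_ids)  # each token is now a vector of 8 numbers
print(vectors.shape)            # torch.Size([3, 8])
```

Each row of `vectors` is the stored representation of one token, and it is these vectors, not the raw words, that everything downstream operates on.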

Now consider representation. This representational embedding plays a more fundamental role than attention: before attention can do anything, the input must first exist in a decodable, workable form.

You may be wondering how this relates to everything a transformer like GPT-4 can do when trained on the entire Internet.

The information is encoded by attention dot products and mathematical processing…
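
As a rough sketch of what those dot products look like, here is scaled dot-product attention, the core operation of the 2017 "Attention Is All You Need" architecture. The sequence length and head dimension below are arbitrary stand-ins, not real model sizes.

```python
# Minimal sketch of scaled dot-product attention over toy tensors.
import torch
import torch.nn.functional as F

seq_len, d_k = 3, 8                  # illustrative sizes (assumption)
Q = torch.randn(seq_len, d_k)        # queries: what each token looks for
K = torch.randn(seq_len, d_k)        # keys: what each token offers
V = torch.randn(seq_len, d_k)        # values: the information itself

scores = Q @ K.T / d_k ** 0.5        # dot products measure token affinity
weights = F.softmax(scores, dim=-1)  # turn scores into a distribution
output = weights @ V                 # each token: a weighted mix of values
print(output.shape)                  # torch.Size([3, 8])
```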
