Cryptocurrency News Articles

'Meaning Machine' visualizes how large language models break words down into tokens and process them

Apr 23, 2025 at 06:00 pm

Generative AI built on large language models (LLMs) such as ChatGPT, Claude, and Grok responds to a user's words much as a human would. How an LLM processes language, however, is very different from how humans do. The website 'Meaning Machine' offers a visual, easy-to-follow view of what a large language model actually does with language.

Meaning Machine · Streamlit

On the Meaning Machine page, the input sentence is split into words, and each word is shown with the numeric ID it receives when the large language model represents it as a 'token.'

Joshua Hathcock, the developer of Meaning Machine, explains that large language models do not process a sentence as a whole; instead, they split words and character sequences into numeric IDs called tokens and work with those abstract units. In the GPT models that ChatGPT is built on, for example, common words such as 'The,' 'young,' 'student,' 'didn't,' and 'submit' are usually represented by a single token each, while rarer words are split into multiple tokens made up of subword pieces. The model then identifies the grammatical role of each token and infers the subject, verb, object, and so on of the sentence. In the example sentence, the subject is 'student,' the verb is 'submit,' and the object is 'report.'
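
As a rough illustration of this step, the short script below uses the tiktoken library to turn the example sentence into GPT-style token IDs. The choice of tiktoken and the cl100k_base encoding is an assumption made for illustration; the article does not say which tokenizer Meaning Machine calls.

```python
# Illustrative sketch (assumes the tiktoken package is installed): how a
# GPT-style BPE tokenizer turns a sentence into numeric token IDs.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/4-era models

sentence = "The young student didn't submit the report."
token_ids = enc.encode(sentence)

for tid in token_ids:
    piece = enc.decode([tid])  # the text fragment this ID stands for
    print(f"{tid:>6}  {piece!r}")
# Common words usually map to a single token; rarer words are split into
# several subword tokens.
```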

The large language model tags each token with its part of speech (POS), maps the dependencies between tokens in the sentence, and builds a structured representation of it.
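
A minimal sketch of this tagging-and-parsing step is shown below. It uses spaCy as a stand-in pipeline; the article does not name the library behind Meaning Machine, so the model and API here are assumptions.

```python
# Hedged sketch: part-of-speech tagging and dependency parsing with spaCy
# (assumes `pip install spacy` and `python -m spacy download en_core_web_sm`).
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline
doc = nlp("The young student didn't submit the report.")

for token in doc:
    # token.pos_ is the part of speech, token.dep_ the dependency label,
    # and token.head is the token it depends on.
    print(f"{token.text:<10} {token.pos_:<6} {token.dep_:<10} -> {token.head.text}")
```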

The meaning of each dependency label is explained in the table at the bottom of the Meaning Machine page.

Each token is then converted into a list (vector) of hundreds of numbers that captures its meaning and context. Meaning Machine visualizes the tokens of the example sentence in two dimensions by applying dimensionality reduction to these vectors.
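
The sketch below illustrates that embedding-and-projection idea: per-token vectors from a transformer are reduced to two dimensions so they can be plotted. The choice of GPT-2 as the model and PCA as the reduction method are assumptions for the example, not Meaning Machine's documented stack.

```python
# Hedged sketch: get per-token hidden-state vectors from a small transformer
# and project them to 2D (assumes torch, transformers, and scikit-learn).
import torch
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

inputs = tokenizer("The young student didn't submit the report.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]  # shape: (num_tokens, 768)

coords = PCA(n_components=2).fit_transform(hidden.numpy())
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for tok, (x, y) in zip(tokens, coords):
    print(f"{tok:<12} ({x:+.2f}, {y:+.2f})")
```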

Below that, a tree diagram shows the dependencies between tokens: which tokens depend on which others and how the whole sentence fits together.

You can navigate through the dependencies by dragging the bar at the bottom of the diagram left and right.

In Meaning Machine, you can type any sentence you like into the input form at the top of the page and see how the large language model converts each word into tokens and how it captures the dependencies across the entire sentence.

"These technical steps reveal something deeper: language models don't understand language the way humans do," Hathcock said. "They simulate language convincingly, but in a fundamentally different way. When you or I say 'dog,' we might recall the feel of fur, the sound of a bark, even an emotional response. But when a large language model sees the word 'dog,' it sees a vector of numbers shaped by how often 'dog' appears near words like 'bark,' 'tail,' and 'vet.' That is not wrong; it is statistically meaningful. But it has no substance, no grounding, no knowledge." In other words, large language models and humans process language in fundamentally different ways, and however human-like a response may be, there are no beliefs or goals behind it.
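
The point about 'dog' can be shown with a tiny experiment: in a vector space learned from co-occurrence statistics, 'dog' lands close to 'bark,' 'tail,' and 'vet' and far from unrelated words, with no notion of fur or sound behind it. The snippet below uses spaCy's en_core_web_md word vectors as a stand-in; this is an assumption for illustration, not the model Hathcock describes.

```python
# Hedged illustration: cosine similarity between static word vectors
# (assumes the en_core_web_md spaCy pipeline, which ships with vectors).
import spacy

nlp = spacy.load("en_core_web_md")
dog = nlp("dog")[0]

for word in ["bark", "tail", "vet", "spreadsheet"]:
    other = nlp(word)[0]
    # Higher similarity means the words occur in similar contexts in the
    # training corpus; it says nothing about real-world grounding.
    print(f"dog ~ {word:<12} similarity = {dog.similarity(other):.2f}")
```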

Despite this, large language models are already widely used in society: they write people's resumes, filter content, and sometimes even help determine what is considered valuable. Because AI is already becoming part of the social infrastructure, Hathcock argues that it is important to understand the difference between what these models can do and what they actually understand.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com does not assume any responsibility for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile and it is highly recommended that you invest with caution after thorough research!

If you believe that the content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will delete it promptly.
