Crypto Cheat Sheet AI: Explaining 30 Common Slang Terms in One Shot
Original Title: "AI Insider Jargon Dictionary (March 2026 Edition), Recommended for Bookmarking"
Original Author: Golem, Odaily Planet Daily
These days, if you're in the crypto world and not paying attention to AI, you're an easy target for ridicule (yes, my friend, think about why you clicked on this article).
Are you completely clueless about basic AI concepts, asking Doubao for the meaning of every acronym in a sentence? Are you drowning in jargon at AI events, pretending you're not out of the loop?
Diving deep into the AI industry in a short amount of time isn't realistic, but it is worth knowing the high-frequency basics, and this article collects them for you below. We sincerely advise you to read it through and bookmark it.
Basic Vocabulary (12)
· LLM (Large Language Model)
An LLM is a deep learning model trained on massive amounts of data that is proficient at understanding and generating language. It can process text and, increasingly, other types of content as well.
Its counterpart is the SLM (Small Language Model), a term that usually emphasizes lower cost, lighter deployment, and easier local hosting.
· AI Agent
An AI Agent is not just a "model that chats" but a system that can understand goals, invoke tools, execute tasks step by step, and plan and validate when necessary. Google defines an agent as software that can reason over multimodal input and act on the user's behalf.
· Multimodal
A multimodal AI model is not limited to text; it can process multiple input and output forms at once, such as text, images, audio, and video. Google specifically defines multimodality as the ability to process and generate different types of content.
· Prompt
The user's input command to the model, the most basic form of human-machine interaction.
· Generative AI (AIGC)
Emphasizing AI "generation" rather than just classification or prediction, generative models can produce text, code, images, emojis, videos, etc., based on a prompt.
· Token
This is the AI-field concept most similar to crypto's "gas unit." Models do not process content word by word; they read input and produce output in tokens, and billing, context length, and response speed are usually closely tied to token counts.
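To see why tokens and words are not the same thing, here is a toy sketch: the vocabulary and the greedy longest-match rule below are invented for illustration only; real LLM tokenizers (BPE and similar schemes) learn their vocabularies from data.

```python
# Toy greedy longest-match tokenizer. The hand-made vocabulary is purely
# illustrative; it only demonstrates that one word can cost several tokens.
VOCAB = {"block", "chain", "token", "ize", "r", "s", " "}

def tokenize(text):
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest vocabulary entry that matches at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("blockchain tokenizers"))
# Two words, seven tokens: "tokenizers" splits into "token", "ize", "r", "s"
```

This is why a model's bill and context limit are quoted in tokens rather than words: the same text can cost more or fewer tokens depending on the tokenizer's vocabulary.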
· Context Window
Refers to the total number of tokens a model can "see" and use at once, that is, how many tokens it can consider or "remember" in a single pass.
· Memory
Allows a model or agent to retain user preferences, task context, and historical states.
· Training
The process by which a model learns parameters from data.
· Inference
In contrast to training, inference is the process by which a deployed model receives input and generates output. In the industry it is often said that "training is expensive, but inference is even costlier," because many real commercialization costs occur at inference time. The split between training and inference is also the basic framework mainstream vendors use when discussing deployment costs.
· Tool Use / Tool Calling
Means that a model not only outputs text but can also call tools such as search, code execution, databases, external APIs, etc. This has already been regarded as a key capability of agents.
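A minimal sketch of the tool-calling loop, under loud assumptions: `fake_model` stands in for a real LLM API call, and `get_price` is a hypothetical tool with made-up prices; the registry-and-dispatch pattern is what matters.

```python
# Minimal tool-calling loop sketch. A real agent would send the prompt to
# an LLM, which decides whether to emit a tool call; we hard-code that step.

def get_price(symbol):
    # Hypothetical tool: a real agent would hit an exchange API here.
    return {"ETH": 3500.0, "BTC": 97000.0}.get(symbol)

TOOLS = {"get_price": get_price}

def fake_model(prompt):
    # Stand-in for the model's decision to call a tool with arguments.
    return {"tool": "get_price", "arguments": {"symbol": "ETH"}}

def run_agent(user_prompt):
    decision = fake_model(user_prompt)
    if decision.get("tool") in TOOLS:
        result = TOOLS[decision["tool"]](**decision["arguments"])
        # Normally the result is fed back to the model for a final answer.
        return f"get_price -> {result}"
    return "no tool needed"

print(run_agent("What is the current ETH price?"))
```

The key design point is that the model only *requests* a tool call; the surrounding code actually executes it and returns the result, which is also where guardrails are enforced.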
· API
The basic interface through which AI products, applications, and agents interact with third-party services; the plumbing of the AI stack.
Advanced Vocabulary (18)
· Transformer
A model architecture that makes AI better at understanding contextual relationships, serving as the technical foundation for most large language models today. Its key feature is the ability to simultaneously consider the relationship between each word in the entire piece of content.
· Attention
The central mechanism in Transformers, its role is to enable the model to automatically determine "which words are most worthy of attention" when reading a sentence.
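As a rough sketch of what "deciding which words are most worth attending to" means, here is scaled dot-product attention for a single query vector, in plain Python. The 2-dimensional keys and values are invented for illustration; real Transformers run this over matrices with learned projections and many attention heads.

```python
import math

def softmax(xs):
    # Numerically stable softmax: turns raw scores into weights summing to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector."""
    d = len(query)
    # Score each key by its similarity to the query, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)  # "which words are most worth attending to"
    # Output is the weighted average of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Three "words" with 2-dimensional keys/values; the query resembles key 0,
# so the output is pulled toward value 0.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([1.0, 0.0], keys, values)
print(out)
```

Because every query is scored against every key, the model can weigh relationships across the whole input at once, which is exactly the Transformer property described above.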
· Agentic / Agentic Workflow
This recently popular term means a system is no longer just question-and-answer: it has a degree of autonomy to break tasks down, decide next steps, and invoke external capabilities. Many vendors treat it as the marker of moving "from chatbot to executable system."
· Subagents
An agent delegates its work to multiple dedicated sub-agents, each handling its own subtask.
· Skills
A term that became more common with the rise of OpenClaw. It refers to installable, reusable, and composable capability units or instruction sets for an AI agent, though skills also bring risks of tool misuse and data exposure.
· Hallucination
Refers to a model confidently generating erroneous or absurd output by "perceiving" patterns that do not exist, producing answers that look plausible but are actually wrong.
· Latency
The time it takes a model to process a request and produce output. It is one of the most common pieces of engineering jargon, frequently encountered in discussions of deployment and productization.
· Guardrails
Used to limit what a model/Agent can do, when to stop, and what content cannot be output.
· Vibe Coding
Also one of the hottest AI slang terms today: users express their needs in plain conversation and the AI writes the code, without the user needing to know how to program.
· Parameters
The numerical values a model stores internally to hold its capabilities and knowledge, often used as a rough measure of model scale. Phrases like "hundreds of billions of parameters" are common bragging material in the AI community.
· Reasoning Model
It usually refers to models that are better at multi-step reasoning, planning, validation, and complex task execution.
· MCP (Model Context Protocol)
One of the hottest new buzzwords of the past year: a common interface standard connecting models to external tools and data sources.
· Fine-tuning
Continuing training on a base model to make it more suitable for a specific task, style, or domain. Google's terminology directly considers tuning and fine-tuning as related concepts.
· Distillation
Transferring the capabilities of a large model to a smaller model, like having the "teacher" instruct the "student."
· RAG (Retrieval-Augmented Generation)
This has almost become a standard configuration in enterprise AI. Microsoft defines it as a "search + LLM" pattern, using external data to ground the answers, addressing issues such as outdated training data and lack of understanding of private knowledge bases. The goal is to base the answers on real documents and private knowledge rather than solely on the model's own recall.
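The "search + LLM" pattern can be sketched in a few lines. This is a hedged toy: real RAG systems retrieve with vector embeddings and send the prompt to an actual LLM, while here the documents are invented, retrieval is naive keyword overlap, and the final model call is omitted.

```python
# Toy RAG sketch: retrieve the most relevant document, then "stuff" it
# into the prompt so the model answers from real text, not from memory.
DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The staking contract pays rewards every epoch.",
]

def retrieve(question, docs):
    # Naive retrieval: pick the document sharing the most words with the
    # question. Production systems use embedding similarity instead.
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    context = retrieve(question, DOCS)
    # Grounding: instruct the model to answer only from the retrieved text.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many days do I have to request a refund?"))
```

The prompt built this way is what gets sent to the LLM, which is how RAG sidesteps outdated training data and private-knowledge gaps.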
· Grounding
Often associated with RAG, it means ensuring that the model's answers are based on external sources such as documents, databases, web pages, rather than relying only on parameter memorization. Microsoft explicitly identifies grounding as a core value in the RAG documentation.
· Embedding (Vector Embedding / Semantic Vector)
Encoding textual, image, audio, and other content into high-dimensional numerical vectors for semantic similarity calculations.
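The "semantic similarity calculation" usually means cosine similarity between embedding vectors. A minimal sketch, with loud assumptions: these 3-dimensional vectors and their values are invented for illustration (real models output hundreds or thousands of dimensions from learned weights).

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented toy "embeddings": "dog" and "puppy" are placed close together
# in the vector space, "invoice" far away.
dog     = [0.9, 0.1, 0.0]
puppy   = [0.8, 0.2, 0.1]
invoice = [0.0, 0.1, 0.9]

print(cosine_similarity(dog, puppy))    # high: semantically similar
print(cosine_similarity(dog, invoice))  # low: unrelated meanings
```

This is the same computation RAG retrieval and vector databases run at scale: nearby vectors mean similar meanings.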
· Benchmark
An evaluation method that uses a standardized set of criteria to test a model's capabilities, often used by various models to "prove their strength" through leaderboard rankings.