
AI Jargon Buster

February 26, 2026 · 7 min read

This AI Jargon Buster was created based on the questions we regularly hear when helping people learn and work with AI. It explains common AI terms in plain English to support clearer thinking and better decision-making.

You don't have to read the whole thing if you don't want to. Simply use "control + F" if you're on a Windows machine or "command + F" if you're on a Mac and search for the specific term you're looking for.

We use the term "AI brain" throughout this guide as a shorthand for the trained model — the part of an AI system that processes information and generates responses. It's a useful analogy, but AI doesn't think or understand the way humans do.

Questions are always welcome! Feel free to email us at [email protected].


Agent (Agentic AI): An AI system that can take actions on its own, such as browsing the web, running code, or completing multi-step tasks, rather than just answering a single question.

Alignment: How closely AI behaviour matches human values and intent. (See also: Guardrails, Responsible AI)

API: A way for one piece of software to send requests to and receive responses from another. Short for Application Programming Interface.
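To make this concrete, here is a small sketch of the kind of request body many chat-style AI APIs accept. The field names (model, messages, role, content) are illustrative, loosely modelled on common chat APIs — your provider's documentation is the authority on the real format.

```python
import json

# Build an illustrative JSON request body for a hypothetical chat API.
# "example-model" and the field names are placeholders, not a real API.
def build_chat_request(model: str, user_message: str) -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(payload)

request_body = build_chat_request("example-model", "Summarise this report.")
print(request_body)
```

The software on your side builds a structured request like this, sends it over the internet, and receives a structured response back — that exchange is the API at work.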

Artificial Intelligence (AI): Computers doing tasks that normally require human thinking.

Bias: When an AI system produces skewed or unfair outcomes, often because its training data or prompts were unbalanced. (See also: Training Data, Responsible AI)

Chatbot: A software tool designed to have conversations with users. Some chatbots use AI; others follow simple scripted rules.

Computer Vision: AI working with images or video. (See also: Multimodal)

Confidence vs. Accuracy: Confidence is how sure the model appears to be that its answer is right; accuracy is how right the answer actually is. The two don't always match — a model can be confidently wrong. (See also: Hallucinations, Verification)

Context: The information the AI uses to understand your request, such as previous messages or documents you've provided. (See also: Context Window)

Context Window: The amount of information an AI system can pay attention to at one time. If too much information is added, older details may be ignored. (See also: Context, Token)
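A toy sketch of what "falling out of the window" means: keep only the most recent messages that fit in a fixed token budget. Real systems count tokens with a tokenizer; here we approximate one token per word purely for illustration.

```python
# Keep the newest messages that fit within a fixed "window" of tokens,
# approximating one token per word. Older messages that don't fit are
# dropped -- which is why an AI can "forget" early parts of a long chat.
def fit_to_window(messages, max_tokens):
    kept, used = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                        # older messages fall out of the window
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = ["first message about budgets", "second message", "latest question"]
print(fit_to_window(history, max_tokens=4))  # → ['second message', 'latest question']
```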

Deep Learning: A type of machine learning that uses many layers of processing to spot complex patterns. (See also: Machine Learning, Neural Network)

Embedding: A way of converting words, images, or other data into numbers so the AI can measure how similar or related they are.
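A minimal sketch of the idea: once items are turned into lists of numbers, similarity can be measured mathematically, commonly with cosine similarity. The three-number "embeddings" below are invented for illustration; real embeddings have hundreds or thousands of dimensions.

```python
import math

# Cosine similarity: 1.0 means the two number lists point the same way
# (very similar), values near 0 mean they are unrelated.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

cat = [1.0, 0.9, 0.1]     # made-up embedding for "cat"
kitten = [0.9, 1.0, 0.2]  # made-up embedding for "kitten"
car = [0.1, 0.2, 1.0]     # made-up embedding for "car"

# Related words end up with closer numbers, so they score higher.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # → True
```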

Few-Shot Learning: Learning a task from just a few examples. (See also: Zero-Shot Learning)
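In everyday prompting, few-shot often just means putting a handful of worked examples in the prompt before the new input, so the model can copy the pattern. The task and examples below are invented for illustration.

```python
# Build a few-shot prompt: two labelled examples, then the new input.
examples = [
    ("Great service, will return!", "positive"),
    ("Waited an hour and the order was wrong.", "negative"),
]

def few_shot_prompt(new_review: str) -> str:
    lines = ["Classify each review as positive or negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Review: {new_review}")
    lines.append("Label:")          # the model fills in the next label
    return "\n".join(lines)

print(few_shot_prompt("Lovely staff and quick delivery."))
```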

Fine-Tuning: Further training an AI brain (model) to be better at a specific task. (See also: Model, Foundation Model, Training Data)

Foundation Model: A large, general-purpose AI brain (model) trained on broad data that can be adapted for many tasks. ChatGPT, Claude, and Gemini are built on foundation models. (See also: Fine-Tuning, Large Language Model)

Generalisation: How well an AI brain (model) works on new, previously unseen data. (See also: Overfitting, Underfitting)

Guardrails: Rules or limits built into an AI system to reduce risk, prevent misuse, or guide how it behaves. (See also: Alignment, Responsible AI)

Hallucinations: When an AI system gives an answer that sounds confident and believable, but is incorrect, made up, or not supported by evidence. (See also: Confidence vs. Accuracy, Verification)

Inference: When the AI brain (model) processes a new input and produces a response — this is what's happening every time you use an AI tool. (See also: Inference Cost, Latency, Training)

Inference Cost: The computing cost of running the AI brain (model) each time it responds to a user. (See also: Inference)

Large Language Model (LLM): A type of AI brain (model) trained on large amounts of text to predict and generate language, such as answering questions or writing summaries. (See also: Model, Foundation Model, Token)

Latency: How long an AI takes to respond. (See also: Inference)

Loss Function: A score that measures how far off the AI brain's (model's) output is from the correct answer, used to guide improvements during training. (See also: Reward Function)
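One of the simplest loss functions, shown as a sketch: mean squared error, the average of the squared gaps between predictions and correct answers. Smaller is better; training repeatedly nudges the model's settings to shrink this number.

```python
# Mean squared error: average of (prediction - correct answer) squared.
def mean_squared_error(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

print(mean_squared_error([2.5, 0.0], [3.0, 0.0]))  # → 0.125
```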

Machine Learning (ML): A way for computers to learn from data instead of from fixed rules. (See also: Deep Learning, Supervised Learning, Unsupervised Learning, Reinforcement Learning)

Model: The trained "AI brain" — the part of an AI system that takes in information and produces responses or predictions based on what it learned during training. (See also: Foundation Model)

Model Drift: When an AI brain (model) becomes less accurate over time because the real world has changed since it was trained. (See also: Model, Training Data)

Multimodal: AI that works with more than one type of data, such as text, images, and audio. (See also: Computer Vision)

Natural Language Processing (NLP): AI working with human language. (See also: Large Language Model, Sentiment Analysis)

Neural Network: A system loosely inspired by how brain cells connect, designed to process information in layers to find patterns in data. (See also: Deep Learning)

Open Source vs. Closed Source: Open source AI models share their code publicly for anyone to inspect or build on, while closed source models are kept private by the company that built them.

Overfitting: When an AI brain (model) has memorised its training data too closely and struggles with anything new. (See also: Underfitting, Generalisation)

Prompt: The instruction, question, or request you give to an AI system. How you ask often affects what you get back. (See also: Context, Prompt Engineering)

Prompt Engineering: The skill of writing clear, effective instructions to get better results from an AI system. (See also: Prompt)

Reinforcement Learning: A training method where the AI learns by trying different actions and receiving feedback on which ones worked best. (See also: Machine Learning, Reward Function, Supervised Learning, Unsupervised Learning)

Responsible AI: An approach to building and using AI that considers fairness, transparency, privacy, and safety throughout. (See also: Bias, Guardrails, Alignment)

Reward Function: A score that tells the AI how well it performed during reinforcement learning, used to encourage actions that lead to better outcomes. (See also: Loss Function, Reinforcement Learning)

Retrieval: The process of finding and pulling in relevant documents or information for the AI to use, without creating new content. (See also: Retrieval-Augmented Generation)

Retrieval-Augmented Generation (RAG): A method where an AI first retrieves relevant source material, then generates an answer using that material as its basis. (See also: Retrieval, Source Grounded)
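A toy version of the retrieval step: score each document by how many words it shares with the question, then hand the best match to the model as source material. Real systems use embeddings rather than word overlap, and the documents here are invented for illustration.

```python
# Tiny knowledge base of made-up policy snippets.
documents = [
    "Items can be returned within 30 days with a receipt.",
    "Orders ship within 2 business days.",
]

def words(text):
    # Lowercase and strip basic punctuation so "items?" matches "items".
    return {w.strip(".,?!") for w in text.lower().split()}

def retrieve(question):
    # Pick the document sharing the most words with the question.
    return max(documents, key=lambda doc: len(words(question) & words(doc)))

question = "How many days do I have to return items?"
source = retrieve(question)
# The generation step then answers using the retrieved source as its basis.
print(f"Answer using only this source: {source}\nQuestion: {question}")
```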

Scalability: How well an AI system keeps performing as the number of users or the amount of data grows.

Sentiment Analysis: Using AI to detect the tone or emotion in text, such as whether a customer review is positive, negative, or neutral. (See also: Natural Language Processing)

Source Grounded: An AI system that is limited to working only from specific documents or sources, rather than answering from general knowledge. (See also: Retrieval-Augmented Generation, Hallucinations)

Supervised Learning: Training with examples that include correct answers, so the AI learns what "right" looks like. (See also: Machine Learning, Unsupervised Learning, Reinforcement Learning)

Synthetic Data: Artificially generated data used for training purposes. (See also: Training Data)

Temperature: A setting that controls how predictable or creative an AI's responses are. Lower temperature gives more focused, consistent answers; higher temperature gives more varied, surprising ones.
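A sketch of the mechanics, under the simplifying assumption that the model has scored three candidate next words: the scores are turned into probabilities with a softmax, and dividing by a low temperature sharpens the distribution (predictable picks), while a high temperature flattens it (more varied picks).

```python
import math

# Convert raw scores into probabilities, scaled by temperature.
def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]
    biggest = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - biggest) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]                        # made-up scores for three candidate words
low = softmax_with_temperature(scores, 0.2)     # low temperature: top choice dominates
high = softmax_with_temperature(scores, 2.0)    # high temperature: choices more even
print(round(low[0], 3), round(high[0], 3))
```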

Token: A small chunk of text the AI processes — roughly three-quarters of a word on average. For example, the word "butterfly" might be split into "butter" and "fly." (See also: Context Window)
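The three-quarters figure gives a quick rule of thumb for estimating how much of a context window your text will use. Real tokenizers vary by model, so this is only a rough estimate, not a true count.

```python
# Rough token estimate: one token is about three-quarters of a word,
# so word count divided by 0.75 approximates the token count.
def estimate_tokens(text: str) -> int:
    return round(len(text.split()) / 0.75)

print(estimate_tokens("The quick brown fox jumps over the lazy dog"))  # 9 words → 12
```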

Training: The process of teaching an AI brain (model) by feeding it data and letting it adjust its settings to improve. (See also: Training Data, Inference)

Training Data: The information used to train an AI brain (model). This data shapes what the model has seen before, but it does not give the model understanding or judgment. (See also: Supervised Learning)

Underfitting: When an AI brain (model) hasn't learned enough from its training data to be useful. (See also: Overfitting, Generalisation)

Unsupervised Learning: Training where the AI finds patterns in data on its own, without being told what the correct answers are. (See also: Machine Learning, Supervised Learning, Reinforcement Learning)

Validation Set: A separate set of data used to check the AI brain's (model's) progress during training. (See also: Training Data, Overfitting)

Verification: The act of checking AI-generated information against trusted sources, original documents, or human expertise before relying on it. (See also: Hallucinations, Confidence vs. Accuracy)

Zero-Shot Learning: Doing a task without seeing examples first. (See also: Few-Shot Learning)

Natalie is the COO of LEMA Logic and a digital strategist with a passion for making Tech + AI work for real people. She loves helping small and medium-sized businesses (SMBs) cut through the noise, find the right tools, and use them to simplify operations, connect with customers, and grow sustainably. With years of experience in multinational corporations, she now focuses on bringing that high-level expertise to SMBs, making advanced technology approachable and effective. For Natalie, the best tech isn’t just about efficiency—it’s about making work more enjoyable, freeing up time for creativity, and creating space for both business and personal growth.

Natalie Gallagher



Copyright © 2025 LEMA Logic. All Rights Reserved. Privacy Policy. Terms of Service. Disclaimer.

LEMA Logic Limited is incorporated in the Isle of Man - company 37753C.

LEMA Logic is also a trading name of Gallagher Innovations, Inc. a company incorporated in Maryland, USA and registered in the Isle of Man - company 006459F.