AI Glossary

The 101 on all the AI terms you need to know, whether you’re brand new to AI or have used it for years. See something missing? Send us a note and we’ll get it added!

Active learning

A training approach where the algorithm selectively chooses the most informative examples to learn from, rather than passively working through a large, randomly gathered set of labeled examples.
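
As a minimal illustration of one common strategy, uncertainty sampling, the sketch below (in Python, with a toy stand-in for a real model’s predicted probability) picks the unlabeled examples the model is least confident about so a human can label those first.

```python
# Uncertainty sampling: request labels for the examples the model is least sure about.
def most_uncertain(unlabeled_examples, predict_proba, budget=5):
    """Return the `budget` examples whose predicted probability is closest to 0.5."""
    scored = [(abs(predict_proba(x) - 0.5), x) for x in unlabeled_examples]
    scored.sort(key=lambda pair: pair[0])  # smallest margin = most uncertain
    return [x for _, x in scored[:budget]]

# Toy usage: the "model" here is just an identity function standing in for real predictions.
pool = [0.1, 0.45, 0.8, 0.52, 0.95, 0.49]
print(most_uncertain(pool, predict_proba=lambda x: x, budget=2))  # [0.49, 0.52]
```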

Adaptive Gradient (AdaGrad) algorithm

A gradient-based optimization algorithm that effectively gives each parameter its own learning rate by scaling every update according to the history of past gradients for that parameter.
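
A minimal sketch of the AdaGrad update for a single parameter (variable names and values are illustrative): the sum of past squared gradients is accumulated, and each step is divided by the square root of that sum, so frequently updated parameters get smaller effective learning rates.

```python
import math

def adagrad_step(param, grad, grad_sq_sum, lr=0.1, eps=1e-8):
    """One AdaGrad update for a single parameter.

    grad_sq_sum accumulates the squared gradients seen so far; dividing by its
    square root gives the parameter its own shrinking effective learning rate.
    """
    grad_sq_sum += grad ** 2
    param -= lr * grad / (math.sqrt(grad_sq_sum) + eps)
    return param, grad_sq_sum

# Toy usage: repeatedly step a parameter toward the minimum of f(w) = w**2.
w, g_sum = 5.0, 0.0
for _ in range(100):
    gradient = 2 * w                 # derivative of w**2
    w, g_sum = adagrad_step(w, gradient, g_sum)
print(round(w, 4))                   # moves toward the minimum at 0 as training progresses
```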

Alignment

A field of AI safety research that aims to build safe, secure AI systems that produce accurate, desired outcomes.

Anomaly detection

The process of identifying outliers (data points that deviate sharply from the rest of your dataset) to help ensure data quality and accuracy.
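
As a minimal illustration, one simple approach flags values that sit several standard deviations away from the mean (the data and the cutoff of two standard deviations here are arbitrary example choices):

```python
from statistics import mean, stdev

def find_outliers(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean (the z-score method)."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

sensor_readings = [10.1, 9.8, 10.3, 10.0, 9.9, 57.2, 10.2]
print(find_outliers(sensor_readings))  # [57.2] lies far outside the normal range
```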

Artificial Intelligence

The theory and development of computer systems that aim to mimic the problem-solving and decision-making capabilities of the human mind.

Automated Machine Learning (AutoML)

The process of automating machine learning tasks, from data preparation to deployment, to assist non-technical users by simplifying complex processes, saving them time and improving prediction accuracy.

Deep learning model

A method in artificial intelligence that is inspired by the human brain, teaching computers to recognize complex patterns and produce accurate insights or predictions based on images, text, and sound.

Generative AI

Utilizing artificial intelligence technology to create original content from scratch, including text, imagery, and audio content.

Generative pre-trained transformer 4 (GPT-4)

Developed by OpenAI, this machine learning model is trained on data from the internet to generate any type of text. It requires only a small amount of input text to create large volumes of relevant, sophisticated responses.

Grounding

The process of linking abstract knowledge from an AI system to contextualized, real-life examples to produce better predictions.

Hallucinations

When a Large Language Model (LLM) generates false information, often because the model has no true understanding of the context of the input provided; the language generated is typically still grammatically and semantically correct, which makes the errors easy to miss.

Hidden layer

A layer in a neural network that sits between the input layer of features and the output layer that produces the prediction; deep networks stack many hidden layers.
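
For illustration, a minimal sketch of a forward pass through a single hidden layer (all weights and inputs here are arbitrary example values): the hidden layer transforms the raw input features before the output layer turns them into a prediction.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Input layer: two features describing one example.
features = [0.5, -1.2]

# Hidden layer: two neurons, each with one weight per input feature plus a bias.
hidden_weights = [[0.8, -0.4], [0.3, 0.9]]
hidden_biases = [0.1, -0.2]
hidden = [
    sigmoid(sum(w * x for w, x in zip(weights, features)) + b)
    for weights, b in zip(hidden_weights, hidden_biases)
]

# Output layer: combines the hidden activations into a single prediction.
output_weights = [1.5, -0.7]
output_bias = 0.05
prediction = sigmoid(sum(w * h for w, h in zip(output_weights, hidden)) + output_bias)
print(prediction)  # a probability-like score between 0 and 1
```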

Large Language Model (LLM)

A language model characterized by its large size, which is made possible by AI accelerators that can process vast amounts of text data, mostly scraped from the Internet. Notable examples include OpenAI’s GPT models (e.g., GPT-3.5 and GPT-4, used in ChatGPT), Google’s PaLM (used in Bard), and Meta’s LLaMA, as well as BLOOM, Ernie 3.0 Titan, and Claude.

Learning algorithm

A set of instructions used in machine learning that allows a computer program to extrapolate information from training data and use what it learns to make predictions about a new input. The math and logic of these algorithms can improve on their own over time as more data is provided.

Learning rate

The number that tells the algorithm how heavily to adjust the model’s weights and biases in response to the error measured on each training step.
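
A minimal sketch of how the learning rate scales each update in plain gradient descent (the toy loss being minimized here is just for illustration):

```python
def gradient_descent_step(weight, gradient, learning_rate):
    """Move the weight against the gradient; the learning rate controls how big the step is."""
    return weight - learning_rate * gradient

w = 4.0
for _ in range(20):
    grad = 2 * (w - 1)                          # gradient of the toy loss (w - 1)**2
    w = gradient_descent_step(w, grad, learning_rate=0.1)
print(round(w, 3))  # close to 1.0; a larger learning rate converges faster but can overshoot
```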

Loss function

A mathematical function that calculates how far a model’s prediction is from its label. The goal of training an algorithm is to improve prediction accuracy and minimize the loss that is produced.
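
As a minimal illustration, mean squared error is one common loss function: it averages the squared gaps between predictions and labels, so a lower value means the model’s predictions are closer to the truth.

```python
def mean_squared_error(predictions, labels):
    """Average squared difference between each prediction and its label."""
    return sum((p - y) ** 2 for p, y in zip(predictions, labels)) / len(labels)

print(mean_squared_error([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))   # ~0.167: small loss, good fit
print(mean_squared_error([10.0, 5.0, 0.0], [3.0, -0.5, 2.0]))  # 27.75: much larger loss, poor fit
```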

Machine learning (ML)

The use and development of computer systems that are able to learn and adapt without following explicit instructions, using algorithms and statistical models to draw inferences from patterns in data.

Natural language processing (NLP)

A method in artificial intelligence that is inspired by the way humans process language, coaching computers to understand text and spoken words, complete with the speaker or writer’s original intent and sentiment.

Neural networks

A model that can mimic complex nonlinear relationships between features and labels through layers of interconnected nodes (neurons).

Open source models

Artificial intelligence projects that are open to the public to develop, with the goal being to collaborate and learn with the community. Open-source models are typically faster, more innovative, and more customizable, but pose some obvious security and liability risks.

Product Experience (PX) Strategy

A comprehensive strategy to build and deliver world-class product experiences across every customer touchpoint to accelerate growth, stay competitive, and support the organization’s overall goals.

Prompt engineering

The art of creating well-structured prompts that elicit the desired output from a large language model.

Proprietary models

Artificial intelligence projects that are developed, packaged, and sold by a single organization. Proprietary models are typically better funded than open-source models so they can often afford to implement new advances quickly and have the resources to support agility and scalability in an uncertain market.

Supervised machine learning

Training a model on labeled examples that pair input features with their corresponding labels, similar to a student studying a set of questions alongside their answers.
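
For illustration, a minimal sketch of the supervised setup: each training example pairs input features with a known label, and the model (a trivial nearest-neighbor rule here, purely for the example) uses those pairs to predict labels for new inputs.

```python
# Each training example pairs features (hours studied, hours slept) with a known label.
training_data = [
    ((1.0, 4.0), "fail"),
    ((2.0, 5.0), "fail"),
    ((6.0, 7.0), "pass"),
    ((8.0, 6.0), "pass"),
]

def predict(features):
    """Nearest-neighbor prediction: copy the label of the closest training example."""
    def distance(example_features):
        return sum((a - b) ** 2 for a, b in zip(features, example_features))
    closest_features, closest_label = min(training_data, key=lambda pair: distance(pair[0]))
    return closest_label

print(predict((7.0, 6.5)))  # "pass" – the new input resembles the passing examples
```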

Taxonomy

A set structure used to organize and categorize a vast amount of product information in a logical, easy-to-understand way. The main goal is to present both structured and unstructured product data in a way that is quickly digestible for both internal teams and consumers.

Unsupervised machine learning

Training a model to identify patterns and structure in a dataset on its own and generate educated predictions, which makes it particularly useful for unlabeled datasets.
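
As a minimal illustration, the sketch below clusters unlabeled values with a tiny one-dimensional k-means-style loop: no labels are provided, and the groups are discovered from the data alone (the starting centers and values are arbitrary).

```python
def cluster_1d(values, centers, iterations=10):
    """Group unlabeled values around k centers by repeatedly reassigning and averaging (1-D k-means)."""
    for _ in range(iterations):
        groups = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # Move each center to the mean of its group (keep it in place if the group is empty).
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

purchase_amounts = [5, 6, 7, 52, 55, 58, 6, 54]
centers, groups = cluster_1d(purchase_amounts, centers=[0.0, 100.0])
print(centers)  # roughly [6.0, 54.75]: two spending patterns found without any labels
print(groups)
```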