Activation function
A function inside a neural network layer that helps the model learn complex patterns (not just straight lines).
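For example, ReLU is one widely used activation function; a minimal sketch in plain Python (toy values, no framework assumed):

```python
def relu(x):
    # ReLU ("rectified linear unit"): zero for negative inputs,
    # identity for positive ones. The kink at zero is the nonlinearity
    # that lets stacked layers model curves, not just straight lines.
    return max(0.0, x)

print(relu(-2.0))  # 0.0
print(relu(3.5))   # 3.5
```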
Agent
A system that can decide actions to reach a goal, sometimes by using tools (search, code, APIs).
Alignment
Making sure an AI system’s behavior matches human intent, especially in tricky edge cases.
Anomaly detection
Finding unusual data points that don’t match the normal pattern (useful for fraud or failures).
Attention
A mechanism that helps models focus on the most relevant parts of the input.
AUC (area under the ROC curve)
A score that summarizes how well a classifier separates positives vs. negatives across thresholds.
Backpropagation
The learning algorithm that computes how to adjust a neural network’s weights.
Batch
A small group of training examples processed together in one training step.
Bias
A systematic error in predictions; also used to describe unfair differences in performance across groups.
Classification
Predicting a category, like spam vs. not spam.
Clustering
Grouping similar items without labels (unsupervised learning).
Confusion matrix
A table that breaks down predictions into true/false positives and true/false negatives.
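A minimal sketch of those four counts for a binary classifier, in plain Python (toy labels, 1 = positive):

```python
def confusion_matrix(y_true, y_pred):
    # Count how predictions line up with the true labels.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

print(confusion_matrix([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
# {'TP': 2, 'FP': 1, 'FN': 1, 'TN': 1}
```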
Context window
How much text (tokens) an LLM can consider at once when generating.
Cross-validation
Testing a model on multiple data splits to better estimate real-world performance.
Data augmentation
Creating “altered” training examples to improve robustness (common in images/audio).
Dataset
A collection of examples used to train and test a model.
Diffusion model
A generative model that starts from noise and gradually turns it into an image.
Drift
When the real-world data changes over time and the model’s accuracy drops.
Embedding
A vector (list of numbers) that represents meaning so similar items are close in space.
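“Close in space” is usually measured with cosine similarity; a minimal sketch with made-up toy vectors (not real embeddings):

```python
import math

def cosine_similarity(a, b):
    # Similar meanings -> vectors point in similar directions -> score near 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

cat = [0.9, 0.8, 0.1]      # toy values for illustration only
kitten = [0.85, 0.75, 0.2]
car = [0.1, 0.2, 0.9]
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```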
Epoch
One full pass through the training dataset.
Feature
An input signal the model uses (like age, price, or word count).
Fine-tuning
Training a pre-trained model further to specialize it for your task or domain.
Foundation model
A large pre-trained model that can be adapted to many tasks via prompting or fine-tuning.
Generalization
How well a model performs on new data, not just the data it trained on.
Gradient descent
A method to update model parameters step-by-step to reduce error.
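A minimal one-dimensional sketch of the idea, minimizing a simple quadratic rather than a real model’s loss:

```python
def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    # Repeatedly step opposite the gradient; each step reduces the error a bit.
    x = x0
    for _ in range(steps):
        x = x - learning_rate * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 3))  # 3.0 — the minimum of f
```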
Hallucination
When a generative model produces confident output that is incorrect or made up.
Hyperparameter
A training setting you choose (like learning rate), not something the model learns.
Inference
Using a trained model to make predictions or generate outputs.
Instruction tuning
Training an LLM to follow user instructions more reliably.
Label
The correct answer attached to a training example in supervised learning.
Latency
How long the model takes to respond after you send a request.
Learning rate
How big each training update step is; too high can be unstable, too low can be slow.
Loss function
A number measuring how wrong the model is; training tries to minimize it.
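Mean squared error is one common loss for regression; a minimal sketch with toy numbers:

```python
def mse(y_true, y_pred):
    # Mean squared error: average squared gap between truth and prediction.
    # Bigger gaps are punished more because they are squared.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([3.0, 5.0], [2.0, 7.0]))  # (1 + 4) / 2 = 2.5
```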
Neural network
A layered model that learns patterns by adjusting many connected weights.
Overfitting
When a model memorizes training data and performs worse on new data.
Parameter
A learned value inside a model (like a weight).
Precision
Out of predicted positives, how many were actually positive.
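That ratio, TP / (TP + FP), as a minimal sketch over toy binary labels (1 = positive):

```python
def precision(y_true, y_pred):
    # Of everything the model flagged as positive, how much was right?
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    predicted_positive = sum(y_pred)
    return tp / predicted_positive

print(precision([1, 0, 1, 0], [1, 1, 1, 0]))  # 2 of 3 flagged were real -> 2/3
```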
Prompt
The instruction and context you give a generative model.
Prompt injection
A trick where a user tries to override or bypass rules by crafting a malicious prompt.
RAG (retrieval-augmented generation)
Combining document retrieval with generation so the model can answer using retrieved sources.
Recall
Out of all real positives, how many the model successfully found.
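The companion ratio to precision, TP / (TP + FN), sketched the same way over toy binary labels:

```python
def recall(y_true, y_pred):
    # Of all the real positives, how many did the model catch?
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    actual_positive = sum(y_true)
    return tp / actual_positive

print(recall([1, 1, 1, 0], [1, 0, 1, 0]))  # found 2 of 3 real positives -> 2/3
```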
Regression
Predicting a number, like price or temperature.
Reinforcement learning
Learning by trial-and-error using rewards (common in games and robotics).
RLHF
Reinforcement Learning from Human Feedback—using human preferences to improve model responses.
Temperature
A generation setting: higher temperature usually means more randomness and variety.
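Under the hood, temperature typically divides the model’s scores (logits) before they become probabilities; a minimal sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Higher temperature flattens the distribution (more random sampling);
    # lower temperature sharpens it (more predictable output).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens
low = softmax_with_temperature(logits, 0.5)
high = softmax_with_temperature(logits, 2.0)
print(max(low) > max(high))  # True: low temperature concentrates probability
```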
Token
A small piece of text an LLM processes (a word or part of a word).
Tokenization
Splitting text into tokens before a model processes it.
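A toy greedy longest-match splitter illustrates the idea; real tokenizers use learned subword schemes like BPE, and the vocabulary here is invented for the example:

```python
def naive_tokenize(text, vocab):
    # Greedily match the longest vocabulary piece at each position,
    # falling back to single characters so everything is covered.
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab or len(piece) == 1:
                tokens.append(piece)
                i = j
                break
    return tokens

vocab = {"token", "iza", "tion", " "}  # hypothetical tiny vocabulary
print(naive_tokenize("tokenization", vocab))  # ['token', 'iza', 'tion']
```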
Transformer
A neural network architecture built around attention; used by many LLMs.
Validation set
A dataset split used to tune decisions during training (separate from test set).
Weights
Learned numbers inside a neural network that shape how it makes predictions.