AI Glossary: Key Definitions

Reinforcement Learning from Human Feedback (RLHF)

RLHF is a machine learning technique that fine-tunes AI models with human feedback to improve their behavior and outputs. The key steps involved are:

1. Supervised fine-tuning: a pre-trained model is first tuned on example responses written or approved by humans.
2. Reward model training: human annotators rank candidate model outputs, and a separate reward model is trained to predict those preferences.
3. Reinforcement learning: the model is optimized, commonly with an algorithm such as PPO, to produce outputs the reward model scores highly.

Applications: RLHF is used to align AI with human values, reduce harmful outputs, and ensure responses are more relevant, ethical, and understandable.
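The reward-modeling step above can be sketched in miniature: a linear "reward model" is fitted to pairwise human preferences with a simplified Bradley-Terry-style update, then used to score responses. The feature names and data here are illustrative assumptions, not part of any real pipeline.

```python
import math

def reward(features, weights):
    """Score a response as a weighted sum of its features."""
    return sum(w * f for w, f in zip(weights, features))

def train_on_preferences(pairs, weights, lr=0.1, epochs=200):
    """Nudge weights so human-preferred responses score higher than
    rejected ones (a simplified Bradley-Terry-style update)."""
    for _ in range(epochs):
        for preferred, rejected in pairs:
            # Probability the reward model agrees with the human choice.
            margin = reward(preferred, weights) - reward(rejected, weights)
            p = 1 / (1 + math.exp(-margin))
            # Gradient step on the log-likelihood of the human preference.
            for i in range(len(weights)):
                weights[i] += lr * (1 - p) * (preferred[i] - rejected[i])
    return weights

# Hypothetical features per response: [helpfulness, harmfulness].
pairs = [([0.9, 0.1], [0.2, 0.8]),   # human preferred the helpful, safe reply
         ([0.8, 0.0], [0.4, 0.5])]
weights = train_on_preferences(pairs, [0.0, 0.0])
```

After training, responses resembling the preferred examples score higher than the rejected ones; a full RLHF loop would then optimize the language model against this reward signal.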

Retrieval-Augmented Generation (RAG)

RAG is an AI architecture that combines a language model with information retrieval to produce factually grounded responses. The process involves:

1. Retrieval: the user's query is used to search a knowledge base for relevant documents.
2. Augmentation: the retrieved passages are added to the model's prompt as supporting context.
3. Generation: the language model produces a response grounded in the retrieved content.

Applications: RAG is commonly used in chatbots, virtual assistants, and question-answering systems to improve the factual accuracy and relevance of responses.
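The retrieve-augment-generate flow can be sketched as below. This is a toy: the retriever uses naive keyword overlap rather than embeddings, the knowledge base is three made-up sentences, and the "generation" step simply returns the augmented prompt that a real LLM would consume.

```python
# Toy knowledge base (illustrative content only).
KNOWLEDGE_BASE = [
    "The Eiffel Tower is in Paris.",
    "Python was created by Guido van Rossum.",
    "RAG combines retrieval with generation.",
]

def retrieve(query, docs, k=1):
    """Rank documents by keyword overlap with the query (a stand-in
    for embedding-based similarity search)."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query):
    """Augment the prompt with retrieved context; an LLM would then
    generate a response from this prompt."""
    context = retrieve(query, KNOWLEDGE_BASE)
    return f"Context: {' '.join(context)}\nQuestion: {query}"

print(answer("Who created Python"))
```

Because the model sees the retrieved passage in its prompt, its answer can be grounded in the knowledge base instead of relying solely on parametric memory.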

Fine-Tuning

The process of adapting a pre-trained model to a specific task using additional training data. It helps specialize a general AI model for niche tasks or industries.
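The idea can be illustrated with a deliberately tiny model: start from "pre-trained" parameters for a general task, then continue gradient descent on a small task-specific dataset. The data and learning rate are made-up assumptions; real fine-tuning operates on neural network weights.

```python
def predict(x, w, b):
    """A one-parameter linear 'model'."""
    return w * x + b

def fine_tune(data, w, b, lr=0.05, epochs=300):
    """Continue gradient descent from the pre-trained parameters
    on the new task's data."""
    for _ in range(epochs):
        for x, y in data:
            err = predict(x, w, b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

w, b = 1.0, 0.0                       # "pre-trained" general model: y ≈ x
task_data = [(1, 3), (2, 5), (3, 7)]  # niche task: y = 2x + 1
w, b = fine_tune(task_data, w, b)
```

The key point mirrors real fine-tuning: training resumes from existing parameters rather than from scratch, so far less task data is needed.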

Prompt Engineering

Designing inputs or queries to guide AI models toward producing desired outputs. Effective prompt engineering can improve response quality without changing the model itself.
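One common prompt-engineering pattern is few-shot templating: an instruction, worked examples, then the new query. The template wording and examples below are illustrative assumptions, not a prescribed best practice.

```python
def build_prompt(task, examples, query):
    """Compose a few-shot prompt: instruction, examples, then the query."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{task}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = build_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("I loved it", "positive"), ("Terrible service", "negative")],
    "Great value for money",
)
print(prompt)
```

The model itself is unchanged; only the input is engineered, which is what makes prompt engineering cheap to iterate on compared with fine-tuning.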

Reinforcement Learning (RL)

A broader machine learning approach where agents learn by interacting with an environment and receiving rewards or penalties. This feedback loop encourages the agent to take actions that maximize rewards over time.
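This feedback loop can be shown with a minimal agent-environment example, a two-armed bandit with epsilon-greedy action selection. The payoff probabilities are made-up; the point is the loop of act, receive reward, update.

```python
import random

random.seed(0)
TRUE_REWARD = {"a": 0.2, "b": 0.8}   # hidden environment payoff probabilities

def step(action):
    """Environment: return reward 1 with the action's hidden probability."""
    return 1 if random.random() < TRUE_REWARD[action] else 0

values = {"a": 0.0, "b": 0.0}   # the agent's reward estimates
counts = {"a": 0, "b": 0}
for t in range(1000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    r = step(action)
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]  # running mean
```

Over time the agent's estimates approach the hidden payoffs and it learns to favor the higher-reward action, the core dynamic that RLHF reuses with a learned reward model in place of the environment.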

Knowledge Base

A structured repository of information used by AI systems for information retrieval. In RAG systems, the knowledge base is queried to provide factually grounded outputs.

Human-in-the-Loop (HITL)

A method where humans remain involved in an AI system's training or operation, reviewing, correcting, or approving model outputs. HITL helps catch errors, handle ambiguous cases, and maintain accountability, and it supplies the human judgments used in techniques such as RLHF.
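A common HITL pattern is a confidence gate: confident model outputs are handled automatically, while uncertain ones are escalated to a human reviewer. The threshold and routing labels below are illustrative assumptions.

```python
def route(prediction, confidence, threshold=0.9):
    """Auto-accept confident predictions; escalate the rest to a human.
    The 0.9 threshold is an arbitrary example value."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route("approve loan", 0.97))   # confident: handled automatically
print(route("approve loan", 0.55))   # uncertain: escalated to a reviewer
```

In practice the human decisions gathered at the review step are often fed back as training data, closing the loop between HITL and fine-tuning.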