====== AI Glossary: Key Definitions ======

=== Reinforcement Learning from Human Feedback (RLHF) ===
A technique for aligning language models with human preferences: annotators rank candidate outputs, a reward model is trained on those rankings, and the base model is then optimized against that reward model with reinforcement learning.
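
To make the reward-modelling step concrete, here is a minimal, purely illustrative sketch. The linear reward model, the feature vectors, and the names ''reward'' and ''preference_loss'' are hypothetical stand-ins, not any real RLHF pipeline; the point is only the pairwise comparison between a chosen and a rejected answer.

<code python>
# Minimal sketch of the reward-modelling step in RLHF (illustrative only;
# the linear "reward model" and feature vectors are hypothetical stand-ins).
import math

def reward(features, weights):
    # Toy linear reward model: score = sum(w_i * x_i).
    return sum(w * x for w, x in zip(weights, features))

def preference_loss(chosen, rejected, weights):
    # Bradley-Terry objective: -log sigmoid(r_chosen - r_rejected).
    # Minimizing it trains the reward model to score the human-preferred
    # answer above the rejected one.
    margin = reward(chosen, weights) - reward(rejected, weights)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# One labelled comparison: annotators preferred answer A over answer B.
answer_a = [0.9, 0.2]  # hypothetical features of the chosen answer
answer_b = [0.1, 0.7]  # hypothetical features of the rejected answer
print(preference_loss(answer_a, answer_b, weights=[1.0, -0.5]))
</code>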

=== Retrieval-Augmented Generation (RAG) ===
An approach that pairs a language model with a retrieval step: relevant snippets are fetched from a knowledge base and placed in the model's context, so answers can be grounded in current or domain-specific information.

**Applications:** question answering and chatbots grounded in private or frequently updated document collections.
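
A toy end-to-end sketch of the retrieve-then-prompt flow, assuming a two-entry in-memory knowledge base and word-overlap scoring; both are stand-ins for a real vector store and embedding search:

<code python>
# Minimal RAG sketch: fetch the most relevant snippet by word overlap, then
# assemble a grounded prompt. KNOWLEDGE_BASE and the scoring are toy stand-ins.

KNOWLEDGE_BASE = [
    "RLHF fine-tunes a model against a reward model trained on human preferences.",
    "RAG augments a language model with documents retrieved at query time.",
]

def retrieve(query):
    # Score each snippet by how many query words it shares and return the best.
    words = set(query.lower().split())
    return max(KNOWLEDGE_BASE, key=lambda doc: len(words & set(doc.lower().split())))

def build_prompt(query):
    # Prepend the retrieved context so the model can ground its answer in it.
    return "Context: " + retrieve(query) + "\n\nQuestion: " + query + "\nAnswer:"

print(build_prompt("How does RAG use retrieved documents?"))
</code>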

=== Fine-Tuning ===
The process of adapting a pre-trained model to a specific task using additional training data. It helps specialize a general-purpose model for a narrower domain.
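
A toy numeric illustration of the idea: start from a "pre-trained" weight and run a few gradient steps on a small task-specific dataset. The linear model and all values are hypothetical; real fine-tuning updates millions of parameters the same way.

<code python>
# Toy fine-tuning: adapt a pre-trained weight to new data with gradient descent.

pretrained_w = 2.0                     # weight learned on broad, general data
task_data = [(1.0, 3.1), (2.0, 6.2)]   # small domain-specific (x, y) pairs

w = pretrained_w                       # fine-tuning starts from the pre-trained value
lr = 0.01
for _ in range(100):
    # Mean-squared-error gradient for the linear model y = w * x.
    grad = sum(2 * (w * x - y) * x for x, y in task_data) / len(task_data)
    w -= lr * grad

print(f"pre-trained w = {pretrained_w}, fine-tuned w = {w:.2f}")  # moves toward ~3.1
</code>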

=== Prompt Engineering ===
Designing inputs or queries to guide AI models toward producing desired outputs. Effective prompt engineering can improve response quality without changing the model itself.

=== Reinforcement Learning (RL) ===
A broader machine learning approach where agents learn by interacting with an environment and receiving rewards or penalties.

=== Knowledge Base ===
A structured repository of information used by AI systems for retrieval, as in RAG.

=== Human-in-the-Loop (HITL) ===
A method where humans remain involved in the training or evaluation of an AI system, for example by reviewing and correcting its outputs.
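
A sketch of one HITL pattern: a human checkpoint between model suggestions and the data used for retraining. The stand-in classifier, the review queue, and the accept/correct flow are all hypothetical:

<code python>
# Human-in-the-loop sketch: a human reviews each model suggestion, and only
# human-vetted labels are kept for later retraining.

def model_suggest(item):
    # Stand-in for a real model prediction.
    return "positive" if "good" in item else "negative"

review_queue = ["good service", "slow reply"]
training_data = []

for item in review_queue:
    suggestion = model_suggest(item)
    verdict = input(f"{item!r} -> {suggestion}. Accept? [y/n] ")  # human checkpoint
    label = suggestion if verdict.strip().lower() == "y" else input("Correct label: ")
    training_data.append((item, label))  # human-vetted examples for retraining

print(training_data)
</code>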