
AI Glossary

The complete glossary of AI

162 categories · 2,032 subcategories · 23,060 terms

Faithfulness Score

Metric evaluating how faithfully the generated response reflects the provided context, i.e. whether its statements are factually supported by the retrieved sources.
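A toy sketch of the ratio being measured, using word overlap as a stand-in for statement verification (real evaluators use NLI models or LLM judges; all names and example texts here are illustrative):

```python
# Toy faithfulness score: fraction of response statements whose
# content words all appear in the retrieved context. A word-bag
# proxy only - it illustrates the ratio, not a production check.

def faithfulness_score(statements: list[str], context: str) -> float:
    """Fraction of statements supported by the context (word-overlap proxy)."""
    ctx_words = set(context.lower().split())
    supported = sum(
        1 for s in statements
        if set(s.lower().split()) <= ctx_words
    )
    return supported / len(statements) if statements else 0.0

context = "the eiffel tower is in paris and opened in 1889"
statements = ["the eiffel tower is in paris", "it opened in 1901"]
print(faithfulness_score(statements, context))  # 0.5
```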


Context Relevance

Indicator quantifying the relevance of retrieved documents or passages to the initial query, essential for evaluating the quality of the RAG retrieval component.


Answer Relevance

Score measuring the extent to which the generated response directly and completely answers the posed question, regardless of factual accuracy.


Retrieval Precision

Proportion of relevant documents among all retrieved documents, evaluating the system's effectiveness in returning only useful information.
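The proportion can be computed directly with set operations (document IDs and function name are hypothetical):

```python
# Retrieval precision: of everything retrieved, what fraction
# was actually relevant to the query.

def retrieval_precision(retrieved: set[str], relevant: set[str]) -> float:
    if not retrieved:
        return 0.0
    return len(retrieved & relevant) / len(retrieved)

retrieved = {"doc1", "doc2", "doc3", "doc4"}
relevant = {"doc1", "doc3", "doc7"}
print(retrieval_precision(retrieved, relevant))  # 0.5
```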


Retrieval Recall

Proportion of the relevant documents available in the knowledge base that the retrieval system actually returns.
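The mirror image of precision, normalized by the relevant set instead of the retrieved set (illustrative names and IDs):

```python
# Retrieval recall: of everything relevant in the knowledge base,
# what fraction the system managed to retrieve.

def retrieval_recall(retrieved: set[str], relevant: set[str]) -> float:
    if not relevant:
        return 0.0
    return len(retrieved & relevant) / len(relevant)

retrieved = {"doc1", "doc2", "doc3"}
relevant = {"doc1", "doc3", "doc7", "doc9"}
print(retrieval_recall(retrieved, relevant))  # 0.5
```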


Knowledge F1 Score

Harmonic mean of precision and recall over retrieved knowledge, providing a balanced measure of overall RAG system performance.
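The harmonic mean follows directly from the definition (function name is illustrative):

```python
# F1: harmonic mean of precision and recall. It penalizes
# imbalance - a system with high precision but low recall
# (or vice versa) scores well below the arithmetic mean.

def f1_score(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.8, 0.5), 3))  # 0.615
```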


Context Utilization Rate

Percentage of relevant information from the retrieved context that is actually used in the final response, measuring the efficiency of source utilization.
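A minimal sketch of the ratio, treating a context sentence as "used" if any of its words reappear in the answer (a crude proxy; real systems match spans or embeddings, and all example texts are invented):

```python
# Toy context utilization: share of retrieved context sentences
# whose content resurfaces in the final answer.

def context_utilization(context_sents: list[str], answer: str) -> float:
    ans_words = set(answer.lower().split())
    used = sum(
        1 for s in context_sents
        if set(s.lower().split()) & ans_words
    )
    return used / len(context_sents) if context_sents else 0.0

sents = ["paris is the capital of france", "berlin hosts many museums"]
answer = "the capital of france is paris"
print(context_utilization(sents, answer))  # 0.5
```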


Hallucination Rate

Frequency at which the model generates information not supported by the provided context, a critical indicator of RAG system reliability.
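The complement of the faithfulness ratio, again sketched with a word-overlap proxy (illustrative only; the example deliberately shows the proxy's blind spot):

```python
# Toy hallucination rate: fraction of response statements
# containing words absent from the context. Note the weakness:
# a false reordering still passes the word-bag check, which is
# why real pipelines use NLI or LLM-based judges instead.

def hallucination_rate(statements: list[str], context: str) -> float:
    ctx_words = set(context.lower().split())
    unsupported = sum(
        1 for s in statements
        if not set(s.lower().split()) <= ctx_words
    )
    return unsupported / len(statements) if statements else 0.0

context = "the moon orbits the earth"
statements = [
    "the moon orbits the earth",   # supported
    "the earth orbits the moon",   # false, but passes the word proxy
    "the moon is made of cheese",  # flagged: words not in context
]
print(round(hallucination_rate(statements, context), 2))  # 0.33
```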


Semantic Similarity Score

Measure of semantic similarity between the generated response and a reference response, using embeddings to capture nuances of meaning.
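In practice the score is usually the cosine similarity between the two embedding vectors; the vectors below are tiny stand-ins for real encoder output:

```python
import math

# Cosine similarity between two embedding vectors: 1.0 for
# identical direction, 0.0 for orthogonal vectors.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy 2-d "embeddings" of a generated and a reference response.
emb_generated = [3.0, 4.0]
emb_reference = [4.0, 3.0]
print(cosine_similarity(emb_generated, emb_reference))  # 0.96
```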


Answer Completeness

Evaluation of the coverage of all relevant aspects of the question in the generated response, ensuring a comprehensive answer.


Retrieval Latency

Time required to retrieve relevant documents from the knowledge base, a crucial criterion for user experience in production.
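Latency can be measured by wrapping the retrieval call with a monotonic clock; the retriever below is a placeholder for a real vector-store query:

```python
import time

def timed_retrieve(retrieve, query):
    """Wrap any retrieval callable and report its latency in seconds."""
    start = time.perf_counter()
    docs = retrieve(query)
    latency = time.perf_counter() - start
    return docs, latency

# Stand-in retriever; a real one would query a vector store.
def dummy_retrieve(query: str) -> list[str]:
    time.sleep(0.01)
    return ["doc1"]

docs, latency = timed_retrieve(dummy_retrieve, "test query")
print(docs, latency >= 0.01)
```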


Token Efficiency Ratio

Ratio between the number of relevant tokens used and the total number of tokens generated, measuring the economic efficiency of the RAG system.
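The ratio itself is a one-liner once token counts are available (how "relevant" tokens are identified is system-specific and not shown here):

```python
# Token efficiency: relevant tokens as a share of all tokens
# generated. Counting what counts as "relevant" is the hard
# part and depends on the evaluation setup.

def token_efficiency(relevant_tokens: int, total_tokens: int) -> float:
    return relevant_tokens / total_tokens if total_tokens else 0.0

print(token_efficiency(120, 200))  # 0.6
```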


Groundedness Score

Metric assessing the extent to which each statement in the response is supported by explicit evidence in the retrieved sources.


Source Attribution Accuracy

Accuracy with which the system attributes each part of the response to the correct source document in the retrieved context.


Response Consistency

Measure of the internal consistency of the generated response, assessing the absence of contradictions between different parts of the response.


Query Ambiguity Resolution

Ability of the RAG system to interpret and resolve ambiguities in the user query to retrieve the most relevant information.


Information Overlap Score

Measure of the overlap between information present in the response and that available in the retrieved context, avoiding redundancies.


Answer Accuracy

Evaluation of the factual truthfulness of the generated response compared to a ground truth or validated reference sources.


Retrieval Coverage

Extent of the knowledge base actually accessible by the retrieval system, impacting the ability to answer diverse questions.


Response Coherence

Quality of the logical structure and narrative flow of the generated response, ensuring clear and understandable presentation of information.
