AI Glossary
The complete dictionary of Artificial Intelligence
Cross-Encoder Reranking
Reranking architecture in which a transformer processes the query and each candidate document together as a single input sequence to evaluate their relevance. This approach offers high precision at the cost of higher computational complexity, since every query-document pair must pass through the model separately.
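A minimal sketch of the cross-encoder pattern. The scoring function below is a toy token-overlap stand-in for a real transformer (such as a fine-tuned BERT); only the pipeline shape — one joint `[query] [SEP] [doc]` sequence per pair, one score per document — reflects the actual technique.

```python
def toy_cross_encoder(joint_sequence: str) -> float:
    """Stand-in for a transformer that scores a '[query] [SEP] [doc]' input."""
    query, _, doc = joint_sequence.partition(" [SEP] ")
    q_tokens, d_tokens = set(query.lower().split()), set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def rerank(query: str, docs: list[str]) -> list[tuple[float, str]]:
    # Each (query, document) pair is encoded as a single input sequence,
    # so a real model could attend across both texts jointly.
    scored = [(toy_cross_encoder(f"{query} [SEP] {d}"), d) for d in docs]
    return sorted(scored, reverse=True)

docs = ["reranking with cross encoders", "weather forecast for tomorrow"]
ranked = rerank("cross encoder reranking", docs)
```

Note that the model is called once per document, which is what makes cross-encoders precise but expensive compared to encoding query and documents independently.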
MonoT5 Reranking
T5-based reranking model that reformulates ranking as a text generation problem: prompted with a query-document pair, the model generates a 'true' or 'false' token, and the probability assigned to 'true' serves as the relevance score. This approach allows efficient reranking of retrieved documents by leveraging the model's natural language understanding capabilities.
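The scoring step can be sketched as a softmax restricted to the two decision tokens. The logits below are made up for illustration; in practice they come from the first decoding step of a T5 model.

```python
import math

def monot5_score(true_logit: float, false_logit: float) -> float:
    """Softmax over just the 'true'/'false' logits, as in MonoT5-style scoring."""
    m = max(true_logit, false_logit)            # subtract max for stability
    e_true = math.exp(true_logit - m)
    e_false = math.exp(false_logit - m)
    return e_true / (e_true + e_false)

# Hypothetical logits for three (query, document) pairs.
pairs = [("doc_a", 2.1, -1.3), ("doc_b", -0.4, 0.9), ("doc_c", 1.0, 0.8)]
ranked = sorted(pairs, key=lambda p: monot5_score(p[1], p[2]), reverse=True)
```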
ColBERT Reranking
Late interaction token-level reranking system that encodes documents and queries into contextualized vectors for each token. This method captures granular matches while maintaining acceptable computational efficiency.
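The late-interaction scoring rule (MaxSim) can be shown with toy 2-d token embeddings; real ColBERT uses contextualized BERT vectors. Each query token takes its maximum cosine similarity over all document tokens, and these per-token maxima are summed into the document score.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def maxsim_score(query_vecs, doc_vecs):
    # For each query token, keep only its best-matching document token.
    return sum(max(cosine(q, d) for d in doc_vecs) for q in query_vecs)

query = [(1.0, 0.0), (0.0, 1.0)]   # two query-token embeddings
doc_a = [(0.9, 0.1), (0.1, 0.9)]   # has a close match for both query tokens
doc_b = [(0.7, 0.7)]               # only a partial match for each
```

Because documents are encoded independently of the query, their token vectors can be precomputed, which is what keeps this cheaper than a full cross-encoder.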
BGE Reranker
Reranking model optimized for semantic search tasks, trained on large corpora of relevance data with a cross-encoder architecture. It excels at fine discrimination between relevant and non-relevant documents for RAG systems.
Listwise Loss
Loss function that directly optimizes the complete ordering of the retrieved document list rather than individual pairs. This approach considers the global relevance distribution to improve reranking quality.
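A common instance is a ListNet-style loss: cross-entropy between the softmax of the model's scores and the softmax of the relevance labels, computed over the whole list at once. The scores and labels below are illustrative values, not from any dataset.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def listnet_loss(scores, labels):
    p_true = softmax(labels)   # target distribution over list positions
    p_pred = softmax(scores)   # model's distribution over list positions
    return -sum(t * math.log(p) for t, p in zip(p_true, p_pred))

labels = [3.0, 1.0, 0.0]                       # graded relevance for 3 docs
good = listnet_loss([2.5, 0.8, -0.2], labels)  # scores agree with labels
bad = listnet_loss([-0.2, 0.8, 2.5], labels)   # reversed ordering
```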
Pairwise Loss
Training function that compares document pairs to learn how to discriminate between more relevant and less relevant documents. This method is particularly effective for supervised learning reranking systems.
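A RankNet-style example: for a pair where one document should outrank the other, the loss is -log sigmoid(s_pos - s_neg), which shrinks as the model separates the more relevant document from the less relevant one.

```python
import math

def pairwise_loss(score_pos: float, score_neg: float) -> float:
    """-log(sigmoid(margin)), written as log1p(exp(-margin)) for stability."""
    margin = score_pos - score_neg
    return math.log1p(math.exp(-margin))

well_separated = pairwise_loss(3.0, -1.0)   # model already ranks the pair correctly
inverted = pairwise_loss(-1.0, 3.0)         # model ranks the pair backwards
```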
Multi-stage Retrieval
Retrieval architecture composed of multiple successive phases including broad initial retrieval followed by one or more levels of progressively more precise reranking. This approach balances efficiency and precision in large-scale RAG systems.
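A two-stage pipeline can be sketched as follows. Both scorers are toy stand-ins (a real system would use BM25 or an embedding index for stage one and a cross-encoder for stage two); only the staged structure — cheap scoring over the whole corpus, expensive scoring over a small candidate set — reflects the technique.

```python
def cheap_score(query: str, doc: str) -> float:
    # Stand-in for a fast first-stage retriever (e.g. lexical matching).
    q, d = set(query.split()), set(doc.split())
    return len(q & d)

def expensive_score(query: str, doc: str) -> float:
    # Stand-in for a precise reranker: overlap normalized by doc length.
    return cheap_score(query, doc) / (1 + len(doc.split()))

def multi_stage(query: str, corpus: list[str], k: int = 3) -> list[str]:
    # Stage 1: broad, fast retrieval over the whole corpus.
    candidates = sorted(corpus, key=lambda d: cheap_score(query, d),
                        reverse=True)[:k]
    # Stage 2: precise reranking applied only to the k candidates.
    return sorted(candidates, key=lambda d: expensive_score(query, d),
                  reverse=True)
```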
Passage Reranking
Reranking process applied specifically to passages or text segments rather than entire documents for increased granularity in selecting relevant content. This technique optimizes input quality for generation in RAG.
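The passage-level workflow can be sketched as: split a document into fixed-size passages, score each passage against the query, and keep only the best ones as generation context. The overlap scorer is a toy stand-in for a neural reranker.

```python
def split_passages(text: str, size: int = 5) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query: str, passage: str) -> float:
    # Stand-in for a neural passage scorer.
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p)

def top_passages(query: str, document: str, k: int = 1) -> list[str]:
    passages = split_passages(document)
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

doc = ("the weather is mild today "
       "reranking selects relevant passages precisely "
       "pasta needs salted boiling water")
best = top_passages("reranking relevant passages", doc)
```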
Learning to Rank (LTR)
Machine learning paradigm applied to information ranking where models are trained to order items according to their relevance. LTR combines various features to optimize ranking metrics such as NDCG and MAP.
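A minimal sketch of both halves of LTR: a linear model combining per-document features into a score, and the NDCG metric used to evaluate the resulting ordering. The features and weights are made up; in a real system the weights would be learned from relevance judgments.

```python
import math

def linear_score(features: list[float], weights: list[float]) -> float:
    return sum(f * w for f, w in zip(features, weights))

def dcg(relevances: list[float]) -> float:
    # Discounted cumulative gain: later positions contribute less.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances: list[float]) -> float:
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal else 0.0

# Each item: (feature vector, e.g. [bm25, freshness], true graded relevance).
items = [([0.9, 0.2], 3.0), ([0.1, 0.9], 0.0), ([0.5, 0.4], 1.0)]
weights = [1.0, 0.1]   # hypothetical learned weights favoring the first feature
ranked = sorted(items, key=lambda it: linear_score(it[0], weights), reverse=True)
quality = ndcg([rel for _, rel in ranked])
```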
Neural Information Retrieval
Information retrieval approach using neural networks to model relevance between queries and documents through dense vector representations. This method captures complex semantic relationships beyond simple keyword matches.
Dense Retrieval Reranking
Reranking technique based on dense embeddings that reassesses the relevance of initially retrieved documents using vector similarities in a semantic space. This method refines initial retrieval results to improve final quality.
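The re-scoring step can be sketched with toy 2-d embeddings and cosine similarity; a real system would obtain these vectors from a trained dense encoder.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

query_vec = [0.8, 0.6]
candidates = {                 # initial retrieval order, to be refined
    "doc_far": [0.0, 1.0],
    "doc_close": [0.9, 0.5],
}
reranked = sorted(candidates,
                  key=lambda d: cosine(query_vec, candidates[d]),
                  reverse=True)
```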
Query-Document Interaction
Fundamental mechanism in reranking systems that explicitly models interactions between query terms and document terms to calculate a relevance score. This approach captures complex dependencies beyond independent representations.
Relevance Feedback Loop
Iterative process where relevance judgments on ranked results are used to refine the reranking model or the query itself. This adaptive learning technique continuously improves system performance.
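One classic form of this loop on the query side is the Rocchio update: the query vector moves toward the centroid of documents judged relevant and away from those judged non-relevant. The alpha/beta/gamma values below are conventional defaults, not values stated in this glossary.

```python
def centroid(vectors: list[list[float]]) -> list[float]:
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    rel_c = centroid(relevant)
    non_c = centroid(non_relevant)
    # Pull the query toward relevant docs, push it away from non-relevant ones.
    return [alpha * q + beta * r - gamma * nr
            for q, r, nr in zip(query, rel_c, non_c)]

updated = rocchio([1.0, 0.0],
                  relevant=[[0.0, 1.0]],
                  non_relevant=[[1.0, 0.0]])
```

Running retrieval again with the updated vector closes the loop; the same idea applies when the judgments instead retrain the reranking model itself.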
Attention-based Reranking
Reranking architecture using attention mechanisms to identify and weight the most relevant parts of documents in relation to the query. This approach allows for fine contextual assessment of document relevance.
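The weighting idea can be sketched with softmax attention from a query vector over document token vectors: tokens similar to the query receive higher weights, and the weighted similarity becomes the document's score. Toy 2-d vectors stand in for real contextual embeddings.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def attention_score(query_vec, doc_token_vecs):
    # Dot-product similarity between the query and each document token.
    sims = [sum(q * t for q, t in zip(query_vec, tok)) for tok in doc_token_vecs]
    weights = softmax(sims)            # attention focuses on query-related tokens
    return sum(w * s for w, s in zip(weights, sims))

q = [1.0, 0.0]
doc_relevant = [[1.0, 0.0], [0.9, 0.1]]   # tokens aligned with the query
doc_offtopic = [[0.0, 1.0], [0.1, 0.9]]   # tokens orthogonal to the query
```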