Better RAG Performance with Fewer Documents
A study from the Hebrew University of Jerusalem found that RAG models perform better with fewer but highly relevant documents. In tests with Llama-3.1 and Gemma 2, cutting down the number of retrieved documents improved accuracy by up to 10%, because irrelevant yet superficially similar documents can confuse the models and degrade answer quality.
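A minimal sketch of how this finding might be applied in a retrieval pipeline, under the assumption of a generic embedding-based retriever: instead of passing every top-k hit to the model, keep only a small number of documents above a relevance cutoff. The `select_context` function, the `ScoredDoc` type, and the threshold values are illustrative placeholders, not from the study itself.

```python
# Hypothetical sketch: prefer fewer, more relevant documents over
# stuffing the context with every top-k hit. Names and thresholds
# are illustrative, not taken from the study.
from dataclasses import dataclass


@dataclass
class ScoredDoc:
    text: str
    score: float  # retriever similarity score, higher = more relevant


def select_context(
    hits: list[ScoredDoc],
    max_docs: int = 3,        # fewer documents than a typical top-10
    min_score: float = 0.75,  # drop "similar but irrelevant" hits
) -> list[ScoredDoc]:
    """Keep only the highest-scoring documents above a relevance cutoff."""
    relevant = [d for d in hits if d.score >= min_score]
    relevant.sort(key=lambda d: d.score, reverse=True)
    return relevant[:max_docs]


if __name__ == "__main__":
    hits = [
        ScoredDoc("On-topic passage about the query subject", 0.91),
        ScoredDoc("Closely related passage with partial overlap", 0.78),
        ScoredDoc("Lexically similar yet irrelevant passage", 0.62),
    ]
    for doc in select_context(hits):
        print(f"{doc.score:.2f}  {doc.text}")
```

The exact cutoff and document count would need tuning per retriever and corpus; the point the study suggests is simply that a smaller, cleaner context can beat a larger, noisier one.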