An Accounting Department is thrilled with the RAG Application that the Application & Data Science Teams recently rolled out. However, they have provided feedback that, approximately 20% of the time, the retrieved documents are not relevant to their prompts or are too generic.
During development, extensive testing was performed across candidate models to ensure the best possible model was selected. The Accounting Department emphasizes that when the responses use the right documents, the results are very good, and they are pleased with the completeness, accuracy, and coherence of those responses.
What would be a way to address the irrelevant RAG results without having to rebuild the entire workflow?
A. Replace the embedding model with a larger, more general-purpose language model to improve document retrieval.
B. Fine-tune the Large Language Model on a broader dataset to enable it to generate more relevant responses.
C. Implement a reranking model as a post-retrieval step to re-order the initially retrieved documents based on query-document relevance.
D. Significantly expand the document knowledge base by ingesting a much larger volume of financial reports.
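
For illustration, below is a minimal sketch of the post-retrieval reranking step described in option C. It assumes a cross-encoder from the sentence-transformers library as the relevance scorer; the model name, query, and document list are purely illustrative, and any query-document relevance scorer could be substituted.

```python
# Minimal sketch: re-order initially retrieved documents with a cross-encoder
# reranker before they are passed to the LLM. Documents and model are
# illustrative assumptions, not part of the original question.
from sentence_transformers import CrossEncoder

# Hypothetical documents returned by the existing retrieval step.
retrieved_docs = [
    "General overview of the company's mission and values.",
    "Q3 revenue recognition policy for subscription contracts.",
    "Accounts payable aging report procedures and cutoff rules.",
]

query = "How do we recognize revenue on multi-year subscription contracts?"

# Score each (query, document) pair for relevance.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, doc) for doc in retrieved_docs])

# Re-order documents from most to least relevant; the top results are then
# used as context for the generation step.
reranked = [doc for _, doc in sorted(zip(scores, retrieved_docs),
                                     key=lambda pair: pair[0], reverse=True)]
for rank, doc in enumerate(reranked, start=1):
    print(rank, doc)
```

This kind of step slots in after the existing retriever without rebuilding the workflow: the embedding model, vector store, and LLM stay as they are, and only the ordering of the candidate documents changes.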