RAGAS (Retrieval-Augmented Generation Assessment) is an open-source framework for evaluating RAG systems using metrics computed by language models, most of which need no human-written reference. Its core metrics are: faithfulness, which checks whether the answer is grounded in the retrieved context; answer relevancy, which measures how directly the answer addresses the question; context precision, which measures the proportion of retrieved chunks that are relevant, rewarding rankings that place relevant chunks first; and context recall, which measures whether all the information needed to answer was retrieved (this metric compares the context against a reference answer). RAGAS requires a judge language model to compute most metrics.
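To make the context precision metric concrete, here is a minimal sketch of the arithmetic once per-chunk relevance verdicts are available. In RAGAS itself those verdicts come from the LLM judge; this toy function takes them as pre-computed 0/1 labels, and its name and signature are illustrative rather than the library's API. The score is the mean of precision@k over the ranked chunks, counted only at positions holding a relevant chunk.

```python
# Toy sketch of a RAGAS-style context precision score. In the real
# framework, the per-chunk relevance verdicts are produced by an LLM
# judge; here they are passed in as 0/1 labels so the arithmetic is
# self-contained. (Hypothetical helper, not the RAGAS API.)

def context_precision(verdicts: list[int]) -> float:
    """Average precision@k over ranked chunks, weighted by relevance.

    verdicts[k-1] is 1 if the judge deemed the k-th retrieved chunk
    relevant to the question, else 0.
    """
    total_relevant = sum(verdicts)
    if total_relevant == 0:
        return 0.0  # no relevant chunk retrieved at all
    score = 0.0
    relevant_so_far = 0
    for k, v in enumerate(verdicts, start=1):
        relevant_so_far += v
        # precision@k contributes only at positions holding a relevant chunk
        score += v * (relevant_so_far / k)
    return score / total_relevant

# The same two relevant chunks score higher when ranked first:
print(context_precision([1, 0, 1]))  # 0.8333...
print(context_precision([0, 1, 1]))  # 0.5833...
```

Note how the position weighting distinguishes the two retrievals above even though both contain the same number of relevant chunks: the metric rewards retrievers that rank relevant context early.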