GraphRAG: A Hierarchical Approach to Retrieval-Augmented Generation – 💡 This is a community blog by Akash Desai. What is RAG? Retrieval-Augmented Generation (RAG) is an architecture that combines traditional information retrieval systems with large…
Chat with Your Stats Using the LangChain DataFrame Agent & LanceDB Hybrid Search – In this blog, we’ll explore how to build a chat application that interacts with CSV and Excel files using LanceDB’s hybrid search capabilities.
A Practical Introduction to Adaptive-RAG – Traditional LLMs provide answers based on the fixed knowledge base on which they were trained. This limits their ability to respond with current or specific…
Hybrid Search: Combining BM25 and Semantic Search for Better Results with LangChain – Have you ever wondered how search engines find exactly what you’re looking for? They usually use a mix of searching for specific…
Advanced RAG: Precise Zero-Shot Dense Retrieval with HyDE – In the world of search engines, the quest to find the most relevant information is a constant challenge. Researchers are always on the lookout for…
Better RAG with Active Retrieval Augmented Generation (FLARE), by Akash A. Desai – Welcome to our deep dive into Forward-Looking Active Retrieval Augmented Generation (FLARE), an innovative approach for enhancing the accuracy and reliability of…
Optimizing LLMs: A Step-by-Step Guide to Fine-Tuning with PEFT and QLoRA – A practical guide to fine-tuning LLMs using QLoRA. Conducting inference with large language models (LLMs) demands significant GPU power and memory resources, which can be…
Context-Aware Chatbot Using Llama 2 & the LanceDB Serverless Vector Database – Building a real chatbot using the RAG method, by Akash A. Desai. Introduction: Many people know about OpenAI’s cool AI models like GPT-3.5 and…