
LlamaIndex Oct 10, 2023

LlamaIndex update 2023-10-10

Here’s our weekly look at developments across the LLM space and RAG (Retrieval Augmented Generation) in particular, as well as the latest news and features from your favorite open source library. If you’ve got a project (or a blog post, or a video) that you think people should hear about, we’re happy to feature it in here! Drop us a line at news@llamaindex.ai.

This update is now available in handy email form! Just head to our home page and enter your email to sign up.

🤩 First, the highlights:

  1. Full observability with Arize AI Phoenix: we launched a one-code-line integration with Arize AI for comprehensive tracing and observability in all RAG/agent pipelines. Enjoy local data storage, track LLM input/output prompts, monitor token usage, timing, retrieval visualizations, and agent loops. Additionally, export traces for evaluations and data analysis. All while ensuring your data stays local. Notebook, Tweet.
  2. RetrieverEvaluator: new in the library, “RetrieverEvaluator” allows enhanced retrieval evaluations, complementing LLM generation tests. The module supports benchmarking, standard ranking metrics, and synthetic dataset creation for comprehensive retrieval assessments. Docs, Tweet.
  3. HuggingFace Embeddings: we added native support for three more Hugging Face embedding models, including the base embeddings wrapper, instructor embeddings, and optimum embeddings in ONNX format. Docs, Tweet.
  4. Multi-Document Agents: we’ve introduced v0 experimental support for multi-document agents for advanced QA, beyond typical top-k RAG. It supports diverse queries from single to multiple docs. This foundational version sets the stage for future enhancements like parallel query planning and reduced latency. Docs, Tweet.

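The ranking metrics behind the RetrieverEvaluator highlight above are simple to state: hit rate (did any relevant document make the retrieved list?) and MRR (mean reciprocal rank of the first relevant hit). A minimal, library-free sketch of the two; the function names and data shapes are illustrative, not LlamaIndex's API:

```python
def hit_rate(retrieved_ids, expected_ids):
    """1.0 if any expected document appears in the retrieved list, else 0.0."""
    return 1.0 if any(doc_id in expected_ids for doc_id in retrieved_ids) else 0.0

def mrr(retrieved_ids, expected_ids):
    """Reciprocal rank of the first relevant document (0.0 if none is found)."""
    for rank, doc_id in enumerate(retrieved_ids, start=1):
        if doc_id in expected_ids:
            return 1.0 / rank
    return 0.0

# Score one query: the relevant doc "d2" is retrieved at rank 2.
retrieved = ["d7", "d2", "d9"]
expected = {"d2"}
print(hit_rate(retrieved, expected))  # 1.0
print(mrr(retrieved, expected))       # 0.5
```

Averaging these per-query scores over a (possibly synthetic) question/document dataset gives the benchmark numbers the evaluator reports.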
🏆 Congratulations to our Streamlit Hackathon Winners!

We love seeing people build amazing things with LlamaIndex!

  1. NewsGPT by Kang-Chi Ho: https://buff.ly/46jkutx
  2. FinSight by Vishwas Gowda: https://buff.ly/3PzOnyC

Feature Releases and Enhancements:

  1. Multi-Document Agents: we introduced multi-document agents (V0) for advanced QA, beyond typical top-k RAG. They support diverse queries from single to multiple docs. This foundational version sets the stage for future enhancements like parallel query planning and reduced latency. Docs, Tweet.
  2. Ensemble Retriever: we’re addressing the RAG challenge of determining chunk size by experimenting with diverse document chunking and ensembling for retrieval. Docs, Tweet.
  3. HuggingFace Embeddings: we added native support for three more Hugging Face embedding models, including the base embeddings wrapper, instructor embeddings, and optimum embeddings in ONNX format. Docs, Tweet.
  4. OpenAI Function Calling fine-tuning: we’re using OpenAI’s latest function-calling fine-tuning, which enhances structured data extraction, optimizing gpt-3.5-turbo for improved extraction in RAG. Docs, Tweet.
  5. Metadata Extraction: we’re making metadata extraction efficient by extracting a complete Pydantic object from a document with just one LLM call. Docs, Tweet.
  6. Structured RAG Outputs: we now efficiently structure RAG pipeline outputs with native Pydantic outputs from all queries without the need for an additional LLM parsing call. Docs, Tweet.
  7. Streamlined secinsights.ai deployment: Our open-sourced secinsights.ai offers a RAG app template, now enhanced with GitHub Codespaces and Docker for swift cloud deployment without setup hassles. Tweet.
  8. LongContextReorder: We introduced LongContextReorder, Zeneto’s approach to repositioning vital context in RAG systems, addressing the challenge of over-retrieving, which can obscure essential details. Docs, Tweet.
  9. RA-DIT: We drew inspiration from the RA-DIT paper, which introduced LLM fine-tuning for retrieval-augmented input prompts to improve RAG systems. This method fosters enhanced utilization of context and more effective answer synthesis, even in the presence of suboptimal context. Docs, Tweet.
  10. Blockchain: LlamaIndex data agents can now be used to analyze any blockchain subgraph using natural language queries. Tweet.
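To make the LongContextReorder idea (item 8) concrete: given nodes sorted most-relevant-first, alternate placing them at the front and back of the final list, so the strongest matches land at the edges of the context window, where LLMs attend best, and the weakest sink to the middle. A minimal sketch of that reordering, not LlamaIndex's actual implementation:

```python
def long_context_reorder(nodes_by_relevance):
    """Reorder nodes so the most relevant sit at both ends of the list
    and the least relevant end up in the middle."""
    reordered = []
    # Walk from least to most relevant, alternating front/back placement.
    for i, node in enumerate(reversed(nodes_by_relevance)):
        if i % 2 == 0:
            reordered.insert(0, node)  # goes to the front
        else:
            reordered.append(node)     # goes to the back
    return reordered

# 1 = most relevant, 5 = least relevant.
print(long_context_reorder([1, 2, 3, 4, 5]))  # [1, 3, 5, 4, 2]
```

Note how ranks 1 and 2 end up first and last, while rank 5 is buried in the middle, exactly where over-retrieved noise does the least harm.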

🔎 RAG Evaluation Enhancements:

  1. RetrieverEvaluator: We introduced “RetrieverEvaluator” for enhanced retrieval evaluations, complementing LLM generation tests. The module supports benchmarking, standard ranking metrics, and synthetic dataset creation for comprehensive retrieval assessments. Docs, Tweet.
  2. SemanticSimilarityEvaluator: We introduced SemanticSimilarityEvaluator, a new evaluator for LLM/RAG outputs that compares embedding similarity between reference and generated answers. Docs, Tweet.
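The core computation in a semantic similarity evaluation is cosine similarity between the embeddings of the reference answer and the generated answer, compared against a pass/fail threshold. A library-free sketch, with toy 3-d vectors standing in for real embedding-model output (the `evaluate` helper and its result shape are illustrative, not LlamaIndex's API):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def evaluate(reference_emb, generated_emb, threshold=0.8):
    """Pass if the generated answer's embedding is close enough to the reference."""
    score = cosine_similarity(reference_emb, generated_emb)
    return {"score": score, "passing": score >= threshold}

# Toy embeddings; in practice both come from the same embedding model.
result = evaluate([1.0, 0.0, 1.0], [0.9, 0.1, 1.1])
print(result["passing"])  # True: the vectors point in nearly the same direction
```

Because it never calls an LLM to judge, this check is cheap enough to run over every response in a pipeline.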

📚 Tutorials:

  1. Guide on building RAG from scratch with open-source modules.
  2. Dstack tutorial on implementing RAG with OSS LLMs using LlamaIndex and Weaviate.
  3. Wenqi Glantz tutorial on Exploring ReAct Agent for Better Prompting in RAG Pipeline.
  4. Javier Torres tutorial on building a multi-document chatbot.
  5. Erika Cardenas tutorial on RAG techniques in LlamaIndex covering SQL Router Query Engine, Sub Question Query Engine, Recursive Retriever Query Engine, Self-Correcting Query Engine.
  6. Wenqi Glantz tutorial on 7 Query Strategies for Navigating Knowledge Graphs With LlamaIndex.
  7. Ravi Theja tutorial on Evaluating the Ideal Chunk Size for RAG using LlamaIndex.

⚙️ Integrations & Collaborations:

  1. Arize AI Phoenix: We launched a one-code-line integration with Arize AI for comprehensive tracing and observability in all RAG/agent pipelines. Enjoy local data storage, track LLM input/output prompts, monitor token usage, timing, retrieval visualizations, and agent loops. Additionally, export traces for evaluations and data analysis. All while ensuring your data stays local. Notebook, Tweet.
  2. Neo4j: We introduced an API spec for LLM-agent interaction with Neo4j, offering beyond just “text-to-cypher” with full agent reasoning. Docs, Tweet.
  3. TimescaleDB: We integrated with TimescaleDB for enhanced time-based retrieval in RAG systems, offering time filters and cost-effective storage solutions. Blogpost, Tweet.
  4. BraintrustData: We integrated with BraintrustData, enabling seamless RAG pipeline construction, evaluations, and easy public URL sharing for results. Notebook, Tweet.
  5. LocalAI: We integrated LocalAI LLM support for on-prem runs or as an alternative to the OpenAI LLM. Tweet.
  6. HoneyHiveAI: We integrated with HoneyHiveAI for enhanced multi-step RAG/agent pipeline monitoring. Log traces, gather user feedback, and utilize it for precise fine-tuning and evaluations. Docs, Tweet.
  7. UnstructuredIO: We integrated with UnstructuredIO to tackle the RAG challenge of querying embedded tables in 10-K filings. Now, seamlessly query any tabular data or text within a 10-K document. Notebook, Tweet.
  8. Clarifai: We integrated with Clarifai, offering access to 40+ LLMs and various embedding models. Tweet.
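Time-based retrieval of the kind the TimescaleDB integration enables (item 3) can be pictured as a time-window filter applied before similarity ranking. A library-free sketch; the tuple-based corpus and `retrieve` helper are illustrative, not the integration's actual API:

```python
from datetime import datetime, timedelta

# Illustrative corpus: (timestamp, similarity score, text).
docs = [
    (datetime(2023, 10, 1), 0.92, "release notes"),
    (datetime(2023, 7, 4), 0.95, "old design doc"),
    (datetime(2023, 10, 8), 0.81, "incident report"),
]

def retrieve(docs, now, window, top_k=2):
    """Keep only documents inside the time window, then rank by similarity."""
    recent = [d for d in docs if now - d[0] <= window]
    return sorted(recent, key=lambda d: d[1], reverse=True)[:top_k]

hits = retrieve(docs, now=datetime(2023, 10, 10), window=timedelta(days=30))
print([text for _, _, text in hits])  # ['release notes', 'incident report']
```

Note that the "old design doc" is excluded despite having the highest similarity score: the time filter runs first, which is what makes this pattern useful for recency-sensitive questions.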

🎥 Webinars:

  1. Webinar by SingleStoreDB on How to Build a GenAI App with LlamaIndex.
  2. Webinar on projects built during the SuperAGI Autonomous Agents Hackathon featuring evo.ninja, RicAI, Atlas and MunichAI.

🎈 Events:

  1. Jerry Liu and Simon conducted a workshop on RAG + Evaluation at RaySummit.
  2. Yi Ding spoke on ‘LLM Quirks Mode’ at MLOps community event.
  3. Jerry Liu spoke on Evals/Benchmarking and Advanced RAG techniques at AIConf 2023.
  4. Ravi Theja conducted a workshop on Mastering RAG with LlamaIndex at PyCon India 2023.
  5. Ravi Theja presented a poster on Automatic Knowledge Transfer (KT) video generation on code bases using LlamaIndex at PyCon India 2023.