
LlamaIndex Jan 16, 2024

LlamaIndex Newsletter 2024-01-16

Hello LlamaIndex Enthusiasts 🦙,

Get ready for an exciting week at LlamaIndex, teeming with dynamic community contributions and insightful learning resources. Dive into our range of new features, tutorials, guides, and events, all designed to enhance your LlamaIndex journey.

We’re excited to announce our very first in-person hackathon, scheduled for February 2nd-4th. Join us to connect with fellow RAG enthusiasts and compete for prizes totaling over $4,000!

If you’ve been working on a fascinating project, penned an insightful article, or produced an engaging video, we’re eager to see it! Share your contributions with us at news@llamaindex.ai. Don’t forget to subscribe to our newsletter on our website to receive all the latest updates directly in your inbox.

🤩 The highlights:

  1. Chain-of-Table: Step-by-step table reasoning and operations for enhanced LLM tabular data understanding. LlamaPack, Tweet.
  2. LLM Self-Consistency: Merges textual and symbolic reasoning with majority voting for precise answers. LlamaPack, Tweet.
  3. Semantic Text Splitting in RAG: Greg Kamradt’s embedding similarity method for efficient document splitting. LlamaPack, Tweet.
  4. Parallel RAG Ingestion: Up to 15x faster document processing in LlamaIndex. Notebook, Tweet.
  5. TogetherAI’s Embeddings Support: Guide to building retrieval-augmented apps with MistralAI’s 8x7b model and TogetherAI Embeddings. Blogpost, Tweet.

✨ Feature Releases and Enhancements:

  • We launched Chain-of-Table Framework in LlamaPack for LLM Tabular Data Understanding. This approach enables step-by-step table reasoning and operations like adding columns, row selection, grouping, and sorting, mimicking a data scientist’s method for concise data representation. LlamaPack, Tweet.
  • We launched LLM Self-Consistency Mechanism for Tabular Data in LlamaPack. This method combines textual and symbolic reasoning, utilizing a novel mix self-consistency approach with majority voting to select the best answer. LlamaPack, Tweet.
  • We introduced Semantic Text Splitting in RAG with LlamaPack. Check out Greg Kamradt’s method of splitting documents based on embedding similarity between sentences. This auto-tuned threshold approach enhances RAG pipelines, built in LlamaPack using LlamaIndex abstractions. LlamaPack, Tweet.
  • We launched Parallel RAG Ingestion in LlamaIndex for up to 15x Faster Document Processing. Notebook, Tweet.
  • We launched support for TogetherAI’s Embeddings endpoint. Check the blog for a step-by-step guide on building a retrieval-augmented generation app with MistralAI’s 8x7b model and TogetherAI Embeddings. Blogpost, Tweet.
  • We integrated AgentSearch-v1 as a data loader and retriever in LlamaHub, offering a robust alternative for internet content search and retrieval without relying on Bing/Google APIs. LlamaPack, Tweet.
  • Raduaschl introduced Ensembling and Fusion in Advanced RAG with LlamaPack. Learn to build an ensembling + fusion pipeline in about 30 lines of code using QueryPipeline syntax, featuring full async support. LlamaPack, Tweet.
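The semantic splitting idea above can be sketched in a few lines of plain Python: embed each sentence, measure the distance between adjacent sentences, and break wherever the distance clears a percentile-based threshold. This is a toy illustration, not Greg Kamradt’s or LlamaIndex’s actual implementation; in particular, the bag-of-words `embed` below is a stand-in for a real embedding model.

```python
import math
from collections import Counter

def embed(sentence):
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_split(sentences, percentile=80):
    """Group sentences into chunks, breaking where adjacent similarity drops."""
    dists = [1 - cosine(embed(a), embed(b))
             for a, b in zip(sentences, sentences[1:])]
    if not dists:
        return [list(sentences)]
    # "Auto-tuned" threshold: the chosen percentile of observed distances.
    idx = min(len(dists) - 1, int(len(dists) * percentile / 100))
    threshold = sorted(dists)[idx]
    chunks, current = [], [sentences[0]]
    for sent, dist in zip(sentences[1:], dists):
        if dist >= threshold:
            chunks.append(current)
            current = []
        current.append(sent)
    chunks.append(current)
    return chunks
```

Feeding in a few sentences about cats followed by a few about stock markets, the split lands at the topic boundary, since that is where adjacent-sentence similarity collapses.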

🗺️ Guides:

  • Guide to Building Full-Stack RAG Applications with LlamaIndex and Azure Cosmos DB.
  • Guide showing how to combine auto-retrieval for semi-structured retrieval over metadata with MMR to enforce diversity in results.
  • Guide by MountainMicky on understanding the importance of reranking in advanced RAG pipelines.
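The MMR (maximal marginal relevance) step mentioned in the second guide trades query relevance off against redundancy among already-selected results. Here is a minimal sketch under simple assumptions (dense vectors, cosine similarity); the scoring rule is the standard MMR formula, not LlamaIndex’s internal implementation, and the λ parameter and vectors are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def mmr(query_vec, doc_vecs, k=2, lam=0.5):
    """Greedily pick k documents by maximal marginal relevance:
    lam * sim(query, doc) - (1 - lam) * max sim(doc, already selected)."""
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max(
                (cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam=1.0` this reduces to plain relevance ranking; lowering `lam` pushes the selection toward documents dissimilar to those already chosen, which is how MMR enforces diversity.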

🎥 Events:

  • Ravi Theja gave a talk on building a multi-tenancy RAG system with LlamaIndex and Qdrant at FOSS United, Bangalore, India.

🏢 Calling all enterprises:

Are you building with LlamaIndex? We are working hard to make LlamaIndex even more enterprise-ready, and we have sneak peeks of our upcoming products available for partners. Interested? Get in touch.