
LlamaIndex Dec 5, 2023

LlamaIndex Newsletter 2023–12–05

Hello Llama Community 🦙,

We are excited to collaborate with DeepLearning.AI and TruEra to launch a course on advanced Retrieval-Augmented Generation (RAG) and its evaluation. The course covers Sentence Window Retrieval, Auto-merging Retrieval, and evaluation with TruLens, providing practical tools for enhanced learning and application. To make the most of this learning opportunity, we invite you to take the course.

We appreciate your support and are always excited to see your projects and videos. Feel free to share them at news@llamaindex.ai. Also, remember to subscribe to our newsletter on our website for the latest updates and to connect with our vibrant community.

🤩 First, the highlights:

  1. Launch of Seven Advanced Retrieval LlamaPacks: Reduces building advanced RAG systems to nearly a single line of code, offering techniques like Hybrid Fusion and the Auto-merging Retriever. Tweet.
  2. OpenAI Cookbook on Evaluating RAG: A comprehensive guide for evaluating RAG systems with LlamaIndex, covering system understanding, building, and performance evaluation. Blog, Notebook.
  3. Speed Enhancement in Structured Metadata Extraction: Extracting structured metadata from text is now 2x to 10x faster, boosting RAG performance. Docs, Tweet.
  4. RAGs v3: We launched version 3 of RAGs, our project that lets you use natural language to generate a RAG bot customized to your needs. This version adds web search, so your bot can pull in fresh answers from the web. Tweet.
  5. Core guide for Full-Stack LLM App Development: Simplifies complex app development with tools like ‘create-llama’ for full-stack apps, ‘SEC Insights’ for multi-document processing, and ‘LlamaIndex Chat’ for chatbot customization.

✨ Feature Releases and Enhancements:

  • We’ve launched seven advanced retrieval LlamaPacks, templates that make it easy to build advanced RAG systems. These packs reduce the process to almost a single line of code, moving away from the traditional notebook approach (a minimal usage sketch follows this list). The techniques include Hybrid Fusion, Query Rewriting + Fusion, Retrieval with Embedded Tables, Auto-merging Retriever, Sentence Window Retriever, Node Reference Retriever, and Multi-Document Agents for handling complex queries. Tweet.
  • We’ve introduced new abstractions for structured output extraction in multi-modal settings, enabling the transformation of images into structured Pydantic objects (a sketch follows this list). This is particularly useful for applications like product reviews, restaurant listings, and OCR. Notebook, Tweet.
  • We’ve contributed a guide to the OpenAI Cookbook on evaluating RAG systems with LlamaIndex. It covers understanding RAG systems, building them with LlamaIndex, and evaluating their retrieval and response-generation performance. Blog, Notebook, Tweet.
  • We launched RAGs v3, which adds web search so your bot can answer from beyond its internal corpus. RAGs lets you build a bot in natural language rather than code, offering an experience comparable to ChatGPT plus Bing. Web search is powered by our integration with Metaphor Systems, a search engine tailored for Large Language Models (LLMs), and is currently available for our OpenAI agent. You can also now view the tools the agent uses. Repo, Tweet.
  • We have significantly sped up the extraction of structured metadata (such as titles and summaries) from text to enhance RAG performance. The new implementation is 2x to 10x faster than the previous, slower methods (a sketch follows this list). Docs, Tweet.
  • We have made it incredibly easy to set up a RAG + Streamlit app: a single line of code using our StreamlitChatPack. The pack provides a ready-to-use RAG pipeline and a Streamlit chat interface, customizable in data sources and retrieval algorithms, and it follows the same download pattern sketched below. Docs, Tweet.
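
For the LlamaPack items above, here is a minimal sketch of the pattern on LlamaIndex 0.9.x: download a pack as an editable template, instantiate it over your documents, and call run(). The pack name, directories, and query below are illustrative stand-ins; each of the seven retrieval packs (and the StreamlitChatPack) follows the same flow.

```python
# A minimal sketch of the LlamaPack workflow (LlamaIndex 0.9.x).
# The pack name, directories, and query are illustrative; see the linked
# docs and tweets for the exact names of each of the seven retrieval packs.
from llama_index import SimpleDirectoryReader
from llama_index.llama_pack import download_llama_pack

# Download the pack's source into a local directory so it can be inspected and edited.
SentenceWindowRetrieverPack = download_llama_pack(
    "SentenceWindowRetrieverPack", "./sentence_window_pack"
)

documents = SimpleDirectoryReader("./data").load_data()
pack = SentenceWindowRetrieverPack(documents)

# Each pack exposes run(), which wraps the full retrieval + synthesis pipeline.
response = pack.run("What are the key findings in these documents?")
print(response)
```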
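For the multi-modal structured extraction item, a minimal sketch assuming the 0.9.x MultiModalLLMCompletionProgram API; the RestaurantListing schema, image directory, and prompt are hypothetical placeholders for your own use case.

```python
# A minimal sketch of image-to-Pydantic extraction (LlamaIndex 0.9.x).
# The RestaurantListing schema, image directory, and prompt are hypothetical.
from pydantic import BaseModel

from llama_index import SimpleDirectoryReader
from llama_index.multi_modal_llms import OpenAIMultiModal
from llama_index.output_parsers import PydanticOutputParser
from llama_index.program import MultiModalLLMCompletionProgram


class RestaurantListing(BaseModel):
    """Structured fields to pull out of a restaurant image."""

    name: str
    city: str
    cuisine: str
    rating: float


# Load the images to extract from.
image_documents = SimpleDirectoryReader("./restaurant_images").load_data()

program = MultiModalLLMCompletionProgram.from_defaults(
    output_parser=PydanticOutputParser(output_cls=RestaurantListing),
    image_documents=image_documents,
    prompt_template_str="Extract the restaurant details shown in the image.",
    multi_modal_llm=OpenAIMultiModal(model="gpt-4-vision-preview"),
)

listing = program()  # returns a RestaurantListing instance
print(listing.name, listing.rating)
```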
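For the structured metadata extraction item, a minimal sketch of running the title and summary extractors as ingestion transformations on LlamaIndex 0.9.x. The data directory and chunk size are illustrative, and the speedup comes from the new implementation under the hood rather than from any flag you set here.

```python
# A minimal sketch of structured metadata extraction as ingestion transformations
# (LlamaIndex 0.9.x). The data directory and chunk size are illustrative; the
# 2x-10x speedup comes from the new implementation, not from any flag set here.
from llama_index import SimpleDirectoryReader
from llama_index.extractors import SummaryExtractor, TitleExtractor
from llama_index.ingestion import IngestionPipeline
from llama_index.text_splitter import SentenceSplitter

documents = SimpleDirectoryReader("./data").load_data()

pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(chunk_size=512),
        TitleExtractor(),    # adds a document title to each node's metadata
        SummaryExtractor(),  # adds a section summary to each node's metadata
    ]
)

nodes = pipeline.run(documents=documents)
print(nodes[0].metadata)
```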

👀 Demo:

AInimal Go is a multi-modal app inspired by Pokémon Go. Developed by Harshad Suryawanshi, this interactive application lets users capture or upload images of animals, classifies them with a ResNet-18 model, and lets users converse with the animals, backed by a knowledge base of over 200 Wikipedia articles. Notably, the app uses a targeted ResNet classifier instead of GPT-4V, making classification faster and cheaper.

Blog, Repo, HuggingFace Space, Tweet.

🗺️ Guides:

  • We introduce a core guide within the LlamaIndex ecosystem, designed to simplify “full-stack” app development, which is notably more complex than notebook development. This includes ‘create-llama’ for building full-stack apps with advanced templates, ‘SEC Insights’ for multi-document handling of over 10,000 filings, and ‘LlamaIndex Chat’ for a customizable chatbot experience. All tools are open-source with full guides and tutorials available.
  • Guide on using the Table Transformer model with GPT-4V for advanced RAG over tables in PDFs: our method uses CLIP for page retrieval, the Table Transformer for extracting table images, and GPT-4V for answer synthesis. This approach is compared with three other multi-modal table-understanding techniques: CLIP retrieval over whole pages, text extraction and indexing with GPT-4V, and OCR on table images for context.
  • Guide on analyzing various multi-modal models for their ability to extract structured data from complex product images on an Amazon page. The models compared are GPT-4V, Fuyu-8B, MiniGPT-4, CogVLM, and LLaVA-13B. Key findings: all models misidentified the number of reviews (correct answer: 5685), only GPT-4V and Fuyu-8B got the price right, every model's product description deviated from the original, and MiniGPT-4 misjudged the product rating.

✍️ Tutorials:

  • Jo Kristian Bergum’s blog post, a hands-on RAG guide for personal data with Vespa and LlamaIndex.
  • Wenqi Glantz’s tutorial, “Llama Packs: The Low-Code Solution to Building Your LLM Apps.”
  • Liza Shulyayeva’s in-depth tutorial on building and deploying a retrieval-augmented generation (RAG) app to conversationally query the contents of your video library.

🎥 Webinars:

  • Webinar on PrivateGPT — Production RAG with Local Models.

🏆 Hackathons:

  • A reminder that there’s still time to join the TruEra Challenge, an online hackathon running Dec 1st to 8th, exploring AI observability with technology from TruEra and Google Vertex AI. Use the LlamaIndex framework to enhance your LLM-based app. Participants receive $30 in Google Cloud credits, plus an additional $100 upon solution submission. Winners share a $9,000 cash prize pool and $14,000 in Google Cloud credits.
  • We partnered with Zilliz Universe to participate in their Advent of Code event. This December, explore 25 open-source projects, with daily challenges to build something in 30 minutes or less. It’s a great opportunity to learn new skills and have winter fun. For tips, tutorials, and resources, visit the Advent of Code channel in our Discord each day.