LlamaIndex Oct 17, 2023

LlamaIndex Newsletter 2023–10–17

Hello Llama Enthusiasts 🦙!

Another week has flown by, and we’re back with a jam-packed newsletter filled with updates on hackathons, guides, integrations, features, webinars, tutorials, blogs, and demos. If you have a project, blog post, or video that deserves a spotlight, we’d love to feature it! Just reach out to us at news@llamaindex.ai.

Bonus: You can now get all these updates straight to your inbox! Simply visit our homepage and sign up for our email updates.

🤩 First, the highlights:

  1. AI.Engineer Summit: At the AI.Engineer Summit, Jerry Liu gave a talk on building production-ready RAG applications, while Simon led a workshop on building, evaluating, and optimizing RAG apps. (Jerry’s slides, Simon’s slides)
  2. Text to pgVector: We launched PGVectorSQLQueryEngine for combined SQL and vector queries on PostgreSQL. (Docs, Tweet)
  3. Hugging Face Integration: Integrated with HuggingFace’s text-embeddings-inference server for high-speed, large-scale BERT model serving. (Docs, Tweet)
  4. Multi-Document Agents: New V1 agents support advanced multi-document retrieval and async query planning. (Docs, Tweet)
  5. Unstructured Parsing: Unveiled UnstructuredElementNodeParser, a hierarchical parser for embedded tables/text using UnstructuredIO. (Docs, Tweet)
  6. LLM Compatibility: We charted LLM performance on various tasks and found that Zephyr-7b-alpha stands out as the top-performing 7B model on advanced RAG tasks. (Docs)

🏆 Congratulations to our AGI House Hackathon Winners!

We love seeing people build amazing things with LlamaIndex!

Build:

  1. Demostify
  2. Stick with Fit, SafeQuery, Cherry

Break:

Test:

  • X-Ray Insight

Honorable Mentions:

🎤 LlamaIndex at AI.Engineer Summit:

  1. Jerry Liu gave a talk on Building Production-Ready RAG Applications. Slides.
  2. Simon conducted a workshop on Building, Evaluating, and Optimizing your RAG App for Production with LlamaIndex. Slides, Code.

🗺️ Guides:

  1. LLM Compatibility Tracking: We’ve charted LLM performance on various tasks, revealing zephyr-7b-alpha as the only current 7B model excelling at advanced RAG/agentic tasks. Docs.
  2. Evaluations: Adjusting chunk size is essential for RAG apps: more chunks aren’t necessarily better, and re-ranking might even be counterproductive. To fine-tune, experiment with different chunk sizes and top-k values (a rough sketch of such a sweep follows this list). The Arize AI team has provided a guide to evaluating with Arize AI Phoenix and LlamaIndex. Slides, Notebook.
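
To make the chunk-size/top-k suggestion concrete, here is a rough sketch of that kind of sweep, assuming 0.8.x-era LlamaIndex APIs (ServiceContext, VectorStoreIndex); the data directory, query, and the evaluation step (e.g. with Phoenix or RAGAS) are placeholders you would supply.

```python
# Hypothetical sweep over chunk sizes and top-k values; assumes LlamaIndex ~0.8.x
# and an OpenAI API key in the environment for the default LLM/embeddings.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex

docs = SimpleDirectoryReader("./data").load_data()  # placeholder corpus

for chunk_size in (256, 512, 1024):
    # chunk_size controls how documents are split before embedding
    service_context = ServiceContext.from_defaults(chunk_size=chunk_size)
    index = VectorStoreIndex.from_documents(docs, service_context=service_context)
    for top_k in (2, 4, 8):
        query_engine = index.as_query_engine(similarity_top_k=top_k)
        response = query_engine.query("What are the key findings?")  # placeholder query
        # Score `response` with your evaluation harness (e.g. Phoenix, RAGAS)
        # and compare configurations before settling on one.
        print(chunk_size, top_k, str(response)[:80])
```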

✍️ Tutorials:

  1. Shahul’s tutorial demonstrates how to choose the best embeddings for your data with the LlamaIndex and RAGAS libraries, emphasizing that retriever performance and embedding quality are crucial to a RAG system’s efficacy.
  2. Wenqi Glantz’s tutorial on Evaluation Driven Development for RAG Pipelines.
  3. Wenqi Glantz’s tutorial on Masking PII Data in the RAG Pipeline.
  4. Ofer Mendelevitch from Vectara has a tutorial on Retrieval Augmented Generation with LlamaIndex, comparing Vectara’s new Boomerang embedding model to OpenAI and Cohere.
  5. Patrick Loeber from AssemblyAI has a tutorial on Build LlamaIndex Audio Apps.
  6. Pradip Nichite made a tutorial on NL2SQL with LlamaIndex: Querying Databases Using Natural Language (a minimal text-to-SQL sketch follows this list).
  7. Mayo Oshin has a tutorial on How to Compare Multiple Large PDF Files.
  8. Sudarshan Koirala made a tutorial on Chat With Documents with LlamaIndex and Pinecone.
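
Since a couple of the tutorials above touch on text-to-SQL, here is a minimal sketch of natural-language querying over a SQL table with LlamaIndex, assuming ~0.8.x APIs; the SQLite database and the city_stats table are hypothetical and must already exist.

```python
# Hypothetical NL2SQL example; assumes LlamaIndex ~0.8.x, an OpenAI API key,
# and an existing SQLite database with a "city_stats" table.
from sqlalchemy import create_engine
from llama_index import SQLDatabase
from llama_index.indices.struct_store.sql_query import NLSQLTableQueryEngine

engine = create_engine("sqlite:///city_stats.db")  # placeholder database
sql_database = SQLDatabase(engine, include_tables=["city_stats"])

# The query engine converts the question into SQL, runs it, and synthesizes an answer.
query_engine = NLSQLTableQueryEngine(sql_database=sql_database, tables=["city_stats"])
response = query_engine.query("Which city has the highest population?")
print(response)
```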

💡 Demos:

✨ Feature Releases and Enhancements:

  1. Text to pgVector: We introduced the PGVectorSQLQueryEngine, which lets you query a PostgreSQL database using full SQL and vector search in a single query (sketch below). Docs, Tweet.
  2. Multi-Document Agents: We introduced Multi-Document Agents (V1), which can retrieve across multiple documents and plan queries asynchronously, offering superior analysis compared to standard RAG (sketch below). Docs, Tweet.
  3. UnstructuredIO: We’ve partnered with UnstructuredIO to enhance LLM/RAG applications. By extracting tables from PDFs, we’ve improved query methods beyond basic vector indexing, enabling hybrid queries and cross-document comparisons, especially for tabular questions. Docs, Tweet.
  4. UnstructuredElementNodeParser: Going beyond basic text splitting, we introduced the UnstructuredElementNodeParser, which models embedded tables and text hierarchically in a data graph using UnstructuredIO (sketch below). Docs, Tweet.
  5. Cross-Encoder Fine-Tuning: Cross-encoders enhance RAG by refining post-embedding search results. With LlamaIndex, you can now fine-tune cross-encoders on any document to boost performance (reranking sketch below). Docs, Tweet.
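
For the Text to pgVector item, here is a minimal sketch, assuming the PGVectorSQLQueryEngine import path below, a pgvector-enabled PostgreSQL instance, and a hypothetical sec_text_chunk table that already contains text plus an embedding column (see the Docs for the full setup).

```python
# Hypothetical setup; connection string and table name are placeholders.
from sqlalchemy import create_engine
from llama_index import SQLDatabase
from llama_index.query_engine import PGVectorSQLQueryEngine  # assumed import path

engine = create_engine("postgresql+psycopg2://user:pass@localhost:5432/mydb")
sql_database = SQLDatabase(engine, include_tables=["sec_text_chunk"])

# The engine translates the question into SQL that can mix ordinary filters
# with pgvector similarity search over the embedding column.
query_engine = PGVectorSQLQueryEngine(sql_database=sql_database)
response = query_engine.query(
    "Summarize the risk factors discussed in the most relevant chunks."
)
print(response)
```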
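
For Multi-Document Agents, the full V1 recipe in the Docs builds a retrieval layer over per-document agents; the sketch below is a simplified, flat version of that idea using QueryEngineTool and OpenAIAgent, with the file names as placeholders.

```python
# Simplified multi-document agent sketch; assumes LlamaIndex ~0.8.x and an
# OpenAI API key. The real V1 setup adds per-document agents and tool retrieval.
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.agent import OpenAIAgent
from llama_index.tools import QueryEngineTool, ToolMetadata

doc_sets = {
    "lyft": SimpleDirectoryReader(input_files=["lyft_10k.pdf"]).load_data(),
    "uber": SimpleDirectoryReader(input_files=["uber_10k.pdf"]).load_data(),
}

# One query-engine tool per document set.
tools = []
for name, docs in doc_sets.items():
    index = VectorStoreIndex.from_documents(docs)
    tools.append(
        QueryEngineTool(
            query_engine=index.as_query_engine(similarity_top_k=3),
            metadata=ToolMetadata(
                name=f"{name}_10k",
                description=f"Answers questions about the {name} 10-K filing.",
            ),
        )
    )

# A top-level agent decides which documents to consult and in what order.
agent = OpenAIAgent.from_tools(tools, verbose=True)
print(agent.chat("Compare revenue growth across the two filings."))
```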
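
For the UnstructuredElementNodeParser, here is a minimal sketch following the recursive-retrieval pattern described in the Docs; it assumes the `unstructured` package is installed, that the method names below match the released parser, and that the 10-Q filing path is a placeholder.

```python
# Hypothetical table-aware parsing sketch; method names assumed from the docs.
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.node_parser import UnstructuredElementNodeParser
from llama_index.retrievers import RecursiveRetriever
from llama_index.query_engine import RetrieverQueryEngine

docs = SimpleDirectoryReader(input_files=["tesla_10q.htm"]).load_data()  # placeholder

# Parse text and embedded tables into base nodes plus mappings to table elements.
node_parser = UnstructuredElementNodeParser()
raw_nodes = node_parser.get_nodes_from_documents(docs)
base_nodes, node_mappings = node_parser.get_base_nodes_and_mappings(raw_nodes)

# Retrieve over the base nodes; table references resolve through the mappings.
vector_retriever = VectorStoreIndex(base_nodes).as_retriever(similarity_top_k=1)
recursive_retriever = RecursiveRetriever(
    "vector",
    retriever_dict={"vector": vector_retriever},
    node_dict=node_mappings,
)
query_engine = RetrieverQueryEngine.from_args(recursive_retriever)
print(query_engine.query("What was total revenue in the latest quarter?"))
```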
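
For cross-encoders, the fine-tuning workflow itself lives in the Docs; the sketch below only shows the downstream step of plugging a cross-encoder (your fine-tuned checkpoint or an off-the-shelf one) in as a post-embedding reranker, assuming the SentenceTransformerRerank postprocessor.

```python
# Retrieve broadly with embeddings, then let a cross-encoder rerank the candidates.
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.indices.postprocessor import SentenceTransformerRerank

docs = SimpleDirectoryReader("./data").load_data()  # placeholder corpus
index = VectorStoreIndex.from_documents(docs)

reranker = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-6-v2",  # swap in your fine-tuned model
    top_n=3,
)
query_engine = index.as_query_engine(
    similarity_top_k=10,  # over-retrieve, then rerank down to top_n
    node_postprocessors=[reranker],
)
print(query_engine.query("Which section discusses liquidity risk?"))
```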

⚙️ Integrations & Collaborations:

  1. Assembly AI: We introduced a new data reader for audio data integration with AssemblyAI. It makes it easy to load audio as documents and build vector store indices and query engines over the transcripts (sketch after this list). Docs, Tweet.
  2. Nougat — MetaAI: We integrated Nougat, Meta’s OCR tool that excels at interpreting scientific papers (notably mathematical notation and LaTeX), as a loader in LlamaHub, allowing streamlined processing of arXiv papers within the RAG pipeline. Docs, Tweet.
  3. Hugging Face Text Embeddings Inference: We integrated with Hugging Face’s new text-embeddings-inference server, which offers production-scale serving with distributed tracing for BERT-family models at impressive speeds (sketch after this list). Docs, Tweet.
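
For the AssemblyAI integration, here is a minimal sketch, assuming the LlamaHub loader name below, an ASSEMBLYAI_API_KEY environment variable, and a placeholder audio URL.

```python
# Hypothetical audio-to-RAG sketch; loader name and file_path argument assumed.
from llama_index import VectorStoreIndex, download_loader

AssemblyAIAudioTranscriptReader = download_loader("AssemblyAIAudioTranscriptReader")

reader = AssemblyAIAudioTranscriptReader(
    file_path="https://example.com/podcast_episode.mp3"  # local path or URL
)
docs = reader.load_data()  # transcribes the audio and returns Document objects

index = VectorStoreIndex.from_documents(docs)
print(index.as_query_engine().query("What topics were discussed?"))
```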
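
For the text-embeddings-inference integration, here is a minimal sketch, assuming the TextEmbeddingsInference class name and parameters below and a TEI server already running locally on port 8080 with a BGE model loaded.

```python
# Hypothetical TEI-backed embedding setup; class and parameter names assumed.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings import TextEmbeddingsInference

embed_model = TextEmbeddingsInference(
    model_name="BAAI/bge-large-en-v1.5",  # must match the model the server loaded
    base_url="http://127.0.0.1:8080",
    embed_batch_size=32,
)
service_context = ServiceContext.from_defaults(embed_model=embed_model)

docs = SimpleDirectoryReader("./data").load_data()  # placeholder corpus
index = VectorStoreIndex.from_documents(docs, service_context=service_context)
print(index.as_query_engine().query("Give me a one-paragraph summary."))
```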

🎥 Webinars and Podcast:

  1. Webinar with Timescale on Time-based retrieval for RAG.
  2. Webinar with Omar Khattab and Thomas Joshi on DSPy — a framework for LLMs that emphasizes programming over prompting.
  3. Jerry Liu’s podcast with Latent Space on LlamaIndex’s origin story, fine-tuning, and more.