Ravi Theja Sep 6, 2023

LlamaIndex Update — 09/03/2023

Hello LlamaIndex Community!

We’re thrilled to bring you the latest edition of our LlamaIndex Update series. Whether you’ve been a part of our journey from the start or have just recently joined us, your engagement and input are invaluable to us.

In this update, we’re excited to unveil some significant advancements. We’ve got comprehensive updates on new features for both the Python and TypeScript versions of LlamaIndex. In addition, we’re offering some expert insights on RAG tips that you won’t want to miss. To keep you ahead of the curve, we’ve also curated a selection of webinars, tutorials, events, and demos.

So without further ado, let’s delve into the latest developments.

New Features (Python):


  1. LlamaIndex introduces the Sweep AI code splitter for RAG apps, addressing the challenges of traditional code splitting. This tool features recursive splitting combined with CSTs across 100+ languages, enhancing the LlamaIndex experience. BlogPost, Tweet.
  2. LlamaIndex now supports streaming data ETL, enhancing structured data extraction with the OpenAI Function API. By inputting a Pydantic object class in LlamaIndex, users can receive streamed data objects from OpenAI individually. Docs, Tweet.
  3. LlamaIndex has teamed up with Neo4j to amplify knowledge graph capabilities with LLMs. This integration not only allows storing any knowledge graph created in LlamaIndex directly in Neo4j but also introduces a specialized text-to-Cypher prompt for Neo4j users. Docs, Tweet.
  4. LlamaIndex, in collaboration with Mendable AI and Nomic AI, unveils a Nomic Atlas visual map detailing user questions from the Mendable AI bot. This innovative tool groups similar questions, providing insights for improved app deployment, prompt control, language support, and documentation. New users can find the helpful Mendable AI bot on LlamaIndex’s documentation site. Tweet.
  5. LlamaIndex, in collaboration with Predibase, offers an optimal way to operationalize LLMs. Experience top-tier RAG by privately hosting open-source LLMs on managed infrastructure right within your VPC. Docs, Tweet.
  6. The LlamaIndex playground app enhances the RAG experience. Updates include new Temperature and Top P options, along with intuitive tooltips offering plain language explanations.
  7. LlamaIndex Tip💡: Boost your RAG systems by adding structured data to raw text. This allows for easier metadata filtering and optimal embedding biases. Dive into our guide on harnessing the HuggingFace span marker for targeted entity extraction. Docs, Tweet.
  8. LlamaIndex now has the Semantic Scholar Loader. With it, users can swiftly set up citation-based Q&A systems. Docs, Tweet.
  9. LlamaIndex highlights the significance of text chunk size in LLM QA systems. To determine the best chunk size without human intervention, we suggest ensembling different sizes and using a reranker for context relevance during queries. This method involves simultaneous queries across retrievers of various sizes and consolidating results for reranking. Though experimental, this approach aims to discern the optimal chunk size strategy. Docs, Tweet.
  10. LlamaIndex’s customer support bot seamlessly interfaces with Shopify’s 50k-line GraphQL API Spec. Through smart tools and LlamaIndex features, it offers quick insights like refunded orders despite the vast spec size. Efficient indexing ensures precise user query responses. Docs, Tweet.
  11. LlamaIndex’s integration with Xinference enables users to effortlessly extend models like Llama 2, ChatGLM, and Vicuna to incorporate RAG and agents. Docs, Tweet.
  12. LlamaIndex introduces One-click Observability. With just a single code line, integrate LlamaIndex with advanced observability tools from partners like Weights & Biases, ArizeAI, and TruEra, simplifying LLM app debugging for production. Docs, Tweet.
  13. LlamaIndex has updated the LLM default temperature value to 0.1. Tweet.
  14. LlamaIndex now integrates with Zep, enhancing the memory layer of LLM apps. It’s not just about storage: Zep also enriches data with summaries, metadata, and more. BlogPost, Tweet.
  15. LlamaIndex has revamped its defaults! Now, gpt-3.5-turbo is the go-to LLM, with enhanced prompts and a superior text splitter. Additionally, if OpenAI’s key isn’t set, it has backup options with llama.cpp. New embedding features have also been added. Tweet.
  16. LlamaIndex now seamlessly integrates with FastChat by lmsysorg, letting you serve LLMs like Vicuna and Llama 2 as an alternative to OpenAI. Tweet.
  17. LlamaIndex provides a seamless integration with Azure AI Services. Dive into a richer ecosystem of AI tools spanning Computer Vision, Translation, and Speech, enhancing your multi-modal AI interactions. Docs1, Docs2, Docs3, Tweet.
  18. LlamaIndex unveils Graph RAG — an approach to enhance LLMs with context from graph databases. Extract valuable subgraphs from any knowledge graph for superior question-answering capabilities. Docs, Tweet.
  19. LlamaIndex has expanded native async support, enhancing the scalability of full-stack LLM apps. We now offer async agents, tool execution, and callback support, and have introduced async methods in vector stores. Tweet.
  20. LlamaIndex enhances debugging with data agent trace observability. Additionally, system prompts can now be added to any query engine and we have begun the transition of LLM and embedding modules to Pydantic. Docs, Tweet.
  21. LlamaIndex’s Recursive Document Agents enhance RAG by retrieving based on summaries and adjusting chunk retrieval per need. This boosts querying across varied documents, offering both question-answering and summarization within a document. Docs, Tweet.
  22. LlamaIndex integrates with Metaphor to supercharge data agents. This integration offers a specialized search engine tailored for LLMs, allowing dynamic data lookup beyond just RAG, and answering a broader range of questions. BlogPost, Tweet.
  23. LlamaIndex now supports integration with OpenAI’s fine-tuned models via their new endpoint. Seamlessly integrate these models into your RAG pipeline. Docs, Tweet.
  24. LlamaIndex introduces the OpenAIFineTuningHandler to streamline data collection for fine-tuning gpt-3.5-turbo with GPT-4 outputs. Run RAG with GPT-4 and effortlessly generate a dataset to train a more cost-effective model. Notebook, Tweet.
  25. LlamaIndex presents the Principled Development Practices guide, detailing best practices for LLM app development: Observability, Evaluation, and Monitoring. Docs, Tweet.
  26. LlamaIndex introduces a refined Prompt system. With just three core classes: PromptTemplate, ChatPromptTemplate, and SelectorPromptTemplate, users can effortlessly format as chat messages or text and tailor prompts based on model conditions. Docs, Tweet.
  27. LlamaIndex delves into “chunk dreaming,” a concept inspired by Thomas H. Chapin IV. By auto-extracting metadata from a text chunk, it can identify potential questions and provide summaries over neighboring nodes. This enriched context boosts RAG’s performance. Docs, Tweet.
  28. LlamaIndex is integrated with BagelDB, enabling developers to effortlessly tap into vector data stored on BagelDB. Tweet.
  29. LlamaIndex now lets the LLM choose between vector search for semantic queries or our BM25 retriever for keyword-specific ones. Docs, Tweet.
  30. LlamaIndex introduces the AutoMergingRetriever, crafted with insights from Jason and ChatGPT. This technique fetches precise context chunks and seamlessly merges them, optimizing LLM responses. Using the HierarchicalNodeParser, we ensure interconnected chunks for enhanced context clarity. Docs, Tweet.
  31. LlamaIndex introduces embedding finetuning for optimized retrieval performance. Beyond enhancing RAG, we’ve simplified retrieval evaluations with automatic QA dataset generation from text, streamlining both finetuning and evaluation processes. Docs, Tweet.
  32. LlamaIndex now integrates directly with Airbyte sources including Gong, Hubspot, Salesforce, Shopify, Stripe, Typeform, and Zendesk Support. Easily enhance your LlamaIndex application with these platforms implemented as data loaders. BlogPost, Tweet.
  33. LlamaIndex integrates with DeepEval, a comprehensive library to evaluate LLM and RAG apps. Assess on four key metrics: Relevance, Factual Consistency, Answer Similarity, and Bias/Toxicity. Docs, Tweet.
  34. LlamaIndex recommends evaluating LLM + RAG step-by-step, especially retrieval. Create synthetic retrieval datasets from text chunks using LLMs. This method not only evaluates retrieval but also fine-tunes embeddings. Docs, Tweet.
  35. LlamaIndex unveils a managed index abstraction simplifying RAG’s ingestion and storage processes with Vectara. Docs, Tweet.
  36. LlamaIndex has significantly enhanced its callback handling support, encompassing features like tracebacks, LLM token counts, templates, and detailed agent tool information. These advancements pave the way for smoother integrations with evaluation and observability applications. Tweet.
  37. LlamaIndex has integrated with AskMarvinAI, enabling automated metadata extraction from text corpora. Just annotate a Pydantic model and effortlessly log metadata from all associated text chunks. Docs, Tweet.
  38. LlamaIndex is integrated with RunGPT by JinaAI, an outstanding framework for one-click deployment of various open-source models such as Llama, Vicuna, Pythia, and more. Coupled with LlamaIndex’s innate chat/streaming capabilities, users can now deploy and utilize powerhouse models like Llama-7B seamlessly. Docs, Tweet.
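The chunk-size ensembling idea in item 9 can be sketched in plain Python. Everything below (the toy retrievers and the overlap "reranker") is an illustrative stand-in, not the actual LlamaIndex API:

```python
# Sketch of ensembling retrievers built over different chunk sizes,
# then reranking the pooled candidates. The retrievers and scorer
# are toy stand-ins, not the real LlamaIndex classes.

def make_retriever(chunks):
    """A 'retriever' that scores chunks by naive term overlap."""
    def retrieve(query, top_k=2):
        q = set(query.lower().split())
        scored = [(c, len(q & set(c.lower().split()))) for c in chunks]
        scored.sort(key=lambda x: x[1], reverse=True)
        return scored[:top_k]
    return retrieve

def ensemble_retrieve(query, retrievers, top_k=3):
    """Query every retriever, pool the candidates, rerank, dedupe."""
    pooled = []
    for retrieve in retrievers:
        pooled.extend(retrieve(query))
    # "Rerank": here just re-sort by the overlap score; a real system
    # would call a cross-encoder or LLM-based reranker instead.
    pooled.sort(key=lambda x: x[1], reverse=True)
    seen, results = set(), []
    for chunk, score in pooled:
        if chunk not in seen:
            seen.add(chunk)
            results.append(chunk)
    return results[:top_k]

# Two "indexes" of the same text, split at different granularities.
small_chunks = ["llama index supports retrieval", "chunk size matters for QA"]
large_chunks = ["llama index supports retrieval and chunk size matters for QA systems"]

retrievers = [make_retriever(small_chunks), make_retriever(large_chunks)]
print(ensemble_retrieve("which chunk size for QA", retrievers))
```

The key point is that candidates from every chunk size compete in a single reranked pool, so the best granularity for a given query wins without anyone picking a chunk size up front.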
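The auto-merging behavior in item 30 can be sketched as: if enough children of a parent chunk are retrieved, swap them for the parent. The dict-based node model and the 0.5 threshold below are illustrative assumptions, not the HierarchicalNodeParser's actual data model:

```python
# Sketch of auto-merging retrieval: when a majority of a parent chunk's
# children are retrieved, replace them with the parent for fuller context.
# The dict-based node model and threshold are illustrative only.

parents = {
    "p1": {"text": "full section about indexing", "children": ["c1", "c2", "c3"]},
}
children = {
    "c1": {"text": "indexing part 1", "parent": "p1"},
    "c2": {"text": "indexing part 2", "parent": "p1"},
    "c3": {"text": "indexing part 3", "parent": "p1"},
}

def auto_merge(retrieved_ids, threshold=0.5):
    """Merge retrieved child chunks into their parent when enough are present."""
    merged, used = [], set()
    for pid, parent in parents.items():
        hit = [c for c in parent["children"] if c in retrieved_ids]
        if len(hit) / len(parent["children"]) > threshold:
            merged.append(parent["text"])  # promote the parent chunk
            used.update(hit)
    # Keep any retrieved children that were not merged away.
    merged.extend(children[c]["text"] for c in retrieved_ids if c not in used)
    return merged

print(auto_merge({"c1", "c2"}))  # 2/3 children hit -> the parent is returned
```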
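Item 34's evaluation recipe can also be sketched end to end: pair each chunk with a synthetic question (a trivial template here stands in for the LLM), then score a retriever by hit rate. Every name below is hypothetical:

```python
# Sketch of retrieval evaluation with a synthetic dataset.
# A real pipeline would ask an LLM to write a question per chunk;
# here a template stands in so the example is self-contained.

chunks = {
    "n1": "LlamaIndex supports vector stores",
    "n2": "Embedding finetuning improves retrieval",
}

# 1. Build (question -> expected chunk id) pairs from the corpus.
dataset = {f"What does this say: {text}?": node_id
           for node_id, text in chunks.items()}

# 2. A toy retriever: return the chunk sharing the most words with the query.
def retrieve(query):
    q = set(query.lower().split())
    return max(chunks, key=lambda n: len(q & set(chunks[n].lower().split())))

# 3. Hit rate: fraction of questions whose expected chunk is retrieved.
def hit_rate(dataset):
    hits = sum(retrieve(q) == expected for q, expected in dataset.items())
    return hits / len(dataset)

print(hit_rate(dataset))  # -> 1.0 on this toy corpus
```

Because the expected chunk is known for every synthetic question, the same dataset doubles as training data for embedding finetuning, which is exactly the dual use item 34 describes.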


LlamaIndex.TS (LITS) Features:

  1. LITS has full Azure OpenAI integration. Tweet.
  2. LITS has enhanced Llama 2 support, a new default temperature (0.1), and GPT chat integration. Tweet.
  3. LITS lets you call fromDocuments without manual duplicate checks, thanks to automatic SHA256 comparison. Tweet.
  4. LITS now supports OpenAI v4, Anthropic 0.6, and Replicate 0.16.1, adds a CSV loader, and merges NodeWithEmbeddings and BaseNode. Tweet.
  5. LITS now supports the PapaCSVLoader for math. Tweet.
  6. LITS is now integrated with LiteLLM. Tweet.
  7. LITS now has additional session options for proxy server support, and the default OpenAI timeout has been reset to 60 seconds. Tweet.
  8. LITS now has Pinecone integration. Tweet.
  9. LITS has optimized ChatGPT prompts, fixed metadata rehydration issues, and upgraded to OpenAI Node v4.1.0 with fine-tuned model support. Tweet.
  10. LITS has introduced enhanced text-splitting features, including a specialized tokenizer for Chinese, Japanese, and Korean, and refinements to the SentenceSplitter for handling decimal numbers. Tweet.
  11. LITS has a Markdown loader and metadata support in the response synthesizer. Tweet.
  12. LITS has revamped usability: ListIndex is now SummaryIndex for clarity, and prompts have been made typed and customizable to enhance user control and experience. Tweet.
  13. LITS has a Notion Reader. Now, users can effortlessly import their documents directly into their RAG or Data Agent application. Tweet.
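The hash-based skip in LITS item 3 can be sketched as follows (in Python here, to keep one language across these examples; the real feature lives in the TypeScript fromDocuments path):

```python
# Sketch of ingestion dedup via content hashing: a document whose
# SHA256 digest already appears in the store is skipped instead of
# being chunked, embedded, and indexed again.
import hashlib

store = {}  # sha256 hex digest -> document text

def ingest(doc: str) -> bool:
    """Index doc if unseen; return True when work was actually done."""
    digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
    if digest in store:
        return False          # identical content already ingested
    store[digest] = doc       # a real system would chunk + embed here
    return True

print(ingest("hello world"))  # True: first time seen
print(ingest("hello world"))  # False: duplicate skipped
```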

RAG Tips:

LlamaIndex shares four tactics to boost your RAG pipeline:

1️⃣ Use summaries for retrieval, and a broader context for synthesis.

2️⃣ Use metadata for structured retrieval over large docs.

3️⃣ Deploy LLMs for dynamic retrieval based on tasks.

4️⃣ Fine-tune embeddings for better retrieval.
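Tip 1 can be sketched as a two-level lookup: match the query against short summaries, then hand the full source text to the synthesis step. The documents and the overlap matcher below are illustrative stand-ins:

```python
# Sketch of tip 1: retrieve against compact summaries, but pass the
# broader source text to the LLM for synthesis. The matcher is a toy.

docs = {
    "report": {
        "summary": "quarterly revenue figures",
        "full_text": "Full report... revenue grew 12% quarter over quarter ...",
    },
    "handbook": {
        "summary": "employee onboarding policies",
        "full_text": "Full handbook... onboarding takes two weeks ...",
    },
}

def retrieve_for_synthesis(query):
    """Pick the doc whose *summary* best matches, return its *full* text."""
    q = set(query.lower().split())
    best = max(docs, key=lambda d: len(q & set(docs[d]["summary"].lower().split())))
    return docs[best]["full_text"]  # broader context goes to the LLM

print(retrieve_for_synthesis("what were the revenue figures?"))
```

Summaries keep the retrieval step cheap and precise, while synthesis still sees the full surrounding context rather than an isolated chunk.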


Community Tutorials:

  1. Jason's tutorial on adding Image Responses to GPT knowledge retrieval apps.
  2. Wenqi Glantz's tutorial on Building Production-Ready LLM Apps with LlamaIndex: Document Metadata for Higher Accuracy Retrieval.
  3. Streamlit tutorial on Building a chatbot with custom data sources, powered by LlamaIndex.
  4. Wenqi Glantz's tutorial on Building Production-Ready LLM Apps With LlamaIndex: Recursive Document Agents for Dynamic Retrieval.
  5. Erika Cardenas covers the usage of LlamaIndex in building a RAG app.
  6. Argilla blog post on Fine-tuning and evaluating GPT-3.5 with human feedback for RAG using LlamaIndex.
  7. KDnuggets blog post on Building Your Own PandasAI with LlamaIndex.

From the LlamaIndex team:

  1. Jerry Liu’s tutorial on fine-tuning Llama 2 for Text-to-SQL Applications.
  2. Jerry Liu's tutorial on Fine-Tuning Embeddings for RAG with Synthetic Data.
  3. Ravi Theja’s tutorial on combining Text2SQL and RAG with LlamaIndex to analyze product reviews.
  4. Ravi Theja’s tutorial on the different Indices, Storage Context, and Service Context of LlamaIndex.
  5. Ravi Theja’s tutorial on Custom Retrievers and Hybrid Search in LlamaIndex.
  6. Adam's tutorial on Introduction to Data Agents for Developers.
  7. Ravi Theja’s tutorial on creating Automatic Knowledge Transfer (KT) Generation for Code Bases using LlamaIndex.


Webinars:

  1. Webinar with members from Docugami on Document Metadata and Local Models for Better, Faster Retrieval.
  2. Webinar with Shaun and Piaoyang on building Personalized AI Characters with RealChar.
  3. Webinar with Bob (Weaviate), Max (sid.ai), and Tuana (Haystack) on making RAG Production-Ready.
  4. Workshop by Wey Gu on Building RAG with Knowledge Graphs.
  5. Webinar with Jo Bergum and Shishir Patil on fine-tuning and RAG.


Events:

  1. Jerry Liu spoke about LlamaIndex at the NYSE Floor Talk.
  2. Ravi Theja spoke about LlamaIndex at the Fifth Elephant conference in Bengaluru, India.
  3. Ravi Theja conducted a workshop on LlamaIndex in Bengaluru, India.

Demos And Papers:

  1. The paper titled Performance of ChatGPT, human radiologists, and context-aware ChatGPT in identifying AO codes from radiology reports is an intriguing piece of medical research. It leverages both LlamaIndex and ChatGPT to pinpoint AO codes within radiology reports, enhancing fracture classification. A fantastic fusion of tech and medicine!
  2. SEC Insights AI, which performs SEC document analysis using LlamaIndex, is on Product Hunt as the 5th product of the day.
  3. RentEarth: an agent to build your own startup with an amazing 3D interface and LlamaIndex.

In wrapping up this edition of our LlamaIndex Update series, we’re reminded of the power of collaboration and innovation. From new features to integrations and tutorials, our mission to revolutionize the AI realm marches forward. To every member of our community, thank you for your unwavering support and enthusiasm. Let’s continue to elevate the world of AI together!