Releases: run-llama/llama_index

v0.10.10

21 Feb 04:42
2e7e55b

v0.10.8

20 Feb 16:30
1cffd25

v0.10.7

19 Feb 17:47
baf973f

New Features

  • Added Self-Discover llamapack (#10951)

Bug Fixes / Nits

  • Fixed linting in CI/CD (#10945)
  • Fixed using remote graph stores (#10971)
  • Added missing LLM kwarg in the NoText response synthesizer (#10971)
  • Fixed the openai import in RankGPT (#10971)
  • Fixed resolving the model name to a string in OpenAI embeddings (#10971)
  • Fixed an off-by-one error in the sentence window node parser (#10971)

v0.10.6

18 Feb 01:37
8b8199f

[0.10.6] - 2024-02-17

First, apologies for missing the changelog for the last few versions. We're still figuring out the best process with 400+ packages.

At some point, each package will have a dedicated changelog.

But for now, onto the "master" changelog.

New Features

  • Added NomicHFEmbedding (#10762)
  • Added MinioReader (#10744)

Bug Fixes / Nits

  • Various fixes for the ClickHouse vector store (#10799)
  • Fixed the index name in the Neo4j vector store (#10749)
  • Fixed issues with SageMaker embeddings (#10778)
  • Fixed performance issues when splitting nodes (#10766)
  • Fixed non-float values in reranker + BM25 (#10930)
  • Made the OpenAI agent package a dependency of the OpenAI program package (#10930)
  • Added missing shortcut imports for query pipeline components (#10930)
  • Fixed NLTK and tiktoken not being bundled properly with core (#10930)
  • Added back llama_index.core.__version__ (#10930)

v0.10.5

16 Feb 03:17
edd941d

v0.10.3

13 Feb 16:25
5c1f056

v0.10.1 (and v0.10.0)

12 Feb 19:20
50c8e15

Today we’re excited to launch LlamaIndex v0.10.0. It is by far the biggest update to our Python package to date (see this gargantuan PR), and it takes a massive step towards making LlamaIndex a next-generation, production-ready data framework for your LLM applications.

LlamaIndex v0.10 contains some major updates:

  • We have created a llama-index-core package and split all integrations and templates into separate packages: Hundreds of integrations (LLMs, embeddings, vector stores, data loaders, callbacks, agent tools, and more) are now versioned and packaged as separate PyPI packages, while preserving namespace imports; for example, you can still use from llama_index.llms.openai import OpenAI for an LLM (see the short sketch after this list).
  • LlamaHub will be the central hub for all integrations: The former llama-hub repo itself is consolidated into the main llama_index repo. Instead of integrations being split between the core library and LlamaHub, every integration will be listed on LlamaHub. We are actively working on updating the site; stay tuned!
  • ServiceContext is deprecated: Every LlamaIndex user is familiar with ServiceContext, which over time has become a clunky, unneeded abstraction for managing LLMs, embeddings, chunk sizes, callbacks, and more. As a result, we are deprecating it entirely; you can now either specify arguments directly or set a default.
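
As a rough sketch of what this looks like in practice (the extra package name below is assumed from the new per-integration naming convention, and the global Settings object is one way to set a default in v0.10; defer to the installation and migration guides linked below for exact details):

    # Each integration is now its own PyPI package installed alongside the core:
    #   pip install llama-index-core llama-index-llms-openai
    # (package names here are assumptions based on the naming convention)

    # Namespace imports are preserved:
    from llama_index.llms.openai import OpenAI

    # With ServiceContext deprecated, pass components directly as arguments,
    # or set a global default, e.g. via the Settings object:
    from llama_index.core import Settings

    Settings.llm = OpenAI(model="gpt-3.5-turbo")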

Upgrading your codebase to LlamaIndex v0.10 may lead to some breakages, primarily around our integrations/packaging changes, but fortunately we’ve included some scripts to make it as easy as possible to migrate your codebase to use LlamaIndex v0.10.
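
As a rough sketch, those scripts are run from the command line; the exact command shown here is illustrative rather than authoritative, so confirm the usage in the migration guide linked below:

    # Hypothetical invocation of the bundled upgrade helper (name assumed);
    # it rewrites old llama_index imports to the new per-package namespaces.
    llamaindex-cli upgrade ./my_project
    llamaindex-cli upgrade-file ./my_script.py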

Full Blog Post

v0.10 Documentation

v0.10 Installation Guide

v0.10 Quickstart

Updated Contribution Guide

Temporary v0.10 Package Registry

v0.10 Migration Guide

v0.9.48

12 Feb 15:41
4187950

v0.9.47

11 Feb 19:55

v0.9.46

08 Feb 22:12
e5b163d