Releases: run-llama/create-llama
v0.1.8
Patch Changes
- cd50a33: Add interpreter tool for TS using e2b.dev
v0.1.7
Patch Changes
- 260d37a: Add system prompt env variable for TS
- bbd5b8d: Fix postgres connection leaking issue
- bb53425: Support HTTP proxies by setting the `GLOBAL_AGENT_HTTP_PROXY` env variable
- 69c2e16: Fix streaming for Express
- 7873bfb: Update Ollama provider to run with the base URL from the environment variable
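The proxy and Ollama entries above are both driven by environment variables. A minimal sketch of the relevant configuration; `GLOBAL_AGENT_HTTP_PROXY` comes from the release notes (it is the variable read by the `global-agent` package), while the Ollama variable name `OLLAMA_BASE_URL` and both URLs are illustrative assumptions, not stated in the notes:

```shell
# Route outbound HTTP through a corporate proxy (variable read by global-agent).
# The proxy address below is a placeholder.
export GLOBAL_AGENT_HTTP_PROXY=http://proxy.example.com:8080

# Point the Ollama provider at a non-default host.
# Variable name and URL are assumptions for illustration.
export OLLAMA_BASE_URL=http://localhost:11434
```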
v0.1.6
Patch Changes
- 56537a1: Display PDF files in source nodes
v0.1.5
Patch Changes
- 84db798: feat: support displaying LaTeX in chat markdown
v0.1.4
Patch Changes
- 0bc8e75: Use ingestion pipeline for dedicated vector stores (Python only)
- cb1001d: Add ChromaDB vector store
v0.1.3
Patch Changes
- 416073d: Directly import vector stores to work with NextJS
v0.1.2
Patch Changes
- 056e376: Add support for displaying tool outputs (including weather widget as example)
v0.1.1
Patch Changes
- 7bd3ed5: Support Anthropic and Gemini as model providers
- 7bd3ed5: Support new agents from LITS 0.3
- cfb5257: Display events (e.g. retrieving nodes) per chat message
v0.1.0
Minor Changes
- f1c3e8d: Add Llama3 and Phi3 support using Ollama
Patch Changes
- a0dec80: Use `gpt-4-turbo` model as default. Upgrade Python llama-index to 0.10.28
- 753229d: Remove asking for AI models and use defaults instead (OpenAI's GPT-4 Vision Preview and Embeddings v3). Use the `--ask-models` CLI parameter to select models.
- 1d78202: Add observability for Python
- 6acccd2: Use `poetry run generate` to generate embeddings for FastAPI
- 9efcffe: Use Settings object for LlamaIndex configuration
- 418bf9b: refactor: use `tsx` instead of `ts-node`
- 1be69a5: Add Qdrant support