fix(deps): update dependency llama-index to ^0.13.0 #2414
This PR contains the following updates:

| Package | Change |
| --- | --- |
| llama-index | `^0.12.6` -> `^0.13.0` |

### Release Notes

run-llama/llama_index (llama-index)
#### v0.13.0

Compare Source

**NOTE**: All packages have been bumped to handle the latest llama-index-core version.

- `llama-index-core` [0.13.0]
  - Removed the deprecated `FunctionCallingAgent`, the older `ReActAgent` implementation, `AgentRunner`, all step workers, `StructuredAgentPlanner`, `OpenAIAgent`, and more. All users should migrate to the new workflow based agents: `FunctionAgent`, `CodeActAgent`, `ReActAgent`, and `AgentWorkflow` (#19529)
  - Removed the `QueryPipeline` class and all associated code (#19554)
  - Changed `index.as_chat_engine()` to return a `CondensePlusContextChatEngine`. Agent-based chat engines (the previous default) have been removed; if you need an agent, use the agent classes mentioned above. (#19529)
- `llama-index-embeddings-mixedbreadai` [0.5.0]
- `llama-index-instrumentation` [0.4.0]
- `llama-index-llms-google-genai` [0.3.0]
- `llama-index-llms-nvidia` [0.4.0]
- `llama-index-llms-upstage` [0.6.0]
- `llama-index-postprocessor-mixedbreadai-rerank` [0.5.0]
- `llama-index-readers-github` [0.8.0]
- `llama-index-readers-s3` [0.5.0]
  - `client_kwargs` in S3Reader (#19546)
- `llama-index-tools-valyu` [0.4.0]
- `llama-index-voice-agents-gemini-live` [0.2.0]
- `llama-index-vector-stores-astradb` [0.5.0]
- `llama-index-vector-stores-milvus` [0.9.0]
- `llama-index-vector-stores-s3` [0.2.0]
- `llama-index-vector-stores-postgres` [0.6.0]

#### v0.12.52
Compare Source

- `llama-index-core` [0.12.52.post1]
- `llama-index-core` [0.12.52]
  - `stores_text=True/False` (#19501)
- `llama-index-indices-managed-bge-m3` [0.5.0]
- `llama-index-readers-web` [0.4.5]
- `llama-index-tools-jira-issue` [0.1.0]
- `llama-index-vector-stores-azureaisearch` [0.3.10]
  - Pass `**kwargs` into AzureAISearchVectorStore super init (#19500)
- `llama-index-vector-stores-neo4jvector` [0.4.1]

#### v0.12.51
Compare Source

- `llama-index-core` [0.12.51]
- `llama-index-readers-azstorage-blob` [0.3.2]

#### v0.12.50
Compare Source

- `llama-index-core` [0.12.50]
  - Made the `get_cache_dir()` function more secure by changing its default location (#19415)
  - `session_id` metadata_filter in VectorMemoryBlock (#19427)
- `llama-index-instrumentation` [0.3.0]
- `llama-index-llms-bedrock-converse` [0.7.6]
- `llama-index-llms-cloudflare-ai-gateway` [0.1.0]
- `llama-index-llms-google-genai` [0.2.5]
  - Added `google_search` Tool Support to GoogleGenAI LLM Integration (#19406)
- `llama-index-readers-confluence` [0.3.2]
- `llama-index-readers-service-now` [0.1.0]
- `llama-index-protocols-ag-ui` [0.1.4]
- `llama-index-tools-wikipedia` [0.3.1]
  - `WikipediaToolSpec.load_data` tool (#19464)
- `llama-index-vector-stores-baiduvectordb` [0.3.1]
  - Pass `**kwargs` to `super().__init__` in BaiduVectorDB (#19436)
- `llama-index-vector-stores-moorcheh` [0.1.1]
- `llama-index-vector-stores-s3` [0.1.0]

#### v0.12.49
Compare Source

- `llama-index-core` [0.12.49]
- `llama-index-llms-bedrock-converse` [0.7.5]
- `llama-index-llms-nvidia` [0.3.5]
- `llama-index-llms-oci-genai` [0.5.2]
- `llama-index-postprocessor-flashrank-rerank` [0.1.0]
- `llama-index-readers-web` [0.4.4]
- `llama-index-storage-docstore-duckdb` [0.1.0] (#19282)
- `llama-index-storage-index-store-duckdb` [0.1.0] (#19282)
- `llama-index-storage-kvstore-duckdb` [0.1.3] (#19282)
- `llama-index-vector-stores-duckdb` [0.4.6] (#19383)
- `llama-index-vector-stores-moorcheh` [0.1.0]

#### v0.12.48
Compare Source

- `llama-index-core` [0.12.48]
- `llama-index-embeddings-huggingface-optimum-intel` [0.3.1]
- `llama-index-indices-managed-lancedb` [0.1.0]
- `llama-index-indices-managed-llamacloud` [0.7.10]
- `llama-index-llms-google-genai` [0.2.4]
- `llama-index-llms-oci-genai` [0.5.1]
- `llama-index-readers-file` [0.4.11]
- `llama-index-storage-chat-stores-postgres` [0.2.2]

#### v0.12.47
Compare Source

- `llama-index-core` [0.12.47]
  - Added a default `max_iterations` arg to `.run()` of 20 for agents (#19035)
  - Set `tool_required` to `True` for `FunctionCallingProgram` and structured LLMs where supported (#19326)
  - Deprecated multi-modal LLM classes in favor of their `LLM` counterpart
  - `LLM` classes support multi-modal features in `llama-index-core`
  - `LLM` classes use `ImageBlock` internally to support multi-modal features
- `llama-index-cli` [0.4.4]
- `llama-index-indices-managed-lancedb` [0.1.0]
- `llama-index-llms-anthropic` [0.7.6]
- `llama-index-llms-oci-genai` [0.5.1]
- `llama-index-readers-web` [0.4.3]

#### v0.12.46
Compare Source

- `llama-index-core` [0.12.46]
- `llama-index-embeddings-google-genai` [0.2.1]
- `llama-index-embeddings-nvidia` [0.3.4]
- `llama-index-llms-google-genai` [0.2.3]
- `llama-index-tools-mcp` [0.2.6]
- `llama-index-voice-agents-elevenlabs` [0.3.0-beta]

#### v0.12.45
Compare Source

- `llama-index-core` [0.12.45]
  - `Node` from ingestion cache (#19279)
  - Replaced `get_doc_id()` with `id_` in base index (#19266)
- `llama-index-llms-anthropic` [0.7.5]
- `llama-index-embeddings-azure-openai` [0.3.9]
- `llama-index-embeddings-bedrock` [0.5.2]
- `llama-index-llms-bedrock-converse` [0.7.4]
- `llama-index-llms-dashscope` [0.4.1]
  - `tool_calls` info moved from ChatMessage kwargs to top level (#19224)
- `llama-index-memory-mem0` [0.3.2]
- `llama-index-tools-google` [0.5.0]
- `llama-index-vector-stores-postgres` [0.5.4]

#### v0.12.44
Compare Source

- `llama-index-core` [0.12.44]
  - Added `CachePoint` content block for caching chat messages (#19193)
- `llama-index-embeddings-fastembed` [0.3.5]
- `llama-index-embeddings-huggingface` [0.5.5]
  - `asyncio.to_thread` (#19207)
- `llama-index-llms-anthropic` [0.7.4]
- `llama-index-llms-google-genai` [0.2.2]
- `llama-index-llms-mistralai` [0.6.1]
- `llama-index-llms-perplexity` [0.3.7]
  - `PPLX_API_KEY` in perplexity llm integration (#19217)
- `llama-index-postprocessor-bedrock-rerank` [0.4.0]
- `llama-index-postprocessor-sbert-rerank` [0.3.2]
  - Added `cross_encoder_kwargs` parameter for advanced configuration (#19148)
- `llama-index-utils-workflow` [0.3.5]
- `llama-index-vector-stores-azureaisearch` [0.3.8]
- `llama-index-vector-stores-db2` [0.1.0]
- `llama-index-vector-stores-duckdb` [0.4.0]
- `llama-index-vector-stores-pinecone` [0.6.0]
  - `>=3.9,<4.0` for `llama-index-vector-stores-pinecone` (#19186)
- `llama-index-vector-stores-qdrant` [0.6.1]
- `llama-index-voice-agents-openai` [0.1.1-beta]

#### v0.12.43
Compare Source

- `llama-index-core` [0.12.43]
  - `get_tqdm_iterable` in SimpleDirectoryReader (#18722)
  - Use the `llama-index-workflows` package while keeping backward compatibility (#19043)
  - Use the `llama-index-instrumentation` package (#19062)
- `llama-index-llms-bedrock-converse` [0.7.2]
- `llama-index-llms-openai` [0.4.7]
- `llama-index-llms-perplexity` [0.3.6]
- `llama-index-postprocessor-sbert-rerank` [0.3.1]
- `llama-index-protocols-ag-ui` [0.1.2]
  - `ag-ui` protocol support (#19104, #19103, #19102, #18898)
- `llama-index-readers-google` [0.6.2]
- `llama-index-readers-hive` [0.3.1]
- `llama-index-readers-mongodb` [0.3.2]
  - `alazy_load_data` for mongodb reader (#19038)
- `llama-index-storage-chat-store-sqlite` [0.1.1]
- `llama-index-tools-hive` [0.1.0]
- `llama-index-utils-workflow` [0.3.4]
- `llama-index-vector-stores-lancedb` [0.3.3]
- `llama-index-vector-stores-milvus` [0.8.5]
  - Fixed `Connections.connect()` got multiple values for argument `alias` (#19119)
- `llama-index-vector-stores-opengauss` [0.1.0]

### Configuration
📅 **Schedule**: Branch creation - "every weekend" in timezone US/Eastern, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.
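The schedule, automerge, and rebasing behavior above correspond to standard Renovate configuration options. A hypothetical `renovate.json` sketch producing this behavior (the repository's actual configuration is not shown in this PR and may live in a shared preset):

```json
{
  "extends": ["config:recommended"],
  "timezone": "US/Eastern",
  "schedule": ["every weekend"],
  "automerge": true,
  "rebaseWhen": "behind-base-branch"
}
```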
This PR was generated by Mend Renovate. View the repository job log.