We are excited to announce the belated release of Beginning Spring AI. The book is intended as a first look at using the Spring AI library to leverage the world of AI that is upon us. It is fascinating how fast the technology moves, with new models and new features released constantly.
Buy the book on Amazon and use coupon code DISCOVER20 to get 20% off the book through October!
It looks like we unfortunately missed the impending 1.0.0.GA by a few months! The code and examples in the book were based on 1.0.0.M2, which feels like a lifetime ago. Last week Mark Pollack announced the Spring AI 1.0.0 RC1 release, and today brings the release of the GA! We are super excited to see all the updates and want to help explain how things have changed since the book's release.
The most visible change is the complete overhaul of artifact IDs and package structure.
Artifact ID
<!-- OLD -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>

<!-- NEW -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>
The pattern has changed to:
- Model starters: spring-ai-{model}-spring-boot-starter → spring-ai-starter-model-{model}
- Vector store starters: spring-ai-{store}-store-spring-boot-starter → spring-ai-starter-vector-store-{store}
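For example, assuming a project that uses the pgvector store starter, the vector store rename looks like this (artifact IDs shown here simply follow the stated pattern; check the Spring AI BOM for the module you actually use):

```xml
<!-- OLD -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-pgvector-store-spring-boot-starter</artifactId>
</dependency>

<!-- NEW -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-vector-store-pgvector</artifactId>
</dependency>
```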
Package Name
Key classes have moved to new packages:
- KeywordMetadataEnricher and SummaryMetadataEnricher moved from org.springframework.ai.transformer to org.springframework.ai.chat.transformer
- Content, MediaContent, and Media moved from org.springframework.ai.model to org.springframework.ai.content
Module Structure
Spring AI now uses a modular architecture instead of a monolithic design:
- spring-ai-commons: Base module with no dependencies
- spring-ai-model: Core AI capability abstractions
- spring-ai-vector-store: Unified vector database abstraction
- spring-ai-client-chat: High-level conversational AI APIs
- spring-ai-advisors-vector-store: Bridges chat with vector stores for RAG
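If a starter pulls in more than you need, the finer-grained modules can be declared directly. A sketch, assuming you only want the chat client abstractions and are managing the model dependency yourself:

```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-client-chat</artifactId>
</dependency>
```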
Chat Memory and Advisor
// OLD
// Using constants from AbstractChatMemoryAdvisor
myConfig.put(AbstractChatMemoryAdvisor.CHAT_MEMORY_CONVERSATION_ID_KEY, "conv123");
// NEW
// Now use constants from ChatMemory interface
myConfig.put(ChatMemory.CONVERSATION_ID, "conv123");
- CHAT_MEMORY_RETRIEVE_SIZE_KEY renamed to TOP_K
- DEFAULT_CHAT_MEMORY_RESPONSE_SIZE (value: 100) renamed to DEFAULT_TOP_K with a new default value of 20
- CHAT_MEMORY_CONVERSATION_ID_KEY moved from AbstractChatMemoryAdvisor to the ChatMemory interface
Tool Calling
// OLD (deprecated)
ChatClient chatClient = new OpenAiChatClient(api)
        .tools(List.of(new Tool("get_weather", "Get weather", params)))
        .toolCallbacks(List.of(new ToolCallback("get_weather", handler)));

// NEW
ChatClient chatClient = new OpenAiChatClient(api)
        .toolSpecifications(List.of(new Tool("get_weather", "Get weather", params)))
        .toolCallbacks(List.of(new ToolCallback("get_weather", handler)));
- The tools() method is now toolSpecifications()
- ToolContext is now final and cannot be extended
- ToolContext behavior changed to support both explicit and implicit tool resolution
Observability
Spring AI now uses logging instead of tracing for observability:
# OLD
spring.ai.chat.observations.include-prompt=true
spring.ai.chat.observations.include-completion=true

# NEW
spring.ai.chat.observations.log-prompt=true
spring.ai.chat.observations.log-completion=true
- Configuration properties renamed: include-prompt → log-prompt, etc.
- Added TracingAwareLoggingObservationHandler for trace-aware logging
- Replaced micrometer-tracing-bridge-otel with micrometer-tracing
Vector Store
// OLD
Optional<Boolean> result = vectorStore.delete(ids);
if (result.isPresent() && result.get()) {
    // handle successful deletion
}

// NEW
vectorStore.delete(ids); // Now throws an exception if delete fails
- The delete() method in VectorStore is now void instead of returning Optional<Boolean>
- The default value of the initialize-schema property is now false
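If you previously relied on automatic schema creation, you now have to opt in per store. A sketch for the pgvector store (property prefix assumes pgvector; other stores follow the same shape):

```properties
# Schema creation is no longer on by default
spring.ai.vectorstore.pgvector.initialize-schema=true
```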
Template
Self-contained templates in advisors:
- QuestionAnswerAdvisor expects templates with query and question_answer_context placeholders
- PromptChatMemoryAdvisor expects templates with instructions and memory placeholders
- VectorStoreChatMemoryAdvisor expects templates with instructions and long_term_memory placeholders
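As an illustration, a custom template for QuestionAnswerAdvisor has to supply both of its placeholders; the wording below is our own, only the placeholder names come from the advisor:

```text
{query}

Answer the question using only the following context:
{question_answer_context}
```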
Model Autoconfiguration
# OLD
spring.ai.openai.chat.enabled=true
# NEW
spring.ai.model.chat=openai
# Or to disable: spring.ai.model.chat=none
- The old properties for enabling/disabling models have been removed
- Use the new properties instead: spring.ai.model.chat, spring.ai.model.embedding, etc.
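For example, to keep OpenAI chat and embeddings enabled while turning off image models (property names here assume the chat, embedding, and image model types):

```properties
spring.ai.model.chat=openai
spring.ai.model.embedding=openai
spring.ai.model.image=none
```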
Usage Interface
// OLD
Long generationTokens = usage.getGenerationTokens();
// NEW
Integer completionTokens = usage.getCompletionTokens();
- getGenerationTokens() renamed to getCompletionTokens()
- Token count field types changed from Long to Integer
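At call sites this is mostly a mechanical type change. A stand-alone sketch (TokenTally is our own illustration, not a Spring AI class) of accumulating the new Integer counts null-safely:

```java
public class TokenTally {
    // Hypothetical accumulator illustrating the Long -> Integer change;
    // token counts from Usage are now Integer and may be null.
    private int completionTokens;

    public void add(Integer completion) {
        if (completion != null) { // guard against models that omit counts
            completionTokens += completion;
        }
    }

    public Integer getCompletionTokens() {
        return completionTokens;
    }

    public static void main(String[] args) {
        TokenTally tally = new TokenTally();
        tally.add(12);
        tally.add(null); // ignored
        tally.add(30);
        System.out.println(tally.getCompletionTokens()); // prints 42
    }
}
```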
Removed Implementations
Several implementations have been removed:
- Watson AI model (based on outdated text generation)
- MoonShot and QianFan (not accessible outside China)
- HanaDB vector store
- CassandraChatMemory implementation
Summary
We hope you enjoy the book as much as we enjoyed writing it. The next 6-12 months are likely to bring even more rapid shifts and changes, and we hope to write more about some of them here. We also hope to get the examples in the Apress repository updated to reflect today's 1.0.0.GA release.
To our success!