
Lessons Learned from Upgrading to LangChain 1.0 in Production

LangChain shipped its first stable v1.0 release in late October 2025. After spending the past two months working with the new APIs, I genuinely feel this is the most coherent and thoughtfully designed version of LangChain to date.

I wasn’t always a LangChain fan. The early versions were fragile and poorly documented, the abstractions shifted frequently, and it felt premature to use in production. But v1.0 feels more intentional, with a more consistent mental model for how data should flow through agents and tools.

This isn’t a sponsored post by the way — I’d love to hear your thoughts, feel free to DM me here!

This article isn’t here to regurgitate the docs. I’m assuming you’ve already dabbled with LangChain (or are a heavy user). Rather than dumping a laundry list of changes, I’m going to cherry-pick just four key points.

A quick recap: LangChain, LangGraph & LangSmith

At a high level, LangChain is a framework for building LLM apps and agents, allowing devs to ship AI features fast with common abstractions.

LangGraph is the graph-based execution engine for durable, stateful agent workflows in a controllable way. Finally, LangSmith is an observability platform for tracing and monitoring.

Put simply: LangChain helps you build agents fast, LangGraph runs them reliably, and LangSmith lets you monitor and improve them in production.

My stack

For context, most of my recent work focuses on building multi-agent features for a customer-facing AI platform at work. My backend stack is FastAPI, with Pydantic powering schema validation and data contracts.

Lesson 1: Dropping support for Pydantic models

A major shift in the migration to v1.0 was the introduction of the new create_agent method. It streamlines how agents are defined and invoked, but it also drops support for Pydantic models and dataclasses in agent state. Everything must now be expressed as TypedDicts extending AgentState.

If you’re using FastAPI, Pydantic is often the recommended and default schema validator. I valued schema consistency across the codebase and felt that mixing TypedDicts and Pydantic models would inevitably create confusion — especially for new engineers who might not know which schema format to follow.

To solve this, I introduced a small helper that converts a Pydantic model into a TypedDict extending AgentState right before it’s passed to create_agent. One critical detail: LangChain attaches custom metadata to type annotations, and that metadata must be preserved. Python utilities like get_type_hints() strip these annotations by default, so a naïve conversion won’t work.
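A minimal sketch of that conversion, using only Pydantic and the standard typing module. AgentStateStub is a stand-in for LangChain’s real AgentState, and the reducer metadata string is illustrative — the point is that include_extras=True keeps Annotated metadata intact:

```python
from typing import Annotated, TypedDict, get_type_hints
from pydantic import BaseModel

# Stand-in for LangChain's AgentState; the real class carries its own
# annotated fields (messages with a reducer, etc.).
class AgentStateStub(TypedDict):
    messages: list

class ResearchState(BaseModel):
    query: str
    # Annotated metadata (e.g. a LangGraph reducer) must survive the conversion
    retries: Annotated[int, "reducer-metadata"]

def pydantic_to_agent_state(model: type[BaseModel], base: type) -> type:
    # include_extras=True is the key: plain get_type_hints() strips
    # Annotated[...] metadata, which LangChain relies on.
    merged = {
        **get_type_hints(base, include_extras=True),
        **get_type_hints(model, include_extras=True),
    }
    # Build the TypedDict dynamically via the functional syntax
    return TypedDict(f"{model.__name__}TD", merged)

ResearchStateTD = pydantic_to_agent_state(ResearchState, AgentStateStub)
```

The resulting class can then be handed to create_agent as the state schema, while the rest of the codebase keeps working against the Pydantic model.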

Lesson 2: Deep agents are opinionated by design

Alongside the new create_agent API in LangChain 1.0 came something that caught my attention: the deepagents library. Inspired by tools like Claude Code and Manus, deep agents can plan, break tasks into steps, and even spawn subagents. 

When I first saw this, I wanted to use it everywhere. Why wouldn’t you want “smarter” agents, right? But after trying it across several workflows, I realised that this extra autonomy was sometimes unnecessary — and in certain cases, counterproductive — for my use cases.

The deepagents library is fairly opinionated, and very much by design. Each deep agent comes with some built-in middleware — things like ToDoListMiddleware, FilesystemMiddleware, SummarizationMiddleware, etc. These shape how the agent thinks, plans, and manages context. The catch is that you can’t control exactly when these default middleware run, nor can you disable the ones you don’t need.

After digging into the deepagents source code, you can see that the middleware parameter is documented as additional middleware to apply after the standard middleware: anything passed in middleware=[...] gets appended after the defaults.

All this extra orchestration also introduced noticeable latency without, in my case, a meaningful quality gain. So if you want more granular control, stick with the simpler create_agent method.

I’m not saying deep agents are bad; they’re powerful in the right scenarios. However, this is a good reminder of a classic engineering principle: don’t chase the “shiny” thing. Use the tech that solves your actual problem, even if it’s the “less glamorous” option.

My favourite feature: Structured output

Having deployed agents in production, especially ones that integrate with deterministic enterprise systems, I’ve found that getting agents to consistently produce output conforming to a specific schema is crucial.

LangChain 1.0 makes this pretty easy. You can define a schema (e.g., a Pydantic model) and pass it to create_agent via the response_format parameter. The agent then produces output that conforms to that schema within a single agent loop with no additional steps.
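The schema side of this is plain Pydantic. The create_agent call below is sketched from my usage and commented out so the example stays self-contained; the validation shown is what the structured-output guarantee amounts to:

```python
from pydantic import BaseModel, Field

class TicketSummary(BaseModel):
    title: str
    priority: int = Field(ge=1, le=5)  # constrained field: must be 1-5
    tags: list[str]

# With LangChain 1.0 the schema is passed straight to the agent, roughly:
#   agent = create_agent(model=..., tools=[...], response_format=TicketSummary)
# The guarantee you get back is equivalent to this validation succeeding:
raw = '{"title": "Login fails on SSO", "priority": 2, "tags": ["auth"]}'
summary = TicketSummary.model_validate_json(raw)
```

Because the agent returns a validated object rather than raw text, downstream deterministic systems can consume it without defensive parsing.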

This has been incredibly useful whenever I need the agent to strictly adhere to a JSON structure with certain fields guaranteed. So far, structured output has been very reliable too.

What I want to explore more of: Middleware

One of the trickiest parts of building reliable agents is context engineering — making sure the agent always has the right information at the right time. Middleware was introduced to give developers precise control over each step of the agent loop, and I think it is worth diving deeper into.

Middleware can mean different things depending on context (pun intended). In LangGraph, this can mean controlling the exact sequence of node execution. In long-running conversations, it might involve summarising accumulated context before the next LLM call. In human-in-the-loop scenarios, middleware can pause execution and wait for a user to approve or reject a tool call.

More recently, in the latest v1.1 minor release, LangChain also added a model retry middleware with configurable exponential backoff, allowing graceful recovery for transient endpoint errors.
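The idea behind that retry middleware can be sketched in a few lines of plain Python. This is a generic backoff loop, not LangChain’s implementation, and the function names are my own:

```python
import time

def call_with_backoff(fn, max_retries: int = 3, base_delay: float = 0.01):
    """Retry fn on transient errors with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_retries:
                raise  # out of retries: surface the error
            # delays grow 0.01s, 0.02s, 0.04s, ... between attempts
            time.sleep(base_delay * (2 ** attempt))

# Demo: an endpoint that fails twice with a transient error, then recovers.
calls = {"n": 0}
def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = call_with_backoff(flaky_endpoint)
```

The middleware version wraps the model call in the agent loop with exactly this shape of logic, configured declaratively instead of hand-rolled per call site.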

I personally think middleware is a game changer as agentic workflows get more complex, long-running, and stateful, especially when you need fine-grained control or robust error handling. 

This list of middleware is growing and it really helps that it remains provider agnostic. If you’ve experimented with middleware in your own work, I’d love to hear what you found most useful!

To end off

That’s it for now — four key reflections from what I’ve learnt so far about LangChain. And if anyone from the LangChain team happens to be reading this, I’m always happy to share user feedback anytime or simply chat 🙂

Have fun building!




Clara Chong
