Deprecated Everything — When The AI Describes Tools As They Were Six Months Ago

You asked Claude about Sora's API pricing. It gave you a detailed breakdown — per-second generation costs, resolution tiers, a free trial allocation of 50 generations per month. You budgeted your project around those numbers. When you went to sign up, the pricing page looked nothing like what the AI described. The tier structure was different. The free allocation didn't exist. The per-second costs were roughly double what you'd been quoted. You hadn't been given wrong information exactly — you'd been given old information, delivered in present tense, with no indication that anything had changed. The AI described Sora as it existed in its training data, which might as well have been a different product.

This is the training data cutoff problem made concrete. Every AI model has a point in time beyond which it knows nothing. Everything after that date is invisible. The model doesn't know what it doesn't know — it doesn't flag the gap, doesn't warn you, doesn't say "my information about this might be outdated." It describes the tool as it existed at the last moment it had data, using present tense, with full confidence, as if nothing could possibly have changed between then and now.

The Pattern

The cutoff is a hard wall, but the effects are fuzzy. It's not that the AI knows everything perfectly up to date X and nothing after. Its training data is a mix of sources gathered over a range of dates, with varying levels of completeness. Some tools might be well-represented in the training data from 18 months ago. Others might have sparse coverage even within the training window. The practical result is that the AI's knowledge of any given tool is a snapshot — but the resolution of that snapshot varies, and the date of the snapshot is rarely what you'd guess.

The deprecation cascade is the most common version of the pattern. Here's how it plays out: Tool X deprecated Feature A in January 2026. The AI was trained on data through roughly mid-2025. You ask about Feature A in March 2026. The AI describes Feature A in detail — how it works, how to access it, what parameters it supports. You build around Feature A. When you go to use it, it's gone. Not just different — removed entirely. The tool's deprecation notice is sitting in a changelog you never checked because you had a confident, detailed answer from what felt like a reliable source.

Pricing is the worst offender in this category, and it's not close. AI tool pricing changes constantly — sometimes monthly. Free tiers appear and disappear. Per-unit costs get restructured. Usage limits shift. The AI confidently quotes prices from its training data, and those prices may bear no resemblance to current reality. If you're scoping a project, budgeting for a client, or comparing tools based on cost, AI-provided pricing is almost guaranteed to be wrong in some material way. Not approximate — structurally wrong. The tier that the AI describes might not exist anymore.

UI descriptions are another high-frequency failure mode. You ask the AI where to find a specific setting or how to navigate to a particular feature. It tells you: "Click the gear icon in the top-right corner, then select Advanced Settings, then scroll to the API Configuration section." You open the tool. The gear icon is in the left sidebar. Advanced Settings was consolidated into a single Settings page three months ago. There is no API Configuration section — it was renamed to Developer Options. Each instruction is wrong, and each wrong instruction erodes your confidence that you're even looking at the right tool. You start wondering if you have the wrong version, the wrong plan tier, the wrong browser. You don't. The AI's UI description is a tour of a building that was remodeled since the last time anyone took a photo.

The workflow description problem is the most time-consuming variant. You ask the AI how to accomplish a specific task in a tool — say, creating a multi-step automation in Make.com. The AI walks you through a workflow: create a new scenario, add a webhook trigger, connect a JSON parser module, pipe the output to a Google Sheets module. But Make.com redesigned their scenario builder. The webhook trigger setup process changed. The JSON parser module was replaced with a built-in transformation step. The Google Sheets module now requires a different authentication flow. Every step the AI described maps to something that used to be correct, and every step is now wrong in a different way. You're following directions to a house that moved.

The "search the web" feature that some AI models offer doesn't fully solve this, and it's worth understanding why. Even AI with web access sometimes prioritizes its training data over search results, or retrieves cached pages that are themselves outdated. The model may synthesize an answer that blends its training data with web results, producing something that's half-current and half-stale — which is arguably worse than being entirely outdated, because you can't tell which parts are which. Web-augmented AI answers about tool capabilities are more reliable than pure training data answers, but they are not reliable in the way that opening the tool's current documentation is reliable.

The Psychology

The present tense is what gets you. If the AI said "as of mid-2025, Sora's pricing was structured as follows," you'd know to check for updates. But the AI doesn't timestamp its claims. It describes everything in present tense — "Sora's API pricing is structured as follows" — because it doesn't know the information is outdated. It doesn't have a concept of "things I know that might have changed." Its training data is its reality. This creates a false sense of currency in every response about specific tools, and you absorb that false currency without realizing it.

There's an asymmetry worth naming in how outdated information hurts you. When the AI describes a feature that's been improved — more capable now than when the AI last knew about it — you still benefit. You build with the old capability in mind, discover the tool actually does more, and you're pleasantly surprised. No harm done. But when the AI describes a feature that's been deprecated, downgraded, or repriced — you build on something that's gone, and the failure costs you time, money, or both. The damage is one-directional. Outdated information that undersells the tool is harmless. Outdated information that oversells the tool is expensive. And you can't tell which you're getting until you check.

There's also a recency illusion that compounds the problem. Because the AI's response is generated right now — in real-time, in front of you — it feels current. The act of receiving information in the present moment creates a cognitive association with present-tense accuracy. A blog post from 2024 looks old. An AI response generated three seconds ago feels fresh. But the AI's knowledge might be older than that blog post. The delivery mechanism masks the age of the underlying information, and your brain processes delivery time as information time. They're not the same thing, but they feel the same.

The version-specific query trick — asking "what does Tool X do in 2026" — helps marginally but doesn't solve the problem. The AI recognizes you're asking about a specific timeframe, but it may not have 2026-specific information to draw from. It fills the gap with what it does know, sometimes with a disclaimer, often without one. The result is training-data-era information dressed up in a 2026 frame, which is worse than no frame at all because it actively suggests currency.

The Fix

The changelog is your antidote. For any tool capability the AI describes, your first verification stop should be the tool's changelog or release notes — not the full documentation, not a Google search, the changelog. The changelog tells you exactly what changed and when. If the AI described Feature A and the changelog shows Feature A was deprecated in January, you have your answer in under 60 seconds.

Most serious AI tools maintain public changelogs. Anthropic, OpenAI, Runway, ElevenLabs, n8n, Make — they all publish release notes that document what shipped, what changed, and what was removed. Bookmark the changelogs for the tools you use regularly. When the AI tells you something specific about a tool — a feature, a pricing tier, an API endpoint, a UI location — check the changelog for changes since the AI's likely training data cutoff. This is a sub-two-minute check that prevents the multi-hour consequences of building on deprecated information.
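The changelog check above can even be semi-automated. Here's a minimal sketch in Python — the cutoff date, keyword list, and example entries are all illustrative assumptions, not data about any real model or tool. The idea is simply: given changelog entries, surface everything that shipped after the model's likely training cutoff and flag entries that tend to invalidate AI advice.

```python
from datetime import date

# Assumption: the model's training data ends around here. Adjust per model.
ASSUMED_CUTOFF = date(2025, 6, 30)

# Keyword stems that usually signal a change breaking AI-provided advice.
RISK_WORDS = ("deprecat", "remov", "renam", "pricing", "discontinu")

def changes_since_cutoff(entries, cutoff=ASSUMED_CUTOFF):
    """Return post-cutoff changelog entries as (date, summary, risky) tuples,
    where risky=True means the entry likely contradicts training-data answers."""
    flagged = []
    for entry_date, summary in entries:
        if entry_date <= cutoff:
            continue  # the AI plausibly knows about this one
        risky = any(word in summary.lower() for word in RISK_WORDS)
        flagged.append((entry_date, summary, risky))
    return flagged

# Made-up changelog entries, mirroring the Feature A story above:
entries = [
    (date(2025, 3, 10), "Added JSON parser module"),
    (date(2026, 1, 15), "Deprecated Feature A; use Feature B instead"),
    (date(2026, 2, 2), "Pricing tiers restructured"),
]
for d, summary, risky in changes_since_cutoff(entries):
    marker = "!! " if risky else "   "
    print(f"{marker}{d}: {summary}")
```

In practice you'd paste or scrape the real changelog into the `entries` list; the point is that the comparison itself — entry date versus assumed cutoff — is the whole check.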

For pricing specifically, the only source you should trust is the tool's current pricing page, loaded in your browser right now. Not the AI's answer. Not a cached Google result. Not a comparison website that might be running on its own stale data. The live pricing page. If you're making any financial decision based on AI tool costs — budgeting a project, comparing alternatives, scoping a pilot — load the pricing page and use those numbers. The AI's pricing information is a historical curiosity, not a planning input.

A broader habit that catches the deprecation problem: date-stamp your AI interactions mentally. When the AI tells you something specific about a tool, internally note that this information is from the AI's training window, not from today. Ask yourself: how likely is this to have changed since then? Pricing — very likely. API structure — moderately likely. Core concept of what the tool does — unlikely. The more volatile the information category, the more important it is to verify against a current source.
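The volatility heuristic above can be written down as a lookup. This is a sketch only — the category names and volatility levels are illustrative encodings of the paragraph's examples, not measurements of any particular tool.

```python
# Rough volatility of common claim categories, per the heuristic above.
# Unknown categories default to "high": when in doubt, verify.
VOLATILITY = {
    "pricing": "high",          # changes monthly -- always check the live page
    "ui_layout": "high",        # moves constantly -- verify before navigating
    "api_structure": "medium",  # endpoints and params -- verify before building
    "core_concept": "low",      # what the tool fundamentally does
}

def should_verify(category: str) -> bool:
    """Return True when a claim in this category warrants a live-source check."""
    return VOLATILITY.get(category, "high") != "low"
```

The useful property is the default: anything you haven't classified gets treated as volatile, which matches the asymmetry described earlier — over-trusting stale information costs more than a redundant check.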

The workflow that actually works is triangulation. The AI tells you what the tool could do — as of some point in the past. The tool's changelog tells you what changed since then. The tool's current documentation tells you what it does now. All three together give you a complete picture. Any one alone is incomplete. The AI without the changelog leaves you building on deprecated ground. The changelog alone is hard to interpret without context. The docs without the AI are slow to navigate. Use all three, in that order, and the deprecation problem becomes manageable — not eliminated, but caught before it costs you anything.


This is part of CustomClanker's AI Confabulation series — when the AI in your other tab is confidently wrong.