Claude is not capable of stealing designer jobs (yet)

Every time I see someone complain that Claude will not follow their Figma spec, even after building a skill, even after updating CLAUDE.md, I want to tell them: that is not a bug in your prompt. That is the product. It is a prediction engine. It produces the most statistically probable output given your input. It was never designed to follow a spec. It was designed to sound like it is following one. Those are different things. One of them exists.
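To make the "prediction engine" point concrete, here is a toy sketch of what that means. Everything in it is invented for illustration: the four-word vocabulary, the scores, the prompt. Real models do this over tens of thousands of tokens with learned weights and fancier decoding, but the core move is the same: turn scores into probabilities and pick a likely continuation. Nothing in the loop ever consults your spec.

```python
import math
import random

def softmax(scores):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and made-up scores for a prompt like
# "Follow the spec and use the primary button style..."
vocabulary = ["primary", "secondary", "ghost", "custom"]
scores = [2.4, 1.1, 0.3, -0.5]  # hypothetical model outputs

probabilities = softmax(scores)

# The model does not open the Figma file. It picks a token
# in proportion to probability, spec or no spec.
next_token = random.choices(vocabulary, weights=probabilities, k=1)[0]
print(next_token)  # most often "primary", sometimes not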
The sooner everyone accepts that, the sooner we stop reading posts that say "Figma is dead" or "Design is dead." Those posts are not wrong because the technology is bad. They are wrong because they are describing a capability the thing does not have and was never built to have.
Hallucinations are a feature, not a bug
It cannot be used as a qualitative analysis tool either. It will make up user quotes. Confidently, in the right register, with the right level of specificity. Tell it to respond only with quotes from actual users and it will produce something that sounds exactly right and is completely invented. Andrej Karpathy put it plainly in 2023:
"Hallucination is not a bug, it is LLM's greatest feature."
He is correct. The same mechanism that makes it generative is the one that makes it unreliable as a fact source. You cannot have one without the other. Gary Marcus has been saying something similar for years and goes further, writing that LLMs "continue to lack a grasp of reality" and cannot begin to fact-check their own output. This is not a fixable bug. It is the architecture.
That framing makes people uncomfortable, but it is accurate.
The numbers nobody is looking at closely enough
Could it be that a bunch of really big companies have spent a lot of money convincing people, especially executives, that this technology is going to change the world, and all they have to do is believe it really hard and not look too closely at the numbers or the actual output? Sequoia's David Cahn calculated that the AI industry needs to fill a $600 billion annual revenue gap just to justify current infrastructure spending. Goldman Sachs published a report in 2024 asking whether the $1 trillion going into AI would ever pay off, and their own head of global equity research concluded that "not one truly transformative, let alone cost-effective, application has been found." That is not a fringe take. That is Goldman Sachs.
The hype is going to find its level
The bubble is going to correct. It may not pop cleanly (these things rarely do), but the hype is going to find its level. In the meantime, we are all living through a wildly out-of-control hype cycle, enabled by a credulous tech and business press, and the people paying for it are the ones making decisions based on demos instead of output.
So what is it actually good for?
It makes up statistically probable strings of text. That is it. That is the whole product. It is genuinely useful within those constraints. It got me from zero to a working blog in a single session. But that is not the same as following a spec, producing reliable research, or replacing design judgment. Those are different things. And until everyone agrees on what the thing actually is, we are going to keep reading posts about what it is eventually going to become.
Bibliography
Karpathy, A. (2023, December 9). On the "hallucination problem." X (formerly Twitter). https://x.com/karpathy/status/1733299213503787018
Marcus, G. (2025, May 5). Why do large language models hallucinate? Marcus on AI (Substack). https://garymarcus.substack.com/p/why-do-large-language-models-hallucinate
Towards Data Science. (2026, March 16). Hallucinations in LLMs are not a bug in the data. https://towardsdatascience.com/hallucinations-in-llms-are-not-a-bug-in-the-data/
Cahn, D. (2024, June 20). AI's $600B question. Sequoia Capital. https://sequoiacap.com/article/ais-600b-question/
Goldman Sachs Research. (2024, June). Gen AI: Too much spend, too little benefit? Goldman Sachs Top of Mind Series. Referenced via: https://decrypt.co/239130/generative-ai-never-mind-says-goldman-sachs
Acemoglu, D. (2024). The simple macroeconomics of AI. Referenced via Goldman Sachs Top of Mind and: https://lfaidata.foundation/communityblog/2024/10/28/ai-advances-arent-likely-to-occur-nearly-as-quickly-as-many-believe/
Masud, F. (2024). A closer look at AI FOMO in corporate America. HP Workforce Experience Blog. https://workforceexperience.hp.com/blog/ai-fomo/ (source of the Barclays 12,000 products estimate)
