AI 产品与平台开发工具

Claude Design and Google DESIGN.md: Replacing Designers or Replacing Coders?

A trend already underway: on small projects, designers and coders are merging

After this wave of AI design tools, the most common anxiety in designer circles is “will my job be replaced.” The anxiety in coder circles is “do I now have to learn some design.” Both camps worry from their own angle, but a simpler observation is that at small companies and on simple projects, these two roles have already merged in practice. A solo developer, a three-to-five-person startup team, or a PM who needs a mockup for a pitch deck can now ship something without hiring a designer or a frontend engineer. The phenomenon is less visible in enterprise products and high-end brand work, where today’s AI tools can’t meet the bar for visual quality, brand consistency, or complex interaction. But from solo projects up through mid-sized products, the merge is happening.

This piece isn’t a prediction of the future. It looks only at where the current trend is heading. Wherever the two roles merge, one person who knows a bit of the other side is enough. Is that person a designer who knows a bit of code, or a coder who knows a bit of design? Put differently: after the merge, who does less work for more money?

Over the past few months of 2026 the question got a fairly clear answer, because the major players all shipped new products and the direction was surprisingly consistent. Anthropic released Claude Design on April 17; Google updated Stitch 2.0 in March and then on April 21 open-sourced DESIGN.md as a cross-tool specification; and alongside them Lovable, Bolt, v0, and Cursor all built out from the coder’s working environment. The entry point of these tools is a text box, the output is code, and the feedback loop is “run it and see if it’s right.” A designer who has never written CSS sits down to use them and immediately feels that the workflow doesn’t match how she works. That’s the claim this piece defends: the default answer from the entire current AI design stack is to let the coder who knows a bit of design replace the designer who only knows design. I’ll unpack the shape of Claude Design and DESIGN.md to explain why, despite looking like they hand design authority back to designers, they remain coder tools; then I’ll describe the opposite direction Figma is taking, and how far along that path it actually is. The default answer isn’t necessarily the best one, but for most product teams today it’s the one that wins, and the reason is more structural than “who is smarter.”

The recently shipped big products are all built for coders

Start with Claude Design. Anthropic shipped it on April 17, and Figma’s stock dropped about 7% that day. The interface is a palette icon in the left rail of claude.ai; clicking it opens a chat box plus a live HTML preview. You type a natural-language description of what you want, Claude generates a first pass, and you iterate through continued chat, inline comments, and draggable sliders. Output exports to HTML, PDF, PPTX, or Canva, or gets handed off to Claude Code to keep writing code.

Anthropic’s official launch names five target personas: senior designers, product managers, founders, marketers, and sales. The last four share a common trait: they want visual output without opening Figma. The Register’s headline for the launch was “Anthropic debuts Claude Design, because who needs designers?” (theregister.com). The French review site agence-scroll put it plainly: “The real target is non-designers. People who’ve never opened Figma.” (agence-scroll.com). The positioning is explicit.

Now Google’s two recent moves. The March Stitch 2.0 update added infinite canvas, voice input, and multi-screen generation. The bigger move is DESIGN.md: on April 21 Google published it as a cross-tool spec (github.com/google-labs-code/design.md). Stitch itself is consistently reviewed as an inspiration tool. Independent designer Dianne Alter, after testing it against Qualcomm’s open-source design system, summarized: “Stitch screenshots become references for a real design file, not the file itself.” (designproject.io).

DESIGN.md wasn’t invented by Google. The idea of “a Markdown file describing a design system for an AI to read” first emerged in the vibe coding community (see Banani’s design.md guide: “vibe coding community started experimenting with moving stylistic preferences into a separate md instruction file”). Many solo developers using Claude Code or Cursor to generate UI noticed the output drifted in style, so they spontaneously started dropping a file describing the brand’s visual language into the project as context. In March 2026 Google internalized this as a Stitch product feature (designmd.app: “Standard introduced by Google Stitch in March 2026”). In late March VoltAgent’s awesome-design-md repository launched and within weeks had reverse-engineered brand samples from the public CSS of 55+ sites, including Stripe, Vercel, Linear, Notion, Apple, Ferrari, and Tesla. On April 21 Google released DESIGN.md under Apache 2.0 as a formal spec (github.com/google-labs-code/design.md), paired with a command-line tool and a token naming convention. The substance of this move is claiming de facto standard authority: the format was already spreading in the wild without a unified convention, and Google stepped in to define how DESIGN.md should be written going forward.

DESIGN.md is a Markdown file placed at the project root that any AI agent can read when it opens the project. It juxtaposes two layers: the first half is design tokens written in YAML, defining concrete values for colors, type sizes, spacing, corner radii, and shadows; the second half is natural-language prose explaining the intent behind every meaningful decision. A minimal example:

colors:
  primary: "#B8422E"
  background: "#F7F1E8"
typography:
  heading: "Söhne Mono"

---

The warm cream background pulls the reading experience toward an editorial
register, suited to long-form reading pages, not conversion-oriented
transaction pages. The brick red primary is a restrained signal in a long-form
context; when paired with primary action buttons, don't stack additional
saturated accent colors on top. Söhne Mono is reserved for heading levels;
body copy stays sans-serif.

This form didn’t exist in the industry before. The Design Tokens Community Group’s tokens.json has structured values but no intent. Tailwind’s config has class mappings but no why. Figma Variables can do semantic layering but stays locked inside Figma’s file format, unreadable to other agents. DESIGN.md is the first time the what and the why of design live together in one plain-text artifact, where any tool can git diff and version-control it.
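Because both halves live in one plain-text file, an agent-side reader is almost trivial to write. The sketch below is my own illustration, not Google’s parser: it assumes the token half and the prose half are separated by a `---` line (as in the example above) and handles only the two-level `group:` / `key: "value"` YAML subset shown there, to stay dependency-free.

```python
# Hypothetical agent-side DESIGN.md reader -- an illustration, not the
# actual Google tooling. Assumes a `---` line separates the YAML token
# half from the natural-language prose half.

def parse_design_md(text: str) -> tuple[dict, str]:
    """Split a DESIGN.md body into (tokens, prose).

    The token parser handles only the flat two-level subset used in the
    article's example; a real implementation would use a YAML library.
    """
    token_part, _, prose = text.partition("\n---\n")
    tokens: dict[str, dict[str, str]] = {}
    group = None
    for line in token_part.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue  # skip blanks and comments
        if not line.startswith(" "):          # top-level group, e.g. "colors:"
            group = line.rstrip(":").strip()
            tokens[group] = {}
        else:                                 # indented "key: value" pair
            key, _, value = line.strip().partition(":")
            tokens[group][key.strip()] = value.strip().strip('"')
    return tokens, prose.strip()

sample = '''colors:
  primary: "#B8422E"
  background: "#F7F1E8"
typography:
  heading: "Söhne Mono"

---

The warm cream background pulls the reading experience toward an editorial register.'''

tokens, prose = parse_design_md(sample)
print(tokens["colors"]["primary"])   # #B8422E
print(prose.startswith("The warm"))  # True
```

The point of the sketch is the shape, not the parser: the structured half lands in a dict an agent can resolve references against, while the prose half stays intact as context for the model.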

Alongside the open source release Google shipped a @google/design.md command-line tool (full spec at spec.md) with two subcommands. lint checks whether token names follow the convention, whether references resolve (e.g. {colors.primary} pointing to a non-existent key), and whether color combinations meet WCAG 4.5:1 contrast, outputting structured JSON for agents to consume. diff compares two versions of DESIGN.md, separately surfacing token-level and prose-level changes, and returns a "regression": true/false field that drops straight into CI. The point of this CLI is that design rules can now be gated like code: change a color and CI automatically runs lint to check whether you broke accessibility; edit a paragraph of prose and diff surfaces which judgment principles changed.
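The contrast rule is the most mechanical of the lint checks and easy to reproduce. The sketch below is my own illustration of such a check, not the `@google/design.md` implementation; it applies the standard WCAG 2.x relative-luminance formula for sRGB hex colors and emits a finding shaped like structured JSON an agent could consume.

```python
# Sketch of a WCAG 4.5:1 contrast check like the one a DESIGN.md lint pass
# would run over color tokens -- an illustration of the rule, not Google's
# CLI code. Formula per WCAG 2.x relative luminance for sRGB.

def _channel(c: int) -> float:
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def lint_contrast(fg: str, bg: str, threshold: float = 4.5) -> dict:
    """Return a machine-readable lint finding for one color pair."""
    ratio = round(contrast_ratio(fg, bg), 2)
    return {"rule": "wcag-contrast", "ratio": ratio, "pass": ratio >= threshold}

# Black on white is the maximum possible contrast, 21:1.
print(lint_contrast("#000000", "#FFFFFF"))  # {'rule': 'wcag-contrast', 'ratio': 21.0, 'pass': True}
# The article's brick red on warm cream:
print(lint_contrast("#B8422E", "#F7F1E8"))
```

Because the output is plain data rather than human-oriented text, an agent can act on a failing finding directly, e.g. by darkening the foreground token and re-running the check.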

The community response was fast. Within two weeks, VoltAgent’s awesome-design-md collection had reverse-engineered DESIGN.md samples from 55+ major sites’ public CSS, and anyone can drop one into a project root and have Claude Code or Cursor generate UI in that brand’s visual language. Anthropic’s skills project opened an issue discussing treating DESIGN.md as a first-class citizen alongside their own SKILL.md. Independent sites like designmd.app and design-extractor.com describe DESIGN.md as “the design counterpart to AGENTS.md.” That framing captures the real reason it drew attention: not because “a new format got open-sourced” but because, for the first time, design rules share the same engineering infrastructure as code rules — plain text, git-able, diff-able, CI-gateable.
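The “CI-gateable” half of that claim is easy to make concrete. The sketch below is my own illustration of a token-level diff gate with a regression flag, not the actual `diff` subcommand; it treats removed or changed tokens as regressions and pure additions as safe, which is one reasonable policy among several.

```python
# Sketch of a token-level DESIGN.md diff gate -- an illustration of the
# "regression: true/false" idea, not the @google/design.md implementation.
# Policy here: removals and value changes gate the build, additions don't.

def diff_tokens(old: dict, new: dict) -> dict:
    """Compare two flat token dicts and return a CI-consumable report."""
    removed = sorted(k for k in old if k not in new)
    changed = sorted(k for k in old if k in new and old[k] != new[k])
    added = sorted(k for k in new if k not in old)
    return {
        "removed": removed,
        "changed": changed,
        "added": added,
        # A removed token breaks every consumer that references it; a
        # changed value may break contrast or brand rules, so both gate.
        "regression": bool(removed or changed),
    }

old = {"colors.primary": "#B8422E", "colors.background": "#F7F1E8"}
new = {"colors.primary": "#A03420", "colors.background": "#F7F1E8",
       "colors.accent": "#1E6E5C"}

report = diff_tokens(old, new)
print(report["regression"])  # True -- colors.primary changed
```

In a pipeline, a `"regression": true` report would fail the build and surface the changed keys, exactly the way a failing unit test surfaces a code change.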

Who DESIGN.md is for is clear from its shape. YAML syntax, {colors.primary} references, command-line lint, and the mental model of placing DESIGN.md at the project root alongside AGENTS.md and CLAUDE.md: all of this is the working language of coders. The launch presenter, David East, is a DevRel at Google Labs with a Firebase background. In that collection of 55+ brands, the DESIGN.md files are essentially reverse-engineered from website CSS: the token portion is accurate (because CSS is already structured), and the prose portion is scraped from brand marketing pages, yielding phrases like “trustworthy financial feel,” not the kind of thing a brand designer actually writes in internal documentation. What DESIGN.md does is give coders a more structured way to describe a design system to an AI; the designer is neither the primary author nor the primary consumer.

The concrete differences between two working rhythms

Input side: describe in text or judge with your eyes. Coders already work by describing intent in text; that’s what writing code is. Swapping the medium of intent from code to natural-language prompt is a small shift for a coder, not an upending of workflow. A designer’s day is visual judgment: whether this button belongs here, whether this block of text sits comfortably away from the image, whether this palette fits the brand. Claude Design and Stitch let you type “make the top section feel more modern,” show you the result, and let you reply “make the text larger.” For a designer, the input side throws away half her professional capability. Independent designer Jae Lee, after trying Stitch, said: “Not many options to choose from, just a few colors, a color picker, a handful of Google Fonts.” (designerup.co). She expected a canvas she could actually edit; what she got was a chat box in a shell.

Editing side: regenerate the whole screen or adjust at element granularity. A designer’s daily work in Figma is “change this padding from 16 to 20, this line height from 1.4 to 1.15, soften this shadow,” making the change and seeing the result instantly. In Claude Design or Stitch you say “move that button 8 pixels to the left” and the AI interprets it differently every time; you say “make the text thinner” and it might swap the whole block to a different weight. Mejba Ahmed, after a round of testing, wrote: “When you need to adjust something by exactly 4 pixels, or align a baseline to a specific grid, Stitch’s natural language interface becomes a frustration.” (mejba.me). Coders don’t do pixel-level adjustment in the first place: write the code, glance at the result, good enough. A designer’s work lives precisely in fine adjustment, and this loss of granularity cancels her core skill.

Delivery side: one-shot generation or continuous iteration. In Stitch’s Experimental mode you finish a whole project only to find you can’t export to Figma (Google employee Rishabh Chauhan confirmed this personally on the developer forum); the Gemini 3 Pro mode has no export option either; and once the free quota runs out, a rate limit with a threshold the user can’t see kicks in. A coder reads this as “tool isn’t mature yet, come back later.” A designer reads it as “I spent an afternoon and can’t carry the result back into my main workflow.” A designer’s work isn’t generating one image and walking away; it’s iterating on a stable canvas over and over, and that’s what Figma spent a decade polishing.

Presence or absence of a feedback loop decides who gets taken over by agents first

Put the three together and they point to one deeper difference: the coder’s work has a ready-made feedback loop, the designer’s doesn’t. This is the deepest reason every AI design tool serves coders first.

The coder’s work has been trained over the past decade into something heavily text-based. Writing code is already describing intent in text, just in a formal dialect. More importantly, the coder has a full feedback loop in hand: the generated code can run, can be tested, can go through CI; when it breaks, logs and stack traces point at the issue. If the agent mistranslates the prompt, the coder finds out immediately. That feedback loop is why “coder speaks natural language, agent writes code” works as an engineering proposition. You don’t need the coder to trust the agent; you just need the coder to verify the agent’s output. If it doesn’t check out, regenerate.

The designer’s work isn’t like this. Many of her decisions are visual judgment, taste, and feel, and the industry has never proven that those can be fully expressed in natural language. Worse, the feedback loop is broken. A designer can see that “this button’s tap target on mobile is too small” (visual judgment), can see that “this heading doesn’t fit the brand’s editorial tone” (taste judgment), but can’t see that “this API will time out on 3G” or “this component’s dark-mode focus state collides with existing CSS variables” (engineering judgment). If the agent’s output bakes in hidden engineering decisions, the designer has no way to verify those decisions visually; she’s signing a contract she hasn’t read.

Put it another way: today, making the coder the principal and the agent the executor works because the feedback channel is already there. Code runs and tests, and the coder notices instantly when the agent mistranslates. Flipping it so the designer is the principal and the agent is the executor doesn’t work yet, because the feedback channel has to be built from scratch: the agent has to translate engineering decisions into something the designer can verify visually. That’s a product-design problem with no ready template to copy. Mainstream AI tools chose not to take it on and walked down the path of least resistance toward the coder. That isn’t laziness; it’s the technically obvious choice. The cost is that “the coder who knows a bit of design replaces the designer who only knows design” becomes the default outcome, not because coders are smarter, but because the coder’s working mode natively fits current agent technology.

Figma is doing something different

At this point the conclusion is that every mainstream AI tool is letting coders eat designers, and designers are left waiting. But one player is heading in the exact opposite direction, and is further along than the mainstream conversation realizes. That player is Figma.

Figma’s AI-related moves over the past year (Make, MCP server, Code Connect, AI Skills, Slots) look like scattered pieces if taken individually, but the decision principle behind them is consistent: Figma isn’t building an AI that replaces designers; it’s building a set of rules that force agents to go through designers first. That rule set has two parts.

First, the designer’s Figma file is the source of truth for every agent. The point is that whether you’re Claude Code, Cursor, or GitHub Copilot, when you want to write frontend code you should come to Figma and read the designer’s file, not have the coder describe it to you again through DESIGN.md or a screenshot. Figma turned this principle into product via the MCP server: when a coder writes code in Cursor, Cursor continuously reads color variables, component definitions, and layout constraints from the Figma file in the background, so the generated code stays aligned with the designer’s source file. When the designer changes a button corner radius, Cursor picks it up the next time it generates code. In this loop the designer isn’t “done once I hand off the design file”; she’s continuously defining semantics, and both agents and coders read from her.

Second, agents can only work inside the constraints the designer set up in advance. The point is that on a Figma canvas an agent can’t improvise freely; it can only assemble from the components, variables, and fonts the designer has defined. Figma turned this into product via AI Skills: a skill is a Markdown-written procedural instruction telling the agent how to do things inside this Figma file, such as “sync a new component from the codebase,” “generate a new UI screen based on existing components,” “sync tokens across code and Figma.” The agent can’t step outside the skill’s boundary to invent new visuals. Figma put the agent in shackles the designer set in advance, so what the agent does isn’t “creating from nothing” but “assembling on top of an existing semantic layer.” Christine Vallaure wrote on UX Collective: “The agentic AI setup will pull towards efficiency. Towards the library of things, we colour in quickly. That is the natural gravity of a system built for assembly. The designer’s job is to ensure there is something worth assembling in the first place.” (uxdesign.cc).

These two principles together go in the exact opposite direction from Claude Design. Claude Design says “design starts at the text box.” Figma says “design starts at the designer’s canvas; the agent plugs in behind the canvas.” The former uses AI to route around the designer; the latter makes the designer the upstream of the agent ecosystem. Figma today is the player closest to a designer-led agentic system, not because its technology is stronger but because its starting point is correct.

But Figma hasn’t finished the job either. Figma Make currently only generates React; a designer who wants Flutter, SwiftUI, Vue, or plain HTML is out of luck. The more fundamental limit is that Figma is still purely frontend and can’t do full-stack. If a designer in Figma says “show the user’s five most recent orders here,” she still needs a coder to define the API, the database schema, and the loading logic. Figma has no way to translate that intent into engineering decisions and then hand back a visual representation the designer can verify. The problem I named earlier, that the designer can’t verify engineering decisions, Figma hasn’t solved. More accurately: Figma has walked the first half of the designer-led path (agent assembles within visual constraints) and hasn’t started the second half (agent translates engineering decisions back into the visual layer). Both halves together are what a complete “designer replaces the coder” product would require.

What you can do today

If you’re a designer. Don’t panic, and don’t pretend Claude Design and Stitch are your tools. Their real customers are product managers, founders, and marketers. What they do is take the “I need a rough mockup” kind of task out of your inbox. Your work shifts from “help non-designers visualize their ideas” upward to “apply real design judgment on top of their rough output.” At the same time, spend time learning Figma’s AI roadmap end to end. The combination of MCP server, Make, Skills, and Code Connect in 2026 is the first time you can continuously and systematically define the semantic source of an agent ecosystem. Learning this pays off far more than learning Claude Design.

If you’re a coder or PM. These tools let you do things you used to need a designer for, but Claude Design and Stitch output at an inspiration level, not a ship-to-production level. Internal prototypes, demo decks for leadership, early validation: all fine. Products going live will break on accessibility, brand consistency, and edge cases. If there’s already a designer on the team, don’t rush to learn DESIGN.md; learn Figma MCP first. Having Cursor or Claude Code read the designer’s Figma file directly is more accurate than any hand-synced DESIGN.md. DESIGN.md is useful as a format for aligning visual rules with AI on projects that don’t have a designer, but it isn’t a Figma replacement.

If you’re thinking about product opportunities. “An agentic system for designers” isn’t an untouched blank. Figma is working on it. The real blank spaces are where Figma can’t reach: design-to-code loops for non-React output, visual translation of engineering decisions, full-stack intent annotation. Any of these, done right, serves a group that isn’t well served today: designers who want to own full-stack products without turning into coders.

The designer role and the coder role have already merged on small projects. After the merge, is it the designer-who-knows-a-bit-of-coder who does less for more, or the coder-who-knows-a-bit-of-designer? Today’s mainstream AI tools answer: the latter. The answer holds not because coders are smarter, but because today’s agent interfaces fit the coder’s working mode naturally. But this isn’t the only answer. Figma is making the other one, and has walked further than the mainstream discussion realizes. Which answer wins depends on whether Figma can finish the second half, and whether new players show up in the gaps Figma can’t cover. If you sit between both worlds, going deep on one side matters more than staying shallow on both.