Prompts Build Worlds
We've just learned how to be competent prompt engineers.
We know a good prompt isn't just a single question; it's a comprehensive blueprint encompassing intent, style, output format, and success criteria. We use techniques like role-playing, step-by-step reasoning, and few-shot examples to distill vague ideas into executable text. But perhaps we haven't grasped something more fundamental:
A prompt has never been the requirement itself. It's a temporary world you construct with information.
In this world, the AI lives. It doesn't reason from scratch but attempts to become an acceptable collaborator within the history, style, rhythm, tone, preferences, and structural fragments you provide.
In the past, being a prompt engineer meant compressing ourselves into a single instruction, handing it over, and letting the AI guess and execute. The new paradigm is for us to become memory architects, laying down clues amidst information piles, inviting the AI to step in, inhabit this context, and work alongside us.
You don't just toss a sentence at it and expect immediate output. Instead, you invite it to sit at your desk, read your documents, drafts, and notes, allowing it to settle in, see clearly, and enter the space. A prompt is no longer just an action verb; it becomes spatial modeling for consciousness. You're not writing a question; you're constructing an environment for the AI to operate within. Once you start thinking this way, you'll understand: the quality of the prompt doesn't depend on how cleverly you write it, but on whether the world you've built allows the AI to have something meaningful to say.
The Evolving Boundaries of Information Assets
Once you view prompts as a way to construct context, you'll realize: What truly influences AI behavior isn't just what you say, but the information you immerse it in.
AI models seem to understand you better not just because they've become more powerful, but because you've unconsciously started giving them richer, more personal context material drawn directly from your life. This means we must also re-examine an overlooked question:
What kind of information qualifies as an asset usable by AI?
Previously, information assets meant structured data: documents, spreadsheets, tutorials, FAQs, code snippets. Well-organized input yielded clean, neat answers from AI.
Later, the prompt itself became an information asset. We learned to explicitly express task goals, desired styles, success criteria, preferences, and constraints within a piece of language.
Now, a third evolution is underway: Any stream of life information that describes you, defines you, or reflects your preferences can become contextual material for AI.
For example:
- A voice message to a friend: "I feel this part isn't sharp enough."
- A blog draft you wrote halfway and then deleted.
- A three-level nested bullet point in Notion, followed by a note: "Too cliché, rewrite."
- A comment in a meeting: "Are we introducing intelligent agents into the core loop too early?"
This content, never intended for AI consumption, can now be processed by it and even become the most potent material for understanding you. We can call this type of information immersive context. It lacks structure but possesses emotion, rhythm, and inertia. It has no format but contains the traces of your thinking during decision-making. It wasn't prepared for AI, yet it might be what AI needs to know most.
Therefore, the definition of information assets has shifted from "citable" to "enterable." It's no longer something you hand over to the AI, but a living space you build for it. The AI inhabits this space, gets accustomed to your writing rhythm, understands your aesthetic standards, knows which phrases you avoid, which clichés you dislike, and which old passages you don't want to repeat. In this sense, the prompt is the key to the door, but the context is the room itself. And when this room is large enough, complex enough, and real enough, the AI within it will not just execute instructions but will grow smarter capabilities on its own.
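As a rough illustration of this shift from "citable" to "enterable," here is a minimal sketch of treating life fragments as a room the AI inhabits rather than an archive it consults. All names here (`Fragment`, `ContextRoom`, `render`) are hypothetical, invented for this example, not any real API:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Fragment:
    """One unstructured life fragment, kept with minimal metadata."""
    kind: str          # e.g. "voice_note", "deleted_draft", "margin_note"
    text: str
    created: date

@dataclass
class ContextRoom:
    """The 'room' the AI enters: fragments are rendered wholesale, not cited."""
    fragments: list[Fragment] = field(default_factory=list)

    def add(self, kind: str, text: str, created: date) -> None:
        self.fragments.append(Fragment(kind, text, created))

    def render(self, task: str) -> str:
        # The prompt (the task) is the key to the door;
        # the rendered fragments are the room itself.
        lines = ["## Context (unedited fragments, oldest first)"]
        for f in sorted(self.fragments, key=lambda f: f.created):
            lines.append(f"[{f.kind} · {f.created}] {f.text}")
        lines.append("## Task")
        lines.append(task)
        return "\n".join(lines)

room = ContextRoom()
room.add("voice_note", "I feel this part isn't sharp enough.", date(2024, 3, 1))
room.add("margin_note", "Too cliché, rewrite.", date(2024, 3, 3))
prompt = room.render("Revise the opening section.")
```

The point of the sketch is the asymmetry: the task line is one sentence, while everything above it is unedited material that was never written for the AI at all.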
Change the World, Not Just the Model
In recent years, AI progress has largely revolved around one word: emergence. The larger the model parameters and the broader the training data, the more capabilities suddenly appear: GPT learns to reason, Claude learns to write long-form content, Gemini learns to summarize, search the web, and orchestrate its own workflows.
We thought intelligence came from only two paths: training larger models or writing better prompts. But the latest generation of models—like o3, Claude 3.7, Gemini 2.5 Pro—are opening up a new path: Not by training them to be smarter, but by designing an information space that compels them to be smarter. Not by meticulously crafting prompts, but by constructing an immersive context where AI naturally exhibits human-like abilities such as reasoning, comparison, selection, and avoidance. It's not about answering more accurately, but participating more deeply.
This path relies on a crucial prerequisite: model capabilities have reached a tipping point where they can compress messy context and reconstruct behavioral strategies. GPT-3.5 couldn't do it, and GPT-4 barely managed. But models post-o1 are starting to show a new capability boundary: they can produce structured intelligent responses within unstructured context. This signals the formation of a new way to stimulate intelligence:
Context-driven emergence—a path to evoke AI's latent capabilities by constructing complex contextual spaces.
1. AI Becomes More Like You, Without Relying on Prompts
Previously, if you fed GPT-4 three unsatisfactory drafts, some WeChat arguments, and project review notes, it could only summarize: You reflected on A, proposed B, and suggested considering C. But if you give the same material to o3 today, it might say:
"You're not actually struggling with the structure; you're trying to avoid clashing with last year's article style. I suggest keeping the structure but changing the opening tone, perhaps starting by explaining the writing style you dislike."
At this moment, it's not mechanically executing your task but understands why you got stuck last time and where you're heading now. It wasn't taught this; it grew this understanding within the dense context you provided. It wasn't activated by the prompt; it was forced to become smarter by the context.
This is the era we are entering: You don't need to write a perfect instruction every time. Instead, let the AI live amidst your past language, choices, and failures, piecing together a second persona for behavioral strategy from the fragments you've left behind.
2. Context as an Intelligence Stimulator
Our previous methods for training AI were:
- Train larger models to give them capability.
- Write better prompts to make them use existing capabilities.
Both paths work, but they share the same logic: AI can only give you what it learned during training.
Context-driven emergence offers a complementary approach. It doesn't invent new capabilities out of thin air but triggers them—within the context you provide, the AI generates intelligent behaviors it wouldn't normally initiate.
For example:
- It knows you like breaking symmetrical structures, so it proactively avoids parallel constructions this time.
- It remembers your instinctive aversion to preachy paragraphs, so it opts for a more narrative-driven approach.
- It adds a sentence you didn't ask for, noting, "If you find this part too convoluted, feel free to delete it," because you often deleted similar phrasing before.
These are not behaviors explicitly controlled by prompt instructions but are learned passively by the AI within the space you've arranged, absorbing your preferences, style, and cognitive blind spots.
This isn't personalization (remembering who you are) or tool-use (following instructions), but a form of contextual intelligence—a behavior generation mechanism that occurs naturally only when the model is powerful enough and the context sufficiently complex.
In other words, prompts are the explicit rules you write for the AI; immersive context is the hidden world you embed for it.
3. The Most Useful Information is Unfinished
We once thought AI context needed to be clean: well-formatted, clearly organized, structurally symmetrical. But now we're starting to realize: The truly stimulating information isn't your polished PPT, but your rambling voice notes, deleted drafts, and back-and-forth debates in Slack.
What AI reads from these isn't conclusions, but the paths of judgment, the leanings of preference, the tension of style, the inertia of emotion. This context isn't a document, an API, or a knowledge base. It's a manuscript covered in revisions, with your penciled hesitations, crossed-out ideas, and circled uncertainties in the margins.
These nuances can't be written into a prompt, but the AI can sense them within immersive context. You no longer need the AI to be an executor under your command; you need to learn to let it become a collaborator who has lived alongside you.
For the first time, we are seeing that the emergence of model capabilities can come from larger models or from more comprehensive context. You can choose to stand on the shoulders of larger parameter counts, or you can design a more real, complex, and habitable semantic space. When the two converge, that is when AI truly begins to become the figure beside you.
So, the key has never been whether the information is well-organized, but whether you've given the AI a sufficiently real life, sufficiently raw context, sufficiently close fragments of your thought process. The truly important question isn't what you want the AI to do, but whether you're ready to let it move in.
From Prompt Engineering to Context Architecture
When we realize AI can become smarter within context, the question naturally arises:
How do we build this context?
It's not about note-taking software or better information management techniques, but about context design thinking. Your job isn't to save information, but to construct a world for the AI that it can inhabit, absorb, and resonate with. This isn't a knowledge base; it's a context architecture capable of stimulating intelligent behavior.
The traditional logic of knowledge management goes like this: Acquire information → Organize and archive → Search when needed → Extract and cite
But immersive context isn't called upon only when you ask; it flows constantly around the AI. You're not building an archive for it to consult; you're placing it within your home, your mind, your past, present, and the unfinished parts of your future.
Here are four crucial design principles:
- Shift in Consciousness: From Recording Facts to Setting the Atmosphere. Stop asking whether a piece of information is worth saving; ask whether it helps the AI better understand how you think. Your failed drafts, tonal shifts, moments of writing and deleting: these are the temperature of the context. The value of information is no longer determined by structural integrity but by the density of human expression.
- Don't Wait to Organize: Capture Is the Foundation. AI fears disconnection. The gaps you leave, thinking you'll organize later, are actually breaks in the context. Voice notes, drafts, conversations, emotional snippets: capture them as you go, plant them as you write. You're not backing up the past; you're feeding the future.
- Structure for Recall, Not Cleanliness. Perfect categorization isn't necessary, nor is a final draft. What you give the AI isn't the right answer but the clues your brain left behind. Use annotations to tell it: you hesitated here, you deleted that, you felt this part was too cliché halfway through.
- Usage Feeds Back into Design. Don't imagine you must prepare all your information before using AI. The earlier you use it, the better you'll know which accumulated context actually works. AI is the real-time echo chamber for your context architecture; its responses are genuine feedback on how well you've set the stage. So the best context curation isn't finishing the organizing before use, but refining as you go, iterating on the echoes.
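The capture-first, refine-through-use loop described in these principles can be sketched as follows. The flat fragment store and the naive word-overlap relevance score are illustrative assumptions, not a prescribed design:

```python
def capture(store: list[dict], kind: str, text: str) -> None:
    """Capture as you go: no categorizing, no cleanup, no waiting to organize."""
    store.append({"kind": kind, "text": text})

def assemble(store: list[dict], task: str, budget: int = 3) -> str:
    """At use time, pick the fragments most relevant to the task.

    Relevance here is deliberately crude: the count of words shared
    between the task and each fragment. Real systems would use
    embeddings, but the loop's shape is the same.
    """
    task_words = set(task.lower().split())
    scored = sorted(
        store,
        key=lambda f: len(task_words & set(f["text"].lower().split())),
        reverse=True,
    )
    body = "\n".join(f"[{f['kind']}] {f['text']}" for f in scored[:budget])
    return f"{body}\n\nTask: {task}"

store: list[dict] = []
capture(store, "draft", "The opening tone clashes with last year's article.")
capture(store, "note", "Avoid preachy paragraphs in the intro.")
capture(store, "chat", "Lunch at noon?")
context = assemble(store, "Rewrite the opening tone of the intro", budget=2)
```

Using it early, as the fourth principle suggests, is what reveals which captured fragments actually earn their place in the budget.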
At this point, you'll find that context architecture isn't just an information collection system, but a way to redesign the premise of collaboration: You're not teaching the AI what to do; you're constructing a context where it can proactively become a collaborator.
Conclusion: You Are the Architect of the AI's World
We once thought AI was a more useful assistant the more obedient it was. We optimized prompts, set rules, and iterated through trial and error—thinking this was how to use AI well. But what truly changes its performance has never been how you ask, but the world it lives in.
The prompt is the hand you extend; the context is the entire stage you set. You are not an engineer but an architect: you are not using AI, you are constructing the very space in which it can think. In this space, it doesn't come to execute; it comes to become.
You are not asking it what it can do; you are watching it, within the context you've built, slowly grow into another you.