Disposable Software and Decision Resolution
It all started with a ten‑minute task. In machine learning R&D we use Labelbox for human data labeling. One time I noticed the labeled data quality was terrible, so I sent it back for relabeling. The next day the new results arrived, and I wanted to quickly see what changes were made and whether the wrong items were fixed.
In the pre‑AI workflow, this is a classic inefficiency. Even though we had the before/after JSON files, humans cannot directly parse and grasp large structured datasets. To understand the concrete changes, I had only two options. First, write Python code to parse and diff, then build a frontend to visualize it. That takes at least an hour or two of coding and debugging. But for a ten‑minute task, spending two hours of dev time is not worth it.
So most engineers choose the second path: manual sampling. I would open the JSON with a text editor, use an image viewer, randomly pick ten samples, and compare by hand. It is fast—done in ten minutes—but the decision quality is low. It relies on intuition and even luck. We cannot see the full picture of changes; we are forced to infer from extremely limited samples. This heavily depends on the engineer’s experience, trying to reconstruct the whole from scattered clues.
In the AI era, I took a third path. I handed the before/after JSON files and the original image links to an AI and asked it to build a visualization website. Two minutes later, it had generated a site with a full diff plus filtering, sorting, and search. With that tool I needed no sampling, no manual comparison, no luck. I saw the distribution of all changes directly, then decided which scenarios to inspect closely. Ten minutes later, I found that some errors had been fixed but one specific scenario still had many mistakes. Based on this full‑coverage review, I returned part of the batch for relabeling.
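To make the economics concrete, here is a minimal sketch of the diff logic such a throwaway tool wraps. The file names and the `uid`/`label` fields are hypothetical stand-ins; a real Labelbox export is nested differently, and the AI-generated site layered filtering and image views on top of exactly this kind of pass.

```python
import json
from collections import Counter

def load_labels(path):
    """Map each item's uid to its label from an exported JSON list."""
    # "uid" and "label" are placeholder field names for this sketch.
    with open(path) as f:
        return {item["uid"]: item["label"] for item in json.load(f)}

before = load_labels("before.json")
after = load_labels("after.json")

# Count every label transition across the full dataset,
# instead of eyeballing ten random samples.
changes = Counter(
    (before[uid], after[uid])
    for uid in before.keys() & after.keys()
    if before[uid] != after[uid]
)

for (old, new), count in changes.most_common():
    print(f"{old!r} -> {new!r}: {count} items")
```

The point is not this particular script but the shape of the move: the full distribution of changes appears first, and only then do you decide where to look closely.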
What unsettled me was that the powerful visualization site became useless the moment I finished the check. In traditional software engineering, this is a huge waste of development resources. Throughout our education and careers we are taught to design reusable, maintainable code, to obey DRY. This kind of use‑and‑burn disposable software feels heretical.
Yet it was exactly this disposable software that freed me from low‑resolution decisions based on blind sampling and gave me a fine‑grained, high‑resolution view of the labeling results. In other words, when the cost of writing code approaches zero, the once‑heretical strategy becomes optimal: for a tiny, temporary need, build a dedicated, polished tool on the spot.
The Invisible Price List and the Compressed Reality
Looking back, many “best practices” we take for granted in software engineering are not innate truths, but products of a specific economic environment—or cost structure.
In the digital world, a system's real runtime state is opaque to human senses, whether it takes the form of massive logs, complex intermediate states, or structured data like a Labelbox export. Code is both the material that builds products and the only effective way to observe the digital world. We must write code to parse, filter, and visualize before unreadable data becomes understandable information.
But in the old cost structure, code is expensive and engineers' time is scarce. Building the tools to observe the digital world is costly. So we each keep an invisible price list in mind: writing a dedicated analysis script just to track down one bug is not worth it; building a back-office UI just so a PM can understand the logic is not worth it; spinning up a visualization site just to verify data quality is pure fantasy.
Because the cost of building observation tools is far higher than the value of the problem, we pay the price of not getting high‑resolution observations. We are forced to adopt a survival strategy of compromise: compress reality.
Our intuition, experience, and so‑called best practices largely embody this compression strategy. Take debugging. In the traditional model, adding logs and analyzing them is expensive. An engineer cannot instrument a hundred places for debugging; even if we did and produced megabytes of logs, we could not read them. We end up logging or breaking at a few spots and trying one by one—guessing where the problem hides inside a black box.
So debugging has long been seen as a craft, even an art. This also distorts how we define technical expertise: career growth becomes training ourselves to simulate machines in our heads. A senior engineer is expected to have strong "intuition," able to build a mental model from sparse clues and guess the bug's location. We spend years polishing the accuracy of this blind guessing and take pride in it. Ultimately, though, it is dancing in shackles: we perfected blind guessing only because we could not afford to observe the full picture.
The same logic appears in collaboration. For PMs, the engineering system looks like a black box. So they communicate with engineers through docs and tickets rather than sharing live system states. Turning code logic into a visualization a non‑technical person can grasp is too expensive. We replace transparent information sharing with contracts and trust.
This also warps the definition of a PM’s skill set. A seasoned PM is often defined as someone who can precisely control inputs and outputs outside the black box. They do not need to understand the internals; instead they learn to write airtight requirements (stricter contracts) or to build trust to counter delivery uncertainty. Their skill tree is maxed out on “how to decide without seeing,” not “how to decide by seeing.”
This model has run so long that we take it as the only way the world works. We think intuition is the mark of experts and sampling is standard procedure. In reality, it is because we lived in an era where information was expensive to obtain, forcing us into a low‑resolution reality.
Now AI is tearing up the invisible price list. The cost of generating code, analyzing data, and building interfaces drops by orders of magnitude. We no longer need lossy compression of reality. Information once discarded for being too expensive—massive logs, full visual diffs, exhaustive context—suddenly appears with almost no cost.
This creates a tangible paradigm shift: from intuition‑driven, low‑resolution decisions to full‑data, high‑resolution decisions. That old price list is a cocoon trapping us in a low‑resolution world. In this new era, the real danger is not lack of experience or intuition, but choosing to stay in the cocoon and habitually ignoring observable facts.
Disposable Software as a Decompression Tool
In the new cost structure, our view of code and best practices must fundamentally change. For a long time, code has been treated as an asset requiring careful maintenance. Because it was expensive, we prized reuse and avoided reinventing wheels. In the AI era, the biggest variable AI introduces to code is not helping us ship the final product faster, but pushing the marginal cost of toolmaking to near zero.
Observation tools for the digital world have become so cheap that our behavior changes. Imagine if making a microscope cost less than squinting with the naked eye; the rational choice would be to build a microscope for every speck of dust.
That is exactly what the Labelbox case shows: building disposable software for a one‑off decision. This seemingly wasteful behavior is actually the most efficient solution to the specific problem. Code is no longer a long‑term asset but a disposable consumable.
This kind of disposable software is essentially a cheap decompression tool. Earlier we said reality’s logic is often folded inside black boxes. Our traditional definition of career growth was largely about training the ability to guess inside a black box—accumulating experience and intuition to infer the unseen. Disposable software violently unfolds those folded boxes—code logic, data state, or thought process—and lays them flat for humans to see.
This also rewrites what it means to be a senior engineer. Previously, senior meant strong simulation ability and technical intuition, able to deduce bottlenecks from thin air. Now, senior means a strong instinct to build tools: to quickly construct observation systems that directly reveal the truth.
Look at how debugging gets rebuilt by this logic. Before AI, debugging was an art of intuition. Because full observation was too expensive, we filled information gaps with guesses. In the new model, we can ask AI to instantly write code that instruments hundreds of key points, generate specialized log analysis scripts, or even dump the logs into a 2M‑token LLM window. We no longer have to play detective; with brute‑force search and full data analysis we can directly see the bug’s root cause. Debugging shifts from a luck‑based art to a data‑driven science.
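As one illustration of what "instrument everything" looks like when code is free, here is a generic tracing decorator of the kind an AI can emit in seconds. It is a sketch, not tied to any real codebase; applied across hundreds of functions, it produces a complete JSON trace that a follow-up script, or a long-context model, can read in full.

```python
import functools
import json
import time

def trace(fn):
    """Log each call's arguments, result, and duration as one JSON line."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        print(json.dumps({
            "fn": fn.__qualname__,
            "args": repr(args),
            "kwargs": repr(kwargs),
            "result": repr(result),
            "ms": round((time.perf_counter() - start) * 1000, 3),
        }))
        return result
    return wrapper

@trace  # hypothetical function, shown only to demonstrate the decorator
def normalize(score, max_score=100):
    return score / max_score

normalize(73)
```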
The same decompression applies to collaboration. Previously, PMs faced an engineering black box and played a documentation‑and‑trust game. Now, for every requirement discussion or intermediate state, we can quickly spin up a temporary visualization. PMs no longer need to stare at opaque tickets to guess progress; they can directly see the system’s runtime logic. These disposable tools erase translation costs, turning contract‑based vendor dynamics into fact‑based partnerships.
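To show how low the bar for such a temporary view is, here is a sketch that renders a made-up pipeline snapshot as a static HTML page using only the standard library. The stages and counts are invented; a real version would be generated by AI against the live system's actual state.

```python
import webbrowser
from pathlib import Path

# Invented snapshot of pipeline state a PM might want to inspect.
stages = [
    {"stage": "ingest", "items": 1200, "status": "done"},
    {"stage": "label", "items": 1200, "status": "running"},
    {"stage": "review", "items": 300, "status": "blocked"},
]

rows = "\n".join(
    f"<tr><td>{s['stage']}</td><td>{s['items']}</td><td>{s['status']}</td></tr>"
    for s in stages
)
html = (
    "<html><body><h1>Pipeline state (throwaway view)</h1>"
    "<table border='1'><tr><th>Stage</th><th>Items</th><th>Status</th></tr>"
    f"{rows}</table></body></html>"
)

Path("status.html").write_text(html)
webbrowser.open(Path("status.html").resolve().as_uri())  # open in browser
```

Twenty lines, discarded after the meeting; that is the entire cost of replacing a status-guessing thread with a shared view.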
The logic also reshapes documentation and knowledge management. In the past we only recorded high‑level strategic conclusions because turning messy thinking into text was too costly. Now AI lets us turn everyday discussions, intermediate thoughts, even rough drafts into structured docs at low cost.
There is a massive compounding effect hidden here. These documents are often generated once and may never be read, but together they form a high‑resolution personal knowledge base. When future us, a new teammate, or a future AI agent needs to look back, they see not just dry conclusions but the full context and reasoning of past decisions. The code is disposable, but the cognitive assets captured through code compound over time.
At this stage, cheap code is no longer the goal but the means. We write disposable code not to build a permanent structure, but to set up temporary scaffolding that lets us climb up and see what is inside the black box. Once we see the truth, the scaffolding can be torn down with zero guilt.
This mode of trading code for cognition is the key to uncompressing reality. It unlocks a new pattern: do not skimp on compute or code, because in the pursuit of high‑resolution truth, code is the cheapest consumable.
AI Native Is Strategy Reconstruction
Once we understand the collapse of information acquisition costs and the decompression power of disposable software, we can more precisely define AI Native.
Today, people often narrowly frame AI Native as efficiency gains. They focus on using Copilot to write the same function faster or ChatGPT to generate the same doc faster. This is merely running faster on the old path: hitching faster horses to the carriage instead of questioning the carriage itself.
True AI Native means adopting a brand‑new optimal strategy in the new cost structure. When the cost of action (writing code) approaches zero, the optimal path is no longer a straight line (using intuition to muddle through). It is to probe every possible path on the map and then decide. This strategic shift will reshape how individuals, teams, and industries work.
At the personal level, this means a fundamental shift in decision‑making intuition.
A truly AI Native developer, when facing uncertainty, no longer reaches first for personal experience to “guess.” They first think about how to build a tool on the spot to “see.” This instinct shift is critical. Before, when hitting a nasty bug, our reflex was to read source, set breakpoints, and simulate mentally. Now the reflex should be to write a script to analyze logs or ask AI to build a visualization plugin that draws the internal state.
This shift requires breaking the old habit of saving code. We must get used to burning compute to build a tool that will be discarded five minutes later, just to make a ten‑minute decision. In the new economic model, code is cheap and consumable, while decision accuracy is expensive. Trading cheap code for costly cognitive certainty is the baseline arbitrage of the new era.
At the mid‑level, this means reconstructing organizational trust mechanisms.
Traditional software org structures are designed to cope with black boxes. PMs and engineers, clients and vendors, fundamentally operate in a contract‑driven, adversarial mode. Because internals are invisible, we rely on detailed docs, strict tickets, and heavy processes to manage risk. Trust is scarce, built on relationships or resumes.
In an AI Native mode, disposable software gives us aggressive transparency. For every intermediate step or requirement change, we can spin up visual dashboards or interfaces quickly. Once the black box is opened and stakeholders see the live logic, adversarial guessing and defensive postures naturally turn into fact‑based, consensus‑driven collaboration.
Trust no longer needs careful human maintenance or rigid process enforcement; it rests on highly transparent information. This zero‑friction collaboration will fundamentally change how software teams are organized.
At the macro level, this means redefining deliverables.
As users become able to generate software themselves, what software companies deliver changes too. In the past we delivered finished furniture with fixed functions; users could only operate within preset bounds. That was because letting users modify code themselves was too costly.
In the future, we will increasingly deliver a Generative Kernel. We will encapsulate core business logic, data structures, and best practices as a foundation. On top of that foundation, AI can generate various interfaces and tools at any time to meet users’ specific, temporary, even one‑off needs.
This is compounding in its ultimate form. We no longer try to exhaust every user need and build features (the old cost‑structure approach). Instead, we lower the barrier for users to build their own specialized tools. Whether through knowledge captured in docs or capabilities exposed via APIs, everything serves one goal: letting everyone build their own high‑resolution view at minimal cost through AI.
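As a hedged sketch of what a Generative Kernel's surface might look like, consider a tiny encapsulated billing core. Every name and rate below is invented for illustration; the point is the shape: stable, typed business logic inside, with AI-generated disposable tools as thin consumers outside.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    order_id: str
    amount_cents: int
    region: str

class BillingKernel:
    """Encapsulated business rules: the stable contract that
    AI-generated, one-off tools program against."""

    TAX_RATES = {"EU": 0.20, "US": 0.08}  # illustrative values only

    def total_due(self, order: Order) -> int:
        rate = self.TAX_RATES.get(order.region, 0.0)
        return round(order.amount_cents * (1 + rate))

# An AI-generated, throwaway report is just a thin consumer of the kernel.
kernel = BillingKernel()
for order in [Order("a1", 5000, "EU"), Order("a2", 7000, "US")]:
    print(order.order_id, kernel.total_due(order))
```

The kernel is the durable asset; everything generated on top of it can be thrown away.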
Conclusion
To tear up the invisible price list, break the cocoon, and emerge, we need not Cursor, Claude, or GPT per se—we need a refreshed mental model.
In this new era, the greatest risk is not being unable to see because of technical limits, but choosing not to look in order to cling to old software engineering dogma. When the cost of seeing the truth has fallen to dust, embracing high‑resolution reality is the real leverage AI Native gives us.