Behind Manus’s Wild Popularity: How Agentic AI Builds Lasting Competitive Advantages

Manus has been making waves on the Chinese internet since its recent release, quickly winning over a wide range of users. After diving deeply into the product, I find it truly inspiring. It captures a crucial aspect of competition among Agentic AI products: the compound effect. In this article, I want to explore, from a longer-term perspective, the key factors that drive competition among Agentic AI products like Deep Research, Cursor, or Manus, and which elements can form a true moat (and which cannot).

Before discussing the three main aspects of Agentic AI product competition, I want to explain why I think Manus stands out. Contrary to the hype portrayed in some media, Manus is neither a mind-blowing creation that popped out of nowhere, nor the first to try this kind of product format. On the contrary, it has a very clear lineage.

It’s closely related to two types of existing products. One is Agentic research tools, such as Gemini, Perplexity, and OpenAI’s Deep Research. These allow you to enter a simple topic or request and then help you research across the web, producing a detailed and in-depth report. The other type is Agentic generation tools, such as Cursor, Devin, or Gamma. You give them a request, and they can help you write code, produce a document, or create presentation slides—essentially delivering the final form you need. In 2024, both kinds of products made huge strides, crossing the threshold of basic usability and going viral. Still, one big pain point remained: you either do research or you do code, but there’s no effective linkage between the two.

This pain point is subtle, because Agentic AI’s core appeal lies in its ability to complete a complex task end-to-end through self-iteration and autonomous decision-making. If, in practice, you still have to think, “I’ll use OpenAI’s Deep Research for this,” then copy the result into Cursor to generate a visualization, and finally throw both pieces into Gamma to create a PPT, you’re basically going against the very idea of Agentic AI—and you lose most of its value.

Manus is striking because it bridges that entire workflow. On one hand, it can conduct research in an Agentic manner by browsing the internet and gathering a comprehensive set of materials. On the other, based on these materials, it can carry out further analysis and visualization to produce the final output—be it a website, a text-based report, or a slideshow. This end-to-end application scenario was extraordinarily difficult to achieve with previous products. Moreover, Manus is finely polished and impressively complete as a product, making it both precise in concept and highly user-friendly—hence the instant, explosive popularity.

The Compound Effect of Tools

None of this, however, really touches on the deeper, more essential features or advantages of Agentic AI. One standout characteristic is that Agentic AI exhibits compound effects in multiple dimensions.

In Manus’s case, one important reason for its success is that it can leverage a greater number of tools than previous products. That might sound trivial, but it isn’t. In Agentic AI products, going from being able to use six tools to being able to use eight is far more significant than going from using two tools to four. This is because the tools used by AI can interact and reinforce each other. If an AI can only write code and search text, adding a new image search function might not be all that helpful. But if it can already write reports and generate slides, then adding an image search function suddenly makes its output far more vivid, substantially enhancing the product experience. That’s exactly the approach Manus takes. Even if we ignore all its other innovations and just focus on the way it merges Deep Research and Cursor, the simple increase in the number of tools immediately opens up scenarios that earlier products couldn’t handle.
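One way to see why going from six tools to eight matters more than going from two to four is a toy counting model: if every pair of tools that can feed one another unlocks a distinct two-step workflow, the number of workflows grows with the number of pairs, not the number of tools. The tool names below are illustrative, not Manus's actual tool set:

```python
from itertools import combinations

def unlocked_workflows(tools):
    """Toy model: every pair of tools that can feed one another
    unlocks a distinct two-step workflow (e.g. image search -> slides)."""
    return list(combinations(sorted(tools), 2))

# Going from 2 tools to 4 adds 5 new pairwise workflows...
small = len(unlocked_workflows(["search", "code", "slides", "images"])) \
      - len(unlocked_workflows(["search", "code"]))

# ...but going from 6 tools to 8 adds 13.
big = len(unlocked_workflows([f"tool_{i}" for i in range(8)])) \
    - len(unlocked_workflows([f"tool_{i}" for i in range(6)]))

print(small, big)  # 5 13
```

Real tool synergies are of course not uniform across pairs, but the superlinear shape is the point: each added tool multiplies against everything already there.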

This is the first kind of compound effect in Agentic AI. When we expand the number of tools it can call upon, the benefits multiply in a near-explosive fashion. It’s an extremely direct way to enhance the user experience. However, it doesn’t necessarily provide a durable moat. With tools like Cursor around, simply integrating a specific tool so an AI can autonomously call it is not difficult. Setting aside the finer points of good product design, cloning a Manus-type product is not especially challenging. And relying on the quantity of tools alone to build barriers to entry is not a long-term strategy.

The Compound Effect of Data

Agentic AI has a similar compound effect in other areas, too. One that’s often overlooked is data. But here I’m not referring to data in the sense of training an LLM—for example, you used 2T tokens, I used 3T, and now I’m better than you. In the Agentic AI era, data has a deeper meaning. Specifically, it isn’t just about the quantity of data but about acquiring, organizing, and externalizing data across its entire lifecycle.

Working side by side with humans, we see a certain phenomenon: having a seasoned veteran on the team is like having a secret weapon. In a factory, a senior mechanic might know exactly where to tap a failing machine to get it working again, while a new graduate has to run all sorts of tests for a rough understanding of the issue. A veteran doctor can make diagnoses by feeling a patient’s pulse, whereas a junior doctor may require multiple lab tests for a similar conclusion.

That’s a classic example of the advantage gained from accumulating data over time. For humans, two key things are happening: the accumulation of experience—decades of encountering similar machine failures or medical cases—and the organization of that knowledge. At that point, the knowledge is internalized in their memory, which is enough for most human experts. However, because humans still rely on written language when communicating with AI, there’s a further stage of externalizing that knowledge, turning it into clear documentation that AI can use.

For Agentic AI, maintaining this loop of knowledge accumulation, organization, and externalization is essential. Consider a software engineering scenario. If you give an AI that writes code a repository of one hundred thousand lines and some tasks to do, the odds of it accomplishing everything perfectly on the first try are low. But if you give it time to gradually read the code, understand it, and categorize it, then produce documentation that summarizes what it’s learned, coding becomes far simpler.

Picture documentation of this kind: it describes the basic architecture, design concepts, and which functions live in which files. In older projects, it can also include historical context. With this documentation in place, the AI can zero in on the right parts of the code without blindly creating brand-new files for everything. And over time, the AI also knows what’s been tried before and what the current solution looks like, reducing the chance of going in circles and undoing its own work.

So the “data” we’re talking about here isn’t just a huge heap of tokens. Rather, it’s a long-term, (semi-)automated process of collecting, digesting, and consolidating information. For a given customer, the more time an AI spends working alongside them, the more knowledge it accumulates. Even if another AI is inherently “smarter,” a long-term collaborator with extensive organizational knowledge might still feel much more useful in actual practice. This system of second-order data management is a genuine moat.
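The accumulate-organize-externalize loop can be sketched as a tiny knowledge store the agent writes to while working and consults before acting. Everything here is hypothetical—the file name, the topics, the file paths, and the "ADR-12" reference are stand-ins for whatever a real agent would record:

```python
import json
from pathlib import Path

NOTES = Path("project_notes.json")  # hypothetical externalized-knowledge store
NOTES.unlink(missing_ok=True)       # start fresh for the demo

def record(topic: str, insight: str) -> None:
    """Externalize a piece of knowledge the agent picked up while working."""
    notes = json.loads(NOTES.read_text()) if NOTES.exists() else {}
    notes.setdefault(topic, []).append(insight)
    NOTES.write_text(json.dumps(notes, indent=2))

def recall(topic: str) -> list:
    """Before acting, the agent consults what it has already learned."""
    if not NOTES.exists():
        return []
    return json.loads(NOTES.read_text()).get(topic, [])

record("auth", "Token refresh lives in src/auth/session.py, not middleware.")
record("auth", "Cookie-based sessions were tried and reverted; see ADR-12.")
print(recall("auth"))
```

The value compounds because `recall` gets better every time `record` is called: a second agent with a smarter model but an empty notes file starts every task from scratch.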

Likewise, this process of data accumulation enjoys a compound effect of its own. More historical data and well-organized documentation can lead the AI to more meaningful insights. You can think of it as transforming a traditional knowledge system into one that’s AI-friendly. “AI-friendly” isn’t a binary switch but a continuum that requires time to solidify. I’d even compare it to a co-evolution between humans and nature. The AI is mining, refining, and accumulating knowledge from the raw data. Meanwhile, users become more and more aware that making data easily accessible to the AI brings enormous benefits. As a result, they’re more willing to adapt their workflow to accommodate AI’s data management processes. That, in turn, yields additional benefits: for instance, with Zoom AI Companion, you can capture the tribal knowledge shared in meetings that would otherwise be lost. Over time, it becomes documented knowledge that the AI can use to assist you. This feedback loop of adaptation and mutual understanding forms a potent moat.

The Compound Effect of Intelligence

Agentic AI also has another fascinating aspect: intelligence itself can exhibit compounding effects. It’s less obvious at first glance compared to tools or data, but a tool’s level of intelligence influences the user’s Agentic experience in multiple ways.

On the most basic level, a smarter AI can better understand user needs and knows how to combine a few tools to achieve maximum benefit. A less-intelligent LLM might waste time calling tool after tool yet still not acquire enough information. A more thoughtful LLM, on the other hand, has a clear process and can solve problems quickly with only a few carefully chosen tools.

A related factor: compare Gemini’s research agent with OpenAI’s Deep Research and you see a completely different caliber of AI. Gemini feels like it’s mechanically following a predetermined script, starting with certain keywords, searching the web, then deciding which pages to scrape, and ultimately summarizing the content. Deep Research, by contrast, feels much more proactive and better at self-iteration. It starts by formulating a plan, uses different search keywords accordingly, and may dynamically adjust its strategy based on whatever it finds. The final results tend to be more enlightening, not just answering your questions but offering new perspectives or research directions. This capacity for autonomous thinking yields a nonlinear boost in value.
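The difference between a scripted pipeline and self-iteration comes down to who decides the next step. A minimal sketch of the plan-act-observe-replan loop, with toy tools and a hypothetical planner standing in for real search, scraping, and LLM planning:

```python
def agentic_loop(question, tools, plan_fn, max_steps=5):
    """Plan, act, observe, replan: the agent revises its strategy
    based on whatever each step returns, instead of following a script."""
    findings = []
    for _ in range(max_steps):
        plan = plan_fn(question, findings)  # replan from current evidence
        if not plan:  # the planner decides it has gathered enough
            break
        step = plan[0]
        findings.append(tools[step["tool"]](step["query"]))
    return findings

# Toy tools standing in for real web search and page scraping.
tools = {
    "search": lambda q: f"pages about {q}",
    "scrape": lambda q: f"details on {q}",
}

def plan_fn(question, findings):
    """Hypothetical planner: search broadly first, dig deeper once
    search surfaces pages, then declare the research done."""
    if not findings:
        return [{"tool": "search", "query": question}]
    if "pages" in findings[-1]:
        return [{"tool": "scrape", "query": question}]
    return []

print(agentic_loop("agent moats", tools, plan_fn))
# ['pages about agent moats', 'details on agent moats']
```

A fixed pipeline would hard-code search-then-scrape-then-summarize; here the plan is recomputed after every observation, which is what lets a smarter planner change course mid-task.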

Given that only a few companies can truly develop their own LLMs and that training them requires ample resources and capital, intelligence can also serve as a meaningful barrier to entry.

Key Drivers of Competition

It’s important to note that these three compound effects don’t merely add up; they amplify each other. When the number of tools expands, that creates more avenues for data processing and accumulation—spanning project management, searching, and documentation, each yielding data for the AI to learn from. Meanwhile, as the AI processes all this information, it refines its understanding and reasoning abilities. We can see this synergy clearly in Manus. At first, Deep Research could conduct in-depth investigations, and Cursor could write code or produce documents. But once you bring them together on a single Agentic platform, the AI can go from researched information to logic, presentation, and final publication in one seamless flow. Within this closed loop, tools, data, and intelligence stimulate each other, delivering a more fluid and sophisticated end-to-end experience.

That’s why the crucial question in Agentic AI competition is how to expand quickly enough to reach the right side of the exponential growth curves involving tools, data, or intelligence. In the early phase, investing effort in increasing the number of tools or the amount of data yields only moderate returns. But once you hit a tipping point, exponential growth really kicks in. On the right side of the curve, each new tool or added piece of data can transform the user experience in a major way. This is how Agentic AI products compete and build their moat. Of course, the exponential curve won’t keep going forever. It might resemble an S-curve, where benefits level off at some point as complexity and resource constraints pile up. There might also be bottlenecks at certain stages that require deeper architectural or organizational innovations to keep the system evolving in synergy.
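The S-curve intuition can be made concrete with a toy logistic model. The midpoint and scale parameters below are arbitrary illustrations, not measurements of any real product:

```python
import math

def compound_value(n, midpoint=10.0, scale=2.0):
    """Toy S-curve: value of an Agentic product as a function of
    accumulated tools/data, leveling off as complexity piles up."""
    return 1 / (1 + math.exp(-(n - midpoint) / scale))

# Marginal gain of one more tool or dataset at each stage:
early = compound_value(3)  - compound_value(2)   # left of the curve: small
steep = compound_value(10) - compound_value(9)   # tipping point: large
late  = compound_value(18) - compound_value(17)  # plateau: small again

print(early < steep > late)  # True
```

The competitive question in the text maps directly onto this shape: the race is to get past the midpoint, where each marginal tool or dataset is worth the most, before rivals do.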

From this perspective, building a moat around tools alone is not very reliable. Building a moat around LLM intelligence demands a lot of resources. And building a moat around data might be the simplest and most feasible approach. More than just stockpiling data, the key lies in how you structure the data and the methodology behind it. Data itself can be copied, but systematically externalizing tacit knowledge, structuring and storing it, and managing it efficiently are exceptionally hard to replicate. This is a bit like corporate culture: once an organization develops a powerful data management and knowledge externalization framework, competitors could copy the data and tools yet still find it extremely challenging to mirror that kind of implicit organizational capacity in the short term. Therefore, over the long haul, the most formidable edge in Agentic AI competition isn’t just about the scale of data or intelligence but rather about how the data and tool usage are systematically organized and synchronized.

Conclusion

From Manus, we can see the crucial factors in Agentic AI competition and how genuine moats might be formed. But more importantly, this competition isn’t simply about racing to add more tools or accumulate ever-larger volumes of data. It’s about how organizations adapt to the AI era on a deeper, structural level. The eventual winners might not be the companies with the strongest technologies per se, but those that truly grasp how AI and humans can co-evolve—and can develop long-lasting, stable collaboration mechanisms. That, in my view, is the real inspiration that Agentic AI brings us.
