Author: Mankiw
Recently, a seemingly insignificant user-experience issue has sparked intense tension between the AI industry and internet platforms. Some smartphones equipped with AI assistants were flagged by platform systems as "suspected of using cheating plug-ins" when they attempted to automatically complete operations such as sending WeChat red envelopes or placing e-commerce orders via voice commands, triggering risk warnings and even account restrictions.
On the surface, this is just a technical compatibility issue; but seen against the broader industry backdrop, it reveals a structural conflict over who has the right to operate the phone and who controls access to the user.
On one side are mobile phone manufacturers and large model teams who hope to deeply embed AI into operating systems and achieve "seamless interaction"; on the other side are internet platforms that have long relied on App entry points, user paths, and data loops to build their business ecosystems.
When "all-purpose assistants" start "doing things" for users, are they efficiency tools or rule breakers? This question is being pushed to the forefront of the law by reality.
"The Future Is Here" or "Risk Warning"—A "Code War" Behind the Mobile Phone Screen
Recently, users who have gotten their hands on the latest AI smartphones may have experienced a dramatic scene of "the future one second, a warning the next": just as they marvel at the convenience, they receive risk warnings from platforms such as WeChat.
This all began with the deep collaboration between ByteDance's "Doubao" AI assistant and several mobile phone manufacturers. Today's voice assistants are no longer just for checking the weather; they are super stewards that can "see the screen and simulate operations."
Imagine this scenario: simply say to your phone, "Send a red envelope in the Qingfei Football Team group" or "Buy me the best deal on the new Adidas football boots," and your phone will automatically open the app, compare prices, and make the payment—all without you having to lift a finger.
This technology, based on "simulated clicks" and "screen semantic understanding," has for the first time allowed AI to truly take over mobile phones. However, this "smooth" experience quickly ran into a brick wall with internet platforms.
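What does this look like under the hood? The minimal sketch below shows one common way "simulated operations" can be built on Android: an AccessibilityService locates an on-screen element by its visible text and triggers a click on the user's behalf. This is an illustrative assumption about the general technique; the article does not describe how Doubao or any specific vendor actually implements it.

```kotlin
// Illustrative sketch only: one common way an assistant can "see the screen and
// simulate operations" on Android is through an AccessibilityService. Whether
// any particular AI assistant works this way is an assumption for illustration.
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

class AssistantClickService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        // The service receives window and content-change events for the foreground app.
    }

    override fun onInterrupt() {}

    /** Find an on-screen element by its visible text and click it for the user. */
    fun clickByText(label: String): Boolean {
        val root: AccessibilityNodeInfo = rootInActiveWindow ?: return false
        val matches = root.findAccessibilityNodeInfosByText(label)
        val target = matches.firstOrNull { it.isClickable }
            ?: matches.firstOrNull()?.let { findClickableAncestor(it) }
            ?: return false
        // This is the "simulated click": the tap is dispatched by software,
        // not by the user's finger, which is precisely what platform
        // risk-control systems try to detect.
        return target.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }

    private fun findClickableAncestor(node: AccessibilityNodeInfo): AccessibilityNodeInfo? {
        var current: AccessibilityNodeInfo? = node.parent
        while (current != null && !current.isClickable) {
            current = current.parent
        }
        return current
    }
}
```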
Many users have found that using Doubao AI to operate WeChat triggers account restrictions and even warnings of "suspected use of cheating plug-ins." E-commerce platforms such as Taobao are also highly vigilant about this type of automated access. One blogger put it this way: AI is like a butler running errands for you, only to be stopped by mall security with a curt "We don't serve robots."
What appears to be a minor friction over technological compatibility is in fact another landmark contest in the history of the Chinese internet. It is no longer a simple battle for traffic, but a direct clash between an operating system (OS) and a super app over "digital sovereignty."
Why did major companies like Tencent and Alibaba react so strongly? This has to do with the core business model of the mobile internet – the "walled garden".
The commercial foundation of social media, e-commerce, and content platforms lies in their exclusive access to user time. Every click and every browsing step is crucial for advertising monetization and data accumulation. The emergence of "system-level AI assistants" like Doubao directly challenges this model.
This is a profound battle over "entry points" and "data." AI-powered smartphones have touched the core business lifeline of internet giants, primarily in three aspects:
1. The "Click the icon" crisis:
When users can simply speak and AI completes the task, the app itself may be bypassed. Users will no longer need to open the app to browse products or watch ads, which means that the platform's revenue from ad exposure and user attention economy will be significantly weakened.
2. Parasitic acquisition of data assets:
AI operates and reads information by "looking" at the screen, without requiring the platform to open any interface (see the sketch after this list). This is equivalent to bypassing the traditional rules of cooperation and directly harvesting the content, products, and data the platform has invested heavily in building. From the platform's perspective, this is free-riding, and the AI vendor may even use the data to train its own model.
3. The "gatekeeper" of traffic distribution has changed hands:
In the past, the power to distribute traffic rested with the super apps. Now, system-level AI is becoming the new "master switch." When users ask "What do you recommend?", the AI's answer will directly determine where commercial traffic flows, which is enough to reshape the competitive landscape.
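To make the second point concrete, the sketch below shows how on-screen content can be read out of the Android accessibility node tree without calling any platform interface. The data structures are hypothetical and stand in for whatever a real assistant would actually use.

```kotlin
// Illustrative sketch only: reading what is on screen through the accessibility
// node tree, without any platform API. This is the kind of "screen semantic
// understanding" input described above; the ScreenItem type is hypothetical.
import android.view.accessibility.AccessibilityNodeInfo

data class ScreenItem(val text: String, val className: String?)

/** Walk the node tree of the foreground app and collect all visible text. */
fun collectVisibleText(root: AccessibilityNodeInfo): List<ScreenItem> {
    val items = mutableListOf<ScreenItem>()
    fun walk(node: AccessibilityNodeInfo?) {
        if (node == null) return
        node.text?.let { items.add(ScreenItem(it.toString(), node.className?.toString())) }
        for (i in 0 until node.childCount) {
            walk(node.getChild(i))
        }
    }
    walk(root)
    // The resulting list can contain prices, product titles, and chat messages:
    // content the platform rendered for a human reader, now harvested by software.
    return items
}
```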
Therefore, the platform's warnings and safeguards are not simply a matter of technological exclusion, but a fundamental defense of its own business ecosystem. This reveals the deep-seated, unresolved conflict between technological innovation and platform rules.
As legal professionals, we can see four unavoidable core legal risks running through this battle between AI smartphones and the major platforms:
1. Unfair competition: The current controversy centers on whether AI's automated operations constitute unfair competition. Under the Anti-Unfair Competition Law, using technical means to hinder or disrupt the normal operation of network products or services provided by others may constitute infringement.
"Plug-in" Risks: In the "Tencent v. 360 case" and several recent "automatic red envelope grabbing plug-in cases," judicial practice has established a principle: unauthorized modification or interference with the operating logic of other software, or increasing server load through automation, may constitute unfair competition. AI's "simulated clicks," if they skip ads or bypass interactive verification, affecting platform services or business logic, may also face infringement claims.
Traffic and compatibility issues: If AI steers users away from their original platform toward the services it recommends, this may amount to "traffic hijacking." Conversely, if a platform indiscriminately blocks all AI operations, it may need to show that the blocking is a reasonable and necessary self-protection measure.
AI needs to "see" the screen content in order to execute instructions, which directly touches upon the strict regulations of the Personal Information Protection Law.
3. Antitrust: Future litigation may revolve around "essential facilities" and "refusal to deal."
AI phone manufacturers may argue that WeChat and Taobao already possess the attributes of public infrastructure, and that refusing AI access without a legitimate reason constitutes an abuse of market dominance and hinders technological innovation.
The platforms may counter that data sharing must be premised on security and intellectual property protection, and that allowing AI to access data without authorization could circumvent technical protection measures and harm the rights and interests of both users and the platform.
4. Civil liability: As AI transforms from a tool into an "agent," a series of civil liability questions follows. If the AI places the wrong order or sends a red envelope to the wrong group, who bears the loss: the user who gave the instruction, the AI vendor, or the platform?
This contest is not only a technological battle, but also a process of redefining the legal boundaries of data ownership, platform responsibility, and user authorization in practice. AI vendors and platforms alike need to find a clear balance between innovation and compliance.
On the surface, the conflict between Doubao and the big platforms is a product dispute; in reality, it exposes the friction between an old order and a new one: the app-centric model of the past is being disrupted by an interconnected experience driven by AI.
As legal professionals, we can clearly see that the existing legal system is increasingly strained by the arrival of general-purpose AI agents. Simply "blocking" or "bypassing" is not a sustainable answer. The way forward may lie not in continuing to rely on circumvention techniques such as "simulated clicks," but in promoting the establishment of a standardized AI interaction interface protocol.
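No such standard exists today, but as a thought experiment, the sketch below imagines what a minimal "AI interaction interface" might look like: the platform declares which actions it is willing to expose, and every call must carry an explicit, user-granted authorization, so the AI no longer needs to scrape the screen or simulate taps. Every name here is invented for illustration.

```kotlin
// Purely hypothetical sketch of a standardized AI interaction interface.
// All types and names are invented for illustration; no such standard exists today.

/** A capability the platform chooses to expose to system-level AI agents. */
data class DeclaredAction(
    val id: String,                 // e.g. "send_red_envelope"
    val requiredScopes: Set<String> // e.g. setOf("payments", "contacts")
)

/** Proof that the user, not just the AI vendor, approved this class of action. */
data class UserGrant(val userId: String, val scopes: Set<String>, val expiresAtEpochMs: Long)

sealed class ActionResult {
    data class Ok(val detail: String) : ActionResult()
    data class Denied(val reason: String) : ActionResult()
}

interface AgentGateway {
    /** Actions the platform is willing to let an AI agent perform. */
    fun listActions(): List<DeclaredAction>

    /** Execute one action under an explicit user grant; the platform keeps the final say. */
    fun execute(actionId: String, params: Map<String, String>, grant: UserGrant): ActionResult
}

/** Minimal check a platform-side implementation might run before executing an action. */
fun authorize(action: DeclaredAction, grant: UserGrant, nowEpochMs: Long): ActionResult =
    when {
        nowEpochMs >= grant.expiresAtEpochMs -> ActionResult.Denied("grant expired")
        !grant.scopes.containsAll(action.requiredScopes) -> ActionResult.Denied("missing scopes")
        else -> ActionResult.Ok("authorized")
    }
```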
In a climate where the rules are still unclear, we salute those who keep exploring the frontier of AI while holding to the principle of technology for good. At the same time, we must remain keenly aware that respecting boundaries is often more sustainable than disruption itself.


