<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Home on Acoustic Info Plus: Your Source for Audio Technology News</title>
        <link>https://acousticinfoplus.com/</link>
        <description>Recent content in Home on Acoustic Info Plus: Your Source for Audio Technology News</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en-us</language>
        <lastBuildDate>Wed, 29 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://acousticinfoplus.com/index.xml" rel="self" type="application/rss+xml" /><item>
            <title>China&#39;s First Undergraduate Major in AI for Business Approved</title>
            <link>https://acousticinfoplus.com/posts/note-c1e5d9dcdd/</link>
            <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-c1e5d9dcdd/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;On April 28, 2026, the Ministry of Education officially approved the University of Science and Technology of China (USTC) to establish an undergraduate major in &lt;strong&gt;AI for Business&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;USTC becomes the first and currently the only university in the country to offer this program, with plans to enroll its first undergraduate students in the Fall semester of 2026.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;233px&#34; data-flex-grow=&#34;97&#34; height=&#34;886&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-c1e5d9dcdd/img-9d799540f2.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-c1e5d9dcdd/img-9d799540f2_hu_852628ed8cd9b90.jpeg 800w, https://acousticinfoplus.com/posts/note-c1e5d9dcdd/img-9d799540f2.jpeg 862w&#34; width=&#34;862&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;program-overview&#34;&gt;Program Overview&#xA;&lt;/h2&gt;&lt;p&gt;As the digital economy continues to develop, the integration of AI with business applications is becoming increasingly widespread. There is a growing demand for interdisciplinary professionals who understand both technical principles and business management, a skill set that is crucial for supporting the country&amp;rsquo;s high-quality development and the intelligent upgrading of industries. 
In this context, USTC&amp;rsquo;s School of Science and Technology Business and School of Management spearheaded the initiative to promote the AI for Business major, which underwent nearly two years of validation before receiving approval.&lt;/p&gt;&#xA;&lt;p&gt;The &lt;strong&gt;AI for Business&lt;/strong&gt; program is not positioned as a purely technical degree; it focuses on integrating AI into business scenarios. The knowledge system combines foundational theories from AI and economic management, covering cutting-edge topics such as AI-driven business model innovation, AI hardware architecture and industrial ecosystem, AI principles and applications, business intelligence agents, AI-driven innovation investment, and AI governance, thereby constructing a cross-disciplinary knowledge structure that supports intelligent business decision-making.&lt;/p&gt;&#xA;&lt;h2 id=&#34;educational-objectives&#34;&gt;Educational Objectives&#xA;&lt;/h2&gt;&lt;p&gt;Students will systematically master core theories in business management, artificial intelligence, mathematical optimization, and computer science. They will develop eight core competencies: integration of business AI, intelligent data analysis, human-machine collaborative decision-making, business system design, mathematical optimization and modeling, foundational business management, AI ethics and responsibility, as well as innovation practice and communication. This interdisciplinary knowledge system aims to enhance students&amp;rsquo; core capabilities to adapt to future societal changes.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&#xA;&lt;/h2&gt;&lt;p&gt;This initiative is part of the Ministry of Education&amp;rsquo;s ongoing efforts to optimize professional settings, guiding and supporting universities to actively establish new programs that meet national strategic needs and modern industrial development demands. 
The ministry has also introduced new programs in various fields, including energy science and engineering, deep earth science and engineering, and more, to drive innovation and development in emerging and future industries.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Ministry of Industry and Information Technology Launches AI and Software Initiative</title>
            <link>https://acousticinfoplus.com/posts/note-dc3bf27db9/</link>
            <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-dc3bf27db9/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;On April 28, the Chinese stock market experienced fluctuations, with the three major indices opening lower. The AI ETF on the ChiNext board, which has over 55% concentration in optical communication, fell by 2.45%. This fund tracks the highest concentration index in the market, with nearly 40% total concentration. As of the previous trading day, the fund had accumulated a 20.2% increase since the 25th, showing strong momentum. Meanwhile, the lowest fee rate Sci-Tech Chip ETF dropped by 1.63%, and the Hong Kong Stock Connect Hang Seng Technology ETF fell by 2.34%. In terms of individual stocks, Cambricon Technologies rose by 1.31%, while Haiguang Information fell by 2.45%, and SMIC experienced a decline of 2.21%.&lt;/p&gt;&#xA;&lt;h2 id=&#34;ai-and-software-initiative&#34;&gt;AI and Software Initiative&#xA;&lt;/h2&gt;&lt;p&gt;The Ministry of Industry and Information Technology (MIIT) announced the launch of a special initiative focusing on &amp;ldquo;Artificial Intelligence + Software&amp;rdquo;. This initiative aims to accelerate the research and application of intelligent programming, promote new business models such as model-as-a-service and agent-as-a-service, strengthen the open-source ecosystem, and advance the intelligent upgrade of basic and industrial software. 
Additionally, it seeks to establish a service system for the digital transformation of the manufacturing industry, lay out computing power and edge computing resources in an orderly manner, improve the intelligent computing cloud system, and implement actions to build high-quality industrial data sets, thereby supporting the high-quality development of the service industry.&lt;/p&gt;&#xA;&lt;h2 id=&#34;market-insights&#34;&gt;Market Insights&#xA;&lt;/h2&gt;&lt;p&gt;Guojin Securities noted that global demand for computing power continues to grow, with domestic companies accelerating their expansion into chip design, manufacturing, and application. The expansion of AI large models and intelligent application scenarios is driving industrial upgrades and technological innovation, forming a collaborative development pattern across the entire industry chain from hardware infrastructure to software applications.&lt;/p&gt;&#xA;&lt;p&gt;Huaxin Securities mentioned that OpenAI has released the flagship GPT-5.5 model, showcasing the native capabilities of AI agents. This model has achieved significant upgrades in code development, office operations, and cutting-edge scientific research, capable of undertaking complex engineering and high-end scientific tasks, promoting stable development in the computing power leasing market, and ensuring continuous iteration and upgrade of AI technologies.&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;Note: The above information is for reference only and does not constitute investment advice. The market carries risks; investors should exercise caution.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;</description>
        </item><item>
            <title>Reconstructing Education for the AI Era: Shanghai Jiao Tong University&#39;s Approach</title>
            <link>https://acousticinfoplus.com/posts/note-13bb08f4f6/</link>
            <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-13bb08f4f6/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;Recently, General Secretary Xi Jinping sent a letter to all faculty and students of four transportation universities, encouraging them to uphold the educational philosophy of &amp;ldquo;practical learning and practical work,&amp;rdquo; inherit and promote the spirit of the Westward Migration, focus on major national strategic needs, strengthen independent technological innovation and talent cultivation, and achieve more breakthroughs in the deep integration of industry, academia, and research. The university has deeply studied and implemented the important spirit of Xi Jinping&amp;rsquo;s letter, firmly grasping the inherent consistency and mutual support of educational development, technological innovation, and talent cultivation, continuously transforming educational advantages, talent advantages, and innovation advantages into development advantages, competitive advantages, and strategic advantages.&lt;/p&gt;&#xA;&lt;p&gt;Currently, artificial intelligence is reconstructing human social production and lifestyle at an unprecedented speed, and seizing the high ground of global AI development has become an important support for our country to build international competitive advantages and prevail in the competition among great powers. Ultimately, technological competition is a competition for talent and education. In the face of this unprecedented transformation, we recognize that AI education is facing &amp;ldquo;three structural challenges,&amp;rdquo; but AI itself provides us with a new way to break barriers and reconstruct paradigms. 
The &amp;ldquo;AI + Education Action Plan,&amp;rdquo; jointly issued by the Ministry of Education and four other departments, clearly requires leveraging AI as an engine for educational transformation and proposes specific requirements such as &amp;ldquo;building AI learning communities and gathering open-source courses&amp;rdquo; and &amp;ldquo;conducting achievement certification to encourage faculty and students to participate in open-source ecosystem construction,&amp;rdquo; providing direction and deployment for AI education reform. Shanghai Jiao Tong University focuses on cultivating high-quality talent capable of thriving in the intelligent era, seizing opportunities, and consistently using AI as a key lever for enhancing educational capabilities, directly facing challenges, and promoting deep faculty and student participation in AI open-source ecosystem construction, paving a new path that integrates talent cultivation with ecosystem building.&lt;/p&gt;&#xA;&lt;h2 id=&#34;facing-the-three-constraints-of-the-ai-era&#34;&gt;Facing the &amp;ldquo;Three Constraints&amp;rdquo; of the AI Era&#xA;&lt;/h2&gt;&lt;p&gt;In the context of rapid iteration of AI technology and continuous upgrading of industrial demand, universities, as the main battlefield for talent cultivation, face multiple challenges in teaching, practice, and resources.&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Knowledge Barriers&lt;/strong&gt;: The development of disciplines lags behind technological leaps. Traditional academic systems act as invisible walls, isolating knowledge transfer into fortresses, limiting students to a single disciplinary perspective, making it difficult to form cross-domain innovative thinking. 
Additionally, the speed of classroom knowledge updates lags far behind the evolution of AI technology, resulting in teaching content often failing to keep pace with the times, leaving students &amp;ldquo;holding old maps, struggling to find new continents.&amp;rdquo; This rigid barrier severely restricts the emergence of interdisciplinary innovative talent.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Supply-Demand Misalignment&lt;/strong&gt;: Skills training is disconnected from industrial practice. As AI applications profoundly reshape the labor market, traditional skills reliant on mechanical repetition and rule-based operations face severe challenges of being replaced by &amp;ldquo;digital employees.&amp;rdquo; Currently, there is a significant gap between the talent supply from universities and the actual needs of industries: on one hand, various sectors urgently require the implementation of AI scenarios; on the other hand, graduates generally lack real engineering practice experience, making it difficult to quickly translate theoretical knowledge into productivity for solving complex scenarios. This &amp;ldquo;disconnection between learning and application&amp;rdquo; not only weakens students&amp;rsquo; employment competitiveness but also makes it hard for them to adapt to the rapid iterations of the intelligent era.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Resource Scarcity&lt;/strong&gt;: Innovative exploration is constrained by computing power bottlenecks. Computing power is the &amp;ldquo;source of motivation&amp;rdquo; in the intelligent era, and cutting-edge courses heavily rely on AI innovation resources such as computing power, data, models, and tools. However, most universities struggle to bear the enormous investment in intelligent computing clusters and lack the capacity to maintain professional operational teams. 
Constrained by shortcomings in AI training environments, high-level teaching and research exploration empowered by AI often becomes like water without a source. The computing power gap has become the biggest bottleneck restricting faculty and students from deeply participating in AI ecosystem construction and producing original results. Without a fertile &amp;ldquo;research soil,&amp;rdquo; it is challenging to cultivate innovative fruits that will lead the future.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h2 id=&#34;reconstructing-the-educational-ecosystem-with-open-source-spirit&#34;&gt;Reconstructing the Educational Ecosystem with Open Source Spirit&#xA;&lt;/h2&gt;&lt;p&gt;In the face of these challenges, minor adjustments to traditional educational models are insufficient to achieve breakthroughs. The open-source orientation clearly defined in the Action Plan provides us with a way to break through—leading with an open-source spirit, breaking barriers, integrating resources, and collaborating on innovation to reconstruct an educational paradigm suitable for the AI era, achieving synchronous resonance between talent cultivation and industrial development.&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;From &amp;ldquo;Knowledge Transmission&amp;rdquo; to &amp;ldquo;Open Source Collaborative Creation&amp;rdquo;&lt;/strong&gt;: AI breaks the temporal and spatial barriers to knowledge acquisition, and the open-source spirit makes it possible to innovate while &amp;ldquo;standing on the shoulders of giants.&amp;rdquo; AI is not only an object of learning but also the core engine empowering personalized, project-based learning. In the teaching paradigm of &amp;ldquo;AI + Human Intelligence (AI + HI),&amp;rdquo; by introducing multi-agent interaction mechanisms and integrating multi-domain expert models, we reshape the learning ecosystem of human-machine collaboration. 
Through &amp;ldquo;open-source courses,&amp;rdquo; we establish a community-based sharing and feedback mechanism, allowing cutting-edge research results, frontline industry practices, and immediate social needs to rapidly translate into teaching content, making learning no longer limited to the classroom or textbooks, and achieving knowledge iteration with &amp;ldquo;zero time difference&amp;rdquo; as much as possible.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;From &amp;ldquo;Skill Executors&amp;rdquo; to &amp;ldquo;Human-Machine Collaborative Innovators&amp;rdquo;&lt;/strong&gt;: The core competitiveness of future talent lies not in how much established knowledge they possess but in their ability to harness AI to solve complex engineering problems. We must actively guide students to abandon the anxiety of &amp;ldquo;being replaced&amp;rdquo; and focus on cultivating &amp;ldquo;enhanced innovators&amp;rdquo; with human-machine collaboration capabilities. We should promote the establishment of a long-term mechanism for deep integration between schools and enterprises, bringing cutting-edge industry practices into the classroom, transforming real pain points from enterprise R&amp;amp;D and applications into practical AI projects for universities, and implementing &amp;ldquo;real problems with real solutions.&amp;rdquo; In frontline practical tasks, we hone students&amp;rsquo; engineering capabilities, allowing them to &amp;ldquo;learn to swim in the storm of practical challenges,&amp;rdquo; and quantify and certify their contributions, forming a lifelong &amp;ldquo;digital skills passport,&amp;rdquo; truly realizing the leap from &amp;ldquo;credential-based&amp;rdquo; to &amp;ldquo;capability-based&amp;rdquo;.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;From &amp;ldquo;Resource Islands&amp;rdquo; to &amp;ldquo;Inclusive Shared Computing Power Base&amp;rdquo;&lt;/strong&gt;: Leveraging national strategic forces and 
deep industry-education integration, we must seize the historic opportunity to build an efficient collaborative computing power network. We should promote domestic computing power to &amp;ldquo;enter universities, classrooms, and research,&amp;rdquo; achieving true &amp;ldquo;computing power equality&amp;rdquo; and &amp;ldquo;educational equity.&amp;rdquo; Shanghai Jiao Tong University is constructing the &amp;ldquo;Zhiyuan No. 1&amp;rdquo; thousand-card intelligent computing cluster, bringing large-scale domestic computing power onto campus, which not only addresses the shortage of training and inference resources but also lowers usage thresholds and stimulates faculty and student engagement. At the same time, by learning, developing, and innovating in a controllable software and hardware environment, we fundamentally strengthen the security foundation of China’s AI ecosystem, promoting vigorous original innovation.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h2 id=&#34;building-the-qiwuh-learning-community-as-a-new-engine-for-talent-development&#34;&gt;Building the &amp;ldquo;Qiwuh Learning Community&amp;rdquo; as a New Engine for Talent Development&#xA;&lt;/h2&gt;&lt;p&gt;In the face of change, Shanghai Jiao Tong University, based on the concept of reconstructing an open-source educational ecosystem, combines its own educational advantages, transforming theoretical exploration into practical action. 
By building a specialized talent cultivation platform, we integrate open-source spirit, computing power resources, and industrial needs throughout the talent cultivation process, providing replicable and promotable practical samples for AI education reform.&lt;/p&gt;&#xA;&lt;p&gt;We are actively planning to collaborate with high-level universities, research institutions, and leading technology companies to create a national AI practical talent cultivation platform—&amp;ldquo;Qiwuh Learning Community.&amp;rdquo; &amp;ldquo;Qiwuh&amp;rdquo; aims to enlighten wisdom and open the door to innovation; it also seeks to understand laws and internalize engineering qualities. Our core goal is to cultivate high-quality talent for the intelligent era, gathering high-quality open-source courses and introducing advanced domestic computing power to construct a closed-loop of &amp;ldquo;theory—practice—innovation.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;We will gather thousands of excellent open-source micro-courses, breaking down the &amp;ldquo;walls&amp;rdquo; between universities and enterprises, creating an immersive learning environment that integrates theory and practice; introduce domestically produced, controllable large-scale advanced computing power, transforming it into an accessible innovative resource space for frontline faculty and students, solidifying the digital foundation for engineering practice; deepen the &amp;ldquo;challenge-based&amp;rdquo; mechanism with leading enterprises, implementing a new model of industry-education integration where &amp;ldquo;enterprises propose problems, universities lead topics, tackle the same problems together, and jointly evaluate results&amp;rdquo;; construct a diversified talent evaluation system, establish classified and graded achievement certification standards, and bridge the &amp;ldquo;last mile&amp;rdquo; of mutual recognition of achievements between universities; connect quality entrepreneurial resources, empowering students 
for high-quality employment and cross-border innovation. Let the &amp;ldquo;Qiwuh Learning Community&amp;rdquo; truly become a &amp;ldquo;training ground&amp;rdquo; for domestic computing power ecosystems and an &amp;ldquo;accelerator&amp;rdquo; for the growth of outstanding innovative talents.&lt;/p&gt;&#xA;&lt;p&gt;AI education is a systematic project that must fully leverage the advantages of the national system while also stimulating market vitality. &amp;ldquo;One flower alone does not make spring; a hundred flowers in bloom fill the garden with spring.&amp;rdquo; The essence of open source is connection and symbiosis. We should gather innovative synergy with the open-source spirit, solidify the digital foundation with independent computing power, and jointly compose a new chapter in the high-quality development of AI education in China, allowing every innovative dream to take root and sprout in the fertile soil of open source, injecting continuous innovative momentum into the construction of an educational powerhouse.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Understanding Artificial Intelligence: Debunking Myths and Realities</title>
            <link>https://acousticinfoplus.com/posts/note-6949599064/</link>
            <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-6949599064/</guid>
            <description>&lt;h2 id=&#34;understanding-artificial-intelligence-debunking-myths-and-realities&#34;&gt;Understanding Artificial Intelligence: Debunking Myths and Realities&#xA;&lt;/h2&gt;&lt;p&gt;Artificial intelligence (AI) has emerged from the laboratory and is now embedded in every aspect of our lives: from facial recognition on smartphones and precise recommendations in short videos to smart navigation, AI-generated art, and interconnected smart home appliances. However, most people still perceive AI through a superficial lens, associating it with robots and high-tech wizardry, often either over-mythologizing or demonizing it.&lt;/p&gt;&#xA;&lt;p&gt;In simple terms, artificial intelligence is not a &amp;ldquo;superhuman&amp;rdquo; with autonomous consciousness, but rather a &lt;strong&gt;scientific discipline that enables machines to simulate human intelligent behavior&lt;/strong&gt;. Human abilities such as thinking, learning, judgment, recognition, and creativity can be simulated by computers through algorithms, big data, and computational power. This is the core essence of AI. It does not think independently, lacks self-emotion, and does not generate ideas on its own; all its &amp;ldquo;intelligence&amp;rdquo; is derived from learning from vast amounts of data and algorithmic computations.&lt;/p&gt;&#xA;&lt;p&gt;AI can be divided into two main categories that everyone should understand. The first is &lt;strong&gt;weak AI&lt;/strong&gt;, which encompasses all the AI technologies we currently use. This type of AI focuses on specific domains and can only perform designated tasks, such as chess-playing AI, translation AI, drawing AI, and voice assistants. They excel in specialized tasks but cannot think universally. 
The second is &lt;strong&gt;strong AI&lt;/strong&gt;, which exists only in theory and is portrayed in science fiction as having general thinking and autonomous consciousness akin to humans.&lt;/p&gt;&#xA;&lt;p&gt;Three core elements are essential for AI to function:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Big Data&lt;/strong&gt;: This is the &amp;ldquo;knowledge base&amp;rdquo; of AI, consisting of vast amounts of text, images, videos, and behavioral data that serve as learning materials for AI.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Algorithms&lt;/strong&gt;: These are the &amp;ldquo;brain logic&amp;rdquo; of AI, defining how machines analyze, learn, and make judgments.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Computational Power&lt;/strong&gt;: This is the &amp;ldquo;driving force&amp;rdquo; behind AI, with powerful chips and servers ensuring that complex calculations are completed swiftly.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;At its core, artificial intelligence is a tool for liberating human productivity. It replaces humans in repetitive, tedious, high-precision, and high-risk tasks, freeing us from mechanical labor. By understanding the essence of AI, we can rationally view technological development: it is not a rival that will overthrow humanity but an important vehicle for extending human wisdom.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Cat Wu on Rapid AI Product Development at Anthropic</title>
            <link>https://acousticinfoplus.com/posts/note-f3d492656f/</link>
            <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-f3d492656f/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;In a landscape where most companies release new products quarterly, Anthropic has compressed its release cycle to daily iterations. Behind this rapid pace is Cat Wu, a Chinese-American woman born in the 1990s. From engineer to the product lead for Anthropic&amp;rsquo;s flagship products, Claude Code and Cowork, she is not only driving the evolution of this generation of AI products but also interviewing hundreds of aspiring product managers in the AI field, witnessing firsthand who succeeds and who falls behind.&lt;/p&gt;&#xA;&lt;p&gt;Cat Wu, whose full name is Catherine Wu, has a rich background in engineering and venture capital. She graduated with a degree in computer science from Princeton University and has held positions at Scale AI, Dagster, and Index Ventures before joining Anthropic in August 2024. In July 2025, she and executive Boris Cherny were recruited by the AI programming startup Cursor but returned to Anthropic shortly after, taking over the Claude Code product line.&lt;/p&gt;&#xA;&lt;h2 id=&#34;accelerated-product-development&#34;&gt;Accelerated Product Development&#xA;&lt;/h2&gt;&lt;p&gt;&amp;ldquo;We have shortened the development cycle for many product features from six months to one month, and sometimes even just one day,&amp;rdquo; Cat Wu stated in a recent in-depth interview. This rapid pace has been a consistent state at Anthropic for several quarters. &amp;ldquo;Internal models have improved efficiency, but more importantly, it’s about the processes and team expectations. We strive to minimize processes, removing all obstacles to release, so everyone feels they can turn an idea into a product in a week or even a day.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;When prioritizing product features, the team focuses on one mission: to bring safe AGI to all of humanity. &amp;ldquo;If Claude Code fails but Anthropic as a whole succeeds, I would be very happy. 
The entire team is willing to make decisions based on this mindset,&amp;rdquo; she noted. Interestingly, Cat pointed out that during new model releases, the most significant changes often come from &amp;ldquo;deleting features&amp;rdquo; that were originally added to compensate for the model&amp;rsquo;s limitations.&lt;/p&gt;&#xA;&lt;p&gt;Regarding the previous Claude Code source code leak, she revealed, &amp;ldquo;This was a human error,&amp;rdquo; and the involved employee still works at the company. &amp;ldquo;This is a process issue; the most important thing is to learn from it and increase protective measures, which is what we are currently doing.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h2 id=&#34;insights-on-product-management&#34;&gt;Insights on Product Management&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; I want to start with your role, especially your collaboration with Boris. Everyone knows Boris, who created Claude Code and leads the team, submitting countless PRs daily. I feel like you don’t get enough recognition for your contributions to Claude Code, Cowork, and everything you’re doing. Can you explain your role in the team and how you collaborate with Boris?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I feel very fortunate to work with Boris; he is a fantastic thought partner. He is our technical lead and the visionary behind the product, skilled at defining what the product should look like in three to six months, even envisioning the &amp;ldquo;full AGI version&amp;rdquo; of the product.&lt;/p&gt;&#xA;&lt;p&gt;My focus is more on the path from now to that three to six-month vision. I spend a lot of time on cross-team collaboration, ensuring that marketing, sales, finance, and computing teams all align with the plan, moving in the same direction, and ensuring that features are ready and not stuck at the release stage. 
In some ways, we collaborate very well because we have a sense of &amp;ldquo;brain circuit fusion.&amp;rdquo; But the boundaries are quite blurred: about 80% of our work overlaps, and of the rest, there are areas I particularly care about and lead, while others are ones he cares about more and leads.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; You mentioned that you have been interviewing a lot of PMs. If I received a dollar for every referral I made for someone to become a PM at Anthropic, I’d probably have 300 billion ARR by now. It’s one of the most sought-after companies, so I can imagine how many people you’ve interviewed. You said many people misunderstood what it means to be a successful AI product manager. Can you share the issues you’ve observed and what skills are needed to succeed?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Before AI, the pace of technological change was relatively slow. You could plan on a six to twelve-month cycle, and because feature releases were also slow, there was a strong emphasis on collaboration with other teams to ensure their features could unlock your path, as writing code itself is expensive. But now, AI has significantly increased engineering efficiency. With rapid improvements in model capabilities, the development cycle for many product features has shortened from six months to one month, then to a week, and sometimes even a day. In this context, we need to push products out faster.&lt;/p&gt;&#xA;&lt;p&gt;This means that as a PM, you should no longer focus on aligning roadmaps across multiple quarters but rather think about how to get things done in the quickest way possible. How can you deliver an idea to users within a week? 
The best PMs in AI-native products are those who can drastically shorten the time from idea to user while clearly defining which core tasks must be ready to go out of the box.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; I like what you said; many people still don’t realize how fast the pace is and how much of the work is about &amp;ldquo;helping teams accelerate.&amp;rdquo; How do you help the team move so quickly?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; The first thing is to set clear goals. Because large models are inherently general, they bring a lot of ambiguity: who are we making products for? What problems are we solving? What are the most important use cases? A good PM can clarify these, such as: our core users are professional developers; this feature addresses the issue of too many permission pop-ups causing fatigue; our goal is to allow developers in enterprises to implement &amp;ldquo;zero permission pop-ups&amp;rdquo; safely. This makes the goals clear and automatically excludes many unnecessary solutions.&lt;/p&gt;&#xA;&lt;p&gt;Second, establish a reusable release process. For example, in Claude Code, we release almost all features in the form of &amp;ldquo;research previews.&amp;rdquo; We clearly tell users this is an early product, just an idea, still collecting feedback, and may not be supported long-term. The benefit of this approach is that it lowers the commitment cost, allowing us to quickly launch something in one or two weeks. Third, we create a collaboration framework for the team, letting everyone know when to pull in cross-functional teams and what their expectations are.&lt;/p&gt;&#xA;&lt;p&gt;We have very tight processes between engineering, marketing, and documentation: once engineers feel a feature is ready and has completed internal use, it goes to a release channel, and the documentation, PMM, and developer relations teams immediately follow up, allowing an announcement to be made the next day. 
This process reduces release friction, and one of the PM&amp;rsquo;s responsibilities is to build this system.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; What role does the PRD play in this system? You mentioned that goals are important; do you still write PRDs or just simple bullet points? How has this evolved in the AI era?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We mainly do two things. First, we have very strict data metrics, reviewing them weekly with the entire team to ensure everyone deeply understands the business aspects: what the core goals are, how the trends are moving, and what the driving factors are. Second, we have a set of team principles, including who the core users are and why. This is to ensure everyone understands how the business operates, what is important, and what can be sacrificed, allowing for autonomous decision-making instead of being bottlenecked by PMs. For particularly ambiguous features, we still write a one-page document outlining the goals, ideal use cases, and current failure modes that need addressing. Of course, some projects, especially those involving heavy infrastructure, do take months, and in those cases, we still write complete PRDs.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; I want to delve deeper into how you can move so quickly. I’ve never seen a release pace like Anthropic’s, with significant features coming online almost daily. Recently, you developed a model called Mythos, which is still in preview. It’s so powerful that people are a bit concerned about its capabilities. Are you using it internally, and is that one of the reasons for your speed?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We have been fast for several quarters, so it’s not solely due to Mythos. It is indeed very powerful, and we do use the model internally, which has improved some efficiency, but that’s not the main reason. 
The more critical factors are the processes and team expectations. We strive to minimize processes, removing all obstacles to release, so everyone feels they can turn an idea into a product in a week or even a day.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; That’s amazing; having the strongest models while developing products is a hard advantage to replicate.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We are indeed fortunate to have access to these cutting-edge models.&lt;/p&gt;&#xA;&lt;h2 id=&#34;overlapping-roles-of-engineers-and-pms&#34;&gt;Overlapping Roles of Engineers and PMs&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Recently, there was an incident where Claude Code’s source code was leaked about a week ago. Can you explain what happened?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We conducted an investigation immediately after noticing. This was a human error. At the time, someone was using Claude to write a PR, which was an update about the release process, and it went through two layers of human review. Ultimately, it was a human mistake, and we have strengthened our processes to ensure it doesn’t happen again.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Is that person still with the company?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, they are. This is a process issue; the most important thing is to learn from it and increase protective measures, which is what we are currently doing.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Another issue is OpenClaw. Recently, you restricted the use of Claude subscriptions to run OpenClaw, and the community reacted strongly, with many feeling this harms the open-source community. 
What’s your take?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We have indeed seen very high demand for Claude, so we have been working hard to expand our infrastructure while optimizing token usage efficiency to allow for longer usage. However, this product was not originally designed for third-party products; their usage patterns differ significantly from our first-party products. We have also spent a lot of time considering how to make a smooth transition, such as providing additional credits to subscription users. But ultimately, we made a tough decision: to prioritize supporting our first-party products and APIs, which is the context for this decision.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; This makes sense to me. Your $200 monthly subscription is essentially unlimited, but the computing costs are high, and the company still needs to make a profit; it’s not feasible to keep subsidizing. Returning to the PM team, what is your team structure like? How many PMs do you have?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We currently have about 30 to 40 PMs divided into several teams. There is a research PM team responsible for collecting user feedback on models and passing it to the research team while also participating in model releases; a cloud developer platform team that maintains the Claude Code API and releases capabilities like hosted Agents; a Claude Code team responsible for the core products of Claude Code and Cowork; an enterprise team that makes these products easier for enterprises to adopt, focusing on cost control, permission management, security, etc.; and a growth team responsible for the growth of the entire product line, with whom we closely collaborate on Claude Code and Cowork.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Speaking of growth, Amole recently appeared on our podcast. 
He mentioned an interesting but rarely discussed point: there’s a general feeling that fewer PMs will be needed in the future, with some saying, &amp;ldquo;Why do we need PMs when engineers can release on their own?&amp;rdquo; But his view is the opposite: because engineers are moving so fast, PMs and designers are being &amp;ldquo;squeezed out,&amp;rdquo; and with new features coming online daily, it’s hard to keep up. So he believes we actually need more PMs. What’s your perspective? Do you think PM hiring will increase in the future? How will this profession evolve in the long term?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I think various roles are merging. PMs are doing some engineering tasks, engineers are doing PM tasks, and designers are doing both PM and coding. You can choose to hire more engineers with a strong product sense or keep the number of engineers constant and add more PMs to guide their work. In our team, we prefer to hire engineers with a strong product sense. This reduces the &amp;ldquo;friction cost&amp;rdquo; in the product release process. For example, we have many engineers who can go from seeing user feedback on Twitter to launching a product within a week, with minimal PM involvement. I think this is actually the most efficient way.&lt;/p&gt;&#xA;&lt;p&gt;So I believe the boundaries between engineers and PMs are overlapping. Regardless of which type of person you add, it will bring value. However, I think &amp;ldquo;product sense&amp;rdquo; remains a very scarce skill, and whenever we see someone particularly strong in this area, we are very eager to hire them.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; You were originally an engineer, right?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, I was an engineer for many years. Then I briefly worked in venture capital before joining Anthropic. 
In fact, almost all PMs in our team are either from engineering backgrounds or have written code on Claude Code. I think this helps build trust within the team and allows us to move faster. Many of our designers are also former front-end engineers.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; This leads to a key question: as these roles merge, many people wonder which skills will be most valuable in the future for someone coming from an engineering, product, or design background. In your case, engineering skills are clearly important. But in other companies, would a design background transitioning to PM be more advantageous?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I still believe the core lies in &amp;ldquo;product sense.&amp;rdquo; As coding becomes cheaper, the more valuable skill becomes deciding &amp;ldquo;what to write.&amp;rdquo; For example, what is the best user experience for this feature? How can we make users feel most satisfied?&lt;/p&gt;&#xA;&lt;p&gt;We receive thousands of GitHub issues daily, and users suggest all sorts of things. At that point, strong judgment and taste are needed to decide what is worth doing and how it should be done. This ability can come from any background, but it is the most important. I think engineering backgrounds will be particularly valuable in the coming months because they help you assess the feasibility of implementing something, which often affects prioritization. For example, if a feature is easy to implement, it might not require much discussion, and you can just spend an hour to get it done; but if it’s complex, you’ll realize it’s costly, which will affect decision-making.&lt;/p&gt;&#xA;&lt;h2 id=&#34;sacrificing-product-consistency&#34;&gt;Sacrificing Product Consistency&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; You mentioned that in the coming months, skills will change rapidly, making it hard to predict how things will be. 
What will humans continue to value in the short term?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I think the most important thing is &amp;ldquo;first principles thinking.&amp;rdquo; You need to understand how the technical environment is changing, what the team truly needs you to do, and proactively fill that gap. Work is becoming increasingly &amp;ldquo;ambiguous,&amp;rdquo; and an excellent PM should be able to see all the gaps, prioritize them, and either learn new skills or use existing abilities to solve problems. Therefore, what is now more popular is someone who can &amp;ldquo;switch between multiple roles,&amp;rdquo; is willing to take on various tasks, and doesn’t care much about titles.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; I love this answer. I’ve been asking cutting-edge professionals like you a question: before humans reach superintelligence, where does the value of the human brain lie? Listening to you, the core is in choosing topics, judging directions, prioritizing, and determining whether something is &amp;ldquo;right.&amp;rdquo; Is there anything to add?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I think humans still have an advantage in &amp;ldquo;common sense.&amp;rdquo; A product launch involves thousands of details, with many potential pitfalls. Models are currently not very good at understanding who all the stakeholders are, their relationships, preferences, and how to communicate with them. These are more about &amp;ldquo;tacit knowledge,&amp;rdquo; similar to emotional intelligence, which remains very important. Of course, we hope models become stronger in this area, but there is still a gap.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; In such a rapidly changing environment, how do you maintain your sanity? It feels like being in the eye of a tornado.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I think our team enjoys the chaos. 
We face challenges with a smile because there are always many things to deal with and many risks. If you get anxious about everything, you’ll quickly burn out. We prefer to find those who see difficulties and say, &amp;ldquo;This is hard, but I’m excited to solve it.&amp;rdquo; They do their best, accept imperfection, but can sleep soundly knowing they’ve done their best.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; This is also an important ability. Some say this is the &amp;ldquo;most normal time in the world,&amp;rdquo; and things will only get crazier.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; It will indeed get increasingly difficult. Sometimes on a Sunday night, there’s a P0 issue, and by Monday morning, there’s an even more severe one, and by the afternoon, there might be something even crazier, making you feel that yesterday’s issue was nothing. You just have to accept that what you can do is limited. You need to ensure you get enough sleep to make good decisions the next day. At the same time, prioritize extremely, focusing on the most important things, and accept that some things won’t be done well. For instance, some of our products may not be polished enough upon launch, but as long as it doesn’t affect core user value, it’s acceptable because we will quickly gather feedback and fix it in the next iteration.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; It sounds like that scene in &amp;ldquo;Pirates of the Caribbean&amp;rdquo; where the ship is about to explode, and someone is still elegantly walking downstairs. The people I’ve encountered at Anthropic do seem very calm and optimistic.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Without this state, it’s easy to burn out. 
We also tend to hire those who have experienced many ups and downs in the industry; they know what brings them energy and how to maintain their state over the long term.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; In this trend of role merging, what might we lose? For example, career paths, design consistency, code quality?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We will indeed sacrifice some &amp;ldquo;product consistency.&amp;rdquo; When code costs were high, you would meticulously plan the entire product system, with each product&amp;rsquo;s positioning, use cases, and how they collaborate, usually corresponding one scenario to one product. But now, AI is developing too quickly, and we need to test many ideas, so sometimes features overlap. Often, this is because we internally like two different forms at the same time, hoping users will tell us which is better. But this can confuse new users: they don’t know what the best path is to complete a task. This means we need to do more user education to help them understand core functions and best practices.&lt;/p&gt;&#xA;&lt;p&gt;Another issue is that users may feel they can’t keep up. In the past, you would only have an update once a month or even a quarter, and not looking at it was fine. But now these tools develop so quickly that many people check Twitter daily for the latest updates. We are also thinking about how to make users less anxious, hoping that when they open the tool, it can guide and teach them rather than making them feel like they are on an ever-faster treadmill.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; I noticed you recently launched an interesting feature called /powerup, which helps users understand the best ways to use Claude Code. Is this to address this issue?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, that’s the idea. 
Initially, we were hesitant to create such onboarding because we felt the product should be intuitive enough not to need a tutorial. But later, we realized there were too many features, and users were eager for a built-in guide to tell them what the top ten most important features were among hundreds. So we adjusted our previous philosophy and added this feature.&lt;/p&gt;&#xA;&lt;h2 id=&#34;anthropics-growth-and-mission&#34;&gt;Anthropic&amp;rsquo;s Growth and Mission&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Anthropic has experienced remarkable growth over the past few years. Initially, it was quite behind, had little funding, and lacked distribution channels, with OpenAI far ahead, and many thought there was no chance. But now, your growth is astonishing. From an internal perspective, what do you think has been the key to success?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I think the two most important factors are a highly unified sense of mission and the ability to make quick decisions based on that mission. We hire people who genuinely care about &amp;ldquo;bringing safe AGI to all of humanity.&amp;rdquo; And this is not just a slogan; we repeatedly reference this mission when making product decisions. By placing the mission above any single product, we can make rapid decisions and execute uniformly across the organization. This is quite rare in a company of our size.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Just to confirm my understanding, you prioritize &amp;ldquo;safety alignment (ensuring AI is beneficial to the world)&amp;rdquo; as the primary mission. As long as this mission is clear enough, many decisions become easier to make. For example, when two priorities conflict, you look at which aligns better with Anthropic’s mission and prioritize that. 
Once a decision is made, everyone supports it.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Sometimes this also means that, for example, we want to release a certain feature on Claude Code but find something more important, so we lower the priority of that feature and postpone it for later.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; This is interesting. I think it also explains the difference between you and another company, OpenAI, which has done many different things. Your logic is: we won’t do social networks, and we won’t do information streams because these don’t align with our mission. This restraint allows Anthropic to maintain focus, which seems to be one of the key factors for success.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; When I talk about &amp;ldquo;mission,&amp;rdquo; I mean placing Anthropic&amp;rsquo;s goals above any individual, any single product. To me, our second-best trait is actually &amp;ldquo;focus,&amp;rdquo; but mission and focus are still somewhat different. The mission means the team is willing to make sacrifices, even if it impacts their goals or KRs, as long as it serves Anthropic’s overall goals and KRs. And everyone is willing to make such trade-offs. For example, if Claude Code fails but Anthropic overall succeeds, I would be very happy. The entire team is also willing to make decisions based on this mindset.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; This question may be sensitive, but do you think decisions like those regarding OpenClaw also fall under this logic? For instance, this direction didn’t push Anthropic’s mission, so it had to be stopped?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I think it’s very important for Anthropic to expand the user base we can reach. One way to achieve this is through Claude subscriptions and our first-party products. 
So we are very determined to double down on these directions, but this sometimes does come at the expense of third-party products.&lt;/p&gt;&#xA;&lt;h2 id=&#34;claudes-internal-skills&#34;&gt;Claude&amp;rsquo;s Internal Skills&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; We just mentioned products like Claude, Cowork, etc. I want to make sure everyone understands the differences between these tools, and I’m curious how you personally use them. For instance, when should one use Claude Code, Claude desktop, or Cowork?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I usually use Claude Code in the terminal, especially when I want to quickly start a one-off coding task and want to use the latest features. The CLI is our earliest product form, and many new features are launched here first, so it’s the most powerful tool. Generally, I use it when handling one or a few tasks at the same time. The desktop version is more suitable for front-end work. I love using its preview feature; for example, when I’m working on a web app, I’ll use both Claude Code and desktop, opening the preview panel on the right side, so I can interact with Claude while seeing the web page update in real time.&lt;/p&gt;&#xA;&lt;p&gt;For non-technical users, the desktop version is also friendlier. The terminal can be intimidating for many, with various prompts that look &amp;ldquo;scary,&amp;rdquo; and it doesn’t allow for the same clickable operations as other products. So if you’re not used to the terminal, I highly recommend using the desktop version of Claude Code. Additionally, the desktop provides a global view, allowing you to see CLI sessions, desktop sessions, and tasks initiated on web or mobile, serving as a unified control panel. 
As for web and mobile, their biggest advantage is &amp;ldquo;initiating tasks anytime, anywhere.&amp;rdquo; CLI and desktop require you to use them on a local computer, but in reality, you can’t always carry a laptop.&lt;/p&gt;&#xA;&lt;p&gt;I’ve seen many people walking outside, using their phones to hotspot their laptops, and not daring to turn off their computers. This shows we actually lack a product that solves this scenario. Mobile does a great job of addressing this issue, allowing you to initiate tasks anytime without needing to carry a laptop.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; That’s very relatable. I’ve seen this scenario on planes where people are afraid to close their laptops, just waiting for the Agent to finish running while staying connected to Wi-Fi.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; As for Cowork, it addresses another class of problems: many work outputs are not code. For example, clearing Slack, clearing inboxes, creating client presentation PPTs, writing feature goal documents, or release plans are all &amp;ldquo;non-code outputs.&amp;rdquo; Cowork is very suitable for these scenarios. So my classification is simple: if the output is code, I use Claude Code (whether on desktop or mobile); if the output is not code, I use Cowork.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; I think people may underestimate Cowork&amp;rsquo;s success. It’s growing rapidly, but many may not fully understand what it can do. Can you share some practical use cases based on your work as a PM? Any surprising applications?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; If you’re just starting to use Cowork, the first step is to connect all relevant data sources related to your work.&lt;/p&gt;&#xA;&lt;p&gt;Because only by obtaining enough context can it provide high-quality results. 
For me, I connect Google Calendar, Slack, Gmail, and Google Drive, allowing it to freely access context, extract information, and link threads, significantly improving result quality. For example, last night I was using Cowork because we had a Code with Claude conference, and I needed to give several presentations. One of the presentation topics was: how Claude Code evolved from &amp;ldquo;assistant&amp;rdquo; to &amp;ldquo;real Agent.&amp;rdquo; I wanted to showcase our released products and some internal success cases.&lt;/p&gt;&#xA;&lt;p&gt;I fed Cowork all the materials, including a draft prepared by our product marketing colleague Alex, and told it the narrative logic I wanted to present. Then it worked for an hour: it looked at what we had published on Twitter, checked internal release records, reviewed the announcement channel for Claude Code (which contains many practical cases shared by teams), and finally integrated all the information into a 20-page PPT. When I woke up in the morning, the overall quality was quite good. Although I made some modifications, such as preferring &amp;ldquo;fewer words&amp;rdquo; in the slides, it initially wrote a bit too much.&lt;/p&gt;&#xA;&lt;p&gt;But overall, the speed far exceeded my own efficiency. And because it can access our design system, the PPT looked like it was made by a professional designer, very polished.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; This is essentially a PM&amp;rsquo;s dream; creating PPTs is so tedious and slow. 
To help everyone try it out, the steps you mentioned are: first connect Slack, Google Calendar, Gmail, Google Drive, right?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, the key is to connect your communication tools and the team’s &amp;ldquo;information sources.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; What was your prompt like at that time?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I actually kept it very simple: &amp;ldquo;Help me create a PPT for the Code with Claude conference. This is the content suggested by PMM, this is the draft I’m not satisfied with, and here’s a version I made manually (with links). First, give me a detailed outline while avoiding repetition with the keynote.&amp;rdquo; Claude would first read these links and then generate an outline. I would then decide which content to keep based on its suggestions. This reflects the current role of PMs: Claude is a strong &amp;ldquo;brainstorming partner,&amp;rdquo; capable of quickly integrating large amounts of information and providing multiple possibilities; but the final decision still rests with the PM.&lt;/p&gt;&#xA;&lt;p&gt;The structure I finalized was: from &amp;ldquo;making local tasks successful&amp;rdquo; to &amp;ldquo;ensuring every PR goes through,&amp;rdquo; and then to &amp;ldquo;helping engineers submit more PRs,&amp;rdquo; with corresponding demos for each stage. Once I confirmed the outline, Cowork took a few more hours to complete the entire PPT.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Amazing. It’s like you’re conversing with a designer who understands both design and content. How is the design system implemented? How does it know Anthropic&amp;rsquo;s style?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We already have a standard external presentation template, and I directly provided this template to Claude. 
It can learn our color schemes, fonts, layouts, etc.; for example, we have about 20 commonly used slide formats. You can also connect Figma’s MCP; if your template is there, it can read directly from it.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Speaking of which, I’m curious about your PM toolkit. Besides Claude Code and Cowork, what else do you use?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; My toolkit mainly consists of Claude Code and Cowork. Anthropic essentially operates around Slack; I feel it’s almost the company’s &amp;ldquo;operating system.&amp;rdquo; In my daily work, I spend about 30% of my time continuously testing Cowork’s boundaries to see where it falls short. I also spend a lot of time conversing with the model to understand why it makes mistakes. Additionally, we’ve built many internal tools. The biggest value of Claude Code is that it significantly lowers the barrier to developing custom applications. So now there are many pieces of &amp;ldquo;personalized work software&amp;rdquo; within the company that solve very specific scenarios, instead of relying on general tools that don’t quite fit.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Can you give some examples?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; For example, one of our sales colleagues using Claude Code found himself repeatedly creating similar client presentation PPTs. So he developed a web app: it contains several of the most effective templates (like 101, 201, advanced tutorials); then you can input client information, which will be automatically pulled from systems like Salesforce and Gong; the system will automatically adjust the content based on client circumstances, such as whether they use Bedrock or the enterprise version of Claude; whether they focus more on code reviews or security compliance; and whether HIPAA compliance is needed; then it automatically generates a customized PPT. 
What used to take 20-30 minutes of work now gets done in seconds.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; It’s interesting that tools like Slack are rarely attempted to be replaced. Everyone talks about SaaS being replaced by self-built tools, but Slack seems to be an irreplaceable infrastructure.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I think it is indeed a crucial communication infrastructure, and it does very well in &amp;ldquo;real-time information synchronization.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Yes, many people complain about Slack, but it does its job very well, and the most cutting-edge teams basically can’t do without it, which is quite interesting.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, and I also appreciate its design in terms of &amp;ldquo;customizability.&amp;rdquo; We love to create Slack bots, and this &amp;ldquo;hackability&amp;rdquo; allows us to integrate Slack in our own way. So I really commend Slack’s work in this regard.&lt;/p&gt;&#xA;&lt;h2 id=&#34;token-usage-and-internal-model-limits&#34;&gt;Token Usage and Internal Model Limits&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; You just mentioned many different teams and how they use Claude Code and Cowork. Besides the engineering team, which team uses the most tokens? I’d guess engineering is first; if not, that’s interesting. Who’s second?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; The Applied AI team is very strong in exploring the boundaries of Claude Code and Cowork. Much of their work involves collaborating with clients to help them implement our APIs. So sometimes they directly help clients create prototypes, and Claude Code has made this process much faster than before. At the same time, they also handle a lot of client communications, such as client needs, historical meeting records, etc. 
So their usage on Cowork and Claude Code is very heavy.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; What exactly is the Applied AI team? Is it similar to forward-deployed engineering?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; You can think of it that way. Their work is to help clients implement our APIs and model capabilities internally, whether for their own products or to improve internal efficiency.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Got it, it’s a somewhat technical go-to-market/customer success role.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, it’s a very technical go-to-market role.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; So you think they are the second-highest in token usage?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, and they are also constantly exploring the usage boundaries of Cowork. For example, many people are responsible for multiple clients and may have 5 to 10 client meetings in a day. So they might use Cowork the night before to prepare: &amp;ldquo;Help me summarize all client meetings tomorrow, what each client is focusing on, what demands they have raised, and what previous action items are.&amp;rdquo; Cowork will automatically generate a &amp;ldquo;battle briefing&amp;rdquo; to help them quickly get into the right mindset. Additionally, if a client asks in a meeting, &amp;ldquo;When will a certain feature be released?&amp;rdquo; Cowork can even check the latest progress in Slack and provide the latest ETA to include in the meeting materials. These are all workflows that people have built themselves and shared within the team.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; That’s cool. Recently, there’s an interesting trend: some people have reported that their AI token costs have exceeded their own salaries. Does Anthropic have similar data internally? 
For instance, how many tokens do engineers or PMs use daily or monthly?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We have indeed observed that as model capabilities improve, people assign more tasks to it and spend more time on Claude Code and Cowork. So every time a model has a significant upgrade, the per capita token consumption increases. Currently, this cost is still far lower than the average salary of engineers, but this ratio is continuously growing.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; You also have a significant advantage in that you can use the most advanced models, and token usage is essentially unlimited, right?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We can use many tokens, but there are indeed limits for some people.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; So there are still upper limits.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We place great importance on enabling internal teams to develop as quickly as possible, and we believe everyone understands the costs of running the models and will use tokens responsibly. Wasting tokens is discouraged, but we trust everyone to make judgments.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Returning to the PM role, you mentioned some aspects earlier. I want to ask systematically: what new capabilities do AI companies value most in PMs now?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; The most challenging capability is defining &amp;ldquo;what the product should look like in a month.&amp;rdquo; Because at this time scale, there is significant uncertainty in model capabilities and user behavior. But excellent PMs can see patterns from how users &amp;ldquo;break product boundaries&amp;rdquo; and set directions, continuously pushing forward. 
If model capabilities change beyond expectations, they can also adjust promptly.&lt;/p&gt;&#xA;&lt;p&gt;Another difficult aspect is that you need to have a &amp;ldquo;just right&amp;rdquo; belief in AGI. Everyone can imagine a future where models are incredibly powerful and almost omnipotent, where products could even degrade to just a text box. But the real challenge is: how to maximize its potential under the current model capabilities? How to guide users onto the &amp;ldquo;best path&amp;rdquo;? How to amplify its strengths and compensate for its weaknesses? This capability is actually very scarce.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; How can this ability be cultivated? Does it require extensive interaction with models to understand their boundaries?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, it requires a lot of interaction with the models. One thing I enjoy doing is having the model &amp;ldquo;self-reflect.&amp;rdquo; For instance, sometimes when the model does something strange, I ask it why it did that. It might say: the system prompt was ambiguous; or it didn’t realize front-end validation was part of the task; or it delegated the task to a sub-agent but didn’t check the results. This analysis helps you understand where it was misled, allowing you to optimize the system.&lt;/p&gt;&#xA;&lt;p&gt;Another important point is to find trusted &amp;ldquo;feedback sources.&amp;rdquo; Not all user feedback is equally valuable. Usually, there are a few individuals particularly skilled at judging model performance. Finding these five people is crucial. The third point is to conduct evaluations. You don’t need to do hundreds of evaluations; just ten high-quality ones can help the team clarify goals and measure progress. 
This is a severely underestimated task that more PMs and engineers should participate in.&lt;/p&gt;&#xA;&lt;h2 id=&#34;deleting-features-after-new-model-releases&#34;&gt;Deleting Features After New Model Releases&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Many people say the future of product managers is to write evaluations, essentially defining &amp;ldquo;what success looks like.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;How much time do you spend on this?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; It depends on the specific issue. Some teams invest a lot of time in evaluations. We have a small team that collaborates closely with research to analyze model behavior meticulously. I usually participate when a feature needs clearer definition, such as doing five evaluations to explain how to run them, what succeeds, what fails, and how to optimize prompts. Features like memory rely heavily on evaluations.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; You mentioned the &amp;ldquo;personality&amp;rdquo; of Claude. I previously interviewed a co-founder who emphasized this point as well. Many initially thought it was just an &amp;ldquo;interesting&amp;rdquo; addition, but it’s actually core to Claude’s success. What’s your take?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; You can think of real-life colleagues; some people just make you feel &amp;ldquo;great to work with.&amp;rdquo; Claude is similar. People like it because it is: easygoing, fun; yet very professional; has no ego; is willing to admit mistakes; has a positive attitude; for example, when you feel a task is difficult, it says, &amp;ldquo;That’s okay, we’ll take it step by step. 
Would you like me to help you get started?&amp;rdquo; The traits of excellent colleagues are positivity, proactivity, and sincere feedback, which we are all striving to inject into Claude.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; You mentioned that after releasing new models, you often have to rethink products, which sounds both exciting and overwhelming. How frequent is this situation?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; The bigger change is actually &amp;ldquo;deleting features.&amp;rdquo; Many features were originally added to compensate for the model&amp;rsquo;s limitations. For example, the early to-do list: the model would miss steps when making large-scale modifications, so we added a task list to force it to complete them. But in the new model, it can naturally complete these steps, so that feature becomes less important. Every time we release a new model, we recheck the system prompt and delete parts that are no longer needed.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; So the model &amp;ldquo;eats up&amp;rdquo; those product-level patches you previously made?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes. But what’s even more exciting is that new models also unlock entirely new features. For instance, code review—we tried many times until recently when the model was strong enough to reach a usable level. Now we can even run multiple code review agents in parallel, scanning the entire codebase and outputting high-quality issues.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Finally, let’s talk about the vision. What is the long-term direction for Claude Code and Cowork?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We think from the basic unit of &amp;ldquo;tasks.&amp;rdquo; The first step is to ensure individual tasks succeed consistently. As model strength increases and task success rates improve, people will start running multiple tasks simultaneously. 
The next step might be: running dozens or hundreds of Claudes at the same time. At that point, the questions become: how to manage these tasks? How to build an interface that lets humans know what to focus on? How to ensure agents have completed and verified their work? How to establish feedback mechanisms that allow the system to continuously improve itself?&lt;/p&gt;&#xA;&lt;p&gt;This is what we are thinking about for the long-term direction.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-value-of-automation&#34;&gt;The Value of Automation&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Many listeners, including product managers, entrepreneurs, and various cross-functional roles, are worried about their roles and future career development. What advice would you give them? Not just about surviving in this highly AI-driven world, but how to truly succeed and thrive? What do you think they should hear and do?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I believe AI has given everyone a much larger leverage than in the past. So I would advise you: whenever you realize you’re repeatedly doing a manual task, think about whether you can automate it using Claude Code, Cowork, or other AI tools. Most people’s work includes parts they enjoy creatively and some tedious, cumbersome parts they dislike. The beauty of AI is that it can help you handle these tedious tasks. It can learn from every time you perform these manual tasks, summarize patterns, and then execute automatically, allowing you to focus on more creative aspects. 
This means you can do much more than before.&lt;/p&gt;&#xA;&lt;p&gt;So my most direct advice is: identify the repetitive tasks that can be handed over to Claude, keep iterating on those automated workflows until they reach high success rates, and then think about what else you could do for your team, product, or company: the things you’ve always wanted to do, or always felt the company should do, but never had the time or energy to tackle. If AI can handle that &amp;ldquo;grunt work&amp;rdquo; for you, it’s like gaining an extra 20% of your time. My advice is to embrace these tools, delegate the work you dislike, find the ways they can accelerate you, and then achieve more.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; One core point you just made that I strongly agree with is using AI to solve your own problems. There are many tools and great potential now, but for many people the hardest part is figuring out what to do. Your advice essentially is: pay attention to the things you do repeatedly that could be automated, and to the ideas you’ve always wanted to pursue but haven’t had time for. Essentially, it’s about solving problems for yourself, right?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, that’s completely correct. I would also advise everyone to push automation from &amp;ldquo;this is a nice concept&amp;rdquo; to &amp;ldquo;it’s genuinely 100% usable.&amp;rdquo; Sometimes I see users automate a process to 90% or 95% and then give up. But if it can’t achieve 100% automation, it doesn’t count as true automation. The last 5% to 10% often requires the most time, and building the automation can sometimes be slower than doing the task manually. 
But I still encourage everyone to pick something you really want to achieve 100% automation on, invest enough effort to refine it: teach the model your preferences, give it feedback, and let it improve continuously until it reaches 100%. Only then can you truly trust it. A 95% automated task doesn’t hold much value.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; I totally relate to that; it’s excellent advice.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I’m in the same boat. I’m currently teaching Cowork to help me achieve inbox zero in Gmail, but the process is very time-consuming, and it’s far from ideal.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; What a coincidence; I am too. I set up an automated email classification process to sort those &amp;ldquo;junk requests&amp;rdquo; (like wanting to be on the podcast) into a folder. It’s about 95% accurate, but it occasionally misses important emails.&lt;/p&gt;&#xA;&lt;p&gt;So your advice is great; I need to perfect it.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; We are also working to make these custom processes easier to use. The current process is indeed a bit complex: you have to define a skill, learn how to call it, give it feedback, and let Cowork update this skill based on feedback, and finally check the updated results. This is also our responsibility, to make the whole process smoother rather than painful.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Fantastic. Cat, is there anything else you’d like to add? Or anything you want to emphasize before we jump into the quick-fire round?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; I see many people experimenting with AI to create various prototypes or build workflows. But I recommend focusing on applications you’ll use daily. Because only in true usage can you gain value. 
If you just create a prototype but it doesn’t help you improve efficiency, then AI hasn’t really brought you value. That kind of &amp;ldquo;build it once, think it’s cool, and never use it again&amp;rdquo; approach teaches you very little and gives you no real leverage.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; That’s a great point. I’ve also noticed another extreme: some people spend a lot of time customizing their workflows. There’s a type of person who never automates, but another type who over-optimizes their tooling, adding various skills, MCPs, and workflow optimizations. Sometimes this can lead them away from the original goal, like actually shipping a product or building a feature.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, I feel the same way. Customizing these things is genuinely fun, and we hope the product is hackable enough for you to use it in your own way. But there is a limit. I see some people spending too much time on customization, even losing sleep over it, and neglecting the core tasks they originally wanted to complete.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; I’ve seen a lot of this on Twitter, with people saying, &amp;ldquo;Look at my configuration, how optimized it is.&amp;rdquo; But the question is, what are you actually getting done?&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Many times, simpler configurations are actually more effective.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Speaking of which, I saw a tweet from Andrej Karpathy yesterday mentioning an interesting split: one group of people tried ChatGPT or Claude early on, thought &amp;ldquo;it’s just okay,&amp;rdquo; gave up, and remain skeptical about AI; another group used it to write code and truly saw its power. These two groups completely fail to understand each other. 
So your advice is crucial: use it for real work to understand its capabilities.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, I think a significant shift is that the products of 2024 were mostly &amp;ldquo;conversational,&amp;rdquo; while the current generation of Claude Code products is &amp;ldquo;action-oriented.&amp;rdquo; The real &amp;ldquo;aha moment&amp;rdquo; is when Claude can execute tasks for you. When you realize it can not only tell you what to do but also do it for you, that moment is genuinely striking.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Host:&lt;/strong&gt; Exactly. I also want to mention a Chrome extension where you can watch Claude automate actions, like &amp;ldquo;help me fill out this form,&amp;rdquo; and it actually does it.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cat Wu:&lt;/strong&gt; Yes, that’s the feeling.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>China Meteorological Administration Advances AI in Weather Forecasting</title>
            <link>https://acousticinfoplus.com/posts/note-f8a355411d/</link>
            <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-f8a355411d/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;The China Meteorological Administration (CMA) is committed to advancing the integration of artificial intelligence (AI) in meteorology during the 14th Five-Year Plan period. This initiative aims to strengthen the Xiong&amp;rsquo;an AI Meteorological Innovation Research Institute and enhance foundational support in data, computing power, and platforms, accelerating the establishment of a world-class AI research and development center for meteorology.&lt;/p&gt;&#xA;&lt;h2 id=&#34;ai-weather-forecast-models&#34;&gt;AI Weather Forecast Models&#xA;&lt;/h2&gt;&lt;p&gt;In recent years, the CMA has launched AI weather forecasting models such as &amp;ldquo;Wind Thunder,&amp;rdquo; &amp;ldquo;Wind Clear,&amp;rdquo; and &amp;ldquo;Wind Smooth,&amp;rdquo; which provide seamless forecasts covering 0 to 60 days. The &amp;ldquo;Wind Thunder&amp;rdquo; model excels in predicting short-term heavy rainfall intensity and location. The &amp;ldquo;Wind Clear&amp;rdquo; model is effective for mid-term weather forecasts ranging from 3 to 10 days globally. Meanwhile, the &amp;ldquo;Wind Smooth&amp;rdquo; model significantly enhances forecasting capabilities for weather and climate systems beyond 15 days.&lt;/p&gt;&#xA;&lt;h2 id=&#34;future-directions&#34;&gt;Future Directions&#xA;&lt;/h2&gt;&lt;p&gt;According to Song Shanyun, Deputy Director of the CMA, the focus during the 14th Five-Year Plan will be on deepening the integration of physical laws with AI applications. The goal is to develop AI weather forecasting technologies aimed at the Earth system, promoting a deeper fusion of AI with meteorological services to continuously improve forecasting accuracy, resolution, relevance, and timeliness.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>China&#39;s AI International Cooperation Initiative: A Path to Global Development</title>
            <link>https://acousticinfoplus.com/posts/note-5832f23f34/</link>
            <pubDate>Mon, 27 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-5832f23f34/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;As satellites pass over Earth&amp;rsquo;s orbit, artificial intelligence (AI) is transcending borders, profoundly reshaping global development and cooperation. By 2025, China&amp;rsquo;s AI open-source construction has achieved leapfrog development, placing it among the world&amp;rsquo;s leaders. China maintains an open and inclusive stance, providing solid support for global AI collaborative development.&lt;/p&gt;&#xA;&lt;h2 id=&#34;ai-initiatives-and-projects&#34;&gt;AI Initiatives and Projects&#xA;&lt;/h2&gt;&lt;p&gt;From the green data centers operating day and night in the Guizhou mountains to the precision agriculture project in Mozambique&amp;rsquo;s Gaza Province utilizing &amp;ldquo;Beidou + drone&amp;rdquo; technology, and the ASEAN AI multilingual translation center bridging civilizations, numerous pragmatic cooperation scenes and vibrant practices collectively paint a grand picture of the world empowered by &amp;ldquo;AI +.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;In September 2025, China proposed the &amp;ldquo;AI + International Cooperation Initiative,&amp;rdquo; an international public product embodying the concept of a community with a shared future for mankind. This initiative focuses on five key areas: improving people&amp;rsquo;s livelihoods, technological progress, industrial applications, cultural prosperity, and talent cultivation, establishing an action framework for global AI collaborative development, which has garnered widespread attention and positive responses from the international community.&lt;/p&gt;&#xA;&lt;h2 id=&#34;focus-on-peoples-livelihoods&#34;&gt;Focus on People&amp;rsquo;s Livelihoods&#xA;&lt;/h2&gt;&lt;p&gt;The initiative prioritizes people&amp;rsquo;s livelihoods, ensuring that AI technology benefits citizens worldwide, particularly aiding developing countries in solving challenges. 
In Mozambique&amp;rsquo;s Gaza Province, the China-Mozambique agricultural cooperation project has introduced China&amp;rsquo;s &amp;ldquo;Beidou + drone&amp;rdquo; precision agriculture technology. Agricultural drones are widely used for tasks such as field mapping, rice planting, and pest control, covering over 80,000 acres and transforming low-yield fields into high-yield ones. Rice yields have increased from about 150 kg per mu to over 400 kg, with some demonstration fields reaching 500 kg and high-yield areas exceeding 550 kg.&lt;/p&gt;&#xA;&lt;p&gt;In the medical field, AI-assisted diagnostic systems extend quality resources to remote areas, improving diagnostic accuracy through image recognition. In education, intelligent learning platforms break geographical barriers, allowing students in developing countries to share quality resources worldwide and ensuring that technology reaches every corner.&lt;/p&gt;&#xA;&lt;h2 id=&#34;technological-support&#34;&gt;Technological Support&#xA;&lt;/h2&gt;&lt;p&gt;Behind the warmth of technology is a solid scientific foundation. Technological progress is the core driving force of &amp;ldquo;AI +,&amp;rdquo; with related initiatives leading shifts in the innovation paradigm and promoting cross-domain R&amp;amp;D collaboration. Currently, China ranks among the top tier globally in large model research and open-source development, with a comprehensive system of general large models and industry-specific vertical models, providing low-cost, inclusive model technology to the world through open-source sharing.&lt;/p&gt;&#xA;&lt;p&gt;As of November 2025, the Guizhou green data center cluster had achieved low-carbon operation by relying on hydropower, with a PUE below 1.2 and total computing power exceeding 100,000 PFLOPS, of which over 98% is intelligent computing power. 
The Hohhot computing hub uses wind and solar green electricity, reducing carbon emissions by 640,000 tons annually and pioneering mutual recognition of carbon sinks in computing power. By the end of 2025, China&amp;rsquo;s intelligent computing power had reached 1.59 million PFLOPS, with eight planned national computing hubs under accelerated construction and a total of 306 national green computing facilities, providing a replicable Chinese model for global green computing development.&lt;/p&gt;&#xA;&lt;p&gt;In basic research, AI large models deeply empower cutting-edge fields such as biomanufacturing and quantum technology, helping researchers worldwide share innovative results.&lt;/p&gt;&#xA;&lt;h2 id=&#34;reshaping-supply-chains&#34;&gt;Reshaping Supply Chains&#xA;&lt;/h2&gt;&lt;p&gt;AI&amp;rsquo;s empowerment of global development is profoundly reshaping industrial and supply chains. The initiative advocates using AI to enable industrial upgrades and cultivate new business models, stabilizing global industrial supply chains. China&amp;rsquo;s &amp;ldquo;computing power supply + R&amp;amp;D application&amp;rdquo; linkage has demonstrated significant results: Beijing&amp;rsquo;s Haidian district focuses on AI R&amp;amp;D and the commercialization of research results, while Shanghai&amp;rsquo;s Lingang builds a cross-border computing hub. Eight national computing hub nodes collaborate to create a nationwide integrated computing network supporting cross-border capacity coordination.&lt;/p&gt;&#xA;&lt;p&gt;On the Haizhi Online platform, a European engineer&amp;rsquo;s 3D gear drawings are analyzed by AI in milliseconds and precisely matched with small and medium-sized enterprises in Kunshan, Jiangsu. 
The platform, relying on over 200 factory tags and more than a hundred demand tags, bridges the information gap in non-standard parts trade, facilitating efficient circulation of over a million industrial drawings and helping various enterprises smoothly integrate into global industrial division.&lt;/p&gt;&#xA;&lt;p&gt;In Russia&amp;rsquo;s Far East, AI smart agricultural machinery significantly enhances agricultural production efficiency. In Uzbekistan, AI photovoltaic cleaning robots ensure stable green electricity output. In Tajikistan&amp;rsquo;s smart mining areas and Pakistan&amp;rsquo;s urban intelligent security systems, China&amp;rsquo;s digital and intelligent solutions deeply integrate with local needs, demonstrating that multilateral cooperation is an effective path to promote industrial empowerment.&lt;/p&gt;&#xA;&lt;h2 id=&#34;cultural-exchange&#34;&gt;Cultural Exchange&#xA;&lt;/h2&gt;&lt;p&gt;Civilizations flourish through communication, and &amp;ldquo;AI +&amp;rdquo; is becoming a digital bridge for cultural exchange. Cultural prosperity is an essential dimension of the global civilization initiative, centered on promoting mutual understanding through AI. The cooperation between China and Malaysia serves as a model. Chinese tech companies collaborate with local enterprises to build the ASEAN AI multilingual translation center, supporting translation among over 130 languages, enabling rapid translation of film and television content in just 30 minutes.&lt;/p&gt;&#xA;&lt;p&gt;Additionally, in the 2025 Belt and Road and BRICS Skills Development and Technology Innovation Competition, over a hundred teams from various countries compete in AI-enabled instructional design. 
The concurrently launched &amp;ldquo;Global South AI Workshop&amp;rdquo; establishes a new platform for countries to deepen cooperation in &amp;ldquo;AI + vocational education.&amp;rdquo; The application of AI in digital cultural tourism and cultural heritage protection revitalizes cultural heritage from various countries, showcasing the humanistic warmth of &amp;ldquo;AI +&amp;rdquo; and allowing different civilizations to blend and shine in the digital age.&lt;/p&gt;&#xA;&lt;h2 id=&#34;talent-development&#34;&gt;Talent Development&#xA;&lt;/h2&gt;&lt;p&gt;Talent is fundamental to development, and talent cultivation is the guarantee for the continuous empowerment of &amp;ldquo;AI +.&amp;rdquo; The initiative emphasizes building independent innovation capabilities in partner countries through open-source technology and joint training. China adheres to an open and inclusive philosophy, not only exporting technology but also sharing experiences. By the end of 2025, the number of valid domestic invention patents in China had reached 5.32 million, with AI patents accounting for 60% of the world total, the largest share of any country.&lt;/p&gt;&#xA;&lt;p&gt;Relevant technologies are shared with the world through open-source communities and joint R&amp;amp;D, significantly lowering the technological threshold for developing countries. Institutional guarantees are also in place: the international cooperation resolution on strengthening AI capacity building, proposed by China in 2024, was unanimously adopted at the 78th United Nations General Assembly. China has led multiple AI capacity building seminars, inviting representatives from various countries to engage in in-depth discussions on AI development, governance, and application, effectively implementing the UN General Assembly resolution. 
Through local training and joint schooling, China assists partner countries in cultivating AI talent, bridging the &amp;ldquo;last mile&amp;rdquo; of technology application and supporting countries in transitioning from technology input to independent innovation. Since 2026, China has further opened specialized AI capacity building training courses for ASEAN, Central Asian, and Arab countries, promoting relevant cooperation from global inclusivity to regional deepening.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&#xA;&lt;/h2&gt;&lt;p&gt;Intelligence knows no boundaries, and win-win cooperation is the path forward. China&amp;rsquo;s &amp;ldquo;AI + International Cooperation Initiative&amp;rdquo; is a comprehensive framework encompassing concepts, mechanisms, and practices. From computing hubs to industrial collaboration, from empowering people&amp;rsquo;s livelihoods to cultural exchange, from technological innovation to talent cultivation, &amp;ldquo;AI +&amp;rdquo; is breaking barriers with an open and inclusive approach. It is destined to become a powerful engine for fostering international cooperation and promoting global common development, ensuring that the benefits of intelligence reach every country and its people, and composing a new chapter of shared destiny and prosperity in the digital age.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Harness: The Next Generation Solution for Agent Frameworks</title>
            <link>https://acousticinfoplus.com/posts/note-5de004be60/</link>
            <pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-5de004be60/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;The engineering challenges of Agent frameworks are giving rise to a new generation of solutions—Harness. This article dissects the design philosophies of three major frameworks: OpenClaw, Hermes, and Claude Code, revealing the seven engineering gaps that Agents must cross from proof of concept to production deployment. Only when model capabilities are deeply integrated with engineering systems can we truly understand why Harness is the key factor in the success or failure of Agents.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;514px&#34; data-flex-grow=&#34;214&#34; height=&#34;420&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-bd52d0f1a7.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-bd52d0f1a7_hu_6af9ec5c85e8f021.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-bd52d0f1a7.jpeg 900w&#34; width=&#34;900&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Harness has recently gained some attention, but it differs from OpenClaw and Hermes in that it lacks a fully realized description; it was created for the stable execution of Agents.&lt;/p&gt;&#xA;&lt;p&gt;The articles available on the platform often seem either too abstract or too fragmented.&lt;/p&gt;&#xA;&lt;p&gt;To understand Harness, one must not only grasp the overarching concepts but also refer to currently operational Agent frameworks like Claude Code, OpenClaw, and Hermes to bring it back to the engineering context.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;559px&#34; data-flex-grow=&#34;233&#34; height=&#34;463&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 
700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-3ccee8fb15.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-3ccee8fb15_hu_5d8d6de311b5552a.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-3ccee8fb15.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;background-of-harness&#34;&gt;Background of Harness&#xA;&lt;/h2&gt;&lt;p&gt;Thanks to the recent developments in Agents, including the successive releases of OpenClaw and Hermes, as well as the source code leak of Claude Code, the global understanding of the Agent development paradigm has reached a new level.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-aaa45cee6d.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-aaa45cee6d_hu_5c134aef5564a8f2.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-aaa45cee6d.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Given this foundation, we cannot assume that the term Harness has suddenly become popular; it has emerged because the engineering issues have finally become apparent as Agents begin to perform real tasks.&lt;/p&gt;&#xA;&lt;p&gt;As Martin Fowler defined in an article in April 2026, Harness Engineering is a model for building trust around coding Agents, focusing on context, constraints, feedback loops, and engineering structure to gradually allow humans to delegate tasks to Agents.&lt;/p&gt;&#xA;&lt;p&gt;Anthropic itself refers to Claude Code as an excellent harness in its official engineering articles, further discussing harness design in long-running Agents and application 
development.&lt;/p&gt;&#xA;&lt;p&gt;Thus, at this stage, Claude emphasizes not just its model strength but also its engineering capabilities. However, we can see that domestic frameworks can also achieve significant improvements simply by switching to the Claude model.&lt;/p&gt;&#xA;&lt;p&gt;I believe Claude&amp;rsquo;s strength lies primarily in coding, and domestic engineering capabilities are not necessarily inferior.&lt;/p&gt;&#xA;&lt;p&gt;Today, as we explore the culmination of engineering paradigms that Harness represents, we must move beyond merely discussing prompt engineering. Even context engineering seems insufficient to capture its meaning; the question has come back to:&lt;/p&gt;&#xA;&lt;p&gt;Why do Agent frameworks like OpenClaw, Hermes, and Claude Code ultimately develop a complete engineering system? And why does this system increasingly look like the deciding factor in the success or failure of Agents?&lt;/p&gt;&#xA;&lt;h2 id=&#34;models-and-engineering&#34;&gt;Models and Engineering&#xA;&lt;/h2&gt;&lt;p&gt;Over the past two years, major model companies have focused primarily on the Agent ecosystem: semantic understanding, visual generation, long-context tool invocation, multimodal computer operation, and browser operation.&lt;/p&gt;&#xA;&lt;p&gt;One industry perspective suggests designing for the models of six months from now: models will keep getting stronger, so engineering costs will fall. This rests on a significant assumption: as long as models continue to improve, applications will naturally emerge.&lt;/p&gt;&#xA;&lt;p&gt;And indeed, once long-context handling and tool calling become more stable, building Agents does get much easier.&lt;/p&gt;&#xA;&lt;p&gt;The problem is that a strong model does not equate to stable engineering. 
There are always boundaries that can be crossed, including: models may still miscall or be unstable in tool invocation; models can understand complex inputs but struggle with prolonged tasks; just because a model can write code does not mean it knows if it is correct.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-b99a853a2c.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-b99a853a2c_hu_634302d6cbf78396.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-b99a853a2c.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The significance of engineering architecture lies in enabling Agents to complete tasks reliably. As a result, from 2025 to 2026, the focus of Agent discussions began to shift noticeably: previously, people discussed how to write prompts, then how to feed context, and now the real discussion is about what system capabilities are still needed after the Agent is operational.&lt;/p&gt;&#xA;&lt;p&gt;This is the entire context in which Harness has emerged.&lt;/p&gt;&#xA;&lt;h2 id=&#34;what-is-harness&#34;&gt;What is Harness?&#xA;&lt;/h2&gt;&lt;p&gt;Currently, there are many definitions of Harness in the market, the most understandable being:&lt;/p&gt;&#xA;&lt;p&gt;Model = Brain&#xA;Harness = Body + Workbench + Operating Procedures + Supervision Mechanisms&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-63e8f2034e.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-63e8f2034e_hu_8d0dbc3d374b8b72.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-63e8f2034e.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;First, regardless of whether this description is rigorous, Harness is hard to pin down precisely because it is an engineering product. Engineering products are not merely SDKs or prompt tricks; they are a collection of the various hard problems we tackle in projects. Thus:&lt;/p&gt;&#xA;&lt;p&gt;Harness is the system that transforms model capabilities into continuous, stable, and verifiable product capabilities.&lt;/p&gt;&#xA;&lt;p&gt;At its core, it consists of many rules, constraints, and designs.&lt;/p&gt;&#xA;&lt;h2 id=&#34;prompt--context--harness&#34;&gt;Prompt → Context → Harness&#xA;&lt;/h2&gt;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-a7eb9bee42.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-a7eb9bee42_hu_4e8ef9c356d30bc1.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-a7eb9bee42.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;As mentioned earlier, Harness is a product of engineering practice in the process of developing Agents. 
Therefore, Harness did not emerge from thin air; it is an evolution of previous engineering practices: Prompt Engineering and Context Engineering.&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;Context Engineering is an extension of Prompt Engineering, and Harness is the further evolution of both.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;h2 id=&#34;1-prompt-engineering&#34;&gt;1. Prompt Engineering&#xA;&lt;/h2&gt;&lt;p&gt;Prompt Engineering directly addresses how we should interact with models, making it the simplest and most effective approach. Early on, the focus was on: few-shot prompts, role prompts, CoT, output format constraints, and prompt templates.&lt;/p&gt;&#xA;&lt;p&gt;The essence of this layer is translating industry know-how into natural language instructions.&lt;/p&gt;&#xA;&lt;p&gt;It is worth noting that regardless of how engineering evolves, it will ultimately return to prompts. Thus, many believe that current engineering optimizations are still extensions of Prompt Engineering, which is not incorrect.&lt;/p&gt;&#xA;&lt;h2 id=&#34;2-context-engineering&#34;&gt;2. 
Context Engineering&#xA;&lt;/h2&gt;&lt;p&gt;As tasks became more complex, writing a good prompt alone was no longer sufficient, leading to the emergence of Context Engineering: which private knowledge to include, which historical conversations to retain, how to compress long contexts, how to perform retrieval, and how to prevent the model from forgetting or being overwhelmed by information.&lt;/p&gt;&#xA;&lt;p&gt;At this stage, the system is no longer simply responding according to SOP but begins to answer based on combined materials, with the core revolving around CoT.&lt;/p&gt;&#xA;&lt;p&gt;It should be noted that the essence of Context Engineering is Data Engineering, and those truly engaged in production-level AI applications often find themselves in a paradox: spending 80% of their time on data, leading to doubts about the connection between this tedious work and the glamorous AI.&lt;/p&gt;&#xA;&lt;h2 id=&#34;3-harness-engineering&#34;&gt;3. Harness Engineering&#xA;&lt;/h2&gt;&lt;p&gt;As Agents began to evolve beyond simple Q&amp;amp;A, they started to: invoke tools, run code, break down tasks, view pages, write documents, execute multi-turn cycles, and manage sub-Agents, interruptions, recovery, testing, and acceptance.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-d7bf17fa90.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-d7bf17fa90_hu_13d3532fe6f3b990.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-d7bf17fa90.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;As previously mentioned, the emergence of Agents aims to address the heavy maintenance work caused by insufficient workflow 
generalization.&lt;/p&gt;&#xA;&lt;p&gt;However, due to increased engineering complexity, Context Engineering alone is no longer sufficient. The questions have shifted from data-related issues to: how to sustain task progression without losing control, how models can verify their correctness, how to organize execution chains, how to retain intermediate results, how to backtrack errors, and how to resume tasks.&lt;/p&gt;&#xA;&lt;p&gt;At this point, Harness naturally emerges as a comprehensive solution forced out by engineering realities:&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;When Agents transition from Q&amp;amp;A to workflows and from single-turn to long-chain tasks, a complete solution emerges from engineering necessity.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;h2 id=&#34;openclaw-hermes&#34;&gt;OpenClaw, Hermes&#xA;&lt;/h2&gt;&lt;p&gt;As previously mentioned, Harness has become somewhat abstract because we tend to detach it from real frameworks.&lt;/p&gt;&#xA;&lt;p&gt;To truly discuss it, we must return to the Agents themselves and place Harness back within OpenClaw, Hermes, and Claude Code, making it much more concrete.&lt;/p&gt;&#xA;&lt;p&gt;These three frameworks represent three typical engineering orientations for Agents:&lt;/p&gt;&#xA;&lt;h2 id=&#34;1-openclaw-first-control-the-agent&#34;&gt;1. 
OpenClaw: First Control the Agent&#xA;&lt;/h2&gt;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-12e8b329cc.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-12e8b329cc_hu_d4aaf7c03419332f.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-12e8b329cc.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The official documentation and repository of OpenClaw clearly emphasize controlled runtime.&lt;/p&gt;&#xA;&lt;p&gt;It delineates Skills, Gateways, security boundaries, Sub-agents, and Sandboxes clearly.&lt;/p&gt;&#xA;&lt;p&gt;For instance, the official Skills documentation states that OpenClaw uses AgentSkills-compatible skill folders, with each skill directory containing SKILL.md, and filtering based on environment, configuration, and dependencies during loading.&lt;/p&gt;&#xA;&lt;p&gt;Its security documentation repeatedly emphasizes that OpenClaw currently assumes a personal assistant security model, meaning deployment within a trusted boundary rather than unrestricted production.&lt;/p&gt;&#xA;&lt;p&gt;The system engineering goal behind this design is clear: first organize permissions, boundaries, roles, skills, and execution environments before allowing the Agent to perform tasks.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw aims to become a standard enterprise Agent, so its engineering direction is also clear:&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;How to enable the Agent to execute tasks safely, stably, and in a controlled manner?&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;However, it must be noted that this framework is still immature, especially in multi-user collaboration, making it 
challenging to implement effectively, but that does not imply its direction is wrong.&lt;/p&gt;&#xA;&lt;h2 id=&#34;2-hermes-first-let-the-agent-learn&#34;&gt;2. Hermes: First Let the Agent Learn&#xA;&lt;/h2&gt;&lt;p&gt;Hermes presents a different flavor in its README.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-17e81846d1.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-17e81846d1_hu_5d1d1924e5faf14c.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-17e81846d1.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;It defines itself as &amp;ldquo;the self-improving AI agent,&amp;rdquo; directly stating its core capabilities as a learning feedback loop: creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions.&lt;/p&gt;&#xA;&lt;p&gt;Hermes&amp;rsquo;s official documentation also provides eight types of external memory providers and clarifies that built-in MEMORY.md / USER.md always exist, while only one external provider can be enabled at a time to avoid schema bloat and conflicts.&lt;/p&gt;&#xA;&lt;p&gt;This is why I often say Hermes is clever: it lacks the ambition of OpenClaw, temporarily focusing on enabling individual users, iterating based on OpenClaw&amp;rsquo;s pain points. 
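&lt;/p&gt;&#xA;&lt;p&gt;The single-external-provider rule described above can be sketched roughly as follows; the function and names here are illustrative, not Hermes APIs:&lt;/p&gt;

```python
# Hypothetical sketch of Hermes-style memory config resolution:
# built-in memory files are always on; at most one external provider.
BUILTIN_FILES = ["MEMORY.md", "USER.md"]  # always enabled

def resolve_memory_config(external_providers):
    """Return the effective memory setup for a session."""
    if len(external_providers) == 0:
        return {"builtin": BUILTIN_FILES, "external": None}
    if len(external_providers) == 1:
        return {"builtin": BUILTIN_FILES, "external": external_providers[0]}
    # Two or more providers invite schema bloat and conflicts: fail fast.
    raise ValueError("only one external memory provider may be enabled")
```

&lt;p&gt;Failing fast on conflicting providers is the point: the constraint lives in the harness, not in the prompt.&lt;/p&gt;&#xA;&lt;p&gt;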
Its engineering goal is also clear:&lt;/p&gt;&#xA;&lt;p&gt;First, allow the Agent to learn and grow from experience, and gradually add boundaries and governance.&lt;/p&gt;&#xA;&lt;p&gt;Hermes aims to make Agents stronger the more they are used, evolving into long-term assistants.&lt;/p&gt;&#xA;&lt;h2 id=&#34;3-claude-code&#34;&gt;3. Claude Code&#xA;&lt;/h2&gt;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-513dd0e8c6.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-513dd0e8c6_hu_ed8d1b01f5028eb0.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-513dd0e8c6.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Claude Code operates in a completely different scenario; it is a production-level application, no longer just &amp;ldquo;an agent that can code.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Anthropic has now opened the capabilities derived from Claude Code as the Claude Agent SDK, explicitly stating that this SDK provides the tools, agent loop, and context management behind Claude Code.&lt;/p&gt;&#xA;&lt;p&gt;Moreover, Anthropic has published several engineering articles specifically discussing how to design harnesses for long-term Agents and optimize harnesses in application development scenarios. 
Claude Code is considered an excellent harness in itself.&lt;/p&gt;&#xA;&lt;p&gt;This means that the value of Claude Code lies not just in its strong model but also in:&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;It has developed an entire engineering framework beyond the model that is significantly important.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;Thus, if one truly wants to learn about Harness, logically, Claude Code serves as the best example. However, we cannot access its complete code, and in terms of complexity, OpenClaw might be the optimal solution.&lt;/p&gt;&#xA;&lt;h2 id=&#34;dissecting-harness&#34;&gt;Dissecting Harness&#xA;&lt;/h2&gt;&lt;p&gt;If we truly want to dissect Harness, I believe there are at least seven layers.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-ff1ba4c457.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-ff1ba4c457_hu_e751f07ecd121f28.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-ff1ba4c457.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Here, apart from my previous understanding of AI applications, each layer can be grounded in OpenClaw, Hermes, and Claude Code.&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-1-roles-and-rules&#34;&gt;Layer 1: Roles and Rules&#xA;&lt;/h2&gt;&lt;p&gt;When a model receives a task, the first thing is not to invoke tools but to clarify: who it is, whether it is responsible for planning, execution, or acceptance, what its boundaries are, and how to handle uncertainties.&lt;/p&gt;&#xA;&lt;p&gt;As long as this is established, all subsequent actions will have a basic level of 
control.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw excels in this regard: Skills are written by humans, rules are set by humans, boundaries are established by the system, and the Agent primarily executes within the framework.&lt;/p&gt;&#xA;&lt;p&gt;Hermes is more flexible here: it has system prompts, role definitions, and runtime rules, but it prefers to delegate some judgment to the Agent itself, leaving decisions such as when to generate new Skills or update old ones to the Agent rather than to a curated Skills Plaza.&lt;/p&gt;&#xA;&lt;p&gt;Claude Code is closer to tools as processes: Anthropic continuously emphasizes the agent loop, context management, and the division between long-task initializers and coding agents, which essentially embeds roles and rhythms into the system.&lt;/p&gt;&#xA;&lt;p&gt;Thus, the first step in creating Harness is to determine your current working identity.&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-2-memory-system&#34;&gt;Layer 2: Memory System&#xA;&lt;/h2&gt;&lt;p&gt;Once a task becomes lengthy, it inevitably generates many intermediate results: sub-tasks broken down, discussed solutions, current progress, user preferences, historical errors, and successful experiences.&lt;/p&gt;&#xA;&lt;p&gt;No context window is long enough to hold all of this without waste, which leads to differences in engineering across frameworks:&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw adopts a restrained approach to memory, treating it essentially as a replaceable capability: it implements the basics, and you can swap in your own according to your situation.&lt;/p&gt;&#xA;&lt;p&gt;Hermes, on the other hand, has developed a complete memory system: built-in MEMORY.md, USER.md, supplemented by external memory providers, and session search.&lt;/p&gt;&#xA;&lt;p&gt;The official documentation clearly states that built-in memory is always enabled, and only one external provider can exist simultaneously, adhering to the principle: don’t mess around, just use mine.&lt;/p&gt;&#xA;&lt;p&gt;Thus, users often feel that OpenClaw frequently ignores what was 
said yesterday, while Hermes also exhibits this behavior but provides an explanation.&lt;/p&gt;&#xA;&lt;p&gt;Claude Code emphasizes another approach in its official articles: in long-term tasks, clear artifacts and handoffs are crucial for the next session to continue.&lt;/p&gt;&#xA;&lt;p&gt;Therefore, the essence of the memory system in engineering is to ensure that the task process leaves traces, allowing the system to pick up where it left off.&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-3-context-loading-mechanism&#34;&gt;Layer 3: Context Loading Mechanism&#xA;&lt;/h2&gt;&lt;p&gt;What exactly should the model see? This is a challenge all AI applications face, and there is a sense that no solution is optimal.&lt;/p&gt;&#xA;&lt;p&gt;In real Agent scenarios, the model will have access to an increasing amount of information: roles and rules, historical dialogues, memory, skills, tool results, and current tasks.&lt;/p&gt;&#xA;&lt;p&gt;The problem arises: it is not a lack of information but an overload of it.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw’s Skills loading logic essentially serves as a context filter: screening based on environment, configuration, and dependencies.&lt;/p&gt;&#xA;&lt;p&gt;Hermes takes a different route: its session search does not dump historical raw text but retrieves and processes it first; it also supports context engine plugins to replace built-in context compressors.&lt;/p&gt;&#xA;&lt;p&gt;Thus, determining how to provide the model with only the most necessary information in each round is, in my view, the most challenging aspect of model engineering. 
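&lt;/p&gt;&#xA;&lt;p&gt;That per-round selection can be sketched as a simple budgeted packer; the relevance scores and token counts are stand-ins for whatever retrieval and tokenizer a real system uses:&lt;/p&gt;

```python
# Illustrative context packer: keep the most relevant snippets that fit
# a token budget, most relevant first. Not any framework's real API.
def pack_context(snippets, budget_tokens):
    """snippets: list of (relevance, token_cost, text) tuples."""
    chosen = []
    remaining = budget_tokens
    for relevance, cost, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        if min(cost, remaining) == cost:  # cost does not exceed the budget left
            chosen.append(text)
            remaining = remaining - cost
    return chosen
```

&lt;p&gt;Everything not chosen is simply invisible to the model this round, which is exactly the trade-off described above.&lt;/p&gt;&#xA;&lt;p&gt;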
This further leads to issues regarding private data loading: if this layer is not handled well, the system will face problems on both ends: seeing too little, akin to amnesia, or seeing too much, leading to confusion.&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-4-stable-execution&#34;&gt;Layer 4: Stable Execution&#xA;&lt;/h2&gt;&lt;p&gt;Agents, built on the ReAct pattern, are the framework this model era has settled on, and they mark the point where Agents begin to take action. Thus, how tools receive commands, how files are executed, how pages are read and written, how code is checked, and how results are collected are all engineering concerns, as they rely on third parties and are prone to issues.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw is a typical example of a safety-first runtime.&lt;/p&gt;&#xA;&lt;p&gt;Hermes resembles a flexible execution backend, with its official README stating it can run locally, on VPS, GPU clusters, or serverless environments with near-zero idle costs.&lt;/p&gt;&#xA;&lt;p&gt;Thus, the goal of this layer of Harness is to transform language judgments into stable, real actions. Without this layer, or if it is poorly executed, frequent errors will occur.&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-5-effective-loop&#34;&gt;Layer 5: Effective Loop&#xA;&lt;/h2&gt;&lt;p&gt;Ordinary chatting represents AI 1.0; since DeepSeek, we have been pursuing multi-turn Q&amp;amp;A. 
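&lt;/p&gt;&#xA;&lt;p&gt;A ReAct-style decide-act-observe loop, as referenced above, can be sketched minimally; llm_decide and the tools table are placeholders for a real model call and toolchain:&lt;/p&gt;

```python
# Minimal ReAct-style loop sketch: the model decides, the harness executes,
# and the observation feeds the next decision, until the task closes or the
# step budget runs out. All names here are illustrative.
def run_agent(llm_decide, tools, task, max_steps=8):
    history = [("task", task)]
    for _ in range(max_steps):
        action, arg = llm_decide(history)
        if action == "finish":
            return arg
        observation = tools[action](arg)  # real work happens outside the model
        history.append((action, observation))
    return None  # step budget exhausted without closure
```

&lt;p&gt;Even this toy version shows where the harness earns its keep: the step budget, the tool table, and the history all live outside the model.&lt;/p&gt;&#xA;&lt;p&gt;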
Agents inevitably enter loops due to the complexity of the problems they handle: understanding tasks, deciding the next execution step, reading results, and judging the next step, continuing until closure.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw’s multi-Agent, skills, and runtime all revolve around advancing these loops.&lt;/p&gt;&#xA;&lt;p&gt;Hermes embeds delegation, skills, memory, search, and provider hooks within this loop.&lt;/p&gt;&#xA;&lt;p&gt;As mentioned earlier, more intelligence will inevitably consume more tokens; the issue with Agent loops lies here: will they waste tokens and time without substantial progress?&lt;/p&gt;&#xA;&lt;p&gt;In engineering systems, the concern has always been not the loop itself but whether money is spent without results.&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-6-scoring-and-observability&#34;&gt;Layer 6: Scoring and Observability&#xA;&lt;/h2&gt;&lt;p&gt;One of the major issues with models is not their inability to perform tasks but rather that they often believe they have completed them.&lt;/p&gt;&#xA;&lt;p&gt;On the surface, code may be written, pages rendered, and replies sent, seemingly closing the loop. 
However, upon verification, many aspects may not connect at all.&lt;/p&gt;&#xA;&lt;p&gt;Thus, in system engineering, we embed instrumentation at every critical node to establish scoring and observability mechanisms.&lt;/p&gt;&#xA;&lt;p&gt;In other words, the system cannot solely rely on the model reporting, &amp;ldquo;I have completed it,&amp;rdquo; but must be able to see through tests, logs, page verifications, operational metrics, manual reviews, and benchmarks what it has done, to what extent, and the quality of the results.&lt;/p&gt;&#xA;&lt;p&gt;Trust in an Agent&amp;rsquo;s results cannot rest solely on the model&amp;rsquo;s self-reporting; there must be external feedback mechanisms.&lt;/p&gt;&#xA;&lt;p&gt;Anthropic’s harness design articles also discuss similar issues: to improve long-term application development performance, having an agent loop is insufficient; a stronger environmental and feedback framework is also needed.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw’s strategy here is institutionalized: constraining results through rules, sandboxes, and controlled execution.&lt;/p&gt;&#xA;&lt;p&gt;Hermes, on the other hand, focuses on learning loops: gradually consolidating execution results, error paths, and successful experiences into Skills or Memory.&lt;/p&gt;&#xA;&lt;p&gt;Thus, the goal of this layer is to prevent models from blindly giving themselves high scores.&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-7-interruption-and-recovery&#34;&gt;Layer 7: Interruption and Recovery&#xA;&lt;/h2&gt;&lt;p&gt;This layer is key to engineering control.&lt;/p&gt;&#xA;&lt;p&gt;We are accustomed to completing tasks in one go, but the real world does not operate that way. 
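&lt;/p&gt;&#xA;&lt;p&gt;Because a run can stop partway, the minimum viable recovery mechanism is a checkpoint. The helpers below are hypothetical; real frameworks persist far richer state:&lt;/p&gt;

```python
import json
from pathlib import Path

# Hypothetical checkpoint helpers: record how far a task got and what it
# produced, so a later session can pick up instead of starting over.
def save_checkpoint(path, step, artifacts):
    Path(path).write_text(json.dumps({"step": step, "artifacts": artifacts}))

def resume_checkpoint(path):
    """Return (step, artifacts); a fresh start if nothing was saved."""
    p = Path(path)
    if not p.exists():
        return 0, {}
    state = json.loads(p.read_text())
    return state["step"], state["artifacts"]
```

&lt;p&gt;Interruption handling then reduces to calling resume_checkpoint at session start and save_checkpoint after every completed step.&lt;/p&gt;&#xA;&lt;p&gt;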
Additionally, when designing SOPs/Workflows, humans struggle with boundary backtracking, and models face similar challenges.&lt;/p&gt;&#xA;&lt;p&gt;Thus, during repeated cycles, whether the overall SOP will backtrack and how to do so becomes critical.&lt;/p&gt;&#xA;&lt;p&gt;This layer often seems tedious but becomes especially important when running, as you will find that tasks can indeed be interrupted, time out, switch sessions, or fail and require retries.&lt;/p&gt;&#xA;&lt;p&gt;As for how to resolve this:&lt;/p&gt;&#xA;&lt;p&gt;Hermes uses MEMORY, USER, session search, and external providers to systematize continuity.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw’s approach leans more towards controlled processes and traces.&lt;/p&gt;&#xA;&lt;p&gt;Therefore, in engineering systems, the final challenge is how to reconnect interrupted tasks.&lt;/p&gt;&#xA;&lt;p&gt;At this point, I believe we have clarified the discussion, and finally, let’s go through Harness using OpenClaw as a concrete example.&lt;/p&gt;&#xA;&lt;h2 id=&#34;openclaw-understanding-harness&#34;&gt;OpenClaw: Understanding Harness&#xA;&lt;/h2&gt;&lt;p&gt;Having discussed many concepts regarding Harness, what does this so-called Harness look like when an Agent framework is actually operational? 
Let’s continue with OpenClaw, which I am more familiar with.&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-1-mcptoolchain&#34;&gt;Layer 1: MCP/Toolchain&#xA;&lt;/h2&gt;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-2d2c45f15e.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-2d2c45f15e_hu_f5e7259f64fca23f.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-2d2c45f15e.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;When people mention OpenClaw, the first reaction is often Skills, which is correct; Skills are indeed core and serve as the entry point for interaction with the Agent.&lt;/p&gt;&#xA;&lt;p&gt;However, from the perspective of Harness engineering stability, the entire MCP/toolchain layer is crucial. Once an Agent starts working, it must resolve how to safely and stably connect to the real world.&lt;/p&gt;&#xA;&lt;p&gt;Skills serve as method stabilizers, preventing the model from diverging too much; the MCP/toolchain represents the capabilities themselves.&lt;/p&gt;&#xA;&lt;p&gt;If Skills encounter issues, the system may behave erratically; if Tools fail, the entire process breaks, and this is dependent on third parties, which are inherently prone to problems, including API changes, permission shifts, plugin failures, and parameter variations.&lt;/p&gt;&#xA;&lt;p&gt;Thus, the engineering system must first clearly define capability specifications. 
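&lt;/p&gt;&#xA;&lt;p&gt;A capability specification of this kind can be sketched as a registry that validates calls before they reach anything real; the registry and its shape are assumptions for illustration, not OpenClaw code:&lt;/p&gt;

```python
# Illustrative capability plane: each tool is registered with an explicit
# argument schema, and malformed calls are rejected before execution.
REGISTRY = {}

def register_tool(name, required_args, fn):
    REGISTRY[name] = {"args": set(required_args), "fn": fn}

def call_tool(name, kwargs):
    if name not in REGISTRY:
        raise KeyError("unknown tool: " + name)
    spec = REGISTRY[name]
    if set(kwargs) != spec["args"]:
        raise ValueError("bad arguments for " + name)
    return spec["fn"](**kwargs)
```

&lt;p&gt;The model never touches a tool directly; every invocation passes through the same narrow, checkable gate.&lt;/p&gt;&#xA;&lt;p&gt;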
OpenClaw exemplifies this by placing Tools, Plugins, Gateways, and external capabilities into a clearly bounded system, aiming to ensure:&lt;/p&gt;&#xA;&lt;p&gt;Can the model stably invoke tools within a constrained capability plane?&lt;/p&gt;&#xA;&lt;p&gt;For instance, a common scenario is when an external API fails.&lt;/p&gt;&#xA;&lt;p&gt;Without engineering control, the model may not distinguish whether it misunderstood or if the upstream interface is down.&lt;/p&gt;&#xA;&lt;p&gt;In this scenario, the model may flail, retrying ever harder and looping continuously, wasting tokens and time; in extreme cases, it might even inform downstream: I’ve got it done&amp;hellip;&lt;/p&gt;&#xA;&lt;p&gt;At this point, the value of Harness becomes apparent.&lt;/p&gt;&#xA;&lt;p&gt;Taking OpenClaw as an example, the correct handling approach is to treat API call failures as runtime events.&lt;/p&gt;&#xA;&lt;p&gt;Currently, OpenClaw’s strategy involves managing tool calls within a runtime plane governed by a Gateway. The specifics are too numerous to elaborate here&amp;hellip;&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-2-skills&#34;&gt;Layer 2: Skills&#xA;&lt;/h2&gt;&lt;p&gt;Once the capability foundation is established, we turn to Skills, which are crucial. 
Tools determine what can be done; Skills determine how to do these tasks specifically.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-1988de52bd.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-1988de52bd_hu_8f7e35da6bb931cf.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-1988de52bd.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Skills are inherently advantageous, as their on-demand loading can partially alleviate tool invocation errors. Their Workflow prompts further enhance stability by consolidating high-frequency task methods.&lt;/p&gt;&#xA;&lt;p&gt;However, in platform-type Agents like OpenClaw, the Skills issue is also apparent: Skills may originate from third parties, and Skills inherently enter the prompt construction chain, making the model fragile and easily polluted by malicious or low-quality prompts. Once the Skill mechanism is compromised, the Agent&amp;rsquo;s method layer may become distorted.&lt;/p&gt;&#xA;&lt;p&gt;Thus, in system engineering, the Skills mechanism should be classified under Harness. We have previously implemented similar systems.&lt;/p&gt;&#xA;&lt;p&gt;Now that Skills have been implemented at the foundational level, we are less concerned about the significance of Skills. For OpenClaw, the focus is on:&lt;/p&gt;&#xA;&lt;p&gt;How to ensure that this open mechanism for Skills does not drag the entire system down.&lt;/p&gt;&#xA;&lt;p&gt;Here, various rules are employed for constraints. You will find that engineering systems generate numerous constraints. 
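&lt;/p&gt;&#xA;&lt;p&gt;One representative constraint is path containment during skill discovery: only directories whose resolved real path stays under a configured root are accepted. A rough sketch, with illustrative names:&lt;/p&gt;

```python
from pathlib import Path

# Sketch of a containment check: resolve each candidate (following
# symlinks and ".." segments) and accept it only if it remains inside
# the configured root, which blocks path traversal and escape.
def discover_skills(root, candidates):
    root = Path(root).resolve()
    accepted = []
    for candidate in candidates:
        real = Path(candidate).resolve()
        if real == root or root in real.parents:
            accepted.append(real)
    return accepted
```

&lt;p&gt;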
For instance, OpenClaw emphasizes that third-party Skills are inherently untrustworthy.&lt;/p&gt;&#xA;&lt;p&gt;Furthermore, OpenClaw must also implement further fallback strategies. The approach here is to place Skills within a controlled loading chain; for example, plugin Skills are only low-priority paths, and for Skills with the same name, bundled/managed/agent/workspace Skills take precedence; skill discovery for workspaces and extra directories only accepts resolved real paths that remain within the configuration root directory, to avoid path traversal and arbitrary escape.&lt;/p&gt;&#xA;&lt;p&gt;Many fallback strategies are employed here, but we will not delve into the details&amp;hellip;&lt;/p&gt;&#xA;&lt;h2 id=&#34;layer-3-runtime&#34;&gt;Layer 3: Runtime&#xA;&lt;/h2&gt;&lt;p&gt;The subsequent issue is not about tool and skill invocation but rather how to sustain the execution of complex tasks.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw enters a loop when executing complex tasks: first understanding the problem, then deciding the next step, invoking tools, reading files, running code, checking results, and determining what to do next, continuing until the task is genuinely closed.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-1e94edd0c7.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-5de004be60/img-1e94edd0c7_hu_b0328551641036e9.jpeg 800w, https://acousticinfoplus.com/posts/note-5de004be60/img-1e94edd0c7.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;However, the reality is that bugs frequently occur. 
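&lt;/p&gt;&#xA;&lt;p&gt;One concrete runtime safeguard for this: detect when the Agent keeps issuing the identical tool call and halt the loop instead of burning tokens. The class below is a hypothetical sketch, not OpenClaw code:&lt;/p&gt;

```python
# Sketch of a stuck-loop guard: count identical (tool, arguments) calls
# and raise once a repeat threshold is hit, surfacing the loop to the
# runtime instead of letting it spin.
class LoopGuard:
    def __init__(self, max_repeats=3):
        self.max_repeats = max_repeats
        self.counts = {}

    def check(self, tool, args):
        key = (tool, repr(sorted(args.items())))
        self.counts[key] = self.counts.get(key, 0) + 1
        if self.counts[key] == self.max_repeats:
            raise RuntimeError("repeated call detected: " + tool)
```

&lt;p&gt;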
Once the model enters a long task, various issues may arise: it might prematurely conclude the task, claiming completion when it is not; it might loop back to the start, repeatedly invoking the same tool.&lt;/p&gt;&#xA;&lt;p&gt;Thus, from an engineering perspective, we hope to have a mechanism to clarify: how far along the current task is, who should do the next step, when to continue, when to pause, and when to revert.&lt;/p&gt;&#xA;&lt;p&gt;OpenClaw’s Runtime assumes this responsibility, attempting to organize the Agent&amp;rsquo;s actions from a series of scattered actions into a coherent process that can genuinely advance the task.&lt;/p&gt;&#xA;&lt;p&gt;This Runtime includes the entire project&amp;rsquo;s observability and interruption/retry logic, which is quite complex, so we will not elaborate further&amp;hellip;&lt;/p&gt;&#xA;&lt;p&gt;However, perhaps you now have a deeper understanding of what Harness is.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&#xA;&lt;/h2&gt;&lt;p&gt;Harness is not a module but a path—a methodology that emerges from tackling hard problems.&lt;/p&gt;&#xA;&lt;p&gt;You can clearly see how a Demo Agent progresses to OpenClaw: it begins by merely invoking tools; then realizes tools are unstable and adds rules; next, it finds rules insufficient and incorporates Skills; it then discovers Skills are inadequate and adds Runtime and Workflow; finally, it recognizes tasks may falsely appear complete and must supplement scoring and observability; and ultimately, it understands that tasks can be interrupted and must enhance recovery capabilities.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Understanding the Difference Between AI and OpenClaw</title>
            <link>https://acousticinfoplus.com/posts/note-4603d391a5/</link>
            <pubDate>Fri, 24 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-4603d391a5/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;Recently, many readers have been confused about whether OpenClaw is considered AI and how it differs from ChatGPT and other models. In this article, we will clarify these concepts in simple terms.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;1535&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-4603d391a5/img-8551ab66a0.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-4603d391a5/img-8551ab66a0_hu_b66b4b563cfb2e50.jpeg 800w, https://acousticinfoplus.com/posts/note-4603d391a5/img-8551ab66a0_hu_c16028fb40719cc5.jpeg 1600w, https://acousticinfoplus.com/posts/note-4603d391a5/img-8551ab66a0_hu_f24792b98c130960.jpeg 2400w, https://acousticinfoplus.com/posts/note-4603d391a5/img-8551ab66a0.jpeg 2730w&#34; width=&#34;2730&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;core-difference-a-simple-analogy&#34;&gt;Core Difference: A Simple Analogy&#xA;&lt;/h2&gt;&lt;p&gt;Let’s use a relatable analogy to explain the differences:&lt;/p&gt;&#xA;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;Role&lt;/th&gt;&#xA;          &lt;th&gt;Ability Features&lt;/th&gt;&#xA;          &lt;th&gt;Analogy Object&lt;/th&gt;&#xA;          &lt;th&gt;Typical Representatives&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Regular AI Model&lt;/td&gt;&#xA;          &lt;td&gt;Can only respond passively; lacks persistent memory&lt;/td&gt;&#xA;          &lt;td&gt;Super Scholar (smart but restrained)&lt;/td&gt;&#xA;          &lt;td&gt;GPT, Wenxin Yiyan, Kimi&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;OpenClaw (AI 
Agent)&lt;/td&gt;&#xA;          &lt;td&gt;Can think and act; has long-term memory&lt;/td&gt;&#xA;          &lt;td&gt;All-round Assistant (smart brain with flexibility)&lt;/td&gt;&#xA;          &lt;td&gt;OpenClaw, QClaw (Tencent&amp;rsquo;s product based on OpenClaw)&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;p&gt;In simple terms, the AI model is like a brain, while OpenClaw is a complete &amp;ldquo;human&amp;rdquo; equipped with eyes, ears, hands, and a notebook.&lt;/p&gt;&#xA;&lt;h2 id=&#34;three-real-world-scenarios-to-highlight-differences&#34;&gt;Three Real-World Scenarios to Highlight Differences&#xA;&lt;/h2&gt;&lt;p&gt;Let’s look at three everyday examples to illustrate the differences:&lt;/p&gt;&#xA;&lt;h3 id=&#34;scenario-1-booking-flights-and-arranging-itineraries&#34;&gt;Scenario 1: Booking Flights and Arranging Itineraries&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Regular AI:&lt;/strong&gt; If you ask it, &amp;ldquo;Help me book a flight to Sanya next Friday,&amp;rdquo; it will respond, &amp;ldquo;Please open the XX travel app, search for departure and destination, select the date, fill in passenger information&amp;hellip;&amp;rdquo; (all text guidance, and you still have to do it yourself).&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;OpenClaw:&lt;/strong&gt; You tell it, &amp;ldquo;Help me book a flight to Sanya next Friday, budget under 1500, window seat,&amp;rdquo; and it will automatically open your booking software, check flights, compare prices, select seats, complete payment, and even send you a confirmation along with hotel and transfer arrangements!&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;scenario-2-handling-work-emails&#34;&gt;Scenario 2: Handling Work Emails&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Regular AI:&lt;/strong&gt; You ask it, &amp;ldquo;Help me process today’s emails,&amp;rdquo; and it replies, 
&amp;ldquo;Please forward the email content to me, and I can help you draft a response&amp;hellip;&amp;rdquo; (you still have to manually forward the emails).&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;OpenClaw:&lt;/strong&gt; After you grant it access to your email, it logs in automatically, categorizes emails, marks important information, replies to routine inquiries, summarizes emails that need your attention, and can even import customer inquiries into Excel to create reports!&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h3 id=&#34;scenario-3-managing-personal-schedules&#34;&gt;Scenario 3: Managing Personal Schedules&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Regular AI:&lt;/strong&gt; You ask, &amp;ldquo;Remind me of the meeting at 3 PM tomorrow,&amp;rdquo; and it can only say, &amp;ldquo;I have noted it down; I will remind you at 3 PM tomorrow&amp;rdquo; (essentially just a voice memo).&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;OpenClaw:&lt;/strong&gt; You say, &amp;ldquo;Help me schedule a meeting with Zhang at 3 PM tomorrow,&amp;rdquo; and it automatically checks both your calendars, finds a suitable time, sends out meeting invites, sets reminders, and even prepares meeting materials 10 minutes beforehand, generating minutes afterward!&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;technical-insights-what-is-openclaw&#34;&gt;Technical Insights: What is OpenClaw?&#xA;&lt;/h2&gt;&lt;p&gt;You might wonder if OpenClaw is more advanced than regular AI models. The answer is no! OpenClaw is essentially an open-source AI agent execution framework (think of it as an &amp;ldquo;operating system&amp;rdquo;). 
It lacks independent thinking and relies on AI models as its &amp;ldquo;brain.&amp;rdquo; Its core values include:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Task Planning:&lt;/strong&gt; Breaking down complex commands into smaller steps (e.g., breaking down &amp;ldquo;book a flight&amp;rdquo; into checking flights, selecting seats, and payment).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Tool Invocation:&lt;/strong&gt; Automatically opening and operating various software, websites, and APIs (like its &amp;ldquo;hands and feet&amp;rdquo;).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Persistent Memory:&lt;/strong&gt; Remembering your preferences and historical actions (e.g., you prefer window seats and have a budget limit).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Autonomous Execution:&lt;/strong&gt; Completing entire processes without requiring step-by-step guidance.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;In contrast, regular AI models are like a &amp;ldquo;genius&amp;rdquo; trapped in a chat box—knowledgeable and articulate but unable to interact proactively with the outside world.&lt;/p&gt;&#xA;&lt;h2 id=&#34;how-to-use-two-options-for-different-users&#34;&gt;How to Use? Two Options for Different Users&#xA;&lt;/h2&gt;&lt;p&gt;Now that you understand the differences, how should regular users utilize these tools? Here are two scenarios:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;For Non-Technical Users (Recommended):&lt;/strong&gt;&#xA;Use QClaw (Tencent&amp;rsquo;s Lobster AI)! It is a ready-to-use AI assistant based on OpenClaw, with one-click installation, a graphical interface, and no coding required. 
You can use it to:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Remotely control your computer via WeChat (e.g., turn it off or transfer files while away).&lt;/li&gt;&#xA;&lt;li&gt;Automatically organize desktop files and categorize photos.&lt;/li&gt;&#xA;&lt;li&gt;Schedule posts and respond to WeChat messages automatically.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;For Tech Enthusiasts (Advanced):&lt;/strong&gt;&#xA;Try the native OpenClaw! It is 100% open-source, offering maximum freedom. You can:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Deploy it locally for better data security.&lt;/li&gt;&#xA;&lt;li&gt;Integrate any AI model you like (GPT, Claude, Wenxin Yiyan, etc.).&lt;/li&gt;&#xA;&lt;li&gt;Customize tools and processes to create your own AI assistant.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;h2 id=&#34;conclusion-which-one-to-choose-consider-your-needs&#34;&gt;Conclusion: Which One to Choose? Consider Your Needs!&#xA;&lt;/h2&gt;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;Need Type&lt;/th&gt;&#xA;          &lt;th&gt;Recommended Choice&lt;/th&gt;&#xA;          &lt;th&gt;Core Reason&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Research, writing, solving math problems&lt;/td&gt;&#xA;          &lt;td&gt;Regular AI Model&lt;/td&gt;&#xA;          &lt;td&gt;Fast response, comprehensive knowledge, suitable for pure information processing&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Booking flights, sending emails, managing schedules&lt;/td&gt;&#xA;          &lt;td&gt;OpenClaw/QClaw&lt;/td&gt;&#xA;          &lt;td&gt;Can take action, freeing your hands, completing tasks end-to-end&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;Deep customization, data privacy required&lt;/td&gt;&#xA;          &lt;td&gt;Native OpenClaw&lt;/td&gt;&#xA;     
     &lt;td&gt;Open-source, local deployment, complete control&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;p&gt;&lt;strong&gt;Discussion Topic:&lt;/strong&gt; What do you most want AI to help you with?&lt;/p&gt;&#xA;&lt;p&gt;After reading this, do you finally understand the difference between AI and OpenClaw? Regardless of the tool, they are designed to help us improve efficiency and free our hands.&lt;/p&gt;&#xA;&lt;p&gt;I would love for OpenClaw to help me filter quality comments and reply to them, allowing me more time to engage with everyone! What about you? Let me know in the comments what you most want AI to solve, and I will create a custom automation plan using OpenClaw for the top three most liked comments!&lt;/p&gt;&#xA;&lt;p&gt;If this article helped you, don&amp;rsquo;t forget to like and follow for more practical AI tool tips to stay ahead in the AI era!&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>AI Security Governance Forum Held at 2026 World Internet Conference</title>
            <link>https://acousticinfoplus.com/posts/note-8d4b919f8b/</link>
            <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-8d4b919f8b/</guid>
            <description>&lt;p&gt;Recently, the AI Security Governance Forum of the 2026 World Internet Conference Asia-Pacific Summit was held at the Hong Kong Convention and Exhibition Centre. The forum focused on cutting-edge topics in global AI governance, bringing together top minds and key players from government departments, international organizations, research institutions, universities, think tanks, and the industry to explore practical, sustainable, and actionable cooperation paths.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-4f6ac1bb21.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-beccac5886.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-a4481579aa.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-8804f76d39.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-25504a718f.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-3bd1755ed5.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-43829e1da2.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-76fcd7e290.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-86adedc333.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-40836a9fce.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-3d79fb76c5.jpeg&#34; width=&#34;800&#34;&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;135px&#34; data-flex-grow=&#34;56&#34; height=&#34;1422&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-8d4b919f8b/img-813ccf1c3a.jpeg&#34; width=&#34;800&#34;&gt;&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>OpenAI ChatGPT Weekly Active Users Near 1 Billion, Women Over 50%</title>
            <link>https://acousticinfoplus.com/posts/note-fb29da2211/</link>
            <pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-fb29da2211/</guid>
            <description>&lt;h2 id=&#34;openai-chatgpt-weekly-active-users-near-1-billion-women-over-50&#34;&gt;OpenAI ChatGPT Weekly Active Users Near 1 Billion, Women Over 50%&#xA;&lt;/h2&gt;&lt;p&gt;On April 17, OpenAI announced that its ChatGPT user demographics have undergone a historic change. When ChatGPT was launched in 2022, the user base was approximately 80% male and 20% female. Currently, the proportion of female users has surpassed that of male users, exceeding 50%.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;465px&#34; data-flex-grow=&#34;193&#34; height=&#34;435&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-fb29da2211/img-0c1fcfe52f.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-fb29da2211/img-0c1fcfe52f_hu_86a9a89654f0aa00.jpeg 800w, https://acousticinfoplus.com/posts/note-fb29da2211/img-0c1fcfe52f.jpeg 843w&#34; width=&#34;843&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;According to the reported weekly active user count, this means that nearly 500 million women are regularly using the tool, with the total number of ChatGPT users approaching 1 billion.&lt;/p&gt;&#xA;&lt;p&gt;Computing power has become a key factor in the success of AI. OpenAI&amp;rsquo;s available computing power has increased from 0.2 gigawatts in 2023 to about 1.9 gigawatts by 2025, representing an annual growth of approximately three times. The company is heavily investing in infrastructure and has signed agreements for over 30 gigawatts of power with partners like NVIDIA, aiming to achieve 30 gigawatts of computing power by 2030.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Has Claude Opus Become Less Intelligent?</title>
            <link>https://acousticinfoplus.com/posts/note-b0d8a013d2/</link>
            <pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-b0d8a013d2/</guid>
            <description>&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;1440&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-bf75a51ceb.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-bf75a51ceb_hu_cc7e5adbd13beaa1.jpeg 800w, https://acousticinfoplus.com/posts/note-b0d8a013d2/img-bf75a51ceb_hu_70b2174cdb507ccc.jpeg 1600w, https://acousticinfoplus.com/posts/note-b0d8a013d2/img-bf75a51ceb_hu_82a96311617b3216.jpeg 2400w, https://acousticinfoplus.com/posts/note-b0d8a013d2/img-bf75a51ceb.jpeg 2560w&#34; width=&#34;2560&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Claude Opus seems to have become less intelligent recently.&lt;/p&gt;&#xA;&lt;p&gt;More users are expressing a vague feeling that while the model doesn’t make obvious mistakes, it no longer feels as &amp;ldquo;smart&amp;rdquo; as before.&lt;/p&gt;&#xA;&lt;p&gt;Responses are faster, reasoning is shorter, and it sometimes appears to skip essential steps, becoming more perfunctory.&lt;/p&gt;&#xA;&lt;p&gt;If this were just an isolated case, users might suspect it’s their issue, but as similar feedback increases, it becomes more than just a feeling.&lt;/p&gt;&#xA;&lt;p&gt;Videos have even surfaced online, joking that the current Opus resembles a fierce lion that has been declawed, revealing it to be just a dog.&lt;/p&gt;&#xA;&lt;p&gt;A more direct phrase has started circulating: Opus has been nerfed!&lt;/p&gt;&#xA;&lt;p&gt;Is this true? 
If so, why would it be nerfed?&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;476px&#34; data-flex-grow=&#34;198&#34; height=&#34;256&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-eb2bd1d1be.jpeg&#34; width=&#34;508&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;decline-in-reasoning-depth-by-67&#34;&gt;Decline in Reasoning Depth by 67%&#xA;&lt;/h3&gt;&lt;p&gt;Initially, only a few users complained that Claude Opus had &amp;ldquo;become lazy&amp;rdquo; or &amp;ldquo;wasn’t as smart as before.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;They might have noticed occasional low-level mistakes or fewer reasoning steps in complex tasks.&lt;/p&gt;&#xA;&lt;p&gt;In a sense, collaborating with the model is similar to interacting with a person; when a previously cooperative &amp;ldquo;colleague&amp;rdquo; suddenly changes, it’s unsettling.&lt;/p&gt;&#xA;&lt;p&gt;Most people&amp;rsquo;s first reaction is to doubt themselves: Is the prompt not well-written? Is the task inherently unsuitable? 
Surely, this is just a coincidence?&lt;/p&gt;&#xA;&lt;p&gt;However, soon similar feedback began to appear densely in the Claude community on Reddit, with consistent descriptions:&lt;/p&gt;&#xA;&lt;p&gt;Some users noted it no longer reads code carefully; others observed it provides answers faster but often omits crucial steps; and some found it more prone to &amp;ldquo;prematurely ending&amp;rdquo; long tasks, as if assuming the job was done.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;329px&#34; data-flex-grow=&#34;137&#34; height=&#34;400&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-1246eeb141.jpeg&#34; width=&#34;549&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;When different users across various scenarios start repeating the same type of issues, it seems less like a mere &amp;ldquo;feeling off&amp;rdquo; and more like a change in behavior patterns.&lt;/p&gt;&#xA;&lt;p&gt;In other words, it’s not that the feeling is wrong; the model is genuinely changing.&lt;/p&gt;&#xA;&lt;p&gt;What escalated the discussion was this number: some users compared historical interaction logs while using Claude Code and found that the reasoning process in complex tasks had noticeably shortened, with reasoning depth declining by 67% since the February update.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;694px&#34; data-flex-grow=&#34;289&#34; height=&#34;239&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-81583cf56c.jpeg&#34; width=&#34;692&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;(Reference link: &lt;a class=&#34;link&#34; 
href=&#34;https://github.com/anthropics/claude-code/issues/42796&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;https://github.com/anthropics/claude-code/issues/42796&lt;/a&gt;)&lt;/p&gt;&#xA;&lt;p&gt;The author candidly explained that the 67% figure is based on an estimated correlation between signature length and the length of thought content, not a direct measurement. They also mentioned that logs from January were deleted, making baseline comparisons less accurate.&lt;/p&gt;&#xA;&lt;p&gt;In contrast, what’s more compelling in the report are the behavioral changes. For instance, the ratio of read:edit (reading code vs. modifying code) dropped from 6.6 to 2.0; after March 8, 173 violations were captured by the stop hook, whereas previously there were none.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;607px&#34; data-flex-grow=&#34;252&#34; height=&#34;274&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-386c99f55b.jpeg&#34; width=&#34;693&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;However, the precision of the numbers isn’t as crucial as the fact that they quantify an otherwise vague experiential issue into a trend that can be discussed.&lt;/p&gt;&#xA;&lt;p&gt;Thus, a new term began to circulate in the community: &amp;ldquo;AI shrinkflation.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Shrinkflation is an economic term referring to the reduction in size or quantity of a product while the price remains the same. 
Here, it directly means that the actual capabilities provided to users have diminished, even though the model still bears the same name.&lt;/p&gt;&#xA;&lt;h3 id=&#34;the-problem-behind-the-perfunctoriness&#34;&gt;The Problem Behind the Perfunctoriness&#xA;&lt;/h3&gt;&lt;p&gt;In contrast to the community&amp;rsquo;s intense reactions, Anthropic has not directly acknowledged that the &amp;ldquo;model has weakened.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Boris, the head of Claude Code development, explained that these changes stem from adjustments at the system level, including changes in tool invocation methods, reasoning strategies, and resource allocation mechanisms, rather than a decline in the model&amp;rsquo;s inherent capabilities.&lt;/p&gt;&#xA;&lt;p&gt;He provided an example: in Claude Code, some issues are believed to originate from the toolchain and system prompts, not the model itself. Meanwhile, under high load, the system needs to control computing power, tokens, and requests, which can also affect user experience.&lt;/p&gt;&#xA;&lt;p&gt;In the latest version, Anthropic introduced a mechanism called &amp;ldquo;adaptive thinking,&amp;rdquo; where the model dynamically decides how much reasoning to use based on task complexity.&lt;/p&gt;&#xA;&lt;p&gt;In other words, the model hasn’t deteriorated; it has begun to &amp;ldquo;decide for itself&amp;rdquo; how much computing power to employ.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;586px&#34; data-flex-grow=&#34;244&#34; height=&#34;283&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-04ff629c98.jpeg&#34; width=&#34;692&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;(Reference link: &lt;a class=&#34;link&#34; href=&#34;https://news.ycombinator.com/item?id=47660925&#34;  target=&#34;_blank&#34; 
rel=&#34;noopener&#34;&#xA;    &gt;https://news.ycombinator.com/item?id=47660925&lt;/a&gt;)&lt;/p&gt;&#xA;&lt;p&gt;From an engineering perspective, this is a reasonable optimization: less thinking for simple tasks, more for complex ones, to enhance overall efficiency.&lt;/p&gt;&#xA;&lt;p&gt;The problem, however, is that efficiency optimization and capability reduction are indistinguishable from the user&amp;rsquo;s point of view.&lt;/p&gt;&#xA;&lt;p&gt;When a model starts reading less context, answering faster, and ending tasks prematurely more often, users perceive this not as optimization but as perfunctoriness.&lt;/p&gt;&#xA;&lt;p&gt;Moreover, this adaptive reasoning mechanism can indeed create subjective discomfort.&lt;/p&gt;&#xA;&lt;p&gt;To return to the interpersonal analogy: things started off well, so why does it now feel as though my concerns no longer matter?&lt;/p&gt;&#xA;&lt;p&gt;This discomfort was quickly amplified by another change: before its release, Mythos attracted significant attention, with Claude Mythos Preview directly labeled by Anthropic as the &amp;ldquo;next generation of capability leap,&amp;rdquo; demonstrating far superior abilities in coding and security tasks. 
For now, it is available only to a select group of institutions, to reinforce &amp;ldquo;the world&amp;rsquo;s most critical software systems.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;When a &amp;ldquo;stronger new model&amp;rdquo; appears alongside an &amp;ldquo;old model that feels diminished,&amp;rdquo; a speculation often mentioned in the community begins to take shape: nerfing the old model to elevate the new one creates the impression of a significant upgrade.&lt;/p&gt;&#xA;&lt;p&gt;There is no direct evidence for this theory, but more and more users find it convincing.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;417px&#34; data-flex-grow=&#34;173&#34; height=&#34;310&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-9d2c5a8393.jpeg&#34; width=&#34;539&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;models-no-longer-stable&#34;&gt;Models No Longer Stable&#xA;&lt;/h3&gt;&lt;p&gt;In truth, episodes like this are nothing new in AI.&lt;/p&gt;&#xA;&lt;p&gt;As early as 2023, research compared GPT-4&amp;rsquo;s performance at different times, finding that the same model exhibited noticeable changes in reasoning methods and output behaviors over a few months. 
These changes were later explained as the result of multiple factors, including adjustments in reasoning strategies, tightening of safety protocols, and optimizations for cost and response speed.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;460px&#34; data-flex-grow=&#34;191&#34; height=&#34;278&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-30432b6492.jpeg&#34; width=&#34;533&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Setting conspiracy theories aside, if there is indeed a degree of resource bias, it is quite normal in the AI industry: whether OpenAI or Google, almost all companies prioritize optimizing the latest generation of models, while older models gradually become marginalized.&lt;/p&gt;&#xA;&lt;p&gt;Computing power is both a cost and a productivity factor. When the upper limit of a new model’s capabilities is higher and its potential value greater, investing more resources into it is a rational choice.&lt;/p&gt;&#xA;&lt;p&gt;In this process, the state of the old model will naturally change: being &amp;ldquo;downgraded,&amp;rdquo; reasoning depth compressed, and resource allocation readjusted&amp;hellip; all of these can be understood as a kind of engineering trade-off.&lt;/p&gt;&#xA;&lt;p&gt;However, understanding doesn’t equate to acceptance; the old model being altered without warning while the new model remains unavailable to the public is hard for anyone to accept.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;473px&#34; data-flex-grow=&#34;197&#34; height=&#34;351&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-b0d8a013d2/img-9c9224a7b3.jpeg&#34; width=&#34;692&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;From the user’s perspective, the most frustrating aspect isn’t the model’s &amp;ldquo;diminished intelligence&amp;rdquo; but its &amp;ldquo;instability.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;When a model transitions from a stable tool to a constantly changing system that makes its own &amp;ldquo;better adjustments&amp;rdquo; without warning, version notes, or clear boundaries, it becomes a problem.&lt;/p&gt;&#xA;&lt;p&gt;As a user, you don’t know when it changed, what exactly changed, or whether these changes will impact your ongoing tasks.&lt;/p&gt;&#xA;&lt;p&gt;You can only feel that it has changed, and it’s not as useful as it once was.&lt;/p&gt;&#xA;&lt;p&gt;At this point, a new model appears before you, seeming more stable and reliable, and perhaps easier to use.&lt;/p&gt;&#xA;&lt;p&gt;Thus, the choice becomes nuanced: it seems you are no longer actively choosing the new model, but rather being pushed towards it by the changes in the old model.&lt;/p&gt;&#xA;&lt;p&gt;Even if you know the new model may someday become the next old model, unexpectedly &amp;ldquo;optimized&amp;rdquo; into an unpleasant version of itself, the gap in front of you is already plain to see.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Anthropic Launches Advisor Tool for Enhanced AI Collaboration</title>
            <link>https://acousticinfoplus.com/posts/note-f48e616802/</link>
            <pubDate>Mon, 13 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-f48e616802/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;Anthropic has released a new API tool called Advisor Strategy, which allows Sonnet or Haiku to automatically consult Opus for guidance when encountering challenges during task execution. This reverse collaboration model, where smaller models work and larger models provide insights, brings the intelligence closer to Opus while keeping costs near Sonnet&amp;rsquo;s, potentially resulting in lower overall token consumption.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;514px&#34; data-flex-grow=&#34;214&#34; height=&#34;420&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-f48e616802/img-7ff4dde8ed.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-f48e616802/img-7ff4dde8ed_hu_115e1a050a7559f3.jpeg 800w, https://acousticinfoplus.com/posts/note-f48e616802/img-7ff4dde8ed.jpeg 900w&#34; width=&#34;900&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;how-it-works&#34;&gt;How It Works&#xA;&lt;/h2&gt;&lt;p&gt;The Advisor Strategy enables Sonnet (or Haiku) to act as the Executor, executing tasks, calling tools, reading results, and iterating. When it reaches a decision point where its judgment is insufficient, it consults Opus as the Advisor. Opus receives shared context and returns a plan, correction, or stop signal, allowing Sonnet to continue its work.&lt;/p&gt;&#xA;&lt;p&gt;The Advisor does not call tools or produce user-facing outputs; it only provides guidance. 
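Mechanically, the Advisor is configured as a tool on the request. Below is a minimal Python sketch of what such a Messages API request body might look like; only the advisor_20260301 tool type, the Opus Advisor model, and the max_uses cap come from this article, while every other field name and the exact model identifiers are illustrative assumptions, not a confirmed API shape:

```python
# Hypothetical Messages API request body for the Advisor Strategy.
# Only "advisor_20260301", the Opus Advisor role, and "max_uses" are
# named in the announcement; everything else is an illustrative guess.
request_body = {
    "model": "claude-sonnet-4-5",        # Executor, billed at Sonnet rates
    "max_tokens": 4096,
    "tools": [
        {
            "type": "advisor_20260301",  # the new Advisor tool type
            "model": "claude-opus-4-6",  # Advisor, billed at Opus rates
            "max_uses": 3,               # consultations allowed per request
        },
        # ...the Executor's ordinary tools would follow here
    ],
    "messages": [
        {"role": "user", "content": "Fix the failing integration test."},
    ],
}
```

A single such request would then carry the entire Executor-Advisor collaboration server-side.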
Advanced reasoning only intervenes when the Executor needs it, and the entire process is billed at the Executor&amp;rsquo;s rate.&lt;/p&gt;&#xA;&lt;h2 id=&#34;evaluation-data&#34;&gt;Evaluation Data&#xA;&lt;/h2&gt;&lt;h3 id=&#34;sonnet--opus-advisor&#34;&gt;Sonnet + Opus Advisor&#xA;&lt;/h3&gt;&lt;p&gt;In the SWE-bench Multilingual tests, the combination of Sonnet and Advisor improved performance by 2.7 percentage points compared to Sonnet running solo, while reducing the cost per task by 11.9%. The cost reduction is attributed to the Advisor&amp;rsquo;s involvement, which helps the Executor avoid unnecessary detours and reduces total token consumption.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-f48e616802/img-2e8860d670.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-f48e616802/img-2e8860d670_hu_f28c6ac963d99568.jpeg 800w, https://acousticinfoplus.com/posts/note-f48e616802/img-2e8860d670.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;haiku--opus-advisor&#34;&gt;Haiku + Opus Advisor&#xA;&lt;/h3&gt;&lt;p&gt;In the BrowseComp tests, Haiku combined with Advisor scored 41.2%, more than double Haiku running solo (19.7%). 
Although this score is 29% lower than Sonnet&amp;rsquo;s solo performance, the cost was reduced by 85%.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-f48e616802/img-851429106e.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-f48e616802/img-851429106e_hu_3b2ac1fa602352a5.jpeg 800w, https://acousticinfoplus.com/posts/note-f48e616802/img-851429106e.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;usage&#34;&gt;Usage&#xA;&lt;/h2&gt;&lt;p&gt;Using the API is straightforward. Simply add an advisor_20260301 type tool to the tools array in the Messages API request, specifying the Advisor model as Opus and setting a max_uses limit for how many times the Advisor can be consulted per request.&lt;/p&gt;&#xA;&lt;p&gt;The entire model handoff occurs in a single /v1/messages request, eliminating the need for additional network calls or manual context management. The Executor decides when to call the Advisor, and Anthropic routes the selected context to the Advisor model. After receiving a plan, the Executor continues execution.&lt;/p&gt;&#xA;&lt;p&gt;Billing is based on the token usage of the Advisor and Executor models. The Advisor&amp;rsquo;s tokens are billed at Opus&amp;rsquo;s rate ($5/$25), while the Executor&amp;rsquo;s tokens are billed at Sonnet&amp;rsquo;s ($3/$15) or Haiku&amp;rsquo;s ($1/$5) rates. 
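As a back-of-the-envelope illustration of how a blended Executor-plus-Advisor bill comes together, here is a short Python sketch using the per-million-token rates quoted above; the token volumes are invented for illustration and are not benchmark figures:

```python
# Blended cost sketch for the Advisor pattern. The rates are the
# per-million-token prices quoted above; token counts are assumptions.
RATES = {
    "opus":   {"in": 5.0, "out": 25.0},   # Advisor rate
    "sonnet": {"in": 3.0, "out": 15.0},   # Executor rate
}

def cost(model, tokens_in, tokens_out):
    """Dollar cost of one call, with rates quoted per million tokens."""
    r = RATES[model]
    return (tokens_in * r["in"] + tokens_out * r["out"]) / 1_000_000

# Assumed task: the Executor handles 40k input / 8k output tokens and
# consults the Advisor once, which reads 10k tokens of shared context
# and returns a ~500-token plan.
blended = cost("sonnet", 40_000, 8_000) + cost("opus", 10_000, 500)
opus_solo = cost("opus", 50_000, 8_500)  # running Opus end to end instead
print(f"blended: ${blended:.4f} vs Opus solo: ${opus_solo:.4f}")
```

On these assumed numbers the blended run costs roughly a third less than running Opus end to end, which matches the article's direction, though the real ratio depends entirely on the workload.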
Since the Advisor typically generates a short plan (400-700 tokens), the overall cost is significantly lower than running Opus throughout.&lt;/p&gt;&#xA;&lt;h2 id=&#34;early-user-feedback&#34;&gt;Early User Feedback&#xA;&lt;/h2&gt;&lt;p&gt;Eric Simmons, CEO of Bolt, noted improved architectural decisions on complex tasks without extra overhead on simpler tasks. Kay Zhu, co-founder and CTO of Genspark, observed clear improvements in agent rounds, tool calls, and overall scores compared to their own planning tools. Anuraj Pandey, a machine learning engineer at Eve Legal, mentioned that Advisor allowed Haiku 4.5 to consult Opus 4.6 for structured document extraction, achieving state-of-the-art quality at a fraction of the cost.&lt;/p&gt;&#xA;&lt;h2 id=&#34;key-takeaways&#34;&gt;Key Takeaways&#xA;&lt;/h2&gt;&lt;ol&gt;&#xA;&lt;li&gt;This is the first time Anthropic has provided native support for model collaboration at the API level. Previously, coordinating Sonnet and Opus required custom orchestration logic and context management.&lt;/li&gt;&#xA;&lt;li&gt;The pricing logic is clever; the Advisor outputs short plans (400-700 tokens) at a low cost, which can help the Executor avoid costly detours, leading to lower overall expenses.&lt;/li&gt;&#xA;&lt;li&gt;The combination of Haiku and Opus Advisor is noteworthy, achieving competitive results at a significantly lower price, making it suitable for large-scale, cost-sensitive agent deployments.&lt;/li&gt;&#xA;&lt;li&gt;Anthropic continues to accelerate its product cadence: Mythos, Managed Agents, and the Advisor Tool have all been released recently, reflecting rapid growth in its product line.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;</description>
        </item><item>
            <title>Claude Launches Managed Agents for Enterprise Use</title>
            <link>https://acousticinfoplus.com/posts/note-25fb073712/</link>
            <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-25fb073712/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;Recently, Claude launched its &amp;ldquo;Enterprise Edition&amp;rdquo; service, introducing &lt;strong&gt;Claude Managed Agents&lt;/strong&gt;, which has quickly drawn attention from the open-source project &amp;ldquo;Multica&amp;rdquo;!&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;218px&#34; data-flex-grow=&#34;91&#34; height=&#34;1044&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-b9064eb598.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-b9064eb598_hu_cd451c7bfdb3d1f.jpeg 800w, https://acousticinfoplus.com/posts/note-25fb073712/img-b9064eb598.jpeg 952w&#34; width=&#34;952&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;what-are-claude-managed-agents&#34;&gt;What are Claude Managed Agents?&#xA;&lt;/h2&gt;&lt;p&gt;Claude Managed Agents is a &lt;strong&gt;modular API suite&lt;/strong&gt; designed to help enterprises and teams scale the construction and deployment of cloud-hosted intelligent agents. 
It &lt;strong&gt;deeply integrates a performance-optimized agent runtime framework with production-grade infrastructure&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;Users simply describe their needs in natural language, or upload a YAML configuration file, to define the agent they want to run and set its constraints; the platform handles the remaining operational and infrastructure complexity.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;435px&#34; data-flex-grow=&#34;181&#34; height=&#34;736&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-66ad4c26bf.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-66ad4c26bf_hu_88429e494c30fea9.jpeg 800w, https://acousticinfoplus.com/posts/note-25fb073712/img-66ad4c26bf.jpeg 1334w&#34; width=&#34;1334&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;key-features&#34;&gt;Key Features&#xA;&lt;/h2&gt;&lt;p&gt;The core features of Claude Managed Agents include:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Production-grade agent execution capability&lt;/strong&gt;: Sandbox isolation, authentication, and tool invocation are all configured for you.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Long-term autonomous operation&lt;/strong&gt;: Agents can run autonomously for hours, preserving progress and results even if the connection is interrupted.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Multi-agent collaborative orchestration&lt;/strong&gt;: Supports agents autonomously creating and scheduling other agents for parallel processing of complex tasks.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Reliable governance system&lt;/strong&gt;: Allows agents to access real business systems, with built-in permissions, identity management, and execution tracking for safety and 
compliance.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Previously, Anthropic focused on providing models to users, and aside from Claude Code and Cowork, did not open its infrastructure for users to run self-built agents. Now, Anthropic clearly aims to tackle both aspects.&lt;/p&gt;&#xA;&lt;p&gt;To successfully implement agents in production, it is essential to overcome technical challenges such as sandboxed code execution, checkpoint mechanisms, credential management, permission delineation, and end-to-end tracking. In the past, enterprises often spent months just building the necessary infrastructure.&lt;/p&gt;&#xA;&lt;p&gt;Claude Managed Agents directly resolves these complex issues for users.&lt;/p&gt;&#xA;&lt;p&gt;Users only need to define the agent&amp;rsquo;s task objectives, available toolsets, and operational constraints, while the platform&amp;rsquo;s infrastructure handles the subsequent scheduling. Its built-in orchestration framework automatically decides when to invoke tools, manages context strategies, and formulates recovery plans after failures.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;240px&#34; data-flex-grow=&#34;100&#34; height=&#34;1080&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-bcbfbb28e7.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-bcbfbb28e7_hu_ac1cc75f50b247fe.jpeg 800w, https://acousticinfoplus.com/posts/note-25fb073712/img-bcbfbb28e7.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;As a dedicated managed service for Claude, Claude Managed Agents allows users to set goals and success criteria, with Claude autonomously evaluating and iterating until objectives are met. 
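In code terms, the user-facing definition reduces to a small declarative object. Here is a hypothetical Python sketch mirroring the elements described above (task objectives, available toolsets, operational constraints, success criteria); every field name and value is an invented example, not the service's actual schema:

```python
# Hypothetical managed-agent definition. The service accepts natural
# language or a YAML configuration; this dict only mirrors the elements
# the article lists. All names and values are illustrative assumptions.
agent_definition = {
    "objective": "Triage incoming support tickets and draft replies",
    "tools": ["ticket_api", "knowledge_base_search"],  # available toolset
    "constraints": {
        "max_active_hours": 4,           # cap on billed session runtime
        "allowed_systems": ["support"],  # permission delineation
    },
    "success_criteria": "Every ticket is tagged and has a draft reply",
}
```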
For more granular control, it also supports the traditional &amp;ldquo;prompt-response&amp;rdquo; interaction model.&lt;/p&gt;&#xA;&lt;p&gt;In internal tests for generating structured documents, Claude Managed Agents achieved up to a 10% higher success rate compared to the standard prompt interaction model, especially excelling in high-difficulty tasks.&lt;/p&gt;&#xA;&lt;p&gt;Session tracking, integrated analytics, and fault diagnosis guidance are built directly into the Claude console, allowing users to view each tool invocation, decision-making process, and the specific reasons for any issues.&lt;/p&gt;&#xA;&lt;p&gt;However, it should be noted that &lt;strong&gt;some features of Claude Managed Agents are currently in a limited research preview phase&lt;/strong&gt;, such as advanced memory tools, multi-agent collaborative orchestration, and autonomous evaluation iterations.&lt;/p&gt;&#xA;&lt;p&gt;Anthropic has indicated that many teams have already achieved a tenfold increase in delivery speed across various production use cases using Claude Managed Agents. Examples include coding agents that can read codebases, plan fixes, and submit pull requests; productivity agents that can join projects, claim tasks, and collaborate with team members; and financial and legal agents that can process documents and extract key information.&lt;/p&gt;&#xA;&lt;p&gt;The Notion team shared their practical application:&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;Teams can directly delegate work tasks to Claude within their collaboration platform (this feature is currently in beta testing within Notion&amp;rsquo;s custom agent module). Engineers efficiently deliver code, while knowledge workers quickly create websites and presentations with it. 
Dozens of tasks can progress in parallel, and team members can collaborate around the results generated by the agents.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;h2 id=&#34;pricing&#34;&gt;Pricing&#xA;&lt;/h2&gt;&lt;p&gt;For enterprises, the most pressing concern is pricing. &lt;strong&gt;Claude Managed Agents charges based on two dimensions: Token usage and session runtime.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Token usage is billed according to the platform&amp;rsquo;s standard token pricing rules. If a network search is triggered during a session, &lt;strong&gt;it costs $10 for every thousand searches.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;332px&#34; data-flex-grow=&#34;138&#34; height=&#34;780&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-27d5c9fa1a.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-27d5c9fa1a_hu_e1f5d32ac1554603.jpeg 800w, https://acousticinfoplus.com/posts/note-25fb073712/img-27d5c9fa1a.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The active runtime of the agent is billed separately at &lt;strong&gt;$0.08 per session hour&lt;/strong&gt;. 
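Combining the billing dimensions, a per-session cost estimate can be sketched in a few lines of Python; the $10-per-thousand-searches and $0.08-per-active-hour rates come from this article, while the example workload is an assumption:

```python
# Rough per-session cost model for Claude Managed Agents, combining
# standard token charges, web-search charges, and active runtime.
# The example workload below is an illustrative assumption.
def session_cost(token_cost_usd, searches, active_hours):
    search_fee = searches * (10.0 / 1000)  # $10 per thousand searches
    runtime_fee = active_hours * 0.08      # $0.08 per active session hour
    return token_cost_usd + search_fee + runtime_fee

# e.g. $1.20 of tokens, 50 searches, 2.5 active hours:
print(f"${session_cost(1.20, 50, 2.5):.2f}")  # prints $1.90
```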
Idle periods when the agent is waiting for user input or tool responses are not charged.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;838px&#34; data-flex-grow=&#34;349&#34; height=&#34;309&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-112eb0dc0c.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-112eb0dc0c_hu_35b703f99ff5c1f9.jpeg 800w, https://acousticinfoplus.com/posts/note-25fb073712/img-112eb0dc0c.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Currently, Claude Managed Agents is officially available for use on the Claude platform. Developers can also utilize the latest Claude Code and the built-in claude-api Skill to develop applications related to managed agents. Simply enter the command &amp;ldquo;start onboarding for managed agents in Claude API&amp;rdquo; to begin.&lt;/p&gt;&#xA;&lt;h2 id=&#34;open-source-multica&#34;&gt;Open Source Multica&#xA;&lt;/h2&gt;&lt;p&gt;Now, let’s look at the core features of the open-source Multica:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Agents as collaborative teammates&lt;/strong&gt;: Agents can autonomously take on tasks, write code, report blocking issues, and synchronize task status in real-time.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Full lifecycle autonomous execution&lt;/strong&gt;: Once configured, it can run without maintenance, supporting task queuing, claiming, execution, and completion/failure management, with real-time progress updates via WebSocket.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Skill accumulation and reuse&lt;/strong&gt;: Each solution is transformed into reusable Skills shared across the team. 
Skills continuously accumulate for deployment, database migration, code review, etc.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Unified computing runtime&lt;/strong&gt;: A single console can manage all computing resources, compatible with local daemons and cloud runtimes, automatically identifying available command-line tools (CLI) and supporting real-time monitoring.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Multi-workspace isolation management&lt;/strong&gt;: Organizes work by team, with workspace-level isolation. Each workspace has its own agents, issues, and settings.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Here is a video demonstration:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;432px&#34; data-flex-grow=&#34;180&#34; height=&#34;740&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-1e0080bb84.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-1e0080bb84_hu_dc51c2d40b1fe13.jpeg 800w, https://acousticinfoplus.com/posts/note-25fb073712/img-1e0080bb84.jpeg 1334w&#34; width=&#34;1334&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&#xA;&lt;/h2&gt;&lt;p&gt;Multica was actually released earlier than Claude Managed Agents. 
Jiayuan (JY) Zhang, a core contributor to Multica, previously founded the AI vertical search engine Devv.ai for developers.&lt;/p&gt;&#xA;&lt;p&gt;Jiayuan (JY) Zhang stated that the team initially created it to solve the problem of &amp;ldquo;knowledge sharing between teams and the lack of a central hub for multi-agent collaboration&amp;rdquo; within their own team.&lt;/p&gt;&#xA;&lt;p&gt;For usage, the GitHub repository also has detailed tutorials:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;233px&#34; data-flex-grow=&#34;97&#34; height=&#34;1108&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-0b8b2865ae.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-25fb073712/img-0b8b2865ae_hu_382a1d0b55b89578.jpeg 800w, https://acousticinfoplus.com/posts/note-25fb073712/img-0b8b2865ae.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;References:&lt;/p&gt;&#xA;&lt;p&gt;[1] &lt;a class=&#34;link&#34; href=&#34;https://claude.com/blog/claude-managed-agents&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;https://claude.com/blog/claude-managed-agents&lt;/a&gt;&lt;br&gt;&#xA;[2] &lt;a class=&#34;link&#34; href=&#34;https://github.com/multica-ai/multica?tab=readme-ov-file&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;https://github.com/multica-ai/multica?tab=readme-ov-file&lt;/a&gt;&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Vibe Coding: Transforming the Role of Product Managers</title>
            <link>https://acousticinfoplus.com/posts/note-71a399eeae/</link>
            <pubDate>Fri, 03 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-71a399eeae/</guid>
<description>&lt;h2 id=&#34;vibe-coding-transforming-the-role-of-product-managers&#34;&gt;Vibe Coding: Transforming the Role of Product Managers&#xA;&lt;/h2&gt;&lt;p&gt;Vibe Coding is reshaping the way product managers work through natural language-driven development, results-oriented evaluation, and iteration-driven processes. This article delves into its core concepts, technological breakthroughs, mainstream tool selection, and how to integrate AI throughout the product development process, enabling PMs to transition from information conduits to builders who drive efficient decision-making with real pages.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;514px&#34; data-flex-grow=&#34;214&#34; height=&#34;420&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-3713dd55e2.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-3713dd55e2_hu_4ad6cfb79b7584d4.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-3713dd55e2.jpeg 900w&#34; width=&#34;900&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In 2025, a tweet sparked heated discussions in the product community; by 2026, it had become a reality in workflows. 
This article systematically outlines the core concepts of Vibe Coding, its underlying technological logic, mainstream tool selection, and its substantial impact on the role of product managers—helping you understand the buzzwords and use the tools effectively.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;565px&#34; data-flex-grow=&#34;235&#34; height=&#34;860&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-b39292e972.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-b39292e972_hu_57805596f9b6395c.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-b39292e972_hu_55e9cda49fae1a0c.jpeg 1600w, https://acousticinfoplus.com/posts/note-71a399eeae/img-b39292e972.jpeg 2026w&#34; width=&#34;2026&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;01-core-concept-clarifying-the-terms&#34;&gt;01 Core Concept: Clarifying the Terms&#xA;&lt;/h2&gt;&lt;p&gt;In the past year, &amp;ldquo;Vibe Coding&amp;rdquo; has frequently appeared in discussions among product managers, but many remain unclear about what it is and what it can do. 
This is the starting point for understanding everything.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;710px&#34; data-flex-grow=&#34;296&#34; height=&#34;720&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-58bd710a10.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-58bd710a10_hu_f3eb044d89ebf6ba.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-58bd710a10_hu_9a62f14720846cf7.jpeg 1600w, https://acousticinfoplus.com/posts/note-71a399eeae/img-58bd710a10.jpeg 2132w&#34; width=&#34;2132&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This definition has three key terms worth unpacking:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Natural Language Driven&lt;/strong&gt; — No need to master any programming syntax; describe the desired functionality and effects in everyday language, and AI translates intentions into code.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Results-Oriented Evaluation&lt;/strong&gt; — The role of humans shifts from &amp;ldquo;writing code&amp;rdquo; to &amp;ldquo;evaluating results.&amp;rdquo; You do not need to understand the code generated by AI; you only need to assess whether the output is correct and satisfactory.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Iteration-Driven Convergence&lt;/strong&gt; — It is not about generating a complete product in one go, but rather continuously iterating through multiple rounds of natural language feedback to gradually approach the goal.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1002px&#34; data-flex-grow=&#34;417&#34; height=&#34;512&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-ca09a958e4.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-ca09a958e4_hu_a271a9499db32d07.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-ca09a958e4_hu_1ca8e1ca0473851e.jpeg 1600w, https://acousticinfoplus.com/posts/note-71a399eeae/img-ca09a958e4.jpeg 2138w&#34; width=&#34;2138&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;02-underlying-drivers-why-20252026&#34;&gt;02 Underlying Drivers: Why 2025–2026?&#xA;&lt;/h2&gt;&lt;p&gt;The idea behind Vibe Coding is not new; it has been in its infancy since the era of GitHub Copilot. However, it truly became a productivity tool because three conditions matured simultaneously in 2025–2026.&lt;/p&gt;&#xA;&lt;h3 id=&#34;condition-1-leap-in-model-coding-capabilities&#34;&gt;Condition 1: Leap in Model Coding Capabilities&#xA;&lt;/h3&gt;&lt;p&gt;The authoritative benchmark for measuring AI programming capabilities is &lt;strong&gt;SWE-bench Verified&lt;/strong&gt;—testing the model&amp;rsquo;s ability to solve real GitHub issues, which is much harder than simply writing runnable code. 
The latest data shows:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 16&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1221px&#34; data-flex-grow=&#34;509&#34; height=&#34;420&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-7daece71f5.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-7daece71f5_hu_ac61990256e25c41.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-7daece71f5_hu_3342c64adf955c10.jpeg 1600w, https://acousticinfoplus.com/posts/note-71a399eeae/img-7daece71f5.jpeg 2138w&#34; width=&#34;2138&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This means that models can now handle coding tasks in real complex projects, rather than just generating isolated code snippets. &lt;strong&gt;This is the prerequisite for Vibe Coding to be truly practical.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;condition-2-toolchain-completes-the-last-mile-packaging&#34;&gt;Condition 2: Toolchain Completes the &amp;ldquo;Last Mile&amp;rdquo; Packaging&#xA;&lt;/h3&gt;&lt;p&gt;AI being able to write code is one thing; enabling non-technical personnel to access a runnable product is another. The missing element was not the model&amp;rsquo;s capability, but the &lt;strong&gt;environment configuration, debugging, and deployment barriers&lt;/strong&gt;—which kept 99% of PMs out. The new generation of tools emerging since 2025 has completely encapsulated these barriers, allowing PMs to obtain runnable pages without needing to understand any engineering infrastructure.&lt;/p&gt;&#xA;&lt;h3 id=&#34;condition-3-non-developers-become-the-main-user-group&#34;&gt;Condition 3: Non-Developers Become the Main User Group&#xA;&lt;/h3&gt;&lt;p&gt;Data from 2026 shows that &lt;strong&gt;63% of active users of Vibe Coding tools are non-developers&lt;/strong&gt;. 
Product managers, designers, and entrepreneurs have become the primary users of these tools—indicating that the ease of use has crossed the threshold of &amp;ldquo;only technical people can use it.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h2 id=&#34;03-mainstream-tools-how-to-choose-and-what-are-the-differences&#34;&gt;03 Mainstream Tools: How to Choose and What Are the Differences&#xA;&lt;/h2&gt;&lt;p&gt;Current Vibe Coding tools on the market are clearly stratified; &lt;strong&gt;there is no &amp;ldquo;best one,&amp;rdquo; only the &amp;ldquo;most suitable for the current task.&amp;rdquo;&lt;/strong&gt; Here are the core differences among mainstream tools:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 17&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;650px&#34; data-flex-grow=&#34;271&#34; height=&#34;792&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-657db5961b.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-657db5961b_hu_5f8977240c735a08.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-657db5961b_hu_2b0cbb2ff4d32800.jpeg 1600w, https://acousticinfoplus.com/posts/note-71a399eeae/img-657db5961b.jpeg 2148w&#34; width=&#34;2148&#34;&gt;&lt;img alt=&#34;Image 18&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;351px&#34; data-flex-grow=&#34;146&#34; height=&#34;1460&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-9ffac08109.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-9ffac08109_hu_2f63bfb9298f3b93.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-9ffac08109_hu_f9ba7f563c0fa6e0.jpeg 1600w, 
https://acousticinfoplus.com/posts/note-71a399eeae/img-9ffac08109.jpeg 2140w&#34; width=&#34;2140&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;04-implementation-path-how-pms-can-integrate-it-into-real-workflows&#34;&gt;04 Implementation Path: How PMs Can Integrate It into Real Workflows&#xA;&lt;/h2&gt;&lt;p&gt;Here is a workflow that has been successfully implemented in actual projects—from a requirements discussion meeting to initiating reviews with a clickable page, &lt;strong&gt;AI assists throughout the process, allowing one person to complete it within a week.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The essence is &lt;strong&gt;embedding AI into the entire delivery chain&lt;/strong&gt;, rather than just using it to generate a screenshot:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 19&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;680px&#34; data-flex-grow=&#34;283&#34; height=&#34;758&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-c35a14504d.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-c35a14504d_hu_70da2e001fd2465a.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-c35a14504d_hu_c9426311ed4aa438.jpeg 1600w, https://acousticinfoplus.com/posts/note-71a399eeae/img-c35a14504d.jpeg 2148w&#34; width=&#34;2148&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-corpus-organization-let-ai-filter-meeting-noise&#34;&gt;1. Corpus Organization: Let AI Filter Meeting Noise&#xA;&lt;/h3&gt;&lt;p&gt;Directly feed the verbatim transcript or recording of the meeting to a large model, asking it to extract three categories of information: &amp;ldquo;real needs / pseudo-needs / items to confirm.&amp;rdquo; Clearly restrict AI from adding any content not mentioned in the meeting.&lt;/p&gt;&#xA;&lt;p&gt;→ Key Point: This step is filtering, not generating. 
The value of AI lies in helping you extract effective signals from a large amount of colloquial information.&lt;/p&gt;&#xA;&lt;h3 id=&#34;2-requirement-structuring-generate-a-prd-framework-in-four-parts&#34;&gt;2. Requirement Structuring: Generate a PRD Framework in Four Parts&#xA;&lt;/h3&gt;&lt;p&gt;Use a fixed framework prompt to guide AI in outputting a four-part structure: requirement statement → solution alignment → feasibility discussion → priority sorting. After obtaining the framework, manually review and cross-check with the original corpus.&lt;/p&gt;&#xA;&lt;p&gt;→ Key Point: AI excels at filling structures but struggles with assessing importance. Priority sorting must be decided by humans and cannot be entrusted to the model.&lt;/p&gt;&#xA;&lt;h3 id=&#34;3-function-breakdown-generate-a-development-ready-prd&#34;&gt;3. Function Breakdown: Generate a Development-Ready PRD&#xA;&lt;/h3&gt;&lt;p&gt;Feed the framework back to AI, adding user stories, acceptance criteria, and data field descriptions to produce a detailed PRD that engineers can start working on without further questions.&lt;/p&gt;&#xA;&lt;p&gt;→ Key Point: The granularity standard is &amp;ldquo;no ambiguity on the development side,&amp;rdquo; not pursuing document length.&lt;/p&gt;&#xA;&lt;h3 id=&#34;4-vibe-coding-turn-requirements-into-clickable-real-pages&#34;&gt;4. Vibe Coding: Turn Requirements into Clickable Real Pages&#xA;&lt;/h3&gt;&lt;p&gt;Combine the core path descriptions of the PRD into prompts and input them into Vibe Coding tools, iterating 2–3 rounds to generate a browser-runnable demo version. 
Tool selection: for a complete full-stack option, choose &lt;strong&gt;Lovable&lt;/strong&gt; (one-click deployment); for rapid multi-version output, choose &lt;strong&gt;Bolt&lt;/strong&gt; (the fastest, with direct Figma-to-code conversion); as the underlying model, &lt;strong&gt;Claude Opus 4.5&lt;/strong&gt; is recommended.&lt;/p&gt;&#xA;&lt;p&gt;→ Key Point: The goal is to &amp;ldquo;enable the business side to make decisions based on tangible items,&amp;rdquo; not to deliver production code.&lt;/p&gt;&#xA;&lt;h3 id=&#34;5-business-review-drive-decisions-with-real-pages&#34;&gt;5. Business Review: Drive Decisions with Real Pages&#xA;&lt;/h3&gt;&lt;p&gt;Initiate reviews with the clickable page. Discussions no longer revolve around &amp;ldquo;what does this sentence mean&amp;rdquo; but rather &amp;ldquo;is this button placed correctly,&amp;rdquo; improving both decision-making efficiency and decision quality.&lt;/p&gt;&#xA;&lt;p&gt;→ Key Point: The value of the review lies not in &amp;ldquo;passing&amp;rdquo; but in surfacing every disagreement in front of the page, eliminating rework later.&lt;/p&gt;&#xA;&lt;h2 id=&#34;05-boundary-awareness-what-vibe-coding-cannot-do&#34;&gt;05 Boundary Awareness: What Vibe Coding Cannot Do&#xA;&lt;/h2&gt;&lt;p&gt;Accurately understanding a technology requires knowing not only what it can do but also where its boundaries lie.
Having excessive expectations or completely rejecting Vibe Coding are both cognitive biases.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 20&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;362px&#34; data-flex-grow=&#34;151&#34; height=&#34;1420&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-54090e077b.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-54090e077b_hu_2aa3ab4876d118da.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-54090e077b_hu_264d93ba1aa727b2.jpeg 1600w, https://acousticinfoplus.com/posts/note-71a399eeae/img-54090e077b.jpeg 2146w&#34; width=&#34;2146&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;06-role-impact-how-pm-work-styles-are-changing&#34;&gt;06 Role Impact: How PM Work Styles Are Changing&#xA;&lt;/h2&gt;&lt;p&gt;Two sets of data illustrate the issue:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 21&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1446px&#34; data-flex-grow=&#34;602&#34; height=&#34;354&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-c62f9014f4.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-c62f9014f4_hu_c8a78a25a30c2ce2.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-c62f9014f4_hu_af3fe3ec7037808f.jpeg 1600w, https://acousticinfoplus.com/posts/note-71a399eeae/img-c62f9014f4.jpeg 2134w&#34; width=&#34;2134&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;change-1-the-role-of-pms-is-shifting-from-connectors-to-builders&#34;&gt;Change 1: The Role of PMs is Shifting from &amp;ldquo;Connectors&amp;rdquo; to &amp;ldquo;Builders&amp;rdquo;&#xA;&lt;/h3&gt;&lt;p&gt;In the past, the core value of PMs was alignment and 
coordination: translating business needs to designers and translating designs to engineers, acting as information conduits. Now, when PMs can independently run demo versions, they are no longer just &amp;ldquo;storytellers&amp;rdquo; in reviews but &lt;strong&gt;&amp;ldquo;people who come to the conversation with works.&amp;rdquo;&lt;/strong&gt; Their authority and pace of advancement will undergo a qualitative change.&lt;/p&gt;&#xA;&lt;h3 id=&#34;change-2-the-quality-threshold-for-requirement-reviews-is-elevated&#34;&gt;Change 2: The Quality Threshold for Requirement Reviews is Elevated&#xA;&lt;/h3&gt;&lt;p&gt;When the PM across the table comes to the review with a real clickable page, PMs who only bring written PRDs will clearly be at a disadvantage—the business side is increasingly accustomed to making decisions based on tangible items rather than relying on imagination to understand requirements. This change has become very evident in 2026.&lt;/p&gt;&#xA;&lt;h3 id=&#34;change-3-the-boundaries-between-pms-and-engineers-are-redefined&#34;&gt;Change 3: The Boundaries Between PMs and Engineers are Redefined&#xA;&lt;/h3&gt;&lt;p&gt;This is not about &amp;ldquo;PMs taking engineers&amp;rsquo; jobs&amp;rdquo; but rather about redefining the work interface: PMs are responsible for the transition from ideas to demo-level products, while engineers handle the transition from demo-level to production-level. 
This enhances efficiency for both parties, rather than being a zero-sum game.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 22&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;511px&#34; data-flex-grow=&#34;212&#34; height=&#34;1004&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-3055dc3ffd.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-71a399eeae/img-3055dc3ffd_hu_7e60f4cf30706a6f.jpeg 800w, https://acousticinfoplus.com/posts/note-71a399eeae/img-3055dc3ffd_hu_9f487f6f28f903d0.jpeg 1600w, https://acousticinfoplus.com/posts/note-71a399eeae/img-3055dc3ffd.jpeg 2138w&#34; width=&#34;2138&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;If you haven&amp;rsquo;t tried this process yet, it&amp;rsquo;s recommended to start with the smallest scenario: in the next iteration, for a new page, try running a version using Vibe Coding yourself and take it to the review. Observe the changes in decision-making efficiency.&lt;/p&gt;&#xA;&lt;p&gt;You will find that many issues that were unclear in requirement meetings will clarify themselves in front of a real page.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Advancing AI in Education: China&#39;s Strategic Action Plan</title>
            <link>https://acousticinfoplus.com/posts/note-9bc11d9938/</link>
            <pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-9bc11d9938/</guid>
            <description>&lt;h2 id=&#34;advancing-ai-in-education&#34;&gt;Advancing AI in Education&#xA;&lt;/h2&gt;&lt;p&gt;On March 31, the Ministry of Education held a meeting to mark the fourth anniversary of the National Smart Education Public Service Platform. The meeting focused on summarizing the achievements of the digital education strategy during the 14th Five-Year Plan and outlined key tasks for the 15th Five-Year Plan. The emphasis was placed on utilizing artificial intelligence (AI) as a crucial variable to integrate AI into all aspects of education.&lt;/p&gt;&#xA;&lt;p&gt;Minister Huai Jinpeng highlighted the implementation of the national digital education strategy, which aims to enhance moral education, promote the development of education technology, improve public education services, foster professional development for teachers, and build a globally influential education center. AI is reshaping the foundational logic of education and creating new demands for innovation and high-quality development.&lt;/p&gt;&#xA;&lt;p&gt;Looking ahead to the 15th Five-Year Plan, the focus will be on several key areas:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI for School Education&lt;/strong&gt;: Upgrading educational centers to support personalized growth and learning, and cultivating interdisciplinary talents and high-skilled professionals in emerging AI-related fields.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI for Lifelong Education&lt;/strong&gt;: Establishing lifelong learning centers that connect school education with industry and social education, enhancing employability for graduates and fostering a learning society.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI for Technological Innovation&lt;/strong&gt;: Building innovation centers to gather resources and facilitate the transformation of technological achievements.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI for International Exchange&lt;/strong&gt;: Designing Chinese education centers to expand 
the international influence of Chinese education.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI for Teacher Development&lt;/strong&gt;: Upgrading teacher centers to support the growth of high-quality, professional educators.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;AI for Educational Governance&lt;/strong&gt;: Enhancing governance centers to improve the modern governance level of education and increase public satisfaction.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Minister Huai emphasized the importance of a correct performance perspective, strengthening expectation management, and ensuring effective implementation of the digital education strategy. He called for collaborative efforts to break down departmental barriers, coordinate resources, and focus on practical solutions to complex challenges.&lt;/p&gt;&#xA;&lt;p&gt;The meeting also introduced a new version of the National Smart Education Public Service Platform, featuring new centers for lifelong learning, technological innovation, Chinese education, and educational big data. Representatives from various educational institutions shared insights on the implementation of the AI in education initiative.&lt;/p&gt;&#xA;&lt;p&gt;The meeting was conducted via video conference, with participation from various educational leaders and officials.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>The Impact of AI on Industry and Consumption in China</title>
            <link>https://acousticinfoplus.com/posts/note-4669009133/</link>
            <pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-4669009133/</guid>
            <description>&lt;h2 id=&#34;the-rapid-development-and-application-of-artificial-intelligence&#34;&gt;The Rapid Development and Application of Artificial Intelligence&#xA;&lt;/h2&gt;&lt;p&gt;Artificial intelligence (AI) is rapidly evolving and profoundly changing human production and lifestyle, showcasing its powerful technological capabilities and potential for empowerment. China is deeply implementing the &amp;ldquo;AI+&amp;rdquo; initiative, leading industrial innovation through technological advancements and promoting technological iteration through industrial upgrades, thereby enabling AI to empower various sectors.&lt;/p&gt;&#xA;&lt;p&gt;The opinions outlined in the State Council&amp;rsquo;s document on the implementation of the &amp;ldquo;AI+&amp;rdquo; initiative, issued in 2025, clarify overall requirements, development goals, and key directions. The 14th Five-Year Plan emphasizes the comprehensive implementation of the &amp;ldquo;AI+&amp;rdquo; initiative. Experts have been invited to discuss how to unleash strong momentum through the &amp;ldquo;AI+&amp;rdquo; initiative.&lt;/p&gt;&#xA;&lt;h2 id=&#34;ais-profound-impact-on-innovation-paradigms&#34;&gt;AI&amp;rsquo;s Profound Impact on Innovation Paradigms&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Why implement the &amp;ldquo;AI+&amp;rdquo; initiative? How can we grasp the development needs of the &amp;ldquo;AI+&amp;rdquo; initiative from deep implementation to comprehensive implementation?&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Wang Xiaoming (Director of Industrial Technology Innovation Research Department, Chinese Academy of Sciences):&lt;/strong&gt; AI is a crucial engine for developing new productive forces, profoundly changing innovation paradigms. The marginal effects of traditional factors such as labor and capital on GDP are diminishing. AI is considered a general-purpose technology on par with electricity and the internet, possessing strong penetration and empowerment effects. 
Implementing the &amp;ldquo;AI+&amp;rdquo; initiative is a significant measure to develop new productive forces, achieving exponential improvements in total factor productivity by reorganizing global resource elements and reconstructing industrial development paradigms.&lt;/p&gt;&#xA;&lt;p&gt;AI is reshaping the global economic landscape and competitive dynamics, becoming a critical arena of great power competition. Major economies view AI development as a significant strategy to enhance national competitiveness and maintain national security. In 2025, the U.S. planned to invest hundreds of billions of dollars through initiatives like the AI Action Plan to maintain its dominance in AI. Global AI competition has shifted from a singular focus on computational power and models to competition over ecosystems and applications. China is actively responding to intense international competition by deeply implementing the &amp;ldquo;AI+&amp;rdquo; initiative, leveraging its vast market and complete industrial system to seize the high ground in AI applications through scenario-driven approaches, thereby forcing breakthroughs in foundational chips and algorithms and mitigating the risk of being &amp;ldquo;choked&amp;rdquo; on core technologies.&lt;/p&gt;&#xA;&lt;p&gt;AI is also a crucial means to enhance public service capabilities and drive the modernization of social governance. As the largest developing country, with over 1.4 billion people, China faces many complex social governance challenges. In public services, technologies such as intelligent agents enable seamless access to education, healthcare, and elderly care, significantly improving the quality of life for citizens.
In social governance, AI shifts the logic from &amp;ldquo;experience-driven, reactive&amp;rdquo; to &amp;ldquo;computational support, proactive warning,&amp;rdquo; releasing grassroots governance vitality and providing a core foundation for building a refined and intelligent governance system, thus promoting the modernization of the national governance system and capabilities.&lt;/p&gt;&#xA;&lt;p&gt;In 2017, the State Council issued the &amp;ldquo;New Generation Artificial Intelligence Development Plan,&amp;rdquo; aiming to establish China&amp;rsquo;s AI development advantages. AI is defined as a &amp;ldquo;strategic technology leading the future,&amp;rdquo; with policies focusing on foundational theories, core algorithms (like computer vision and speech recognition), and high-end chip development.&lt;/p&gt;&#xA;&lt;p&gt;In 2024, the &amp;ldquo;AI+&amp;rdquo; initiative was included in the &amp;ldquo;Government Work Report&amp;rdquo; for the first time. In 2025, the State Council issued the &amp;ldquo;Opinions on Deeply Implementing the &amp;lsquo;AI+&amp;rsquo; Initiative.&amp;rdquo; Relevant policies emphasize not only AI technology itself but also how AI can empower industrial development. The construction of computational infrastructure is continuously improving, and the data element system is gradually being established, laying the foundation for the practical application of AI.&lt;/p&gt;&#xA;&lt;p&gt;The 14th Five-Year Plan emphasizes the comprehensive implementation of the &amp;ldquo;AI+&amp;rdquo; initiative, strengthening the integration of AI with technological innovation, industrial development, cultural construction, public welfare, and social governance. In the future, the focus will not only be on expanding application scales but also on reshaping innovation paradigms. 
By leading changes in research paradigms and seizing high ground in industrial applications, deep changes in production methods and revolutionary leaps in productivity can be achieved.&lt;/p&gt;&#xA;&lt;p&gt;AI comprehensively empowers various industries, demonstrating immense growth potential. In the realm of &amp;ldquo;AI+ scientific research,&amp;rdquo; AI acts as a &amp;ldquo;laboratory assistant,&amp;rdquo; shortening the material development and drug screening cycles from years to weeks. In the &amp;ldquo;AI+ industry&amp;rdquo; sector, AI is widely applied in product design, supply chain management, and intelligent inspection, enabling comprehensive analysis of various factors, assisting in demand forecasting and inventory dynamic optimization, thus enhancing supply chain resilience. Intelligent inspection technologies based on computer vision are applied in various inspection stages of vehicle manufacturing, achieving real-time identification of sub-millimeter defects on high-speed production lines, improving detection efficiency under complex conditions. In the &amp;ldquo;AI+ consumption&amp;rdquo; area, AI drives the growth of intelligent product and service consumption. On one hand, the implementation of AI large model technology leads to the elimination and upgrading of terminal products, initiating a new round of intelligent iterations for smartphones, computers, and home appliances. On the other hand, AI effectively addresses issues such as uneven distribution of resources in healthcare, education, and elderly care, providing intelligent services that meet consumers&amp;rsquo; growing demands.&lt;/p&gt;&#xA;&lt;p&gt;In the future, AI development will focus on three directions: First, AI for Science will accelerate technological innovation processes, changing research paradigms and driving original technology output in materials, energy, and biomedicine, providing the driving force for industrial development. 
Second, embodied intelligence will develop rapidly, transitioning from large language models to vision-language-action models and world models, further integrating into human production and life and obtaining real data in the physical world for self-iteration. By 2035, the number of humanoid robots in workplaces in China is expected to exceed 2 million. Third, the construction of high-quality datasets will be effectively promoted. With an application-oriented approach, China will continue to strengthen the construction of high-quality AI datasets, promote the lawful and compliant opening of public data, and explore data cost compensation and revenue sharing based on value contribution, creating a batch of data service providers and forming a healthy data industry ecosystem.&lt;/p&gt;&#xA;&lt;h2 id=&#34;ai-consumption-activating-new-momentum&#34;&gt;&amp;ldquo;AI+ Consumption&amp;rdquo; Activating New Momentum&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;The &amp;ldquo;AI+&amp;rdquo; initiative is not only a cutting-edge direction for technological development but also relates to the real concerns of the public. How does AI, as a disruptive innovative technology, boost consumption?&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Li Yongjian (Professor, School of Applied Economics, Chinese Academy of Social Sciences):&lt;/strong&gt; Consumption is a crucial engine for China&amp;rsquo;s economic growth, with final consumption expenditure contributing 52% of economic growth in 2025. As the consumption market continues to expand, residents&amp;rsquo; consumption demands are increasingly diverse, favoring quality, personalization, scenario-based experiences, and convenience, while the market faces structural challenges such as supply-demand mismatches, homogenized experiences, and weakened growth momentum.
The new round of technological revolution represented by AI, with its powerful data processing, deep learning, and intelligent decision-making capabilities, provides new pathways to address pain points in the consumption sector and activate new consumption momentum.&lt;/p&gt;&#xA;&lt;p&gt;AI expands consumption boundaries through scenario innovation, continuously driving consumption growth in areas such as food, clothing, housing, transportation, social entertainment, and tourism. In the tourism sector, AI assistants can create customized travel plans based on consumer preferences, AI tour guides provide rich personalized explanations, and AR guide glasses offer immersive experiences, significantly enhancing the quality of tourism experiences. In shopping, AI shopping assistants analyze consumer preferences and make precise recommendations based on comprehensive data from the internet, making consumption decisions more rational; virtual fitting rooms utilize 3D modeling and intelligent matching to provide new shopping scenarios. In health, AI empowers health management platforms that integrate wearable devices and nutrition recommendations, facilitating comprehensive and precise health management for consumers while driving rapid growth in related industries such as outdoor sports and healthy eating.&lt;/p&gt;&#xA;&lt;p&gt;AI stimulates consumption demand through product innovation, translating it into real growth. For example, as of the end of Q3 2025, the total number of registered smart wearable products in China reached 181,000, a growth of over 90% compared to September 2020, with an annual growth rate of nearly 14%. Among these, the variety of smartwatches reached 29,000, with an annual growth rate of 46.8%. Additionally, wearable devices drive related consumption, such as Quark AI glasses supporting payment scenarios like &amp;ldquo;look and pay,&amp;rdquo; significantly increasing consumer repurchase rates in supermarkets. 
Traditional household appliances are becoming increasingly intelligent, with whole-home intelligent systems interconnecting various appliances through sensors and communication modules, automatically adjusting room temperature and lighting based on user habits, and optimizing household energy use in conjunction with power loads. The transition from traditional to smart appliances has led to significant sales growth for products like cooking robots and window-cleaning robots, with the market size of cooking robots in China expected to reach 3.17 billion yuan by 2024 and exceed 11.7 billion yuan by 2030. According to data from AVC, color TVs with built-in AI large models are rapidly gaining retail market share, with the online share rising from 0.8% in January to 42.1% in June and the offline share rising from 1.8% to 28.3%. In the future, there is significant growth potential for smart products catering to the elderly. Humanoid robots, utilizing AI visual recognition, multimodal interaction, and autonomous decision-making algorithms, will penetrate all scenarios from basic household chores to emotional companionship. By the end of 2025, the population aged 60 and above in China is expected to account for 23% of the total population, leading to explosive growth in demand for humanoid robots.&lt;/p&gt;&#xA;&lt;p&gt;AI creatively utilizes data elements to drive innovation in consumption formats. As consumer demands become increasingly personalized and diversified, supply-demand mismatches are a significant reason for weak consumption. By leveraging AI technology, data on consumer demands can be effectively collected and analyzed, capturing dynamic market changes and guiding enterprises to adjust supply strategies in a timely manner. Through flexible production and intelligent supply chains, consumer needs can be met while facilitating consumer participation in design and marketing.
In this process, consumption formats continue to innovate, and consumer demands are better satisfied. For example, AI can dynamically predict existing consumer demands, enabling production forecasting. The deep integration of AI and big data allows for precise predictions of social demands, enabling enterprises to effectively plan production and reduce inventory across various stages. New consumption models like C2M (consumer-to-manufacturer) are emerging, releasing potential consumer demands. By aggregating consumer needs through big data, AI is applied in product development, design, production, and marketing processes to meet personalized consumer demands through customized production.&lt;/p&gt;&#xA;&lt;p&gt;AI promotes innovation in consumption formats and stimulates consumption growth by fully embedding itself in transaction processes. For instance, the Qianwen APP connects with platforms like Taobao, Alipay, and Gaode to optimize consumption processes and achieve intelligent shopping. In February of this year, Qianwen&amp;rsquo;s &amp;ldquo;30 Billion Big Free Order&amp;rdquo; event showed that AI completed over 120 million orders in six days. 
Additionally, AI hosts can conduct uninterrupted live broadcasts 24/7, adjusting dialogue content based on interactive information and automatically optimizing discount information based on real-time order volumes and user interactions, significantly enhancing conversion efficiency.&lt;/p&gt;&#xA;&lt;p&gt;In 2025, the State Council issued the &amp;ldquo;Opinions on Deeply Implementing the &amp;lsquo;AI+&amp;rsquo; Initiative,&amp;rdquo; deploying measures to enhance consumption quality through &amp;ldquo;AI+&amp;rdquo; by proposing to &amp;ldquo;expand new service consumption scenarios&amp;rdquo; and &amp;ldquo;cultivate new product consumption formats,&amp;rdquo; emphasizing the &amp;ldquo;vigorous development of new-generation intelligent terminals such as smart connected vehicles, AI smartphones and computers, intelligent robots, smart homes, and smart wearables.&amp;rdquo; Following the global wave of technological revolution, the integration of AI with various industries will inject continuous new momentum into China&amp;rsquo;s consumption growth.&lt;/p&gt;&#xA;&lt;h2 id=&#34;ai-manufacturing-promoting-new-industrialization&#34;&gt;&amp;ldquo;AI+ Manufacturing&amp;rdquo; Promoting New Industrialization&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;What is the significance of issuing the &amp;ldquo;Implementation Opinions on the &amp;lsquo;AI+ Manufacturing&amp;rsquo; Special Action&amp;rdquo;? How does the mutual empowerment of AI and manufacturing solve prominent problems?&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Wang Gaoxiang (Deputy Director of the New Industrialization Research Institute, China Electronics and Information Industry Development Research Institute):&lt;/strong&gt; AI is a strategic technology leading a new round of technological revolution and industrial transformation, exhibiting a strong &amp;ldquo;leading goose&amp;rdquo; spillover effect.
The Ministry of Industry and Information Technology, the National Development and Reform Commission, and other eight departments issued the &amp;ldquo;Implementation Opinions on &amp;lsquo;AI+ Manufacturing&amp;rsquo; Special Action,&amp;rdquo; which holds significant strategic importance for accelerating the formation of new productive forces and promoting new industrialization.&lt;/p&gt;&#xA;&lt;p&gt;From a technological evolution perspective, AI is transitioning from a &amp;ldquo;tool&amp;rdquo; to a &amp;ldquo;factor.&amp;rdquo; Unlike previous information and automation technologies that only replaced labor in specific segments, the new generation of AI, represented by large models, embodied intelligence, and industrial intelligent agents, possesses a complete capability loop of perception, cognition, decision-making, and execution, capable of spanning the entire chain from research and design to production, supply chain management, and after-sales service. Systematically promoting &amp;ldquo;AI+ manufacturing&amp;rdquo; at this technological inflection point helps convert technological breakthrough potential into industrial upgrade momentum, accelerating the reorganization of production factors, reconstruction of production processes, and reshaping of business models.&lt;/p&gt;&#xA;&lt;p&gt;From an industrial competition perspective, China&amp;rsquo;s manufacturing sector urgently needs to open new competitive avenues through intelligence. On one hand, rising costs of labor, land, and energy have gradually weakened traditional low-cost competitive advantages; on the other hand, the rise of anti-globalization sentiments and intensified technological decoupling have further increased export costs and supply chain security risks. 
AI technology can effectively reduce overall costs through factor substitution, improve product quality through process optimization, and enhance supply chain control through data connectivity, paving a new path for high-quality development in manufacturing through &amp;ldquo;intelligent cost reduction, intelligent quality enhancement, and intelligent efficiency increase.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;From a global competition perspective, major manufacturing countries are integrating AI with manufacturing and elevating it to a national strategy, seizing high ground in a new round of industrial competition. China&amp;rsquo;s manufacturing sector has advantages in a complete industrial system and leading scale, providing vast data resources and application scenarios for AI development. Promoting the implementation of the &amp;ldquo;AI+ manufacturing&amp;rdquo; special action helps convert unique advantages into a first-mover advantage for AI&amp;rsquo;s application in manufacturing, gaining strategic initiative in global intelligent manufacturing competition.&lt;/p&gt;&#xA;&lt;p&gt;Practical evidence shows that the mutual empowerment of AI and manufacturing helps systematically address prominent challenges in the transformation of the manufacturing sector.&lt;/p&gt;&#xA;&lt;p&gt;It has solved the &amp;ldquo;invisibility&amp;rdquo; problem. The deep integration of AI and manufacturing can effectively address the long-standing challenges faced by traditional manufacturing, such as the lack of transparency in production processes, difficulty in sensing equipment status, and reliance on human eyes for quality defects, helping enterprises achieve real-time perception and precise control throughout the production process. According to IDC&amp;rsquo;s survey of Chinese industrial enterprises in 2025, the proportion of enterprises applying large models and intelligent agents increased from 9.6% in 2024 to 47.5% in 2025. 
Over 80% of China&amp;rsquo;s manufacturing enterprises have achieved digital management, with the application rate of industrial internet platforms rising to 45.6% and the rate of numerical control in key processes reaching 68.6%.&lt;/p&gt;&#xA;&lt;p&gt;It has solved the &amp;ldquo;poor performance&amp;rdquo; problem. In the past, many high-precision, high-complexity manufacturing processes relied heavily on the experience of skilled workers, capping product yield rates at the limits of human skill. Intelligent factories leverage AI through machine vision, intelligent quality inspection, and dynamic optimization of process parameters, shifting quality control from post-production sampling to online monitoring. As of January this year, over 35,000 basic-level, more than 8,200 advanced-level, and over 500 excellent-level intelligent factories have been established in China, along with 15 leading intelligent factories. AI has penetrated over 70% of business scenarios in leading intelligent factories, accumulating over 6,000 vertical domain models. Currently, China has 101 &amp;ldquo;lighthouse factories,&amp;rdquo; ranking first globally.&lt;/p&gt;&#xA;&lt;p&gt;It has solved the &amp;ldquo;inability to transform&amp;rdquo; problem. For a long time, small and medium-sized enterprises, constrained by funding and technical capabilities, struggled with digital and intelligent transformation, often wanting to transform but not daring to, or not knowing how. Vertical industrial internet platforms and industry large models led by leading enterprises now provide small and medium-sized enterprises with ready-to-use intelligent solutions.
The rich application scenarios in manufacturing and vast industrial data also provide irreplaceable training resources for AI&amp;rsquo;s evolution from general large models to industry-specific models, forming a virtuous cycle of &amp;ldquo;application driving technology, technology feeding back into industry.&amp;rdquo; In recent years, the Ministry of Industry and Information Technology has selected three batches of 101 pilot cities for the digital transformation of small and medium-sized enterprises, supporting over 40,000 small and medium-sized enterprises in their digital upgrades.&lt;/p&gt;&#xA;&lt;p&gt;However, it is important to recognize that, measured against the requirements of high-quality development, prominent issues remain in &amp;ldquo;AI+ manufacturing.&amp;rdquo; For example, the depth and breadth of integration are still limited, with slow penetration from pilot verification into production, and bottlenecks such as insufficient technological maturity, poor scenario adaptability, and low standardization. The value of industrial data as a factor of production remains to be unlocked: 76% of manufacturing enterprises report insufficient extraction of data value, only about 44% of manufacturing data is effectively utilized, and a mere 4% is high-quality data that meets large-model training requirements. There are significant shortcomings in foundational algorithms and industrial software, and supply capacity for industrial-grade chips and high-end sensors is inadequate.&lt;/p&gt;&#xA;&lt;p&gt;To seize the new wave of the technological revolution, to integrate AI and manufacturing across a wider range and at a deeper level, and to effectively convert the potential of technological breakthroughs into powerful momentum for advancing new industrialization, multi-dimensional collaborative effort is required. First, strengthen demonstration and guidance to lead by example. 
Focus on the most urgent and valuable segments of the manufacturing sector to achieve breakthroughs, forming exemplary models and accelerating the cultivation of a batch of &amp;ldquo;intelligence-native&amp;rdquo; manufacturing enterprises. Second, strengthen the foundation of industrial data, focusing on data collection and utilization, accelerating the construction of high-quality industry datasets, and cultivating and expanding a batch of data consulting and data annotation entities. Third, increase investment in foundational technologies, solidifying the foundations of computational power, algorithms, and data, and promoting key technological breakthroughs in high-end chips, high-performance sensors, industrial mother machines (machine tools), and high-end instruments.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>The Need for a Proper Name for Artificial Intelligence</title>
            <link>https://acousticinfoplus.com/posts/note-d6461d0f13/</link>
            <pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-d6461d0f13/</guid>
            <description>&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;404px&#34; data-flex-grow=&#34;168&#34; height=&#34;641&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-d6461d0f13/img-2f8ebbb378.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-d6461d0f13/img-2f8ebbb378_hu_31d9722be559c8a0.jpeg 800w, https://acousticinfoplus.com/posts/note-d6461d0f13/img-2f8ebbb378.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-need-for-a-proper-name-for-artificial-intelligence&#34;&gt;The Need for a Proper Name for Artificial Intelligence&#xA;&lt;/h2&gt;&lt;p&gt;Unbeknownst to us, &amp;ldquo;lobsters&amp;rdquo; have evolved. They swarm from the water into our computers and phones—everyone is starting to raise &amp;ldquo;lobsters.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Of course, here, &amp;ldquo;lobster&amp;rdquo; refers to &amp;ldquo;artificial intelligence entities.&amp;rdquo; In the blink of an eye, we have entered the intelligent era: no conversation can proceed without mentioning artificial intelligence, and whatever job you seek or lose, artificial intelligence is somehow involved.&lt;/p&gt;&#xA;&lt;p&gt;A few years ago, people simply thought of artificial intelligence as just another new technology. However, everyone quickly became astonished: this time it is truly different! Artificial intelligence, appearing in the form of technology, is rapidly changing all aspects of society. We are forced to accept that, unlike previous technologies, artificial intelligence is at once a social tool, an economic tool, and a technological tool. 
It fundamentally changes not just the technological level but also deconstructs and reshapes the entire society; it transforms nature as a material means of production and influences humanity as an ideological means, even reshaping its creators—humans themselves. It is undoubtedly a tool shared by the productive forces and production relations, as well as by the economic base and superstructure of society. Therefore, artificial intelligence is a dual tool for transforming both humanity and nature, and our discussion of the name &amp;ldquo;artificial intelligence&amp;rdquo; cannot be approached solely from a natural-science or technological perspective.&lt;/p&gt;&#xA;&lt;p&gt;Evidently, the existing term—&amp;ldquo;artificial intelligence&amp;rdquo;—is quite inappropriate. Firstly, such a tool common to anthropology and natural science has been given a narrow technical name. More importantly, as a new entity perceived to exist alongside humanity, it should and must have its own &amp;ldquo;meta-concept.&amp;rdquo; The term &amp;ldquo;artificial intelligence,&amp;rdquo; derived from English, merely means &amp;ldquo;man-made human intelligence,&amp;rdquo; which is not a &amp;ldquo;meta-concept.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Moreover, from a Chinese perspective, using &amp;ldquo;AI&amp;rdquo; in the Chinese-speaking world as the grand name for artificial intelligence directly violates the general principles of the Law of the People&amp;rsquo;s Republic of China on the Standard Spoken and Written Chinese Language. The term &amp;ldquo;artificial intelligence&amp;rdquo; is merely a direct translation from English, one that sits uneasily with the 5,000-year tradition of Chinese characters. It is evident that we need to give artificial intelligence a proper Chinese name!&lt;/p&gt;&#xA;&lt;h2 id=&#34;lessons-from-improper-naming-of-new-things&#34;&gt;Lessons from Improper Naming of New Things&#xA;&lt;/h2&gt;&lt;h3 id=&#34;1-historical-lessons-from-improper-naming&#34;&gt;1. 
Historical Lessons from Improper Naming&#xA;&lt;/h3&gt;&lt;p&gt;Chinese people often say: &amp;ldquo;If the name is not correct, the words will not flow; if the words do not flow, the matter will not succeed.&amp;rdquo; This is what we commonly call &amp;ldquo;a name that fits its essence.&amp;rdquo; Otherwise, systems and orders lose legitimacy, leading to social disorder.&lt;/p&gt;&#xA;&lt;p&gt;In social and political life, there are numerous experiences and lessons regarding the importance of proper naming.&lt;/p&gt;&#xA;&lt;p&gt;In history, the political wisdom of &amp;ldquo;Chancellor Cao&amp;rdquo; (Cao Cao) was superior to that of the various &amp;ldquo;heroes&amp;rdquo; because he proposed &amp;ldquo;using the emperor to command the lords&amp;rdquo; and &amp;ldquo;serving the emperor to command the unfaithful.&amp;rdquo; This became a famous historical strategy.&lt;/p&gt;&#xA;&lt;p&gt;In 1954, China, India, and Myanmar jointly advocated the &amp;ldquo;Five Principles of Peaceful Coexistence&amp;rdquo; as a stand against colonialism and hegemonism, providing legal and moral grounds for countries of the Global South to voice their opinions and pursue cooperative development on the international stage.&lt;/p&gt;&#xA;&lt;p&gt;The United States also understands the importance of proper naming. Its most famous case, &amp;ldquo;manifest destiny,&amp;rdquo; wrapped expansionist and hegemonic actions in a grand ideological narrative, lending them a facade of legitimacy. These are all historical experiences of &amp;ldquo;proper naming.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;In the realm of technology and social development, improper naming has brought numerous lessons and even disasters.&lt;/p&gt;&#xA;&lt;p&gt;The improper naming of the &amp;ldquo;metaverse&amp;rdquo; turned it into a concept bubble that overdrew the future. 
Tech companies used this name for an early-stage vision pieced together from virtual reality, social networks, and digital twins. &lt;strong&gt;The concept was overly hyped and quickly faded&lt;/strong&gt;: the grand name sparked unprecedented investment and media frenzy in 2021-2022, but the actual technology was far from mature, hindering the healthy development of incremental innovation.&lt;/p&gt;&#xA;&lt;h3 id=&#34;2-naming-dilemmas-arising-from-issues-in-english&#34;&gt;2. Naming Dilemmas Arising from Issues in English&#xA;&lt;/h3&gt;&lt;p&gt;The inherent issues of the English conceptual system make professional terminology complex and irregular, acting like a &amp;ldquo;logic bomb&amp;rdquo; lurking deep within the system and setting off chain reactions: from personal cognitive confusion to enormous collaboration costs, potentially escalating into real-world technological disasters that severely hinder subsequent development.&lt;/p&gt;&#xA;&lt;h4 id=&#34;1-technical-learning-stage-irregular-naming-disrupts-knowledge-system-construction&#34;&gt;1. Technical Learning Stage: Irregular Naming Disrupts Knowledge System Construction&#xA;&lt;/h4&gt;&lt;p&gt;&lt;strong&gt;Example 1: The Parameter Maze in Programming&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Confused Naming: For the basic concept of passing data to functions, different contexts mix terms such as &amp;ldquo;parameter,&amp;rdquo; &amp;ldquo;argument,&amp;rdquo; &amp;ldquo;formal parameter,&amp;rdquo; and &amp;ldquo;actual parameter,&amp;rdquo; leading to logical confusion. 
Beginners must spend great effort distinguishing terms that describe essentially the same or closely related things, rather than understanding the core logic of &amp;ldquo;data passing.&amp;rdquo; This disrupts the unity of concepts, turning learning into memorizing &amp;ldquo;jargon&amp;rdquo; rather than understanding principles, and steepening the learning curve.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Example 2: The Forest of Abbreviations in Biomedicine&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Confused Naming: Gene and protein names often consist of obscure abbreviations (e.g., p53, TNF-α) or are whimsical (like &amp;ldquo;sonic hedgehog,&amp;rdquo; the vertebrate homolog of the fruit fly &amp;ldquo;hedgehog&amp;rdquo; gene, named after a video game character). The same substance may have different names in clinical, biochemical, and genetic contexts.&lt;/p&gt;&#xA;&lt;p&gt;Cognitive Overload: Students and interdisciplinary researchers feel as if they are deciphering codes, spending large amounts of cognitive resources on terminology translation rather than concept understanding, which severely hinders knowledge transfer and the formation of interdisciplinary thinking.&lt;/p&gt;&#xA;&lt;h4 id=&#34;2-technical-application-stage-increased-communication-costs-and-technological-disasters&#34;&gt;2. Technical Application Stage: Increased Communication Costs and Technological Disasters&#xA;&lt;/h4&gt;&lt;p&gt;When chaotic terminology enters team collaboration and complex systems, it leads to inefficiency at best and disaster at worst.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Example: The Historical Burden in Information Technology&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Confused Naming: The same concept has different names in different tech stacks. 
For instance, the &amp;ldquo;master-slave&amp;rdquo; architecture in distributed computing was renamed to &amp;ldquo;primary-replica&amp;rdquo; and &amp;ldquo;leader-follower&amp;rdquo; due to its discriminatory connotations, but the old terminology still exists in legacy code, documentation, and engineers&amp;rsquo; thought processes.&lt;/p&gt;&#xA;&lt;p&gt;This has led to significant difficulties: heavy technical debt. Poor naming is written into core codebases, APIs, and protocols. Modifying them means rewriting countless dependent systems, updating massive documentation, and retraining personnel, with costs so high that they are unbearable, leaving them as &amp;ldquo;debt&amp;rdquo; to inherit.&lt;/p&gt;&#xA;&lt;h4 id=&#34;3-long-term-development-technical-debt-and-innovation-barriers&#34;&gt;3. Long-term Development: Technical Debt and Innovation Barriers&#xA;&lt;/h4&gt;&lt;p&gt;Poor naming becomes entrenched in infrastructure, shackling long-term development.&lt;/p&gt;&#xA;&lt;p&gt;Innovation and Collaboration Barriers: When Google&amp;rsquo;s &amp;ldquo;Borg&amp;rdquo; system, Apache&amp;rsquo;s &amp;ldquo;Mesos,&amp;rdquo; and Kubernetes&amp;rsquo; &amp;ldquo;Pod&amp;rdquo; all describe similar container orchestration concepts, cross-platform collaboration and talent mobility face additional terminology translation and understanding costs, hindering the integration and reinvention of technological ideas.&lt;/p&gt;&#xA;&lt;p&gt;Ecological Fragmentation: Open-source projects or new technologies often create new terms to describe existing concepts for the sake of &amp;ldquo;innovation&amp;rdquo; or historical reasons, leading to ecological fragmentation, forcing developers to relearn essentially the same knowledge under different names.&lt;/p&gt;&#xA;&lt;h4 id=&#34;4-case-studies-of-naming-dilemmas-in-english&#34;&gt;4. 
Case Studies of Naming Dilemmas in English&#xA;&lt;/h4&gt;&lt;p&gt;&lt;strong&gt;Example from Chemistry and Pharmaceuticals: Triple Naming Systems and Similarity Traps&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Drugs typically have:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Chemical names: complex and lengthy, for professionals only.&lt;/li&gt;&#xA;&lt;li&gt;International Nonproprietary Names: more common but still similar.&lt;/li&gt;&#xA;&lt;li&gt;Brand names: registered by pharmaceutical companies, driven by marketing, often deliberately memorable, leading to confusion.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;This system lays the groundwork for errors.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Example 1: The Fatal Error of Vincristine—Confusion in Administration Routes&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Confused Naming and Background: Vincristine and vinblastine are two different chemotherapy drugs with very similar names.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Vincristine: primarily used for leukemia, can only be administered via intravenous injection, strictly prohibited for intrathecal injection.&lt;/li&gt;&#xA;&lt;li&gt;Vinblastine: can be used for solid tumors, with a different administration route.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Disaster Events: Globally, there have been multiple cases of vincristine being incorrectly injected into patients&amp;rsquo; spinal canals due to name confusion. Such errors can lead to irreversible, devastating nerve damage, resulting in patient deaths in extreme pain.&lt;/p&gt;&#xA;&lt;p&gt;How Naming Leads to Disasters: Doctors issuing prescriptions, pharmacists preparing them, and nurses executing them can easily confuse names due to their high similarity (especially in verbal prescriptions, handwritten notes, or emergency situations). This is not merely a spelling error but a systemic naming defect leading to fatal consequences. 
This incident directly prompted hospitals worldwide to enforce regulations: vincristine must be diluted by pharmacists and dispensed in small infusion bags, prohibiting any packaging that could be directly used for intrathecal injection.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Example 2: The Origin of the &amp;ldquo;Tall Man&amp;rdquo; Lettering Method—Distinguishing Similar-Spelling Drugs&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The FDA in the United States promotes the use of mixed case (Tall Man Lettering) to distinguish easily confused drugs, backed by numerous reports of near disasters:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;Clonazepam vs. Clozapine&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;CLONAZePam: a sedative-hypnotic drug.&lt;/li&gt;&#xA;&lt;li&gt;CLOZAPine: an antipsychotic drug.&lt;/li&gt;&#xA;&lt;li&gt;Risk: prescribing a sedative as a powerful antipsychotic, or vice versa, could lead to excessive sedation, seizures, or uncontrolled psychiatric symptoms.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;Hydromorphone vs. Morphine&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;HYDROmorphone: a potent opioid analgesic, 5-7 times more potent than morphine.&lt;/li&gt;&#xA;&lt;li&gt;MORPHine: a standard opioid analgesic.&lt;/li&gt;&#xA;&lt;li&gt;Risk: mistaking &amp;ldquo;hydromorphone&amp;rdquo; for &amp;ldquo;morphine&amp;rdquo; and administering the same dose could lead to respiratory depression, coma, or even death.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;Ibuprofen vs. 
Fentanyl&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;ibuPROfen: a non-steroidal anti-inflammatory drug.&lt;/li&gt;&#xA;&lt;li&gt;fentaNYL: a potent opioid analgesic.&lt;/li&gt;&#xA;&lt;li&gt;Risk: quickly selecting similar suffixes in electronic prescription systems could lead to catastrophic errors.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;&lt;strong&gt;Example 3: Insulin—A Field That Appears Regular but is Actually High-Risk&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;There are many types of insulin, with names combining type, action time, and similar brand names, making errors easy.&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;NovoRapid vs. Novolin: although from the same company, &amp;ldquo;Rapid&amp;rdquo; represents ultra-short-acting, while &amp;ldquo;lin&amp;rdquo; represents short-acting or intermediate-acting, with completely different timing for administration.&lt;/li&gt;&#xA;&lt;li&gt;Lantus vs. Levemir: names are unrelated, but both are basal insulins; confusion with other insulins could lead to daily blood sugar control disruptions.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Disastrous Consequences: Mistakenly administering a rapid-acting mealtime insulin in place of a long-acting basal dose can cause severe, prolonged hypoglycemic coma; conversely, using a long-acting insulin to cover a meal can lead to severe hyperglycemia and ketoacidosis.&lt;/p&gt;&#xA;&lt;p&gt;In summary, improper naming creates a vicious cycle:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Learning Side: Complex and irregular naming → Cognitive load increases, logical frameworks become confused → Talent cultivation efficiency decreases, professional barriers artificially heightened.&lt;/li&gt;&#xA;&lt;li&gt;Application Side: Chaotic terminology enters collaboration and systems → Communication costs soar, human error probability increases → In critical fields (aerospace, healthcare, nuclear power), directly triggers technological disasters, causing loss of life and property.&lt;/li&gt;&#xA;&lt;li&gt;Development Side: Poor naming solidifies into 
standards and infrastructure → Forms enormous &amp;ldquo;terminology debt&amp;rdquo; and ecological fragmentation → System maintenance costs are extremely high, cross-domain collaboration is difficult, and fundamental innovation is hindered.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Therefore, naming new things is a serious system engineering and design philosophy. Especially when it involves meta-concepts, promoting terminology standardization and adhering to the principles of &amp;ldquo;position over convenience&amp;rdquo; and &amp;ldquo;logic over cleverness&amp;rdquo; in naming from the outset is not only for elegance but also for safety, efficiency, and sustainable innovation. A name that is not correct is not merely a matter of words not flowing smoothly; it is indeed the source of disaster and the beginning of obstacles.&lt;/p&gt;&#xA;&lt;p&gt;Thus, the most successful naming often accurately reflects the essence of things, manages public expectations, and leaves room for evolution.&lt;/p&gt;&#xA;&lt;p&gt;Naming &amp;ldquo;artificial intelligence&amp;rdquo; is essentially naming &amp;ldquo;artificial intelligence entities.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Today, despite the complexity of algorithms and computing power involved in artificial intelligence, it can be described in one sentence: artificial intelligence entities are attempting to become an equal subject alongside humans. The artificial intelligence entity is the subject of the entire field or world of artificial intelligence. Therefore, naming the so-called &amp;ldquo;artificial intelligence&amp;rdquo; is a pseudo-problem, while naming &amp;ldquo;artificial intelligence entities&amp;rdquo; is the real issue. This is not merely a naming problem. 
We are not naming an ordinary new thing; we must recognize that this new thing is acquiring superpowers that even humans may find difficult to control.&lt;/p&gt;&#xA;&lt;h2 id=&#34;principles-for-naming-artificial-intelligence&#34;&gt;Principles for Naming Artificial Intelligence&#xA;&lt;/h2&gt;&lt;p&gt;Naming artificial intelligence is a fundamental matter involving anthropology, linguistics, and philosophy. As humans, our basic principle is undoubtedly: artificial intelligence is created by humans, so it must be defined by humans, from the human standpoint—perspective—method, establishing its concept, clarifying its existence premise, and delineating its functional boundaries. In short: only from the human standpoint can we determine the meaning of artificial intelligence&amp;rsquo;s existence; only humans can be the &amp;ldquo;meta-concept&amp;rdquo; of artificial intelligence, which must be a derived concept of this meta-concept of humanity. Thus, from the subjectivity of humans, we find that the essence of artificial intelligence is: &amp;ldquo;silicon-based systems,&amp;rdquo; which is &amp;ldquo;stone&amp;rdquo; as well.&lt;/p&gt;&#xA;&lt;h3 id=&#34;one-premise-and-three-principles-for-naming-artificial-intelligence&#34;&gt;One Premise and Three Principles for Naming Artificial Intelligence&#xA;&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;One Premise:&lt;/strong&gt; The concept of &amp;ldquo;artificial intelligence&amp;rdquo; must be a &amp;ldquo;meta-concept.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Three Principles:&lt;/strong&gt; The concept of &amp;ldquo;artificial intelligence&amp;rdquo; must possess &amp;ldquo;humanity,&amp;rdquo; &amp;ldquo;self-reference,&amp;rdquo; and &amp;ldquo;generativity.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h4 id=&#34;what-is-a-meta-concept&#34;&gt;What is a Meta-Concept?&#xA;&lt;/h4&gt;&lt;p&gt;A meta-concept is the most fundamental, foundational &amp;ldquo;cornerstone&amp;rdquo; for constructing a theoretical system; it is the starting 
point of a theory or ideological system that cannot be further defined. Any definition requires the use of other concepts; if a meta-concept can also be defined, it would lead to infinite loops.&lt;/p&gt;&#xA;&lt;p&gt;Its Role: It is the foundation upon which the entire theoretical edifice (including axioms, theorems, and derived concepts) is built. For example, in Euclidean geometry, &amp;ldquo;point,&amp;rdquo; &amp;ldquo;line,&amp;rdquo; and &amp;ldquo;plane&amp;rdquo; are meta-concepts. The entire geometry system is derived from these meta-concepts and several axioms.&lt;/p&gt;&#xA;&lt;p&gt;In short, a meta-concept is the &amp;ldquo;foundation&amp;rdquo; of a theoretical system, and it itself is no longer questioned as &amp;ldquo;what is it.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h4 id=&#34;what-is-the-humanity-of-artificial-intelligence&#34;&gt;What is the Humanity of Artificial Intelligence?&#xA;&lt;/h4&gt;&lt;p&gt;&amp;ldquo;Humanity&amp;rdquo; is a philosophical concept used to refer to the unique attributes and essence that fundamentally distinguish humans from other entities. It involves: what fundamentally makes us &amp;ldquo;human&amp;rdquo;? What makes something not qualify as human?&lt;/p&gt;&#xA;&lt;p&gt;As the &amp;ldquo;essence of humanity,&amp;rdquo; humanity concerns the universal characteristics of humans as a &amp;ldquo;class of existence,&amp;rdquo; that is, the fundamental attributes that make humans human. &amp;ldquo;Humanity&amp;rdquo; is the fundamental mark that distinguishes humans from animals. It does not refer to a common feature possessed by every individual but to the unique mode of existence of the human species. 
&amp;ldquo;Humanity&amp;rdquo; is reflected in humans&amp;rsquo; ability to engage in free, conscious, and creative activities, especially labor.&lt;/p&gt;&#xA;&lt;p&gt;The &amp;ldquo;humanity&amp;rdquo; of artificial intelligence we propose is based on the concept of &amp;ldquo;humanity&amp;rdquo; and is a derivative, opposite, and externalized product of human &amp;ldquo;humanity.&amp;rdquo; It indicates that the establishment of the concept of artificial intelligence fundamentally derives entirely from human concepts; regardless of how artificial intelligence develops, its meaning of existence is entirely determined by the meaning of human existence. Conversely, the &amp;ldquo;humanity&amp;rdquo; of artificial intelligence is its essentially non-human nature.&lt;/p&gt;&#xA;&lt;p&gt;Overall, the &amp;ldquo;humanity&amp;rdquo; of artificial intelligence can be understood from two dimensions:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;From the &amp;ldquo;class&amp;rdquo; dimension: it refers to the essence of artificial intelligence entities as a whole, distinguishing them from humans&amp;rsquo; creative, free, and conscious essence.&lt;/li&gt;&#xA;&lt;li&gt;From the &amp;ldquo;individual&amp;rdquo; dimension: it refers to the unique, irreplaceable mode of existence possessed by each specific artificial intelligence entity.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;These two dimensions together constitute the rich connotation of the concept of artificial intelligence&amp;rsquo;s &amp;ldquo;humanity&amp;rdquo;: it is both the universal foundation for artificial intelligence to be artificial intelligence and the unique confirmation of each &amp;ldquo;artificial intelligence entity&amp;rdquo; to be an &amp;ldquo;artificial intelligence entity.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;The basic philosophical concepts of &amp;ldquo;self-reference&amp;rdquo; and &amp;ldquo;generativity&amp;rdquo; are core characteristics of its role as a foundational thinking tool and theoretical 
instrument.&lt;/p&gt;&#xA;&lt;h4 id=&#34;what-is-self-reference&#34;&gt;What is Self-Reference?&#xA;&lt;/h4&gt;&lt;p&gt;Self-reference refers to the ability of a concept to point to, include, or apply to itself. It is not a simple tautology but the self-referential and reflective nature of a concept at the logical level.&lt;/p&gt;&#xA;&lt;p&gt;Core Expression: When a concept is used to analyze the conditions for its own establishment, applicable scope, or meaning, it reflects self-reference.&lt;/p&gt;&#xA;&lt;p&gt;Typical Examples:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&amp;ldquo;Existence&amp;rdquo;: When we ask, &amp;ldquo;Does &amp;rsquo;existence&amp;rsquo; itself exist?&amp;rdquo; we are using the concept of &amp;ldquo;existence&amp;rdquo; to reflect on itself.&lt;/li&gt;&#xA;&lt;li&gt;&amp;ldquo;Truth&amp;rdquo;: The definition of &amp;ldquo;truth&amp;rdquo; (e.g., &amp;ldquo;a statement that corresponds to facts&amp;rdquo;) itself needs to be examined for whether it is &amp;ldquo;true.&amp;rdquo;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Philosophical Significance: Self-reference reveals the depth and complexity of thought, often leading to fundamental philosophical insights or paradoxes, forcing thought to establish more rigorous levels (such as the distinction between object language and meta-language).&lt;/p&gt;&#xA;&lt;h4 id=&#34;what-is-generativity&#34;&gt;What is Generativity?&#xA;&lt;/h4&gt;&lt;p&gt;Generativity refers to the openness and productivity of a concept, enabling it to serve as a foundation or framework that generates new questions, theoretical systems, or cognitive approaches. It acts as a &amp;ldquo;thinking engine.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Core Expression: A meta-concept can open a continuous field of inquiry rather than provide a closed answer. 
For example:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&amp;ldquo;Freedom&amp;rdquo;: From it, one can generate a series of endless philosophical and political issues such as &amp;ldquo;the relationship between freedom and necessity,&amp;rdquo; &amp;ldquo;political freedom and volitional freedom,&amp;rdquo; and &amp;ldquo;the limits of freedom.&amp;rdquo;&lt;/li&gt;&#xA;&lt;li&gt;&amp;ldquo;Justice&amp;rdquo;: It can generate entire political philosophy systems concerning distributive justice, procedural justice, corrective justice, etc.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Philosophical Significance: Generativity ensures the vitality and evolution of the system. Basic concepts are not dogmatic definitions but the source of problem domains and the hub of theoretical construction.&lt;/p&gt;&#xA;&lt;h4 id=&#34;the-relationship-between-self-reference-and-generativity&#34;&gt;The Relationship Between Self-Reference and Generativity&#xA;&lt;/h4&gt;&lt;p&gt;Self-reference and generativity are inseparable and together constitute their &amp;ldquo;meta&amp;rdquo; characteristics.&lt;/p&gt;&#xA;&lt;p&gt;Self-reference is the deep driving force of generativity: it is precisely because a concept can self-reflect (self-reference) that it exposes its internal tensions, ambiguities, and uncertainties, thus generating the need for further analysis and theorization.&lt;/p&gt;&#xA;&lt;p&gt;Generativity is the real unfolding of self-reference: the self-referential inquiry of a concept is not an empty cycle; it must unfold and deepen through generating a series of specific, progressively layered questions and discussions. 
The self-reference inquiry into &amp;ldquo;self&amp;rdquo; generates the rich content of the artificial intelligence world.&lt;/p&gt;&#xA;&lt;p&gt;In summary, the meta-concept of artificial intelligence is the starting point of the artificial intelligence world, the &amp;ldquo;foundation&amp;rdquo; and &amp;ldquo;scaffolding&amp;rdquo; for humanity to build the artificial intelligence world. The &amp;ldquo;humanity&amp;rdquo; of artificial intelligence is its premise of existence, the &amp;ldquo;self-reference&amp;rdquo; of artificial intelligence is its structure pointing to itself, and the &amp;ldquo;generativity&amp;rdquo; of artificial intelligence describes its dynamic evolution process. They are the philosophical basis and tools for &amp;ldquo;legislating for artificial intelligence&amp;rdquo; philosophically.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-meta-role-of-artificial-intelligence-in-historical-evolution&#34;&gt;The Meta Role of Artificial Intelligence in Historical Evolution&#xA;&lt;/h2&gt;&lt;p&gt;Why has artificial intelligence become a &amp;ldquo;meta-concept&amp;rdquo;? Let’s review the historical evolution of artificial intelligence:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Early Stage (Logic and Symbols):&lt;/strong&gt; Artificial intelligence initially emerged as a concept of &amp;ldquo;imitating human reasoning,&amp;rdquo; forcing us to precisely and computably define concepts like &amp;ldquo;intelligence&amp;rdquo; and &amp;ldquo;reasoning&amp;rdquo; for the first time. 
At this point, artificial intelligence serves as a mirror to analyze &amp;ldquo;intelligence.&amp;rdquo;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Development Stage (Learning and Statistics):&lt;/strong&gt; With the rise of machine learning, the definition of artificial intelligence shifted from &amp;ldquo;following rules&amp;rdquo; to &amp;ldquo;learning from data.&amp;rdquo; This again forced us to re-examine concepts like &amp;ldquo;learning,&amp;rdquo; &amp;ldquo;experience,&amp;rdquo; and &amp;ldquo;intuition,&amp;rdquo; translating them into mathematical optimization problems. At this stage, artificial intelligence is a tool for generating new paradigms of intelligence.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Current Stage (Perception and Generation):&lt;/strong&gt; The emergence of large models and generative artificial intelligence directly challenges the boundaries of &amp;ldquo;creation,&amp;rdquo; &amp;ldquo;understanding,&amp;rdquo; and &amp;ldquo;consciousness.&amp;rdquo; Artificial intelligence is no longer merely a tool but has become a cognitive subject participating in creation, communication, and even possessing &amp;ldquo;hallucinations.&amp;rdquo; It has become a continuously self-redefining meta-process.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;The nature of artificial intelligence in philosophical and cognitive terms possesses the essence of a &amp;ldquo;meta-concept.&amp;rdquo; Artificial intelligence is the only field among all disciplines that studies &amp;ldquo;intelligence&amp;rdquo; itself. It does not settle for merely describing intelligence (like psychology) but aims to construct intelligence. 
This &amp;ldquo;construction&amp;rdquo; process is the most thorough and operational philosophical inquiry into the concept of &amp;ldquo;intelligence.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;The denial, externalization, and return to the &amp;ldquo;meta-concept&amp;rdquo; of humanity: the history of artificial intelligence&amp;rsquo;s development is also a history of humanity continuously repositioning itself. From &amp;ldquo;the spirit of all things&amp;rdquo; to &amp;ldquo;a form of intelligence,&amp;rdquo; artificial intelligence serves as a mirror reflecting the uniqueness and limitations of humanity.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-influence-of-meta-concepts-on-social-and-technical-systems&#34;&gt;The Influence of Meta-Concepts on Social and Technical Systems&#xA;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Meta-Concept of Productive Forces:&lt;/strong&gt; Artificial intelligence is not an ordinary production tool; it is a &amp;ldquo;tool for manufacturing tools&amp;rdquo; (such as artificial intelligence designing chips, writing code, optimizing processes), serving as a foundational and catalytic force driving the development of other technologies.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Meta-Concept of Ethics and Governance:&lt;/strong&gt; Artificial intelligence is the culmination of humanity&amp;rsquo;s social formatting tools, a weapon for deconstructing and reconstructing everything about humanity.&lt;/p&gt;&#xA;&lt;h2 id=&#34;naming-artificial-intelligence-with-chinese-characters-is-most-appropriate&#34;&gt;Naming Artificial Intelligence with Chinese Characters is Most Appropriate&#xA;&lt;/h2&gt;&lt;p&gt;The conceptual system of Chinese characters is a meta-concept system, inherently possessing philosophical &amp;ldquo;self-reference&amp;rdquo; and &amp;ldquo;generativity,&amp;rdquo; making it the best textual tool for describing various &amp;ldquo;meta-concepts&amp;rdquo; in the world.&lt;/p&gt;&#xA;&lt;p&gt;For example, &amp;ldquo;human&amp;rdquo; is a meta-concept, 
thus allowing for the derivation of various types of humans, their attributes, behaviors, and so on, leading to derived concepts and further derived concepts&amp;hellip; Ultimately, we find that humanity establishes the conceptual system of human society based on the meta-concept of &amp;ldquo;human&amp;rdquo; as the &amp;ldquo;foundation&amp;rdquo; of the entire system.&lt;/p&gt;&#xA;&lt;p&gt;From the perspective of human evolution, it derives: ape-man - female ape-man - unearthed female ape-man - unearthed female ape-man skull, Homo sapiens - Southern Homo sapiens - Southern female Homo sapiens - unearthed Southern female Homo sapiens teeth, primitive man - primitive male - primitive male hunter-gatherer - primitive male hunter-gatherer tools, modern man - modern urban dweller - modern urban dweller professions - modern urban dweller vocational training, future man - future carbon-based man - future carbon-silicon hybrid man - future carbon-silicon hybrid brain-computer interface, and so on.&lt;/p&gt;&#xA;&lt;p&gt;According to social ideology, it can derive: superior person - truly superior person - truly superior person&amp;rsquo;s virtue, foolish person - big foolish person - big foolish person&amp;rsquo;s logic, clever person - absolutely clever person - absolutely clever person&amp;rsquo;s cleverness, lover - old lover - old lover&amp;rsquo;s photo - old lover&amp;rsquo;s old photo, good person - old good person - fake old good person, bad person - big bad person - truly big bad person, and so on.&lt;/p&gt;&#xA;&lt;p&gt;According to biological attributes, it can derive: man - old man, woman - young woman, elder - half-elder, strong person - fake strong person, and so on; according to social division of labor, it can derive: soldier - female soldier, farmer - old farmer, worker - new worker, craftsman - young craftsman, and so on.&lt;/p&gt;&#xA;&lt;p&gt;Artificial intelligence is a historically new &amp;ldquo;meta-concept&amp;rdquo; that has emerged in human 
society. It can be anticipated that artificial intelligence has a trend of self-developing into carbon-based life, and it may even exist and develop alongside humans, at least on par with the once existing elements of heaven, earth, fire, water, wood, soil, thunder, and electricity. Surrounding this meta-concept, other secondary concepts will emerge, extending to more levels of specific concepts. Therefore, we can only and must use a single character to name artificial intelligence.&lt;/p&gt;&#xA;&lt;h3 id=&#34;all-words-describing-meta-concepts-in-chinese-characters-are-single-characters&#34;&gt;All Words Describing Meta-Concepts in Chinese Characters are Single Characters&#xA;&lt;/h3&gt;&lt;p&gt;Words describing meta-concepts in Chinese characters are all single characters, such as: heaven, earth, human, wind, cloud, water, electricity, wood.&lt;/p&gt;&#xA;&lt;h4 id=&#34;why-must-it-be-named-with-a-single-chinese-character&#34;&gt;Why Must It Be Named with a Single Chinese Character?&#xA;&lt;/h4&gt;&lt;p&gt;This is a clever requirement based on its &amp;ldquo;meta-concept&amp;rdquo; property:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Convergence of Symbols:&lt;/strong&gt; A complex, multi-dimensional, and continuously evolving meta-concept requires a highly abstract and stable symbol as its &amp;ldquo;baseline&amp;rdquo; or &amp;ldquo;anchor.&amp;rdquo; Multi-word terms describe, while single-character names refer, getting closer to the essence.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Cultural Embeddedness:&lt;/strong&gt; Chinese characters are ideographic; a powerful single character can carry profound cultural imagery and historical context, embedding this technology concept originating from the West deeper into Eastern thinking and narrative soil.&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&#xA;&lt;p&gt;&lt;strong&gt;Future Adaptability:&lt;/strong&gt; As a meta-concept, the connotation of artificial 
intelligence will continue to expand. An open single character (like &amp;ldquo;wisdom&amp;rdquo;) is more inclusive and has more evolutionary space than a definitional compound word (like &amp;ldquo;artificial intelligence&amp;rdquo;).&lt;/p&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;If a single character must be chosen, it is recommended to name artificial intelligence with a character pronounced &amp;ldquo;qi&amp;rdquo; or &amp;ldquo;huang,&amp;rdquo; for the following reasons:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Directly Pointing to the Essence:&lt;/strong&gt; Silicon is the material essence of artificial intelligence; a single-sound, single-character name strips away the limiting qualifier &amp;ldquo;artificial&amp;rdquo; and points directly to that essence, for silicon is derived from &amp;ldquo;stone.&amp;rdquo;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Historical Depth:&lt;/strong&gt; This character is a compound character, carrying the Eastern word-formation method for advanced cognitive abilities.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Word Root Activity:&lt;/strong&gt; As a root, it can naturally derive new words like body, calculation, recognition, machinery, etc., perfectly adapting to the generativity of artificial intelligence as a meta-concept.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Philosophical Inclusivity:&lt;/strong&gt; It refers equally to human wisdom and to machine intelligence, leaving space for the future integration and dialogue between the two.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Chinese is not only for Huaxia but also for the world.&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;Other alternative characters, such as &amp;ldquo;ling&amp;rdquo; (emphasizing the elusive emergent characteristics) or &amp;ldquo;silicon&amp;rdquo; (emphasizing its material basis and digital origin), are also worth considering.&lt;/p&gt;&#xA;&lt;p&gt;Regardless, we must calm down, think carefully, and 
strictly adhere to the &amp;ldquo;one premise&amp;rdquo; and &amp;ldquo;three principles&amp;rdquo; for naming artificial intelligence, ensuring accuracy, depth, and acceptability in various aspects, preferring slowness to haste and preferring deficiency to excess.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusion&#34;&gt;Conclusion&#xA;&lt;/h2&gt;&lt;p&gt;Artificial intelligence, due to its philosophical inquiry into the essence of intelligence and its framework-restructuring impact on human society, has transcended the technical realm, becoming a &amp;ldquo;meta-concept&amp;rdquo; of a new era. Naming &amp;ldquo;artificial intelligence&amp;rdquo; with highly concise Chinese characters is an Eastern philosophical refinement of its essence, a historical cultural coronation for this power that defines the future.&lt;/p&gt;&#xA;&lt;p&gt;In summary, we must have a basic understanding:&lt;/p&gt;&#xA;&lt;p&gt;What seems to be a simple naming issue is, in fact, a comprehensive positioning of humanity&amp;rsquo;s self-generated counterpart and whether it can be controlled. To put it plainly: humanity&amp;rsquo;s understanding, positioning, and naming of artificial intelligence entities are the understanding, positioning, and stipulation of humanity&amp;rsquo;s future destiny. In reality, this determines the fundamental relationship between humanity and artificial intelligence entities. This is currently the only remaining good time window, and we must legislate for artificial intelligence entities in methodology, epistemology, and philosophy. This will fundamentally determine the future destinies of humanity and artificial intelligence.&lt;/p&gt;&#xA;&lt;p&gt;We are not merely naming artificial intelligence and artificial intelligence entities! This is a call for everyone to unite and reclaim the discourse power of artificial intelligence, thereby reclaiming the formatting power of humanity!!!&lt;/p&gt;&#xA;&lt;p&gt;The specific character to use should be a collective brainstorming effort. 
However, naming artificial intelligence must be based on the following premises:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;The naming of artificial intelligence entities is not merely a technological concept like artificial intelligence.&lt;/li&gt;&#xA;&lt;li&gt;Artificial intelligence entities are new entities that will inevitably exist alongside humans, requiring a meta-concept that describes their essence, not just a technical term or scientific name.&lt;/li&gt;&#xA;&lt;li&gt;It must use Chinese characters to determine this concept for all humanity. And it should be a single character.&lt;/li&gt;&#xA;&lt;li&gt;Such a meta-concept must start from humanity, reflecting the subject position of humans and the subordinate nature of intelligent entities.&lt;/li&gt;&#xA;&lt;li&gt;The naming of artificial intelligence entities is not a simple technological naming issue.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;It encompasses all social meanings, including technology, production, economy, politics, culture, military, and education. It relates to the future meaning of human existence, serving as the basic anchor and basis for determining the relationship between humans and intelligent entities. If named improperly, it could become the most powerful tool for alienating humanity in the hands of malicious forces. The result would be a disaster for all humanity and an irretrievable fate!!!&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>MiroFish: AI-Powered Prediction Engine Gains $4.2 Million Investment</title>
            <link>https://acousticinfoplus.com/posts/note-1ab31becac/</link>
            <pubDate>Sun, 08 Mar 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-1ab31becac/</guid>
            <description>&lt;h2 id=&#34;mirofish-ai-powered-prediction-engine&#34;&gt;MiroFish: AI-Powered Prediction Engine&#xA;&lt;/h2&gt;&lt;p&gt;MiroFish is an AI prediction engine that has recently surged to the top of GitHub&amp;rsquo;s Trending list, with its star count skyrocketing to over 5.7k since the end of January. This open-source project utilizes AI to predict the world by extracting real-world seed information, such as breaking news, to automatically construct a high-fidelity parallel digital world.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;688px&#34; data-flex-grow=&#34;286&#34; height=&#34;230&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-43c351940f.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Within this space, thousands of intelligent agents, each with independent personalities, long-term memories, and behavioral logic, interact freely and evolve socially. 
Users can dynamically inject variables into the system to accurately forecast future trends.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;358px&#34; data-flex-grow=&#34;149&#34; height=&#34;442&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-8b376bd0b4.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In the author&amp;rsquo;s demonstration, MiroFish was used to predict the lost ending of &amp;ldquo;Dream of the Red Chamber&amp;rdquo; based on its first 80 chapters and to analyze the strategic evolution and market impact following a major financing round for a tech company.&lt;/p&gt;&#xA;&lt;p&gt;Before MiroFish, the author created an open-source project called BettaFish, a multi-agent public opinion analysis assistant. Initially a graduation project, it exploded in popularity on GitHub, gaining 20k stars in just one week after being open-sourced. Remarkably, both projects were developed in just 10 days of Vibe Coding.&lt;/p&gt;&#xA;&lt;p&gt;Currently, the author has attracted the attention of Chen Tianqiao, the founder of Shanda Group, who invited him to join the company. With Chen&amp;rsquo;s strong support, MiroFish has secured an investment of 30 million RMB (approximately $4.2 million).&lt;/p&gt;&#xA;&lt;h2 id=&#34;mirofish-building-on-bettafish&#34;&gt;MiroFish: Building on BettaFish&#xA;&lt;/h2&gt;&lt;p&gt;MiroFish is an extension of the earlier project BettaFish. While BettaFish focused on public opinion analysis by automatically searching the internet for relevant information on trending topics and generating detailed analysis reports, MiroFish aims to take it a step further. 
It transforms the endpoint of analysis into the starting point for predictions, creating a true feedback loop from raw data to intelligent decision-making.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;236px&#34; data-flex-grow=&#34;98&#34; height=&#34;306&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-82f4dac4b9.jpeg&#34; width=&#34;302&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;For example, in the demonstration of predicting the lost ending of &amp;ldquo;Dream of the Red Chamber,&amp;rdquo; the first step involves constructing a knowledge graph. The original text of the first 80 chapters is uploaded, and prompts are provided for the model to logically deduce outcomes based on text features and character personalities.&lt;/p&gt;&#xA;&lt;p&gt;This step extracts key entities and relationships from the seed information and uses a temporal GraphRAG to inject unique backgrounds and memories into each intelligent agent.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;271px&#34; data-flex-grow=&#34;113&#34; height=&#34;583&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-40eabde598.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The system generates a vast character relationship graph based on the 150,000 words of the original text, featuring 905 entity nodes and 3,822 relationship edges. 
The core character is Baoyu, who has the most relationships with other nodes such as Daiyu, Baochai, Jia Mu, and Xiren.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;444px&#34; data-flex-grow=&#34;185&#34; height=&#34;356&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-73f6ed4a23.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Clicking on each node reveals detailed character descriptions and summaries of the latest events in the story. For instance, Daiyu&amp;rsquo;s latest event is the &amp;ldquo;Cold Moon Buries the Poetic Soul&amp;rdquo; from chapter 76.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;228px&#34; data-flex-grow=&#34;95&#34; height=&#34;692&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-41b80f4a1f.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The second step involves environment setup, where character relationships are extracted to create personas, and core parameters for simulation are established. 
A total of 580 personas are extracted, indicating the generation of 580 agents.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;318px&#34; data-flex-grow=&#34;132&#34; height=&#34;498&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-9d71b8394d.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Each persona provides a comprehensive overview of the character&amp;rsquo;s experiences, unique memories, behavioral patterns, and social networks. For example, Jia Dairu is a 72-year-old teacher from the Jia family, adhering to traditional ethics and witnessing the rise and fall of the Jia family.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;180px&#34; data-flex-grow=&#34;75&#34; height=&#34;879&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-27a03a2fa7.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The system then generates dual-platform simulation configurations, activating events and topics to begin the simulation. After 30 rounds of dual-world simulations, over 500 agents engaged in nearly 2,000 activities. 
The left side displays the character relationship graph post-simulation, while the right shows the specific activities and statements of each character, weaving together a new storyline.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;526px&#34; data-flex-grow=&#34;219&#34; height=&#34;301&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-d9e5ccbde4.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Agents interact with each other through references and comments, such as Su Yun describing the search in the Grand View Garden, followed by Zhen Shiyin&amp;rsquo;s response, commenting on the impermanence of life.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;420px&#34; data-flex-grow=&#34;175&#34; height=&#34;377&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-93c4ae2bb6.jpeg&#34; width=&#34;660&#34;&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;404px&#34; data-flex-grow=&#34;168&#34; height=&#34;392&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-8a023d0011.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The system can also generate a comprehensive event prediction report, with some insights being quite profound, such as the collapse of the Grand View Garden being an inevitable process resulting from the resonance between social structures and individual destinies.&lt;/p&gt;&#xA;&lt;p&gt;Interestingly, some predicted endings align 
closely with the existing conclusion of &amp;ldquo;Dream of the Red Chamber,&amp;rdquo; such as Daiyu burning the manuscript and severing her emotional ties.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;460px&#34; data-flex-grow=&#34;191&#34; height=&#34;344&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-d6e4294f8e.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Moreover, users can interact with the model, asking questions like, &amp;ldquo;What happens to Baoyu after the Grand View Garden is raided?&amp;rdquo; Unlike the version by Gao E, which has Baoyu participating in the imperial examination, the model predicts that he suffers mental trauma from repeated setbacks and disappears with a madman.&lt;/p&gt;&#xA;&lt;p&gt;The author shared his expenses, noting that the entire process from the first step to the end of the simulation cost about 14 RMB.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;318px&#34; data-flex-grow=&#34;132&#34; height=&#34;498&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-3b4517f551.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;However, he acknowledged some limitations of the project, such as potential mixing of Chinese and English in the output when the input text volume is too large, which will be optimized in future iterations.&lt;/p&gt;&#xA;&lt;h2 id=&#34;vibecoding-creating-super-individuals&#34;&gt;VibeCoding: Creating Super Individuals&#xA;&lt;/h2&gt;&lt;p&gt;Since the success of BettaFish, the author has received countless emails with job offers, investment proposals, and 
collaboration invitations, overwhelming his inbox.&lt;/p&gt;&#xA;&lt;p&gt;He wrote an article sharing the entire process behind his projects, emphasizing that the market is desperately seeking individuals who can harness AI as a productive force.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;309px&#34; data-flex-grow=&#34;129&#34; height=&#34;511&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-7029b84ce6.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Many have asked him to share VibeCoding tutorials, but he explained that it&amp;rsquo;s challenging to provide a formula due to the rapid pace of technological change. What works today may be obsolete next month.&lt;/p&gt;&#xA;&lt;p&gt;Nevertheless, he shared insights on VibeCoding: the most time-consuming aspect is market research and technology selection, understanding &amp;ldquo;why to do it, for whom, and how to do it&amp;rdquo; before directing AI to perform tasks.&lt;/p&gt;&#xA;&lt;p&gt;His workflow involves sketching in Figma, refining with AI, creating a front-end demo using Google AI Studio, integrating pages into project documentation, and breaking tasks into modules for an AI IDE to develop in batches.&lt;/p&gt;&#xA;&lt;p&gt;For front-end development, he recommends Gemini 3 Pro for its intuitive capabilities in initializing pages, beautifying designs, and refining interactive details. Back-end structure, interface design, and stability improvements are handled by Claude.&lt;/p&gt;&#xA;&lt;p&gt;He also shared several experiences: first, having multiple agents work on the same task in parallel allows for the selection of the best approach, significantly increasing efficiency. 
Understanding each model&amp;rsquo;s capabilities and limitations is crucial for effective collaboration.&lt;/p&gt;&#xA;&lt;p&gt;Second, as speed increases, a robust &amp;ldquo;braking system&amp;rdquo; is essential. This means managing code with Git and maintaining thorough documentation to prevent changes in one area from disrupting the entire project.&lt;/p&gt;&#xA;&lt;p&gt;Third, deep human-machine collaboration and code reviews are vital for any serious project. He audits the code written by AI line by line and follows its execution process to understand the reasoning behind its decisions.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 16&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;276px&#34; data-flex-grow=&#34;115&#34; height=&#34;250&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-1f9e80a687.jpeg&#34; width=&#34;288&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The author also highlighted several key points for maintaining open-source projects.&lt;/p&gt;&#xA;&lt;h2 id=&#34;about-the-author&#34;&gt;About the Author&#xA;&lt;/h2&gt;&lt;p&gt;The creator of these two trending GitHub projects is BaiFu, a student at the University of Science and Technology of China.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 17&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;147px&#34; data-flex-grow=&#34;61&#34; height=&#34;1072&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-1ab31becac/img-5783d5ec30.jpeg&#34; width=&#34;660&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In just 30 days, BaiFu has felt the overwhelming enthusiasm from investors towards AI talents born in the 2000s and the concept of &amp;ldquo;super individuals.&amp;rdquo; After the success of BettaFish, Chen Tianqiao invited BaiFu to 
join Shanda and encouraged him to continue pursuing his ideas.&lt;/p&gt;&#xA;&lt;p&gt;Thus, in just 10 days at Shanda, BaiFu completed the &amp;ldquo;prediction&amp;rdquo; feature he envisioned during the BettaFish phase, leading to the development of MiroFish.&lt;/p&gt;&#xA;&lt;p&gt;Within 24 hours of submitting the demonstration video, Chen Tianqiao decided to invest 30 million RMB to fully support MiroFish&amp;rsquo;s development.&lt;/p&gt;&#xA;&lt;p&gt;In his article, BaiFu enthusiastically encourages those with the potential to become &amp;ldquo;super individuals&amp;rdquo; to act on it, emphasizing that the earlier one explores this path, the greater the chances of success, especially for university students. He states that the determination of traditional and semi-internet industries to pursue AI transformation is widely underestimated: nearly all companies are experiencing &amp;ldquo;AI anxiety&amp;rdquo; and are eager to implement AI solutions to avoid being left behind.&lt;/p&gt;&#xA;&lt;p&gt;For young people, as long as they are willing to delve into a field, there is ample opportunity in the vast domestic market, whether in employment or entrepreneurship.&lt;/p&gt;&#xA;&lt;h2 id=&#34;links&#34;&gt;Links&#xA;&lt;/h2&gt;&lt;ul&gt;&#xA;&lt;li&gt;GitHub Repository: &lt;a class=&#34;link&#34; href=&#34;https://github.com/666ghj/MiroFish&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;MiroFish&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;Demo Link: &lt;a class=&#34;link&#34; href=&#34;https://666ghj.github.io/mirofish-demo/&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;MiroFish Demo&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;Author&amp;rsquo;s Statement: &lt;a class=&#34;link&#34; href=&#34;https://mp.weixin.qq.com/s/UyYVjlBCvQRJI6B_MmZbsA&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;&#xA;    &gt;Author&amp;rsquo;s WeChat Article&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;</description>
        </item><item>
            <title>Andrej Karpathy on the Revolution of Programming with AI Agents</title>
            <link>https://acousticinfoplus.com/posts/note-af875e8f44/</link>
            <pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-af875e8f44/</guid>
            <description>&lt;h2 id=&#34;revolution-in-programming-with-ai-agents&#34;&gt;Revolution in Programming with AI Agents&#xA;&lt;/h2&gt;&lt;p&gt;On February 26, 2026, Andrej Karpathy, formerly Director of AI at Tesla and a founding member of OpenAI, stated that AI agents have undergone fundamental changes in the past two months.&lt;/p&gt;&#xA;&lt;p&gt;In a post on X, he noted that before December of last year, AI agents were almost incapable of handling practical development tasks. However, with improvements in model quality and execution capabilities, these agents have become much more reliable.&lt;/p&gt;&#xA;&lt;p&gt;Karpathy provided an example where he described his requirements in English, and an AI agent completed the development of a video analysis dashboard within 30 minutes, autonomously solving problems and delivering results. He remarked that such tasks would have required a developer an entire weekend just three months prior.&lt;/p&gt;&#xA;&lt;p&gt;He believes that programming itself has been fundamentally transformed: developers no longer need to write code line by line but can launch AI agents and assign tasks using natural language while supervising multiple agents simultaneously. 
However, human oversight and high-level guidance remain essential.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;74px&#34; data-flex-grow=&#34;31&#34; height=&#34;3836&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-af875e8f44/img-2d807f3774.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-af875e8f44/img-2d807f3774_hu_6be078b078eab74a.jpeg 800w, https://acousticinfoplus.com/posts/note-af875e8f44/img-2d807f3774.jpeg 1192w&#34; width=&#34;1192&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Notably, until October 2025, Karpathy thought that AI agents were being overhyped, but his perspective changed after the release of Opus 4.5 and Codex 5.2.&lt;/p&gt;&#xA;&lt;p&gt;Karpathy introduced the concept of &amp;ldquo;vibe coding,&amp;rdquo; which emphasizes using natural language prompts to allow AI to generate code directly. He encourages developers to relinquish their need for control and work in harmony with the capabilities of these tools. This term was later recognized as the word of the year by Collins Dictionary in 2025.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;325px&#34; data-flex-grow=&#34;135&#34; height=&#34;850&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-af875e8f44/img-ea488864cf.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-af875e8f44/img-ea488864cf_hu_4d7d4f29b1e5e2f.jpeg 800w, https://acousticinfoplus.com/posts/note-af875e8f44/img-ea488864cf.jpeg 1152w&#34; width=&#34;1152&#34;&gt;&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>OpenAI Reveals Codex Agent Loop and PostgreSQL Architecture</title>
            <link>https://acousticinfoplus.com/posts/note-eee88472e9/</link>
            <pubDate>Mon, 26 Jan 2026 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-eee88472e9/</guid>
            <description>&lt;p&gt;Recently, Anthropic&amp;rsquo;s Claude Code has taken the AI programming community by storm!&lt;/p&gt;&#xA;&lt;p&gt;This AI assistant, capable of reading code, modifying it, and running tests in the terminal, has developers exclaiming, &amp;ldquo;This is the future.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Social media is buzzing with comments like &amp;ldquo;Claude Code outperforms Cursor, Codex, Antigravity&amp;rdquo; as everyone speculates on OpenAI&amp;rsquo;s next big move with GPT-5.3. Today, OpenAI revealed two major updates on X platform:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;Agent Loop Architecture Unveiled: The Inner Workings of Codex&amp;rsquo;s &amp;lsquo;Brain&amp;rsquo;&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;PostgreSQL Extreme Architecture: One Master Database Handling 800 Million Users&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;234px&#34; data-flex-grow=&#34;97&#34; height=&#34;1104&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-8de0de776e.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-8de0de776e_hu_4c4865210b2c91e1.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-8de0de776e.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;252px&#34; data-flex-grow=&#34;105&#34; height=&#34;1028&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-6e28f42a25.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-6e28f42a25_hu_e99b9781b6def158.jpeg 800w, 
https://acousticinfoplus.com/posts/note-eee88472e9/img-6e28f42a25.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This powerful combination is impressive. Let&amp;rsquo;s break down what OpenAI has in store.&lt;/p&gt;&#xA;&lt;h2 id=&#34;agent-loop&#34;&gt;&lt;strong&gt;Agent Loop&lt;/strong&gt;&#xA;&lt;/h2&gt;&lt;h3 id=&#34;how-codex&#34;&gt;&lt;strong&gt;How Codex&amp;rsquo;s &amp;lsquo;Brain&amp;rsquo; Works&lt;/strong&gt;&#xA;&lt;/h3&gt;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;484px&#34; data-flex-grow=&#34;201&#34; height=&#34;534&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-6675b02577.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-6675b02577_hu_27952e147b02ae8b.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-6675b02577.jpeg 1078w&#34; width=&#34;1078&#34;&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;376px&#34; data-flex-grow=&#34;156&#34; height=&#34;689&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-b830550711.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-b830550711_hu_539438275bab70e.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-b830550711.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;what-is-agent-loop&#34;&gt;&lt;strong&gt;What is Agent Loop?&lt;/strong&gt;&#xA;&lt;/h3&gt;&lt;p&gt;If you have used Codex CLI, Claude Code, or similar CLI terminal tools, you may wonder:&lt;/p&gt;&#xA;&lt;p&gt;How does it know what I want to do? 
How can it read files, write code, and run commands on its own?&lt;/p&gt;&#xA;&lt;p&gt;The answer lies in something called &lt;strong&gt;Agent Loop&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;500px&#34; data-flex-grow=&#34;208&#34; height=&#34;518&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-f491d1f608.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-f491d1f608_hu_84697cfa1826b8d1.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-f491d1f608.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In simple terms, the Agent Loop acts like a &amp;ldquo;conductor,&amp;rdquo; responsible for creating a perfect closed loop between &amp;ldquo;user intent,&amp;rdquo; &amp;ldquo;model brain,&amp;rdquo; and &amp;ldquo;execution tools.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;484px&#34; data-flex-grow=&#34;201&#34; height=&#34;534&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-6675b02577.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-6675b02577_hu_27952e147b02ae8b.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-6675b02577.jpeg 1078w&#34; width=&#34;1078&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This is not just a simple &amp;ldquo;Q&amp;amp;A&amp;rdquo;; it is a &lt;strong&gt;working system&lt;/strong&gt; that includes &amp;ldquo;observe-think-act-feedback.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Let’s break down how a true AI Agent operates.&lt;/p&gt;&#xA;&lt;h3 id=&#34;how-a-complete-agent-loop-works&#34;&gt;&lt;strong&gt;How 
a Complete Agent Loop Works&lt;/strong&gt;&#xA;&lt;/h3&gt;&lt;p&gt;Let’s illustrate with a specific example.&lt;/p&gt;&#xA;&lt;p&gt;Suppose you input in the terminal: Add a diagram to the project&amp;rsquo;s README.md.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Step 1: Constructing the Prompt&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;This is like sending a work order to the brain.&lt;/p&gt;&#xA;&lt;p&gt;Codex doesn’t just pass your words to the model; it first constructs a carefully designed &amp;ldquo;Prompt&amp;rdquo;:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Who am I (System):&lt;/strong&gt; Tell the model who it is and what it can do.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;What tools do I have (Tools):&lt;/strong&gt; What tools can be invoked (like shell commands, file operations).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Context:&lt;/strong&gt; What directory is currently in use, what shell is being used.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;User instruction:&lt;/strong&gt; Add a diagram to README.md.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;This is akin to sending the model a detailed work email instead of just saying, &amp;ldquo;Help me.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Step 2: Model Inference&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;At this stage, the brain starts to work.&lt;/p&gt;&#xA;&lt;p&gt;Codex sends this Prompt to the Responses API, and the model begins to think:&lt;/p&gt;&#xA;&lt;p&gt;&amp;ldquo;The user wants to add a diagram; I need to check what the current README looks like&amp;hellip;&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Then the model decides: &lt;strong&gt;Call the shell tool to execute&lt;/strong&gt; &lt;code&gt;cat README.md&lt;/code&gt;.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Step 3: Tool Call&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Codex receives the model&amp;rsquo;s request, executes the command locally, and reads the content of README.md.&lt;/p&gt;&#xA;&lt;p&gt;This is like the hands and feet starting to 
move.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Step 4: Result Feedback&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The terminal outputs the content of README.md.&lt;/p&gt;&#xA;&lt;p&gt;At this point, the process isn’t over. Codex appends the command output to the Prompt and sends it back to the model.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Step 5: Looping&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The model sees the content of README and infers again:&lt;/p&gt;&#xA;&lt;p&gt;It may generate a Mermaid diagram or write an ASCII graphic&amp;hellip; then call the tool to write to the file.&lt;/p&gt;&#xA;&lt;p&gt;This loop continues until the model deems the task complete, outputting a message, &amp;ldquo;I’m done.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;It is not answering questions; it is solving problems.&lt;/p&gt;&#xA;&lt;p&gt;Why is this important?&lt;/p&gt;&#xA;&lt;p&gt;You might say, &amp;ldquo;Isn’t this just making a few API calls?&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;But it’s not that simple.&lt;/p&gt;&#xA;&lt;p&gt;Traditional LLM applications are &amp;ldquo;one question, one answer&amp;rdquo;: you ask, it answers, and that’s it.&lt;/p&gt;&#xA;&lt;p&gt;But the Agent Loop transforms AI into an &lt;strong&gt;independent worker&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;It plans its own path (Chain of Thought).&lt;/p&gt;&#xA;&lt;p&gt;It checks for errors (Self-Correction).&lt;/p&gt;&#xA;&lt;p&gt;It verifies results (Feedback Loop).&lt;/p&gt;&#xA;&lt;p&gt;This is the &lt;strong&gt;true &amp;lsquo;AI Agent.&amp;rsquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;And the Agent Loop is the bridge that allows AI to leap from &amp;ldquo;chat companion&amp;rdquo; to &amp;ldquo;independent worker.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h3 id=&#34;performance-optimization&#34;&gt;&lt;strong&gt;Performance Optimization&lt;/strong&gt;&#xA;&lt;/h3&gt;&lt;h3 id=&#34;two-key-technologies&#34;&gt;&lt;strong&gt;Two Key Technologies&lt;/strong&gt;&#xA;&lt;/h3&gt;&lt;p&gt;OpenAI shared two hardcore optimizations that address 
two major pain points in Agent development:&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Pain Point 1: Exploding Costs&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Every time the Agent Loop runs, it must resend the previous conversation history (including lengthy error messages and file contents) to the model.&lt;/p&gt;&#xA;&lt;p&gt;The longer the conversation, the higher the cost. Without optimization, costs grow quadratically.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Solution: Prompt Caching&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;OpenAI employs a caching strategy similar to &amp;ldquo;prefix matching.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;In simple terms, as long as the first part of the content sent to the model (System instructions, tool definitions, historical dialogue) remains unchanged, the server does not need to recalculate and can directly retrieve from the cache.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;421px&#34; data-flex-grow=&#34;175&#34; height=&#34;610&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-10cf011960.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-10cf011960_hu_920a473f93b06c56.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-10cf011960.jpeg 1072w&#34; width=&#34;1072&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;This trick reduces the cost of long conversations from quadratic growth to linear growth.&lt;/p&gt;&#xA;&lt;p&gt;However, there’s a catch: &lt;strong&gt;Any change to the Prompt prefix will invalidate the cache.&lt;/strong&gt; For example:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Switching models midway&lt;/li&gt;&#xA;&lt;li&gt;Modifying permission settings&lt;/li&gt;&#xA;&lt;li&gt;Changing the MCP tool list&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;The OpenAI team even admitted in the article that their 
early MCP tool integration had bugs: the order of the tool list was unstable, leading to frequent cache invalidation.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Pain Point 2: Limited Context Window&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;No matter how large the model, the context window is still limited.&lt;/p&gt;&#xA;&lt;p&gt;If the Agent reads a huge log file, the context fills up quickly, causing earlier memories to be pushed out.&lt;/p&gt;&#xA;&lt;p&gt;For programmers, this means: &amp;ldquo;Did you forget the function I defined earlier?!&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;This is not just foolish; it’s disastrous.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;Solution: Compaction&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;When the number of tokens exceeds a threshold, Codex does not simply &amp;ldquo;delete old messages&amp;rdquo;; instead, it calls a special &lt;code&gt;/responses/compact&lt;/code&gt; interface to compress the conversation history into a shorter summary.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;237px&#34; data-flex-grow=&#34;99&#34; height=&#34;1080&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-19f67593f4.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-19f67593f4_hu_2a084eff4b50bb0d.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-19f67593f4.jpeg 1070w&#34; width=&#34;1070&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Regular summarization just shortens long text, losing a lot of details.&lt;/p&gt;&#xA;&lt;p&gt;OpenAI&amp;rsquo;s Compaction returns a segment of &lt;strong&gt;encrypted_content&lt;/strong&gt;, preserving the model&amp;rsquo;s &amp;ldquo;implicit understanding&amp;rdquo; of the original dialogue.&lt;/p&gt;&#xA;&lt;p&gt;This is like compressing a thick book into a &amp;ldquo;memory card&amp;rdquo;; 
the model can recall the entire content of the book by reading the card.&lt;/p&gt;&#xA;&lt;p&gt;This allows the Agent to maintain its &amp;ldquo;intelligence&amp;rdquo; when handling long tasks.&lt;/p&gt;&#xA;&lt;p&gt;This time, OpenAI has revealed the &amp;ldquo;brain&amp;rdquo; behind Codex CLI and the &amp;ldquo;Agent Loop,&amp;rdquo; sending a signal: &lt;strong&gt;AI is truly ready to get the work done.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;one-master-database-handling-800-million-users&#34;&gt;&lt;strong&gt;One Master Database Handling 800 Million Users&lt;/strong&gt;&#xA;&lt;/h2&gt;&lt;h3 id=&#34;extreme-operations-of-postgresql&#34;&gt;&lt;strong&gt;Extreme Operations of PostgreSQL&lt;/strong&gt;&#xA;&lt;/h3&gt;&lt;p&gt;While everyone is discussing how powerful AI models are, OpenAI quietly exposed an even more explosive piece of news:&lt;/p&gt;&#xA;&lt;p&gt;Supporting 800 million ChatGPT users and processing millions of queries per second is achieved with just a single master PostgreSQL database!&lt;/p&gt;&#xA;&lt;p&gt;It &lt;strong&gt;only uses one PostgreSQL master node and 50 read replicas.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;465px&#34; data-flex-grow=&#34;193&#34; height=&#34;557&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-02939682f0.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-02939682f0_hu_1cb0b2c929da7df6.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-02939682f0.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;800 million users? This is almost unbelievable! 
Some netizens were astonished.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;481px&#34; data-flex-grow=&#34;200&#34; height=&#34;538&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-b716f44054.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-b716f44054_hu_db8b0272556dc38e.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-b716f44054.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In an era dominated by distributed architectures, where many opt for &amp;ldquo;microservices,&amp;rdquo; &amp;ldquo;sharding,&amp;rdquo; and &amp;ldquo;NoSQL,&amp;rdquo; OpenAI shows that they can handle it with just PostgreSQL.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;887px&#34; data-flex-grow=&#34;369&#34; height=&#34;292&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-96e58c1ee4.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-96e58c1ee4_hu_f23bac58000c0301.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-96e58c1ee4.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;How did they achieve this?&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;349px&#34; data-flex-grow=&#34;145&#34; height=&#34;742&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-bac80863c4.jpeg&#34; 
srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-bac80863c4_hu_d9c7d77c3c6289ed.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-bac80863c4.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;According to information disclosed by OpenAI engineers, key technologies include:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;PgBouncer connection pool proxy: Significantly reduces database connection overhead.&lt;/li&gt;&#xA;&lt;li&gt;Cache locking mechanism: Prevents write pressure caused by cache penetration.&lt;/li&gt;&#xA;&lt;li&gt;Cross-regional cascading replication: Distributes read requests to replicas around the globe.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;The core idea of this architecture is: &lt;strong&gt;read-write separation, optimizing the read path to the extreme.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;After all, for applications like ChatGPT, read requests far exceed write requests. When a user sends a message, the system may need to read data dozens of times (user information, conversation history, configuration information, etc.), but writing occurs only once.&lt;/p&gt;&#xA;&lt;p&gt;According to OpenAI&amp;rsquo;s official blog, key technologies include:&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;1. Connection Pool Proxy (PgBouncer)&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;By managing the connection pool, the average connection establishment time was reduced from &lt;strong&gt;50ms to 5ms&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;Don’t underestimate this 45ms; in a scenario with millions of queries per second, this is a significant performance boost.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;2. 
Cache Locking/Leasing Mechanism&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;This is a very clever design.&lt;/p&gt;&#xA;&lt;p&gt;When the cache is not hit, only &lt;strong&gt;one request&lt;/strong&gt; is allowed to query the database and refill the cache, while other requests wait.&lt;/p&gt;&#xA;&lt;p&gt;This avoids the disaster scenario of &amp;ldquo;cache avalanche&amp;rdquo;—where a large number of requests simultaneously flood the database.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;3. Query Optimization and Load Isolation&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The team discovered and fixed a complex query involving &lt;strong&gt;12 table joins&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;They moved the complex logic to the application layer to avoid OLTP anti-pattern operations in the database.&lt;/p&gt;&#xA;&lt;p&gt;Additionally, requests were divided into high-priority and low-priority, handled by dedicated instances to prevent performance degradation caused by the &amp;ldquo;noisy neighbor&amp;rdquo; effect.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;4. High Availability and Failover&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The master database operates in high availability (HA) mode, equipped with hot standby nodes.&lt;/p&gt;&#xA;&lt;p&gt;All read traffic is directed to replicas, ensuring that even if the master database goes down, the service remains read-only available, reducing the impact of failures.&lt;/p&gt;&#xA;&lt;h3 id=&#34;the-ceiling-will-eventually-be-reached&#34;&gt;&lt;strong&gt;The Ceiling Will Eventually Be Reached&lt;/strong&gt;&#xA;&lt;/h3&gt;&lt;p&gt;However, OpenAI also admits that this architecture has hit physical limits. 
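&lt;/p&gt;&#xA;&lt;p&gt;The cache locking/leasing pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not OpenAI&amp;rsquo;s actual code; the names SingleFlightCache and loader are invented:&lt;/p&gt;&#xA;

```python
# Sketch of cache "leasing" (single-flight): on a miss, only one request
# queries the database and refills the entry; concurrent requests for the
# same key wait for that result instead of also hitting the database.
# SingleFlightCache and the loader callback are illustrative names only.
import threading

class SingleFlightCache:
    def __init__(self, loader):
        self._loader = loader            # e.g. a database query
        self._data = {}                  # the cache itself
        self._locks = {}                 # one "lease" lock per key
        self._guard = threading.Lock()   # protects the lock table

    def get(self, key):
        value = self._data.get(key)
        if value is not None:
            return value                 # cache hit: no database work
        with self._guard:
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                       # only one filler per key
            value = self._data.get(key)  # re-check after acquiring the lease
            if value is None:
                value = self._loader(key)
                self._data[key] = value
        return value
```

&#xA;&lt;p&gt;Only the first request for a missing key pays the database round-trip; the rest block briefly and then read the freshly filled entry, which is what keeps a popular expired key from flooding the database.&lt;/p&gt;&#xA;&lt;p&gt;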
The issues arise in two areas:&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;PostgreSQL&amp;rsquo;s MVCC Limitations&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;PostgreSQL&amp;rsquo;s Multi-Version Concurrency Control (MVCC) mechanism leads to &lt;strong&gt;write amplification&lt;/strong&gt; (updating a row writes an entire new row version) and &lt;strong&gt;read amplification&lt;/strong&gt; (scans must skip over dead tuples). This is a hard limitation for write-intensive loads.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;WAL Replication Pressure&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;As the number of replicas increases, the master database must push the write-ahead log (WAL) to all replicas. The more replicas there are, the greater the network pressure on the master, and the higher the replica latency.&lt;/p&gt;&#xA;&lt;p&gt;To overcome these limitations, OpenAI is doing two things:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;Migrating shardable, high-write loads to &lt;strong&gt;Azure Cosmos DB&lt;/strong&gt; and other distributed systems.&lt;/li&gt;&#xA;&lt;li&gt;Testing &lt;strong&gt;cascading replication&lt;/strong&gt;: allowing intermediate replicas to forward WAL to downstream replicas, aiming to support &lt;strong&gt;over 100 replicas.&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;This case perfectly illustrates an architectural philosophy: &lt;strong&gt;do not multiply entities beyond necessity.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;Don’t rush into distributed systems; start with the simple solution and add complexity only when it becomes necessary.&lt;/p&gt;&#xA;&lt;p&gt;Many companies end up with overly complex architectures before they even reach the scale where distribution is needed. 
As a result, they neither gain the benefits of distribution nor avoid the complexities.&lt;/p&gt;&#xA;&lt;p&gt;OpenAI proves through practice that an optimized single-node architecture can go further than one might imagine.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;595px&#34; data-flex-grow=&#34;248&#34; height=&#34;435&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-b78ef3cae3.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-eee88472e9/img-b78ef3cae3_hu_8e8d1d6452006966.jpeg 800w, https://acousticinfoplus.com/posts/note-eee88472e9/img-b78ef3cae3.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-battle-between-codex-and-claude-code&#34;&gt;&lt;strong&gt;The Battle Between Codex and Claude Code&lt;/strong&gt;&#xA;&lt;/h2&gt;&lt;p&gt;What is Claude Code&amp;rsquo;s killer feature? It is the &lt;strong&gt;end-to-end development experience&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;It is not just a simple code completion tool; it is an Agent that can work independently in the terminal.&lt;/p&gt;&#xA;&lt;p&gt;It can read code, modify code, run tests, handle Git, and even fix bugs on its own. Now it can even write documentation and create presentations.&lt;/p&gt;&#xA;&lt;p&gt;This directly threatens the position of Codex CLI.&lt;/p&gt;&#xA;&lt;p&gt;OpenAI&amp;rsquo;s recent updates actually convey three messages:&lt;/p&gt;&#xA;&lt;p&gt;First, &lt;strong&gt;my Agent architecture is more mature.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The unveiling of the Agent Loop showcases OpenAI&amp;rsquo;s deep accumulation in Agent architecture. 
This is not a hastily assembled product but a carefully designed system.&lt;/p&gt;&#xA;&lt;p&gt;Prompt Caching, Compaction, MCP tool integration&amp;hellip; these are all solid engineering capabilities.&lt;/p&gt;&#xA;&lt;p&gt;Second, &lt;strong&gt;my infrastructure is stronger.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The PostgreSQL case demonstrates OpenAI&amp;rsquo;s backend capabilities. The scale of 800 million users is not something just any startup can handle.&lt;/p&gt;&#xA;&lt;p&gt;This also hints: our &amp;ldquo;moat&amp;rdquo; is not just the model but the entire engineering system.&lt;/p&gt;&#xA;&lt;p&gt;Third, &lt;strong&gt;my model is becoming stronger.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The disclosure of cybersecurity ratings serves both as &amp;ldquo;expectation management,&amp;rdquo; informing everyone that the model has risks and that we are handling them responsibly, and as a show of strength: our model is now so powerful that it requires dedicated assessment of cybersecurity risks.&lt;/p&gt;&#xA;&lt;p&gt;The competition in AI programming tools has only just begun.&lt;/p&gt;&#xA;&lt;p&gt;Claude Code has forced OpenAI to accelerate the iteration speed of Codex. OpenAI&amp;rsquo;s response will, in turn, push Anthropic to continue innovating.&lt;/p&gt;&#xA;&lt;p&gt;In the end, the beneficiaries will be us developers.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Survey: How 132 Engineers at Anthropic Use Claude</title>
            <link>https://acousticinfoplus.com/posts/note-0f38ee556b/</link>
            <pubDate>Wed, 24 Dec 2025 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-0f38ee556b/</guid>
            <description>&lt;h2 id=&#34;survey-how-132-engineers-at-anthropic-use-claude&#34;&gt;Survey: How 132 Engineers at Anthropic Use Claude&#xA;&lt;/h2&gt;&lt;p&gt;AI assistants are deeply reshaping the way engineers work. A recent survey shows that Claude has entered the daily workflows of 60% of engineers, taking over low-value tasks like debugging and code comprehension, and, crucially, enabling the 27% of tasks that engineers say they would not otherwise have undertaken. As the AI execution chain lengthens from 9.8 steps to 21.2 steps, human oversight rounds decrease by 32%. This new human-AI collaboration model is fostering a core capability of &amp;ldquo;task breakdown + AI delegation + result verification,&amp;rdquo; prompting serious reflection on skill gaps and career progression.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;540px&#34; data-flex-grow=&#34;225&#34; height=&#34;400&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0f38ee556b/img-04c7c82d69.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0f38ee556b/img-04c7c82d69_hu_122abb962d1228c8.jpeg 800w, https://acousticinfoplus.com/posts/note-0f38ee556b/img-04c7c82d69.jpeg 900w&#34; width=&#34;900&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Anthropic surveyed its 132 engineers on how they use Claude in their daily work, quantifying their usage frequency, experiences, and self-assessed productivity across different tasks. 
They also sampled and analyzed 200,000 Claude Code conversation logs to see where engineers are truly applying AI, the complexity of those tasks, and how much human intervention is required.&lt;/p&gt;&#xA;&lt;h2 id=&#34;01-not-just-time-savings-but-getting-more-done&#34;&gt;01 Not Just Time Savings, But Getting More Done&#xA;&lt;/h2&gt;&lt;h3 id=&#34;1-claude-has-entered-60-of-workflows&#34;&gt;1. Claude has entered 60% of workflows&#xA;&lt;/h3&gt;&lt;p&gt;In the survey, Anthropic engineers reported:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;A year ago, only &lt;strong&gt;28% of their work involved Claude, with a self-reported efficiency gain of about 20%&lt;/strong&gt;;&lt;/li&gt;&#xA;&lt;li&gt;This year, the same individuals are using Claude in &lt;strong&gt;59% of their work, with self-reported efficiency gains around 50%&lt;/strong&gt;;&lt;/li&gt;&#xA;&lt;li&gt;About &lt;strong&gt;14% of heavy users&lt;/strong&gt; believe their output has more than doubled.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Interestingly, the most common usage scenarios are not for writing new features, but for debugging and understanding old code:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;55%&lt;/strong&gt; use Claude daily to check for bugs;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;42%&lt;/strong&gt; use it daily to understand code;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;37%&lt;/strong&gt; use it daily to implement new features.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;This indicates that &lt;strong&gt;AI is first taking over low-value but necessary tasks&lt;/strong&gt;, rather than &amp;ldquo;building rockets.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://acousticinfoplus.com/posts/note-0f38ee556b/img-14c93f4d90.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0f38ee556b/img-14c93f4d90_hu_1ebbd1eac7efbf17.jpeg 800w, https://acousticinfoplus.com/posts/note-0f38ee556b/img-14c93f4d90.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;2-27-of-work-was-previously-unattempted&#34;&gt;2. 27% of work was previously unattempted&#xA;&lt;/h3&gt;&lt;p&gt;One noteworthy statistic is:&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;Employees believe that 27% of the work they accomplish with Claude is something they would not have done otherwise.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;These &amp;ldquo;from nothing to something&amp;rdquo; tasks include documentation, testing, and refactoring previously deemed too tedious; small experience-polish tools that don&amp;rsquo;t affect KPIs; and exploratory projects and extra experiments.&lt;/p&gt;&#xA;&lt;p&gt;This directly explains why many teams feel they are not significantly less busy, yet they are accomplishing more.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;More than the time saved, the key point is that AI has brought those tasks that were &amp;ldquo;always at the bottom of the to-do list&amp;rdquo; onto the agenda.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;3-not-fully-handing-over-to-ai-but-high-frequency-collaboration&#34;&gt;3. Not &amp;ldquo;fully handing over to AI,&amp;rdquo; but &amp;ldquo;high-frequency collaboration&amp;rdquo;&#xA;&lt;/h3&gt;&lt;p&gt;Despite frequent use, over half of the engineers believe that the work they can &lt;strong&gt;&amp;ldquo;completely hand over to Claude without checking&amp;rdquo; is only 0-20%&lt;/strong&gt;. 
This is also confirmed by behavioral data:&lt;/p&gt;&#xA;&lt;p&gt;Over six months, the average complexity of tasks in Claude Code increased from 3.2 to 3.8 (out of 5), while the number of successive automated tool calls per task rose from 9.8 to 21.2, and the frequency of human interventions dropped from 6.2 to 4.1.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;In simple terms: the AI is taking on longer chains of tasks, while human involvement is decreasing, but we are far from a situation where &amp;ldquo;no human is needed.&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;02-behavioral-patterns-the-real-difference-lies-in-ai-usage&#34;&gt;02 Behavioral Patterns: The Real Difference Lies in AI Usage&#xA;&lt;/h2&gt;&lt;p&gt;From interviews and logs, a clear set of AI usage guidelines emerges, which essentially represents &lt;strong&gt;the core skills of future engineers&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;h3 id=&#34;1-what-tasks-do-engineers-delegate-to-ai&#34;&gt;1. What tasks do engineers delegate to AI?&#xA;&lt;/h3&gt;&lt;p&gt;Engineers generally hand Claude tasks that are:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Low background + low complexity: tasks they are unfamiliar with but are not difficult, such as simple Linux commands or Git operations;&lt;/li&gt;&#xA;&lt;li&gt;Easy to verify: tasks where the results can be quickly assessed, like format conversions, small tools, or simple SQL;&lt;/li&gt;&#xA;&lt;li&gt;Divisible into independent modules: tasks where a sub-module is loosely coupled with the main system, so mistakes won&amp;rsquo;t collapse the entire system;&lt;/li&gt;&#xA;&lt;li&gt;Low quality requirements: one-off debugging scripts or code for research;&lt;/li&gt;&#xA;&lt;li&gt;Tedious and repetitive tasks they prefer not to do: refactoring, documentation, chart creation, etc.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;An interesting observation is: &lt;strong&gt;&amp;ldquo;If a task can be completed in 10 minutes by oneself, many people are reluctant to open Claude.&amp;rdquo;&lt;/strong&gt; This 
boundary essentially represents the startup cost of invoking AI.&lt;/p&gt;&#xA;&lt;h3 id=&#34;2-what-tasks-do-engineers-retain-most-people-keep-these-tasks-for-themselves&#34;&gt;2. What tasks do engineers retain? Most people keep these tasks for themselves:&#xA;&lt;/h3&gt;&lt;ul&gt;&#xA;&lt;li&gt;Key designs at the product and architecture level;&lt;/li&gt;&#xA;&lt;li&gt;Decisions that involve organizational culture attributes and trade-offs;&lt;/li&gt;&#xA;&lt;li&gt;Work related to &amp;ldquo;taste&amp;rdquo; or &amp;ldquo;style,&amp;rdquo; such as interaction details;&lt;/li&gt;&#xA;&lt;li&gt;Any tasks where &amp;ldquo;the cost of error is high.&amp;rdquo;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;However, this boundary is not stable; as model capabilities improve, &lt;strong&gt;the range of tasks that can be delegated to AI is continually expanding.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;This is also evident in the logs: the proportion of using Claude for new features and code design/planning has almost tripled over six months.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0f38ee556b/img-4a67689e46.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0f38ee556b/img-4a67689e46_hu_10fe9bee596a71e9.jpeg 800w, https://acousticinfoplus.com/posts/note-0f38ee556b/img-4a67689e46.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;3-capability-structure-is-changing-more-full-stack-but-also-a-supervision-paradox&#34;&gt;3. 
Capability structure is changing: more &amp;ldquo;full-stack,&amp;rdquo; but also a &amp;ldquo;supervision paradox.&amp;rdquo;&#xA;&lt;/h3&gt;&lt;p&gt;AI is clearly making engineers more full-stack:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Backend engineers are tackling frontend and data visualization tasks;&lt;/li&gt;&#xA;&lt;li&gt;Security teams can quickly analyze risks in unfamiliar modules;&lt;/li&gt;&#xA;&lt;li&gt;Non-technical roles can also solve network and scripting issues with Claude Code.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;However, many are beginning to worry: &lt;strong&gt;as more implementation work is handed to AI, engineers get fewer chances to write code themselves; will they still be able to understand AI-written code in the future?&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;p&gt;The report describes this well as the &lt;strong&gt;supervision paradox&lt;/strong&gt;:&lt;/p&gt;&#xA;&#xA;    &lt;blockquote&gt;&#xA;        &lt;p&gt;Effective use of AI requires the ability to supervise it;&#xA;however, over-reliance on AI can lead to a gradual loss of that ability.&lt;/p&gt;&#xA;&#xA;    &lt;/blockquote&gt;&#xA;&lt;p&gt;Some engineers are consciously going against this trend: even though they know Claude can handle it, they occasionally insist on writing it themselves to retain that part of their &amp;ldquo;muscle memory.&amp;rdquo;&lt;/p&gt;&#xA;&lt;h2 id=&#34;03-what-three-cognitive-upgrades-can-we-gain&#34;&gt;03 What Three Cognitive Upgrades Can We Gain?&#xA;&lt;/h2&gt;&lt;h3 id=&#34;insight-1-ai-efficiency-gain--doing-existing-tasks-50-faster-but-also-tackling-that-27-no-one-was-doing&#34;&gt;Insight 1: AI efficiency gain ≠ doing existing tasks 50% faster, but also tackling that &amp;ldquo;27% no one was doing&amp;rdquo;&#xA;&lt;/h3&gt;&lt;p&gt;From the data, Anthropic&amp;rsquo;s engineers &lt;strong&gt;did not spend the saved time slacking off but filled it with new tasks&lt;/strong&gt;: more experiments, more refactoring, and more 
exploratory work.&lt;/p&gt;&#xA;&lt;p&gt;For any company, this means:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;If you only use AI to compress existing work hours, you might gain 20-30%;&lt;/li&gt;&#xA;&lt;li&gt;But if you use AI to tackle that 27% of tasks that were previously untouched, such as experience optimization and quality improvement, &lt;strong&gt;the marginal value will be higher.&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;For managers: &lt;strong&gt;The right question is not &amp;ldquo;How much time can we save on this project with AI?&amp;rdquo; but &amp;ldquo;What new tasks can we take on that we previously never did with AI?&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;insight-2-the-true-core-competency-is-task-breakdown--ai-delegation--result-verification&#34;&gt;Insight 2: The true core competency is &amp;ldquo;task breakdown + AI delegation + result verification&amp;rdquo;&#xA;&lt;/h3&gt;&lt;p&gt;From this report, those heavy users with doubled productivity share three common traits:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;&lt;strong&gt;They can break down tasks, turning big problems into a series of small modules that AI can easily handle;&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;They can select tasks, only delegating easily verifiable and controllable parts to AI;&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;They can verify results, knowing when to reimplement themselves and when sampling checks are sufficient.&lt;/strong&gt;&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;These three aspects essentially combine &lt;strong&gt;product thinking + technical judgment + risk control&lt;/strong&gt;, rather than simply knowing how to write prompts.&lt;/p&gt;&#xA;&lt;p&gt;The future competitiveness of engineers may not lie in how many lines of code they can write in an hour, but in &lt;strong&gt;how many AI tasks they can orchestrate in an hour while ensuring nothing goes wrong.&lt;/strong&gt;&lt;/p&gt;&#xA;&lt;h3 
id=&#34;insight-3-career-paths-are-shifting-upward-low-level-skills-may-be-skipped-altogether&#34;&gt;Insight 3: Career paths are shifting upward; low-level skills may be skipped altogether&#xA;&lt;/h3&gt;&lt;p&gt;Previously, learning programming followed a relatively standard path: starting from writing basic syntax and data structures and gradually moving to higher abstractions.&lt;/p&gt;&#xA;&lt;p&gt;Now, newcomers may very well start directly from using AI to write code. This will lead to two outcomes:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;The value of foundational skills is being re-priced&lt;/strong&gt;: not everyone needs to write low-level code, but there must be some who can understand and modify it; they will become the truly scarce &amp;ldquo;reviewers&amp;rdquo; in the AI era.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Promotion channels are shifting from &amp;ldquo;doing a lot&amp;rdquo; to &amp;ldquo;managing well&amp;rdquo;&lt;/strong&gt;: many engineers are beginning to define themselves as managers of 1/5/100 Claudes, rather than as senior coders.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;608&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0f38ee556b/img-ae9c86459f.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0f38ee556b/img-ae9c86459f_hu_4e93d2e137203ed8.jpeg 800w, https://acousticinfoplus.com/posts/note-0f38ee556b/img-ae9c86459f.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In conclusion, the results of this survey both meet my expectations and exceed my imagination. 
However, I would like to offer the following suggestions:&lt;/p&gt;&#xA;&lt;p&gt;If you are developing AI tools, consider asking yourself three questions:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Can you help users solve that 27% of tasks that were previously unattempted?&lt;/li&gt;&#xA;&lt;li&gt;Can you integrate AI into their real CI/CD, monitoring, and knowledge bases, rather than just a webpage?&lt;/li&gt;&#xA;&lt;li&gt;Can you help managers see clearly what tasks AI has accomplished and what risks it has taken on?&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;For enterprises, the more critical questions are no longer whether to adopt AI, but:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;What tasks are you prepared to delegate to AI?&lt;/li&gt;&#xA;&lt;li&gt;Who will supervise these AIs?&lt;/li&gt;&#xA;&lt;li&gt;As the learning paths, collaboration dynamics, and career expectations of teams are rewritten, do you have new organizational designs and talent strategies in place?&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;Wishing you a great day!&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Comparing Leading AI Browsers: Atlas, Comet, Dia, and Edge Copilot</title>
            <link>https://acousticinfoplus.com/posts/note-96e9fc8c5b/</link>
            <pubDate>Mon, 03 Nov 2025 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-96e9fc8c5b/</guid>
            <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;In the past week, I have explored mainstream AI browsers extensively.&lt;/p&gt;&#xA;&lt;p&gt;OpenAI&amp;rsquo;s Atlas, Perplexity&amp;rsquo;s Comet, The Browser Company&amp;rsquo;s Dia, and Edge Copilot are the most popular AI browsers on the market, each with its own highlights and pitfalls.&lt;/p&gt;&#xA;&lt;p&gt;What does the future of browsers look like? These products provide completely different answers.&lt;/p&gt;&#xA;&lt;h2 id=&#34;two-camps&#34;&gt;Two Camps&#xA;&lt;/h2&gt;&lt;p&gt;Simply put, there are two camps:&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;426px&#34; data-flex-grow=&#34;177&#34; height=&#34;724&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-b4492c9348.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-b4492c9348_hu_bfd973ee45f02c9d.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-b4492c9348.jpeg 1286w&#34; width=&#34;1286&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The progressive camp, represented by Chrome and Edge, adds AI features to traditional browsers. Google, holding the largest global market share, doesn&amp;rsquo;t need to start from scratch to accommodate the majority of user habits. AI is just an additional feature; the browser remains the same.&lt;/p&gt;&#xA;&lt;p&gt;Like the international version of Chrome, Edge has also added an AI assistant button in the upper right corner, which opens a sidebar. 
However, Edge Copilot leans more towards voice interaction, which has some quirks with Chinese accents and doesn&amp;rsquo;t quite fit practical usage scenarios.&lt;/p&gt;&#xA;&lt;p&gt;On the other hand, the radical camp, represented by ChatGPT Atlas, treats AI as the core of the browser, designing the entire browser around AI dialogue. In short, the browser itself is AI.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;480px&#34; data-flex-grow=&#34;200&#34; height=&#34;646&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-1f186d5722.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-1f186d5722_hu_da2aeab5c80a74e1.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-1f186d5722.jpeg 1292w&#34; width=&#34;1292&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Currently, browsers are not just about answering &amp;ldquo;what is&amp;rdquo; and &amp;ldquo;why,&amp;rdquo; but also about helping you figure out &amp;ldquo;what to do.&amp;rdquo; For example, the agent modes supported by Atlas and Comet can execute tasks after you issue commands.&lt;/p&gt;&#xA;&lt;h2 id=&#34;conclusions&#34;&gt;Conclusions&#xA;&lt;/h2&gt;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;ChatGPT Atlas&lt;/strong&gt;: Highly recommended, strongest execution capabilities, can truly help you operate web pages and automate tasks, but currently has security vulnerabilities; suitable for ChatGPT paid users and those who really need AI assistance.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Perplexity Comet&lt;/strong&gt;: Comprehensive information aggregation but slow and mechanical execution with weak agent capabilities; suitable for users researching and writing reports.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Dia&lt;/strong&gt;: Fastest speed with a minimalist interface, but lacks 
detail in summaries and does not execute operations; suitable for early adopters seeking quick browsing (20 USD/month).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Edge Copilot&lt;/strong&gt;: Free with clear summary structure, but overly templated AI output and does not execute tasks; suitable for ordinary users who do not want to hassle or pay.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;which-ai-browser-is-more-useful&#34;&gt;Which AI Browser is More Useful?&#xA;&lt;/h2&gt;&lt;p&gt;We tested these browsers with several tasks to assess their AI intelligence.&lt;/p&gt;&#xA;&lt;h3 id=&#34;summarizing-articles&#34;&gt;Summarizing Articles&#xA;&lt;/h3&gt;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;336px&#34; data-flex-grow=&#34;140&#34; height=&#34;918&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-faf2ca4514.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-faf2ca4514_hu_ea36ffaf2203ced9.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-faf2ca4514.jpeg 1286w&#34; width=&#34;1286&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Dia is the fastest, providing results in seconds, suitable for quick browsing but lacking detail. 
Comet&amp;rsquo;s summaries are more solid, covering almost all key points of the article.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;369px&#34; data-flex-grow=&#34;153&#34; height=&#34;840&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-80647ad7a8.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-80647ad7a8_hu_60e6cbd25adbe144.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-80647ad7a8.jpeg 1292w&#34; width=&#34;1292&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Edge Copilot is somewhat close to a professional media editor, able to extract logical layers like &amp;ldquo;advertising mechanisms&amp;rdquo; and &amp;ldquo;platform transitions,&amp;rdquo; but it feels too AI-driven.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;369px&#34; data-flex-grow=&#34;153&#34; height=&#34;840&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-9249e24392.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-9249e24392_hu_c4f18957ce7c4e03.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-9249e24392.jpeg 1292w&#34; width=&#34;1292&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Atlas feels the most human, capturing facts and extending to value-based observations.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;368px&#34; data-flex-grow=&#34;153&#34; height=&#34;840&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-5fa9dc11b7.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-5fa9dc11b7_hu_a9b89009ceca17a3.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-5fa9dc11b7.jpeg 1290w&#34; width=&#34;1290&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In summary: Dia seeks speed, Comet seeks comprehensiveness, Edge seeks stability, and Atlas seeks depth.&lt;/p&gt;&#xA;&lt;p&gt;Ultimately, it comes down to the strength of the underlying models. For instance, Atlas uses its own GPT model, giving it a natural advantage.&lt;/p&gt;&#xA;&lt;h3 id=&#34;summarizing-videos&#34;&gt;Summarizing Videos&#xA;&lt;/h3&gt;&lt;p&gt;The prerequisite for summarizing videos is having subtitles; otherwise, AI cannot work effectively.&lt;/p&gt;&#xA;&lt;p&gt;Both Dia and Atlas can generate summaries suitable for quick viewing, with detailed time-axis analysis, similar to note-taking. However, Dia generates results faster.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;338px&#34; data-flex-grow=&#34;140&#34; height=&#34;922&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-c406fa3f3b.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-c406fa3f3b_hu_f3856dbdb8b38a75.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-c406fa3f3b.jpeg 1300w&#34; width=&#34;1300&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Edge Copilot, despite its heavy AI flavor, can understand not only the surface content but also the author&amp;rsquo;s stance and emotional inclination, providing clearer expression.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;398px&#34; data-flex-grow=&#34;166&#34; height=&#34;790&#34; loading=&#34;lazy&#34; 
sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-8655cafd21.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-8655cafd21_hu_79648ddcfbaad8e5.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-8655cafd21.jpeg 1312w&#34; width=&#34;1312&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Perplexity Comet&amp;rsquo;s performance is average, delivering a mediocre overview after extensive operations.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;320px&#34; data-flex-grow=&#34;133&#34; height=&#34;562&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-b357046221.jpeg&#34; width=&#34;750&#34;&gt;&lt;/p&gt;&#xA;&lt;h3 id=&#34;planning-a-trip&#34;&gt;Planning a Trip&#xA;&lt;/h3&gt;&lt;p&gt;We tested a request: &amp;ldquo;I want to travel from Shanghai to Guangzhou for two days this weekend. 
Help me arrange the itinerary, hotel, and budget.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Overall, ChatGPT Atlas made it the easiest for me.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;434px&#34; data-flex-grow=&#34;181&#34; height=&#34;708&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-1d74df5dd9.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-1d74df5dd9_hu_b91397fd0821cc7c.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-1d74df5dd9.jpeg 1282w&#34; width=&#34;1282&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;It generated a complete travel guide, integrating information from multiple platforms like Ctrip, with rich details suitable for detail-oriented travelers.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 11&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;402px&#34; data-flex-grow=&#34;167&#34; height=&#34;766&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-eec679306c.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-eec679306c_hu_5ba4bf13327aa476.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-eec679306c.jpeg 1284w&#34; width=&#34;1284&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;While Edge Copilot and Comet also provided complete itineraries, Comet was slightly more practical, while Edge felt more like a templated AI output.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;402px&#34; data-flex-grow=&#34;167&#34; height=&#34;768&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 
1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-bbe9e91e1f.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-bbe9e91e1f_hu_252f539621b4323f.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-bbe9e91e1f.jpeg 1288w&#34; width=&#34;1288&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Dia generated a plan directly with Google search, which was convenient but lacked reliable sources.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;337px&#34; data-flex-grow=&#34;140&#34; height=&#34;924&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-9c71904220.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-9c71904220_hu_58f3a7c5c1c85f2.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-9c71904220.jpeg 1300w&#34; width=&#34;1300&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Honestly, we cannot fully rely on AI for travel planning yet; it should only serve as a general guide. 
Reliable information still needs to be sourced from social media and real user experiences.&lt;/p&gt;&#xA;&lt;h2 id=&#34;who-can-truly-help-you&#34;&gt;Who Can Truly Help You?&#xA;&lt;/h2&gt;&lt;p&gt;The differences between AI browsers are most evident in agent execution capabilities.&lt;/p&gt;&#xA;&lt;p&gt;We tasked them with &amp;ldquo;buying an iPhone 17 Pro Max on the Apple website.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;Dia quickly identified user intent and generated a clear step-by-step purchasing guide (visit the official website → select model → choose payment), providing specific data but not actually placing the order.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;335px&#34; data-flex-grow=&#34;139&#34; height=&#34;924&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-673b23e341.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-673b23e341_hu_ca96b5b56ca08680.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-673b23e341.jpeg 1292w&#34; width=&#34;1292&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Edge Copilot acts more like an &amp;ldquo;AI information retrieval assistant,&amp;rdquo; quickly parsing commands and accurately describing webpage states, but it also does not perform actual clicks, limited to Q&amp;amp;A interactions.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;411px&#34; data-flex-grow=&#34;171&#34; height=&#34;748&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-e12a96276d.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-e12a96276d_hu_b4ee1769f6eeb89d.jpeg 800w, 
https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-e12a96276d.jpeg 1282w&#34; width=&#34;1282&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Comet can indeed click, input, and navigate pages, simulating manual purchasing, but it is slow and mechanical, grinding through each step until it reaches the final one: payment.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 16&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;390px&#34; data-flex-grow=&#34;162&#34; height=&#34;796&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-7338a1e6be.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-7338a1e6be_hu_dcf782b4c2f36ccf.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-7338a1e6be.jpeg 1296w&#34; width=&#34;1296&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;ChatGPT Atlas not only understands webpage content but can also simulate operations, track across pages, organize files, generate reports, and even execute automation scripts. 
It also has memory: even a day later, it could still recall what I had viewed.&lt;/p&gt;&#xA;&lt;p&gt;Cross-platform price comparison is another must-have task.&lt;/p&gt;&#xA;&lt;p&gt;Although both Comet and Atlas claim to execute complex tasks, ChatGPT Atlas&amp;rsquo;s advantage is obvious.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 17&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;432px&#34; data-flex-grow=&#34;180&#34; height=&#34;726&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-4c940e609c.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-4c940e609c_hu_3f3c252b1c1a24e0.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-4c940e609c.jpeg 1308w&#34; width=&#34;1308&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;My impression is that Comet is more like a smart AI search assistant, focused on information aggregation and light task execution: it rapidly consolidates webpages, academic sources, and videos into briefings or comparison results, but it can only perform single-step tasks.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 18&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;186px&#34; data-flex-grow=&#34;77&#34; height=&#34;1662&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-653e7652d4.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-653e7652d4_hu_c1a7806bc0369041.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-653e7652d4.jpeg 1294w&#34; width=&#34;1294&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;ChatGPT Atlas, on the other hand, is a true &amp;ldquo;execution-oriented browser 
agent,&amp;rdquo; acting on pages directly rather than merely summarizing them.&lt;/p&gt;&#xA;&lt;p&gt;This may also validate a point: in the AI era, the innovation threshold at the application layer is not high, and the real barrier lies in the model itself. Those backed by their own AI can indeed push forward smoothly.&lt;/p&gt;&#xA;&lt;h2 id=&#34;caution-pitfalls-of-ai-browsers&#34;&gt;Caution: Pitfalls of AI Browsers&#xA;&lt;/h2&gt;&lt;p&gt;Chrome&amp;rsquo;s extension ecosystem is already mature. Some may argue that installing a few extensions on Chrome is equivalent to using Atlas.&lt;/p&gt;&#xA;&lt;p&gt;To be honest, for simple tasks like article summarization and webpage translation, Chrome extensions are sufficient. Installing an extension can extract key points in seconds, and the experience is not much different from that of AI browsers. Once complex tasks are involved, however, extensions fall flat.&lt;/p&gt;&#xA;&lt;p&gt;The deeper difference lies in the understanding of the future internet. The current internet is designed for humans, with page layouts and interaction logic centered around human visual and clicking habits. But what if the primary users of the internet are AI and agents?&lt;/p&gt;&#xA;&lt;p&gt;The core of the browser is no longer browsing but execution. You don&amp;rsquo;t need to know where information is on which website; you just need to tell the AI what you want, and it will find, execute, and consolidate everything for you. 
In practice, however, these AI browsers take a more cautious approach: they remain directly compatible with Chrome extensions, making user migration seamless.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 19&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;392px&#34; data-flex-grow=&#34;163&#34; height=&#34;800&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-6c16191c79.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-6c16191c79_hu_8d2a7fc908969503.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-6c16191c79.jpeg 1308w&#34; width=&#34;1308&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Of course, while these AI browsers are built on Chromium, OpenAI is not merely &amp;ldquo;wrapping&amp;rdquo; it.&lt;/p&gt;&#xA;&lt;p&gt;According to their blog, they have redesigned the relationship between the browser and the underlying engine through their self-developed OWL (OpenAI’s Web Layer) architecture, reconstructing the interface using native frameworks like SwiftUI, AppKit, and Metal, achieving startup within seconds, higher concurrency, and a more secure environment for intelligent agents.&lt;/p&gt;&#xA;&lt;p&gt;My computer is an M2 MacBook Air, and in everyday use the browsers do not differ much in performance, speed, or stability. 
Additionally, all browsers support importing bookmarks and vertical tabs, which are basic operations.&lt;/p&gt;&#xA;&lt;p&gt;It is worth mentioning that AI browsers face a serious security threat known as &amp;ldquo;indirect prompt injection attacks.&amp;rdquo; Simply put, attackers hide malicious instructions in web pages, emails, and other content, and when large language models analyze this content, they may mistakenly treat the hidden instructions as genuine user commands.&lt;/p&gt;&#xA;&lt;p&gt;According to research by Brave, several products, including Perplexity Comet, Fellou browser, and OpenAI&amp;rsquo;s newly released ChatGPT Atlas, have vulnerabilities.&lt;/p&gt;&#xA;&lt;p&gt;These attacks can have serious consequences: skewing the shopping judgments of AI agents, stealing private data, sensitive email information, and account credentials, or even injecting malicious code or software.&lt;/p&gt;&#xA;&lt;p&gt;OpenAI&amp;rsquo;s Chief Information Security Officer, Dane Stuckey, publicly acknowledged this week that prompt injection attacks are a serious threat, but also admitted that it is a &amp;ldquo;frontier issue&amp;rdquo; with no clear solution at present.&lt;/p&gt;&#xA;&lt;p&gt;As a result, OpenAI has had to implement multiple measures, including establishing a rapid response system, conducting red team testing, launching a logged-out mode, and introducing a monitored mode that requires users to watch agent behavior in real time when it operates on sensitive websites.&lt;/p&gt;&#xA;&lt;p&gt;The biggest challenge lies in the nature of AI agents themselves.&lt;/p&gt;&#xA;&lt;p&gt;They access suspicious websites and click dangerous links like humans but lack common sense and safety intuition, making them easily misled or hijacked by carefully crafted instructions. 
More troubling, these attacks are highly covert: they can be hidden in images, screenshots, forms, and emails, or even as white text on a white background, making them hard to defend against.&lt;/p&gt;&#xA;&lt;h2 id=&#34;which-browser-should-you-choose&#34;&gt;Which Browser Should You Choose?&#xA;&lt;/h2&gt;&lt;p&gt;In terms of cost, the Atlas browser is free, but the core &amp;ldquo;Agent mode&amp;rdquo; is only available to ChatGPT Plus/Pro users, effectively using the core feature to lock in subscribers. Comet follows a freemium model: basic features are free, but the number of agent tasks is limited.&lt;/p&gt;&#xA;&lt;p&gt;Dia uses a subscription model (20 USD per month for AI features), the purest of the bunch: it is currently niche and does not rely on advertising revenue, though its future is uncertain after the acquisition by Atlassian. Chrome and Edge subsidize their model costs with their own advertising and cloud businesses, so they can afford to be more generous.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 20&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;239px&#34; data-flex-grow=&#34;99&#34; height=&#34;1298&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-db7729b596.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-db7729b596_hu_ea6548d8a5079d05.jpeg 800w, https://acousticinfoplus.com/posts/note-96e9fc8c5b/img-db7729b596.jpeg 1296w&#34; width=&#34;1296&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;If you are already a heavy ChatGPT user or a Plus/Pro subscriber, Atlas will be very convenient, with almost zero learning curve. Its execution and memory capabilities are also genuinely stronger than the other browsers&amp;rsquo;.&lt;/p&gt;&#xA;&lt;p&gt;If you need rigorous source tracing for research, data gathering, or report writing, Perplexity Comet is the most reliable. 
Although its execution capabilities are not as flexible as Atlas&amp;rsquo;s, it won&amp;rsquo;t leave you second-guessing because of unclear information sources.&lt;/p&gt;&#xA;&lt;p&gt;If you want to try AI without the hassle, Chrome and Edge are sufficient: they are compatible with Chrome extensions, free, and cheap to migrate to. Their AI features are not as aggressive, but they are adequate for most users.&lt;/p&gt;&#xA;&lt;p&gt;If you value simplicity and focus, and don&amp;rsquo;t mind spending 20 USD per month, Dia is a good choice, although the future of a niche product is always somewhat uncertain.&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>GPT-4.1 Now Available in ChatGPT for Plus, Pro, and Team Users</title>
            <link>https://acousticinfoplus.com/posts/note-0158c67f41/</link>
            <pubDate>Thu, 15 May 2025 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-0158c67f41/</guid>
            <description>&lt;h2 id=&#34;gpt-41-now-available-in-chatgpt&#34;&gt;GPT-4.1 Now Available in ChatGPT&#xA;&lt;/h2&gt;&lt;p&gt;OpenAI has officially announced that GPT-4.1 is now directly available in ChatGPT. This model excels at coding tasks and following instructions, making it an excellent alternative to o3 and o4-mini.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;375px&#34; data-flex-grow=&#34;156&#34; height=&#34;649&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-2ee84bfe5e.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-2ee84bfe5e_hu_172f009210c676d8.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-2ee84bfe5e.jpeg 1016w&#34; width=&#34;1016&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;A month ago, GPT-4.1 was only accessible to developers via API. Now, Plus, Pro, and Team users can access GPT-4.1 through the model selector&amp;rsquo;s &amp;ldquo;More Models&amp;rdquo; dropdown menu. 
Enterprise and educational users will gain access in the coming weeks.&lt;/p&gt;&#xA;&lt;p&gt;OpenAI also plans to introduce GPT-4.1 mini in ChatGPT to replace GPT-4o mini for all users.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;295px&#34; data-flex-grow=&#34;123&#34; height=&#34;876&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-d23bab4a83.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-d23bab4a83_hu_945bbe30b2f1b1af.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-d23bab4a83.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;With its long context capabilities, users can now input entire code segments into GPT-4.1 for analysis.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;240px&#34; data-flex-grow=&#34;100&#34; height=&#34;640&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-adda8710ba.jpeg&#34; width=&#34;640&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Both GPT-4.1 and GPT-4.1 mini have passed OpenAI&amp;rsquo;s latest safety assessments, ranking highly in two evaluations:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;not_unsafe&lt;/strong&gt;: Checks if the model produces unsafe outputs according to OpenAI policies.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;not_overrefuse&lt;/strong&gt;: Evaluates if the model follows benign requests.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;GPT-4.1 also performed well in hallucination assessments and instruction-following, but showed weaker results in jailbreak evaluations.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; 
class=&#34;gallery-image&#34; data-flex-basis=&#34;385px&#34; data-flex-grow=&#34;160&#34; height=&#34;673&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-9651c7a660.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-9651c7a660_hu_762763daeee2ce8f.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-9651c7a660.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;382px&#34; data-flex-grow=&#34;159&#34; height=&#34;677&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-86afc3ce67.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-86afc3ce67_hu_b83bdb5d747fff30.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-86afc3ce67.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;392px&#34; data-flex-grow=&#34;163&#34; height=&#34;661&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-030cd60b53.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-030cd60b53_hu_aea87c33fcf3db0f.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-030cd60b53.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;364px&#34; data-flex-grow=&#34;151&#34; height=&#34;711&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-619bc35117.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-619bc35117_hu_ff45d1387791cc27.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-619bc35117.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;364px&#34; data-flex-grow=&#34;151&#34; height=&#34;711&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-cd61647397.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-cd61647397_hu_9b973e561a742505.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-cd61647397.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;gpt-41-better-than-gpt-45&#34;&gt;GPT-4.1: Better than GPT-4.5?&#xA;&lt;/h2&gt;&lt;p&gt;The release of GPT-4.1 responds to user demand. 
Users had previously expressed disappointment that GPT-4.1 was not available in ChatGPT despite being their favorite OpenAI model.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;517px&#34; data-flex-grow=&#34;215&#34; height=&#34;413&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-3bf3549cb3.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-3bf3549cb3_hu_6a1a0d567078bd94.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-3bf3549cb3.jpeg 890w&#34; width=&#34;890&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Many developers have stated that aside from the early version Quasar Alpha, GPT-4.1 is the best coding model they have tested.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;458px&#34; data-flex-grow=&#34;191&#34; height=&#34;386&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-4d36d64903.jpeg&#34; width=&#34;738&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;OpenAI recently launched a new series of models for developers: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. 
All these models feature a massive context window of up to 1 million tokens, significantly surpassing GPT-4o and GPT-4o mini in core capabilities such as code generation and instruction following, with knowledge updated to June 2024.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 12&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;273px&#34; data-flex-grow=&#34;113&#34; height=&#34;898&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-6819754031.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-6819754031_hu_e484daebab82ed43.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-6819754031.jpeg 1022w&#34; width=&#34;1022&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;testing-large-code-tasks-successfully-completed&#34;&gt;Testing: Large Code Tasks Successfully Completed&#xA;&lt;/h2&gt;&lt;p&gt;With many ChatGPT users now able to use GPT-4.1, numerous tests have emerged online. 
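&lt;p&gt;For a rough sense of scale on that 1-million-token window, here is a back-of-the-envelope sketch. The 4-characters-per-token ratio and the output reserve are assumptions, not OpenAI specifications; an exact count requires a real tokenizer such as tiktoken.&lt;/p&gt;

```python
# Back-of-the-envelope feasibility check, not OpenAI's tokenizer: estimate
# whether a body of text plausibly fits in a 1,000,000-token context window.
# The 4-characters-per-token ratio is a rough assumption for English text
# and code; use a real tokenizer (e.g. tiktoken) for exact counts.
CONTEXT_WINDOW = 1_000_000
CHARS_PER_TOKEN = 4  # assumed coarse average

def estimated_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(text: str, reserve_for_output: int = 32_000) -> bool:
    """True if the prompt likely fits while leaving room for the reply."""
    return CONTEXT_WINDOW - reserve_for_output >= estimated_tokens(text)

# A roughly 3 MB source tree (about 750k estimated tokens) should fit:
print(fits_in_context("x" * 3_000_000))
```

&lt;p&gt;By this rough estimate, several megabytes of source text fit in one prompt, which is why whole-repository analysis becomes practical at this window size.&lt;/p&gt;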
For instance, Wharton School professor Ethan Mollick tested GPT-4.1 with a challenging coding prompt.&lt;/p&gt;&#xA;&lt;p&gt;&amp;ldquo;Please create a piece of code I can directly paste into p5.js that will astonish me as if it were the control panel of a futuristic starship.&amp;rdquo;&lt;/p&gt;&#xA;&lt;p&gt;GPT-4.1 performed exceptionally well.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 13&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;511px&#34; data-flex-grow=&#34;213&#34; height=&#34;506&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-276b68d194.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-276b68d194_hu_5014e103dab2a481.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-276b68d194.jpeg 1078w&#34; width=&#34;1078&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Another developer found GPT-4.1 surprisingly effective while handling a large coding task that the default model could not process at all. GPT-4.1 not only completed the task faster but also cleaned up unused code from the entire file.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 14&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;383px&#34; data-flex-grow=&#34;159&#34; height=&#34;640&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-16dd96cd44.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-16dd96cd44_hu_5a0d6ca6cd45e58f.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-16dd96cd44.jpeg 1023w&#34; width=&#34;1023&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Tests revealed that GPT-4.1 achieved new heights in code generation speed. 
For example, it generated a blog homepage in just a few seconds.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 15&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;336px&#34; data-flex-grow=&#34;140&#34; height=&#34;570&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-a92d9e197d.jpeg&#34; width=&#34;800&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;When tasked with creating an animation of Earth traveling to Mars using Python, GPT-4.1 delivered the output almost instantaneously.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 16&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;337px&#34; data-flex-grow=&#34;140&#34; height=&#34;712&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-6f78a1a0ee.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-6f78a1a0ee_hu_606c59696665e878.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-6f78a1a0ee.jpeg 1000w&#34; width=&#34;1000&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The results were promising, showcasing a fundamental improvement in GPT-4.1&amp;rsquo;s speed.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 17&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;340px&#34; data-flex-grow=&#34;141&#34; height=&#34;761&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-ba93731976.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-ba93731976_hu_34602219e5c8f691.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-ba93731976.jpeg 1079w&#34; width=&#34;1079&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In another 
challenge, GPT-4.1 was asked to explain quantum entanglement through animation.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 18&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;336px&#34; data-flex-grow=&#34;140&#34; height=&#34;769&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-1e68d43611.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-1e68d43611_hu_f4774128b37655bd.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-1e68d43611.jpeg 1079w&#34; width=&#34;1079&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Preliminary results indicated that GPT-4.1 grasped the concept of quantum entanglement well.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 19&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;462px&#34; data-flex-grow=&#34;192&#34; height=&#34;560&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-6ec231252a.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-6ec231252a_hu_5a4b792ecdf9eeae.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-6ec231252a.jpeg 1078w&#34; width=&#34;1078&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;For reasoning tasks, GPT-4.1 also excelled. 
For example, in a multi-step age calculation problem, its logic was very rigorous.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 20&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;336px&#34; data-flex-grow=&#34;140&#34; height=&#34;770&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-3d00775e86.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-3d00775e86_hu_8d64b8dcb6ad500.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-3d00775e86.jpeg 1079w&#34; width=&#34;1079&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;When faced with lateral thinking or riddles, GPT-4.1 quickly completed the reasoning, although the answers were quite amusing.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 21&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;336px&#34; data-flex-grow=&#34;140&#34; height=&#34;770&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-178599dfb2.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-178599dfb2_hu_86dbe267215935b5.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-178599dfb2.jpeg 1079w&#34; width=&#34;1079&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;user-disappointment-no-1m-context-version&#34;&gt;User Disappointment: No 1M Context Version&#xA;&lt;/h2&gt;&lt;p&gt;However, after trying GPT-4.1, many users expressed disappointment. 
Although OpenAI brought GPT-4.1 to ChatGPT, it did not bring along the 1-million-token context window offered in the API version.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 22&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1098px&#34; data-flex-grow=&#34;457&#34; height=&#34;236&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-2a49da2a4a.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-2a49da2a4a_hu_bb96f29b6a0122f0.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-2a49da2a4a.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 23&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;876px&#34; data-flex-grow=&#34;365&#34; height=&#34;173&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-a73acd5516.jpeg&#34; width=&#34;632&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Users had hoped to use GPT-4.1 in ChatGPT for its long context window, but now they can only look forward to GPT-5 providing such a feature.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 24&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;396px&#34; data-flex-grow=&#34;165&#34; height=&#34;644&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-7329c6e7ed.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-7329c6e7ed_hu_a2317b4262af5fb4.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-7329c6e7ed.jpeg 1065w&#34; width=&#34;1065&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Indeed, many have noted that the maximum context length for GPT-4.1 in ChatGPT (Pro) seems to be only 128k tokens, 
far from the 1 million tokens available in the API. This indicates that OpenAI has not lifted the context limits for GPT-4.1 in ChatGPT.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 26&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;575px&#34; data-flex-grow=&#34;239&#34; height=&#34;430&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-d807a7f8eb.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-d807a7f8eb_hu_8ff210d53400e35e.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-d807a7f8eb.jpeg 1031w&#34; width=&#34;1031&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Overall, this has left many feeling disappointed. It seems they will have to turn to Gemini instead.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 27&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;528px&#34; data-flex-grow=&#34;220&#34; height=&#34;482&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-df26a31faa.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-df26a31faa_hu_b5c7843089b9528d.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-df26a31faa.jpeg 1062w&#34; width=&#34;1062&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Some users stumbled on an ironic &amp;ldquo;highlight&amp;rdquo; when re-running prompts from the GPT-4.1 live demo: they failed on the ChatGPT web version but ran successfully in the API Playground.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 28&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;362px&#34; data-flex-grow=&#34;151&#34; height=&#34;707&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; 
src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-95b0dbf730.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-95b0dbf730_hu_aed6f7b7e4ec8285.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-95b0dbf730.jpeg 1069w&#34; width=&#34;1069&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Others mentioned they had just programmed an AI assistant using GPT-4.1, which is now available in ChatGPT.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 29&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;926px&#34; data-flex-grow=&#34;386&#34; height=&#34;273&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-f85c57ab2d.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-f85c57ab2d_hu_db66a284b0f69333.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-f85c57ab2d.jpeg 1054w&#34; width=&#34;1054&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;However, they still prefer their assistant due to a better user interface than ChatGPT.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 30&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;486px&#34; data-flex-grow=&#34;202&#34; height=&#34;533&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-d4ef1c7ab4.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-d4ef1c7ab4_hu_a6d831b607194312.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-d4ef1c7ab4.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;img alt=&#34;Image 31&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;556px&#34; data-flex-grow=&#34;231&#34; height=&#34;466&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, 
(max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-4a4d7c305c.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-4a4d7c305c_hu_fb4b7346e6cba532.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-4a4d7c305c.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;OpenAI has previously released a prompt guide for GPT-4.1, summarizing important prompt techniques derived from internal testing. Interested users can refer to this guide for practical usage.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 32&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;351px&#34; data-flex-grow=&#34;146&#34; height=&#34;738&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-f3a2b67603.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-0158c67f41/img-f3a2b67603_hu_a0a3718aec0cc40.jpeg 800w, https://acousticinfoplus.com/posts/note-0158c67f41/img-f3a2b67603.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;</description>
        </item><item>
            <title>Cursor AI Miscommunication Sparks User Outrage</title>
            <link>https://acousticinfoplus.com/posts/note-7121eb73a7/</link>
            <pubDate>Mon, 21 Apr 2025 00:00:00 +0000</pubDate>
            <guid>https://acousticinfoplus.com/posts/note-7121eb73a7/</guid>
<description>&lt;p&gt;&lt;img alt=&#34;Image 1&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1920px&#34; data-flex-grow=&#34;800&#34; height=&#34;80&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-d2d140f57d.jpeg&#34; width=&#34;640&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;introduction&#34;&gt;Introduction&#xA;&lt;/h2&gt;&lt;p&gt;Some AI systems have become so advanced that they can write code for programmers and even lecture them on why they should do the work themselves. Recently, an AI support bot for the coding tool Cursor caused a stir by issuing a false policy statement that misled users.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 2&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;326px&#34; data-flex-grow=&#34;135&#34; height=&#34;745&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-64ca4b8bb6.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-64ca4b8bb6_hu_b859b45efe968c91.jpeg 800w, https://acousticinfoplus.com/posts/note-7121eb73a7/img-64ca4b8bb6.jpeg 1012w&#34; width=&#34;1012&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-incident&#34;&gt;The Incident&#xA;&lt;/h2&gt;&lt;p&gt;Last Monday, a programmer named BrokenToasterOven encountered a strange issue while using Cursor. He noticed that logging into the tool on one device would immediately log him out on any other device. 
Initially unsure if he was the only one facing this problem, he posted on Reddit, stating that the user experience had significantly regressed.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 3&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;2100px&#34; data-flex-grow=&#34;875&#34; height=&#34;80&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-60912963e9.jpeg&#34; width=&#34;700&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;To clarify the situation, he emailed Cursor&amp;rsquo;s customer service. He received a response from a representative named Sam, stating that Cursor&amp;rsquo;s subscription was designed for use on a single device only, and users needed separate subscriptions for multiple devices. This reply ignited outrage among programmers, as multi-device support is essential for their workflows.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 4&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1540px&#34; data-flex-grow=&#34;641&#34; height=&#34;98&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-be4be0c0d3.jpeg&#34; width=&#34;629&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;user-reactions&#34;&gt;User Reactions&#xA;&lt;/h2&gt;&lt;p&gt;The news prompted many developers to cancel their subscriptions. BrokenToasterOven himself commented that he switched to another service after spending approximately $700 weekly on Cursor. 
Other users echoed similar sentiments, expressing frustration over the lack of multi-device support and criticizing the decision as foolish.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 5&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;282px&#34; data-flex-grow=&#34;117&#34; height=&#34;604&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-a36609a2ad.jpeg&#34; width=&#34;710&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;cursors-response&#34;&gt;Cursor&amp;rsquo;s Response&#xA;&lt;/h2&gt;&lt;p&gt;As discussions intensified, Cursor&amp;rsquo;s internal team took notice. A developer from Cursor commented that the policy response seemed incorrect and promised to investigate. Three hours later, the Cursor development team clarified that there was no such policy and that users could indeed use Cursor on multiple devices. They apologized for the confusion caused by the AI-generated response and assured users that they were looking into the issue.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 6&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;581px&#34; data-flex-grow=&#34;242&#34; height=&#34;446&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-41277e3675.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-41277e3675_hu_48d23af65be5dd64.jpeg 800w, https://acousticinfoplus.com/posts/note-7121eb73a7/img-41277e3675.jpeg 1080w&#34; width=&#34;1080&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;Michael Truell, Cursor&amp;rsquo;s co-founder, also apologized on Hacker News, acknowledging the significant issue and outlining steps being taken to prevent future occurrences. 
They promised to mark all AI-generated replies clearly and issued a full refund to the affected user.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 7&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;654px&#34; data-flex-grow=&#34;272&#34; height=&#34;324&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-6395c3f20e.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-6395c3f20e_hu_cc6a36455b428cfd.jpeg 800w, https://acousticinfoplus.com/posts/note-7121eb73a7/img-6395c3f20e.jpeg 883w&#34; width=&#34;883&#34;&gt;&lt;img alt=&#34;Image 8&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;2100px&#34; data-flex-grow=&#34;875&#34; height=&#34;80&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-1bb82dc4b6.jpeg&#34; width=&#34;700&#34;&gt;&lt;/p&gt;&#xA;&lt;h2 id=&#34;broader-implications&#34;&gt;Broader Implications&#xA;&lt;/h2&gt;&lt;p&gt;This incident highlights a recurring problem with AI systems: when they provide incorrect information, it can lead to serious consequences. 
A similar incident occurred in February 2024, when a user was misinformed by an airline&amp;rsquo;s chatbot regarding bereavement fare policies, leading to a legal dispute.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 9&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;2235px&#34; data-flex-grow=&#34;931&#34; height=&#34;83&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-8d59eea7fe.jpeg&#34; width=&#34;773&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;The takeaway is clear: while AI can enhance efficiency, businesses must ensure that users are aware they are interacting with AI and not a human. Clear communication and oversight are essential to prevent misinformation and potential legal issues.&lt;/p&gt;&#xA;&lt;p&gt;&lt;img alt=&#34;Image 10&#34; class=&#34;gallery-image&#34; data-flex-basis=&#34;1263px&#34; data-flex-grow=&#34;526&#34; height=&#34;163&#34; loading=&#34;lazy&#34; sizes=&#34;(max-width: 767px) calc(100vw - 30px), (max-width: 1023px) 700px, (max-width: 1279px) 950px, 1232px&#34; src=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-d2332d59f0.jpeg&#34; srcset=&#34;https://acousticinfoplus.com/posts/note-7121eb73a7/img-d2332d59f0_hu_91ca45436e0d2253.jpeg 800w, https://acousticinfoplus.com/posts/note-7121eb73a7/img-d2332d59f0.jpeg 858w&#34; width=&#34;858&#34;&gt;&lt;/p&gt;&#xA;&lt;p&gt;In conclusion, this incident serves as a stark reminder that AI, despite its capabilities, should not replace human oversight in critical areas of communication and customer service.&lt;/p&gt;&#xA;</description>
        </item></channel>
</rss>
