Sam Altman attacked again, this time by gunfire

By: rootdata | 2026/04/13 15:10:03

Author: BAI Capital

Sam Altman has been attacked again.

If the Molotov cocktail incident two days ago could be read as an extreme, isolated, personal attack, the second incident, which has just occurred, is of a completely different nature.

In the early hours of Sunday local time, a car stopped outside OpenAI CEO Sam Altman's residence and fired a shot in the direction of the house. The San Francisco Police Department subsequently arrested two suspects, 25-year-old Amanda Tom and 23-year-old Muhamad Tarik Hussein, who are currently being held for negligent discharge of a firearm.

Surveillance footage of the suspects outside Sam Altman's home

This is the second attack on Sam Altman's residence in San Francisco since last Friday. Neither incident has resulted in substantial injuries, but they have pushed an issue that was previously confined to public opinion to the brink of real violence.

The reason Sam Altman has become a focal point for such emotions is not just because he is the head of OpenAI, but because what he represents has long transcended the identity of a tech company CEO. He is not only the leader of cutting-edge AI products but also a connection point between computing power, capital, policy, public opinion, and the state apparatus.

The true significance of these two attacks is not simply that the public is beginning to oppose technological progress, but that an increasing number of people are viewing AI companies as a quasi-political force. In the past, discussions surrounding tech companies focused more on product experience, monopolies, privacy, and platform governance; now, OpenAI's reach touches on employment, tax systems, wealth redistribution, national security, infrastructure, geopolitics, and even the use of models in warfare. In other words, Altman is increasingly perceived not as an ordinary business figure but as someone straddling the roles of entrepreneur, policy player, and quasi-public power. Once perceived this way, he can easily transform from a business figure into a vessel for political sentiment.

This is precisely where the danger lies. The public's fear of AI is not entirely unfounded; even Altman himself acknowledges that this fear is reasonable. After the first attack, he wrote that people's fears and anxieties about AI are justified, stating, "We are experiencing perhaps the largest societal change in a long time, maybe ever."

As it happens, OpenAI released a policy document last week discussing a new social contract for the superintelligent era, centered on humanistic principles and proposing ideas such as a public wealth fund, a robot tax, and a four-day workweek.

Not long ago, OpenAI unexpectedly acquired the Silicon Valley tech talk show TBPN and announced plans to establish an office in Washington, creating a space called OpenAI Workshop for non-profit organizations and policymakers to understand and discuss the company's technology. OpenAI's competitor Anthropic also announced the establishment of its own think tank, the Anthropic Institute, focusing on how AI growth impacts society.

As the impacts of AI become more concrete, calls for increased scrutiny of tech giants are rising. The industry has clearly realized that societal discontent is spreading, and while acknowledging the existence of this sentiment, it is attempting to redefine the debate and rewrite external understanding of the entire industry.

Last month, Sam Altman mentioned the public perception issues faced by AI companies at a meeting held by BlackRock in Washington. He noted that there is a lot of headwind at the moment. AI is not popular in the U.S.; rising electricity prices are blamed on data centers, and almost all companies that have laid off workers attribute the responsibility to AI, regardless of whether AI is actually the cause.

Polls also confirm that public distrust of AI is deepening. This distrust is directed not only at changes in the labor market but at AI as a social force in itself. A Pew Research Center survey released last year found that only 16% of Americans believe AI will help people be more creative, and only 5% believe it will help people build more meaningful relationships. An NBC News poll last month put the share of voters with a positive view of AI at just 26%, its net negative rating 2 percentage points worse even than that of U.S. Immigration and Customs Enforcement.

It is difficult to explain why people are so averse to AI in just one sentence. It may be because the industry initially packaged its technology as capable of destroying the world, or it could be due to economic anxieties surrounding job displacement, or a broader, long-standing resentment towards large tech companies. Faced with an increasing number of movements against data centers, proposals to restrict AI, and evident public disdain, the entire industry has begun to feel uneasy.

This unease has first led to a wave of public relations actions. Writing policy documents, discussing new social contracts, proposing public wealth funds, robot taxes, and four-day workweeks; acquiring more friendly content channels, establishing offices and communication spaces aimed at Washington; and forming research institutions to shift discussions from model performance to employment, welfare, education, democracy, and national competitiveness.

The problem lies precisely here. If a company only releases products, the public's judgment of it mostly revolves around usability, cost, and privacy concerns; but once it begins to discuss how to rewrite labor systems, how to distribute technological benefits, and how to arrange social safety nets in the superintelligent era, it is no longer just a market entity but is reaching into the public domain.

Moreover, this new narrative carries a stark contrast. On one side are phrases like human-centered, inclusive dividends, and shared benefits; on the other side are increasingly towering data centers, increasingly concentrated computing power and capital, increasingly complex relationships between politics and business, and increasingly sophisticated policy lobbying. What people feel is no longer just the uncertainty brought by technological progress, but a more difficult-to-articulate sense of tension: those who claim to design buffer mechanisms for society are often the ones most capable of accelerating the impact.

This is also why the controversy surrounding Sam Altman is particularly sensitive. He is at once a hero, a prophet, a speculator, and a source of risk, and he has now become a target of attacks. What is most unsettling about him may not be mere ambition, but his ability to articulate points that sound valid in every context. He talks about growth and scale to investors, responsibility and regulation to policymakers, risks and bottom lines to safety advocates, and how technology will benefit everyone to the public. Each statement has its own logic and reality; but when these statements accumulate, and even pull against each other in practice, the outside world can hardly avoid a deeper question: which layer is the most authentic?

And this doubt is not new. Internally, there have been repeated concerns that the initial commitments regarding non-profit missions, safety priorities, and avoiding power imbalances are being gradually pushed aside by product pressures, revenue targets, and expansion impulses. The safety team, once prominently showcased, now receives far fewer resources than promised; principles originally meant to constrain the company often yield to more pragmatic goals when they are truly needed. The starting point may have been to create an exception, but the endpoint increasingly resembles those large companies that, in the name of changing the world, ultimately push the world further towards centralization.

Therefore, the current dissatisfaction surrounding OpenAI cannot simply be understood as technological pessimism, nor is it merely about AI taking human jobs. It resembles the result of several overlapping emotions: anxiety over rewritten personal destinies, resentment towards highly concentrated power, disappointment that regulation cannot keep pace with reality, and vigilance against large companies demanding understanding while seeking greater discretion. These emotions were originally dispersed, but when society cannot find sufficiently clear institutional outlets, they instinctively seek the most vivid, concrete, and easily identifiable target to bear them.

Thus, an abstract systemic issue ultimately falls on a specific individual. In a highly mediated era, complex forces tend to coalesce into some form of personified symbol. Whoever resembles the spokesperson for the future most closely becomes the easiest target for emotions. This mechanism itself is not new; it is just that today it has first fully landed on the AI industry.

Exterior view of Sam Altman's mansion

Therefore, the most urgent answer cannot simply be to raise walls, increase security, or isolate risks outside a certain residence. Today it is Sam Altman; tomorrow it may not be him, and the problem will not disappear automatically.

What truly needs to be addressed are clearer boundaries, more credible external oversight, more honest disclosures of interests, and governance mechanisms that can penetrate corporate narratives. Otherwise, technology will continue to advance, capital will continue to increase, and policy discussions will continue to grow grander, but societal doubts will only accumulate, not dissipate. What people truly fear has never been just how powerful a particular model is, but rather that such a force is rapidly shaping reality without a corresponding structure of checks and balances appearing alongside it.

Of course, any violence must be unequivocally rejected. Dissatisfaction with a company, questioning a founder, or concerns about AI's direction cannot cross this line. The real pressure test of the AI era is no longer just the capabilities of models, but whether society can still establish sufficiently solid trust and constraints to embrace this change.


Before using Musk's "Western WeChat" X Chat, you need to understand these three questions

X Chat will be available for download on the App Store this Friday. Media coverage has already run through the feature list: self-destructing messages, screenshot prevention, 481-person group chats, Grok integration, and registration without a phone number, positioning it as the "Western WeChat." But three questions have hardly been addressed in any of the reports.


A sentence on X's official help page still stands there: "If malicious insiders or X itself cause encrypted conversations to be exposed through legal processes, both the sender and receiver will be completely unaware."


Question 1: Is this encryption the same as Signal's encryption?


No. The difference lies in where the keys are stored.


In Signal's end-to-end encryption, the keys never leave your device; neither X, a court, nor any other external party holds them. Signal's servers have nothing with which to decrypt your messages; even under subpoena, they could only produce registration timestamps and last-connection times, as past subpoena records show.


X Chat uses the Juicebox protocol. This scheme splits the key into three shards, each stored on a different server operated by X. When the key is recovered with a PIN code, the system retrieves the three shards from X's servers and recombines them. No matter how complex the PIN is, X, not the user, is the actual custodian of the key.
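To make the shard mechanics concrete, here is a toy Python sketch of the general idea of splitting a key so that every shard is needed to rebuild it. This is an illustration only, not the actual Juicebox protocol (which hardens the PIN and spreads shards across independently operated "realms" rather than using plain XOR); the helper names `split_key` and `recombine` are invented for this example:

```python
import secrets

def split_key(key: bytes, n: int = 3) -> list[bytes]:
    """Split `key` into n XOR shards; all n are required to recombine."""
    shards = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shards:
        last = bytes(a ^ b for a, b in zip(last, s))
    shards.append(last)
    return shards

def recombine(shards: list[bytes]) -> bytes:
    """XOR all shards back together to recover the key."""
    key = shards[0]
    for s in shards[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = secrets.token_bytes(32)
shards = split_key(key)

# Whoever operates all three storage locations can reassemble the key;
# any subset of shards reveals nothing about it on its own.
assert recombine(shards) == key
```

The point of the sketch is purely about custody: when a single party stores every shard, as X does with its three servers, that party can reassemble the key without the user's involvement.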


This is the technical background of the sentence on the help page: because the key sits on X's servers, X has the ability to respond to legal process without the user's knowledge. Signal lacks this ability, not as a matter of policy, but because it simply does not hold the key.


The following illustration compares the security mechanisms of Signal, WhatsApp, Telegram, and X Chat along six dimensions. X Chat is the only one of the four where the platform holds the key and the only one without Forward Secrecy.


The significance of Forward Secrecy is that even if a key is compromised at a certain point in time, historical messages cannot be decrypted because each message has a unique key. Signal's Double Ratchet protocol automatically updates the key after each message, a mechanism lacking in X Chat.
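The key-update idea behind forward secrecy can be sketched in a few lines. This is a simplified symmetric hash ratchet only, not Signal's full Double Ratchet (which additionally mixes fresh Diffie-Hellman output into each step); the function name and constants are illustrative:

```python
import hashlib

def kdf(chain_key: bytes, label: bytes) -> bytes:
    # One-way step; SHA-256 stands in for a proper HKDF here.
    return hashlib.sha256(chain_key + label).digest()

chain_key = bytes(32)  # illustrative initial shared secret
message_keys = []
for _ in range(3):
    message_keys.append(kdf(chain_key, b"msg"))  # unique per-message key
    chain_key = kdf(chain_key, b"chain")         # advance the ratchet one-way

# Each message is encrypted under its own key; because the hash step
# cannot be inverted, compromising today's chain_key does not expose
# the message keys derived before it.
assert len(set(message_keys)) == 3
```

Without a ratchet of this kind, a single compromised key unlocks the entire message history, which is the exposure the text describes for X Chat.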


After analyzing the X Chat architecture in June 2025, Johns Hopkins University cryptography professor Matthew Green commented, "If we judge XChat as an end-to-end encryption scheme, this seems like a pretty game-over type of vulnerability." He later added, "I would not trust this any more than I trust current unencrypted DMs."


Between a September 2025 TechCrunch report and the April 2026 launch, this architecture did not change.


In a February 9, 2026 tweet, Musk pledged that X Chat would undergo rigorous security testing before launch and that all of its code would be open-sourced.



As of the April 17 launch date, no independent third-party audit has been completed, and there is no official code repository on GitHub. Meanwhile, the App Store privacy label shows X Chat collecting five or more categories of data, including location, contact info, and search history, directly contradicting the marketing claim of "No Ads, No Trackers."


Question 2: Does Grok know what you're messaging in private?


Not continuous monitoring, but a clear access point.


For every message in X Chat, users can long-press and select "Ask Grok." Tapping that button delivers the message to Grok in plaintext; at that point it passes from encrypted to unencrypted.


This design is not a vulnerability but a feature. However, X Chat's privacy policy does not state whether this plaintext data will be used for Grok's model training or if Grok will store this conversation content. By actively clicking "Ask Grok," users are voluntarily removing the encryption protection of that message.


There is also a structural issue: How quickly will this button shift from an "optional feature" to a "default habit"? The higher the quality of Grok's replies, the more frequently users will rely on it, leading to an increase in the proportion of messages flowing out of encryption protection. The actual encryption strength of X Chat, in the long run, depends not only on the design of the Juicebox protocol but also on the frequency of user clicks on "Ask Grok."


Question 3: Why is there no Android version?


X Chat's initial release only supports iOS, with the Android version simply stating "coming soon" without a timeline.


In the global smartphone market, Android holds about 73%, while iOS holds about 27% (IDC/Statista, 2025). Of WhatsApp's 3.14 billion monthly active users, 73% are on Android (according to Demand Sage). In India, WhatsApp covers 854 million users, with over 95% Android penetration. In Brazil, there are 148 million users, with 81% on Android, and in Indonesia, there are 112 million users, with 87% on Android.



WhatsApp's dominance in the global communication market is built on Android. Signal, with a monthly active user base of around 85 million, also relies mainly on privacy-conscious users in Android-dominant countries.


X Chat has sidestepped this battlefield, and there are two possible readings. One is technical debt: X Chat is built with Rust, cross-platform support is not trivial, and prioritizing iOS may be an engineering constraint. The other is strategic choice: iOS holds nearly 55% of the U.S. market and X's core user base is in the U.S., so prioritizing iOS means focusing on that base rather than competing head-on with WhatsApp in Android-dominated emerging markets.


These two interpretations are not mutually exclusive, leading to the same result: X Chat's debut saw it willingly forfeit 73% of the global smartphone user base.


Elon Musk's "Super App"


Some have described it this way: X Chat, together with X Money and Grok, forms a trifecta creating a closed-loop data system parallel to the existing infrastructure, similar in concept to the WeChat ecosystem. The assessment is not new, but with X Chat's launch it is worth revisiting the schematic.



X Chat generates communication metadata: who is talking to whom, for how long, and how frequently. This data flows into X's identity system. Part of the message content passes through the Ask Grok feature and enters Grok's processing chain. Financial transactions are handled by X Money, which completed external public testing in March and opened to the public in April, enabling fiat peer-to-peer transfers via Visa Direct; a senior Fireblocks executive confirmed plans for cryptocurrency payments to go live by the end of the year, and X Money currently holds money transmitter licenses in over 40 U.S. states.


Every WeChat feature operates within China's regulatory framework. Musk's system operates within Western regulatory frameworks, but he also serves as the head of the Department of Government Efficiency (DOGE). This is not a WeChat replica; it is a reenactment of the same logic under different political conditions.


The difference is that WeChat has never explicitly claimed to be "end-to-end encrypted" on its main interface, whereas X Chat does. "End-to-end encryption" in user perception means that no one, not even the platform, can see your messages. X Chat's architectural design does not meet this user expectation, but it uses this term.


X Chat consolidates the three data lines of "who this person is, who they are talking to, and where their money comes from and goes to" in one company's hands.


That sentence on the help page was never just a technical note.

