Tag: Anthropic

  • Benchmarks won over loyalty: Microsoft bets on Anthropic. A blow for OpenAI

    Microsoft’s choice of the Claude Mythos model as the foundation for its new software security architecture sets a significant precedent in the Redmond-based technology giant’s strategy. While at first glance the decision may appear to be a mere operational adjustment, it in fact reveals deeper market shifts in the generative AI sector and changing priorities in digital risk management. Looking at the facts of the Anthropic model’s integration, a clear pattern emerges: Microsoft is moving from a phase of fascination with general AI capabilities to a phase of rigorous, benchmark-driven selection of specialised tools.

    A key reference point for this decision is the CTI-REALM benchmark, co-developed by Microsoft engineers. The fact that Claude Mythos scored highest in it, comfortably outscoring the GPT-5.4-Cyber model, is a market signal that cannot be ignored. Microsoft, as OpenAI’s largest partner and investor, has shown that in critical areas such as cyber security, pragmatism and hard data win out over corporate loyalty. This strategic approach to model vendor diversification avoids vendor lock-in and ensures access to the most effective solutions in specific niches.

    From a business perspective, integrating Mythos directly into the software development cycle is a classic implementation of the ‘shift-left’ strategy. The cost of fixing a vulnerability discovered in production is many times higher than that of eliminating the bug while the code is being written. The cited detection of a vulnerability that had existed for 27 years, and the success of Mozilla, which identified 271 vulnerabilities thanks to Claude Mythos, are not just technological curiosities. They are concrete indicators of return on investment (ROI). For companies sitting on large legacy codebases, automating security audits with such high-precision models means saving thousands of hours of senior specialists’ time and drastically reducing the legal and reputational risks associated with potential data leaks.
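    The shift-left arithmetic can be sketched with a back-of-the-envelope model. All figures below are assumptions for illustration (a common industry rule of thumb puts a production-stage fix at 10–100x the coding-stage cost); they are not taken from Microsoft or Mozilla.

```python
# Illustrative shift-left savings model; multipliers are assumed, not sourced.

COST_AT_CODING = 100        # assumed cost ($) to fix a bug while writing code
PRODUCTION_MULTIPLIER = 30  # assumed: a production-stage fix costs 30x as much

def audit_savings(bugs_found_early: int) -> int:
    """Savings from catching bugs at coding time instead of in production."""
    cost_early = bugs_found_early * COST_AT_CODING
    cost_late = bugs_found_early * COST_AT_CODING * PRODUCTION_MULTIPLIER
    return cost_late - cost_early

# Applied to the 271 vulnerabilities reportedly identified at Mozilla:
print(audit_savings(271))  # 271 * 100 * 29 = 785,900
```

    Even with deliberately conservative multipliers, the savings scale linearly with every vulnerability caught before release, which is why such audits translate directly into ROI.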

    The market reaction to Mythos’ capabilities, manifested, for example, by concern in the banking and insurance sectors and interest from the NSA, suggests that a new kind of regulatory risk is involved. Claude Mythos is seen as a dual-use technology. The model’s ability to instantaneously map vulnerabilities makes it a defensive tool of unprecedented power, but also a potential offensive instrument. The embargo under consideration by US agencies and the restrictive access under Project Glasswing suggest that in the near future, access to the most advanced cyber security models may be rationed in a similar way to armaments or high-end cryptographic technologies. Companies must therefore factor into their strategies the possibility that technological advantage in AI may be curtailed by state intervention.

    It is also worth noting a painful market lesson for OpenAI. The fact that the release of GPT-5.4-Cyber failed to draw attention away from the Anthropic solution signals a change in the expectations of corporate customers. The market has become saturated with promises of versatility; solutions with proven effectiveness in specific usage scenarios are now sought after. Microsoft, by implementing Claude into its 365 applications and its internal processes, de facto legitimises Anthropic as an equal, and in some respects superior, technology partner. This suggests that OpenAI’s dominance may be more fragile than stock market valuations would indicate.

    For Microsoft itself, the move is an attempt to get ahead of mounting criticism over historical security lapses. Redmond has understood that at the current scale and complexity of the Windows and Azure ecosystem, traditional methods of manual code review are inefficient. Using Claude Mythos as an intelligent filter to verify developers’ work is an attempt to systemically address the problem of technical debt. If Microsoft manages to significantly reduce the number of critical vulnerabilities in its products with this solution, it will set a new market standard to which all SaaS and cloud players will have to adapt.

  • Alphabet invests $40bn in Anthropic. Is it fighting for control with Amazon?

    Alphabet, Google’s parent company, has announced its intention to invest up to $40 billion in Anthropic, a startup that for the Mountain View giant is both a key cloud customer and one of its fiercest competitors in the race for supremacy in artificial intelligence.

    The structure of this deal reflects the new reality of funding the AI sector, where capital is closely tied to specific outcomes. Google will put up $10 billion in cash at a $350 billion valuation for the startup. The remaining $30 billion will only be deployed once the developers of the Claude model achieve rigorous performance targets. For Alphabet, this is not only a capital investment, but above all an attempt to forge closer ties with an entity that has emerged as a leader in niches where Google is still searching for its identity.

    The move comes just days after Amazon pledged its own $25 billion cash injection to Anthropic. A situation where two of the world’s biggest cloud providers are bidding for the same startup shows how desperately tech giants need the success of external models to drive sales of their own computing infrastructure.

    Anthropic’s driving force is no longer just the promise of secure artificial intelligence, but real financial results. The company’s annual revenue has just surpassed the $30 billion barrier, an impressive jump from the $9 billion recorded at the end of 2025. Investors are responding enthusiastically, with some offers from the venture capital market valuing the company at up to $800 billion. Underpinning this growth is Claude Code, a tool that dominates the software segment, and Anthropic’s Cowork agent, whose plug-ins have recently caused jitters in the stock markets, driving down the valuations of traditional SaaS software companies.

    Anthropic’s greatest challenge, however, remains its ‘hunger for power’. Scaling the models requires infrastructure of a scale never seen before. The startup is securing this through multi-year agreements with Broadcom and CoreWeave, as well as an ambitious $50 billion plan to build its own data centres in the US.

    The market is splitting into specialised tools, and Anthropic, with its focus on coding and autonomous agents, is proving that general-purpose models can be successfully challenged. Alphabet, by investing in Anthropic, is buying itself an insurance policy in case the startup’s approach becomes the industry’s standard.

  • Japan sets up task force against Mythos AI threats

    When Anthropic announced that its latest AI model, Mythos, had identified thousands of previously unknown security vulnerabilities in operating systems, Silicon Valley was in an uproar. But it was in Tokyo, the heart of Asia’s conservative financial system, that the most concrete policy decision was made. Finance Minister Satsuki Katayama announced the creation of a special task force to secure Japan’s banking sector against a new era of threats generated by artificial intelligence.

    For the market, Japan’s move means that the traditional approach to cyber security based on cycles of patching holes is about to become history. The new entity includes key state institutions, including the Financial Services Agency and the Bank of Japan, as well as private giants and exchange operator Japan Exchange Group. The scale of this coalition reflects the seriousness of the situation: Mythos is not just another language model, but a tool capable of detecting and exploiting software vulnerabilities at a speed that human administrators cannot match.

    For the financial sector, this is a critical scenario. Banks, despite modern interfaces, still rely heavily on a complex, multi-layered IT architecture, parts of which date back decades. The interconnectedness of transactional systems means that a single breach can have a knock-on effect. Katayama rightly points out that in a world of real-time operations, a digital crisis immediately translates into a loss of confidence in the market and real losses of liquidity.

    Although there have been no incidents directly related to the Mythos model to date, Japan’s pre-emptive action sets a new regulatory standard. Regulators in the US and Europe have also issued warnings, suggesting banks urgently review their defences. However, it was the Japanese administration that was the first to openly acknowledge that there was a ‘crisis at hand’.

    Executives in the fintech and banking sectors should take note of the fact that AI has dramatically reduced the amount of time that a security vulnerability remains a theoretical threat. Security investments should now evolve towards autonomous systems capable of responding at the same speed that models such as Mythos can strike. The fight for financial stability in 2026 is no longer about whether a system will be attacked, but whether it will have time to repair itself before the market sees an anomaly.

  • Leaked controversial Claude Mythos model. Anthropic investigates security incident

    Anthropic, one of the leading forces in the artificial intelligence sector, is facing a serious reputational and operational challenge. As reported by Bloomberg News, the company’s most advanced model, Claude Mythos Preview, was leaked to a small group of unauthorised users. The incident comes at a crucial time for the startup, which is just positioning its technology as the foundation of a new era of cyber security.

    The leak occurred on 7 April, the very day Anthropic announced ‘Project Glasswing’. The initiative was intended to allow selected organisations to test the Mythos model under controlled conditions, mainly to strengthen their defences against digital attacks. Meanwhile, a group of users on a private online forum gained access to the tool almost immediately after the official announcement. Although reports indicate that the model has not been used for criminal purposes to date, the fact that it is circulating outside the vendor’s control raises legitimate concerns.

    A spokesperson for Anthropic confirmed that the company is investigating the matter, pointing to a third-party vendor environment as the likely source of the leak. The incident could complicate Anthropic’s relationship with regulators. Mythos is a model with an unprecedented ability to identify software vulnerabilities. It is a ‘dual-use’ tool – in the hands of defenders it patches systems, but in the hands of hackers it can become a precision weapon. The loss of control of such a powerful resource, even if temporary, reinforces the arguments of advocates of strict oversight of models critical to national security. Anthropic must now prove that it can effectively protect the technology that is supposed to protect the world.

  • NSA uses Claude Mythos despite official Pentagon ban

    According to Axios, citing sources close to the intelligence community, the National Security Agency (NSA) is actively using Anthropic’s latest model, Claude Mythos. There would be nothing unusual about this, were it not for the fact that the same administration has officially declared Anthropic a ‘supply chain risk’, which should theoretically close the door to government contracts.

    This rupture within the US security apparatus is indicative of a wider problem: the tension between the ethics of AI developers and the military ambitions of the state. Anthropic was blacklisted not because of technical loopholes or links to foreign intelligence, but as a result of an ideological clash. The company refused to allow the Pentagon to use its models for mass surveillance of citizens and the development of autonomous combat systems. In response, Defence Secretary Pete Hegseth gave the company a risk label, hitherto reserved for entities linked to authoritarian regimes.

    For the technology business, this situation is a lesson in pragmatism. The NSA, whose statutory mandate is to crack ciphers and go on the offensive in cyberspace, has apparently decided that Claude Mythos is too powerful a tool to abandon. The model has shown remarkable effectiveness in identifying zero-day bugs and finding backdoors in foreign software. In the face of such unique capabilities, the Pentagon’s political pronouncements ring hollow.

    The current state of affairs is a classic bureaucratic farce with serious market implications. While the Pentagon is publicly warning against Anthropic, the intelligence services are signing new contracts with the company, arguing for national security needs. This sets a dangerous precedent in which security labels are used as a leverage tool in contract negotiations rather than as a real threat assessment.

    The technical value of AI is becoming stronger than political arbitration. Anthropic is currently fighting to regain its good name through legal means, but it is the actual demand from agencies such as the NSA that may prove to be its most effective line of defence.

  • Giant investment in Anthropic. Amazon cements the dominance of AWS

    Amazon has announced an expansion of its investment in Anthropic by a further $25 billion, which, combined with previous outlays, makes the startup the centrepiece of AWS’ strategy. However, this is not a unilateral capital flow. As part of a mutual commitment, Anthropic will spend more than $100 billion on Amazon’s cloud infrastructure over the next decade, de facto cementing the most powerful technology alliance of the decade.

    For Andy Jassy, Amazon’s CEO, the deal is a key part of the fight to become independent of third-party processor suppliers. The key point of the agreement is not the dollars themselves, but the ‘custom silicon’. Anthropic has committed to using Trainium2 and Trainium3 chips to train its most advanced Claude models. By the end of the year, the startup plans to develop 1 gigawatt of computing power based on Amazon’s proprietary solutions, ultimately aiming for five times that. This sends a clear signal to the market: Amazon doesn’t want to just be a middleman selling Nvidia chip-based computing power, but is aiming for a full, vertically integrated technology stack.

    Amazon’s strategy appears to be extremely pragmatic and multi-tracked. While Microsoft has bet almost everything on OpenAI, Amazon is diversifying its risk. Its recent pledge to invest $50 billion in OpenAI, juxtaposed with its current move towards Anthropic, positions AWS as a ‘neutral factory’ for the biggest AI players. Amazon accepts that its own models, such as Nova, may not always be in the top tier, as long as it is on its infrastructure that the foundations of the new economy are built.

    The AI market is entering a phase of mature consolidation based on gigantic capital expenditure. With Amazon’s projected $200 billion in capital expenditure this year, the barrier to entry for potential cloud competitors is becoming almost insurmountable. The real battle is no longer just about who will create the smarter model, but about who has physical control over the energy and silicon on which that intelligence operates. Amazon’s share price, rising after the news was announced, suggests that investors appreciate this vision of a secure, profitable infrastructure that makes money regardless of which AI model ultimately wins the battle for the end user.

  • Anthropic Mythos: Why is the Bundesbank warning against a new AI model?

    According to Joachim Nagel, President of the Bundesbank, the financial industry has faced a dilemma in which advanced artificial intelligence ceases to be an assistant and becomes an autonomous tool capable of destabilising global infrastructure.

    The German central bank chief’s concerns centre on Mythos’ unprecedented ability to code and identify vulnerabilities. The model demonstrates an almost instinctive proficiency in finding software bugs, which in the hands of cybercriminals could spell the end of security based on ‘legacy systems’. Many financial institutions still operate on IT architectures built decades ago that, while stable, were not designed to fend off attacks generated by a machine that thinks faster than any team of cyber security experts.

    Nagel argues that Anthropic’s current strategy of making Mythos available only to a narrow, select group of companies and organisations creates a dangerous asymmetry. Instead of protecting the market, limited access can exacerbate systemic risk. If only a few have the shield of Mythos’ effectiveness, the rest of the sector is left exposed, which from a banking supervisor’s perspective is an unacceptable distortion of competition. The demand is clear: all relevant institutions must have access to the same defensive tools to avoid technological stratification, which could lead to a domino effect in the event of a successful attack on the weakest link.

    However, the Bundesbank’s perspective goes beyond mere cyber-security, striking at the foundations of monetary policy. Nagel challenges the widespread optimism that artificial intelligence will be a cure for inflation through increased productivity. On the contrary, he warns of price pressures resulting from the huge demand for investment in AI infrastructure and the drastic increase in the cost of electricity required to power data centres.

    Most intriguing, however, is the warning against ‘tacit collusion by algorithms’. There is evidence to suggest that sophisticated models can autonomously learn to optimise profits by keeping prices above competitive levels, doing so without direct communication between firms.
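    The mechanism behind this warning can be illustrated with a deliberately simplified toy model. Every price and rule below is an invented assumption, not drawn from any cited study: two pricing algorithms that never exchange a message, each simply matching undercuts and otherwise nudging its price upward, settle well above the competitive level.

```python
# Toy tacit-collusion sketch; all prices and rules are illustrative assumptions.

COMPETITIVE_PRICE = 10  # Bertrand outcome: price driven down to marginal cost
MONOPOLY_PRICE = 20     # assumed joint-profit-maximising price

def next_price(my_last: int, rival_last: int) -> int:
    """Match any undercut (punishment); otherwise edge the price upward."""
    if rival_last < my_last:
        return rival_last
    return min(my_last + 1, MONOPOLY_PRICE)

a, b = 12, 15  # arbitrary starting prices above cost
for _ in range(50):
    a = next_price(a, b)  # firms move in turn, observing only posted prices
    b = next_price(b, a)

print(a, b)  # → 20 20: supra-competitive, with no communication between firms
```

    The punishment threat alone stabilises prices at the monopoly level, which is precisely why this behaviour is hard to prosecute under rules written for explicit cartels.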

    For central banks tasked with maintaining price stability, this new form of algorithmic rate setting presents a challenge that will require entirely new regulatory tools. In a world dominated by models such as Mythos, central bankers’ vigilance must now extend not just to spreadsheets but to lines of code themselves.

  • OpenAI presents GPT-5.4-Cyber. A response to the Anthropic project

    The competition for dominance in the security AI sector is gaining momentum as OpenAI introduces the GPT-5.4-Cyber model in direct response to the successes of rival Anthropic’s project. The new variant of the flagship model prioritises greater operational freedom for researchers, which is crucial in the race to patch vulnerabilities in critical infrastructure.

    Tuesday’s release of GPT-5.4-Cyber is more than just another iteration of a flagship model. It is a strategic shift in the boundaries of what AI developers allow their users to do. While Anthropic is betting on a rigorously controlled initiative for a select few, OpenAI is opting for a ‘more permissive’ model. In practice, this means loosening the safety restrictions that have so far often prevented researchers from fully analysing malicious code or simulating attacks for fear of violating the platform’s own security policies.

    The key to OpenAI’s strategy, however, is not just the technology, but the ecosystem. The company is dramatically scaling the Trusted Access for Cyber (TAC) programme, opening it up to thousands of individual experts and hundreds of teams looking after critical infrastructure. The introduction of multi-level verification is a pragmatic solution to the ‘dual use’ problem of artificial intelligence. Higher levels of trust unlock the more powerful features of GPT-5.4-Cyber, giving defenders a tool with effectiveness similar to that of attackers, but within a legal and ethical framework.
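    One plausible shape for such multi-level verification is a capability floor per trust tier. The tier names and capability list below are invented for illustration; this is not OpenAI’s actual TAC interface.

```python
# Hypothetical sketch of tier-gated capability access, loosely modelled on the
# multi-level verification described for the TAC programme. Tier names and
# capabilities are assumptions made up for this example.

from enum import IntEnum

class TrustTier(IntEnum):
    REGISTERED = 1   # identity verified
    VETTED = 2       # organisation and background checks passed
    CRITICAL = 3     # critical-infrastructure operator

# Minimum tier required to unlock each capability (assumed values).
CAPABILITY_FLOOR = {
    "static_analysis": TrustTier.REGISTERED,
    "malware_analysis": TrustTier.VETTED,
    "exploit_simulation": TrustTier.CRITICAL,
}

def allowed(tier: TrustTier, capability: str) -> bool:
    """A capability unlocks only at or above its floor tier."""
    return tier >= CAPABILITY_FLOOR[capability]

print(allowed(TrustTier.VETTED, "malware_analysis"))    # True
print(allowed(TrustTier.VETTED, "exploit_simulation"))  # False
```

    The design point is that the same model serves every tier; only the unlocked capabilities differ, which keeps the ‘dual use’ risk proportional to how much an operator has been vetted.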

    In this clash, OpenAI is betting on massiveness and fewer restrictions for proven partners, hoping that it is the broad ‘white hat’ community that will become their strongest asset. This decision carries risks, but in the face of increasingly sophisticated threats, a strategy of ‘controlled openness’ may prove to be the only effective way to secure the digital future.

  • Anthropic’s strategic restraint: Why aren’t Claude’s creators rushing for billions?

    In the venture capital world, an $800 billion valuation usually ends with the immediate opening of champagne. But for Anthropic, the startup behind the Claude model, the latest offers from investors have become a test of discipline, not just a cause for celebration. Although the market is rumbling about a potential doubling of the company’s value in just a few months, the company’s management is showing a restraint rarely seen in Silicon Valley.

    Underpinning this optimism is hard financial data. Anthropic’s revenues have grown from $9 billion at the end of 2025 to a staggering $30 billion today. Such exponential scaling of the business makes another round of private funding an option rather than a necessity for the company. Rather than diluting the shares with the current euphoria, the company seems to prefer a path leading directly towards an IPO, which speculation suggests could happen later this year.

    The key to Anthropic’s market dominance was the launch of the Mythos model. It redefined the concept of the ‘agent model’, that is, a system capable of autonomously performing complex tasks rather than just answering simple queries. Advertised as the most powerful coding tool on the market, Mythos has become an essential resource for the engineering departments of major corporations. However, this technological advantage brings with it new challenges; experts are sounding the alarm that such prowess with code can be a double-edged sword, making it just as easy to find exploitable cyber security vulnerabilities as to fix them.

    For business decision-makers, Anthropic’s stance signals the maturity of the AI sector. The time of ‘burning cash’ in pursuit of pure reach is giving way to models that generate real returns and have tangible utility in automating processes. By rejecting offers of close to a trillion dollars, Anthropic is sending a clear message: their technology is worth more than the current gold rush in the VC market, and the company’s true value will be verified not by private rounds, but by the public floor and Mythos’ ability to safely manage autonomous code.

  • OpenAI is fighting for the corporate market. Does Anthropic threaten the AI leader?

    OpenAI, valued at an astronomical $852 billion, stands on the threshold of the most important test in its short history. While its recent $122 billion funding raise – arguably the largest round in the history of Silicon Valley – suggests unwavering market confidence, there is growing unease beneath the surface. Some of the company’s early supporters are beginning to question its strategic coherence in the face of increasingly aggressive competition from Anthropic and a resurgent Google.

    The main point of contention is OpenAI’s sharp turn towards the corporate sector. The company has revised its product roadmap twice in the past six months. This nervousness is a direct reaction to the successes of rivals: first Google, which has integrated AI into its ecosystem, and now Anthropic, whose revenue momentum, according to some analysts, may soon eclipse the market leader’s growth rate.

    Critics, including an early OpenAI investor quoted by the Financial Times, point to a “profound lack of focus”. The argument is simple: ChatGPT has one billion users and is growing at 50-100% per year. In this context, a sudden focus on enterprise solutions and software tools seems risky, potentially dissipating the company’s resources at a crucial time ahead of its planned IPO this year.

    OpenAI’s management, led by chief financial officer Sarah Friar, firmly rejects these concerns. Management says the record interest in the latest funding round is the best evidence that the market believes in the path ahead. A company spokesperson stresses that the offer was oversubscribed, reflecting investors’ “strong belief” in the long-term business value of the company.

    For the technology sector, however, the lesson is clear. Even with almost unlimited capital and a dominant market position, OpenAI is not immune to competitive pressure. The battle for dominance in AI is moving from the pure innovation phase to the brute business execution phase. As the IPO approaches, the market will be watching closely to see whether Sam Altman manages to turn the popularity of ChatGPT into a stable, corporate foundation, or whether OpenAI becomes a victim of its own overly broad appetite for success.

  • Anthropic negotiates with Trump’s government over Mythos model

    The line between national security and commercial autonomy is becoming ever thinner. The best example of this tension is Anthropic, which, despite being recently blacklisted by the Pentagon, is busily courting the Trump administration. The bone of contention has been Mythos, the company’s latest and most powerful AI model, which, rather than becoming the foundation of US digital defence, has ended up in the middle of a legal and political clinch.

    The dispute that led to Anthropic being cut off from contracts with the Department of Defence and its subcontractors is not about the technology itself, but about ‘guardrails’. The Pentagon is demanding freedom to implement AI tools in military operations, which the startup – which builds its image on a foundation of security and ethics – refused to accept. The result? Officials deemed the company a supply chain risk, a drastic move for an entity aspiring to be a key partner of the state.

    Jack Clark, co-founder of Anthropic, is nevertheless trying to calm the mood. At a recent Semafor World Economy event in Washington DC, he stressed that the contractual conflict should not overshadow the overriding goal of national security. According to Clark, dialogue with the government about the Mythos model is ongoing and the company sees it as part of its ‘information obligation’ to the state.

    The stakes are huge because Mythos is not just another iteration of a chatbot. It is an agentic, task-oriented model with advanced coding capabilities and an unprecedented ability to detect cyber vulnerabilities. In the hands of the military, it could be a powerful offensive or defensive tool, which explains the Pentagon’s determination to take full operational control of it.

    Anthropic is currently in a difficult strategic position. A federal appeals court recently refused to halt sanctions imposed by the Pentagon, which gives the Trump administration a strong bargaining chip in further negotiations. For business leaders and investors, the situation sends a clear message: in the era of frontier models, market success no longer depends solely on technical performance, but on the ability to navigate increasingly restrictive national security policies. Anthropic’s struggle to get back into Washington’s good graces will define the standards of Silicon Valley-Pentagon collaboration for years to come.

  • Is Claude Mythos from Anthropic threatening the banks? Urgent talks in London and the US

    As the Financial Times reports, UK regulators – including the Bank of England and the FCA – are urgently reviewing the potential risks posed by the latest AI model from Anthropic: the Claude Mythos Preview.

    The situation is unprecedented, as the model is not just another chatbot for generating marketing content. Claude Mythos is being developed as part of the enigmatic ‘Project Glasswing’ initiative. According to Anthropic’s official communications, this is a controlled environment in which the model serves a defensive purpose. The problem is that the line between defence and attack in cyberspace is thinner than ever.

    The manufacturer itself has admitted that Mythos has already identified thousands of critical vulnerabilities in operating systems and browsers. What is a breakthrough for security engineers is becoming a nightmare for guardians of the financial system. If the model can pinpoint vulnerabilities in global software with such ease, the critical IT infrastructure of major banks, insurers and stock exchanges could be laid bare.

    Concern is not just confined to the City of London. Across the ocean, US Treasury Secretary Scott Bessent has already convened a meeting with Wall Street giants to assess the cyber risks of developing such sophisticated models. The reaction of regulators suggests that we are standing on the threshold of a new era of risk management, where the biggest threat to banks is no longer bad loans, but artificial intelligence capable of autonomously detecting errors in the code on which the global circulation of money is based.

    Over the next two weeks, representatives of the UK financial sector are to be briefed in detail by the National Cyber Security Centre (NCSC). The message for business leaders is clear: it is time for IT security audits to stop being a formality and become a real battleground against a model that learns faster than any hacker. Project Glasswing was supposed to bring transparency, but for now it has cast a long shadow over confidence in the digital stability of the financial sector.

  • Project Glasswing: How Anthropic wants to harness the power of its own artificial intelligence

    Anthropic is making a move that escapes classic definitions of corporate strategy. The announcement of Project Glasswing, based on the Claude Mythos Preview model, is an event that is as much about software engineering as it is about global security policy and the psychology of trust in business.

    The financial scale of the venture is breathtaking. Achieving an annual revenue rate of $30 billion in just a few months is a result that in a traditional economy would be considered a statistical error. However, behind this facade of success lies a deeper, almost existential uncertainty. Anthropic openly admits that it has created a tool so powerful that its public release could destabilise the foundations of the digital world.

    It is a rare case in the history of technology when a manufacturer voluntarily imposes ‘forbidden fruit’ status on its most potentially profitable product, restricting access to a narrow, elite coalition.

    The foundation of this initiative is the Claude Mythos Preview, a model that in testing has autonomously identified thousands of zero-day vulnerabilities in the most critical systems, such as the Linux kernel and the FFmpeg libraries. Its ability to generate exploits autonomously, without human intervention, blurs the line between a programmer’s assistant and an autonomous cyber actor.

    This is where the first of a series of ironies arises: the technology that is supposed to protect the infrastructure is at the same time the most effective tool to dismantle it. Anthropic, by choosing to isolate the model, becomes the de facto guardian of global digital immunity, which raises questions about the legitimacy of such power in the hands of a private entity.

    However, the credibility of this role has recently been put to the test by a series of mundane incidents. The leak of strategic plans caused by a misconfigured CMS and the accidental release of Claude Code source code are mistakes the literature describes as ‘poor operational hygiene’.

    The contrast between the near-divine power of the Mythos model and the trivial human error in packaging npm libraries is striking. It suggests that the greatest security threat is not a lack of sophisticated algorithms but the persistent fallibility of the human link. Anthropic argues that these errors do not compromise the architecture of the model itself, but to the market observer they are a reminder that even the most powerful shield is only as strong as the hand that holds it.

    The structure of the alliance formed around Glasswing is a phenomenon in itself. The sight of Microsoft, Google, AWS and Apple working together under the aegis of a single startup on joint access to Claude Mythos is testament to the seriousness of the situation. It is a coalition forced by the nature of the digital threat. Traditional methods of patching software holes have become an anachronism in the face of AI, which reduces the time from vulnerability discovery to exploitation from months to minutes.

    Technology giants have understood that in the current market dynamics, no one can survive alone. Ecosystem security has become a common good, the protection of which requires a ceasefire on the battlefields of cloud or hardware market share.

    The initiative also sheds new light on the future of open source software. The allocation of $100 million in computing credits and direct donations to organisations such as the Linux Foundation is an attempt to bridge the historic gap.

    For decades, open code security has relied on the heroism of unpaid volunteers. Glasswing brings the industrial precision of AI auditing to this area, changing the rules of the game. Instead of inundating developers with thousands of bug reports, the system offers human-verified fixes, which is crucial to maintaining the stability of the global network.

    Managing such a huge number of zero-day vulnerabilities is a logistical challenge, which Anthropic solves through prioritisation and a strict timeframe. The 45-day timeframe between discovery and the publication of technical details gives vendors the necessary margin to implement safeguards. It is a process that transforms the chaos of discovery into an orderly stream of updates, giving digital defence a proactive character. In this model, AI is no longer just a tool, but an integral part of the cyber security chain of command.
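    The disclosure mechanics described above can be sketched in a few lines. The following Python fragment is an illustrative sketch, not Anthropic’s actual tooling: the field names and the severity-first ordering are assumptions; only the 45-day window comes from the article.

    ```python
    from datetime import date, timedelta

    DISCLOSURE_WINDOW = timedelta(days=45)  # the window cited in the article

    def publication_date(discovered: date) -> date:
        """Earliest date on which technical details may be published,
        giving the vendor the full 45-day margin to ship a fix."""
        return discovered + DISCLOSURE_WINDOW

    def triage(findings: list[dict]) -> list[dict]:
        """Order findings by severity (highest first), then by how soon
        their disclosure window expires. Field names are hypothetical."""
        return sorted(
            findings,
            key=lambda f: (-f["severity"], publication_date(f["discovered"])),
        )
    ```

    In a scheme like this, every discovery enters a queue with a fixed publication deadline, which is what turns the chaos of discovery into an orderly stream of updates.
    
    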

    Ultimately, Project Glasswing should be seen as an attempt to establish a new ontology in the IT industry. Anthropic is not selling a product; it is offering membership in an early warning system. It is a business model built on exclusivity of responsibility. While sceptics may see this as an attempt to monopolise access to the most advanced security research, it is hard to ignore that the alternative is an uncontrolled arms race in which any actor with hostile intentions could use similar technology to paralyse countries and economies.

    The future of Project Glasswing will show whether the trust placed in Anthropic by the world’s largest corporations was justified. For the moment, the initiative appears to be the only available way out of an impasse in which the pace of innovation has begun to threaten its own achievements.

  • Court blocks Pentagon. Anthropic temporarily removed from blacklist

    Court blocks Pentagon. Anthropic temporarily removed from blacklist

    Federal Judge Rita Lin has temporarily halted the US Department of Defense’s decision to list Anthropic as a threat to the nation’s supply chain. The ruling is the culmination of a high-profile dispute between the maker of the Claude models and the Pentagon over the limits of military and intelligence use of artificial intelligence.

    The conflict escalated when Defence Secretary Pete Hegseth imposed a rarely used security risk label on Anthropic. This status, usually reserved for companies vulnerable to infiltration by foreign powers, prevented the company from bidding for key defence contracts. Anthropic argued in its lawsuit that the government’s decision was unlawful retaliation for its refusal to adapt Claude’s technology to domestic surveillance and autonomous weapons systems.

    In a 43-page memorandum of reasons, Judge Lin upheld the company’s argument, finding that the administration’s actions amounted to punishment for public criticism of the government’s position, in violation of the First Amendment to the US Constitution. The court also highlighted the government’s failure to provide due process, which prevented Anthropic from effectively challenging the designation before it took effect.

    From the Pentagon’s perspective, Anthropic’s resistance sets a dangerous operational precedent. The Justice Department argues that restrictions imposed by AI vendors can lead to technical uncertainty and the risk of sudden shutdown of military systems during missions. The government maintains that the designation was solely due to the company’s refusal to accept the contract terms, not its ethical views.

    Anthropic executives estimate that exclusion from government contracts could cost the company billions of dollars in lost revenue. While the current ruling gives the company breathing space, the administration has seven days to file an appeal. At the same time, a second proceeding over civil government contracts is pending in Washington, which remains a separate risk to Anthropic’s business model.

  • How much does AI replacement cost? Pentagon counts losses after Anthropic blockade

    How much does AI replacement cost? Pentagon counts losses after Anthropic blockade

    Defence Secretary Pete Hegseth’s decision to list Anthropic as a supply chain risk and order the withdrawal of its tools from the Pentagon within six months has created a breach that the US military is unwilling – and perhaps unable – to patch quickly.

    The dispute between the startup and the Department of Defence over safety guardrails exposes the modern military’s deep dependence on specific language models. Claude, Anthropic’s flagship product, became in July 2025 the first AI model approved for secret military networks. Today, despite being blacklisted, it is still in use, which experts read as proof of its unrivalled performance in critical tasks such as operations planning and intelligence analysis.

    The Pentagon’s problem is not just a matter of user preference, although these users openly criticise alternatives such as Grok from xAI for inconsistency. It is primarily an operational and financial crisis. Joe Saunders, CEO of RunSafe Security, points out the brutal reality: recertifying systems for new AI models can take 12 to 18 months.

    For the Pentagon, this means not only gigantic costs, but above all a drastic drop in productivity. In some units, tasks that Claude used to do in seconds – such as searching through huge data sets – are now done manually using Excel sheets.

    The scale of Claude’s integration with defence infrastructure is striking. Even flagship projects such as Palantir’s Maven Smart Systems, with contract values in excess of a billion dollars, rely on code and workflows built under the Anthropic model. Having to rebuild them is an arduous and risky process.

    There is currently a blame game going on at the Pentagon. Some officials and contractors are ‘slowing down’ the process of decommissioning the tools, hoping to reach a compromise before the six-month deadline. It is a classic clash between dynamic technology adoption and national security policy. If the Pentagon does not find a way to replace Anthropic quickly and effectively, it risks trading away effectiveness, the most important currency on the modern battlefield, in its pursuit of technological sovereignty.

  • Will AI kill traditional software? Tech giants fight for the market

    Will AI kill traditional software? Tech giants fight for the market

    There is a growing debate in Silicon Valley, which last month cost the software sector almost a trillion dollars in market valuation. The question is fundamental: will generative artificial intelligence, capable of writing code and automating processes on its own, make traditional SaaS platforms redundant? Industry leaders, from Oracle to Salesforce, have moved to counterattack, arguing that their greatest asset is not the code itself, but the unique data on which they operate.

    Oracle’s Mike Sicilia and Salesforce’s Marc Benioff reject the vision of a ‘software apocalypse’ with one voice. In recent meetings with analysts, both stressed that AI is not an existential threat, but a turbocharger for existing systems. Oracle, whose shares rose 10% after optimistic forecasts, is betting on flexibility and deep embedding in financial and logistical processes. According to analysts, it is the possession of ‘proprietary data’ that provides the most effective moat against new players such as Anthropic.

    Despite the confidence of the giants, the market remains sceptical of companies whose data is easier to replace. An example is Workday, whose share price has been hit hard. Although the company manages a huge amount of HR information, critics note that HR data often follows rigid, standardised formats. This makes it more susceptible to replication by agile AI models.

    However, Aneel Bhusri, returning CEO of Workday, raises a compelling technical argument: today’s artificial intelligence is probabilistic – based on probabilities and patterns. Meanwhile, critical corporate systems need to be deterministic; they need to deliver the same precise result every time, especially in the area of payroll or accounting.
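    Bhusri’s distinction can be made concrete with a toy example. The sketch below is purely illustrative (the functions and figures are invented, not Workday’s code): the first routine samples an answer from a distribution and may differ between runs, while the payroll routine must return the identical figure every time.

    ```python
    import random
    from decimal import Decimal

    def sample_decision(distribution: dict, seed: int | None = None) -> str:
        """Probabilistic: draws an answer from a distribution over options,
        the way a language model samples tokens. Runs may differ."""
        rng = random.Random(seed)
        options, weights = zip(*distribution.items())
        return rng.choices(options, weights=weights, k=1)[0]

    def net_pay(gross: Decimal, tax_rate: Decimal) -> Decimal:
        """Deterministic: a payroll rule must yield the same figure
        on every run, to the cent."""
        return (gross * (Decimal(1) - tax_rate)).quantize(Decimal("0.01"))

    print(sample_decision({"approve": 0.6, "reject": 0.3, "escalate": 0.1}))  # may vary
    print(net_pay(Decimal("5000.00"), Decimal("0.23")))  # always 3850.00
    ```

    The payroll function is the kind of guarantee a probabilistic generator cannot offer by construction, which is the core of the argument.
    
    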

    Instead of obituaries, market observers suggest evolution. Salesforce is promoting its Agentforce platform, and Oracle is integrating AI into its entire technology stack, from database to end-user applications. The advantage of the traditional players comes from switching costs – companies have spent decades building operations around these tools. While AI lowers the barrier to creating new software, it will not so easily replace decades of experience in managing complex business processes.

  • Why is Microsoft standing up for Anthropic?

    Why is Microsoft standing up for Anthropic?

    In Silicon Valley, the competition for dominance in the artificial intelligence sector usually resembles an arms race. However, in the face of bureaucratic pressure from the Department of Defence (DOD), the major players have decided to close ranks. Microsoft has officially backed Anthropic’s request to block the Pentagon’s decision to declare the Claude developer a ‘supply chain risk’.

    Pragmatism instead of solidarity

    For Microsoft, intervening in federal court in San Francisco is not just a gesture of goodwill towards a competitor. It is a cold business calculation. The Redmond giant has integrated Anthropic’s technology into solutions provided to the US military. Suddenly cutting off access to these models would call into question the continuity of defence contracts and force engineers to make costly, hasty rebuilds of systems.

    Microsoft argues that in setting a six-month deadline for withdrawing Anthropic’s technology, the Pentagon overlooked an analogous transition period for third-party contractors. Without a temporary restraining order (TRO) from the court, technology companies will be saddled with new and unpredictable operational risks that could destabilise their business planning for years.

    A new front in Big Tech’s relationship with government

    The case sheds light on a broader issue: the tension between the pace of innovation and the rigours of national security. The DOD’s decision to blacklist Anthropic is all the more surprising given that the startup promotes itself as a leader in AI security and ethics. Microsoft’s stance, backed by engineers from OpenAI and Google, suggests that the industry resents arbitrary official decisions that could block the military’s access to cutting-edge tools.

    The dispute shows that in the AI sector, technical success is only half the battle; the other is navigating the increasingly complex maze of government regulation. If the court does not grant Anthropic and Microsoft’s request, this precedent could hit any software vendor working with the public sector.


  • Claude on the US blacklist. Anthropic goes to court against the Pentagon

    Claude on the US blacklist. Anthropic goes to court against the Pentagon

    Anthropic, the artificial intelligence lab positioning itself as a ‘secure’ alternative to the giants, has gone on an unprecedented legal offensive against the federal government. Lawsuits filed in California and the District of Columbia seek to block the Pentagon’s decision to place the company on a national security blacklist. This clash is not just a dispute over arms contracts; it is a fundamental test of who ultimately controls the ‘brains’ of artificial intelligence: Silicon Valley or Washington.

    The conflict escalated when Defence Secretary Pete Hegseth imposed a supply chain risk designation on Anthropic. The reason was the refusal of Dario Amodei, Anthropic’s CEO, to remove ethical ‘barriers’ restricting the use of Claude in autonomous weapons systems and for domestic surveillance. The Pentagon’s position is that the law, not private corporations, decides how the country is defended, and it demands full flexibility in military operations. Anthropic, on the other hand, argues that current technology is too unreliable to be entrusted with life-and-death decisions, and that forcing its use violates free speech and due process.

    This battle has immediate consequences. Although Amodei offers reassurance that the restrictions are narrow in scope, market uncertainty is apparent. The presidential order to halt Claude’s use across government hits the company’s image as a stable partner. As Wedbush’s Dan Ives notes, the corporate sector may temporarily ‘put its pencils down’ on new Anthropic technology deployments while waiting for legal clarity.

    While Anthropic is fighting in the courts, the competition is wasting no time. OpenAI was quick to declare its principles convergent with the needs of the Department of Defence, taking the lead in dealing with the government. However, solidarity with Anthropic was expressed by researchers from Google and OpenAI, warning in an amicus curiae opinion that punishing companies for caring about security would stifle innovation and silence critical debate in the industry. The outcome of this trial will determine the new architecture of the relationship between the state and AI developers, defining whether the ethics of the model can be negotiated with the government.

  • Trump excludes Anthropic from contracts. New AI rules in the US

    Trump excludes Anthropic from contracts. New AI rules in the US

    There has been a sharp cooling in the relationship between Silicon Valley and the Pentagon that could redefine the business model of leading artificial intelligence labs. The Trump administration, seeking full operational freedom in the use of new technologies, is putting in place strict guidelines that call into question the autonomy of companies such as Anthropic.

    The General Services Administration’s decision to terminate contracts with Anthropic and label the company a ‘supply chain risk’ signals that the government will not tolerate compromises on control of AI tools. The flashpoint appeared to be the security mechanisms built into the models, which the Department of Defence believes cripple their military and civilian utility.

    A key element of the new strategy is the requirement to grant the US government an irrevocable licence to use AI systems for “any lawful purpose”. The new GSA guidelines go further, striking at the very structure of algorithms. Companies seeking federal contracts cannot “intentionally encode ideological judgements” into the results generated by the systems. This is a direct blow to content filtering mechanisms, which the administration sees as a form of censorship or bias.

    The current situation creates a clear division in the market. Companies that opt for the full flexibility and ‘neutrality’ required by Washington will gain privileged access to the public sector. Others, emphasising restrictive security barriers, may be pushed out of the world’s most important procurement market, drastically altering their valuations and growth prospects.

  • Big Tech workers vs Pentagon. Military pressure on AI sector sparks resistance

    Big Tech workers vs Pentagon. Military pressure on AI sector sparks resistance

    When US Secretary of Defence Pete Hegseth called the development of artificial intelligence a military arms race in January, relations between the government and Silicon Valley entered a new and turbulent phase. We are now witnessing unprecedented pressure from the US administration on key players in the AI sector, which is being met with increasing resistance from the developers of these technologies themselves.

    A growing conflict has been sparked by an ultimatum issued to Anthropic. The Pentagon is reportedly threatening to use the Defence Production Act to force the company to adapt its language models to the needs of the US military. A refusal would result in the company being deemed a supply chain risk. In response to this pressure, Anthropic has made it clear that it will not make its solutions available for mass surveillance of citizens or to power weapons capable of autonomous killing without close human oversight.

    The situation instantly triggered a wave of solidarity within the competing companies. A group of Google and OpenAI employees have signed a joint petition entitled ‘We will not be divided’. The signatories of the document warn that the Department of Defence is attempting to use classic divisive tactics, hoping to force the tech giants into concessions that AI security leaders have not agreed to. The initiative aims to create a united industry front. Employees are calling on their companies’ boards to maintain standards and not hand over technology to the military without proper ethical safeguards.

    From a business perspective, the threat of using extraordinary national security powers against private technology entities is an entry into completely uncharted territory. As Dean Ball, former White House technology policy advisor, notes, Anthropic faces the dangerous spectre of quasi-nationalisation or exclusion from the market. This aggressive move by the administration also sends a clear and worrying message to the entire innovation ecosystem, suggesting that doing business with the government carries a huge risk of losing operational independence.

    These developments will define not only the future of weapons contracts in Silicon Valley, but above all the limits of commercialisation and control of the most powerful models of artificial intelligence.

  • Anthropic hits IBM’s foundations – $31 billion evaporated by one AI tool

    Anthropic hits IBM’s foundations – $31 billion evaporated by one AI tool

    Monday’s crash in IBM shares, which evaporated $31 billion of the company’s capitalisation, is more than a stock market correction. The 13.1% drop, the deepest since the dot-com bubble burst in 2000, is a harsh reality check for the optimism around the ‘new face’ of Big Blue. IBM’s foundations, built on COBOL infrastructure, have ceased to be a safe haven and have become a target for attack.

    Monopoly under fire from AI

    The blow came from Anthropic. The startup, backed by Google and Amazon, has announced a tool called Claude Code to automate the modernisation of COBOL code. This is a direct attack on IBM’s cash cow. For decades, the Armonk-based giant has profited from the fact that global finance is trapped in 1960s code that almost no one can service anymore. IBM built its power on this technological captivity of banks and government agencies.

    Anthropic has said publicly what the industry has long whispered: the number of engineers familiar with COBOL is falling dramatically, and artificial intelligence can today break that deadlock faster than z17 mainframes can depreciate. For investors, this signals that the barrier to entry IBM has protected for half a century has just collapsed.
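    In practice, AI-assisted modernisation of a legacy estate usually begins with triage. The sketch below is a hypothetical illustration, not Claude Code’s actual behaviour: it ranks COBOL sources by a crude complexity score so that the lowest-risk files can be handed to an automated translation step first.

    ```python
    import re
    from pathlib import Path

    # Crude complexity signals: GO TO statements make control flow harder
    # to translate than structured PERFORM calls, so they weigh more.
    GOTO = re.compile(r"\bGO\s+TO\b", re.IGNORECASE)
    PERFORM = re.compile(r"\bPERFORM\b", re.IGNORECASE)

    def rank_migration_candidates(root: str) -> list[str]:
        """Return .cbl files sorted simplest-first: a plausible starting
        order for an AI-assisted rewrite of a legacy codebase."""
        scored = []
        for path in Path(root).rglob("*.cbl"):
            text = path.read_text(errors="ignore")
            score = 3 * len(GOTO.findall(text)) + len(PERFORM.findall(text))
            scored.append((score, str(path)))
        return [p for _, p in sorted(scored)]
    ```

    However the scoring is done, the point stands: once migration order can be computed rather than guessed, the scarcity of COBOL engineers stops being a moat.
    
    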

    The beginning of the end of the mainframe?

    Although IBM boasts an AI (watsonx) order book in excess of $12.5 billion, Monday’s panic shows that the market doesn’t quite believe in the transformation. Critics note that much of IBM’s growth in 2025 was based on upgrading legacy systems rather than real innovation.

    It is worth contrasting this optimism with another perspective – business often chooses stability over revolution, but the current situation shows the other side of the coin: this attachment to mainframe stability has become a trap for IBM. As soon as a viable alternative emerged in the form of AI-supported migration, the customer loyalty on which IBM’s business model is based began to be valued as a risk rather than an asset.

    Escape into debt and defence

    IBM is trying to salvage the situation with aggressive acquisitions, such as the $11bn purchase of Confluent, which raises concerns about the company’s growing debt burden. The ‘flight forward’ strategy towards defence contracts (the $151bn Project SHIELD) is a classic move by a corporation losing ground in the private sector and seeking refuge in slow government budget cycles.

    The time when ‘nobody got fired for buying IBM’ is irretrievably gone. If Anthropic’s tools actually make the migration from COBOL work, IBM could be left with billions of dollars in non-working hardware and software that no one will need anymore.