Tag: OpenAI

  • Mistral AI vs OpenAI – Has Europe gained a viable alternative to ChatGPT?


    Last week will be remembered as the moment Europe’s artificial intelligence sector shifted from defence to a precision technology offensive. In just 48 hours, Paris-based Mistral AI made a series of moves that go well beyond routine model updates. By simultaneously launching the Mistral Medium 3.5 model, the Vibe development environment, the Workflows orchestration platform and a new mode of operation for Le Chat, the company unveiled a complete, vertically integrated (full-stack) technology offering. For IT decision-makers and business leaders in Europe, the message is clear: digital sovereignty has become a measurable operational and financial category.

    The end of scattered model fleets – the economics of Mistral Medium 3.5

    A key element of the new strategy is Mistral Medium 3.5, a 128-billion-parameter model released under an open-weights licence. From an analytical perspective, its greatest value lies not in raw power alone but in the unification of capabilities: it is the first Mistral model to combine advanced reasoning, deep instruction-following and highly consistent code generation within a single set of parameters.

    From a business perspective, such integration directly affects total cost of ownership (TCO). Until now, companies have been forced to maintain a fleet of specialised models: one to analyse legal documents, another to support developers, yet another for simple classification tasks. Medium 3.5 allows that infrastructure to be consolidated. Results on benchmarks such as SWE-Bench Verified (77.6%) and tau³-Telecom (91.4%) show that the model not only matches but, in specific engineering applications, outperforms closed systems such as GPT-4o or Claude 3.5.

    Importantly for operations departments, Medium 3.5 can be deployed locally using four H100 or H200 GPUs. This opens the door to building private, secure AI environments inside corporate data centres, eliminating reliance on the latency and pricing policies of external cloud providers.
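    A quick back-of-envelope calculation shows why four 80 GB cards can plausibly host a model of this size. The figures below are illustrative assumptions (8-bit quantised weights, H100 80 GB cards), not official sizing guidance from Mistral:

    ```python
    # Sketch: does a 128B-parameter model fit on four 80 GB GPUs?
    # Assumes 8-bit (FP8/INT8) weights; real requirements depend on the runtime.

    PARAMS = 128e9          # model parameters
    BYTES_PER_PARAM = 1     # 8-bit quantised weights (assumption)
    GPUS = 4
    VRAM_PER_GPU_GB = 80    # H100 80 GB

    weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
    total_vram_gb = GPUS * VRAM_PER_GPU_GB
    headroom_gb = total_vram_gb - weights_gb  # left for KV cache and activations

    print(f"weights: {weights_gb:.0f} GB, cluster VRAM: {total_vram_gb} GB, "
          f"headroom: {headroom_gb:.0f} GB")
    ```

    Under these assumptions the weights occupy roughly 128 GB of the cluster’s 320 GB, leaving meaningful headroom for the KV cache; at 16-bit precision the same model would not fit, which is why quantisation matters for this class of deployment.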

    From conversation to implementation – Vibe and Workflows

    Mistral AI has rightly diagnosed that the bottleneck for AI adoption in business is no longer the quality of generated text but integration with processes. The Vibe and Workflows tools are its answer.

    Vibe addresses a key productivity issue for engineering teams: developers being blocked while AI agents work. Introducing remote agents that run in parallel in the Mistral cloud while remaining fully synchronised with the local environment changes the working paradigm. Integration with GitHub, Jira, Sentry and Slack means that AI ceases to be a ‘question-answering assistant’ and becomes a ‘task performer’ that notifies the human only once the job is complete.

    Workflows, in turn, built on the proven Temporal engine (used by Stripe and Netflix, among others), is an orchestration layer for constructing long-running, fault-tolerant workflows. The architecture separates the control plane from the data plane. In practice, this means a company in a regulated sector can benefit from advanced process management in the cloud while the data itself, and its processing, never leave the client’s secure local infrastructure. The solution is ideally suited to players such as ASML or La Banque Postale, which are already using it to automate customs processes and document compliance checks.

    Sovereignty as strategic risk management

    In 2026, the argument of digital sovereignty has evolved from an ideological discourse to a hard risk analysis. Statements by UK Secretary of State Liz Kendall or actions by the French Ministry of the Armed Forces point to a growing awareness of the risks posed by the concentration of computing power in the hands of just a few Silicon Valley players.

    For a European technology director, the on-premise model offered by Mistral is an insurance policy against three risks:

    1. Political risk: the unpredictability of US export regulations and the US administration’s influence over the availability of AI services in moments of geopolitical tension.

    2. Regulatory risk: the need for strict compliance with the GDPR, the EU AI Act and the NIS2 and DORA directives. In finance or healthcare, the ‘right to audit’ and full control over data location are legal requirements that the standard APIs of OpenAI or Anthropic are not always structurally able to satisfy.

    3. Operational risk: sudden changes in model behaviour (so-called model drift) or unilateral modifications of service terms by SaaS providers.

    With 60% of its revenues in Europe, Mistral has a natural interest in adapting to the local regulatory framework, making it a more predictable partner than its US competitors.

    Alliances and financial foundations

    Critics of the European approach have often pointed to a lack of capital and infrastructure. Mistral AI is systematically refuting those claims. Institutional funding of €830 million from a consortium of banks (including BNP Paribas, HSBC and MUFG) for the purchase of 13,800 NVIDIA GPUs signals that AI in Europe is becoming an infrastructure asset, not merely a speculative one.

    Equally important is Mistral’s admission to the NVIDIA Nemotron Coalition. The partnership with Jensen Huang’s company allows Mistral to co-create frontier models on DGX Cloud infrastructure while keeping them open. It is a strategic balancing act: using the best available hardware while promoting open model weights, driving innovation across the European developer ecosystem.

    Analysis of recent Mistral AI activities leads to three key conclusions for business leaders in Europe:

    • AI is becoming a commodity, but control is not: competitive advantage is built not by mere access to models, but by the ability to integrate them deeply into one’s own infrastructure without the risk of data leakage.
    • Cost optimisation requires flexibility: open-weight models allow performance to be tuned against cost. The ability to run a Medium-class model on your own servers drastically changes the ROI calculations of AI projects.
    • Compliance is an opportunity, not a burden: Companies that choose the path of sovereign AI will pass through the regulatory sieve of the EU AI Act and NIS2 more quickly, gaining the trust of customers in critical sectors.
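    The cost-optimisation point above can be made concrete with a simple break-even sketch comparing metered API spend against a fixed self-hosting budget. Every number below is a placeholder assumption for illustration, not a quoted price from any provider:

    ```python
    # Illustrative break-even sketch for the "hosted API vs self-hosted" decision.
    # All figures are hypothetical assumptions.

    API_COST_PER_M_TOKENS = 2.00          # assumed blended $ per million tokens via an API
    SELF_HOST_MONTHLY_COST = 40_000       # assumed $ / month: amortised GPUs, power, ops
    SELF_HOST_CAPACITY_M_TOKENS = 50_000  # assumed monthly capacity (millions of tokens)

    def cheaper_to_self_host(monthly_m_tokens: float) -> bool:
        """True when projected API spend exceeds the fixed self-hosting cost."""
        if monthly_m_tokens > SELF_HOST_CAPACITY_M_TOKENS:
            raise ValueError("demand exceeds assumed cluster capacity")
        return monthly_m_tokens * API_COST_PER_M_TOKENS > SELF_HOST_MONTHLY_COST

    break_even_m_tokens = SELF_HOST_MONTHLY_COST / API_COST_PER_M_TOKENS
    print(f"break-even at {break_even_m_tokens:,.0f} million tokens per month")
    ```

    The design point is that self-hosting converts a variable, usage-scaled cost into a fixed one, so the calculation flips in its favour only past a demand threshold; open-weight models are what make the self-hosted branch of this comparison available at all.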

    Mistral AI is no longer just a ‘European alternative’. In May 2026, it appears as the mature architect of a new technological order in which performance goes hand in hand with autonomy. On the global chessboard of artificial intelligence, Europe, thanks to Mistral, has gained the ability to play its own sovereign game. Companies that recognise this now will gain a strategic resilience that no contract with a supplier from overseas can provide.

  • The end of Microsoft’s monopoly on OpenAI. What does the new agreement mean for the market?


    The most influential partnership in the history of artificial intelligence has just undergone a fundamental transformation. Microsoft and OpenAI have announced a renegotiation of the terms of their partnership, ending Azure’s previous exclusivity to offer ChatGPT creator models. The new agreement paves the way for the startup to have a direct presence in the ecosystems of Microsoft’s biggest competitors, including Amazon Web Services and Google Cloud. While the original deal, backed by a $13 billion investment, defined the current AI landscape, both parties recognised that the existing formula had become too cramped for their growing ambitions.

    Strategic foundations for change

    Under the new arrangement, Microsoft will remain OpenAI’s primary cloud partner until 2032, and the startup has committed to spending at least $250 billion on Azure services. The Redmond giant retains priority rights to deploy new products but loses its sales monopoly. In return, Microsoft has secured a 20 per cent share of OpenAI’s revenue through 2030, crucially even if the startup achieves so-called artificial general intelligence (AGI). The previous provisions would have allowed OpenAI to stop paying Microsoft once it made the technological leap to AGI, a significant risk for the investor. At the same time, Microsoft will stop sharing with OpenAI the profits from offering its models on Azure, simplifying the giant’s financial structure.

    The loosening of ties is dictated by a maturing market. OpenAI, as it prepares to go public, needs to demonstrate it can scale its enterprise business beyond a single vendor’s infrastructure, especially as it clashes with a rising Anthropic. From Microsoft’s perspective, giving up some control over the distribution of OpenAI’s models is the price of shedding the burden of funding the startup’s vast infrastructure and, perhaps most importantly, of easing pressure from antitrust authorities in the US and Europe. Satya Nadella’s strategy is evolving towards diversification: Microsoft increasingly promotes its own models and third-party solutions within Copilot, reducing its critical dependence on a single technology provider.

    It is worth noting the growing freedom to build multi-cloud strategies. Reviewing current contracts with cloud providers ahead of upcoming AWS Bedrock or Google Vertex AI deployments looks prudent, as it can optimise costs and reduce latency. It is also worth monitoring the progress of Microsoft’s in-house models, since their growing role in Copilot 365 may soon offer better value for money than standard external models.

  • Benchmarks won over loyalty: Microsoft bets on Anthropic. A blow for OpenAI


    Microsoft’s choice of the Claude Mythos model as the foundation of its new software security architecture sets a significant precedent in the Redmond giant’s strategy. The decision, while at first glance a mere operational adjustment, in reality reveals deeper market shifts in the generative AI sector and changing priorities in digital risk management. Analysing the facts of the Anthropic model’s integration, a clear pattern emerges: Microsoft is moving from a phase of fascination with general AI capabilities to one of rigorous, benchmark-driven selection of specialised tools.

    A key reference point for the decision is the CTI-REALM benchmark, co-developed by Microsoft engineers. That Claude Mythos scored highest on it, leaving the GPT-5.4-Cyber model behind, is a market signal that cannot be ignored. Microsoft, as OpenAI’s largest partner and investor, has shown that in critical areas such as cyber security, pragmatism and hard data win over corporate loyalty. This strategic diversification of model vendors avoids vendor lock-in and ensures access to the most effective solutions in specific niches.

    From a business perspective, integrating Mythos directly into the software development cycle is a classic implementation of the ‘shift-left’ strategy. Fixing a vulnerability discovered in production costs many times more than eliminating the bug as the code is written. The cited detection of a vulnerability that had existed for 27 years, and Mozilla’s success in identifying 271 vulnerabilities with Claude Mythos, are not mere technological curiosities; they are concrete indicators of return on investment (ROI). For companies sitting on huge collections of legacy code, automating security audits with models of such precision means saving thousands of hours of highly skilled professionals’ time and drastically reducing the legal and reputational risks of potential data leaks.

    The market reaction to Mythos’ capabilities, manifested for example in concern in the banking and insurance sectors and interest from the NSA, suggests a new kind of regulatory risk is involved. Claude Mythos is seen as a dual-use technology. The model’s ability to map vulnerabilities almost instantaneously makes it a defensive tool of unprecedented power, but also a potential offensive instrument. The embargo under consideration by US agencies and the restricted access under Project Glasswing suggest that in the near future, access to the most advanced cyber security models may be rationed much like armaments or high-end cryptographic technologies. Companies must therefore factor into their strategies the possibility that technological advantage in AI will be limited by state intervention.

    It is also worth noting a painful market lesson for OpenAI. The fact that the release of GPT-5.4-Cyber failed to draw attention away from the Anthropic solution is indicative of the change in expectations of corporate customers. The market has become saturated with promises of versatility; solutions with proven effectiveness in specific usage scenarios are now sought after. Microsoft, by implementing Claude into its 365 applications and its internal processes, de facto legitimises Anthropic as an equal, and in some respects superior, technology partner. This suggests that OpenAI’s dominance may be more fragile than stock market valuations would indicate.

    For Microsoft itself, the move is an attempt to get ahead of mounting criticism over historical security lapses. Redmond has understood that at the current scale and complexity of the Windows and Azure ecosystem, traditional manual code review is inefficient. Using Claude Mythos as an intelligent filter to verify developers’ work is an attempt to address the problem of technical debt systemically. If Microsoft manages to significantly reduce the number of critical vulnerabilities in its products this way, it will set a new market standard to which all SaaS and cloud players will have to adapt.

  • DeepSeek and Chinese AI – Why is the State Department warning allies?


    US diplomacy is entering a new phase of offensive against Chinese artificial intelligence leaders. The State Department has issued global guidelines to its outposts, ordering them to warn foreign governments about the practices of companies such as DeepSeek, Moonshot AI and MiniMax. The crux of the dispute is no longer just access to processors, but the process of so-called distillation, which Washington explicitly calls the theft of American technological thought.

    From a business perspective, distillation is a tempting shortcut: it allows smaller, cheaper-to-operate models to be trained on the outputs generated by powerful systems such as OpenAI’s. For Chinese startups it is a way to erode the US advantage at a fraction of the research cost. According to the US administration, however, the process not only copies intellectual architecture but does so without authorisation, striking at Silicon Valley’s commercial foundations.

    DeepSeek’s situation is key here. The startup, which recently electrified the market with its V3 model, has just unveiled the V4 version, optimised for Huawei hardware. This is a clear signal of building an independent ecosystem that challenges the hegemony of Nvidia and Microsoft. While DeepSeek has consistently denied using synthetic data from OpenAI, US lawmakers have received reports suggesting the opposite: deliberately replicating the behaviour of models in order to clone them.

    Washington warns that ‘distilled’ models often lack built-in safeguards and controls, making them unpredictable for corporate use. At the same time, many Western institutions are already banning DeepSeek tools, citing data privacy concerns.

    The timing of this escalation is no coincidence. The escalation in rhetoric comes just weeks before President Donald Trump’s planned visit to Beijing. The dispute over AI intellectual property becomes a bargaining chip in a broader technology war, which, after a brief period of relaxation, is again gaining momentum. The choice of AI model supplier is ceasing to be a purely technical decision and is becoming a statement in a growing geopolitical conflict.

  • OpenAI presents GPT-5.4-Cyber. A response to the Anthropic project


    The competition for dominance in the security AI sector is gaining momentum as OpenAI introduces the GPT-5.4-Cyber model in direct response to the successes of a rival Anthropic project. The new variant of the flagship model prioritises greater operational freedom for researchers, which is crucial in the race to patch vulnerabilities in critical infrastructure.

    Tuesday’s release of GPT-5.4-Cyber is more than another iteration of a flagship model; it is a strategic shift in the boundaries of what AI developers allow their users to do. While Anthropic bets on a rigorously controlled initiative for a select few, OpenAI opts for a ‘more permissive’ model. In practice, this means loosening the safety corset that has so far often prevented researchers from fully analysing malicious code or simulating attacks for fear of violating the platform’s own security policies.

    The key to OpenAI’s strategy, however, is not just the technology, but the ecosystem. The company is dramatically scaling the Trusted Access for Cyber (TAC) programme, opening it up to thousands of individual experts and hundreds of teams looking after critical infrastructure. The introduction of multi-level verification is a pragmatic solution to the ‘dual use’ problem of artificial intelligence. Higher levels of trust unlock the more powerful features of GPT-5.4-Cyber, giving defenders a tool with effectiveness similar to that of attackers, but within a legal and ethical framework.

    In this clash, OpenAI is betting on massiveness and fewer restrictions for proven partners, hoping that it is the broad ‘white hat’ community that will become their strongest asset. This decision carries risks, but in the face of increasingly sophisticated threats, a strategy of ‘controlled openness’ may prove to be the only effective way to secure the digital future.

  • Novo Nordisk enters into partnership with OpenAI for drug development


    Novo Nordisk, the Danish leader in the pharmaceutical sector, has announced an extensive collaboration with OpenAI. The partnership aims to deploy artificial intelligence throughout the company’s value chain – from the early stages of research and development (R&D), through manufacturing processes to commercial operations and logistics. The decision comes at a time when the company is intensifying its efforts to regain its lead over US competitor Eli Lilly in the fast-growing market for weight-loss drugs.

    Under the collaboration, OpenAI’s technologies will be used to analyse complex medical datasets and identify promising molecules. On the operational side, AI is expected to improve supply chain management and the distribution of Wegovy and Ozempic, for which global demand continues to outstrip production capacity. While the pharmaceutical industry already successfully uses algorithms to automate regulatory filings or select clinical trial participants, using AI fully to design new drugs remains a challenge the technology has yet to meet.

    Novo Nordisk’s strategy is to make artificial intelligence a tool for increasing the productivity of its existing workforce rather than a driver of downsizing. CEO Mike Doustdar stressed that the aim is to ‘supercharge’ scientists’ competencies, with the long-term goal of slowing the rate of new hiring while scaling up operations. This is a significant statement in the context of last year’s restructuring, which cut 9,000 positions.

    Market analysts estimate the obesity drug sector will be worth more than $100 billion over the next decade. Novo Nordisk, which launched an oral version of Wegovy in January, faces strong pressure from Eli Lilly, whose Foundayo pill recently received US approval.

    Financial details of the agreement with OpenAI were not disclosed. The implementation timetable calls for pilot programmes to begin in key departments later this year, while full integration of the systems into global structures is expected by the end of 2026. Sam Altman, CEO of OpenAI, indicated that the co-operation is not only aimed at business optimisation, but also at accelerating scientific discoveries that can realistically extend human life. All processes are to be carried out with strict protocols for data protection and human oversight.

  • OpenAI is fighting for the corporate market. Does Anthropic threaten the AI leader?


    OpenAI, valued at an astronomical $852 billion, stands on the threshold of the most important test in its short history. While its recent $122 billion funding raise – arguably the largest round in the history of Silicon Valley – suggests unwavering market confidence, there is growing unease beneath the surface. Some of the company’s early supporters are beginning to question its strategic coherence in the face of increasingly aggressive competition from Anthropic and a resurgent Google.

    The main point of contention is OpenAI’s sharp turn towards the corporate sector. The company has revised its product roadmap twice in the past six months. This nervousness is a direct reaction to the successes of rivals: first Google, which has integrated AI into its ecosystem, and now Anthropic, whose revenue momentum, according to some analysts, may soon eclipse the market leader’s growth rate.

    Critics, including an early OpenAI investor quoted by the Financial Times, point to a “profound lack of focus”. The argument is simple: ChatGPT has one billion users and is growing at 50-100% per year. In this context, a sudden focus on enterprise solutions and software tools seems risky, potentially dissipating the company’s resources at a crucial time ahead of its planned IPO this year.

    OpenAI’s management, led by chief financial officer Sarah Friar, firmly rejects these concerns. Management says the record interest in the latest funding round is the best evidence that the market believes in the path ahead. A company spokesperson stresses that the offer was oversubscribed, reflecting investors’ “strong belief” in the long-term business value of the company.

    For the technology sector, however, the lesson is clear: even with almost unlimited capital and a dominant market position, OpenAI is not immune to competitive pressure. The battle for dominance in AI is moving from the pure-innovation phase to the phase of hard business execution. As the IPO approaches, the market will watch closely whether Sam Altman manages to turn ChatGPT’s popularity into a stable corporate foundation, or whether OpenAI falls victim to its own overly broad appetite for success.

  • ChatGPT as a search engine? EU checks OpenAI for DSA


    When OpenAI integrated search functions directly into ChatGPT, the boundary between an AI assistant and a traditional search engine became blurred. Now the European Commission intends to formalise that boundary. Commission spokesperson Thomas Regnier confirmed that Brussels is analysing whether OpenAI’s flagship product should be classified as a Very Large Online Search Engine (VLOSE) under the Digital Services Act (DSA).

    The decision comes after OpenAI disclosed operational data that puts the company in a difficult negotiating position. Under EU rules, the threshold for enhanced oversight is 45 million users per month in the EU. Meanwhile, ChatGPT Search recorded an average of 120.4 million active users in the six months ending September 2025, almost three times the limit, which triggers strict obligations on tech giants in terms of algorithmic transparency and systemic risk management.

    For OpenAI, a possible reclassification would mark the end of an era of freedom in shaping search results. As a VLOSE, Sam Altman’s company would have to share data with researchers, undergo annual external audits and proactively counter disinformation, on pain of penalties of up to 6% of global turnover. Although the Commission says it considers each large language model case individually, the precedent set by ChatGPT could define the future of the entire generative AI sector in Europe.

    The move forces OpenAI’s investors and partners to re-evaluate the cost of operating in the European market. Rather than focusing solely on product innovation, the AI market leader must now build out a powerful compliance apparatus to meet demands that have so far mainly concerned Google or Bing. Europe is showing once again that access to its gigantic internal market carries a high price in the form of strict supervision.

  • OpenAI closes Sora app. The end of a billion dollar deal with Disney


    OpenAI’s decision to shut down its Sora app is a signal of rare corporate discipline. The tool, which only a year ago heralded a revolution in video production and struck fear into Hollywood, is being withdrawn just as competition from Google and Anthropic gains momentum.

    Instead of chasing social media reach, Sam Altman is betting on pragmatism. Fidji Simo, responsible for the applications area at OpenAI, has made it clear that the company is ending the phase of expensive experimentation. In an industry where computing power costs run into the billions, the luxury of being ‘distracted’ by consumer toys is becoming too expensive. OpenAI is taking the route of direct business support, targeting lucrative corporate contracts that require stability rather than viral clips.

    The collapse of the project has tangible financial implications, the most acute being the severing of a billion-dollar contract with Disney. The project, which was meant to revive pop-culture icons such as Mickey Mouse and Iron Man in a new digital form, lands in the bin. Even giant partnerships do not justify keeping products that no longer fit the company’s austere new revenue architecture.

    The move is a lesson in managing priorities. OpenAI is no longer aspiring to be a community platform and is beginning to cement its position as the foundation of AI infrastructure for major market players. This is a painful but arguably necessary step towards profitability.

  • OpenAI creates a super-application: ChatGPT and Codex in one place


    OpenAI is taking a sharp turn towards usability. The company has confirmed Wall Street Journal reports that it plans to integrate its flagship products – ChatGPT, the Codex development platform and browser functionality – into a single, cohesive desktop application. This strategic move aims to end the era of distributed tools and create an artificial intelligence command centre directly on users’ computers.

    The decision to merge is not just a cosmetic interface change, but a deep operational restructuring. Greg Brockman, co-founder and president of OpenAI, will temporarily take the helm of the product redesign, underlining the importance the company attaches to this project. At the same time, Fidji Simo, head of applications, will focus on building sales structures, preparing the ground for the market debut of the integrated solution.

    From a business perspective, the diagnosis made by OpenAI management is clear: excessive fragmentation has become ballast. In an internal memo, Simo acknowledged that the dispersion of resources across multiple applications and technology stacks slows down the development process and makes it difficult to maintain the highest quality standards. With growing pressure from Anthropic and increasing competition in the code generation segment, OpenAI cannot afford to be inefficient.

    To date, the use of AI has often required employees to juggle browser tabs and separate developer environments. Bringing these functions together in a single desktop ecosystem can dramatically lower the entry threshold for advanced AI features in everyday office and developer work.

    The launch of the standalone version of Codex earlier in the year was a signal of expansion, but it is the current consolidation that is set to be the ultimate argument in the battle for dominance on the professional desktop. OpenAI is ceasing to be a provider of distributed services and beginning to aspire to be a complete operating system for AI-supported work. The success of this strategy will depend on whether the promised ‘simplification of experience’ actually translates into real productivity gains in business, or whether it turns out to be merely an attempt to centralise power over user data.

  • OpenAI on the AWS platform? Microsoft fights for cloud exclusivity


    The Financial Times reports that Microsoft is considering legal action against OpenAI and Amazon. The bone of contention is a $50 billion deal that could end the Redmond giant’s previous dominance as the exclusive cloud infrastructure provider for ChatGPT developers.

    ‘Frontier’, OpenAI’s new commercial product, has become a flashpoint. The key question is whether making it available via Amazon Web Services violates the exclusivity provisions of the Azure platform.

    For business leaders, this signals that the era of monolithic partnerships in AI is coming to an end. OpenAI, seeking to diversify its revenue and reach, is beginning to test the limits of loyalty to its largest investor.

    From a market perspective, the potential litigation could redefine the standards of cooperation between model providers and infrastructure giants. If Amazon manages to break Microsoft’s monopoly, a new wave of competition awaits, forcing enterprises to adopt more flexible multicloud strategies.

  • Sora in ChatGPT: OpenAI integrates the video generator into the platform


    OpenAI is continuing its strategy of building an AI ‘super-application’ by integrating its most advanced video model, Sora, directly into the ChatGPT platform. According to The Information, the move consolidates multimodal tools into a single interface, which could significantly change how companies approach visual content creation.

    The decision to integrate Sora into ChatGPT, a flagship product with hundreds of millions of users, is a clear signal that OpenAI wants to move beyond the niche market of video professionals and reach a mass business audience. Until now, Sora has operated as a standalone app, launched in September 2025, offering advanced editing features and social video sharing. Maintaining both access paths suggests the San Francisco giant is copying the model it made famous with DALL-E: deep integration for the general public and a dedicated tool for professionals.

    For the business sector, the integration is above all about lowering barriers to entry. Instead of managing multiple subscriptions and switching between windows, marketing or internal communications departments will be able to generate dynamic video content in the same thread as scripts or strategies. It is a blow to the competition: Meta and Google are also developing video models, but it is OpenAI that currently commands the most loyal corporate user base.

    However, the challenges remain the same: copyright and content authenticity. While Sora can generate impressive content, the industry is keeping a close eye on how OpenAI will handle the filtering of protected material. Despite these controversies, the move cements ChatGPT’s position as a central hub for the new creative economy, where video becomes as natural a part of a query as text or code.

  • SoftBank borrows $40 billion to invest in OpenAI


    Masayoshi Son is back at the game he knows best: playing for the highest stakes. After a period of relative quiet spent licking his wounds after the Vision Fund’s turbulence, the SoftBank leader is once again reaching for aggressive debt financing to fund his most ambitious project yet – dominance of the OpenAI ecosystem.

    According to reports from Bloomberg, the Japanese conglomerate is in advanced talks to raise a bridge loan of up to $40 billion. Major financial institutions, led by JPMorgan, are involved in the process. The facility, with a planned 12-month maturity, is expected to serve as capital fuel for further expansion in the artificial intelligence sector. While the terms of the financing may yet change, the move itself signals Son’s return to the ‘all in’ strategy that defined Silicon Valley’s investment landscape years ago.

    OpenAI has become SoftBank’s centre of gravity. The Japanese company, which at the end of last year controlled around 11% of the ChatGPT developer, is set to play a key role in the upcoming giant funding round. In a round expected to total $110 billion, SoftBank is to put $30 billion on the table, lining up alongside giants such as Nvidia and Amazon. This concentration of capital, with OpenAI’s valuation reaching $840 billion, suggests that Son sees Sam Altman’s company as an entity of a scale comparable to the world’s largest technology corporations.

    However, the question of the time horizon remains open. The short-term nature of the bridge loan points to preparations for a specific liquidity event. OpenAI is already laying the groundwork for a stock market debut, with optimists suggesting an IPO could value the company at up to a trillion dollars. The debt financing allows SoftBank to maximise its share of this potential growth without immediately committing its own cash reserves.

    In this strategic jigsaw, SoftBank ceases to be just a venture capital fund and becomes a key architect of AI infrastructure. If Son’s bet succeeds, SoftBank will secure its position as the most important external shareholder in a company that is defining a new technological era. However, if OpenAI’s valuation does not live up to market expectations, the burden of the 40-billion-dollar debt could become a major operational challenge for the Japanese giant. At this point, however, Masayoshi Son seems convinced that second place does not exist in the race for artificial intelligence.

  • OpenAI loses a key leader. Kalinowski warns of lack of barriers in AI


    The resignation of OpenAI’s head of robotics and consumer hardware, Caitlin Kalinowski, announced last Saturday, sheds light on cracks within the company over its growing involvement in the defence sector. For an organisation that is aggressively pursuing new revenue streams under Sam Altman, the public opposition of such a high-profile executive is a wake-up call about corporate governance and team stability.

    The immediate reason for Kalinowski’s departure was OpenAI’s contract with the US Department of Defence. According to her account, the company decided to deploy its models on the Pentagon’s secret cloud networks without due deliberation or establishing clear controls. The former leader, who previously led the development of AR glasses at Meta Platforms for years, argues that rushing into such strategic contracts is a management error. In her view, the lines of demarcation between supporting national security and uncontrolled surveillance or autonomous combat systems have been blurred in the process.

    For OpenAI, this is both an image and an operational blow. Kalinowski joined the company only in 2024, tasked with building momentum for the startup’s hardware ambitions. Her departure suggests growing resistance within the organisation to the pace at which the mission of ‘AI for the good of humanity’ is being redefined in favour of geopolitical pragmatism.

    While OpenAI responded almost immediately with a statement about ‘red lines’ that exclude participation in domestic surveillance or weapons development, the lack-of-guardrails narrative Kalinowski has made public may make it more difficult for the company to attract further engineering and ethical talent. From a business perspective, the situation exposes the challenge facing AI giants: how to scale government partnerships without losing the trust of key leaders.

  • Big Tech workers vs Pentagon. Military pressure on AI sector sparks resistance


    When US Secretary of Defence Pete Hegseth called the development of artificial intelligence a military arms race in January, relations between the government and Silicon Valley entered a new and turbulent phase. We are now witnessing unprecedented pressure from the US administration on key players in the AI sector, which is being met with increasing resistance from the developers of these technologies themselves.

    A growing conflict has been sparked by an ultimatum issued to Anthropic. The Pentagon is reportedly threatening to use the Defence Production Act to force the company to adapt its language models to the needs of the US military. A refusal would result in the company being deemed a supply chain risk. In response to this pressure, Anthropic has made it clear that it will not make its solutions available for mass surveillance of citizens or to power weapons capable of autonomous killing without close human oversight.

    The situation instantly triggered a wave of solidarity across competing companies. A group of Google and OpenAI employees have signed a joint petition entitled ‘We will not be divided’. The signatories warn that the Department of Defence is attempting to use classic divide-and-conquer tactics, hoping to force the tech giants into concessions that AI security leaders have not agreed to. The initiative aims to create a united industry front. Employees are calling on their companies’ boards to maintain standards and not hand over technology to the military without proper ethical safeguards.

    From a business perspective, the threat of using extraordinary national security powers against private technology entities is an entry into completely uncharted territory. As Dean Ball, former White House technology policy advisor, notes, Anthropic faces the dangerous spectre of quasi-nationalisation or exclusion from the market. This aggressive move by the administration also sends a clear and worrying message to the entire innovation ecosystem, suggesting that doing business with the government carries a huge risk of losing operational independence.

    These developments will define not only the future of weapons contracts in Silicon Valley, but above all the limits of commercialisation and control of the most powerful models of artificial intelligence.

  • 600 billion for computing: OpenAI’s gigantic spending ahead of debut


    OpenAI is building a new digital infrastructure, and the cost of this operation is beginning to dwarf the budgets of medium-sized countries. Recent reports suggest that Sam Altman’s company plans to spend a staggering $600 billion on computing capacity by 2030. This is strategically laying the groundwork for an IPO that could value the company at around a trillion dollars.

    The year 2025 proved to be an operational breakthrough for the San Francisco-based giant. Revenues of $13 billion significantly beat forecasts, and cost discipline allowed it to close the year with $8 billion in expenses – a billion below target. These figures lay the foundation for an ongoing funding round in excess of $100 billion, with Nvidia emerging as a key player with a $30 billion investment. With a valuation of up to $830 billion, OpenAI is emerging as the absolute leader of the private market.

    But beneath the cloak of explosive growth lie structural challenges that investors are watching closely. Although the company expects its revenues to reach $280 billion by 2030, split evenly between the consumer and corporate sectors, profitability is under pressure. Inference costs – the ongoing expense of keeping models running – quadrupled last year. The result is a drop in adjusted gross margin from 40% to 33%.

    It is becoming clear to business leaders that the days of ‘cheap artificial intelligence’ are coming to an end. Altman himself has announced the need to invest $1.4 trillion to develop 30 gigawatts of energy resources. This is the scale that turns OpenAI from a software company into an energy and infrastructure giant. The success of this venture depends on whether revenue growth manages to outpace the models’ appetite for power and silicon.

  • US$30bn for OpenAI from Nvidia. Why is the chip giant investing in the customer?


    There is an old saying circulating in Silicon Valley that during a gold rush, the best money is made selling shovels. But in the age of generative artificial intelligence, this relationship is evolving into something much more complex: the shovel manufacturer is now laying out billions to keep its biggest digger at work.

    Nvidia is close to finalising a $30 billion investment in OpenAI. The move is part of a giant funding round that is expected to value the ChatGPT creator at an astronomical $830 billion. While the sums are mind-boggling, for Jensen Huang this is a strategic hedge on both supply chain and demand in one.

    The mechanism of this deal resembles a business perpetual motion machine. OpenAI needs massive computing power to train next-generation models, and Nvidia needs a guarantee that its most expensive H100 and Blackwell systems will have a steady customer. Most of the capital that Nvidia now ‘gives’ to OpenAI will come back to it in the form of processor orders. This is essentially greasing the gears of an ecosystem in which the two companies are interdependent.

    This round, which also involves SoftBank and Amazon, sheds light on the new power structure in the technology sector. The boundaries between hardware vendors, cloud giants and software developers are blurring. Nvidia, traditionally associated with component manufacturing, is becoming a key financial architect of the industry, ensuring that its largest customers do not lose momentum in the AI arms race.

    The barrier to entry for artificial general intelligence (AGI) is no longer measured in algorithms, but in hundreds of billions of dollars and access to silicon. The partnership with OpenAI, which took longer than expected to negotiate, shows that even giants must tread carefully on ground full of antitrust regulations and technical challenges. Ultimately, however, at a valuation of $830 billion, OpenAI is becoming too big for Nvidia to let it seek solutions from competitors.

  • OpenClaw creator joins OpenAI. What’s next for the project?


    The success of a project such as OpenClaw usually ends with setting up your own company and fighting for millions of dollars from investors. Peter Steinberger, however, chose a completely different path. Instead of building his own empire, the AI assistant creator decided to join the OpenAI team. This decision says a lot about how the balance of power in the artificial intelligence industry is changing today.

    Steinberger openly admits that the role of CEO, business management and fundraising simply do not interest him. He is an engineer through and through, and his goal is simple yet ambitious: to create a digital assistant so easy to use that even his mother can handle it without a problem. He quickly realised that to achieve this, he needed backing of a kind that cannot be built alone in a garage.

    Key to this decision was access to technology. During discussions with the San Francisco labs, it became clear that only the major players had the models and security standards necessary to create a secure tool for a mass audience. For OpenAI, this transfer is also a strategic victory. Altman himself has confirmed that Steinberger will handle the development of the next generation of personal assistants. This suggests that the ChatGPT developers want to move as quickly as possible from chatbots that we just talk to, to agents that can perform specific tasks for us.

    Significantly, Steinberger’s move to the giant does not mean the end of OpenClaw. The project will not be absorbed and quietly shut down inside the corporation. Instead, a special foundation is being set up, financially supported by OpenAI. This will ensure that the tool remains free and open to the developer community. It’s a clever move that allows OpenAI to draw innovation from the open-source community and build good relationships with independent developers while keeping tabs on the technology’s development.

  • China’s DeepSeek accused of stealing data. How will this affect the race against the US?


    In Silicon Valley, admiration for the technical prowess of Chinese start-up DeepSeek is quickly giving way to a hardline defensiveness. According to a memo to US lawmakers seen by Reuters, OpenAI has officially raised the alarm over the methods used by its Hangzhou-based rival. The creators of ChatGPT claim that DeepSeek has not only challenged US dominance, but has done so by systematically draining the intellectual value developed by US-based labs.

    The essence of the dispute centres on the distillation process. While this is a familiar technique in the open-source world, OpenAI gives it a predatory character in this context. According to the company led by Sam Altman, DeepSeek employees were said to have used extensive infrastructure, including obscure third-party routers, to programmatically circumvent security measures and extract data en masse from OpenAI models. The aim was to ‘feed’ the company’s own algorithms with responses generated by more sophisticated systems, which in practice dramatically reduces the time and cost of training a model while maintaining high-quality results.
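    The distillation mechanism described above can be sketched in a few lines of Python. This is a hypothetical, heavily simplified illustration, not DeepSeek’s actual pipeline: `query_teacher` stands in for calls to a stronger model’s API, and the ‘student’ is a trivial memorising lookup rather than a fine-tuned neural network.

    ```python
    # Sketch of API-based distillation: collect (prompt, response) pairs
    # from a stronger "teacher" model, then train a "student" to imitate it.
    # Hypothetical stand-ins: query_teacher simulates a teacher API call,
    # and the student merely memorises pairs instead of being fine-tuned.

    def query_teacher(prompt: str) -> str:
        # In practice this would be a (rate-limited, authenticated) API call
        # to a more capable model; here it is a canned lookup for illustration.
        canned = {
            "capital of France?": "Paris",
            "2 + 2?": "4",
        }
        return canned.get(prompt, "I don't know")

    def build_distillation_set(prompts):
        # Mass-extraction step: each teacher answer becomes a training label.
        return [(p, query_teacher(p)) for p in prompts]

    class MemorisingStudent:
        # Stand-in for a smaller model fine-tuned on teacher outputs.
        def __init__(self):
            self.table = {}

        def train(self, pairs):
            for prompt, answer in pairs:
                self.table[prompt] = answer

        def answer(self, prompt: str) -> str:
            return self.table.get(prompt, "unknown")

    if __name__ == "__main__":
        dataset = build_distillation_set(["capital of France?", "2 + 2?"])
        student = MemorisingStudent()
        student.train(dataset)
        print(student.answer("capital of France?"))
    ```

    In real distillation the collected pairs would serve as supervised fine-tuning data for a smaller network, which is why teacher-quality answers can be obtained at a fraction of the original training cost.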

    For policymakers in Washington and business leaders, the message is clear: Chinese models such as DeepSeek-V3 or R1, which until recently were praised for their cost-effectiveness, may be the product of clever reverse engineering rather than a breakthrough in algorithm design. OpenAI calls this ‘free riding’, suggesting that Chinese competitors are taking shortcuts not only in development, but also in AI security.

    From a business strategy point of view, OpenAI’s move to proactively remove accounts linked to distillation attempts marks the end of an era of naive openness. If the most powerful models are to serve as free tutors for rivals, the boundary between public APIs and intellectual property will become a major point of legal and political contention. For investors, the key question remains whether DeepSeek’s success is evidence of a waning US advantage, or merely a signal that the barriers to entry into the top AI league are easier to leap using data developed by competitors.

  • End of the honeymoon: Why doesn’t Nvidia want to give OpenAI $100 billion?


    News of Nvidia’s potential withdrawal from its giant investment in OpenAI is making the market tremble. The original plan, for an astronomical $100 billion, seemed like a natural alliance: the maker of the world’s most powerful chips would fund its largest customer, cementing the dominance of both. However, the business reality proved more complex than the optimistic headlines of September.

    Pragmatism over prestige

    The standstill in talks, as reported by the Wall Street Journal, sheds light on growing scepticism inside Nvidia. Jensen Huang, CEO of the chip giant, has begun privately distancing himself from the non-binding deal, pointing to a lack of business discipline inside OpenAI. For Nvidia’s leadership, which is renowned for its rigorous supply chain and margin management, the ChatGPT creator’s spending model may seem too risky, even at the startup’s record valuation of $830 billion.

    A landscape full of rivals

    Nvidia is no longer operating in a vacuum. The AI market is becoming increasingly crowded, and loyalty to one player may be a strategic mistake. Anthropic’s rapid growth and Google’s growing ambitions mean that Huang must weigh up whether such powerful support for OpenAI will close the door on his ability to work with other industry leaders. What’s more, there are other players with fat wallets on the horizon – Amazon is considering a $50 billion investment and SoftBank is constantly monitoring the situation.

    For executives and investors, this situation is a lesson in the maturity of the AI market. Even the most obvious partnerships are subject to brutal validation in the face of the profit and loss account. Nvidia, as the ‘picks and shovels’ provider in this gold rush, is in a privileged position – it can afford to wait, while it is OpenAI that urgently needs cash to maintain its infrastructure.

    While official communiqués still speak of a ‘ten-year partnership’ and a desire for further cooperation, the change in tone is clear. Instead of an unconditional cheque for $100 billion, much more modest capital sums are now on the table. The retreat from the original promise signals that the era of unlimited optimism in AI funding is giving way to hard business calculation.

  • Oracle’s risky gambit: $50 billion on the altar of AI


    Larry Ellison has never been a risk-averse leader, but Oracle’s latest financial plan pushes the boundaries even by Silicon Valley standards. The company has announced its intention to raise between $45bn and $50bn in 2026, an unprecedented injection of capital to fund a rapid expansion of its cloud infrastructure. While the goal is clear – to satisfy the hunger for computing power of giants like Nvidia, OpenAI and xAI – the path to achieving it is causing growing concern among bondholders.

    Oracle’s strategy is based on a breakneck balance between equity and debt markets. Half of the amount is to come from equity issues and hybrid instruments, including a new $20 billion ‘at-the-market’ equity sale programme. The other half will be financed by new bonds that will hit the market early next year. It is a bold move at a time when the market price of insuring Oracle’s debt against insolvency has reached levels not seen in half a decade.

    For executives and investors, the key question is no longer whether Oracle can build data centres, but whether the foundations on which these investments grow are stable. The fate of the Austin-based giant is becoming inextricably linked to the financial health of OpenAI. Sam Altman’s start-up, which is one of the key tenants of Oracle Cloud Infrastructure, remains unprofitable and has not provided a clear funding path for its ambitious plans.

    In the background of these events, a legal battle is unfolding that casts a shadow over the company’s transparency. A lawsuit filed by bondholders suggests that Oracle deliberately concealed the scale of debt needed so as not to spook the markets prematurely. For business, there is a lesson here about the high price of dominance in the AI era: technological superiority today requires not only engineering genius but, above all, nerves of steel in managing the balance sheet. If Ellison’s bets on AI don’t pay off, Oracle could wake up with a future-ready infrastructure that no one will be able to pay for.