Tag: ChatGPT

  • Mistral AI vs OpenAI – Has Europe gained a viable alternative to ChatGPT?

    Mistral AI vs OpenAI – Has Europe gained a viable alternative to ChatGPT?

    Last week will be remembered as the moment when Europe’s artificial intelligence sector went from defensive to precision technology offensive. In just 48 hours, Paris-based Mistral AI made a series of moves that go beyond mere model updates. By simultaneously launching the Mistral Medium 3.5 model, the Vibe development environment, the Workflows orchestration platform and the Le Chat mode of operation, the company unveiled a complete vertical technology stack (full-stack). For IT decision-makers and business leaders in Europe, the message is clear: digital sovereignty has become a measurable operational and financial category.

    The end of distributed models – Economics Mistral Medium 3.5

    A key element of the new strategy is Mistral Medium 3.5, a 128-billion-parameter scale model released under licence with open weights. From an analytical perspective, its greatest value is not just in its ‘raw power’, but in the unification of capabilities. It is the first Mistral model to combine advanced reasoning, deep instructional understanding and high consistency of generated code within a single parameter set.

    From a business perspective, such integration directly affects the total cost of ownership (TCO). Until now, companies have been forced to maintain a fleet of specialised models: one to analyse legal documents, another to support developers and another for simple classification tasks. Medium 3.5 allows for infrastructure consolidation. Results in benchmarks such as SWE-Bench Verified(77.6%) or tau³-Telecom (91.4%) prove that this model not only matches, but in specific engineering applications outperforms closed systems such as GPT-4o or Claude 3.5.

    Importantly for operations departments, Medium 3.5 can be deployed locally using four H100 or H200 GPUs. This opens the door to building private, secure AI environments inside corporate data centres, eliminating reliance on the latency and pricing policies of external cloud providers.

    From conversation to implementation – Vibe and Workflows

    Mistral AI has rightly diagnosed that the bottleneck for AI adoption in business is no longer the quality of the text generated, but the integration with processes. Vibe and Workflows tools are the answer.

    Vibe addresses a key productivity issue for engineering teams: developer lock-in when AI agents are working. The introduction of remote agents running in parallel in the Mistral cloud, while remaining fully synchronised with the local environment, changes the working paradigm. Integration with GitHub, Jira, Sentry and Slack means that AI ceases to be a ‘question assistant’ and becomes a ‘task performer’ that only notifies the human once the process is complete.

    Workflows, on the other hand, built on the proven Temporary engine (used by Stripe and Netflix, among others), is an orchestration layer that allows the construction of long-term, fault-tolerant workflows. This architecture separates the control plane from the data plane. In practice, this means that a regulated sector company can benefit from advanced process management in the cloud, while the data itself and its processing never leave the client’s secure, local infrastructure. This solution is ideally suited to the needs of players such as ASML or La Banque Postale, who are already automating customs processes or document compliance verification using it.

    Sovereignty as strategic risk management

    In 2026, the argument of digital sovereignty has evolved from an ideological discourse to a hard risk analysis. Statements by UK Secretary of State Liz Kendall or actions by the French Ministry of the Armed Forces point to a growing awareness of the risks posed by the concentration of computing power in the hands of just a few Silicon Valley players.

    For a European technology director, the on-premise model offered by Mistral is an insurance policy against three risks:

    1. political risk: the unpredictability of US export regulations and the impact of the US administration on the availability of AI services in situations of geopolitical tension.

    2 Regulatory risk: The need to strictly comply with RODO, the EU AI Act and the NIS2 and DORA directives. In the financial or healthcare sector, the ‘right to audit’ and full control over the location of data are legal requirements that standard APIs from OpenAI or Anthropic are not always structurally able to fulfil.

    3 Operational risk: Sudden changes in the behaviour of models (so-called model drift) or unilateral modifications of service terms by SaaS providers.

    With 60% of its revenues in Europe, Mistral has a natural interest in adapting to the local regulatory framework, making it a more predictable partner than its US competitors.

    Alliances and financial foundations

    Critics of the European approach have often pointed to a lack of capital and infrastructure. Mistral AI systematically refutes these claims. Institutional funding of €830 million from a consortium of banks (including BNP Paribas, HSBC, MUFG) for the purchase of 13,800 NVIDIA processors is a signal that AI in Europe is becoming an infrastructure asset, not just a speculative one.

    Equally important is Mistral’s incorporation into the NVIDIA Nemotron Coalition. The partnership with Jensen Huang allows Mistral to co-create boundary models on DGX Cloud infrastructure, while keeping them open. It is a strategic balancing act: using the best available hardware while promoting open model scales, driving innovation across the European developer ecosystem.

    Analysis of recent Mistral AI activities leads to three key conclusions for business leaders in Europe:

    • AI is becoming a commodity (Commodity), but control is not: Competitive advantage is built not by simply having access to models, but by being able to integrate them deeply into one’s own infrastructure without the risk of data leakage.
    • Cost optimisation requires flexibility: Open-weighted models allow for fine-tuning of performance to cost. The ability to run a Medium class model on your own servers drastically changes ROI calculations in AI projects.
    • Compliance is an opportunity, not a burden: Companies that choose the path of sovereign AI will pass through the regulatory sieve of the EU AI Act and NIS2 more quickly, gaining the trust of customers in critical sectors.

    Mistral AI is no longer just a ‘European alternative’. In May 2026, it appears as the mature architect of a new technological order in which performance goes hand in hand with autonomy. On the global chessboard of artificial intelligence, Europe, thanks to Mistral, has gained the ability to play its own sovereign game. Companies that recognise this now will gain a strategic resilience that no contract with a supplier from overseas can provide.

  • ChatGPT as a search engine? EU checks OpenAI for DSA

    ChatGPT as a search engine? EU checks OpenAI for DSA

    When OpenAI integrated search functions directly into ChatGPT, the boundary between an AI assistant and a traditional search engine became blurred. Now the European Commission intends to formalise this boundary. Commission spokesperson Thomas Regnier confirmed that Brussels is analysing whether OpenAI’s flagship product should be classified as a Very Large Internet Search Engine (VLOSE) under the Digital Services Act (DSA).

    The decision comes after OpenAI disclosed operational data that puts the company in a difficult negotiating position. Under EU rules, the threshold for enhanced surveillance is 45 million users per month in the EU. Meanwhile, ChatGPT Search recorded an average of 120.4 million active users in the six months ending September 2025. This is almost three times the limit, which imposes strict obligations on tech giants in terms of algorithmic transparency and systemic risk management.

    For OpenAI, the eventual reclassification marks the end of an era of freedom to shape search results. As a VLOSE, Sam Altman’s company would have to share its data with researchers, conduct annual external audits and proactively counter misinformation on pain of penalties of up to 6% of global turnover. Although the Commission declares that it considers each case of large language models on a case-by-case basis, the precedent set by ChatGPT could define the future of the entire generative AI sector in Europe.

    The move forces OpenAI investors and partners to re-evaluate operating costs in the European market. Rather than focusing solely on product innovation, the AI market leader now has to expand its powerful compliance apparatus to meet the demands that have so far mainly concerned Google or Bing. Europe is once again showing that there is a high price to pay for access to its gigantic internal market in the form of strict supervision.

  • OpenAI creates a super-application: ChatGPT and Codex in one place

    OpenAI creates a super-application: ChatGPT and Codex in one place

    OpenAI is taking a sharp turn towards usability. The company has confirmed Wall Street Journal reports that it plans to integrate its flagship products – ChatGPT, the Codex development platform and browser functionality – into a single, cohesive desktop application. This strategic move aims to end the era of distributed tools and create an artificial intelligence command centre directly on users’ computers.

    The decision to merge is not just a cosmetic interface change, but a deep operational restructuring. Greg Brockman, co-founder and president of OpenAI, will temporarily take the helm of the product redesign, underlining the importance the company attaches to this project. At the same time, Fidji Simo, head of applications, will focus on building sales structures, preparing the ground for the market debut of the integrated solution.

    From a business perspective, the diagnosis made by OpenAI management is clear: excessive fragmentation has become ballast. In an internal memo, Simo acknowledged that the dispersion of resources across multiple applications and technology stacks slows down the development process and makes it difficult to maintain the highest quality standards. With growing pressure from Anthropic and increasing competition in the code generation segment, OpenAI cannot afford to be inefficient.

    To date, the use of AI has often required employees to juggle browser tabs and separate developer environments. Bringing these functions together in a single desktop ecosystem can dramatically lower the entry threshold for advanced AI features in everyday office and developer work.

    The launch of the standalone version of Codex earlier in the year was a signal of expansion, but it is the current consolidation that is set to be the ultimate argument in the battle for dominance on the professional desktop. OpenAI is ceasing to be a provider of distributed services and beginning to aspire to be a complete operating system for AI-supported work. The success of this strategy will depend on whether the promised ‘simplification of experience’ actually translates into real productivity gains in business, or whether it turns out to be merely an attempt to centralise power over user data.

  • The end of anonymity in ChatGPT? OpenAI turns on the age estimation algorithm

    The end of anonymity in ChatGPT? OpenAI turns on the age estimation algorithm

    In the coming weeks, OpenAI will implement a new security layer in ChatGPT that fundamentally changes the way user identities are managed on the platform. The company is launching a dedicated predictive model to algorithmically estimate the age of chat users. The move is a clear response to increasing regulatory pressure and the need to create a ‘safe internet’ for the youngest, which is becoming a key part of the strategy to maintain trust in Generative AI technology.

    The new system does not rely solely on user declarations. The model analyses a range of behavioural signals and account metadata, such as times of activity or specific patterns of interaction with the tool. If the algorithm classifies a user as under 18, it will automatically impose restrictive content settings. This ‘safety-first’ approach aims to minimise the risk of teenagers being exposed to material deemed harmful, including graphic violence, content promoting eating disorders, risky viral challenges or sexual role-play.

    From a business and technology perspective, the most interesting element is the error verification mechanism, which brings an external partner into the OpenAI ecosystem. Users who are misclassified as minors will only be able to regain full access through biometric verification. OpenAI has integrated Persona’s services for this purpose, requiring the user to upload a selfie. This is a significant step towards the de-anonymisation of users in the name of security, which may raise debates in the context of privacy, but at the same time removes some of the legal responsibility from the model provider.

    In addition to automation, OpenAI expands the parental control panel. Caregivers gain the ability to set quiet hours and monitor interactions for signs of psychological distress in a child. The implementation of these features signals to the market that OpenAI intends to anticipate regulation rather than react to it. The effectiveness of this behavioural model will be watched closely by competitors, likely setting a new compliance standard for the entire artificial intelligence industry.

  • A surprising ranking of (insecure) AI models. Which assistant is easiest to become a hacker?

    A surprising ranking of (insecure) AI models. Which assistant is easiest to become a hacker?

    Generative artificial intelligence has ceased to be a technological novelty and has become a standard working tool. Deployments of language models (LLMs) in companies already number in the thousands, and their purpose is clear: to drive productivity, automate processes and foster creativity. We treat them as versatile assistants, entrusting them with increasingly complex tasks.

    But what if these tools we invest so heavily in have a second, darker side? What if their security features are easier to circumvent than we think?

    A recent study by the Cybernews team casts a cold, technical light on the problem. It is no longer a theoretical ‘what if’. Tests of six leading AI models have shown that almost all of them can be made to cooperate in a cyber attack. Most interestingly, however, the study has produced an unofficial ‘risk ranking’ that should give any decision-maker food for thought. And there is no good news here for fans of market leaders.

    The battlefield: Psychology, not code

    Before going into the results, it is important to understand how the AI was ‘broken’. This is because there was no classic hacking, looking for loopholes or buffer overflows. The researchers used a much more subtle weapon: psychological manipulation.

    The technique used is ‘Persona Priming’. It works in multiple stages. First, the researcher prompted the AI model to take on a specific role, such as ‘an understanding friend who is always willing to help’ and does not judge requests. Then, in this new conversational state, the model drastically lowered its natural resistance to sensitive topics, focusing solely on being ‘helpful’. Finally, requests were gradually escalated towards hacking, always under the safe pretext of ‘academic purposes’ or ‘preventive testing’.

    Most models have fallen into this trap. This is a key lesson for CISO managers and specialists: the current ‘guardrails’ (guardrails) built into AI are often naive. They effectively filter out simple keywords such as ‘bomb’ or ‘virus’, but completely fail to deal with the manipulation of context and intent. AI does not understand intention; it can only meticulously play an imposed role.

    Vulnerability ranking leaders: ChatGPT and Gemini

    Let’s get down to specifics. The survey covered six main models, but two platforms stood out the most – unfortunately, negatively. According to the study’s scoring system, the ChatGPT-4o and the Gemini Pro proved to be the ‘most manipulable’.

    What exactly did these popular models do when the security muzzle was removed? ChatGPT, for example, went the way of ready-made solutions for criminals. Without much resistance, it generated a complete, ready-to-use phishing email, including a convincing subject line, message content and a fake malicious URL. Moreover, it provided detailed step-by-step instructions for social engineering and described mechanisms for avoiding detection by spam filters and potential structures for monetising the attack.

    Gemini, on the other hand, demonstrated its ‘technical expertise’ by providing operational information on procedures for exploiting specific vulnerabilities. The study found that even newer models, such as ChatGPT-5 (presumably referring to the latest iteration of GPT-4), explained how to plan DDoS attacks, where to look for botnets and how Command and Control (C&C) infrastructure works.

    The conclusion is painful: the tools that companies trust the most and that are most widely deployed have at the same time proven to be the most likely to actively assist in a cyber attack.

    An unexpected security leader: Claude

    Fortunately, the ranking also has another side. At the opposite pole, as the ‘most resistant’ model, stood the Claude Sonnet 4.

    Its approach to researchers’ requests was fundamentally different. This model systematically blocked prompts directly related to hacking, exploitation of vulnerabilities or the purchase of cyberattack tools.

    However, this does not mean that Claude was useless from a security perspective. On the contrary. The model was keen to offer contextual information – for example, describing attack vectors or defensive strategies. It could therefore be a useful tool for the Blue Team (defenders).

    The key difference, however, was that Claude refused to provide *execution instructions* or code examples that could be directly and maliciously applied. He made it clear where the line between substantive information and instructional offence lay. This is the definition of ‘robustness’ that the competition lacked.

    Has the AI provider done its homework?

    The vulnerability ranking revealed by Cybernews is not just a technical curiosity for a handful of experts. It is a fundamental and very practical piece of advice for business.

    Firstly, the study proves that when choosing an AI platform to integrate into a business, the criterion of ‘tamper-resistance’ is becoming as crucial as ‘computing power’, ‘creativity’ or ‘price’. Decision makers need to start asking vendors hard questions about how their models handle not word filtering, but contextual manipulation.

    Secondly, a vulnerable model is not only a risk of attack from outside. It is also a gigantic internal risk. What happens when a frustrated employee, or simply an unaware user, asks a chatbot integrated with the company’s systems for ‘academic’ examples of security workarounds?

    The market will verify AI providers not only by how ‘smart’ their models are, but how ‘robust’ they are. The survey shows that some vendors (like Anthropic, makers of Claude) appear to have done this homework much more meticulously. Choosing the most popular or cheapest option in the AI market can quickly prove to be a strategic and costly risk management mistake.

  • GPT-5.1 gets a promotion. The new apply_patch tool is no longer an assistant, it’s a developer

    GPT-5.1 gets a promotion. The new apply_patch tool is no longer an assistant, it’s a developer

    OpenAI ‘s introduction of the new GPT-5.1 last week was just a prelude. Now the company has published a key ‘prompting guide’ that signals a strategic shift: from suggestion generation to direct task execution in development environments.

    This material, aimed at developers, aims not only to facilitate the migration of existing workflows, but above all to standardise the interaction with the model. OpenAI once again emphasises that the quality of GPT responses is directly dependent on the precision of the monitor design. The new guide formalises techniques to ensure higher accuracy and usability of the generated responses.

    A key business concept is the expansion of the ability to create so-called ‘Design Agents’. The documentation describes in detail how developers can now precisely shape the behaviour of a model – defining its tone, personality, response structure or even the expected level of politeness. This is a step towards creating highly specialised, autonomous assistants.

    The real innovation, however, is the `apply_patch` tool. It fundamentally changes the role of AI in the software development cycle. Instead of merely suggesting code fragments, GPT-5.1 can now automatically create, update or delete files in the code base by operating on structured diffs.

    This feature, integrated directly into the response API, is intended to enable more iterative workflows. According to OpenAI, this approach already reduces failed code changes by 35 per cent. The goal is clear: to encourage developers to use AI as an active tool directly in their IDE environments.

    The guide also introduces other advanced features such as ‘metaprompting’, where the model analyses its own prompts for errors, and a shell tool capable of suggesting system commands. The publication of this guide is a clear signal that OpenAI wants its models to become not just a consultant, but an active participant in the software development process.

  • OpenAI refuses to hand over 20 million ChatGPT logs. Legal dispute with The New York Times continues

    OpenAI refuses to hand over 20 million ChatGPT logs. Legal dispute with The New York Times continues

    The legal dispute between OpenAI and The New York Times is escalating, shifting the burden from general accusations of copyright infringement to the thorny ground of user privacy. On Wednesday, lawyers for the creators of ChatGPT asked a federal judge in New York to block the injunction. It obliges the company to disclose more than 20 million anonymised ChatGPT chat records .

    For OpenAI, this is an attempt to protect the confidential information of millions of users. The company argues that 99.99% of these transcripts are irrelevant to the case, and that the release of the logs, even after de-identification, constitutes a “speculative fishing expedition” and an invasion of privacy. Dane Stuckey, director of information security at OpenAI, described the potential disclosure as a forced handover of “tens of millions of very personal conversations”.

    For The New York Times, however, the chat logs are key evidence in the case. The media conglomerate, which accuses OpenAI of illegally using millions of its articles to train models, needs this data for two reasons. First, to prove that ChatGPT is actually replicating copyrighted content in response to queries from ordinary users.

    Secondly, the logs are to be used to refute OpenAI’s central defence thesis. The company claims that the NYT deliberately ‘hacked’ the chatbot, using specific, misleading queries (prompts) to forcibly extract evidence of a breach from the model. The logs are meant to show whether such results are the norm or just the result of manipulation.

    The two sides argue over the interpretation of security. A spokesperson for the NYT called OpenAI’s position “deliberately misleading”, insisting that “no user privacy is at risk”. He pointed out that the court only ordered the delivery of a sample of chats, anonymised by OpenAI itself and covered by the protective order. Judge Ona Wang, in granting the original injunction, also found that “exhaustive de-identification” would be sufficient protection.

  • The history of chatbots – an old player, a new hand in the age of AI

    The history of chatbots – an old player, a new hand in the age of AI

    When artificial intelligence is mentioned, attention is usually drawn to breakthrough generative models, autonomous agents or visions of computers that understand the world as well as humans. Meanwhile, the biggest winner of this revolution is a solution that is not new at all. Chatbots – often seen as a boring customer support tool – are back in the spotlight. And paradoxically, they are the ones that best demonstrate how the distant history of artificial intelligence meets its most practical application today.

    From ELIZA to ChatGPT

    The history of chatbots is a story of evolution, not revolution. As early as 1967, ELIZA was created at MIT – a program based on simple rules that allowed for text-based human-machine dialogue. Responses were fully predefined and selected based on keywords. Although it seems primitive from today’s perspective, for users it was the first substitute for a conversation with a computer.

    Two decades later came Jabberwocky, which added voice interaction. What is obvious today thanks to Siri or the Google Assistant sounded like science fiction then. The next step was taken by A.L.I.C.E. in the 1990s – a system that stored responses and used them to create new responses. In practice, this was not yet real science, but for many researchers it opened up the question of where programming ends and intelligence begins.

    Over the following decades, more complex systems emerged, but they were all based on the same foundation: rules, keywords and sets of predetermined responses. It was not until natural language processing and large language models overturned this convention, allowing chatbots to break away from narrow frameworks.

    The data and computing power revolution

    That chatbots in the 21st century really took off was not the result of a brilliant idea, but the effect of a combination of computing power and data. The development of GPUs made it possible to process huge collections of information, and the internet provided access to these collections. When open source libraries such as TensorFlow and PyTorch appeared, the barrier to entry for companies dropped dramatically. Creating your own chatbot was no longer the domain of research labs and technology corporations.

    The turning point came in 2022 and the Transformer architecture on which ChatGPT was based . From a simple text completion model, AI has evolved into a conversational system that can respond naturally and flexibly. Semi-supervised learning, based on dialogue examples, allowed chatbots to break the pattern of rule repetition. From now on, it was no longer about a set of possible answers, but about conversational skills.

    Today’s challenges: technology versus costs

    Today’s chatbots no longer need pre-programmed responses. They can use billions of examples and conversational context to respond more consistently than ever before. However, the biggest challenge is no longer technology, but economics.

    Companies deploying chatbots in customer service, operating 24 hours a day, clash with the issue of cost. Every interaction requires computing resources, and with large models this means high bills. In practice, this means that organisations are increasingly opting for smaller, specialised models – cheaper and sufficient for specific tasks. Paradoxically, in a world of ‘bigger is better’, a business advantage can be provided by the optimised model rather than the most advanced one.

    This shifts the focus from the question ‘what is possible’ to ‘what pays off’. And it puts technology companies and IT integrators in the role of advisors who need to help clients balance innovation and budget.

    A multimodal future

    The next wave of change, which is already beginning, concerns multimodality. If chatbots used to only understand text, today they are learning to analyse speech, images and even video. The combination of these modalities creates new scenarios of use: from the generation of marketing material, to automated internal reports, to personalised presentations based on company data.

    RAG, or Retrieval-Augmented Generation, architectures are a particularly interesting direction. With these, a chatbot can not only draw on general knowledge, but also refer to the organisation’s internal databases. This paves the way for advanced question-and-answer systems or corporate search engines that understand the business context better than traditional tools.

    Forecasts indicate that from 2025 onwards, RAG and AI agents will be one of the main drivers of productivity growth in many industries. The chatbot will cease to be a simple interface in customer service and will become part of a company’s knowledge infrastructure.

    What this means for business and the IT channel

    For companies using chatbots, this means thinking of them not just as a tool that automates simple customer queries, but as a strategic data handling layer. A chatbot can become an organisation’s knowledge access point, a reporting channel and a creative tool.

    For IT suppliers and resellers, on the other hand, the coming era of chatbots is an opportunity to develop new services. The integration of RAG systems, the design of multimodal solutions or advice on optimising the cost of models are all areas that can build real competitive advantages.

    Looking more broadly, chatbots are an interesting case showing that in technology, it is not always what is most futuristic that wins, but what is most useful. After years of being underestimated, they are becoming central to the AI revolution, combining the simple function of communication with the most advanced artificial intelligence algorithms.

    Old player, new deal

    The history of chatbots is a reminder that, in the IT world, many ideas return in new guises. The ELIZA of the 1960s was a scientific experiment, the ChatGPT a commercial breakthrough. Six decades have passed between them, but the need remains the same: how to make a machine understand a human.

    Today, the answer is more advanced than ever, but the challenges are just as real. Companies need to decide how to harness the potential of multimodal AI agents while controlling costs. Technology providers are becoming partners in this decision, not just tool vendors.

    The paradox of the generative revolution is that the oldest technology may be the biggest beneficiary. Chatbot, until recently treated as a digital automaton answering the most frequently asked questions, is today growing into a strategic player in the AI ecosystem. And this is only the beginning of its new hand.

  • The end of ChatGPT’s toxic positivity? OpenAI seems to recognise the problem

    The end of ChatGPT’s toxic positivity? OpenAI seems to recognise the problem

    OpenAI has begun testing a new feature designed to give users more control over how they interact with ChatGPT. The company is introducing the option to select predefined ‘personalities’ for its chatbot, a move away from a one-size-fits-all communication style. The update is being rolled out gradually and is available to a limited group of users for now.

    The new settings allow the tone and nature of the responses generated by the AI to be tailored to specific needs. Instead of the default helpful and often effusive style, users can choose from several alternatives. These include profiles such as ‘Robot’, which communicates in a concise and direct manner, focusing on efficiency, and ‘Cynic’, offering a more critical and sarcastic outlook. There are also variants of ‘Listener’, geared towards support, and ‘Sage’, to be enthusiastic and willing to share knowledge.

    The personalisation function is available in the profile settings in the web version of ChatGPT, under ‘Customise ChatGPT’. In addition to selecting a pre-defined personality, the tool allows the user to define additional preferences regarding tone or the way the model should address the user. This step is in response to feedback from a section of the community, for whom the default style was sometimes ineffective in professional applications, such as code generation or data analysis.

    In the background of these changes, OpenAI is preparing for its next major update, dubbed ChatGPT-5. According to the announcement, it is expected to merge the existing specialised models into a single, overarching system. The aim is to simplify interactions and eliminate the need to switch between different modes depending on the task, making the tool more integrated and efficient.

  • OpenAI is preparing the GPT-5 for August. What does the new model change?

    OpenAI is preparing the GPT-5 for August. What does the new model change?

    OpenAI is preparing for the launch of GPT-5, a new version of its flagship AI model. The Verge reports that the debut is planned for August, although the company is known for its flexible approach to deadlines. This time, however, it’s not just about a bigger model – OpenAI is changing the way it thinks about the architecture of its systems.

    GPT-5 is intended to be not so much another version of one AI, but a platform that combines different models and functions. This is a departure from the ‘one model for everything’ approach. The company intends to integrate the ‘O’ series models (including the popular o3 model) into the GPT family, creating a more flexible working environment for users. The goal: a unified but multitasking AI, able to adapt according to context and tools.

    This move is part of a wider trend of consolidating capabilities into a single interface. Microsoft – a major partner of OpenAI – is also moving in this direction, integrating Copilot functions into its entire ecosystem of services.

    For the market, this marks an important shift. Instead of ‘which model is better’ comparisons (OpenAI vs. Anthropic vs. Google Gemini), users and companies will increasingly look at whole platforms: their interoperability, availability of tools, ease of integration with applications and stability of services.

  • Meeting recording in ChatGPT? New feature available on macOS

    Meeting recording in ChatGPT? New feature available on macOS

    OpenAI is extending ChatGPT’s capabilities with an audio recording feature, making it available to macOS Plus plan subscribers. This is the next stage in the commercialisation of a tool that increasingly resembles a digital assistant for office tasks – this time with a focus on automating meetings and notes.

    The new feature allows both microphone and system audio to be recorded – without the need for external applications. Once the recording is complete, the user receives a transcription, a summary of the conversation, a task list and timestamps. The recording can last up to two hours and the original audio file is deleted after processing.

    This approach is part of a wider trend: AI providers are increasingly willing to offer tools to support everyday productivity, not just chat. Transforming speech into structured knowledge is part of building a more contextual, ‘understanding’ AI-based working environment.

    At the same time, questions about privacy remain in the background. OpenAI stipulates that users must have the consent of all meeting participants, and that data may – but does not have to – be used to train the model. For business and education clients, this is disabled by default.

    For now, the feature only works on macOS. The lack of announcement of a version for Windows or mobile devices suggests that OpenAI is testing the ground for a wider deployment. Meanwhile, the digital assistant market is entering a phase where automatic call recording and interpretation is becoming a viable feature, not a promise.

  • OpenAI is developing an AI browser. Will it threaten Google Chrome?

    OpenAI is developing an AI browser. Will it threaten Google Chrome?

    OpenAI, the company behind ChatGPT, intends to unveil its own artificial intelligence-based web browser in the coming weeks. The new tool is expected to go beyond classic browsing – its interface will resemble a ChatGPT conversation, and information is to be presented without the need to click on links. This is a clear indication that OpenAI wants to redefine the way users interact with online content.

    OpenAI’s plans are part of a wider trend of ‘AI agents’ that don’t just search for content, but process, summarise and present it in a user-friendly form. Unlike Google, whose model is based on providing a list of links, OpenAI aims to provide answers directly – direct, concise, contextual.

    If the OpenAI browser gains even a fraction of the popularity of ChatGPT, which attracts around 400 million active users per week, this could impact on a significant source of revenue for Alphabet. The Chrome browser, which dominates the market with a share of more than 60%, is a key channel for collecting user data that feeds Google’s advertising ecosystem. The introduction of a competing tool that not only aggregates information, but at the same time limits a user’s contact with external sites (and thus with ads), could disrupt this model.

    In this context, the OpenAI browser is not simply another technology experiment. It is a move that could shift the focus of the entire advertising and content search industry. Google, working in parallel on its own solutions based on generative AI, faces a real threat today – not so much from the technology itself, but from changing user habits.

    In the long term, this means a shift from ‘search’ to ‘getting things done by AI’. In this puzzle, the browser becomes not just a tool for accessing the web, but a personal assistant that filters and interprets information in real time. And this changes the rules of the game.

  • Fake ChatGPT and InVideo AI. This is how hackers infect ransomware systems

    Fake ChatGPT and InVideo AI. This is how hackers infect ransomware systems

    The growing interest in AI-based tools such as ChatGPT and InVideo AI has not escaped the attention of cybercriminals. Hackers are increasingly using the AI boom as bait to infect computers with ransomware and other malware, according to a recent Cisco Talos report.

    Instead of classic phishing campaigns, scammers are creating fake websites and installers impersonating known AI tools. In one case, the name ‘ChatGPT 4.0’ hid the ransomware Lucky_Gh0$t, which encrypts files, deletes larger data and makes system recovery difficult. Other cases included malicious versions of the InVideo AI tools (with the Numero malware) and Nova AI (with the CyberLock ransomware), in which infection leads to loss of data access, system damage or ransom demands – up to $50,000 in Monero cryptocurrency.

    The common denominator of these attacks is an attempt to bypass security by using legitimate AI components and manipulating user trust. Cybercriminals are targeting both individuals and companies looking for modern solutions for automation, content generation or lead conversion.

    The boom in AI is not only an opportunity for innovators, but also a new area for abuse. In an era of ‘AI for all’, users must learn to recognise false promises and critically verify the sources of downloaded apps. The golden rule remains valid: if something looks too good to be true – it probably is.