Tag: Cybersecurity

  • Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Over the past eighteen months, the enterprise sector has moved from a fascination with generative artificial intelligence to a phase of actively implementing it into operational processes. A key trend in this evolution is the shift from passive language models (LLMs) to AI agents – autonomous systems capable not only of generating text but also of performing tasks: writing code, managing email communications, calling APIs or authorising financial transactions. With this agility, however, comes a critical new category of threats: Indirect Prompt Injection (IPI). Recent data from reports by Google and Forcepoint shed new light on the scale and sophistication of these attacks, suggesting that agent systems security will become one of the biggest challenges for chief information security officers (CISOs) in the coming years.

    IPI mechanism: Data as instructions

    Traditional prompt injection attacks relied on direct manipulation of the model by the user (e.g. attempting to ‘jailbreak’ a bot by giving it the command to ignore security). Indirect Prompt Injection is a much more insidious phenomenon. It involves inserting malicious instructions into content that the AI agent processes as input – this could be web pages, PDF documents, emails or code repositories.

    The problem lies in the very architecture of current LLM models, which cannot absolutely separate system instructions (issued by the tool developer) from external data. When an AI agent analyses a web page in search of information, it may come across hidden text, which the model will interpret as a new overarching command. As a result, the attacker takes control of the agent’s logic, instructing it to, for example, send sensitive data to an external server or perform a destructive operation on the user’s file system.

    Analysis of market trends

    Google Security Research researchers, analysing CommonCrawl resources, point to an alarming trend. Between November 2025 and February 2026, there was a 32 per cent increase in the number of detected malicious injection attempts in publicly accessible web resources. This relatively short time frame demonstrates the dynamism with which the criminal community is adapting to new technologies.

    From a market perspective, Google’s observation on cost-benefit calculus is key. Until recently, IPI attacks were considered the realm of academic research – they were difficult to implement and often failed due to the instability of the results generated by AI. Now, with the increased reliability and agility of agents, these attacks are becoming ‘viable’. AI’s ability to autonomously call external tools (tool calling) means that a successful injection of instructions has an immediate and measurable financial or operational impact.

    The Google study allowed the current IPI trials to be categorised into five groups:

    1. Harmless jokes: Attempts to change the tone of an agent’s response.
    2. Helpful tips: Suggesting preferential answers to the model (often on the edge of ethics).
    3. Optimisation for AI (AI-SEO):Hidden phrases to position products in assistants’ responses.
    4. Deterring agents: Instructions prohibiting AI from indexing or summarising a particular page.
    5. Malicious attacks: Data exfiltration and sabotage (deletion of files, destruction of backups).

    Although the latter are often at an experimental stage at present, their increasing complexity suggests that it is only a matter of time before they enter the phase of mass attacks.

    From coding assistants to financial transactions

    The Forcepoint report provides concrete evidence of how IPI manifests itself in professional software and financial tools. Experts have identified ten verified indicators of attacks targeting popular tools such as GitHub Copilot, Cursor and Claude Code.

    The attack scenario is mundane: a programmer uses an AI agent to analyse a library or documentation on an external site. This site contains a hidden AI instruction. When the agent ‘reads’ the site, it is instructed to execute a command in the terminal that destroys local backups. Since the agent has permission to operate on the file system (which is essential in a programmer’s job), the command can be executed without additional verification.

    Even more dangerous are attempts at financial fraud. Forcepoint points to cases where complete transaction instructions are sewn into web content, e.g. PayPal.me links with a predefined amount along with step-by-step instructions on how the agent is to finalise the payment. In systems where AI has access to digital wallets or corporate payment systems, the risk of capital loss becomes immediate.

    The paradox of detection and the challenges for business

    One of the most worrying findings from the Forcepoint report is the so-called detection paradox. The phrases and keywords used by attackers to inject hints are identical to the terminology the cyber security community uses to describe and analyse these threats. This renders simple filters based on word blacklists ineffective – either blocking legitimate expert communications or letting intelligently worded attacks through.

  • Challenges and priorities in the managed services market: Evolving from ‘handyman’ to business partner

    Challenges and priorities in the managed services market: Evolving from ‘handyman’ to business partner

    Imagine two scenarios. In the first, it’s 2003, and the owner of a small manufacturing company looks anxiously at a silent server that has paralysed the ordering system.

    In a panic, he calls his ‘IT man’, hoping that he will find time to come and diagnose the problem. Every minute of downtime is a measurable loss.

    In the second scenario, it is today. The CEO of a technology company receives a notification on his smartphone. It’s an automated report from his Managed Service Provider (MSP), informing him that a potential vulnerability in the company’s cloud security was discovered and patched overnight, before cybercriminals had time to exploit it.

    The company’s operations were not disrupted even for a second.

    This contrast perfectly illustrates the fundamental transformation that has taken place in the world of IT services. The evolution of managed service providers is not just a story of adaptation to new technologies.

    It is a story of a complete redefinition of the business model, driven by escalating cyber threats, the increasing complexity of cloud environments and the need for automation.

    The modern MSP has ceased to be just an external IT department called in to put out fires. It has become a key partner in risk management, an engine of digital transformation and a guardian of business continuity.

    Foundations of the past: the era of the “Break-Fix” model

    Before IT service providers became proactive partners, the dominant operating model was the so-called ‘break-fix’. Its logic was simple: when something breaks, a specialist is called in to fix it.

    The process was purely transactional: the customer experienced a breakdown, the technician arrived, repaired it and invoiced for his time and parts.

    The biggest drawback of this model was its fundamental economic structure, which created an inevitable conflict of interest. The IT service provider only made money when there were problems at the client.

    The more failures, the higher the provider’s profits. The customer sought maximum stability, while the provider’s business model depended on instability.

    This structural flaw prevented the building of relationships based on trust and had to give way as soon as companies understood that their survival depended on reliable technology.

    Proactive breakthrough: the birth of the modern SME

    The twilight of the ‘break-fix’ era has been accelerated by technologies that have enabled fundamental change. Remote monitoring and management (RMM) and professional services automation (PSA) platforms have catalysed the revolution.

    RMM tools allowed suppliers to continuously monitor the health of customer systems in an automated manner, enabling issues to be identified and resolved before they led to downtime.

    The most important innovation, however, was a change in the business model. MSPs moved away from hourly rates to a fixed monthly subscription fee (Monthly Recurring Revenue, MRR).

    For the customer, this meant cost predictability and for the SME, a stable revenue stream. The introduction of service level agreements (SLAs) gave customers contractual guarantees on response times or system availability.

    Most importantly, this model has united the interests of both parties. The MSP’s profitability became directly proportional to the stability of the client’s IT environment. Each failure was now a cost to the provider, rather than an opportunity to make money, motivating the provider to ensure maximum efficiency.

    The cyber security imperative: from administrator to defender

    If proactivity was the spark that started the revolution, the explosion of cyber threats has become the fuel that drives further evolution. Small and medium-sized enterprises (SMEs) have become a prime target for cybercriminals, and the fear of attack has become one of the top business priorities.

    Research from 2024 revealed that as many as 78% of SME companies fear that a major cyber-attack could bankrupt them.

    In response, cyber security has ceased to be an add-on and has become central to the MSP’s offering and a key driver of revenue growth.

    Market analysis shows that 97% of the highest revenue MSPs offer a wide range of managed security services. Clients are no longer just looking for tools; 64% expect strategic guidance from their MSP.

    This has forced providers to evolve towards a managed security service provider (MSSP) model, offering advanced solutions such as managed detection and response (MDR), security information and event management (SIEM) and security awareness training.

    By taking responsibility for cyber security, the MSP has fundamentally changed its role – it no longer just manages the technology, but the customer’s business risk.

    The cloud revolution: managing hybrid complexity

    Contrary to early predictions, the growth of public clouds has not made MSPs redundant. On the contrary, the mass adoption of hybrid and multi-cloud (multi-cloud) strategies has created an intense new level of complexity that companies have been unable to cope with on their own.

    This has opened up a huge opportunity for mature MSPs. They have transformed themselves into cloud strategists and integrators, helping clients develop strategies, implement complex migrations and, crucially, optimise cloud costs (FinOps).

    In an era of increasing data privacy regulation, MSPs have also started to act as a ‘data sovereignty broker’, advising on where data can and should be stored to comply with regulations.

    The ability to design and manage a fully customised hybrid environment, combining on-premises resources with private and public cloud, has strengthened the MSP’s position as a central coordinator of the client’s entire IT ecosystem.

    Innovation horizon: AIOps and Hhperautomation

    The most mature MSPs today stand on the threshold of the next evolutionary leap, whose horizon is marked by AIOps (AI for IT Operations) and hyper-automation. AIOps uses big data and machine learning to automate and streamline IT operations, moving management from proactive to predictive.

    Instead of reacting to known potential problems, AIOps predicts and prevents them before any symptoms become apparent.

    Practical applications include intelligent correlation of thousands of alerts into a single usable incident, predictive analytics that forecast future resource requirements and automated remediation that resolves repetitive problems without human intervention.

    Combined with hyper-automation, which streamlines entire business processes (e.g. implementation of new customers), these technologies become a key competitive advantage.

    AIOps is becoming a prerequisite for managing modern, complex IT environments, and vendors who successfully implement these technologies will be able to serve more demanding customers with greater efficiency.

    An essential engine for digital transformation

    The evolution of managed service providers is a story of remarkable adaptation and continuous climb up the value chain. From a reactive technician whose success was measured by the speed of repair, to a predictive, strategic partner whose value is defined by its contribution to the innovation, resilience and profitability of the client’s business.

    The MSP of the future is not a technology vendor, but a consultancy with deep technical expertise. It thrives in an environment of complexity, actively manages risk and uses intelligent automation to deliver measurable results.

  • 14,000 cyber attacks in three months: Why is the 1970s protocol still a big risk for the industry?

    14,000 cyber attacks in three months: Why is the 1970s protocol still a big risk for the industry?

    The security of modern factories and power plants still relies on technology from almost half a century ago, which is becoming a growing concern for global business. The latest report from experts at Cato Networks warns of a wave of cyber attacks targeting industrial controllers (PLCs). Hackers are taking advantage of the fact that the widely used Modbus protocol was developed in the 1970s and has no security features – for someone who knows how to use it, taking control of a networked machine is worryingly easy today.

    Modbus, a communication protocol developed in 1979, is in the spotlight. At the time of its creation, no one assumed that industrial controllers (PLCs) would ever be connected to the public Internet. Modbus was designed with trusted, isolated internal networks in mind. As a result, it was completely devoid of the mechanisms we recognise as elementary today: encryption and authentication. This openness, once an advantage to facilitate system integration, has become an invitation to hackers.

    The scale of the problem is illustrated by data collected by a team led by Dr Guy Waizel and Jacob Osmani. Over just three months in autumn 2025, they identified coordinated activity targeting PLCs, involving more than 14,000 attacked IP addresses in 70 countries. These are not isolated incidents, but a systematic mapping of global industry vulnerabilities.

    The attackers’ strategy is multi-layered and precise. Most of the identified interactions – more than 235,000 requests – involved so-called data extraction. The hackers do not immediately try to destroy machines; instead, they quietly read the contents of registers, learning about process parameters and device configuration. The next step is to ‘fingerprint’ the hardware. By knowing the manufacturer and software version, criminals can match specific security vulnerabilities to a particular machine.

    What starts as innocent information gathering can quickly turn into a catastrophic scenario. To understand the real risks, Cato Networks experts ran a simulation on the Wildcat-Dam project. They demonstrated that, with just a laptop and access to the unsecured Modbus protocol, they were able to take control of the digital logic of the firewall. By manipulating register values, the researchers caused an artificial flood, overriding security limits and remotely opening the dam’s gates.

    The geography of the attacks coincides with the map of global industrial powers. The United States, France and Japan have been the main targets, together accounting for 61 per cent of incidents. It is also worrying that attackers are not confined to one industry. Although the manufacturing sector is the most common victim, traces of intrusion have been found in healthcare facilities, construction and even urban infrastructure management systems. What emerges is a picture of opportunistic hacking: attackers are looking for any available controller that has been recklessly exposed to the public network.

    Technical analysis suggests that some of this activity is coming from infrastructure located in China, although the identity of the actors remains hidden behind intermediary server systems. For business decision-makers, however, the key conclusion is not to identify a specific culprit, but to realise a structural flaw in their own systems.

  • The printer as a ‘Trojan horse’ in the corporate network? How to turn the weakest link into a secure part of the IT ecosystem

    The printer as a ‘Trojan horse’ in the corporate network? How to turn the weakest link into a secure part of the IT ecosystem

    Digital transformation in the SME sector has reached a tipping point, but in this technological rush, one of the most obvious elements of office infrastructure has been forgotten. While the attention of IT departments is focused on securing the cloud, implementing AI and protecting employee laptops, there are ‘sleeper agents’ in the corners of offices – multifunction devices (MFPs). Today, the printer is no longer just a simple peripheral; it is an advanced endpoint with its own processor, hard drive and operating system, permanently connected to the heart of the corporate network.

    This makes printing devices the biggest ‘blind spot’ (blind spot) of modern cyber security. The data is unforgiving: according to Quocirca’s Managed Print Services Landscape report, more than 60% of organisations admitted to having experienced a data security breach linked directly to their print infrastructure in the past year.

    Why do hackers ‘love’ printers so much? The answer is painful in its simplicity. These devices are rarely covered by log monitoring systems (SIEM), their firmware tends to be updated sporadically, and in many companies – horror of horrors – they still operate on default administrator passwords. For a cybercriminal, an unsecured printer is the perfect ‘Trojan horse’ – a silent port of entry that allows them to infiltrate a network without sounding the alarm on major defence systems.

    Anatomy of an attack: How does a printer become a base of operations?

    Today’s cybercriminal rarely attacks the most heavily guarded ‘front door’ of the IT infrastructure. Instead, he or she looks for a side entrance, which increasingly turns out to be an unsecured multifunctional device (MFP). The attack through the printer is a textbook example of a lateral movement strategy – once the device has been infiltrated, the attacker uses it as a base to silently scan the internal network and escalate privileges. Because MFPs rarely come under the magnifying glass of monitoring systems (SIEM), a hacker can spend months intercepting scanned documents or stealing data from the device’s hard drive, remaining completely invisible to traditional anti-viruses.

    Nor should we forget the simplest, physical dimension of risk. Confidential financial reports or personal data left unattended on a receiving tray is an invitation to a data leak, which can have dramatic consequences under the RODO regime. Sharp expert Szymon Trela points out that the foundation of defence here is rigorous configuration hygiene, which still remains the biggest challenge for IT departments:

    “Among the most important mistakes in the configuration of MFPs is the lack of settings to restrict access to the device. It is worth considering defining IP or MAC addresses of devices with print privileges and blocking unused ports, which significantly reduces the field of attack. A very restrictive but effective setting is also to create a list of applications and processes that can communicate with the MFP. The second group of settings are encryption issues – both network communication and data stored by the device, always using the latest versions of the protocols. And finally, automatic system software updates are key. New firmware versions respond to emerging threats and address critical security issues. These updates are downloaded from the manufacturer’s trusted servers, which in the case of Sharp is a standard option for our customers,” – says Szymon Trela, Product Manager at Sharp Systems Business Poland.

    From ‘weakest link’ to active protection

    In 2026, the endpoint protection paradigm has shifted from defensive access blocking towards active analytics and real-time anomaly detection. Modern MFPs have ceased to be passive recipients of data and have become intelligent security sensors. Thanks to the Security by Design architecture, solutions such as integration with antivirus engines (e.g. Bitdefender) or TPM (Trusted Platform Module) modules allow system integrity to be verified at the boot stage. If the system software has been compromised, the device will simply not boot, preventing the spread of infections within the network.

    However, the real revolution is happening in the active monitoring layer. In the age of AI-driven automated attacks, humans cannot react fast enough. Therefore, it is the device itself that must take on the role of gatekeeper. This approach turns the MFP from a potential ‘Trojan horse’ into an advanced defence post that not only protects itself, but also alerts the entire organisation to danger.

    Szymon Trela, Sharp
    source: Sharp

    “There are a number of solutions in modern MFPs that help to monitor IT networks for security. One example is the anti-virus software installed on the device. Its primary task is, of course, to detect viruses that may appear in the print data. But in addition to this function, it also monitors the device’s system software and detects potential attempts to infect it with viruses or malware. In addition to this, it scans all network traffic passing through the device, blocking attempts to use the MFP to break into the corporate network. Of course, any suspicious events can be reported to those responsible. This solution is extremely useful in smaller organisations that do not have dedicated departments responsible for security. Another solution is the detection of attempted DoS attacks. If too many communication attempts from the same IP addresses are detected within a certain time period, the device automatically blocks the suspicious addresses, creating a list of them. This process takes place in the background, but it is also possible to report these events to the relevant people. For corporate customers, it is extremely important to integrate MFPs with SIEM class systems, which report any incidents in real time.” – comments Szymon Trela, Product Manager at Sharp Systems Business Poland.

    The use of anti-virus software directly on the MFP is a ‘game changer’ for the SME sector. In small businesses, where one person often combines the roles of IT manager, administrator and technical support, any automation is at a premium. A device that blocks Denial of Service (DoS) attacks and cuts off suspicious IP addresses on its own acts like an invisible bodyguard.

    For the big players, on the other hand, integration with SIEM systems closes the infrastructure visibility gap that has been treated as an audit blind spot for years. It brings printer logs into the same dashboard as data from servers or firewalls, allowing for full event correlation and instant NIS2-compliant incident response. In this way, the MFP becomes a fully-fledged, active component of the cyber security ecosystem.

    Printer in the NIS2 and RODO regime: Technical standards

    In 2026, ‘compliance’ has become a matter of business survival. The entry into force of the stringent requirements of the NIS2 Directive and the evolving interpretation of RODO have meant that any gap in the infrastructure – including that ‘standing in the corner of the corridor’ – can give rise to severe financial penalties. For an auditor, a printer is no longer a peripheral device; it is a data processing node that must meet so-called state-of-the-art cyber security standards.

    The biggest challenge for security engineers today is to ensure the so-called Root of Trust, i.e. an unchanging foundation of trust in the hardware. Standard software security is not enough. If a device’s firmware is altered by an attacker, no amount of file encryption will help.

    “It is extremely important to have functionalities that guarantee the integrity of the device, i.e. to ensure that the device systems have not been altered in an unauthorised way. For this reason, features that automatically detect the correctness of the system software and BIOS and, if they are changed, automatically restore the correct version are of great importance. This protects the device at the most basic level and ensures overall security. The second extremely important issue is the reporting of any suspicious events to the responsible persons, and it is important, even in the smallest organisation, to designate such persons and establish a procedure to deal with such cases. Finally, it should be noted that the technical aspects are only part of the security problem. In order to manage it properly, especially in the context of RODO, it is necessary to introduce other measures, related to the protection of documents, primarily these are: secure printing and user authorisation.” – says Szymon Trela, Product Manager at Sharp Systems Business Poland.

    The approach mentioned by the expert fits perfectly with the Security by Design concept. The mechanisms of a ‘self-healing’ BIOS (Self-Healing BIOS) is a key parameter that procurement departments should look at today. From a NIS2 perspective, a device that can detect manipulation in its own code and restore a secure version of the software drastically reduces risk in the supply chain.

    However, technology is only half the battle. RODO requires evidence of data protection at every point of contact. That’s why features such as Secure Print, which requires a contactless card to be swiped or a PIN to be entered at the device, are ceasing to be a convenient add-on and becoming an essential means of control. Without them, every payroll or contract left on a collection tray is a potential security incident that, in 2026, you must report to a supervisory authority within 72 hours.

  • Leaked controversial Claude Mythos model. Anthropic investigates security incident

    Leaked controversial Claude Mythos model. Anthropic investigates security incident

    Anthropic, one of the leading forces in the artificial intelligence sector, is facing a serious image and operational challenge. As reported by Bloomberg News, the company’s most advanced model, Claude Mythos Preview, was leaked to a small group of unauthorised users. The incident comes at a crucial time for the startup, which is just positioning its technology as the foundation of a new era of cyber security.

    The leak occurred on 7 April, exactly the day Anthropic announced ‘Project Glasswing’. The initiative was intended to allow selected organisations to test the Mythos model under controlled conditions, mainly to strengthen their defences against digital attacks. Meanwhile, a group of users on a private online forum gained access to the tool almost immediately after the official announcement. Although reports indicate that the model has not been used for criminal purposes to date, the fact that it is regularly used outside the manufacturer’s control raises legitimate concerns.

    A spokesperson for Anthropic confirmed that the company is investigating the matter, pointing to a third-party vendor environment as the likely source of the leak. The incident could complicate Anthropic’s relationship with regulators. Mythos is a model with an unprecedented ability to identify software vulnerabilities. It is a ‘dual-use’ tool – in the hands of defenders it patches systems, but in the hands of hackers it can become a precision weapon. The loss of control of such a powerful resource, even if temporary, reinforces the arguments of advocates of strict oversight of models critical to national security. Anthropic must now prove that it can effectively protect the technology that is supposed to protect the world.

  • eAuditor V10 AI – scalability and flexibility in modern IT management

    eAuditor V10 AI – scalability and flexibility in modern IT management

    eAuditor is an advanced IT security and management platform that brings significant enhancements and new operational capabilities in the V10 AI version. The system offers full freedom of technology choice – from support for open-source databases and containerised solutions to support for alternative virtualisation platforms. It allows you to build high-performance environments tailored to market challenges and optimise costs by moving away from restrictive licensing models.

    Innovations in eAuditor V10 AI

    Learn about the key new features and improvements made to the system:

    • Support for Proxmox virtualisation: Extension of support to open source environments, used among other things as an alternative to VMware.
    • Container-based architecture: Support for Docker, Kubernetes and OpenShift technologies in an on-premise model for instant scalability and easier application management.
    • Native support for PostgreSQL: Implementation of a new database engine allowing full optimisation of operating costs by eliminating the need to purchase MS SQL Server licences.
    • Mobile User Panel: A dedicated Android app that integrates the service request handling processes within the eAuditor and eHelpDesk systems, increasing the availability of technical support.

    Key advantages and benefits of eAuditor V10 AI

    The changes made to eAuditor V10 AI translate directly into business value:

    • lower implementation and maintenance costs – thanks to the use of PostgreSQL and open source technology,
    • better adaptation to market changes – migrating from VMware to Proxmox without losing visibility of the environment,
    • greater infrastructure flexibility – thanks to support for container technologies (Docker, Kubernetes),
    • increased user efficiency – through the introduction of a new interface (GUI) and a Mobile User Panel for Android.

    Source: BTC

  • SME cyber security 2026: How to build 360° resilience?

    SME cyber security 2026: How to build 360° resilience?

    As we enter the second quarter of 2026, the threat landscape for the SME sector resembles a minefield where the mines themselves can look for a target. According to the latest ENISA Threat Landscape report, cybercrime has undergone the ultimate metamorphosis: from guerrilla attacks to a fully professionalised Ransomware-as-a-Service (RaaS) model. Nowadays, the aggressor does not need to be a brilliant programmer – all they need is a purchased subscription and AI algorithms that scan the network with surgical precision for the smallest cracks.

    The statistics are merciless: as many as 43% of all cyber attacks target small and medium-sized companies directly. Most striking, however, is the distance between risk and preparedness – only 14% of businesses in this sector feel realistically prepared to fend off an incident.

    This is because the notion that security is ‘an IT department problem’ is still being perpetuated. True security requires a radical paradigm shift: moving from protecting the devices themselves to protecting processes, identities and data flows. If you only protect the ‘boxes’, you are leaving the door open to the heart of your business.

    Extended definition of endpoint

    In the traditional security model that prevailed just a few years ago, the ‘endpoint’ was a static and easily defined concept – usually a laptop in an employee’s bag or a workstation connected to a company cable. However, in 2026, this framing is a dangerous oversimplification. Today’s endpoint is any piece of infrastructure with an IP address and access to data resources: from smart CCTV cameras and environmental sensors, to private smartphones (BYOD), to sophisticated printing and document digitisation systems.

    It is the latter, often treated as ‘background devices’, that are becoming a favourite gateway for cybercriminals. The modern MFP is in reality a powerful computer with its own operating system, hard drive and direct access to the user directory. Poorly secured, it becomes the ideal launching point for a lateral movement attack. A hacker does not need to break into the best-protected server; all he needs to do is take control of the printer and, from within it, silently and methodically scan the internal network for vulnerabilities in other devices.

    Understanding these dynamics requires decision-makers in the SME sector to abandon the ‘box protection’ mindset in favour of protecting the entire information flow cycle.

    “In many SME companies, security is still mainly associated with the employee’s laptop and the antivirus installed on it. The problem is that today’s IT environment has long ceased to end with the PC. From our perspective, what is most often overlooked are those elements that “just run in the background” – network devices, servers, printers or access to cloud systems from private devices. A very often underestimated area is also the user accounts themselves – because today it is the identity, not the device, that is the main target of attack. The key change is that a cyber-attack no longer has to ‘enter via a virus’. A single hijacked account or employee inattention is enough. Therefore, classic antivirus, while still necessary, no longer provides the full picture. It protects a fragment of the environment, but does not show what is happening in the entire company ecosystem. And today, security is precisely the ability to combine all these elements into one coherent whole.” – says Roman Porechin, Business Development Manager at Sharp Systems Business Poland.

    Zero Trust architecture as a foundation for SMEs

    The traditional security model, based on building a ‘digital fortress’ and trusting everything inside the corporate network, has become an anachronism. It is worth noting that, at a time when distributed team-based and hybrid working models are becoming popular, the notion of a secure office perimeter no longer exists. A solution that has gone from the enterprise segment to ‘under the thatch’ of smaller companies is the Zero Trust architecture. Its foundation is a simple but relentless principle: ‘never trust, always verify’.

    For the SME sector, implementing Zero Trust is a hard economic calculation. Citing data from IBM’s Cost of a Data Breach report, companies that have implemented this model save an average of USD 1.5 million on the impact of potential data leaks compared to organisations relying on legacy systems.

    However, the biggest barrier to implementing rigorous policies in smaller companies is the fear of decreased efficiency. Decision makers fear that additional layers of verification will turn work into a constant battle with the system. And how are business systems designed to combine high levels of restriction with the fluidity and intuitiveness of working in a hybrid environment?

    Roman Porechin Sharp
    Roman Porechin, Sharp Systems Business Poland

    “At Sharp we take a very practical approach. We start by analysing the way the organisation works, rather than imposing ready-made security policies. We first identify the key processes and access to systems, and then build the policies in such a way that they are least impactful on the user. We place great emphasis on ensuring that the employee has access to exactly what they need – without excessive privileges, but also without unnecessary barriers. In practice, this means, among other things, using mechanisms that simplify work, such as single sign-on or a contextual approach to access. The system itself assesses whether a login is secure and when additional steps are required. In this way, security works ‘in the background’ and the user sees an orderly and predictable environment rather than additional complications. In many cases, customers even notice an improved user experience after implementation, because we eliminate access chaos and unnecessary infrastructure elements,” comments Roman Porechin, Sharp Systems Business Polska.

    From the perspective of the modern SME, Zero Trust is therefore not just a ‘shield’, but an optimisation tool. Rather than building walls that make it difficult for employees themselves to move around, smart systems use contextual security. If an employee logs in from the office at 9am from a trusted laptop, the system will not harass them with ten levels of verification. However, if the same attempt is made at 3am from another continent, the barriers will be immediately raised.

    Infrastructure management and the role of AI

    The SME sector is facing a painful paradox: on the one hand, cyber threats have become more sophisticated than ever; on the other, the shortage of skilled IT staff has reached a critical level. Small and medium-sized companies can rarely afford to maintain their own 24/7 Security Operations Centre (SOC). In this reality, Managed Security Services, the outsourcing of security to specialised partners, has become the dominant model. It allows organisations to benefit from professional security without having to fight for scarce and expensive experts in the labour market.

    Another pillar of modern defence is artificial intelligence, which has ceased to be a marketing buzzword and has become a necessity. Because attacks today are automated and driven by AI, defences must react at machine speed. Predictive systems do not wait for an incident to occur – they analyse billions of signals in real time, detecting anomalies in the behaviour of users or devices before these turn into real data leaks.

    However, in this whole technological arms race, the most serious change has been in the philosophy of risk management itself. However, technology is only part of the success – the change in attitude of decision-makers is key.

    “Until recently, the prevailing approach was ‘let’s protect ourselves so that nothing happens’. Today we know that this is not a realistic assumption. The focus has changed – from prevention alone to the ability to detect and respond quickly. Because, in practice, it is not a question of whether an incident happens, but when and how quickly it is noticed. The companies that do best do not necessarily have the most tools. Instead, they have a structured approach and know what to do when there is a problem. For SME companies with limited budgets, the key is to focus on the fundamentals:
    – securing access to systems,
    – regular updates,
    – a working and tested backup.
    Only on this can the next elements be built. The biggest mistake is to try to ‘buy security’ as a single solution. In practice, it’s always a process and it’s consistency in building it that makes the biggest difference.” – Roman Porechin, Business Development Manager at Sharp Systems Business Poland, concludes.

    Security as a process

    It is thus becoming clear that cyber security has ceased to be a purely ‘technical’ domain and has become a strategic foundation for any modern SME. The most important lesson from our analysis is simple: security is not a product that can be bought and forgotten about, but a process that needs to be managed on an ongoing basis. Predictions for the coming years point to a further escalation of attacks using deep machine learning, which will make the line between a genuine message and a phishing attempt almost invisible to the human eye.

  • Project Glasswing: How Anthropic wants to harness the power of its own artificial intelligence

    Project Glasswing: How Anthropic wants to harness the power of its own artificial intelligence

    Anthropic is making a move that escapes classic definitions of corporate strategy. The announcement of Project Glasswing, based on the Claude Mythos Preview model, is an event that is as much about software engineering as it is about global security policy and the psychology of trust in business.

    The financial scale of the venture is breathtaking. Achieving an annual revenue rate of $30 billion in just a few months is a result that in a traditional economy would be considered a statistical error. However, behind this facade of success lies a deeper, almost existential uncertainty. Anthropic openly admits that it has created a tool so powerful that its public release could destabilise the foundations of the digital world.

    It is a rare case in the history of technology when a manufacturer voluntarily imposes ‘forbidden fruit’ status on its most potentially profitable product, restricting access to a narrow, elite coalition.

    The foundation of this initiative is the Claude Mythos Preview, a model that has autonomously identified thousands of zero-day vulnerabilities in the most critical systems, such as the Linux kernel and FFmpeg libraries, in testing. The ability to generate exploits autonomously without human intervention pushes the boundary between a programmer’s assistant and an autonomous cyber actor.

    This is where the first of a series of ironies arises: the technology that is supposed to protect the infrastructure is at the same time the most effective tool to dismantle it. Anthropic, by choosing to isolate the model, becomes the de facto guardian of global digital immunity, which raises questions about the legitimacy of such power in the hands of a private entity.

    However, the credibility of this role has recently been put to the test by a series of mundane incidents. The leak of strategic plans due to a misconfiguration of the CMS system and the accidental release of Claude Code source code are mistakes that the literature refers to as ‘poor operational hygiene’.

    The contrast between the near-divine power of the Mythos model and the trivial human error in packaging npm libraries is striking. This suggests that the greatest security threat is not the lack of sophisticated algorithms, but the invariable fallibility of the human link. Anthropic argues that these errors do not compromise the architecture of the model itself, but to the market observer they are a reminder that even the most powerful shield is only as strong as the hand that holds it.

    The structure of the alliance formed around Glasswing is a phenomenon in itself. The sight of Microsoft, Google, AWS and Apple working together under the aegis of a single startup on joint access to Claude Mythos is testament to the seriousness of the situation. It is a coalition forced by the biology of the digital threat. Traditional methods of patching software holes have become an anachronism in the face of AI, which reduces the time from vulnerability discovery to exploitation from months to minutes.

    Technology giants have understood that in the current market dynamics, no one can survive alone. Ecosystem security has become a common good, the protection of which requires a ceasefire on the battlefields of cloud or hardware market share.

    The initiative also sheds new light on the future of open source software. The allocation of $100 million in computing credits and direct donations to organisations such as the Linux Foundation is an attempt to bridge the historic gap.

    For decades, open code security has relied on the heroism of unpaid volunteers. Glasswing brings the industrial precision of AI auditing to this area, changing the rules of the game. Instead of inundating developers with thousands of bug reports, the system offers human-verified fixes, which is crucial to maintaining the stability of the global network.

    Managing such a huge number of zero-day vulnerabilities is a logistical challenge, which Anthropic solves through prioritisation and a strict timeframe. The 45-day timeframe between discovery and the publication of technical details gives vendors the necessary margin to implement safeguards. It is a process that transforms the chaos of discovery into an orderly stream of updates, giving digital defence a proactive character. In this model, AI is no longer just a tool, but an integral part of the cyber security chain of command.

    Ultimately, the Glasswing Project should be seen as an attempt to establish a new ontology in the IT industry. Anthropic does not sell a product, but offers membership to an early warning system. It is a business model based on exclusivity of responsibility. While sceptics may see this as an attempt to monopolise access to the most advanced security research, it is hard to ignore the fact that the alternative is an uncontrolled arms race in which the first better actors with hostile intentions could use similar technology to paralyse countries and economies.

    The future of the Glasswing project will show whether the trust placed in Anthropic by the world’s largest corporations was justified. For the moment, the initiative appears to be the only available way out of an impasse in which the pace of innovation has begun to threaten its own fruits.

  • Attacks on US critical infrastructure. How Iran exploited flaws in the OT

    Attacks on US critical infrastructure. How Iran exploited flaws in the OT

    The false sense of security of modern infrastructure is shattered not by sophisticated algorithms, but by mundane negligence, which in the hands of state actors is gaining the status of a strategic weapon. Incidents targeting US operational technology systems prove that the weakest link in digital power can sometimes be a lack of elementary network hygiene, turning a routine configuration into a critical point for state stability.

    While the public debate revolves around mythical zero-day tools and sophisticated cyber-espionage, the reality turned out to be painfully trivial. The key to physical process control systems was not a new generation of digital lockpicks, but an open door that no one saw fit to close.

    Fundamental to this problem is the methodological regression of the aggressors. Traditionally, we view state-owned hacking groups as digital laboratories creating unique code with huge market value. Meanwhile, actions targeting the water or energy sectors reveal a shift towards an operational model based on cost efficiency.

    Instead of investing millions of dollars in finding unknown software vulnerabilities, the attackers used widely available scanners of network resources. In this new doctrine of ‘cyber-pragmatism’, it is not the hacker that adapts to the target, but the target that is chosen because of its public visibility and lack of elementary barriers such as unique passwords or multi-component authentication.

    This situation exposes a profound crisis in the concept of air-gapping, the physical isolation of operational technology (OT) systems from external networks. For decades, the belief in the security of PLC logic controllers or SCADA systems was based on their supposed inaccessibility. However, the Industry 4.0 paradigm, enforcing a constant flow of analytical data and the need to remotely service devices, has quietly and effectively crushed this wall.

    In many cases, systems that were listed as isolated in the documentation actually had active connections to the internet, configured on an ad hoc basis for the convenience of administrators or external providers. This ‘digital convenience’ has become the most effective ally of foreign intelligence.

    Operational technology has specific characteristics that make it extremely vulnerable to simple attacks. Unlike the dynamic world of IT, where the life cycle of hardware closes in a few years, industrial infrastructure is designed for decades. Many of the controllers currently in operation date back to a time when communication protocols such as Modbus were built with performance in mind, completely ignoring security aspects. In that world, trust was the default.

    Today, these same devices, lacking encryption or identity verification mechanisms, are rendered defenceless against anyone who can establish a communication session with them. This is not a bug in the code; it is a bug in the very design philosophy of systems that have suddenly gained global connectivity.

    An analytical look at the timing of these attacks allows us to see them as a form of digital signal diplomacy. These incidents occurred at a sensitive moment of international tensions, suggesting that their main objective was not total physical destruction, but a demonstration of capability. Hitting the municipal sector, often seen as less protected than military systems, allows the aggressor to dose the pressure with precision. It is a kind of proof of access – proof of having access to the critical switches of the state, which can be used as a bargaining chip at the negotiating table. Such a strategy allows operating below the threshold of open armed conflict, while creating real social and political unrest.

    It should be noted that attribution in cyberspace always remains subject to a degree of uncertainty, which favours a strategy of so-called plausible deniability. The use of simple tools and known vulnerabilities means that traces left by attackers can mimic the actions of amateur hacking groups or common cyber criminals. For the targeted state, this creates a doctrinal dilemma: how to respond to an incident that is technically primitive but strategically strikes at the heart of citizen security.

    The lessons learned are harsh for existing risk management models. Focusing resources on combating the most advanced threats while ignoring digital hygiene in the OT sphere is akin to building an armoured door in a house with open windows. The challenge is no longer simply to purchase more expensive AI-based defence systems, but to return to rigorous network segmentation and auditing of the simplest access settings.

  • How are NIS2 and DORA changing IT departments? New strategies in IT recruitment

    How are NIS2 and DORA changing IT departments? New strategies in IT recruitment

    Until recently, the IT security debate centred around the number of vacancies, treating the shortage of manpower as a major brake on growth. However, the SANS and GIAC Workforce Research 2026 report sheds a whole new light on this diagnosis. It turns out that it is not empty chairs that account for the fragility of systems, but the invisible to the naked eye gaps in the competencies of the people who already sit in those chairs. 60% of organisations have complete teams that, despite being fully staffed, remain vulnerable to modern threats.

    The dawn of regulatory engineering

    The traditional division between legal departments looking after the letter of the law and technical departments looking after the bits and bytes no longer exists. The exponential increase in the importance of regulatory compliance – from 40 to 95 per cent in just one year – has forced the birth of a new caste of specialists. Directives such as NIS2 or DORA have ceased to be regarded as an onerous bureaucratic obligation, becoming the foundation of job role design. Today’s job market is no longer simply looking for a systems administrator; it covets a regulatory engineer who can translate a rigorous regulatory framework into a cloud architecture.

    In March 2026, there were more than two and a half thousand active advertisements for AI and ML security engineers. This phenomenon shows that the market no longer believes in the versatility of former experts. Almost one in three companies has created dedicated positions for people operating at the intersection of artificial intelligence and data protection. This specialisation is not an aesthetic choice, but a necessity driven by the fact that it is at the intersection of new technologies and the lack of knowledge of how to secure them that 27 per cent of successful attacks occur.

    Foundation erosion and cognitive paralysis

    Automation, which was supposed to be a saviour for overloaded teams, has introduced an unexpected disruption to the HR ecosystem. Artificial intelligence has taken over entry-level tasks that for decades served as a natural testing ground for junior SOC analysts. By cutting out these career tiers, organisations have inadvertently dismantled the early training system for future experts. A generational gap is being created that cannot be bridged by ad hoc hiring, as the market lacks ready candidates to meet the exacting requirements of 2026.

    At the same time, the highest levels of human resources face a phenomenon known as ‘AI Fry’. This is a specific type of burnout resulting from the constant context-switching between numerous tools supported by artificial intelligence. Although these tools reduce manual analysis time, they paradoxically increase stress levels in 61 per cent of employees. The overabundance of data and the need to constantly verify the suggestions generated by the algorithms make even the most experienced professionals work at the limit of their cognitive capacity.

    New currency: Proof instead of a promise

    Competency verification has undergone the most radical transformation in the history of the IT sector. An academic degree, once the gold standard for recruitment, is now in the priorities of only 17 per cent of employers. In a world where technology becomes obsolete in quarterly cycles, a theoretical university foundation has given way to certifications and practical evidence of proficiency. For 64 per cent of leaders, it is the certificate that is the hard currency verifiable during an audit.

    This shift towards pragmatism forces organisations to use structured competency frameworks such as NICE or ECSF. They make it possible to precisely map the gaps in the team, turning the intuitive search for a ‘good IT professional’ into a mathematical operation of filling in the missing links in the security chain. Investing in the development of existing staff ceases to be seen as a benefit and becomes a key element of operational risk management.

    Education as a hard infrastructure component

    A common management mistake is to treat learning time as a resource that can be sacrificed in the name of day-to-day operations. However, the data is inexorable: 60 per cent of companies admit that it is pure workload that prevents necessary training, which in a straight line leads to project delays and weakened incident response. Teams trapped in reactive mode lose their ability to adapt, which, in the context of severe penalties for non-compliance with NIS2, becomes a real financial threat to the entire corporation.

  • Rowhammer attacks: is this the end of secure multi-tenancy? Why GPU-level isolation is now just an illusion

    Rowhammer attacks: is this the end of secure multi-tenancy? Why GPU-level isolation is now just an illusion

    The architecture of cloud computing resembles the structure of a modern glass office building. Companies rent spaces in it, trusting that robust door locks, monitoring systems and professional security guarantee complete privacy. In the IT world, these safeguards are encryption, virtualisation and logical process isolation. However, recent reports from the world of hardware security suggest that the foundations of this office building hide a structural flaw.

    Rowhammer-type attacks, transferred from classical operational memories to graphics processing units (GPUs), show that walls between cloud users can become transparent under the influence of appropriately directed electrical oscillations.

    Graphics chips equipped with GDDR6 memory have become the foundation of the artificial intelligence revolution. It is their enormous bandwidth that allows language models to be trained or gigantic data sets to be analysed in real time. For years, there was a belief that GPUs were a safe enclave, isolated from the vulnerabilities plaguing traditional CPUs.

    Research conducted by scientists at UNC Chapel Hill and Georgia Tech brutally verifies this optimism. It turns out that the physical proximity of memory cells in NVIDIA’s state-of-the-art chips, such as the Ampere and Ada Lovelace architectures, becomes their greatest weakness.

    The Rowhammer phenomenon is not a bug in the code that can be fixed with a simple software update. It is a defect resulting from the very physics of silicon and the drive for extreme miniaturisation. When a system repeatedly and at high frequency references a particular row of data in DRAM, an electromagnetic field is created that begins to affect neighbouring cells. This ‘leakage’ of energy can lead to a spontaneous change in the state of a bit – zeros become ones and ones become zeros. On a micro scale, this is a minor anomaly, but on a system scale, it is a tool to break down the door to the core of the operating system. By precisely manipulating these changes, an attacker can achieve privilege escalation, gaining full administrative access to the host.

    For the business world, which is moving its most valuable resources en masse to the public cloud, this information is of strategic importance. The resource-sharing model, known as multi-tenancy, is based on the assumption that one client’s processes are completely separate from another client’s operations, even if they share the same physical GPU. The discovery of the GDDRHammer and GeForge vulnerabilities casts a shadow over this assumption. A theoretical, but evidence-based, possibility arises in which an entity with bad intentions rents a low-cost GPU instance on the same platform as a large financial institution or pharmaceutical company, and then uses the physical properties of the hardware to spy on its ‘neighbour’.

    The risks go beyond simple file theft. In the age of the AI arms race, a company’s most valuable asset is model weights and training data. By taking control of GPU memory, this information can be extracted, de facto stealing the competitive advantage developed over years. Moreover, cloud providers operate under a shared responsibility model. While they guarantee the security of the logical and network layers, they are rarely able to fully protect against fundamental design flaws in the processors themselves, especially when hardware manufacturers such as NVIDIA suggest using solutions with limited effectiveness.

    Proposed methods of mitigating these attacks, such as the inclusion of error correction codes or IOMMU memory management units, are only a partial barrier. A key concern for IT decision-makers becomes the economic calculus. The inclusion of full protection mechanisms is almost always associated with a perceived decrease in computing performance and available memory. In business realities, where model training time translates directly into costs of thousands of dollars, the choice between absolute security and operational efficiency becomes a difficult management dilemma.

    A key task for technical directors and security officers is becoming a new classification of resources. Not every process requires the highest degree of isolation, but projects critical to the future of the business may require a revision of the public cloud approach. Bare metal solutions, where the customer is given exclusive access to a physical server, or building dedicated private clouds, are no longer the domain of the paranoid and are becoming a rational response to the physical limitations of modern silicon.

    The 2026 audit of cloud service providers should include not only ISO certifications, but also specific questions about physical isolation architecture at the GPU level. A mature business needs to understand that as technology approaches physical barriers, traditional software security methods are becoming insufficient. Rowhammer on the GPU signals that it is time for a new era of hardware hygiene, where awareness of the limitations of matter is as important as the quality of the code being written.

  • The CIO’s dilemma: How to reconcile speed of development with maximum protection?

    The CIO’s dilemma: How to reconcile speed of development with maximum protection?

    Business architecture resembles a complex organism in which the flow of information determines survival and growth. For decades, those in charge of technology strategy in companies have operated within a paradigm that today is becoming not only inefficient, but downright risky. The traditional division of roles, in which one group of specialists built efficient data buses and another – often in some isolation – sought to secure them, is becoming a thing of the past.

    When security is tacked on as the final piece of the puzzle, it ceases to serve its purpose. It becomes a brake, a generator of unnecessary costs and, worst of all, a source of a false sense of control.

    Historically, the primary responsibility of CIOs has been to ensure operational and process continuity. Protecting digital assets has been treated as a necessary but secondary add-on, often implemented in response to emerging threats. Today’s regulatory landscape, boardroom pressures and unprecedented technological fragmentation, however, have forced a complete reversal of this order.

    Security is no longer a finish line to aim for, but a foundation without which modern business cannot take off at all. Accepting the premise that security must be an integral part of the design phase is not just a technical requirement, but above all a business maturity.

    For years, IT directors have been grappling with a classic dilemma: how to accelerate digital transformation while raising the security bar, operating within strictly defined budgets. In the traditional view, these two objectives appear to be mutually exclusive. Any additional security is seen as a layer that increases latency, and any attempt to speed up the network is seen as a risky exposure of the guard.

    This tension, however, is largely an illusion resulting from managing the two disciplines as independent mechanisms. The problem lies not in the sheer desire to be fast and safe at the same time, but in the architectural fragmentation that makes these systems constantly compete with each other instead of working together.

    Complexity has become the silent enemy of efficiency. For years, enterprises had been amassing point solutions from different vendors, building ecosystems consisting of dozens of independent consoles, agents and rule sets. Each new piece of this puzzle, while theoretically enhancing a particular slice of protection, actually generated more operational friction.

    Deadlocks were created and IT teams wasted time manually correlating data from multiple incompatible sources. In such an environment, business agility becomes a purely theoretical concept, as every attempt to change the configuration or implement a new service requires painstaking reconciliation of conflicting security and network policies.

    The solution to this crisis is convergence, i.e. adopting an operational model based on unified platforms that integrate network and security into a single, consistent data source. When these two worlds begin to speak the same language, the conflict of interest disappears. Security ceases to be an external filter and becomes a native function of the infrastructure itself.

    This allows for unprecedented operational clarity, even in the most distributed environments, from local data centres to public clouds and remote access points. With this approach, it is possible to drastically reduce the time it takes to detect anomalies and stop incidents before they can have a real impact on the company’s bottom line.

    When security is natively built into the network fabric, optimisation occurs that cannot be achieved by layer-by-layer methods. Systems respond more smoothly because the need for multiple inspections of the same packets by separate devices is eliminated. At the same time, policy consistency becomes a reality – the same access and protection rules apply whether an employee logs in from the company’s head office or home office.

    It is also worth noting that no platform, even the most advanced, can replace human intelligence, but it can significantly multiply its capabilities. The talent deficit in the area of cyber security is a structural challenge faced by almost every industry. In this context, artificial intelligence and automation are becoming key tools in the hands of the CIO.

    Properly integrated into the operations platform, this technology allows for instant analysis of patterns, summarising alerts and taking over repetitive, tedious tasks. This allows highly skilled professionals to focus on strategic operations and creative problem-solving, rather than getting lost in a thicket of false alerts.

    The evolution of the IT director’s role today is shifting from managing technology to building business resilience. Unified architectures are becoming the most important ally in this process. They allow regulatory requirements and compliance issues to be transformed from an onerous obligation into a natural, automated process. Instead of a constant race against time and attempts to patch more vulnerabilities, the organisation gains a solid foundation that supports innovation.

    Safety in this way is akin to the assistance systems in a modern racing car. They are not installed to make the driver go slower, but so that he or she can drive at maximum speed with complete confidence in the machine, confident that in a critical situation the systems will react faster and more precisely than he or she can.

  • Why is NIS2 a revolution in management, not just a change in IT?

    Why is NIS2 a revolution in management, not just a change in IT?

    For decades, there was an unwritten belief in the corporate world that cyber security was the domain of basements and server rooms – an airtight world of zeros and ones in which IT directors acted as isolated gatekeepers. Boards treated digital risk issues as a necessary evil, an operational cost to be minimised, or a technical glitch that could be fixed with the next software update.

    This comfortable distance is just now becoming history. The introduction of the EU’s NIS2 directive is not just another regulatory change; it is a fundamental redefinition of corporate governance that makes information security as much a part of reporting as the bottom line or market strategy.

    Fundamental to this change is the understanding that in the modern economy there is no longer a divide between business and technology. Every business process, from the supply chain to the customer relationship, is inextricably intertwined with the digital infrastructure.

    Thus, any gap in this infrastructure becomes a gap at the heart of the organisation. NIS2 recognises this relationship, shifting the burden of responsibility from administrators directly onto the shoulders of top management. In the new state of the law, lack of knowledge of the state of security is no longer a line of defence, but becomes evidence of gross negligence in oversight.

    A new definition of leader responsibility

    The evolution of regulations introduces a mechanism that can be called personal responsibility for digital resilience. Governing bodies are now obliged not only to approve cyber security budgets, but more importantly to actively oversee the implementation of risk management measures. This is a subtle but crucial difference. It is no longer enough to sign a document prepared by the technical department; what is required is an understanding of how these measures correlate with the business continuity of the company.

    It is worth noting that the sanctions envisaged by the regulator go far beyond severe financial penalties, which can run into millions of euros. The most painful supervisory instrument may turn out to be the possibility of temporarily suspending executives from performing their duties. This signals that the legislator is treating cyber security as an elementary duty of care, just like taking care of liquidity or complying with environmental standards. Risk management therefore ceases to be a project with an end date and becomes an ongoing process that must be reported and monitored at the highest levels of the organisational structure.

    The trap of paper compliance

    Many businesses fall into the trap of creating extensive libraries of policies and procedures that, in theory, make the organisation compliant. However, NIS2 presents businesses with a much more difficult task: demonstrating the real effectiveness of these measures. Documentation that is not reflected in employees’ daily habits and viable defence scenarios is worthless in the face of an incident. Regulators will increasingly ask not whether a company has a security policy in place, but how that policy has stood the test of reality.

    In this context, safety culture, which is an auditable resource, becomes crucial. Since statistics inexorably show that most breaches originate from human decisions – often made under time pressure or as a result of routine – it is the behavioural resilience of staff that becomes the most valuable quality certificate. For management, this means investing in solutions that measure staff preparedness. Evidence of staff’s ability to recognise a threat and react according to protocol becomes much more convincing in the eyes of the auditor than the fact of having the most expensive technical solutions that can be circumvented with one careless click.

    Security as the foundation of market value

    While the new regulations are sometimes seen as an administrative burden, forward-looking leaders see them as an opportunity to build a sustainable competitive advantage. The domino mechanism that NIS2 introduces for supply chain verification makes each company a link in a larger system of interconnected vessels. Companies that can prove their digital maturity become partners of first choice. Transparency in the area of cyber security builds trust not only with counterparties, but also with investors and financial institutions, for whom operational stability is a key indicator of a company’s valuation.

    Modern leadership maturity also manifests itself in the acceptance that absolute network invulnerability is a myth. Instead of striving for impossible technical perfection, the focus is on resilience – the ability of an organisation to survive an incident and return to full operational capability in no time. This approach removes the odium of a technical problem from cyber security and gives it the status of strategic crisis management.

    Horizon of change for modern management

    When facing the enforcement of new regulations, organisations need a clear plan of action that goes beyond IT. The first step is always to educate executives themselves so that they can dialogue with technical experts without feeling excluded from the discourse. Next, there needs to be robust verification of the effectiveness of the safeguards in place through resilience tests that reflect real threats, not just theoretical models. Finally, a shift in the investment vector towards human capital is needed.

    Ultimately, the NIS2 directive promotes a vision of a business that is aware of its vulnerabilities and actively manages them. It is not a bureaucratic hurdle, but a signpost showing how to build an organisation capable of operating in a world where information is the most valuable currency and its loss the greatest threat. True corporate resilience is born where advanced technology meets conscious leadership, creating a system that protects not only the data, but more importantly the value and future of the entire enterprise.

  • ISO 27001 in business: Why is certification an investment, not a cost?

    ISO 27001 in business: Why is certification an investment, not a cost?

    Only a decade ago, digitalisation was seen as an optional enhancement; today, it is fundamental to existence. With this evolution, the security paradigm has changed dramatically. The question of whether an organisation protects its information assets has given way to a much more stringent demand: how is a company able to prove its resilience in a world full of digital turbulence?

    “Security by accident” is irrevocably passing, giving way to professional risk management, of which the international ISO 27001 standard has become a symbol.

    Psychology of trust

    In B2B relationships, trust is rarely a matter of intuition, but increasingly the result of cold calculation and verifiable evidence. In this setting, ISO 27001 certification acts as a kind of ‘social proof’ at corporate level.

    For a potential counterparty, especially in international markets, having a partner with a structured Information Security Management System (ISMS) is a signal of operational maturity. It drastically shortens due diligence processes and reduces the decision-making resistance that often arises with high-risk contracts.

    This phenomenon can be described as security psychology. The customer, when entrusting his data to a third-party company, is looking for guarantees that it will not become the weakest link in his own value chain. The implementation of the standard transforms security from an abstract concept into a measurable process.

    This makes the certificate a viable commercial asset, opening the door to public tenders and cooperation with global giants for whom the lack of documented protection procedures is an insurmountable barrier.

    The foundation for a stable scaling organisation

    One of the most common cognitive errors in management is to see ISO standards as a bureaucratic corset that restrains company dynamics. The reality, however, presents itself quite differently. ISO 27001 provides a framework that brings order where rapid growth could create chaos. In organisations scaling their operations, the lack of structured information flow processes becomes a bottleneck, generating errors and unnecessary costs.

    Applying the PDCA (Plan-Do-Check-Act) model in the context of information security teaches an organisation to be systematic. It is a mechanism for continuous improvement that goes beyond the purely technological sphere to affect overall management effectiveness.

    A clear definition of roles, responsibilities and procedures ensures that the organisation is not plunged into decision paralysis in crisis situations. Instead of improvising, the team follows a pre-tested scenario, which minimises the impact of potential failures and allows for a rapid return to full operational efficiency.

    A holistic view of human capital and work culture

    An oft-repeated myth is the belief that information security is the domain of IT departments alone. The ISO 27001 standard places a strong emphasis on the fact that the most modern firewall is useless if the human factor fails. A holistic approach to ISMS assumes that security is embedded in the company culture and is not just a technological overlay.

    Traditional control methods are no longer effective. Education and awareness-building for workers become key elements of a protection strategy. Rather than imposing restrictive prohibitions that workers will try to circumvent in the name of convenience, the standard promotes an understanding of risk.

    A well-instructed team becomes the first and most effective line of defence, which in turn allows for greater flexibility and freedom in the choice of working tools while maintaining full data integrity.

    Profitability of protection vs. real return on investment

    When considering the implementation of ISO 27001, the financial aspect cannot be overlooked. Although certification requires an investment of time and resources, it should be seen in terms of smart insurance and a high-return investment. The cost of a single major data breach incident – including legal penalties, damages, loss of reputation and downtime – many times outweighs the expense of building a management system.

    Risk analysis, the heart of the standard, allows resources to be precisely located where they are needed most. Companies often waste budgets on haphazard technological solutions, while real risks lurk in underdeveloped internal processes. ISO 27001 forces the rationalisation of this expenditure. Furthermore, higher resilience against internal errors and technical failures directly translates into financial stability.

    In the eyes of investors and financial institutions, a certified company is an entity with a much lower risk profile, which can result in more favourable financing or business insurance terms.

    Security as the backbone of a modern brand

    The implementation of ISO 27001 is a defining moment in the development of a company. It is a shift from reactive firefighting to proactive management of the future. In a world where digital transformation is no longer a choice but a necessity, information security is becoming an integral part of business ethics and brand promise.

    Organisations that opt for a structured approach to protecting their most valuable assets gain more than just a certificate on the wall. They gain operational certainty, the trust of their most demanding customers and a foundation that allows them to safely experiment with new business models.

    Understood as a strategic ‘Business Enabler’, information security ceases to be a burden and becomes the drive that allows a company to aspire to the top league of global business.

  • Secure artificial intelligence in business – how to protect your business?

    Secure artificial intelligence in business – how to protect your business?

    As the use of AI in business increases, so does the risk of digital incidents, for which many organisations are still not fully prepared. Less than a third of companies in Poland believe they are cyber resilient. Palo Alto Networks presents new solutions that allow companies to effectively protect systems, control risks and use AI tools securely.

    Artificial intelligence is increasingly becoming part of companies’ day-to-day operations, but along with its applications, digital risks are also increasing. Data shows that only 29% of companies rate their level of cyber resilience as high or very high. One in five organisations has experienced a security incident in the last year and 85% of market experts warn that the dynamics of threats will soon overtake the defensive capabilities of business if they do not become more proactive in cyber security. These figures are a good indication that, despite the growing interest in AI in business, companies’ security systems are still failing to keep up with the pace of change, not only in technology, but also in growing threats.

    In response to these challenges, Palo Alto Networks is introducing three new cyber security solutions: Next-Generation Trust Security (NGTS), Prisma Browser with extensions within Prisma SASE, and Prisma AIRS 3.0. Each addresses specific business needs – from automating digital certificate management, to securing work with autonomous AI agents in the browser, to monitoring and controlling AI agents across the IT environment.

    In the context of digital certificates, it is particularly important to manage them properly, which is still a fragmented and error-prone process in many organisations. The answer to this challenge is Next-Generation Trust Security (NGTS), a platform that automates the management of digital certificates and helps companies make their systems more resilient to cyber threats. In the face of shrinking security certificate validity periods and increasing requirements for post-quantum encryption standards, NGTS helps companies prevent downtime due to certificate expiry. The solution uses technology from CyberArk (a Palo Alto Networks company) to control machine identities, automatically update certificates and monitor key network resources, allowing companies to maintain a high level of system security, minimising the risk of failure.

    – Expired or non-compliant certificates can cause downtime for applications, infrastructure and cloud services. Manual updates are time-consuming and require the coordination of multiple teams, and with the increasing scale and pace of operations, this way of working becomes inefficient. With NGTS and our post-quantum era secure platform, the network allows certificates to be controlled and updated automatically – emphasises Wojciech Gołębiowski, vice president and managing director of Palo Alto Networks in Central and Eastern Europe.

    The change in the way we work with AI can also be seen at the level of the tools that employees use on a daily basis, in particular web browsers. The Prisma Browser with extensions within Prisma SASE – a web browser that monitors the activity of AI agents, restricts their access to selected resources and protects against prompt injection attacks and third-party takeovers – addresses these needs. The browser also distinguishes between actions performed by humans and those performed by autonomous systems, making it easier to comply with existing AI regulations. This enables companies to use autonomous AI tools in their day-to-day processes while maintaining the security and stability of their systems.

    As the number of AI agents grows, so does the need for better control over their performance and the assessment of the risks they generate. The third solution from Palo Alto Networks is Prisma AIRS 3.0, a platform that allows companies to monitor the performance of AI agents in real time, whether in the cloud, SaaS systems or on end devices. Prisma AIRS 3.0 assesses agent risk, detects security vulnerabilities and recommends how to effectively secure them. The platform also allows agents to be managed and controlled more effectively and their permissions verified, facilitating the secure deployment of AI-based tools such as encryption agents, while maintaining the highest security standards.

    – Agentic AI is a significant step forward, where it stops being just a conversational tool and starts performing tasks on its own, changing the way we work and impacting productivity. When AI ceases to be just a conversational tool and begins to perform tasks autonomously, new challenges arise, such as more difficult control of agents and unpredictable behaviour during their operation. In response to these challenges, Prisma AIRS 3.0 provides a comprehensive platform that monitors AI agents in real time, assesses the risk of their actions and secures systems against threats, comments Wojciech Gołębiowski.

    The growing role of artificial intelligence in business means that security issues are no longer solely the domain of IT teams, but are becoming one of the key elements of organisational management. In practice, this means regularly reviewing the solutions used, updating knowledge and adapting security policies to the changing technological environment. This is the only way companies can effectively exploit the potential of AI, while mitigating risks and maintaining control over key processes.


    Source: Palo Alto Networks

  • What is digital resilience and why does business need it?

    What is digital resilience and why does business need it?

    Modern business operates in an environment of permanent risk. Digitalisation has dramatically increased the speed of processes while introducing systemic fragility. Companies operate in a dense network of relationships with cloud providers, external platforms and distributed data centres. Such a model means that a failure in a remote technology node can bring an entity’s sales or logistics on the other side of the world to a halt in minutes. Market positioning today is determined by digital resilience – the technical ability to continue operations despite errors and downtime.

    Foundations of systemic sustainability

    Building a resilient organisation requires the implementation of specific architectural solutions. A key tool is a flexible system structure based on microservices and redundancy. Instead of monolithic structures where one fault paralyses the whole, modules capable of isolating failures are used. These systems autonomously repair faulty fragments or switch processes to backup paths without human intervention.

    Active management of the technological supply chain is the second pillar of stability. Responsibility for processes does not end at the office door. It requires full operational transparency with technology partners and having viable exit scenarios (exit strategies). SLAs are only a legal instrument; real security is guaranteed by the technical ability to quickly migrate data and services in the event of a loss of stability by the supplier.

    Proactivity over reaction

    Mature organisations are replacing a culture of firefighting with automated monitoring of critical processes. Real-time systems analyse deviations from the norm and allow a response before the problem affects the end customer. Digital resilience is about accurately measuring every step of the transaction and automating the detection of bottlenecks.

    Integrating technical competence with business decisions is an essential part of this strategy. Every selection of a new IT tool has a real impact on the degree of operational risk for the entire company. Executives need to understand the technological underpinnings of the business, while IT departments need to demonstrate a complete orientation to market objectives. A common language between the two areas eliminates the information silos that are the greatest burden during a crisis.

    Operational continuity as a market argument

    Digital resilience is a company’s highest insurance policy. Service interruptions are statistically inevitable, so customer trust is built by the speed of recovery. Businesses with emergency procedures and a consistent recovery plan gain a measurable competitive advantage.

    The ultimate test for a business remains agility during a crisis, rather than simply avoiding it. Investment in digital sustainability directly translates into financial stability and brand credibility. The ability to maintain operational liquidity, regardless of external perturbations, defines the modern, mature enterprise today.

  • How do you effectively protect your company data? Tips for World Backup Day

    How do you effectively protect your company data? Tips for World Backup Day

    According to IBM Security’s 2025 Cost of a Data Breach Report study, as recently as 20 years ago, as many as 45% of data leaks resulted from the loss of devices such as laptops and flash drives. Today, the scale is smaller, but incidents of this type still occur and are noted in security reports. So how do you protect your data from leakage? On the occasion of World Backup Day, which falls on 31 March, Kingston’s expert advises on how to effectively protect your information from falling into the wrong hands and your business from reputational and legal consequences.

    In modern times, data is undoubtedly one of the most important assets that businesses have. At the same time, as the amount of information stored increases, so does the scale of the consequences when it is lost through hardware failure, cyber attack or human error. Therefore, backup is no longer a matter of choice or an optional activity, but a necessity.

    – Making backups is a very sensitive process, as copies usually contain all of a company’s most important and sensitive data. Therefore, their protection strategy should include a provision on how to secure copy media. Such an approach will ensure peace of mind for the company’s management and IT staff, as it eliminates the risk of violating the law, in particular the Personal Data Protection Regulation,’ says Robert Sepeta, Business Development Manager at Kingston.

    Key data, greatest protection

    The first step towards ensuring data security should be to identify those that require the most protection. Typically, this is financial and operational information, personal or legally protected data, or other information with the greatest impact on the business. As the results of this analysis can have a direct impact on business continuity, it should be done by management together with IT professionals who know how the data is currently stored and what threats it is exposed to. It is also worth checking that, in addition to the information processed by the company, backups are made of operating system images and applications.

    One of the most basic data storage strategies is the 3-2-1 rule, which requires three copies of the same data, on two different storage media, one of which should be kept in a separate location, outside of the production environment. This method will significantly increase data security and minimise the risk of data loss due to physical factors as well as cyber attacks.

    Once you have chosen the media for the backups, you need to plan the schedule for the backups. This too depends on a business decision rather than a technical one. For each type of data, the company’s management should determine the period of time during which copies will not be made, so that information produced or acquired during this period may be lost. The frequency with which copies are made, and therefore the associated costs, will depend on this. Data from some types of systems (less critical or less frequently updated) will only need to be backed up once a week, but from others, even several times a day. At the same time, it is important to determine the time during which the possibility of recovering data or restoring IT systems is guaranteed. It is also always necessary to remember to verify the correctness of the backup and the recoverability of the data.

    Storing backups – in the cloud or on disk?

    An important recommendation of the 3-2-1 rule is to store one of your data copies off-site to protect against theft or the effect of disasters such as fire, flood or building collapse. One possible method is to store a copy of the data in the cloud. This is one of the simplest and most convenient solutions – it allows automation of the copying process and access to data from anywhere. However, this method has three drawbacks: the time to access the data is long if a large amount of data needs to be restored to the production environment, the security of this data against cyber attack is sometimes insufficient, and – most importantly – many entities processing sensitive data (medical, financial) cannot use this method due to legal restrictions.

    In such a situation, a good solution is to use USB-connected external media, which, according to the 3-2-1 principle, will be a second type of backup medium. Depending on your budget or preferences, you can opt for mechanical drives, which usually offer more capacity at a lower price, or solid-state drives (SSD), which are more expensive but offer greater data durability, resistance to mechanical damage, speed of data transfer, and some models also have a built-in hardware encryption module.

    – Nowadays, encrypting data on portable drives, especially if these are copies of the most important sensitive company information, is almost mandatory. This is the most effective way to ensure that data does not leak if the media is lost. Professional encrypted media are equipped with a number of protective mechanisms, including the function of destroying the data on them if an incorrect password is entered several times,” says Robert Sepeta.

    Nowadays, in the face of increasing threats, which are mainly legal consequences due to data leakage, it is worth paying special attention to the way information is stored. Disclosure of sensitive information can have a significant impact on the operation of a company through high costs associated with litigation or damage to its image. Implementing a few simple rules, the cost of which is several orders of magnitude lower than the potential losses, will allow you to protect your data and thus maintain a stable and secure position in the market.


    Source: Kingston

  • Iranian hackers broke into the FBI director. Private data leaked

    Iranian hackers broke into the FBI director. Private data leaked

    The hacking of FBI Director Kash Patel’s private email inbox by Handal’s group – associated with Iranian intelligence – is more than just a tabloid leak of photos with cigars and rum in the background. It is a precisely aimed message in a psychological war that is increasingly permeating from the government sphere into the private sector. While the FBI is reassuring that the stolen data is historical and does not contain state secrets, the incident exposes a gap in the security architecture of modern leaders: the blurring line between the professional and personal spheres.

    A strategy of public humiliation

    Experts, including Gil Messing of Check Point, point to a clear shift in Iran’s tactics. Instead of sophisticated attacks on critical infrastructure, which could be met with a devastating military response, Tehran is betting on hack-and-leak operations. The aim is simple: to make US policymakers feel vulnerable. The publication of Patel’s private correspondence from 2010-2019 is intended to show that no one, not even the head of the Federal Bureau of Investigation, is beyond the reach of the Islamic republic’s digital tentacles.

    Risks to business: The case of Stryker and Lockheed

    For business, however, the most important wake-up call is not the attack on Patel, but Handala’s parallel actions targeting giants such as Stryker and Lockheed Martin. The group is not confined to politics; it is hitting medical supply chains and the data of defence workers. This shows that Iran’s cyber units treat corporations as an extension of state targets. Leaking employee data in the Middle East is a direct physical threat that goes beyond the typical cybercrime.

    The Patel incident is reminiscent of the 2016 scenario and the hacking of John Podesta’s inbox. Despite the passage of a decade, relatively simple breaches of private Gmail or AOL accounts remain the most effective method of infiltration. For executives, there is one lesson here: digital hygiene in private life is now an integral part of corporate risk management. A private message from a decade ago can become a weapon in today’s conflict. Iran, analysts suggest, is ‘firing everything it has’, heralding a series of further leaks targeting those closest to the administration and key industries.

  • Cyber security in business – Traps of apparent protection and audits

    Cyber security in business – Traps of apparent protection and audits

    Many organisations, seeking to ensure operational continuity, have based their defence foundations on cyclic penetration testing. This is a solid, even indispensable, foundation, but in the current technological realities it is beginning to resemble building a moat around a castle in the aviation age. While the presence of safeguards gives executives the desired peace of mind, it is often a peace of mind based on fragile assumptions. The problem is that the traditional approach to systems verification is increasingly becoming a form of security theatre, where the main focus is not on real defensive skills, but on the satisfaction of putting a green marker on an audit tally.

    Traditional penetration tests, while substantive and necessary, are inherently limited exercises. They take place in a controlled environment, have a well-defined timeframe and budget, and are constrained by a contract between vendor and client. Meanwhile, a true hacker collective does not operate under any contract. For an attacker, there is no concept of ‘scope of work’ or ‘operational hours’. The real threat is characterised by unpredictability, flexibility and the absence of any rules of the game. While the auditor checks the strength of a particular lock on the front door, the real aggressor patiently searches for an unlatched window in the basement or analyses the fatigue of a guard in order to get inside without using force.

    The biggest weakness of conventional simulations is their predictability. Most tests focus on examining the infrastructure from the defender’s point of view, looking at the technologies and processes that seem most logical. However, what is logical to a systems engineer rarely coincides with the creative chaos that cyber criminals sow. They exploit often overlooked systems, operational weaknesses and attack vectors that escape standard methodologies. In this clash, asymmetry works in the attacker’s favour: he only needs to succeed once, while the organisation needs to defend itself effectively every time, on every front.

    What is becoming particularly worrying is the evolution of social engineering, which has become frighteningly effective in the age of the ubiquity of artificial intelligence. Formerly primitive phishing attempts have given way to sophisticated campaigns in which the language barrier no longer exists. The use of AI makes it possible to create messages with such a high degree of authenticity that distinguishing them from official correspondence becomes a challenge even for conscious users. Voice cloning, generating real-looking service numbers or preparing emails from legitimate-looking domains are techniques that build enormous psychological pressure on employees. In such a scenario, the individual, despite their best intentions, becomes an unwitting accomplice of the criminal. Unfortunately, rarely does any company choose to incorporate such radical and realistic psychological testing into its standard security strategy, fearing to compromise team comfort or complicate procedures.

    Data from security reports gives a broader view of the directions in which attacks are evolving. The shift away from traditional Office documents with macros embedded in them to image files in SVG or IMG formats is a sign that hackers have left long-established paths. The situation is similar in cloud environments such as Azure, where the goal is no longer simply to take over data, but to master the control plane or use session tokens to bypass multi-component authentication. Focusing solely on the so-called crown jewels, the most important critical systems, while intuitive, can sometimes be short-sighted. Often, it is the marginal services, such as Key Vault or cloud-based automation functions, that become the beachhead from which an attacker can conduct silent surveillance of a network for months.

    The key to building real business resilience is a paradigm shift: moving from a simple wall-based defence to a holistic strategy focused on detection and response. Penetration testing should only be the starting point, not the ultimate goal. It becomes essential to implement procedures based on actual tactics, techniques and processes observed in active hacker groups. Only by systematically comparing its own defences with up-to-date threat intelligence is an organisation able to reduce an intruder’s time on the network and minimise potential losses.

    From a strategic management point of view, cyber security should not be seen as an IT cost, but as an immanent part of operational risk management. Too often, treating security testing merely as a compliance requirement leads to superficial assessments that give a false sense of protection. In fact, the tests that are most valuable to the business are those that expose weaknesses in the strategy, not those that validate the correct configuration of tools. The strategic need for action should come from an analysis of the most likely crisis scenarios, not from a desire for certification.

  • Artificial intelligence in IT – why investments don’t give quick returns?

    Artificial intelligence in IT – why investments don’t give quick returns?

    The cybersecurity landscape currently resembles a scene from the gold rush, where enthusiasm mixes with deep uncertainty and promises of instant profits clash with the cold pragmatism of spreadsheets.

    The latest data coming out of the consultancy sector, including widely reported analysis from EY, paints a fascinating picture, albeit one that is far from hugely optimistic. Almost every security leader (96%) sees AI as the cornerstone of modern defence, but when the battle dust of deployments settles, it appears that real return on investment remains an elusive mirage for many.

    This specific ‘agent paradox’ is becoming a focal point of discussion in Polish and global boardrooms. On the one hand, there is an almost religious faith in technology; on the other, there is a hard landing in the reality that half of organisations are unable to generate a satisfactory return from AI tools. In a business world where every zloty spent on IT must be justified by a measurable increase in efficiency, this situation is becoming increasingly difficult to accept without a deeper revision of existing strategies.

    Anatomy of costly optimism

    The disappointment resulting from the low ROI is not evidence of a weakness in the technology itself, but rather a testament to the immaturity of its implementation processes. Many organisations have fallen prey to the belief that artificial intelligence is a ‘boxed’ product that, once installed, will automatically patch the holes in the security system. Meanwhile, algorithms in cyber security act more like advanced surgical instruments – their effectiveness is directly correlated to the skills of the operator and the quality of the sterile environment in which they work.

    In the Polish business context, where IT budgets are often planned with great caution, investing in expensive licences without adequate analytics leads to dead resources. Companies are happy to buy an ‘engine’, forgetting the need to provide quality fuel in the form of structured data.

    As a result, sophisticated agent tools, instead of autonomously detecting APT threats, become mere costly notification generators that have to be verified by overloaded analysts anyway. The situation is complicated by the fact that the aggressors are not lagging behind. Since hackers are also using AI to automate attacks, simply having AI ceases to be a competitive advantage and becomes just a ticket to the game of survival.

    Agent = liability

    A key misunderstanding that inhibits return on investment is equating ‘task automation’ with ‘agency operations’. The former allows a machine to perform simple, repetitive tasks, freeing up precious minutes of human labour. However, the real potential lies in the latter – in autonomous agents capable of making split-second decisions. The problem is that moving to this level requires a huge amount of trust in the algorithm, which most organisations are not yet ready for.

    The lack of this trust manifests itself in a phenomenon known as the ‘black box’. Security leaders are afraid to hand over the reins to the machine because they do not understand the logic behind its operation, and the possible hallucinations of AI at critical moments in an attack could have catastrophic consequences. This leads to decision paralysis, where technology that is supposed to speed up the response, paradoxically slows it down by requiring multi-step human verification.

    Additionally, the labour market in Poland drastically verifies ambitious implementation plans. Staff shortages among specialists able not only to operate, but also train AI models, mean that even the best software remains untapped potential.

    The foundation for a new management culture

    Getting out of the low ROI quagmire requires a paradigm shift: from technology to management. Only a handful of companies (20%) have so far managed to integrate an AI management culture into day-to-day operations. The others treat these issues as an unpleasant regulatory obligation, rather than seeing them as an opportunity for optimisation. A robust governance framework is not just a set of prohibitions and prescriptions, it is first and foremost a mechanism to ensure the reliability of the data and the predictability of the algorithm’s actions.

    Without a precise definition of where machine autonomy ends and human responsibility begins, investment in AI will continue to generate more questions than answers in quarterly reports.

    From expenditure to capital

    In order for AI investments to begin to realistically earn their keep, organisations must abandon the vision of AI as a ‘silver bullet’ solving all cyber security problems at the click of a button. A successful strategy requires patience and focus in three key areas.

    The first is internal education to enable teams to work seamlessly with AI agents.

    The second is the standardisation of processes, without which even the most intelligent tool will get lost in organisational chaos.

    The third is a bold but controlled transition from the automation of single activities to complex agency operations.

    Instead of asking how much money can be saved with AI, business leaders should start asking how much a company’s resilience to incidents can be increased with the same human resources. After all, the value of AI in cyber security does not manifest itself in reduced licence costs, but in avoiding astronomical losses due to production downtime or reputational damage.

    In the Polish business ecosystem, the winners will be those who understand that the agent paradox is solved not by buying a newer version of software, but by managing wisely and rigorously what they already have.

    Investing in AI is a marathon in which the fastest start does not guarantee success. Only by combining technological finesse with corporate discipline will it be possible to surpass the magic million-dollar profit barrier and make algorithms truly viable allies in the digital war.

  • DarkSword and company security: How to protect company iPhones?

    DarkSword and company security: How to protect company iPhones?

    The public release on the GitHub platform of the code of the powerful hacking tool DarkSword moves the discussion of iOS security from the realm of niche threats into the mainstream of business risk. What was previously a precision instrument in the hands of sophisticated hacking groups has become a publicly available set of instructions, forcing IT departments to revise their mobile device management policies.

    DarkSword is not a single virus, but a complete zero-day exploit chain that Google Threat Intelligence Group has been tracking since November 2025. Its effectiveness is based on infecting devices running iOS 18.4 to 18.7. Although Apple has already released patches for version 26.3, the problem remains the ‘long tail’ of older devices and the speed at which the code spreads through the cybercrime ecosystem.

    In the hybrid working model, the phone is the digital key to company resources. The use of DarkSword allows the installation of malware families, such as GHOSTBLADE or GHOSTSABER, which go beyond simple SMS theft. Experts, including Steve Cobb of SecurityScorecard, point to a critical aspect of this threat: an infected phone becomes a beachhead for attacks on SaaS platforms, cloud environments and corporate authentication systems. An attacker no longer needs to breach corporate firewalls if they have access tokens stolen from an employee’s mobile device.

    This is exacerbated by the fact that DarkSword is the second such advanced leak in a short space of time, after the Coruna incident in March. This suggests a worrying professionalisation of the black market for spyware, where state-grade tools are becoming a common commodity. As AttackIQ’s Pete Luban notes, we are seeing a dangerous fusion of espionage with pure monetisation – the same data that serves intelligence in the morning can be used for financial theft in the evening.