Over the past eighteen months, the enterprise sector has moved from a fascination with generative artificial intelligence to a phase of actively implementing it into operational processes. A key trend in this evolution is the shift from passive language models (LLMs) to AI agents – autonomous systems capable not only of generating text but also of performing tasks: writing code, managing email communications, calling APIs or authorising financial transactions. With this agility, however, comes a critical new category of threats: Indirect Prompt Injection (IPI). Recent data from reports by Google and Forcepoint shed new light on the scale and sophistication of these attacks, suggesting that agent systems security will become one of the biggest challenges for chief information security officers (CISOs) in the coming years.
IPI mechanism: Data as instructions
Traditional prompt injection attacks relied on direct manipulation of the model by the user (e.g. attempting to ‘jailbreak’ a bot by giving it the command to ignore security). Indirect Prompt Injection is a much more insidious phenomenon. It involves inserting malicious instructions into content that the AI agent processes as input – this could be web pages, PDF documents, emails or code repositories.
The problem lies in the very architecture of current LLM models, which cannot absolutely separate system instructions (issued by the tool developer) from external data. When an AI agent analyses a web page in search of information, it may come across hidden text, which the model will interpret as a new overarching command. As a result, the attacker takes control of the agent’s logic, instructing it to, for example, send sensitive data to an external server or perform a destructive operation on the user’s file system.
Analysis of market trends
Google Security Research researchers, analysing CommonCrawl resources, point to an alarming trend. Between November 2025 and February 2026, there was a 32 per cent increase in the number of detected malicious injection attempts in publicly accessible web resources. This relatively short time frame demonstrates the dynamism with which the criminal community is adapting to new technologies.
From a market perspective, Google’s observation on cost-benefit calculus is key. Until recently, IPI attacks were considered the realm of academic research – they were difficult to implement and often failed due to the instability of the results generated by AI. Now, with the increased reliability and agility of agents, these attacks are becoming ‘viable’. AI’s ability to autonomously call external tools (tool calling) means that a successful injection of instructions has an immediate and measurable financial or operational impact.
The Google study allowed the current IPI trials to be categorised into five groups:
Harmless jokes: Attempts to change the tone of an agent’s response.
Helpful tips: Suggesting preferential answers to the model (often on the edge of ethics).
Optimisation for AI (AI-SEO):Hidden phrases to position products in assistants’ responses.
Deterring agents: Instructions prohibiting AI from indexing or summarising a particular page.
Malicious attacks: Data exfiltration and sabotage (deletion of files, destruction of backups).
Although the latter are often at an experimental stage at present, their increasing complexity suggests that it is only a matter of time before they enter the phase of mass attacks.
From coding assistants to financial transactions
The Forcepoint report provides concrete evidence of how IPI manifests itself in professional software and financial tools. Experts have identified ten verified indicators of attacks targeting popular tools such as GitHub Copilot, Cursor and Claude Code.
The attack scenario is mundane: a programmer uses an AI agent to analyse a library or documentation on an external site. This site contains a hidden AI instruction. When the agent ‘reads’ the site, it is instructed to execute a command in the terminal that destroys local backups. Since the agent has permission to operate on the file system (which is essential in a programmer’s job), the command can be executed without additional verification.
Even more dangerous are attempts at financial fraud. Forcepoint points to cases where complete transaction instructions are sewn into web content, e.g. PayPal.me links with a predefined amount along with step-by-step instructions on how the agent is to finalise the payment. In systems where AI has access to digital wallets or corporate payment systems, the risk of capital loss becomes immediate.
The paradox of detection and the challenges for business
One of the most worrying findings from the Forcepoint report is the so-called detection paradox. The phrases and keywords used by attackers to inject hints are identical to the terminology the cyber security community uses to describe and analyse these threats. This renders simple filters based on word blacklists ineffective – either blocking legitimate expert communications or letting intelligently worded attacks through.
Until recently, cyber security experts warned of a linear increase in threats. However, the year 2025 has brought a change that can be described as a statistical shock. The latest data from the CERT Polska team (NASK) shows that in the first six months of this year alone, cybercriminals created more than 100,000 domains used to extort data and money.
To understand the scale of this phenomenon, one only needs to look back: a total of 92,000 such addresses have been added to the Warning List in the entire year to date, a record year 2024. This means that the current dynamics of the criminal infrastructure is more than double that of a year ago. Cybercrime has ceased to be the domain of ‘hackers in hoodies’ and has become a scalable, automated business that runs faster than any legitimate startup.
The threat landscape: Investing in illusions
What is driving these statistics? While ‘undelivered package’ or ‘unpaid electricity bill’ attacks are still popular, the biggest jump was seen in the fake investment segment.
Several tens of thousands of the newly detected domains are professionally prepared traps, tempting with the promise of a sure, quick profit. Using images of well-known politicians, athletes or celebrities (often generated or animated by AI), fraudsters create platforms that, at first glance, are no different from legitimate brokerage house sites or cryptocurrency exchanges.
The increase in the number of detected sites is the result of two factors. On the one hand, CERT Polska’s detection systems are getting better and better. On the other – the barrier to entry into the world of cybercrime has fallen dramatically. Today, you do not need to write code to launch a phishing campaign. It is enough to buy ready-made tools in the “Phishing-as-a-Service” model.
Marketing the evil. How do modern scammers operate?
Experts are under no illusion – criminal groups are adopting the same techniques used by major advertising agencies. Targeting, A/B testing, sales psychology – these are all in the attackers’ arsenal today.
“Phishing and marketing have long gone hand in hand. Fraudsters use social engineering very similar to salespeople, i.e. imposing time pressure, assurances of uniqueness, promises of massive savings or profits, for example. Criminals advertise their websites on social media and search engines, profiling these adverts to the audience they think is most susceptible to a particular fraud scheme.” – notes Karol Bojke, an expert from CERT Polska.
Moreover, the technology that is supposed to serve us is becoming a weapon in the hands of attackers. “The use of AI only exacerbates problems that already exist (e.g. unlawful use of an image), and the ease of automation increases their scale,” – Bojke adds.
In this arms race, the CERT Polska Warning List, run since 2020, remains a key defensive tool. It is implemented by telecom operators, allowing millions of Poles to automatically block access to malicious sites. However, with the rate of hundreds of new domains per day, there is a question about the effectiveness of this solution.
Can the list keep up with the dynamics of criminals who can set up and roll up a fake shop in a few hours?
“A correctly implemented Warning List protects against threats detected up to five minutes earlier”. – Karol Bojke explains. However, the expert points out that technology is only half the battle. “The key here is cooperation and sharing information about new scams with the CERT Polska team – both from partner institutions and ‘ordinary’ Internet users. Public awareness in this area is growing, thanks to which the number of reports is increasing, but there is still a lot of work and education ahead of us. The private sector, of course, should also take care of its customers, so we encourage you to use our recommendations available on cert.pl.“
Behavioural resilience: The power of small habits
Since technology cannot catch 100% of threats, the last line of defence remains the human being. Here, however, is where the problem arises: years of scare-mongering about hackers have caused many users to develop security fatigue – the fatigue of constant warnings.
Therefore, the education paradigm is changing in 2025 . An example of the new approach is theSafe Zlotyscampaign, implemented by the Ministry of Finance in cooperation with the THINK! Foundation and NASK. It is part of a broader puzzle – the National Strategy for Financial Education. Decision-makers have understood that digital security is inextricably linked to financial security. Losing login details today is a simple way to lose your life savings.
But how do you teach effectively when your audience is bombarded with information?
“Habits are established not from hype announcements, but from small, repetitive steps. That’s why in Safe Zloty we combine knowledge with simple rules: I check the sender of the message, I don’t click on a link from an SMS if I don’t know who it’s from, I use two-step verification… This is the power of everyday habits.” – explains Anna Bichta, President of the THINK!
The expert stresses that the key to social resilience is to get out of the bubble of individualism. “It also manifests itself in sharing knowledge with those around you – relatives or neighbours,” adds Bichta.
Empathy instead of fear
The Safe Golds campaign also diagnoses the language used to talk about cyber security. To date, the narrative has often been based on technical jargon or the stigmatisation of victims (“how could you have clicked on that?”). Meanwhile, victims of investment fraud are not only older people, but increasingly young, digitally proficient people, deceived by the professionalism of fake platforms.
“We certainly need a language that doesn’t embarrass, but helps us understand our own emotions,” Anna Bichta emphasises.
It is the emotions – greed, fear, but also the hope of an improved existence – that are the attack vector. Clicking on a fake link is often not due to a lack of technical knowledge, but to the impulse of the moment.
“People click on ‘a certain opportunity’ because they want a quick sense of relief or hope for something good. That’s why in the campaign we focus on real stories and examples from which we draw practical conclusions without judging anyone,” concludes the President of the THINK! Foundation.
Cyber security as an economic competence
The involvement of the Ministry of Finance in the topic of phishing is a clear signal: cyber security has ceased to be a problem for IT departments and has become a key economic competence for every citizen. Monika Wojciechowska, Plenipotentiary of the Minister of Finance for the Financial Education Strategy, calls it explicitly “an investment in the financial resilience of society”.
In a reality where 100,000 new threats are created in half a year, it is impossible to completely eliminate risk. However, it is possible to manage it. However, this requires a combination of two worlds: hard technology (artificial intelligence on the CERT side, automatic domain locks) and soft skills (critical thinking, emotional control).
If 2025 is to bring a breakthrough in the fight against cybercrime, it will not come through a new anti-virus application, but through a massive change in habits. Stopping for three seconds before clicking on a link with a ‘super investment opportunity’ is the most effective firewall we can install today.
See? React.
Suspicious link SMS messages can be reported by forwarding them to the toll-free number 8080. Any other incidents and fake domains are worth reporting directly to incident.cert.co.uk. Each report shortens the life of a fake domain and could save another person’s savings.
In January 2024, an employee in the finance department at multinational engineering firm Arup received an email that appeared to be from the chief financial officer (CFO) at its UK headquarters. The email informed of a secret transaction and included an invitation to a video conference. The employee, although initially suspicious, joined the call. On the other side of the screen, he saw not only the CFO, but also several other board members he knew. Their appearance and voices were perfectly reproduced. Convinced of the authenticity of the meeting, over the next few days he authorised 15 transfers totalling $25.6 million. Only after the fact did he discover that he had been the victim of one of the most audacious frauds in history. All the participants in the video conference were digital clones, generated by artificial intelligence.
This incident is not a sci-fi movie scenario, but the brutal reality of a new era of cyber threats. Welcome to the world of Phishing 2.0 – an evolution of phishing that, thanks to artificial intelligence, machine learning and deepfake technology, has become more sophisticated, personalised and dangerous than ever before. Traditional attacks, which we have learned over the years to recognise by grammatical errors and generic phrases, are becoming a thing of the past. In their place are campaigns that are almost indistinguishable from authentic communication, precisely targeting specific individuals and capable of bypassing traditional defences.
Artificial intelligence is no longer just a tool that improves phishing; it is fundamentally redefining it. It democratises access to advanced attack techniques that were once the domain of only specialised hacking groups, and fuels an arms race in cyberspace. In this new reality, both attackers and defenders are engaged in a battle of algorithms, with data, finance and trust at stake as the foundation of the digital economy.
Feature
Phishing 1.0 (Before the era of AI)
Phishing 2.0 (AI-supported)
Language and grammar
Frequent errors, unnatural wording.
Perfect grammar, imitating the writing style of specific individuals.
Personalisation
General phrases such as “Dear Customer”.
Hyper-personalisation using social media data and public records.
Scale and speed
Manual, resource-limited campaigns.
Automated generation of thousands of unique messages in minutes.
Attack vectors
Mainly email.
Multichannel: email, SMS (smishing), voice calls (vishing), social media.
Avoidance tactics
Simple domain impersonation.
Dynamic page cloning, code obfuscation by AI, deepfake audio and video.
Required skills
Basic technical knowledge.
Low entry threshold with AI tools and Phishing-as-a-Service (PhaaS) platforms.
Anatomy of a Phishing 2.0 attack: An AI-driven arsenal
The modern phishing attack is a complex, multi-step process in which artificial intelligence plays a key role at every step. At the core of Phishing 2.0 are large language models (LLMs) such as GPT-4, as well as their uncensored, darknet-accessible counterparts such as WormGPT or FraudGPT. These tools have become an inexhaustible source of perfectly written, psychologically compelling content for cybercriminals. They eliminate grammatical errors, mimic the communication style of specific individuals and can create persuasive narratives from just a few simple commands.
The effectiveness of Phishing 2.0 is based on hyper-personalisation, and this depends on the quality of the data collected. Artificial intelligence has automated the reconnaissance (OSINT) process, systematically searching the digital footprint of potential victims. AI algorithms aggregate information from social media, corporate websites and public records to learn about the victim’s interests, professional relationships and recent activities. The collected data – the name of a project, a supervisor’s name or a recent holiday – is woven into the content of the message, making the scam appear extremely authentic.
Artificial intelligence has also enabled the mass production and distribution of attacks. ‘Phishing-as-a-Service’ (PhaaS) platforms have emerged, such as ‘SpamGPT’, which mimic the interface of legitimate marketing services but serve a criminal purpose. They offer an integrated AI assistant for generating templates, automating mailings and tracking analytics, allowing even those with few technical skills to conduct sophisticated large-scale operations.
One of the biggest challenges is Phishing 2.0’s ability to bypass traditional security filters. AI is used here to create dynamic threats. AI tools can create perfect, real-time updated replicas of legitimate login pages. Analysts at Microsoft Threat Intelligence identified a campaign where AI was used to hide malicious code inside an image file, masking it using business terminology to confuse scanners. Criminals are also abusing trusted developer platforms to host fake sites with CAPTCHA verification, which blocks automated scanners but lets the victim through to the phishing site.
The integration of AI with phishing is the industrialisation of cybercrime. We are seeing a shift from an ‘artisanal’ to an ‘industrial’ model. AI has become a production line that automates the entire attack process on a scale previously unattainable.
The human element under siege: Deepfake and psychological manipulation
The most worrying front in the evolution of phishing is the use of AI to create hyper-realistic voice and image imitations. Deepfake technology is striking a blow to trust in one’s senses. It only takes a few seconds of audio material to create a convincing voice imitation. Attackers use this technology in voice messages or in real-time phone calls (vishing).
Analysis of actual incidents shows the devastating potential of this technology. In the case of Arup, an employee who initially suspected phishing was completely convinced after a video conference with digital clones of the board of directors. In another attack, the CEO of a UK energy company authorised a transfer for $243,000 after a phone call with a cloned voice of his superior.
However, there are also examples of foiled attempts that provide valuable lessons. An attack on Ferrari was stopped when a manager asked the supposed CEO a follow-up question about a recent private conversation, which the AI was unable to answer. At Wiz, the deception attempt failed because employees noticed a subtle difference between the CEO’s voice from public appearances (on which AI was trained) and his tone in everyday conversations. In contrast, a LastPass employee ignored an attempted contact from the supposed CEO because it was through an unusual channel (WhatsApp) and outside standard working hours.
These cases reveal a fundamental weakness of deepfake technology: the ‘contextual gap’. AI can replicate patterns, but it cannot replicate authentic, shared human experience. It does not know the content of private conversations or the subtle nuances of interactions. This gap is a new battleground on which the ‘human firewall’ can claim victory.
The data behind the threat: Quantification of impact
The scale of the transformation is reflected in hard data. Reports indicate a 1,265% increase in phishing emails, directly linking it to the uptake of GenAI technology. Total phishing volume has increased by 4,151% since ChatGPT’s debut.
The increase in the number of attacks translates into increasing financial losses. The average cost of a data breach whose vector was phishing reached $4.8 million in 2024. Losses from Business Email Compromise (BEC) attacks reached a record $2.9 billion.
What’s more, an experiment conducted by Hoxhunt found that in March 2025, an AI agent became 24% more effective at creating phishing campaigns than an elite human team of experts. This proves that artificial intelligence is becoming objectively better at manipulating humans.
Although the overall volume of attacks is increasing, a strategic shift is also being observed. Attackers are increasingly moving away from mass campaigns to precisely targeted operations on high-profile departments such as finance or HR. Invariably, Microsoft remains the most commonly impersonated brand, used in more than 51.7% of scams.
Fighting fire with fire: AI-driven defence
In response to threats, the cyber security industry has also reached out to AI, creating a new generation of intelligent defences. Unlike traditional filters, defensive AI is adaptive and contextual. Its operation is based on behavioural analysis, creating a dynamic profile of normal communication patterns and detecting anomalies such as a sudden change of tone in an email from a known sender. Natural language processing (NLP) tools analyse the content of messages for subtle signals of manipulation.
Artificial intelligence is also revolutionising the work of security operations teams (SOCs) by automating log analysis and alert classification, allowing human analysts to focus on the most complex incidents. Interestingly, the same large language models used for phishing are also proving effective in detecting it.
This evolution is forcing a fundamental change of philosophy in cyber security. We are seeing a shift from a ‘state’ based model (is this element known to be bad?) to a ‘behaviour’ based model (is this element behaving strangely?). The new model, driven by AI, is not so much interested in ‘what it is’ as in ‘how it works’.
Building a resilient organisation: A multi-layered strategy
Effective defence requires an integrated approach that combines technology, processes and informed people. Traditional training is no longer sufficient. The new programme must prepare employees to confront deepfakes. Implementing out-of-band verification protocols for every sensitive request – confirming an email with a phone call to a known number – becomes crucial. The Ferrari example also demonstrates the power of simple security questions based on a shared, private context.
Technology must provide a solid foundation. The Zero Trust philosophy (‘never trust, always verify’) is becoming a fundamental defence strategy. Phishing-resistant multi-factor authentication (MFA), based on FIDO2 standards (e.g. dongles) that bind the authentication process to a physical token, rendering a stolen password useless, is also essential.
Forecasts from analysts such as Gartner indicate a shift in the allocation of budgets. By 2030, more than half of spending will be on preventative measures rather than post-incident response. This is an acknowledgement that traditional models are too slow to combat attacks at the speed of AI.
The most effective defence mechanisms are no longer purely technical; they are integrated into business processes. The failure at Arup was a process failure – the financial procedure lacked a mandatory, non-digital verification step. The Ferrari success, on the other hand, was a process success. The solution requires a change in the way work is done. IT leaders must become business process engineers, building verification steps directly into high-risk workflows.
Navigating the future of digital deception
Phishing 2.0, driven by AI, is not a hypothetical threat but a current reality. It is more personalised, persuasive and operates on an industrial scale. Deepfake technology has undermined our fundamental trust in sensory evidence.
We are facing a new era where AI has democratised advanced attacks and defences must be equally intelligent. Looking to the future, experts predict a further escalation of this arms race. There is talk of the emergence of autonomous, multi-agent AI systems (‘swarms of agents’) that will conduct complex operations on both the attack and defence side. The UK’s National Cyber Security Centre (NCSC) predicts that AI will continue to reduce the time between vulnerability disclosure and exploitation.
Resilience is a hybrid of smart technology and an equally smart, sceptical workforce. The ultimate defence is a holistic strategy that combines the predictive power of AI with the contextual wisdom of a well-trained, procedurally-acting human team. The fight against digital deception has reached a new level, and our ability to adapt will determine the outcome.
Intuition suggests that the generation raised in the glow of smartphone screens should be best equipped to avoid online pitfalls. However, the latest data casts a long shadow over this belief, revealing a worrying paradox: it is Generation Z, or digital natives, who are the weakest link in a company’s cyber security chain. A study by Yubico shows that as many as 62% of representatives of this group have interacted with a malicious link or attachment in the past year. This figure is significantly higher than that of the older generations. This alarming indicator is forcing IT leaders to fundamentally revise their security strategy.
Why are digital natives falling through the net?
The problem lies not in a lack of familiarity with technology, but in the nature of that familiarity. Being proficient in navigating the digital world is not the same as being able to recognise risks. There are several reasons for this phenomenon and they paint a complex picture of contemporary risk.
Firstly, overconfidence. Young employees, who have been intuitive with apps and social media since childhood, often believe in their “digital infallibility”. This confidence undermines their vigilance, leading them to disregard basic precautions. They trust their ability to distinguish a fake from an original, not realising that today’s attacks, aided by artificial intelligence, are almost perfect.
Secondly, changing attack vectors. Traditional cyber security training focuses on analysing suspicious emails. Meanwhile, phishing has long since left the inbox. Generation Z operates in an ecosystem of instant messaging (WhatsApp, Messenger), social media (TikTok, Instagram) and SMS. Attacks coming through these channels – in the form of a link to a supposed promotion from an influencer or a fake package notification – are much harder to identify, as they appear in a context that users consider trusted and private.
Source: Freepik
Thirdly, the culture of immediacy. Social media and modern apps have accustomed us to instant interaction – quick scrolling, liking and clicking. This habit eliminates a moment for reflection. Phishing attacks are designed to exploit this impulse, often playing on emotions or a sense of urgency (FOMO – Fear Of Missing Out), making the user click before they have time to think.
Implications for business: time for a strategy reset
Maintaining existing methods of protection in the face of this phenomenon is like putting out a forest fire with a watering can. Companies need to understand that their youngest employees, who make up an increasing proportion of the workforce, require a completely new approach.
The traditional annual training sessions in the form of PowerPoint presentations have become a relic of the past. They are not only boring but, above all, ineffective because they do not address the real risks that Gen Z faces on a daily basis. Effective education must be continuous, interactive and personalised. This means phishing simulations conducted on instant messaging, video-based micro-training and gamification that engages rather than bores.
However, even the best training will not eliminate the risk of human error. Therefore, the burden of protection must ultimately shift from humans to technology. Relying on employee vigilance in an era of AI-generated attacks is a strategy doomed to failure. This leads to the only valid conclusion: the need to implement a Zero-Trust architecture in which nothing is trusted by default.
From education to reliable authentication
The Generation Z paradox makes it brutally clear that education alone is not enough. Since we cannot fully trust a person’s ability to recognise a fake, we must implement mechanisms that make such attacks ineffective. Passwords, even the most complex ones, are insufficient today. The key to the future of security is phishing-resistant multi-factor authentication (MFA).
Solutions such as hardware security keys make it impossible to log in to a fake site, as verification is done at the cryptographic level rather than the user’s knowledge. They are becoming the new gold standard that protects the organisation regardless of whether the employee is tired, distracted or simply fooled. For companies employing younger generations, investing in such technologies ceases to be an option and becomes a strategic necessity, protecting against the threats that are already here.
The night of 9-10 September 2025 will go down in history as the moment when the war across our eastern border ceased to be a mere media report and became a tangible threat. Russian drones over Poland and their downing by the Polish armed forces is an unprecedented event.
However, anyone who views this incident solely in military terms is making a strategic mistake. For the violation of airspace was a high-profile prologue to the silent offensive that is about to begin in Polish cyberspace.
Drones over Poland and the anatomy of Russian cyber-aggression: how does the Kremlin machine work?
To understand what lies ahead, we must first grasp the adversary’s philosophy of operation. For years, Russia has perfected a doctrine of hybrid warfare in which missiles, beats and disinformation form a single, integrated arsenal.
The aim is no longer just to conquer territory, but to paralyse the state from within – breaking its economy, destroying trust in its institutions and dividing its society.
In this strategy, cyber attacks play a key role, with specialised secret service units acting with finesse and brutality.
These operations are headed by two main actors whose code names should be familiar to any security professional:
GRU (APT28/Fancy Bear): This is the digital equivalent of the Specnaz units. Units subordinate to military intelligence specialise in high-profile, destructive and sabotage operations. Their goal is chaos. They are behind the attacks on Ukraine’s power grid, the hacking of electoral systems or the devastating Wiper malware attacks that irretrievably erase data. If something is to be destroyed, switched off or paralysed – the GRU steps in.
SVR (APT29/Cozy Bear): They are the aristocracy of Russian digital intelligence. They operate more quietly, more subtly and their operations are characterised by extreme patience. The Foreign Intelligence Service focuses on long-term espionage. They are responsible for the notorious attack on the SolarWinds software supply chain, which gave them access to the networks of thousands of companies and government agencies around the world for months. Their focus is on information, strategic advantage and quietly placing ‘digital sleeper agents’ on key enemy systems.
Significantly, Russian services are blurring the line between state operations and common cybercrime.
Ransomware groups such as Conti or LockBit often receive tacit permission from the Kremlin to operate in exchange for fulfilling ‘orders’ hitting Western targets – hospitals, corporations or local governments. This allows them to wreak havoc at the hands of seemingly independent criminals and further complicates the attribution of attacks.
Scenarios for Poland: predicted attack vectors
In the context of recent events, Poland is becoming a high-priority target. We can expect to be hit from several directions simultaneously.
Scenario 1: Impact on critical infrastructure (ICS/SCADA)
This is the most dangerous scenario. Industrial control systems on which the functioning of the state depends will be targeted. Attacks could target:
Energy sector: Attempts to take control of transformer substations in order to trigger regional or even national blackouts.
Transport and logistics: Paralysis of rail traffic management systems, which would have a direct impact on support shipments to Ukraine, but also on the national economy.
Water supply and treatment plants: manipulation of control systems can lead to interruptions in water supply or, in extreme cases, to water contamination.
Scenario 2: Administrative paralysis and data theft
Key institutions of the state will become the main target of espionage operations (conducted by the SVR). Massive spear-phishing campaigns should be expected, precisely targeting officials and military officers from the Ministry of Defence, the Ministry of Foreign Affairs or the Ministry of Digitalisation.
The aim will not only be to steal security data and defence plans, but also to take control of accounts that can be used for further escalation or disinformation operations.
Scenario 3: Information warfare and social chaos
This attack is already underway, but it will now enter a new, intense phase. Its aim is to destroy the social fabric. We can expect:
DDoS attacks on major news portals and banking services to give the impression that the state is losing control.
Defacement (content substitution) of government websites to publish false messages and sow panic.
Massive disinformation campaigns on social media, run by troll farms and bots. Narratives will focus on undermining the effectiveness of the Polish army (‘they didn’t shoot everything down’), accusing the government of ‘provoking Russia’ and stoking anti-Ukrainian sentiment.
Why is increased activity inevitable?
These predictions are not mere speculation. They stem directly from an analysis of Russian war doctrine and the logic of the current situation.
First: Asymmetric Retaliation. Russia cannot afford an open armed conflict with a NATO country. The downing of its drones was a slap in the face that cannot go unanswered. Cyberspace is the ideal theatre for retaliation – allowing painful blows to the economy and infrastructure while avoiding crossing the threshold of open war.
Second: Phase Two of the Operation. The drone attack was designed not only to strike Ukraine, but also to test the response time and procedures of the Polish defence. Now Phase Two begins: creating internal chaos in a country that is a key logistical hub for Ukraine and a pillar of NATO’s eastern flank. Weakened and preoccupied with its own problems, Poland is a strategic target for the Kremlin.
Third: Testing the Alliance. Russia wants to test in practice how Article 5 solidarity mechanisms work, not only in the military dimension but also in the cyber dimension. A massive attack on Poland will be a test for response procedures and cooperation within NATO.
The front runs through every server room today
We must abandon the illusion that cyber security is a technical problem locked up in IT departments. Today, it is the foundation of national security, with every administrator, developer and manager becoming a defender on the digital front line.
The time of reactive firefighting is irrevocably over. A paradigm shift towards proactive defence and resilience building is required.
It is worth emphasising at this point: the purpose of this analysis is not to sow panic, but to build strategic awareness and resilience. It is sound knowledge and cool risk assessment, not fear, that provide the basis for effective preparation for scenarios that could materialise at any time.
For the IT industry, this means immediate action is required:
The implementation of the ‘Zero Trust’ architecture: The principle of “never trust, always verify” must become standard in every corporate and government network.
Proactive Threat Hunting: Security teams need to actively hunt for signs of intruders on their networks, rather than passively waiting for alerts from SIEM systems.
Audit and Testing of Incident Response Plans (IRPs): Having a plan on paper is not enough. It needs to be tested regularly through simulations so that when a crisis occurs, everyone knows what to do.
Building Public Resilience: The IT sector has a huge role to play in educating employees and the general public on how to recognise disinformation and phishing.
The red sky over eastern Poland was a test of our military procedures. The upcoming digital offensive will be a test of the resilience of our entire state and society. This is not a time for fear, but for the consolidation of forces – for cooperation between the private sector and public administration, for sharing knowledge about threats and for building a digital shield that neither massive DDoS attacks nor precision spying operations can break. History teaches that Poland’s greatest strength in the face of threats has always been its ability to mobilise and adapt. Today, this mobilisation must take place in our networks, server rooms and minds.
For years, one refrain has been repeated in the world of cyber security: employee education is the key to fighting phishing. Companies invest in online training, simulated campaigns and testing, believing that this will significantly reduce risk. However, a recent large-scale study by researchers at the University of California, San Diego, shows that this belief may be way over the top.
Findings from an experiment involving more than 19,000 participants indicate that the effectiveness of training programmes is much lower than the market promises. This does not mean that training does not make sense – but rather that it should be treated as a complementary element, rather than a central pillar of a safety strategy.
A study that changes the narrative
A team of UC San Diego researchers conducted an eight-month study in the healthcare sector, engaging employees in different types of training. Scenarios included simple error messages, static educational information and more extensive, interactive contextual modules.
The result was surprisingly modest. Regardless of the method chosen, the average improvement in phishing recognition performance over the control group was just 1.7%. In practice, this means that traditional programmes do not generate a clear difference in user behaviour – at least not at the level expected by companies investing in education.
The myth of “miracle training”
Over the past decade, the security training market has been growing consistently, fed by the belief that the right e-learning modules can significantly reduce the risk of phishing attacks. Numerous companies offered programmes that were supposed to ‘change the habits’ of employees and dramatically reduce incidents.
Meanwhile, the survey results show that the ‘miracle training’ narrative is not strongly supported by the data. Effects do exist, but they are much smaller than expected. The problem is that many organisations treat training as the main, and sometimes only, tool for protection, creating a false sense of security.
Phishing as an art of social engineering
One of the most interesting findings of the study was that the effectiveness of phishing depends more on the content of the bait than on what training employees have received.
While a small percentage of participants were fooled by fake emails related to Outlook accounts, as many as around 30 per cent clicked on messages related to holiday policy or dress code. This shows how strongly attackers use the organisational context and how difficult it is with training to prepare employees for every possible manipulation.
The conclusions are simple: attackers adapt their methods quickly, choosing topics that are closest to the day-to-day concerns of employees. Training, usually based on repetitive scenarios, cannot keep up with this dynamic.
Why training fails in practice
A second reason for the low effectiveness of the programmes is the behaviour of the users themselves. The study found that many participants simply ignored the educational material or went through it so quickly that they had no real chance to assimilate the content.
This is not only a problem of lack of motivation. In practice, online training courses tend to be treated as a bureaucratic chore to be ‘ticked off’ rather than a valuable source of knowledge. Added to this is the often unengaging format – boring tests and repetitive modules that do not build any lasting habits.
The new role of training
So can phishing training be considered useless? Absolutely not. Researchers stress that their role is still important – only that the effects need to be realistically assessed and measurable goals set.
Rather than believing in a radical improvement, companies should expect incremental changes: a reduction in the number of clicks on dangerous links, an improvement in the speed of reporting suspicious messages or greater awareness when opening attachments. In this context, training can act as a complement to other tools, not as a miracle cure for phishing.
At the same time, organisations should be more demanding of educational programme providers, demanding evidence of effectiveness backed up by research, not just marketing promises.
Multi-level defence
The study’s conclusions are part of a broader trend in cyber security: effective defence requires a multi-level approach.
In addition to training, technical solutions are needed – from anti-phishing filters and anomaly detection tools to systems that automate incident response. It is also important to regularly update systems and build an organisational culture in which mistakes are not a reason for punishment, but an opportunity to learn.
The latter may be particularly relevant. The study showed that, over an eight-month period, half of the participants had been fooled by at least one attack. Punishment for such a mistake will not improve the situation – but analysis of the incident and constructive lessons learned can significantly raise awareness.
Less illusion, more resilience
For years, the phishing training market has lived with the promise that all it takes is the right dose of education to close the door on social engineering attacks. Data from the largest survey to date shows that the reality is more complex.
Companies should not give up on training, but they need to stop treating it as a golden mean. Realistic expectations, combined with technology and organisational culture, offer a much better chance of building resilience than believing in miracle e-learning programmes.
The job market has become a new hunting ground for cybercriminals. Instead of classic malware, they are reaching for a more sophisticated weapon: the tools that IT departments use on a daily basis for remote support. Legitimate and trusted applications thus become an attack vector, targeting the most susceptible – people actively seeking a new career path.
Modern recruitment scams have abandoned primitive methods in favour of sophisticated social engineering. Recent analysis by Proofpoint shows a growing and worrying trend. Attackers are impersonating recruiters and HR professionals from high-profile companies, creating plausible scenarios designed to lull victims’ alertness. This process is part of wider phishing campaigns that use trust as the main currency.
Anatomy of an attack: how legitimate software becomes a weapon
The scheme of operations is deceptively simple but extremely effective. The potential victim receives an email or is contacted by a supposed recruiter via platforms such as LinkedIn. The communication looks professional – often based on copied, authentic job advertisements. After an initial exchange, the candidate receives an invitation to an online interview.
It is here that the key moment of the attack occurs. Instead of a link to popular videoconferencing platforms such as Zoom, Microsoft Teams or Google Meet, the victim is prompted to download and install a small piece of software supposedly necessary to conduct the call. In reality, it is a legitimate remote management and monitoring (RMM) tool such as SimpleHelp, ScreenConnect (now ConnectWise ScreenConnect) or Atera.
These applications, used on a daily basis by IT administrators to diagnose problems or install software on company computers, give almost complete control over the system. In the hands of criminals, they become a gateway to take over the desktop, steal data, monitor activity and, ultimately, gain access to bank accounts and other confidential information.
The problem of undetectability
The main advantage of this method is its apparent legality. RMM tools are digitally signed, commercial products. Traditional anti-virus software often does not classify them as a threat, because technically they are not. They work as intended – only that the purpose of their use is criminal.
Proofpoint alerts that this tactic is becoming the preferred method for cybercriminals to gain ‘first access’ to a victim’s system. It replaces classic Trojans and keyloggers because it is more difficult to detect and does not arouse immediate suspicion. The attack can remain hidden for a long time while the criminals methodically explore the resources of the infected computer.
Scale and sophistication of operations
These attacks do not happen by chance. Criminals prepare their campaigns carefully. In order to acquire the email addresses of potential victims, they publish fake advertisements on job portals, use data from previous leaks or even take control of compromised company accounts and profiles on LinkedIn.
In one case, attackers, using a hijacked LinkedIn account, made contact with candidates and then directed them to further correspondence from a fake, albeit credible-looking, email address. Such activity blurs boundaries and builds a false sense of security. The victim is led to believe that they are participating in a legitimate recruitment process with a real company.
This problem is part of a wider trend of abuse of legitimate remote access software (RAS) that other cyber security companies are also seeing. Attackers are impersonating not only companies, but also government offices, banks or event organisers to maximise their chances of making the message credible.
How to protect yourself? Steps for jobseekers
With the rising tide of such attacks, jobseekers need to be more vigilant. It is crucial to adopt a zero-trust approach to unexpected offers.
Source verification: When receiving a message from a recruiter, verify it through an independent channel. Instead of replying directly, it is advisable to go to the company’s official website, find the ‘Careers’ tab or contact details and make sure that such a recruitment is actually taking place. Never rely solely on the details in the message you receive.
Email address analysis: Check the sender’s e-mail address carefully. Often scammers use domains that at first glance resemble the real one (e.g. `kariera@firma-it.co` instead of `kariera@firma-it.com`).
Red flag: Software installation: the most important rule of thumb – no reputable company requires the installation of custom software to conduct an initial interview. The market standard is established platforms (Teams, Zoom, Meet), which typically run in the browser and do not require administrator privileges. A request to install an RMM tool should be a wake-up call to break contact immediately.
Caution with links and attachments: Before clicking on any link, it is a good idea to hover over it to see its full destination address. Any shortened URLs and requests to download executable files (.exe) or archives (.zip) should raise suspicion.
The evolution of cyber threats shows that the human being remains the weakest link. In a stressful and hopeful situation such as a job search, it is easy to lose vigilance. That is why awareness of criminals’ new methods and healthy scepticism are the most effective line of defence today. After all, one hasty click can ruin not only the chance of a new job, but also digital security.
For years, phishing meant suspicious emails with attachments, typos and links leading to fake login pages. Not surprisingly, corporate security departments focused precisely on email protection. The problem is that cybercriminals have long since moved to where no one expects them to go – to SMS inboxes and employee phone numbers.
Smishing (SMS phishing) and vishing (voice phishing) attacks are not new, but they are only now gaining a scale that should light red lights in SOC teams and CISOs. According to data for the second half of 2024, vishing incidents have increased by 442%. At the same time, smishing has been growing steadily for several years, moving from the periphery of cyber threats to the premier league.
Why are these attacks so effective?
Unlike traditional email phishing, smishing and vishing rely almost exclusively on psychology – not on technical vulnerabilities. The scenarios are deceptively simple: someone calls an employee, claims to be from the IT department, a supervisor or an external contractor and orders an urgent task – e.g. changing a password, providing access data, confirming identity. Or they send an SMS with a link to a supposed login portal, invoice or VPN tool.
While a suspicious email from an unfamiliar address and typos in the domain often arouses vigilance, a short text message or phone call – especially on a private phone – is less often treated as a potential attack. And it is this perception gap that cybercriminals are exploiting.
What do such attacks look like?
The most famous case in 2024 was a series of attacks on retailers in the UK, attributed to the Scattered Spider group. Hackers phoned IT staff, speaking perfect English, impersonated others within the organisation and prompted them to reset passwords. As a result, they gained access to internal systems and then escalated privileges and carried out further actions ranging from sabotage to data theft.
In other cases, SIM swapping, i.e. the acquisition of a phone number by extorting or phishing for a duplicate SIM card, was also used. In this way, attackers took control of 2FA-secured accounts and even carried out financial transfers using SMS authorisation.
Why don’t IT departments see these attacks?
The main reason is simple: most companies do not include protection for employees’ private mobile devices. BYOD (Bring Your Own Device) policies allow private phones to be used for business purposes, but do not cover their active monitoring.
SOCs are built around networks, endpoints and mail systems – they do not have tools that monitor SMS messages or voice calls. Nor are there ‘firewalls’ for phone calls. Furthermore, most security software is unable to analyse and block unauthorised calls or messages at a system level.
Even if an employee recognises a fraud attempt, the chance of them reporting the incident is sometimes low – especially if there has been no actual breach. And the longer an incident remains unknown, the greater the chance of a successful attack.
What can be done about it?
While smishing and vishing cannot be fully blocked, the chances of detecting them quickly and reducing their impact can be significantly improved. Here are the courses of action that companies more aware of this wave of threats are implementing:
Monitoring of the darknet and instant messaging – looking for brand impersonation attempts, phishing kits offers and smishing domains.
Threat simulations – just like email phishing tests, companies are starting to run vishing and smishing campaigns for educational and auditing purposes.
Extension of mobile security – introduction of MDM/MTD (Mobile Threat Defense), which covers private devices with at least basic control.
Staff training – especially in recognising attempts at telephone manipulation. Voice communication should be treated with the same care as email.
State-of-the-art detection mechanisms – using AI to recognise anomalies in user behaviour, including at the level of voice or SMS communication.
Time for a change of perspective
Vishing and smishing are no more technically advanced than email phishing. But they are more intimate, harder to detect and more psychologically effective. This combination makes them extremely dangerous, especially in companies that still treat cyber security as a problem of networks and servers rather than people and their phones.
Since most attacks today are based on socio-technics and the use of new communication channels, organisations need to shift the focus of protection. The classic ‘block and react’ approach is not enough. It is necessary to build resilience, which assumes that some attacks will succeed – but that they will be quickly detected, reported and neutralised before they cause damage.
Because the most dangerous attacks today are those that happen within reach – in a text message or at a number from an unknown number.
Recently, there has been a sharp increase in interest in remote working due to the epidemiological threat. However, it is worth remembering that remote working must involve the security of sharing resources with employees, and the activities undertaken should be safe for both the employees and the organisation they work for. This is discussed in an interview with Hubert Ortyl of Advatech and Michal Nycz of Akamai Technologies.
In recent days, companies have faced a considerable challenge…
Hubert Ortyl ADVATECH
Hubert Ortyl, Business Development Manager Security Solutions, Advatech: In the current situation, a very large number of companies have decided to move employees to remote working. For organisations with several hundred or several thousand employees, this is a really large project, which in addition they have to carry out in a short period of time. The key issue here is to make all IT resources available remotely in a secure manner so as not to risk the potential loss of company-critical data.
How can Akamai help secure remote working?
Michal Nycz, Account Executive, Akamai Technologies: To get a good understanding of our approach to this issue, it is necessary to refer to the philosophy of the so-called Zero Trust, i.e. the total reduction of potential factors that could compromise the security of our IT environment. In the context of the remote connection to the application itself, Akamai proposes an alternative approach to a standard VPN. In our solution, we do not open any (incoming) connection to the Firewall, while only allowing access to selected users, only to selected applications – without access to the entire network. Application access is performed at the application layer, hidden from the Internet and public access.
What are the potential benefits of such an arrangement?
Michal Nycz, Account Executive, Akamai Technologies: The main advantage is the ability to maintain a tightly sealed firewall, while giving users secure access to the application of their choice. The solution integrates easily and quickly, has multi-factor authentication and SSO functionality for all applications, and a user-friendly interface for access policy management. Load balancing between local servers can also be implemented. As a point of interest, we can add that we give some control over what employees do while working remotely….
Home office also means other threats. Phishing, malware? Can we really have full control over what our employees open?
Michal Nycz, Account Executive, Akamai Technologies: Given that Akamai handles about two-thirds of all DNS traffic worldwide, this gives us significant insight into all current threats (including new types of attacks), so we are able to feed our database on an ongoing basis. We have a dedicated team for this analysis, made up of some of the world’s best experts, who are constantly working to keep this database up to date. The solution that protects against this type of attack is our recursive DNS. The way it works is that all DNS queries internally, from all devices, resolve first to Akamai’s DNS to verify that inappropriate content (websites, phishing, malware – and other threats defined by the administrator, or pulled from CSI) is not being opened. Of course, at the administrative level, it is possible to set which addresses a particular user group should have access to and which should not. Both solutions are compatible and share a common interface.
In the current situation, do you offer additional support to ensure safety when working remotely?
Michal Nycz, Account Executive, Akamai Technologies: In an effort to meet the needs of our customers, we offer a programme to help maintain the business continuity of an organisation and create a secure environment for remote working. We are offering the opportunity to use Akamai Zero Trust solutions free of charge for 60 days – along with full technical support from us. The situation is unique, we feel a social responsibility, so if we can help – we will.
Advatech is a partner of Akamai, what is your role in this promotional programme?
Hubert Ortyl, Business Development Manager Security Solutions, Advatech: In this specific period, our role boils down primarily to promoting or suggesting ideas in the area of maintaining the continuity of the organisation’s operations, where a very important requirement is access to company applications along with ensuring the highest security standards. The proposed Akamai EAA solution is deployable within countable hours for a period of 60 days. Advatech stands ready to assist in the launch of the service. All details of this programme can be found on our LinkedIn and Facebook channels.
Zarządzaj swoją prywatnością
To provide the best experiences, we and our partners use technologies like cookies to store and/or access device information. Consenting to these technologies will allow us and our partners to process personal data such as browsing behavior or unique IDs on this site and show (non-) personalized ads. Not consenting or withdrawing consent, may adversely affect certain features and functions.
Click below to consent to the above or make granular choices. Your choices will be applied to this site only. You can change your settings at any time, including withdrawing your consent, by using the toggles on the Cookie Policy, or by clicking on the manage consent button at the bottom of the screen.
Funkcjonalne
Always active
Przechowywanie lub dostęp do danych technicznych jest ściśle konieczny do uzasadnionego celu umożliwienia korzystania z konkretnej usługi wyraźnie żądanej przez subskrybenta lub użytkownika, lub wyłącznie w celu przeprowadzenia transmisji komunikatu przez sieć łączności elektronicznej.
Preferencje
Przechowywanie lub dostęp techniczny jest niezbędny do uzasadnionego celu przechowywania preferencji, o które nie prosi subskrybent lub użytkownik.
Statystyka
Przechowywanie techniczne lub dostęp, który jest używany wyłącznie do celów statystycznych.Przechowywanie techniczne lub dostęp, który jest używany wyłącznie do anonimowych celów statystycznych. Bez wezwania do sądu, dobrowolnego podporządkowania się dostawcy usług internetowych lub dodatkowych zapisów od strony trzeciej, informacje przechowywane lub pobierane wyłącznie w tym celu zwykle nie mogą być wykorzystywane do identyfikacji użytkownika.
Marketing
Przechowywanie lub dostęp techniczny jest wymagany do tworzenia profili użytkowników w celu wysyłania reklam lub śledzenia użytkownika na stronie internetowej lub na kilku stronach internetowych w podobnych celach marketingowych.