As we look ahead to 2025, businesses across Asia Pacific (APAC) are expected to accelerate their adoption of artificial intelligence (AI) in cybersecurity, using it as a critical tool to combat evolving AI-powered threats.
With 43% of security professionals predicting that these sophisticated threats will increasingly evade traditional detection methods, organizations are poised to leverage AI-driven strategies to proactively mitigate risks. At the same time, there is a growing push to secure AI systems themselves, evidenced by initiatives like Singapore’s recent AI security guidelines – trends that are likely to shape cybersecurity practices across the region in the coming year.
Here are five key cybersecurity trends that are poised to define the APAC landscape in 2025.
Cyber Infrastructure Will Become Unified
Although AI is being weaponised by criminals, it is also being used to drive cyber defences. In 2025, unified, AI-powered data security platforms will become more important than ever. Infused with AI capabilities, these platforms can recognise patterns and potential threats early, enabling them to neutralise risks before they escalate.
Such unified cybersecurity platforms continuously analyse data across attack surfaces and manage incidents, ensuring that every part of an organisation's infrastructure communicates and shares threat intelligence seamlessly. Powered by AI, these platforms strike a balance between streamlined security management and advanced threat protection.
Importantly, unified, AI-powered cybersecurity platforms also dramatically augment existing cyber skills, giving businesses the upper hand even in markets short on talent. This is particularly valuable across APAC, where many markets face ongoing cyber skills shortages and companies report smaller security teams than their counterparts in other regions.
2025 Is the Year Deepfakes Go Mainstream in APAC
We’ve already seen deepfakes (images, video or audio of people generated or edited by AI) in action, such as the deepfake recreations used in a video conference call to impersonate a company’s chief financial officer in Hong Kong, deceiving an employee into paying out HK$200 million to fraudsters.
Attacks using deepfakes are becoming a major challenge for organisations. In 2025, the number of attackers employing deepfakes will increase, with the generative AI technology behind them becoming easier to access and use. As a result, traditional defences will become less effective, prompting organisations to adopt advanced solutions to protect against such cyber deception.
Beyond the Quantum Security Hype — What to Expect in 2025
APAC countries, including China, Japan, South Korea, Singapore and Australia, are driving significant investments in quantum computing. For example, Australia alone has pledged to invest close to AU$1 billion in PsiQuantum – a company based in California that aims to build the first commercially useful quantum computer.
Amid such interest and investment in quantum computing, the field of quantum security is evolving fast. Although quantum attacks on widely used encryption methods are not yet feasible, they are likely to become possible within the next decade.
However, in 2025, it is anticipated that nation-state-backed threat actors will intensify their “harvest now, decrypt later” tactics. These involve collecting encrypted sensitive information today with the aim of decrypting it once quantum computing capabilities mature. In the face of such threats, enterprises are advised to start building a quantum-readiness roadmap, including quantum-resistant algorithms, quantum-resistant tunnelling, enhanced crypto libraries and quantum key distribution.
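One building block of such a roadmap can be sketched in code. Hybrid schemes used in the post-quantum transition (for example, hybrid TLS key exchange) combine a classical shared secret with a quantum-resistant one, so an attacker must break both schemes to recover the session key. The sketch below is illustrative only: the secret values are placeholders, and a real deployment would take them from actual key-exchange outputs such as X25519 and ML-KEM.

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       context: bytes = b"hybrid-kex-v1") -> bytes:
    """Combine a classical and a post-quantum shared secret.

    Both inputs feed a single HMAC-based derivation (HKDF-extract style),
    so recovering the session key requires breaking *both* schemes.
    """
    # Concatenate the two independent secrets, then derive a 256-bit key.
    ikm = classical_secret + pq_secret
    return hmac.new(context, ikm, hashlib.sha256).digest()

# Placeholder secrets standing in for, e.g., an X25519 output and an
# ML-KEM output (hypothetical all-zero values, for illustration only).
classical = bytes(32)
post_quantum = bytes(32)

key = hybrid_session_key(classical, post_quantum)
```

The design choice to hash the concatenated secrets mirrors how hybrid key exchanges are commonly specified: if either input remains unpredictable to the attacker, the derived key does too.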
For now, CIOs can help their boards cut through the hype on this topic: although significant progress has been made with quantum annealing, military-grade encryption has still not been broken.
Transparency Will Be the Cornerstone for Maintaining Customer Trust in the AI Era
In 2025, transparency will begin to emerge as a cornerstone of AI compliance frameworks, with organisations demanding clear communication about how AI algorithms work.
Regulatory frameworks aimed at accomplishing such transparency are taking some time to emerge in APAC. There have been steps in the right direction, such as the Cyber Security Agency of Singapore’s Guidelines on Securing AI Systems, released in October 2024, or Australia’s Voluntary AI Safety Standard, announced in September 2024.
In the coming year, as government-mandated frameworks gradually gain traction, AI vendors will face growing pressure to demonstrate the safety and transparency of their models. Companies that provide clear explanations of their AI processes will be better positioned to foster deep relationships with customers and employees.
Increased Focus on Product Integrity and Supply Chain Security in 2025
According to our 2024 State of Cloud-Native Security Report, 47% of global respondents anticipate AI-fuelled supply chain attacks compromising vital software components or cloud services. Clearly, supply chain security is becoming a significant concern, and for good reason.
The increasing prevalence of far-reaching software supply chain risks has hit organisations of all sizes with threats they likely didn’t even know they faced. In a software supply chain attack, a vulnerability in one component of a software stack can expose an entire organisation to potential exploitation.
In 2025, the full risk of software supply chain compromise and product integrity will begin to sink in, prompting businesses to identify where proactive checks can be put in place as software is being created. This is also spreading into the area of AI, with Australia’s Voluntary AI Safety Standard, for instance, containing guardrails that apply throughout the AI supply chain.
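One simple proactive check of this kind is verifying that a fetched build artifact matches a cryptographic digest pinned at review time, so a tampered dependency is caught before it enters the build. A minimal Python sketch, in which the artifact name and pinned digest are hypothetical examples:

```python
import hashlib
import hmac
from pathlib import Path

# Hypothetical pin file: maps artifact names to SHA-256 digests recorded
# when the dependency was first reviewed and approved.
PINNED_DIGESTS = {
    "example-lib-1.2.3.tar.gz":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_artifact(path: Path, expected_hex: str) -> bool:
    """Return True only if the file's SHA-256 matches the pinned digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Stream in chunks so large archives don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(digest.hexdigest(), expected_hex)
```

In a CI pipeline, a check like this would run at dependency intake, failing the build on any mismatch rather than silently accepting the altered component.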
AI, particularly GenAI, may be taking a front seat in cybercriminals’ ongoing attack strategies and tactics, but that doesn’t mean businesses don’t have the means with which to defend themselves. In 2025, AI and GenAI will be increasingly used to defend against attackers, giving organisations the edge needed to operate safely in this AI age.