Security breaches and ransomware attacks are painful reminders of how reliant society is on technology. These attacks exploited well-documented weaknesses, using some of the most common methods seen in breaches of more than 90% of large organizations, and exposed gaping holes in the tools and skills deployed to protect critical systems and data.
As we embrace new capabilities such as IoT, cloud, open source and APIs, and open up operations to industry disruptors such as FinTech, whose game-changing business models put data at the core of the business, we often trade privacy for convenience so that key data assets from internal and external sources can be harvested and correlated.
While cloud providers and XaaS companies promise near iron-clad security, their contracts carefully limit their liabilities and place accountability for security breaches firmly at the feet of their clients.
In response to these breaches, governments seek to protect economies, citizens and critical infrastructure with legislation, such as the General Data Protection Regulation (GDPR), that holds businesses accountable, raising the stakes for a data breach and making it clear that society will not tolerate complacency. Wholesale change is needed to improve the way we manage and secure our identities.
Cyber security is an incredibly technical discipline, and many capabilities are already available to help protect against an attack. Machine learning, with its ability to access and synthesize data at speed and scale, may provide a valuable weapon to identify and manage an attack or breach. Many firms already employ machine learning methods, but the field is still in its infancy.
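To make the idea concrete, the simplest form of this capability is anomaly detection: establish a statistical baseline of normal behavior, then flag events that deviate sharply from it. The sketch below is a deliberately minimal, hypothetical illustration (the data and the `flag_anomalies` function are invented for this example); production systems use far richer features and learned models, but the principle is the same.

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Return indices of values more than `threshold` population standard
    deviations above the mean.

    A toy stand-in for the statistical baselines that machine-learning
    anomaly detectors build at far greater speed and scale.
    """
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(counts) if (c - mean) / stdev > threshold]

# Hourly failed-login counts for one account (hypothetical data);
# the final spike is the kind of pattern a brute-force attempt leaves.
logins = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 120]
print(flag_anomalies(logins))  # → [10]
```

Real deployments replace this single-feature baseline with models trained across many signals (login timing, geography, network flows), which is where machine learning earns its keep.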
With the stakes never higher, now is the time to focus on maturing differentiated approaches and capabilities. If recent attacks were not enough to convince us, consider this: hackers have already figured out how to weaponize AI as a way in.
The era of artificial intelligence is upon us, yet if this informal Cylance poll is to be believed, a surprising number of infosec professionals are refusing to acknowledge the potential for AI to be weaponized by hackers in the immediate future. It’s a perplexing stance given that many of the cybersecurity experts we spoke to said machine intelligence is already being used by hackers, and that criminals are more sophisticated in their use of this emerging technology than many people realize.