Last Updated: 21st November, 2019
The adage "a chain is no stronger than its weakest link" was coined around 1786, and it has never been more pertinent than it is today. When a single weak spot within a modern system is inevitably found, an array of information, from personal bank accounts to national security secrets, can be compromised.
Just look at what recently happened at Capital One. A hacker was able to download over 100 million customers’ personal details, including credit card applications and Social Security Numbers, by finding one weakness within Capital One’s systems environment.
A reportedly small oversight made by the bank's software developers allowed the hacker to sneak in, essentially through an open door. With so many components contributing to a modern system, how do we protect against not only the weaknesses we know, but also those yet to be discovered?
What if the weak spot was in the foundation to which the chain was connected? The open door that may exist in your application could become the least of your worries if the entire foundation of your house is fractured and a breath away from crumbling around you. Take, for example, the early security issues with Microsoft's operating systems, a fundamental foundation for the majority of IT systems then and now.
Microsoft's early security issues caused an explosion first of nuisance exploits that later evolved into a platform enabling malicious and criminal activity. In response, the market introduced many malware protection and access control solutions, which have now become a default expense for virtually all companies and consumers.
The reputational damage to Microsoft's brand, and the costs its customers were forced to bear, drove a major shift of resources at Microsoft toward cybersecurity. Microsoft's security is far stronger now and has become central to its product strategy.
Today another major foundational element, the computer processor, is in the media and facing a challenge similar to Microsoft's. Recently discovered defects in Intel's processor chips — which make up about 90 percent of the world's computer processors and nearly 99 percent of the server chips in the data centers serving the internet — contained a vulnerability that could leave sensitive data exposed.
In the race to make an ever-faster processor, the chip manufacturer implemented a technique known as speculative execution, in which the processor guesses which instructions will be needed next and executes them ahead of time. While this increased speed, it also created a security vulnerability: work done on a wrong guess is discarded architecturally, but can leave recoverable traces in the processor's cache.
Since the initial disclosure of Intel's design flaw in January 2018, seven exploits in total have been uncovered: Meltdown, Spectre, Foreshadow, ZombieLoad, RIDL, Fallout, and SWAPGS. The exploits continue to evolve; the latest variant of ZombieLoad was found just last week. While patches exist to address these known exploits, they have a significant negative effect on computer performance and have not been universally adopted.
This central processor vulnerability has stirred a major debate inside government institutions as well as major companies – settings where I have first-hand experience in the development and execution of information technology infrastructure.
While security gaps and issues are common, this episode's far-reaching scope has led to the emergence of new exploits every few months, and in turn more patches with performance costs, draining IT departments everywhere from government agencies to cloud providers to Fortune 500 companies.
It is unlikely that consumers will ever be able to place complete trust in the foundation of their systems, and thus companies and organizations must implement a "zero trust" strategy moving forward. With more and more technology participants in our systems, each bringing their own vulnerabilities, we will continue to face security risk and will not be able to fully trust our hardware or software building blocks.
That ultimately means added costs to the consumer to build in system protections against unforeseen flaws. Today, for those running Intel processors, the assessed risk may mean purchasing new hardware that addresses the defect, or re-architecting around the risk, trading computer performance for short-term security gains.
The zero trust philosophy, or approach to design, holds that every component, connection, or even system user could be compromised and therefore represents a risk. Zero trust has long been practiced for our most critical systems, but today it must become common practice, because so many of our systems have become key to business operations, safety, and the security of our personal data.
Designing around potential risk unfortunately means investigating companies and their products to identify the risks they may represent, avoiding those whose products we cannot trust, and, conversely, gravitating toward those that prize performance and security in equal measure.
We unfortunately have seen that when these priorities are ignored, the consequences can be devastating. A zero trust philosophy can help mitigate the risks that are endemic to the technology landscape today.
Today, more than ever before, we must ask the vendors who form the foundation of our systems, and who will be targets of attack, to once again significantly step up their commitment and resources to cybersecurity, protecting their platforms and our businesses.