Today, when we read reports like a researcher finding a thousand instances of various vulnerabilities in a product or a group of academics uncovering almost a hundred distinct vulnerabilities affecting LTE and 5G implementations, we hardly bat an eye. A mere decade ago, an average of 7,000 CVEs were disclosed annually, a stark contrast to today's rapidly expanding threat landscape.
Keeping up with and remediating all of these exposures doesn't just seem impossible; with our current resources, it actually is. A few years from now, AI or some other new technology might change this equilibrium, but the first step to dealing with the issue today is accepting the situation: it is impossible to remediate every threat.
FACT: Security vendors, large and small, cover at most 30-40% of CVEs in their detections. I know this shatters a lot of assumptions about the 100% CVE coverage expectations we see in numerous RFPs. So what happens to the remaining 60-70% of CVEs that go unmentioned? Do they simply get ignored?
If we continue to numb ourselves to the overwhelming volume of threats, unfortunately, yes.
There are ways to deal with the seemingly insurmountable problem at hand, but as with most things in cybersecurity, none of them is a silver bullet.
Security professionals must apply multiple techniques in tandem, much like a layered defense strategy.
Start with knowing what you’re protecting. You would be surprised at how many Fortune 500 organizations (and smaller ones) don't even know what’s in their environment.
So, how does one get a handle on it all? A Software Bill of Materials (SBOM) is key: it organizes your digital attack surface, enabling quicker incident response and vulnerability identification while streamlining breach prevention and compliance processes. Within that, don't forget Software Composition Analysis (SCA). Google just released an open-source tool that helps with this effort.
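To make this concrete, here is a minimal sketch of turning an SBOM into a component inventory. It assumes the CycloneDX JSON format; the file path and output format are illustrative:

```python
import json

def list_components(sbom_path: str) -> None:
    """Print every component recorded in a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)

    # CycloneDX stores the inventory in a top-level "components" array.
    for component in sbom.get("components", []):
        name = component.get("name", "<unknown>")
        version = component.get("version", "<unversioned>")
        ctype = component.get("type", "library")
        print(f"{ctype}: {name} {version}")

if __name__ == "__main__":
    list_components("sbom.json")  # illustrative path
```

Even a listing this simple answers the first question that matters in an incident: do we run the affected component at all?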
It is vital to know how critical resources are deployed in the environment, not buried in a 200-page compliance handbook that is dusted off once every few years, but readily available and used weekly, if not daily. Deployment affects actual exploitability. A breach and attack simulation tool can help here as well.
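To illustrate why deployment affects exploitability, consider a network-vector CVE: on an internet-facing host it is urgent, while on an air-gapped one the exploit may have no path in at all. The asset records and triage logic below are hypothetical assumptions, not a substitute for a real breach and attack simulation tool:

```python
# Hypothetical asset records; in practice these come from your inventory.
ASSETS = {
    "web-frontend-01": {"internet_facing": True, "air_gapped": False},
    "scada-hist-02": {"internet_facing": False, "air_gapped": True},
}

def effective_exposure(asset_name: str, attack_vector: str) -> str:
    """Rate how exposed an asset is to a CVE, given the CVE's attack vector."""
    asset = ASSETS[asset_name]
    if attack_vector == "network" and asset["air_gapped"]:
        return "low"  # no network path for the exploit to travel
    if attack_vector == "network" and asset["internet_facing"]:
        return "critical"  # directly reachable by attackers
    return "moderate"  # local vector, or internal-only network exposure

print(effective_exposure("web-frontend-01", "network"))  # critical
print(effective_exposure("scada-hist-02", "network"))  # low
```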
The next step is to add context to what we know about the crown jewels. It matters whether the machines belong to the C-suite or are hardened servers in an air-gapped environment. Context is key in cybersecurity. Stay tuned for more on the topic from Todyl.
Leverage a platform that can integrate data from as many sources as possible. This ensures the broadest possible attack surface coverage and aligns with common IT executive KPIs, like reducing the number of third-party vendors (or agents deployed) by 60%. No one solution can keep up with 100% of the threats 100% of the time. Risk-based prioritization is key.
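Here is a minimal sketch of what risk-based prioritization can look like once signals from multiple sources land in one place. The weights, field names, and sample findings are illustrative assumptions, not a prescribed formula:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float  # base severity, 0-10
    asset_criticality: int  # 1 (low) to 5 (crown jewel), from your context data
    internet_facing: bool
    actively_exploited: bool  # e.g., flagged by a threat-intel feed

def risk_score(f: Finding) -> float:
    """Blend severity with business context; the weights are illustrative."""
    score = f.cvss * (f.asset_criticality / 5)
    if f.internet_facing:
        score *= 1.5
    if f.actively_exploited:
        score *= 2.0
    return round(score, 1)

backlog = [
    Finding("CVE-2024-0001", 9.8, 1, False, False),  # severe, but internal and low-value
    Finding("CVE-2024-0002", 7.5, 5, True, True),  # lower CVSS, but an exposed crown jewel
]
for f in sorted(backlog, key=risk_score, reverse=True):
    print(f.cve_id, risk_score(f))
```

Note how the exposed, actively exploited crown jewel outranks the higher-CVSS finding on a low-value internal asset; that reordering is the whole point of risk-based prioritization.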
Mindset is half the battle. Operate on the assumption that there is going to be a breach. What kinds of detections do you have in place assuming a breach has already happened? How soon will you be able to identify and contain it? Are you using an MXDR provider in addition to in-house resources? MXDR providers have the advantage of seeing attack traffic patterns across multiple customers in the same industry, geography, and so on, which helps them spot targeted attacks early.
Track the threat landscape to know which threats could impact your resources, based on the risks you have identified, and focus on mitigating those first.
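One practical way to do this (one option among many; no specific feed is prescribed here) is to cross-reference open findings against CISA's Known Exploited Vulnerabilities (KEV) catalog. This sketch assumes the third-party requests library and the cveID field in the published KEV JSON feed:

```python
import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def known_exploited(cve_ids: set[str]) -> set[str]:
    """Return the subset of our open CVEs that appear in CISA's KEV catalog."""
    catalog = requests.get(KEV_URL, timeout=30).json()
    kev_ids = {v["cveID"] for v in catalog["vulnerabilities"]}
    return cve_ids & kev_ids

# Findings from our scanner; the second ID is a placeholder.
open_findings = {"CVE-2021-44228", "CVE-2023-12345"}
print(known_exploited(open_findings))  # mitigate these first
```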
Ask your vendors the important questions: What percentage of CVEs do your detections actually cover? How do you decide which ones to prioritize?
Most importantly, resist the urge to become numb to the problem. Do not accept that the status quo is the best that is possible. This is a marathon, not a sprint.