Third-party cyber risk rises above the fold: the Verizon 2025 Data Breach Investigations Report (DBIR)


The 2025 Verizon Data Breach Investigations Report, also known as the DBIR, has been released. We talked about the 2024 report in this space, so we think it's appropriate to cover it again — especially given the finding that the percentage of breaches (incidents with data loss) involving a third party doubled to 30%.
It's hardly surprising, then, that third-party risk is a focal point of the 2025 report, with cover art representing the balancing act of securing an environment that relies on many different vendors.
This year's report is based on incidents that took place between Nov 1, 2023, and Oct 31, 2024, so the Snowflake data breaches (which started in April last year) and the CrowdStrike outage (in July) were both in scope for this DBIR. Verizon has a policy of not naming the companies involved in the incidents, but it makes exceptions for large-scale incidents that were publicly disclosed, so both Snowflake and CrowdStrike are mentioned by name.
That said, the DBIR always gives us a lot of insight into general patterns and trends, so we should not focus too much on specific incidents. For this post, we'll focus on two of the report's highlights: credentials and vulnerabilities.
Our Alice in Supply Chains podcast had DBIR author Alex Pinto as a guest for a bonus episode focused on the 2025 report and its findings. Listen to it here or on your favorite podcast platform.
Credential and authentication challenges
An infostealer is a type of malware that exfiltrates credentials and system data, usually as soon as the victim runs the malicious file. To do this, the infostealer scans the infected machine for stored passwords, browser cookies, and cryptographic keys. Unlike a traditional keylogger, it does not wait for the user to type their passwords.
There's a whole criminal ecosystem around infostealers, so credentials are often made available for buyers in criminal marketplaces or to "subscribers" of premium channels.
Since these credentials are often inside "logs" that also contain metadata about the machines the passwords were stolen from, the DBIR authors at Verizon were able to estimate how often corporate credentials are stolen from managed and non-managed devices.
What they found might surprise some people. While 30% of machines infected by infostealer malware were managed devices, 46% of corporate credentials were stolen from non-managed devices.
Credential abuse is the top vector for initial access in the DBIR (22% for non-Error, non-Misuse breaches), so third-party incidents often involve stolen credentials as well. The Snowflake incidents, for example, are believed to have been made possible by credentials stolen with infostealer malware — which were possibly abused once criminals realized the value of what they had and the lack of MFA on these accounts.
As mentioned in the report, it's difficult to know for sure why so many corporate credentials were stolen from non-managed devices. Maybe the company has a Bring Your Own Device (BYOD) policy, or it cannot properly enforce policies on device usage (perhaps because it's below the security poverty line). It's also worth noting that contractors are often not subject to the same strict rules as employees, even when assigned corporate credentials, and that could easily be a factor in this situation.
Furthermore, authentication is not limited to users and passwords. There are a myriad of tokens and shared secrets in APIs, development pipelines, remote access systems, and more.
In one data set analyzed in the DBIR, the median time to remediate leaked secrets discovered in a GitHub repository is 94 days — showing that expert users (like developers) are not immune to this problem and that response is as much of a challenge as prevention.
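Part of why leaked secrets linger for months is that they have to be found before they can be rotated. As a rough illustration of how secret-scanning tools approach the detection side, here is a minimal Python sketch using simplified regex rules; the patterns are deliberately reduced examples, and production scanners (such as GitHub's own secret scanning) rely on much larger rule sets plus entropy and validity checks.

```python
import re

# Simplified, illustrative patterns only -- real scanners use far
# larger rule sets with entropy analysis and live-token validation.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret|token|password)\b\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs for likely secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# Hypothetical file contents with two planted "secrets".
sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = "hunter2hunter2"'
for rule, hit in scan_text(sample):
    print(rule, "->", hit)
```

Detection, of course, is only the first step: the 94-day median suggests the harder part is the response workflow of revoking and rotating what was found.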
Vulnerabilities don't write themselves (yet)
Vulnerability exploitation grew 34% in the report’s dataset and now accounts for 20% of all breaches. This is also a relevant third-party exploitation vector, though not everyone sees this as a third-party risk management issue.
While businesses must patch their systems when updates are made available, it's still true that choosing a different vendor might have meant more time to patch, or fewer issues requiring an urgent patch in the first place. Businesses usually have limited visibility into the practices and pipelines of their software vendors, but vulnerabilities have a cost – even if it's just downtime to patch – and more transparency wouldn't hurt.
We should not assume that a perfect patch management process would let us disregard software vulnerabilities in our environments, nor that any vulnerability exploitation not involving a zero-day is necessarily a failure of patch management.
Businesses might be struggling with patch management because they have no means to know that they're using software made by vendors with poor security practices.
Some progress is happening with Software Bills of Materials (SBOMs) and other efforts, but there isn't a lot of push for software vendors to be more upfront about how they set up their repositories or developer access.
This is even more worrying when we think about how other vendors could be integrated into this process. AI usage for programming tasks could create a scenario where a vulnerability "writes itself," as research suggests that generated code is often vulnerable in some way or another.
One recent finding is that large language models (LLMs) tend to reference non-existent libraries and dependencies that can be squatted — an attack that has been called "slopsquatting."
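One mitigation for slopsquatting is to never install an LLM-suggested dependency directly, and instead resolve package names against a curated allowlist or private mirror first. The sketch below shows the idea in minimal Python; the allowlist contents and the hallucinated package name are hypothetical examples, and a real setup would use a private package index rather than a hardcoded set.

```python
# Minimal sketch: gate LLM-suggested dependencies against an explicit
# allowlist before installing anything. Package names are hypothetical.

APPROVED_PACKAGES = {"requests", "numpy", "cryptography"}

def vet_dependencies(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split suggested package names into (approved, rejected) lists.

    In practice, resolve names against a curated internal index or
    private mirror instead of a hardcoded set.
    """
    approved = [p for p in suggested if p.lower() in APPROVED_PACKAGES]
    rejected = [p for p in suggested if p.lower() not in APPROVED_PACKAGES]
    return approved, rejected

# An LLM might hallucinate a plausible-sounding name such as
# "requests-toolkit-pro" (invented here) that an attacker could register.
ok, blocked = vet_dependencies(["requests", "requests-toolkit-pro"])
print("approved:", ok)
print("blocked:", blocked)
```

The design choice here is deny-by-default: an unknown name is rejected until a human vets it, which closes the window an attacker would exploit by registering the hallucinated package.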
For now, most software is still being written by human developers, so these vulnerabilities exist because someone wrote them. As Alex Pinto said during our podcast episode: "This might not be true in a few years, but the vulnerability didn't write itself." If a vendor is responsible, it's still a supply chain incident.
Mistakes are lessons
Everyone makes mistakes, but these mistakes should be lessons for all of us. As always, the Verizon Data Breach Investigations Report offers us a great perspective into these mistakes, so there's a lot we can all learn.
We encourage you to read the DBIR for yourself – including all the footnotes! If you're still unsure, here are some other interesting data points:
- A human element is still present in 60% of breaches. Perhaps cybersecurity awareness training, or process changes that reduce the opportunity for human mistakes, are in order.
- 28% of state-sponsored attacks had a financial motive, indicating that these threat actors are no longer just involved in espionage.
- AI risks: AI usage for generating text in malicious emails doubled over the past two years. Furthermore, many employees (72%) log into AI services with non-corporate email addresses, which means GenAI might be used outside company policy, with the associated data leakage risks.