Jeff Martin – Mend https://www.mend.io Mon, 25 Nov 2024 22:51:41 +0000 en-US hourly 1 https://www.mend.io/wp-content/uploads/2024/11/Mend-io-favicon-outline-200px.svg Jeff Martin – Mend https://www.mend.io 32 32 NVD Update: Help Has Arrived https://www.mend.io/blog/nvd-update-help-has-arrived/ Thu, 06 Jun 2024 23:03:50 +0000 https://mend.io/nvd-update-help-has-arrived/ NIST has announced that it has filled the funding gap for the National Vulnerability Database (NVD) and hired a contractor to return it to its previous underwhelming state. 

What does that mean for us? Basically, The NVD is off life support, but we wouldn’t say it’s healthy. It’s more like “undead”. 

This has been quite a saga, starting with the news that the NVD stopped most CVE enrichment (You can read about that here.) Then came a wave of public support for the NVD to get the funding it needs, as well as some news about how the NVD briefly stopped even entering CVEs to the database altogether. It ultimately caught up on CVEs, although the enrichment backlog continues).

Then there was the mystery about the unnamed agency that suddenly pulled funding from the NVD.  The anonymity led us to assume it must have been a secretive three-letter agency, like the CIA. However, it turns out it was a four-letter agency: the Cybersecurity & Infrastructure Security Agency, aka CISA. Rich Press, director of media relations, at the National Institute of Standards and Technology (NIST) told Cybersecurity Dive that NIST filled the $3.7 million gap created when CISA pulled funding by reallocating internal funds. So, hey, that’s good.

Even better, it appears that NIST has already begun spending that dough on some hired help to deal with the massive amount of incoming and backlogged CVEs. Reports are varied on how much the deal is worth, but Analygence, a company with a name only the federal government could love, reported that they were awarded a total contract of $125 million with NIST back in December. However, it appears that the NVD-specific part of that contract is only worth about $1.8 million total—and that’s only if it gets extended to July of 2025.

Some are still reporting that Analygence has a contract for $125 million over 5 years with NIST for work on the NVD specifically, but we find that doubtful. It doesn’t seem in line with NIST’s conservative announcement posted May 29th that the backlog would be all sewn up by the end of September. For $125 million we’d expect a shiny new, massively overhauled NVD, not one that’s promising to chug along as normal by the end of the year.

So, what does that mean for organizations trying to stay secure? Not much right now. You might be able to rely on the NVD in October, but for now you still need to draw your vulnerability data from multiple sources.

]]>
NVD’s Backlog Triggers Public Response from Cybersec Leaders https://www.mend.io/blog/nvds-backlog-triggers-public-response-from-cybersec-leaders/ Fri, 12 Apr 2024 18:39:55 +0000 https://mend.io/nvds-backlog-triggers-public-response-from-cybersec-leaders-2/ Just a few weeks ago, we wrote about how the National Vulnerability Database (NVD) is seriously behind in enriching CVEs. On LinkedIn, Mastodon, and other social sites, the NVD’s mounting backlog and what should be done about it has become a hot topic of conversation within the cybersecurity community. 

It’s not hard to see why. From a U.S. perspective, the NVD is itself part of national infrastructure because its data is used to keep private and public sector software products secure – that alone makes it worthwhile for the government to provide the NVD with the funding, support, and expertise it needs. But a disruption to the NVD affects more than just U.S. citizens; the data provided by the NVD is a critical component of vulnerability detection and triaging for organizations across the planet. 

Not everyone may be equally affected by a lack of NVD data, but given the proliferation of open source code in modern applications, somewhere along every software supply chain someone is relying on the NVD. We’re all in this together.

The NVD’s backlog problem

The sad truth is that things have not been going well at the National Institute of Standards and Technology (NIST), the government organization in charge of the NVD, for a while now. A Washington Post article released early last month details the poor state of NIST’s infrastructure and mounting budget constraints. So that’s the backdrop for their announcement at VulnCon on March 28th and, a few days later, a post on their website.

I wish I could report that we now know exactly what went wrong and how it’s going to be fixed—but I can’t. NIST is still tight-lipped about the underlying problem, calling it a “silly governmental problem”, according to Tom Alrich, a consultant and leader of the OWASP SBOM forum who reported on the VulnCon announcement. Since the database is still lagging far behind where it should be, the “silly” problem is likely neither trivial nor fixed.

NIST says they’ve got some support from other agencies to help them work on the backlog, and they’ve reiterated that they plan to form a consortium of “industry, government, and other stakeholder organizations that can collaborate on research to improve the NVD.” So, in these last few weeks, we’ve learned more or less nothing new. 

Open letter to Congress 

NIST’s continued questionable PR and stubborn opacity has triggered mounting concern across the cybersecurity community. There is also concern that the “consortium” solution will lead to a volunteer-based NVD where the project could lose its neutrality or be abandoned altogether. By and large, the community wants to see the NVD survive and thrive under the U.S. government’s care. 

One ad-hoc group of cybersecurity pros is taking the problem up the chain to the United States Congress. Led by Chainguard CEO Dan Lorenc, a team of security researchers and practitioners, including myself, have authored an open letter to Congress expressing concern over NVD’s troubles. 

Published earlier today, the open letter urges Congress to do several things: 

  • Investigate the cause of these recent issues
  • Address the lack of transparency from NIST 
  • Ensure sufficient funding to both erase the backlog and make much-needed upgrades to processes and infrastructure
  • Elevate the status of the NVD to “critical infrastructure” that will be unimpeded by normal budgetary issues and government shutdowns

How can you help?

Time will tell what happens to the NVD and the backlog of CVEs waiting to be enriched. Government organizations tend to move slowly, and that’s especially true for older ones (NIST just celebrated its 123rd birthday in March).

In the meantime, concerned U.S. citizens can write to their Congressperson in support of the NVD, and all citizens of planet Earth should make sure their applications are covered by tools that source vulnerability data from more than just the NVD. Mend.io customers are covered on that front. Smaller organizations using only FOSS solutions will likely need to string together multiple resources to stay covered.

]]>
What You Need to Know About Hugging Face https://www.mend.io/blog/what-you-need-to-know-about-hugging-face/ Wed, 03 Apr 2024 20:19:00 +0000 https://mend.io/what-you-need-to-know-about-hugging-face-2/ The risk both to and from AI models is a topic so hot it’s left the confines of security conferences and now dominates the headlines of major news sites. Indeed, the deluge of frightening hypotheticals can make AI feel like we are navigating an entirely new frontier with no compass.

And to be sure, AI poses a lot of unique challenges to security, but remember: Both the media and AI companies have a vested interest in upping the fright hype to keep people talking. For those of us dinosaurs who’ve been around for a while, the problems organizations are facing with AI feel in many ways fairly similar to when open source software started to hit it big.

What is Hugging Face?

Creating AI models, including large language models (LLMs), from scratch is extremely costly, so most organizations rely on existing models. Much in the way they look to GitHub for open source modules, developers who want to build with AI head to Hugging Face, a platform that currently hosts over 350,000 pre-trained models and 75,000 data sets—with more on the way. Those AI models use licenses just like open source software does (many of them the same licenses), and you’ll want to make sure your AI models use commercial-friendly ones. 

So when it comes to risk, the first thing organizations need to know is what open source AI models developers have put into their code base. Conceptually it’s a simple task, but if your organization has enough code, finding AI models specifically can be less than straightforward. (I didn’t mean this to be a product promoting blog but I have to mention that we solved that problem already.) Once you know what you have, where it comes from, what version you’re on, and so forth, you can make adjustments and keep up with any security notices that might come up, same as with open source code.

But from there, I must admit, we definitely diverge from the classic problems of open source software. By their very nature, AI models are opaque. They might be open source-like in that you’re free to use and distribute them, but you can’t really see and understand the source. Because drilling down into AI models is nearly impossible, organizations that work with AI models are going to need to do a lot more threat modeling and penetration testing.

The next chapter of risks

The risks of open source vulnerabilities will still exist throughout your applications; AI adds some twists to the classics and throws in a few new ones on top of that. Just a few examples:

Risk of data exfiltration (AI version). Goodbye SQL injection, hello prompt injection. Can an attacker interfacing with your AI model use plain language prompts to get it to divulge training data which may include sensitive information?

Risk of bias. Does your AI model include biases that could end up causing discrimination against people based on immutable characteristics? Regulators sure don’t like that.

Risk of poisoning/sabotage. Can an attacker use a poisoned data set against your AI to make it perform incorrectly across the board or with specific targeted interest?  For instance, could it register a specific face as, say, an innocent houseplant? (I’m looking forward to all of the heist films that will inevitably use this in their plots.) Artists are already using this concept to protect their copyrighted works from image-generating AI models.

Risk of being just plain wrong. There’s a lot of ways AI can hand you seemingly good output that comes back to bite you. Just one example: if you use an AI chatbot in place of live support on your website, they might give your customers bad information. And you might be on the hook for that.

Again, this list is non-exhaustive. There are many other threats, like the power of AI being wielded for creating phishing campaigns or writing malware. There are many, many ways that using AI in an application increases the attack surface.

Reading ahead for solutions

As a technology, AI is rather immature. It’s still early days, which means huge susceptibility to disruptions in the field and in the marketplace. No one knows precisely where this is all heading, in tech or in the courts

For now, we’ll have to all stay diligent and keep our ears open. One thing for sure, you cannot keep your head in the sand and ignore AI. You have to know what AI models you have and keep them updated. From there, frameworks like MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) are an excellent place to start for threat modeling.

AI threats won’t be the end of the world, as long as we stay alert.

]]>
Secrets Management vs Secrets Detection: Here’s What You Need to Know https://www.mend.io/blog/secrets-management-vs-secrets-detection-heres-what-you-need-to-know/ Wed, 14 Feb 2024 21:44:01 +0000 https://mend.io/secrets-management-vs-secrets-detection-heres-what-you-need-to-know/ As the name might imply, it’s important to keep secrets secret. Access to even the smallest of secrets can open a window for attackers who can then escalate their access to other parts of the system, allowing them to find more important secrets along the way. Poor practices can leave many secrets lying around unprotected and just one seemingly unimportant secret can lead to a broad security breach.

What are secrets?

Secrets are private credentials that are often the keys to highly sensitive data. They include:

  • Database passwords
  • Privileged account credentials
  • SSH keys
  • Encryption keys
  • Third party tokens
  • API keys
  • Private certificates

Secrets themselves are fairly straightforward, but tracking and managing them manually can be a complicated if not impossible task. Many applications rely on thousands or even millions of secrets to function. Secrets accumulate in all directions over time and across versions, installations, and layers of code, so it’s important to manage your secrets well and detect (and revoke) any secrets that have been left where they shouldn’t be.

How are secrets mismanaged?

Hardcoded secrets. Hardcoded secrets are stored in plain text in source code which may be revealed accidentally, such as in public repositories or code snippets shared on forums or blogs, or maliciously via hackers using other application vulnerabilities and weaknesses to expose the source code.

Use of default credentials. Products often ship with default credentials to make deployment and administration a simpler process. If these credentials aren’t changed for each installation, bad actors can quickly exploit multiple organizations. Hardcoded default credentials are especially risky.

Stored in configuration files. To avoid having secrets in the source code, developers will sometimes instead store the unencrypted secrets in config files. These files sometimes end up in public repositories (oops) or threat actors can gain access to them in other ways and even change passwords to ones of their choosing. Beyond that, having different secrets in different config files just makes things more difficult to track and secure. 

Manual management. Manually managing secrets tends to lead to the use of easier-to-remember (and easier to guess) passwords that are shared among coworkers and not changed often enough. Modern applications are huge and complex, making secret management extremely difficult, especially when they’re spread across many layers. 

Secrets management

Secrets need to be protected both in transit and at rest with strong encryption. Good practices include: 

  • Role-based access control
  • Using the principle of least privilege
  • Keeping an audit trail
  • Keeping secrets in protected locations

While secrets management isn’t one thing or one tool, tools do help automate the process. Secrets management tools allow you to centralize your secrets in a secure repository, not unlike a personal password vault geared toward machine use. They also enforce best practices like changing passwords frequently and offer access control, rotation, and monitoring.

There are many built-in and third-party tools that automate the process of secret generation and distribution. Many cloud providers offer their own tools, such as AWS Secrets Manager (Amazon), GCP’s Secret Manage (Google Cloud), and Key Vault (Azure). Orchestration platforms like Kubernetes also have their own ways of helping you manage secrets.

Secrets scanning and secrets detection

So you’re managing your secrets correctly now. But what about those secrets that are already out in the open? Or the ones that developers might neglect to follow best practices for in the future?

Secrets scanning tools detect secrets in unprotected locations so you can find the layer they’re on, revoke them, and put new ones securely where they belong.

If you are only beefing up your secrets management for the first time, you’ll need to know where they’re still hiding in your codebase.

But don’t stop after scanning your currently deployed code. Secrets may also exist in previous versions of code that are no longer used, but are still accessible, as version control systems by their nature keep a history of all modifications to the codebase. If these secrets are or can be publicly exposed and not detected and revoked, the current version of the application can still be at risk. You should use scanning tools not only across your current source code, but across historical versions of your application as well.

Even after secrets management tools have been put solidly in place, it may be necessary to scan for secrets in order to find newly created weaknesses created by bad developer practices.

Just between you and me…

Secrets are a big deal, and mismanagement of secrets is one of the most overlooked weaknesses in AppSec. Only the very smallest of projects can securely manage secrets manually. For everything else, tools are necessary to both detect secrets and automate the processes of securely storing and accessing them.

]]>
The Challenges for License Compliance and Copyright with AI https://www.mend.io/blog/the-challenges-for-license-compliance-and-copyright-with-ai/ Thu, 21 Dec 2023 20:01:44 +0000 https://mend.io/the-challenges-for-license-compliance-and-copyright-with-ai/ AI-powered code generation is reshaping software development, promising to boost efficiency and innovation. But as this technology gallops forward, the legal landscape remains a dusty, uncharted territory. With policymakers struggling to keep pace, organizations face a daunting choice: embrace the potential of AI or tread cautiously in this legal minefield. The question isn’t whether AI will revolutionize coding, but how we can navigate the risks and uncertainties to harness its power responsibly.

This article is part of a series of articles about Open Source License Compliance

Disclaimer: A rapidly evolving landscape

While the implications of AI-generated code are global, this discussion centers on the United States, the epicenter of AI development and subsequent legal battles. It’s crucial to remember that legal frameworks in other countries may vary significantly.

We’re not legal experts and this content doesn’t constitute legal advice. Moreover, the AI landscape is in constant flux.Laws, regulations, and court decisions are evolving rapidly, making it challenging to provide definitive answers.

With those caveats in mind, let’s tackle the burning questions developers have about using AI-generated code.

Can I be sued?

Developers who use AI-generated code that is often trained on open source software, face a complex legal landscape. Concerns about copyright infringement and license compliance loom large. While the worst-case scenario involves a restrictive GPL license, even a simple attribution requirement can be challenging. Tracking AI-generated code and identifying its open source origins remains a daunting task without reliable tools. However, a recent announcement from Microsoft offers some relief to GitHub Copilot users, promising legal support for those using the tool responsibly.

According to Microsoft:

As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.

That doesn’t mean the law is guaranteed to settle on Microsoft’s side, but it does signal loudly that they’re confident they have a strong legal case. A lawsuit alleging Microsoft, GitHub, and OpenAI infringed on open source licenses and copyrights when training their models is working its way through the U.S. legal system and likely will be for some time. Microsoft argues that anyone has a right to look over public code on GitHub to understand and learn from it and even write similar — but not outright copied -– code, and that includes their models. OpenAI hasn’t promised to pay legal fees for its users, but if Microsoft’s argument holds up, it will be good news for OpenAI and its users too.

Can I sue?

The legal landscape surrounding AI-generated code is murky. Currently, AI-created content isn’t eligible for copyright in the U.S. While companies like OpenAI claim to transfer ownership to users, the reality is complex.

For software incorporating AI-generated code, copyright protection hinges on human involvement. Substantial human authorship is crucial. However, determining the exact threshold is unclear. The U.S. Copyright Office requires disclosure of AI-generated content, but specific guidelines for software are still evolving.

To mitigate risks, carefully document AI-generated code within your project. This could be crucial for future copyright applications or potential legal disputes. Ultimately, deciding whether to use AI-generated code involves weighing the potential benefits against the legal uncertainties.

]]>
Let’s Embrace Death in the Software Development Lifecycle https://www.mend.io/blog/lets-embrace-death-in-the-software-development-lifecycle/ Fri, 20 Oct 2023 18:05:10 +0000 https://mend.io/lets-embrace-death-in-the-software-development-lifecycle/ The leaves are turning brilliant colors before they fall off and blow away here where I live just a few minutes outside of Salem, Massachusetts where autumn — Halloween specifically — is a very big deal.

I’m not morbid but it’s a natural time to think about how things wind down and finally breathe their last breath. Nothing lasts forever. Not trees. Not animals. Not people. Not cars. Not houses. Not software. Especially not software.

People who actually make applications definitely know this. But instead of showing respect to our apps and letting them die a planned and peaceful death, we let our products turn first into Frankenstein’s monster with mismatched parts sewn crudely together, then finally into a zombie with fully rotting parts that fly apart at the smallest bump.

In this blog, let’s look at how and why you should retire some software gracefully, before it transforms into something scary.

Stopping zombies: Why you should let some software go

Here’s the classic graphic of the software development lifecycle (SDLC). There’s no obvious place where death comes in.

If you don’t want a zombie product, it needs to come in right at stage 1: planning. You have to plan on how you will replace all of the pieces, and you need to think about when it’ll become too complex. If you don’t decide ahead of time that you are going to budget and plan for building a new house every 100 years, what you end up with is a cursed 200-year-old mansion that’s falling down, a danger to anything it touches, and that anyone can walk right into and steal your stuff. In software, we don’t get 100 years (more like five) but the result is the same.

Here’s the SDLC in practice on a large time scale, or at least what we wish would happen: you spend a lot of time and money on the build, and then you try to maintain the plateau indefinitely to live happily and profitably ever after.

So you reiterate and replace piece by piece, but meanwhile quality (and security) goes by the wayside. You don’t plan for deprecation and getting rid of your product, you just focus on maintaining it. Here’s what actually happens: eventually maintaining that zombie will cost your entire revenue stream with no money leftover to rebuild with. When you’re spending all of your resources maintaining a product, it’s difficult to keep it secure or functional, let alone to iterate and make it better.

This is make or break stuff. Many software startups fail in the first year or two. There’s a second huge cliff between eight and ten years. This makes sense.

For the majority of startups the first couple of years are focused on making ends meet, growing fast, and building quickly with what they can get cheap. If they don’t then slow down enough to plan for the retirement and rebuild of their product, they’ll end up with a product that’s costly to maintain, impossible to secure, and too complex to keep functional. If that’s happening across all of their applications, that company will fry by that ten year mark. 

Why you need to think ahead to evade the monsters

Anyone who doesn’t know when their product will expire isn’t thinking very far ahead. Humans generally aren’t great at long term planning and those in charge of software companies are no exception.

One source of short term thinking comes from high developer turnover. With the average developer only staying at a company 1-2 years, the longevity of a product is seen as someone else’s problem, and it very likely will be. The decision to plan ahead and avoid zombie products can’t be left up to developers. Companies need to have the right long-term perspective and use that to tell employees how to build products that can be retired and rebuilt without chaos.

Why too many companies don’t think ahead

So what stops companies from having that perspective? Well, it’s a painful pitch to make. “Hey, I know the thing we have is working and making us money but in a year I’m going to have to replace pretty much the whole thing. We better spend some money now and rebuild the thing we already have.” It’s one of the toughest business decisions to make in application development. It’s not going to make any money; all it does is cut down future costs. Should we spend $2 million now to not spend $10 million in three years? That’s a long time frame for many companies.

But the alternative to making that tough decision is bleak. It’s easy to be penny-wise and pound-foolish. I’ve killed a lot of products in my career. I’ve retired product lines that were still profitable for the company because they were too much of a pain. They had too many quality issues and couldn’t be secured. If someone had had a little bit more foresight and made some fundamental changes to these products three or four years prior, before I got to them, I would have been able to hold onto them longer. But it was too late.

Why is the threat worse now?

Five to ten years ago you could maybe get away with keeping products around longer. Today, applications are increasingly dependent on each other and the application development supply chain is far more complex. It’s become much harder to find things that fit older software; replacing old parts is a pain in the rear that never works right. And you just can’t properly secure old software. You can try to plug up the holes as you find them but more will pop up and you’ll find some holes are simply out of reach.

I’ve tried to keep my analogies to a Halloween theme so I’m sorry to go out with one about cars. If you try to keep a car going for 30 years by replacing each part piece by piece, you’re not going to end up with a better car. You’re going to end up with a crappy car with terrible gas mileage that can barely get you where you need to go without breaking down, and can take advantage of few if any advancements in efficiency or safety.

Embrace change and refresh to keep the zombies at bay

Because of its dependency and complexity, the problems with approaching software this way is worse than any real world analogy. The road that you drive on doesn’t change every five years but the platforms, networks, infrastructure, and so on that your software rides on do. 

You have to know things in software are always going to change. You have to plan for that change or at least recognize that the change is going to happen. There’s going to be better and cheaper ways of doing things that you’re going to want to take advantage of. So let’s embrace death in the SDLC. Plan for a peaceful death of your software now or be haunted by it later.

Are you set up to manage your dependencies efficiently and avoid zombie software from affecting your codebase?

]]>
What You Can Do to Stop Software Supply Chain Attacks https://www.mend.io/blog/what-you-can-do-to-stop-software-supply-chain-attacks/ Thu, 24 Aug 2023 15:12:16 +0000 https://mend.io/what-you-can-do-to-stop-software-supply-chain-attacks/ In my previous blog post, I looked at how software supply chain attacks work and what you can do to assess and analyze your security posture. Now, let’s figure out how to use the resultant information to harden your software supply chain against threats.

Use SBOMs

The Software Bill of Materials (SBOM) is an increasingly important tool for managing supply chain security. An SBOM is a detailed breakdown of the different software components that are used in applications. SBOMs include metadata, like software origin or licensing terms, version number or release date, code packages, libraries, and other package dependencies. It may also include underlying systems and perhaps the programming language in which the application was coded.

What's in an SBOM?

An accurate SBOM gives you better visibility, transparency, accountability, and management of the software supply chain. With the use of an SBOM, you can see which components meet regulations, industry standards, and best practices. You can track and manage applications’ components, and you can better identify and address any potential security vulnerabilities, malicious packages, other security risks, and compliance issues. When any application undergoes a major update, ensure that the SBOM is updated, using a dedicated SBOM generation tool.

How can SBOMs help?

SBOMs are becoming an increasingly critical tool now that more regulation from governments and industry groups is being introduced to strengthen software supply chain security. The U.S., the U.K., the E.U., Australia, and New Zealand have already introduced cybersecurity strategies, which will obligate software and application providers to be transparent about the provenance of their products, be accountable for them, and follow best practices to secure their software supply chains.

SBOMs and their relation to the software supply chain.

Apply software supply chain security best practices

So, what are the best practices that organizations should implement? In short, they break down as follows:

Software supply chain best practices
  • Develop a comprehensive supply chain risk management process. Identify potential risks, assess their impact, and establish security requirements to manage those risks.
  • Develop and practice your incident response plan. Respond to issues quicker and more efficiently, minimizing damage caused by attacks.
  • Review and improve your processes regularly. Ensure all requirements and controls are up to date with the latest versions and regulations.  
  • Educate and train employees and stakeholders to follow best practices.
  • Don’t disrupt business. Make your processes easy to learn, adopt, and implement. Make them a seamless part of your teams’ regular workflow. Use tools like code signing, digital certificates, multifactor authentication, and secure software distribution, to minimize risks.

Know your components 

To build strong supply chain security, you need to answer the following fundamental questions about every component of your software:

Software supply chain fundamental questions

SBOMs answer the first two, providing a machine-readable, easily communicated inventory of all of the items inside your product. Finding the answers to questions 3 through 5 is more complex. These questions revolve around safety. To answer them, you need to know: 

  • Who your suppliers are, and how secure their systems are
  • What every component does–and confirm that this is actually what that component does
  • All versions are up to date
  • When, if, and how do components become unsafe
  • Where issues appear

As up to 90% of code is open source, you must know who the suppliers are. Remember that attacks can leverage a supplier you think you know, or trust. You’re always vulnerable to attacks that are currently happening, such as typosquatting, dependency confusion, and dependency hijacking.

The threat of malicious packages

We see a lot of attacks using new packages that display undesired and unclear behavior, especially spam packages, which spread like wildfire.

Types of malicious package attacks

New libraries also don’t get flagged as bad behavior because they may not necessarily be damaging. In the case of something like obfuscated code, there’s often no actual malicious behavior. So, people begin to develop trust that can then be exploited.

Then there’s malware — bad behavior, or dormant things waiting to behave badly. The key to staying a step ahead is to stay up to date.

Stay up to date with known vulnerabilities

It’s shocking how many people focus on the unknown when they have huge attack surfaces of known vulnerabilities. Attackers often leverage known vulnerabilities that haven’t been fixed, so it’s important to remember that out-of-date code equals risk, especially with open source. 

While good access control, risk management, and design will limit the impact of a known vulnerability, it’s better to just ensure you have few known vulnerabilities and mitigate them as needed. Therefore, my top recommendation is to automate dependency updates. Many tools will tell you what dependencies need updating. The good, advanced ones will create a pull request so that you can automatically merge the update. Your infrastructure, your containers, and your application code should also auto-update.

Constantly monitor and prioritize

Automated scanning is vital, but it can’t cover everything. You need tooling that constantly monitors and prioritizes vulnerabilities. When something goes out of date or has a new vulnerability, it gets flagged and you can address it. That means you need software composition analysis (SCA).

Prioritization is important because not all vulnerabilities need your attention. You need a central inventory or SBOM. If you’re a big enough company, you need one that covers all your products and is searchable, because when a serious issue like Log4j emerges, you need the capability to search for it throughout your code base and get answers almost immediately.

If you don’t . . .

Software supply chain threat surface expanding

If you don’t take these steps, your threat surface expands. You become vulnerable to new issues, which create plenty of noise that developers hate. This complicates workflows, and fixes and updates get missed.

]]>
How Software Supply Chain Attacks Work, and How to Assess Your Software Supply Chain Security https://www.mend.io/blog/how-software-supply-chain-attacks-work-and-how-to-assess-your-software-supply-chain-security/ Thu, 17 Aug 2023 14:58:46 +0000 https://mend.io/how-software-supply-chain-attacks-work-and-how-to-assess-your-software-supply-chain-security/ When it comes to applications and software, the key word is ‘more.’ Driven by the needs of a digital economy, businesses depend more and more on applications for everything from simplifying business operations to creating innovative new revenue opportunities. Cloud-native application development adds even more fuel to the fire. However, that word works both ways: Those applications are often more complex and use open-source code that contains more vulnerabilities than ever before. Then too, threat actors are creating and using more attack methods and techniques, often in combination.

Ultimately, we end up with a smorgasbord of attack opportunities, and threat actors know it. In fact, Mend.io’s recent report on software supply chain malware saw a 315 percent jump in the number of malicious packages published to npm and rubygems from 2021 to 2022.  These attacks often compromise trusted suppliers or vendors. And precisely because they exploit trusted relationships, they can be quite difficult to detect and repulse.

So how can you defend against them? Here are a few ideas. 

How do software supply chain attacks work?

The software supply chain is the network of suppliers and vendors that provide the software components for applications. Adversaries compromise third-party software to gain access to your systems and code base. Then they move laterally through your supply chain until they reach their intended target.

Generally, software supply chain attacks follow a series of stages.

  • Reconnaissance. Malicious actors research their target and identify vulnerabilities in the supply chain. This involves gathering information on the suppliers, vendors, and partners within the supply chain.
  • Initial compromise. The first access to a vulnerable point in the supply chain, like a third-party supplier or vendor. It may involve phishing and other social engineering to trick employees into providing access credentials.
  • Lateral movement. Once inside the supply chain, attackers try to gain access to other systems or data, using things like stolen credentials or exploit vulnerabilities.
  • Escalation of privileges. Attackers seek to gain administrative access to critical systems within the target enterprise, like domain controllers or other servers that hold sensitive data.
  • Data exfiltration. Data or intellectual property is stolen, or other disruption is caused.

By understanding these stages, you can take steps to detect, mitigate, and prevent software supply chain attacks before they cause significant damage.

Common vulnerabilities

Software supply chain security weakness can most often be caused by:

  • Insufficient code review and testing, resulting in vulnerabilities going undetected. Enterprises should implement a comprehensive code review and testing process to identify and mitigate any potential security issues.
  • Outdated/unpatched software leaves systems vulnerable to known security vulnerabilities that attackers exploit.
  • Poorly designed access controls and weak authentication allow attackers to easily gain unauthorized access to sensitive systems and data.
  • Weak encryption and insecure communication make it easy to perform data breaches.

If an enterprise doesn’t have the tools or expertise to effectively monitor and detect threats, the lack of visibility into the supply chain increases the risk of exposure to potential issues. That’s the first of some hidden vulnerabilities that also pose a threat. The others are:

Hidden vulnerabilities

  • Third-party dependencies. Applications often rely on third-party libraries and components, which can introduce vulnerabilities if they are not properly managed. These can be difficult to detect, especially if the enterprise has poor visibility into the source code.
  • Lack of diversity in software suppliers. If an enterprise relies on a single software supplier and doesn’t have visibility into its security practices, then it can’t effectively detect hidden vulnerabilities.
  • Attacks targeting open-source software happen because enterprises use open-source so heavily that it’s an enormous attack surface.   

How do you assess your supply chain security?

  • Identify software suppliers and partners. Generate a software bill of materials (SBOM) — an inventory of all your vendors, contractors, and other partners, checking their security policies and controls, and their compliance with regulations.
  • Conduct a risk assessment and set up remediation plans, including robust software testing and enhancing security awareness.
  • Review and implement your controls and policies. Ensure your policies meet security requirements. Check access control and data protection to prevent unauthorized access, strengthen confidentiality, limit the attack surface, and mitigate third-party risks.
  • Practice encryption and secure communication
  • Evaluate and redesign supply chain architecture, to increase supply chain visibility, better identify and manage potential issues, malicious activity, third-party risks, and ensure you meet compliance and regulatory requirements.

Tools to strengthen security

Build a holistic approach to security. Use a combination of vulnerability scanners, endpoint protection software, network security tools, identity and access management, and specific software supply chain tools, alongside employee training and response planning.  

In my next blog post, I look at how you do this successfully, and what you should do with these tools to harden the security of your software and applications. 

]]>
CVSS 4.0 — What’s New? https://www.mend.io/blog/cvss4-0-whats-new/ Thu, 22 Jun 2023 17:06:54 +0000 https://mend.io/cvss4-0-whats-new/ The latest version of the Common Vulnerability Scoring System, CVSS 4.0, entered its public preview phase at the 35th annual FIRST conference put on by FIRST, the Forum of Incident Response and Security Teams. An international confederation of computer incident response teams, FIRST writes the CVSS specification that plays such an important role in identifying and cataloging software and application vulnerabilities.

After almost two months of public preview, CVSS 4.0 will be prepared for official rollout in the fourth quarter of 2023, and the U.S. National Vulnerability Database is expected to support its publication. CVSS 4.0 sees significant updates from the current version, 3.1, including “provider provided urgency” (sort of like vendor scores), an increased focus on ‘environmental scores’, a new severity score definition, and a host of other factors.

What’s new in CVSS 4.0?

FIRST identified a number of challenges and critiques of CVSS 3.1 that the release of CVSS 4.0 addresses to improve its precision, usability, and comprehensiveness, with better representation of real-world risk. With that in mind, the following changes have been made: 

  • Finer granularity in base metrics. More detail was added through the addition of several new measurements. Attack Complexity reflects the exploit engineering complexity required to evade or circumvent defensive or security-enhancing technologies. Attack Requirements reflect the prerequisite conditions of the vulnerable component that makes an attack possible. Finally, Enhanced User Interaction granularity was added.
  • Retirement of the Scope metric. Scope sought to measure the ability of a vulnerability in one software component to impact resources beyond its means, or privileges, but it caused inconsistent scoring between product providers and implied lossy compression of impacts of vulnerable and impacted systems. Instead, impact and C/I/A (confidentiality, integrity, and availability) metrics have been expanded into two sets:
    • Vulnerable System Confidentiality, Integrity, and Availability
    • Subsequent System Confidentiality, Integrity, and Availability 
  • Simplified threat metrics and improved scoring impact.  Remediation Level, Report Confidence, and Exploit Code Maturity were simplified to Exploit Maturity.
  • Supplemental metrics.  These describe and measure the following additional extrinsic attributes of vulnerabilities to improve response accuracy.
    • Automatable: Can an attacker automate the exploitation of a vulnerability?
    • Recovery: This measures the resilience of a component or system to recover services after an attack, identified as automatic recovery, user recovery that requires manual intervention, or irrecoverable.
  • Value density. The resources over which an attacker will gain control with a single exploitation event, either diffuse (small) or concentrated (rich in resources).
  • Vulnerability response effort. This measures how difficult it is for consumers to respond to the impact of vulnerabilities for deployed products and services in their infrastructure on a scale of low, medium, and high. They can consider this when applying mitigations and/or scheduling remediation.
  • Provider urgency. This enables any provider along the software supply chain to supply an additional assessment of risk and urgency on a green/amber/red scale of rising severity. It is recommended that the penultimate product provider in the supply chain is best positioned to supply such an assessment.
  • Additional applicability to operational technology (OT), internet connection sharing (ICS), and the Internet of Things (IoT)
  • Safety Metric Values added to Environmental Metrics.

FIRST recommends the following to get the most benefit from the CVSS:

  • Use databases and data feeds to automate the enrichment of your vulnerability data, such as the NVD for base metric values, asset management databases for environmental metric values, and threat intelligence data for threat metric values.
  • Use these important attributes to create new views into vulnerability data:
  • Support teams responsible for resolution
  • Critical applications
  • Internal- vs. external-facing components and applications
  • Business units
  • Regulatory requirements

How should you best use CVSS 4.0?

Like its predecessors, CVSS 4.0 is designed to help you understand the impact of Common Vulnerability and Exposures (CVE) encountered in your software development pipeline. With its new capabilities, we encourage your developers and DevSecOps teams to use the CVSS as frequently as possible throughout the software development lifecycle (SDLC). Version 4.0’s enhanced clarity, flexibility, granularity, and usability make it an even more valuable tool for identifying vulnerabilities and assessing their risks and threats.

In particular, CVSS 4.0’s enhanced ability to assess factors like context, urgency, and resilience will increase risk measurement accuracy. Mend.io welcomes this more risk-based and real-world iteration of CVSS as it perfectly aligns with our vision of prioritizing security findings based on the actual threat they represent in a specific context. In our pursuit of minimizing false positives, we always encourage teams to consider each vulnerability in the context of its usage, because vulnerabilities have differing impacts in different circumstances. Therefore, knowing how particular vulnerabilities behave in different situations helps establish true threat severity and lets teams better prioritize those most in need of remediation. That’s where version 4.0 can help and it’s why its repeated use, even to reassess vulnerabilities previously assessed earlier in the SDLC, should be beneficial.

The changes to CVSS should further improve companies’ vulnerability management when hardening AppSec postures. To that end, it’s important for application security companies such as Mend.io to support and promote CVSS 4.0 from the day it’s incorporated into the NVD.  We will be taking the upcoming changes into account to help ensure that our vulnerability database is as accurate as possible to deliver precision and value from our base SCA product and knowledge base, our container solution, and our platform, especially when it comes to remediation advice.

New precision and in particular better ease of use should make CVSS 4.0 more essential to application security and the way software and application security issues are detected and remediated. We look forward to its official publication towards the end of the year.

]]>
Understanding the Anatomy of a Malicious Package Attack https://www.mend.io/blog/understanding-the-anatomy-of-a-malicious-package-attack/ Tue, 13 Jun 2023 07:05:00 +0000 https://mend.io/understanding-the-anatomy-of-a-malicious-package-attack/ To identify malicious packages and protect yourself against them, you need to know what to look for. Here’s a simple guide.

In January 2022, users of the popular open-source libraries “faker” and “colors” suddenly found their applications started to malfunction and display nonsensical data because they had been infected by a malicious package. Similarly, in October, an attacker unleashed a typosquatting campaign against users of 18 legitimate packages that collectively receive over 1.5 billion weekly downloads. The attack released 155 malicious packages into the npm repository. Its objective was to distribute and install a Trojan that stole passwords.

Malicious packages like these are designed to disrupt or disable their targets’ software and applications. They’re alarmingly easy to create and difficult to identify and avoid unless you know what you’re looking out for.

A rapidly growing menace

Although they’re not a new phenomenon, malicious packages are proliferating at a startling rate. In the Mend.io Software Supply Chain Malware Special Report, we found that the number of malicious packages published on npm and RubyGems rose by 315% from 2021 to 2022. We anticipate that this growth will continue.

Malicious packages are a type of malware that deceives unsuspecting users into downloading them. Once downloaded, they can cause serious damage to the systems that they target. They’re highly effective because their sources seem trustworthy, so users are inclined to download them.

The damage from these packages comes about because they provide an automated and easy way for malicious code to enter systems with little or no effort from attackers. Once a package is uploaded, it operates on its own and unleashes its ill effects. Bad news for users. Great news for attackers. It’s no wonder that there has been a surge in malicious packages.

How malicious package attacks work

Attackers use malicious packages to steal or erase data and transform applications into botnets once they’ve deceived users into downloading the packages. They achieve this in four main ways:

  1. Brandjacking. Attackers assume the online identity of a company or package owner so users will trust and download their packages. Then they insert malicious code. When the dYdX cryptocurrency exchange was attacked, this was how it was infiltrated. In this attack, the malicious package versions contained a preinstall hook so it looked as if a CircleCI script was being downloaded.
  2. Typosquatting. This kind of attack relies on simple typographical errors that targets fail to notice. In these cases, when an attacker creates a malicious package, they deliberately name it in a way that closely resembles the name of a popular package. Then when developers misspell the name or don’t spot that it’s spelled differently, they open and download the malicious package.
  3. Dependency hijacking. Attackers gain control over a public repository to upload a new malicious version of an existing package.
  4. Dependency confusion. This occurs when a malicious package in public repositories shares the name of an internal package. Attackers exploit this to mislead dependency management tools into downloading the public malicious package.

Given the relative novelty of malicious packages, attackers’ methods are fairly unsophisticated. Typically, they rely on four techniques:

  • Re- and post-install scripts
  • Basic evasion techniques
  • Shell commands 
  • Basic network communication techniques

The good news from a security perspective is that when attackers use a straightforward technique like network communication, it’s still reasonably easy to detect them, even when packages are successfully downloaded.

Nevertheless, attackers continually seek to make their techniques more effective and create newer, more complex ways to infiltrate target machines and systems. One example is telemetry for data collection. We anticipate that more and newer ways of creating and using malicious packages will be created.

Timing of attacks

Initially, it seems as though malicious packages are published randomly, and it’s arbitrary when attackers release them, but in fact, that isn’t the case.

Attackers try to maximize the effect of their malicious packages and optimize opportunities that they’ll get downloaded by timing their release. Our research found that Top of Form

Nearly 25% of malicious packages are published on Thursday afternoons. This could be because attackers realize that many cybersecurity companies are based in Israel, where the weekend is Friday and Saturday. So, they deliberately release these packages at a time when these vendors are winding down for the weekend.

Understand open source to protect it from malicious packages

The accessibility of open source software contributes significantly to the impact of malicious packages. Even people with relatively elementary programming skills can create these packages and publish the code to open source repositories that countless developers use. This is an environment that offers plenty of opportunities for malicious packages to get downloaded by unsuspecting users. It’s fertile ground from which malicious actors can launch successful attacks.

Therefore, understanding the implications of incorporating open source code into applications becomes crucial in this context. If you know the dangers, you can be vigilant and better prepared to protect your organization. A significant thing to bear in mind is that malicious packages pose an urgent threat, whereas vulnerabilities can lurk in a codebase for longer periods, sometimes without causing any deleterious effect. It’s therefore important to find and neutralize malicious packages as quickly and efficiently as possible.

Companies can harden their security posture against malicious packages in numerous ways, not least by prioritizing their software supply chain. It’s essential to scan all open source code repositories and libraries, to find and remediate vulnerabilities, and to identify and prevent attacks. The best way to do that is to use an automated scanning tool and accompany this with a software bill of materials (SBOM). While high-profile attacks like Log4j and the SolarWinds breach receive significant attention, they’re just a small proportion of the onslaught of attacks that applications face. The escalating threat posed by malicious package attacks increases the need to take a fresh approach to application security (AppSec). And that fresh approach requires implementing constant, automated AppSec so that organizations can stay ahead of attackers in the race to protect their software and avoid the damage that malicious packages can cause.

]]>