Rami Elron – Mend https://www.mend.io Fri, 15 Nov 2024 21:58:54 +0000 en-US hourly 1 https://www.mend.io/wp-content/uploads/2024/11/Mend-io-favicon-outline-200px.svg Rami Elron – Mend https://www.mend.io 32 32 How Can Application Security Cope With The Challenges Posed by AI? https://www.mend.io/blog/how-can-application-security-cope-with-the-challenges-posed-by-ai/ Tue, 25 Jul 2023 19:32:46 +0000 https://mend.io/how-can-application-security-cope-with-the-challenges-posed-by-ai/ This is the third part of a blog series on AI-powered application security. Following the first two parts that presented concerns associated with AI technology, this part covers suggested approaches to cope with AI concerns and challenges.

In my previous blog posts, I presented major implications of AI use on application security, and examined why a new approach to application security may be required to cope with these challenges. This blog explores select application security strategies and approaches to address the security challenges posed by AI.

Calling for a paradigm shift in application security 

Comprehensive assessment of an application’s security posture involves analyzing the application to detect pertinent vulnerabilities. This entails, among other things, knowledge of code provenance. Knowing the origin of your code can expedite the remediation vulnerablities, and facilitate effective incident response by helping an organization to infer who is likely to be responsible for code maintenance and vulnerability fixes.

Historically, as computer-generated data exploded in volume, the industry was compelled to form new concepts and think in new terms such as “big data”. AI galvanizes a similar shift in thinking due to the relative ease by which AI power can be abused by malicious actors, and the potential extent of such an abuse. With AI-based machine-generated code, it is conceivable that in the near future most of the detected software security vulnerabilities calling for attention are more likely to be associated with a non-human author.

AI doesn’t just underscore the significance of quick pattern and anomaly detection. It excels in such areas. What’s more, it can significantly alleviate the challenge faced by organizations struggling to discern true security vulnerabilities among the noisy results often produced by non-AI solutions. Application security tools should correspondingly be updated or even re-designed to cope effectively with such a challenge.

What can the AppSec industry do to accommodate AI security challenges?

The potential scope of AI-related risk necessitates a significant change to application security tools. Vulnerability management (VM) and attack surface management (ASM) solutions will need to factor in new types of threats and revise asset risk assessments, Application security testing (AST) and software composition analysis (SCA) solutions will need to detect and handle AI-related software vulnerabilities update risk scoring, and enhance prioritization and remediation. Production-focused security solutions (e.g., cloud-native application protection platforms – CNAPP, cloud security posture management – CSPM, cloud workload protection platform – CWPP, security information and event management – SIEM, security orchestration, automation and response – SOAR, etc.) will need to support enhanced detection, reporting, and response workflows to satisfy new AI-related auditing and regulatory compliance requirements, and assuage concerns regarding the organization’s asset security.

AI also highlights the significance of certain aspects of software that were arguably not as pronounced in the not-so-distant past, such as provenance. SCA tools enable organizations to establish an understanding of the software components within an application, their licensing details, and reported security vulnerabilities. SCA already fulfills a pivotal role in application security and software supply chain security, and its importance is likely to become even more pronounced with the advent of AI-powered software. However, coping with AI-related security challenges requires security solutions such as SCA to extend their purview beyond software components to account for connected/dependent services as well, due to their possible exploitation by malicious actors using AI technology. The potentially dynamic nature of inter-service dependencies has significant implications on the software bill of material (SBOM), which is critical to software supply chain security.

Can regulation alone assuage AI-related security concerns?

Regulations are often regarded as a roadblock rather than a motivating stimulus. However, given the unprecedented and still untapped power of AI, regulation is imperative to ensure continued AI evolution without disastrous consequences to critical data, systems, services and infrastructure. Without adequate regulation, it may not be long before it becomes extremely difficult, if not impossible, for organizations to maintain agile software delivery while successfully coping with AI-related issues.

As I’ve noted in my previous blog post, it is important to implement security-for AI, including AI-related restrictions. Granted, it is neither trivial to establish them, nor determine how they should apply to AI technology, its provider, or the organization that leverages such technologies. Nevertheless, to reap the benefits of AI without sacrificing application security, it’s crucial to consider viable approaches to obviate, or at least restrict the damage AI technology can cause. Coping with the potential risks stemming from unbridled use of AI technologies may warrant a radically new approach to application security tooling, processes and practices.

That is not to say that implementing AI-related regulations and guidelines would be easy, or merely an extension of what is currently in place. In fact, it is questionable whether regulation alone is sufficient to help organizations address some of the challenges posed by AI. The industry already struggles to establish whether a given piece of software features code that was generated using AI. It may also be challenging to predict the expected outcome of AI usage, or comfortably establish whether such a usage results in the expected behavior. There are numerous aspects that regulation may be less likely to address, but it just might be able to set an important security bar for critical systems, which is subject to regulatory compliance and may impose restrictions that can obviate or mitigate security pitfalls more likely to emerge otherwise.

As opposed to regulations that often take a while to gain traction or acceptance, AI has been exhibiting an incredibly rapid evolution during the past couple of years. Keeping pace with technology is de rigueur for technology organizations in the contemporary business world, but there is arguably no precedent to the sky-rocketing interest evidenced lately for AI technology. Nevertheless, AI’s risks demand that responsible steps be taken to avoid serious security consequences. Governments are already devising AI-related cybersecurity strategies as they acknowledge these risks and threats that can emerge from AI’s rapid technological evolution. In short, regulation should not be treated as a “nice to have” option. It’s a “must-have.”

Getting ready for a new AI-powered application security world

AI may represent the most powerful opportunity ever to enable developers to produce software code at unprecedented speed and efficiency, but such power has security repercussions that must be taken into account. The remarkable rise of AI technology has unfortunately not been matched by application security readiness, and it might take time before AI-powered solutions will be less susceptible to AI-related risks. With AI increasingly gaining a stronger foothold in software development, it has become crucial to expedite regulation efforts.

It is also important to note the pronounced effect AI has on open source software, which raises serious questions concerning the implications of future open-source software. While arguably shattering various assumptions associated with open source software development such as authorship and licensing classification, etc.), AI could ultimately encourage increased collaboration and software sharing, fertile ground for open source software.

We are witnessing the inception of AI-powered application development. As the industry anticipates the extent to which AI will change the landscape in which application security operates, we must not be complacent about our ability to predict its outcomes. It would be irresponsible to underestimate the pace and magnitude of AI’s impact, and it’s a challenge that we must be ready to tackle.

]]>
The New Era of AI-Powered Application Security. Part Two: AI Security Vulnerability and Risk https://www.mend.io/blog/the-new-era-of-ai-powered-application-security-part-two-ai-security-vulnerability-and-risk/ Tue, 18 Jul 2023 03:15:00 +0000 https://mend.io/the-new-era-of-ai-powered-application-security-part-two-ai-security-vulnerability-and-risk/ This is the second part of a three-part blog series on AI-powered application security.

Part One presented concerns associated with AI technology that challenge traditional application security tools and processes. This part covers aspects concerning AI security vulnerabilities and AI risk. Part Three covers suggested approaches to cope with AI challenges.

AI-related security risk manifests itself in more than one way. It can, for example, result from the usage of an AI-powered security solution that is based on an AI model that is either lacking in some way, or was deliberately compromised by a malicious actor. It can also result from usage of AI technology by a malicious actor to facilitate creation and exploitation of vulnerabilities.

AI-powered solutions are potentially vulnerable at the AI model level. Partial, biased, or accidentally compromised model data might adversely affect the validity of AI-powered application security recommendations. This might produce unwanted outcomes such as inaccurate security scanning results and invalid security policy settings. Model data might be deliberately compromised by malicious actors, thereby raising risk. Notably, many types of security vulnerability often evidenced with non-AI software environments (e.g., injection, data leakage, unauthorized access) are applicable to AI models too. There are of course vulnerabilities that are unique to AI or AI models.

Another cardinal AI-related security risk stems from the potential use of AI-powered software by malicious actors, which enables them to discover and exploit application software vulnerabilities at a scale and speed that dramatically raises the potential security risk impact, and can significantly expand the organization’s attack surface. One example concerns exploitation of vulnerabilities associated with business-related processes, which may result from lacking enforcement of proper security rules for inter-service requests at the transaction level. Common software security vulnerabilities are typically confirmed by either analyzing software code, or assessing the software’s runtime behavior under real or crafted workloads. However, situations may arise where a risk emerges only under conditions depending on the state of multiple independent components, which may complicate its detection by traditional security solutions. AI-powered solutions can help organizations detect such a vulnerability, but AI technology can also be employed by malicious actors to exploit it.

Without means to properly safeguard AI models against the exploitation of vulnerabilities , using AI-powered solutions to detect and remediate vulnerabilities might lead to severe security hazards, such as remediation suggestions that feature maliciously embedded code, which may be challenging to detect and mitigate.

There is an additional AI-related security consideration that I mentioned in my previous blog post — trust, or in this case, the false sense of trust that AI-powered security solutions can create. It is remarkably easy for users to put together textual requests (prompts) for AI security-related advice or actions. This can be deceiving, though. Many developers, especially those lacking application security expertise, may not necessarily possess the knowledge to articulate their intended security requests in an accurate and complete manner. While being sufficiently capable to produce a plausible response in many use cases, AI may not be able to invariably compensate for some ill-defined security prompts, resulting in recommendations that may not fully address the user’s need.

How should we cope with AI-related concerns and its perceived risk?

Reaping the benefits of AI requires new levels of vigilance to effectively address the security risks associated with the technology. The evolution of practical AI-powered application security may have just started, but we must already try to understand AI’s potential challenges and create appropriate security requirements and measures. In my next blog post, I’ll elaborate on them.

]]>
The New Era of AI-Powered Application Security. Part One: AI-Powered Application Security: Evolution or Revolution? https://www.mend.io/blog/the-new-era-of-ai-powered-application-security-part-one-ai-powered-application-security-evolution-or-revolution/ Tue, 11 Jul 2023 16:06:10 +0000 https://mend.io/the-new-era-of-ai-powered-application-security-part-one-ai-powered-application-security-evolution-or-revolution/ This is the first part of a three-part blog series on AI-powered application security.

This part presents concerns associated with AI technology that challenge traditional application security tools and processes. Part Two covers considerations about AI security vulnerabilities and AI risk. Part Three covers suggested approaches to cope with AI challenges.

Imagine the following scenario. A developer is alerted by an AI-powered application security testing solution about a severe security vulnerability in the most recent code version. Without concern, the developer opens a special application view that highlights the vulnerable code section alongside a display of an AI-based code fix recommendation, with a clear explanation of the corresponding code changes. ‘Next time’, the developer ponders after committing the recommended fix, ‘I’ll do away with the option to review the suggested AI fix and opt for this automatic AI fix option offered by the solution.

Now imagine another scenario. A developer team is notified that a runtime security scan detected a high severity software vulnerability in a critical application that is suspected to have been exploited by a malicious actor. On further investigation, the vulnerable code is found to be featured in a code fix recommended by the organization’s AI-powered application security testing tool.

Two scenarios. Two different outcomes. One underlying technology whose enormous potential for software development is only matched by its potential for disastrous security consequences.

Development of secure software has never been an easy task. As organizations struggle to accommodate highly demanding software release schedules, effective application security often presents daunting challenges to R&D and DevOps teams, not least due to the ever-increasing number of software security vulnerabilities needing inspection. AI has been garnering attention from technology enthusiasts, pundits, and engineers for decades. Nonetheless, the mesmerizing capabilities demonstrated by generative AI technology in late 2022 has piqued an unprecedented public interest in practical AI-powered application use cases.  Perhaps for the first time, these capabilities have compelled numerous organizations to seriously explore how AI technology may help them overcome pressing application security challenges. Ironically, the same AI capabilities have also raised growing concerns about security risks that AI might pose to application security.

The advent of advanced and accessible AI technology therefore begs an interesting question. Are traditional application security tools, processes, and practices sufficient to cope with challenges posed by AI, or does AI call for a radically different approach to application security?

Lots of opportunities

AI has the potential to elevate the value of application security thanks to an unprecedented combination of power and accessibility, which can augment, facilitate, and automate related security processes and significantly reduce the effort they often entail. It is helpful to distinguish between two cardinal AI-related application security angles: AI-for-Security (use of AI technologies to improve application security), and Security-for-AI (use of application security technologies and processes to address specific security risks attributed to AI). For example, taking an AI-for-Security perspective, AI can be used to:

  • Automate the establishment of rules for application security policies, security-related approval workflows, and alert notifications
  • Offer suggestions for software design that may dramatically accelerate the development of secure software
  • Support effective and efficient (i.e., low noise) detection of software security vulnerabilities
  • Streamline prioritization of the detected vulnerabilities
  • Propose helpful advice for remediating such vulnerabilities, if not supporting fully-automated remediation altogether

The list of potential AI benefits is truly staggering.

The capabilities of recent AI advancements are particularly likely to appeal to agile organizations in need for accelerated software delivery. To remain competitive, such organizations strive to maximize the pace of software delivery, potentially producing hundreds or even thousands of releases every day. Application security tools such as security testing solutions are heavily used by organizations to govern the detection, prioritization, and remediation of security vulnerabilities. Unfortunately, traditional solutions may struggle to support fast software delivery objectives due to the impact of wrongly detected vulnerabilities and manual remediation overhead, which can result in inefficient vulnerability handling. AI-powered application security tools have the potential to reduce reported vulnerability noise and accelerate vulnerability remediation dramatically. This creates a strong temptation for development and security teams to explore and embrace AI-powered tools.

Lots of unknowns

A key factor that challenges the acceptance of AI-powered application security is trust in the technology — or lack thereof. Currently, there are many unknowns concerning the application security risks of AI. In contrast with non-AI software, AI ‘learns’ from the huge amounts of data it is exposed to. Consequently, AI may not necessarily conform to the logic and rules that non-AI software is explicitly coded to follow. Furthermore, due to AI’s learning process it may be impossible to anticipate whether explicit rules would yield better AI predictability in certain application security scenarios (e.g., correct remediation suggestion for a vulnerability case detected during security testing).

Crucially, it might be unfeasible to comfortably determine how AI ‘reasons’, which in turn makes it difficult to establish if, when, and why it falters. This contrasts with traditional, non-AI software logic that follows explicit instructions designed to produce a predictable output, making it easier to identify deviations from an expected outcome. Statistical observations aside, it may also be challenging to assess how well AI models ‘behave’, and to accurately establish adverse impact resulting from such behavior. To make matters worse, many people tend to accept what AI produces as ‘good enough’ rather than being more discerning and vigilant about such an output. This is especially of concern since the quality and accuracy of AI responses to user requests are sensitive to the data used for learning. Insufficient, partial or flawed data might not be acknowledged by AI as such, potentially leading to wrong or faulty responses that have a detrimental effect on application security.

Another critical aspect concerns establishment of code provenance, its origin and authorship. Ascertaining provenance can enable users to expedite remediation of vulnerable application code, and facilitate effective incident response by helping an organization to infer the party likely to be responsible for code maintenance and vulnerability fixes.

Sadly, it might not be possible in many cases to glean such information for applications that were produced (or altered) by an AI tool. This complicates both attribution of liability and vulnerability remediation. Moreover, AI will likely give rise to new types of vulnerabilities, further challenging the organization’s ability to detect and handle them without hampering development agility and rapid application delivery. Many companies are already struggling to cope with a deluge of detected application security vulnerabilities. While it is plausible that AI-powered security tools may help organizations reduce the count of software security vulnerabilities, usage of AI by malicious actors presents a countervailing effect, which might significantly exacerbate the organization’s challenges.

While AI may present a compelling value proposition for security vulnerability detection and remediation, it also signifies a big unknown regarding the organization’s attack surface. It is conceivable that some organizations will be so focused on gaining a competitive edge with AI that they will inadvertently overlook its potentially negative impact –– or will not acknowledge the need to seriously factor such an impact, let alone its ramifications. Such an approach would make it harder for the organization to anticipate threats and implement appropriate incident response measures, which can put at risk the organization’s overall security posture and threat responsiveness. It is therefore important to place a strong focus on security-forAI measures, guidelines, and boundaries, and consider proper regulation to protect organizations from AI-related cyberattacks.

A defining moment for software development

The advent of AI represents a defining moment for software development and affects how we develop, use, and interact with software. The phrase “game-changing” may be often used unjustifiably, but in the case of AI technology, it is spot on. AI ushers in a new paradigm for software development, but it additionally raises security challenges that make AI an appealing target for malicious actors. In the next blog post, I will review aspects concerning AI security vulnerabilities and AI risk that call for consideration of a new approach to cope with AI challenges.

Read the next post in this series.

]]>