Invicti – Web Application Security For Enterprise
https://www.invicti.com/

Rapid Reset HTTP/2 vulnerability: When streaming leads to flooding
https://www.invicti.com/blog/web-security/rapid-reset-http2-vulnerability-when-streaming-leads-to-flooding/ | Mon, 16 Oct 2023 14:43:10 +0000

The Rapid Reset vulnerability is yet another weakness in the HTTP/2 protocol that allows for DDoS attacks on a massive scale. This post summarizes how the attack works, why it’s possible, what mitigations are available, and why it likely won’t be the last scare related to HTTP/2.

What you need to know

 

  • The Rapid Reset HTTP/2 vulnerability tracked as CVE-2023-44487 allows distributed denial of service (DDoS) attacks on an unprecedented scale.
  • Starting in late August 2023 and continuing through October, the vulnerability has been exploited multiple times in attacks that ranged from 120 million to nearly 400 million requests per second.
  • The weakness is in the HTTP/2 protocol itself, making it necessary to patch or reconfigure all web servers, load balancers, proxies, and other appliances that support HTTP/2 connections.
  • As of this writing, some attacks are still happening. Google, AWS, Cloudflare, and other major industry players have coordinated a response to minimize the impact of further attacks while patches are rolled out.
  • All organizations running services that accept HTTP/2 traffic are advised to follow their internet service provider’s guidance to patch or otherwise mitigate the vulnerability.

Invicti’s cloud services, including the on-demand versions of Invicti and Acunetix products, are not at risk. Invicti is following all recommended mitigation measures, and no service disruptions are expected.

“Biggest DDoS attack ever” headlines have long stopped catching anyone’s eye – but this time was different. Starting on August 25, 2023, and continuing in the days that followed, a flood of DDoS attacks over HTTP/2 surpassed anything seen in the past. By abusing a feature of the HTTP/2 protocol that was designed to maximize throughput, relatively small botnets were sending hundreds of millions of requests every second. Only the world’s largest internet and cloud providers could possibly stand up to the intense bombardment – and mitigation wouldn’t be easy.

What is HTTP/2 and who uses it?

The HTTP protocol was created as the backbone of the World Wide Web way back in 1989 and was designed to transmit static, hyperlinked documents. The most widely used and supported version today is HTTP/1.1, which includes some concessions to complex, high-performance modern web use cases like streaming but still imposes serious limitations.

HTTP/2 was designed to address these shortcomings and incorporate current needs into the protocol to cut down traffic overhead and increase throughput, especially for data streaming. As of this writing, HTTP/2 is supported by just over 35% of all websites (source: W3Techs), which may not look like much – but that number includes all the world’s highest-traffic services and applications.

What is the Rapid Reset HTTP/2 vulnerability?

In a nutshell, attacks that exploit the Rapid Reset HTTP/2 vulnerability flood a server with potentially millions of HTTP/2 requests, immediately followed by request cancellations (resets). Unlike with HTTP/1.1, the client doesn’t have to wait for a response before sending the next request (and next reset). Even though no actual data is sent or received and connections will eventually be abandoned, the server still has to prepare to receive each request and potentially expect further requests from the same client. With huge request volumes arriving from thousands of hosts in a short time, this can rapidly exhaust server resources, resulting in a denial of service.

The vulnerability is not a typical security flaw in some specific application but the result of a lack of security foresight in the HTTP/2 specification itself. One of the major requirements for HTTP/2 was to make streaming easier and more efficient. With HTTP/1.1, only one HTTP request at a time can be processed over a single TCP connection, meaning that the client needs to wait for a response before sending the next request. This is fine when fetching a web page but very inefficient for sending continuous data streams.

Even though HTTP/1.1 added request pipelining to address this limitation, the feature proved troublesome and unreliable in practice, and dealing with the problem properly was one of the main requirements for HTTP/2. The newer protocol allows clients to open multiple concurrent streams within the same TCP connection, typically up to 100 streams at a time. This multiplexing feature is great for efficient streaming but, if abused, could also allow attackers to send 100 times more malicious requests from a single host – and the protocol specification doesn’t impose any security-minded limitations.

The HTTP/2 protocol also allows the client to cancel (reset) a connection and carry on without waiting for any server response. Again, the specification doesn’t limit this behavior, and so we get to the vulnerability. By combining multiple streams per connection with the freedom to unilaterally reset any number of requests, attackers can generate massive amounts of malicious traffic using botnets that are much smaller than usual, making them easier to build and deploy. In effect, the attacks abuse the request reset feature at an extreme intensity and then use multiplexing as a force multiplier. As it turns out, when you give great power to all users, you need to remember some of them could be malicious.
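
To make the mechanics more concrete, here is a minimal, purely illustrative sketch of the HEADERS-then-RST_STREAM pattern that Rapid Reset abuses. It assumes the third-party Python h2 library (an assumption on our part, not something referenced in the advisories), only serializes the frames for a single cancelled request, and never touches the network – in a real attack, this pattern is repeated across many streams and connections.

```python
# Illustrative only: build the HTTP/2 frames for one request that is immediately
# cancelled, showing the HEADERS + RST_STREAM pattern behind Rapid Reset.
# Assumes the third-party "h2" package (pip install h2); no network I/O is performed.
import h2.connection

conn = h2.connection.H2Connection()
conn.initiate_connection()  # client connection preface plus initial SETTINGS frame

stream_id = 1  # client-initiated streams use odd IDs
conn.send_headers(
    stream_id,
    [
        (":method", "GET"),
        (":path", "/"),
        (":scheme", "https"),
        (":authority", "example.com"),  # placeholder host
    ],
    end_stream=True,
)

# Cancel the request without waiting for any response from the server.
# 0x8 is the CANCEL error code defined in RFC 9113.
conn.reset_stream(stream_id, error_code=0x8)

frames = conn.data_to_send()  # the raw bytes a client would write to the socket
print(f"Serialized {len(frames)} bytes of HTTP/2 frames for one request + reset")
```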

Can you test if a system is vulnerable to Rapid Reset HTTP/2?

Because the vulnerability is caused by the lack of security guardrails in the protocol and only manifests itself by resource exhaustion, safely testing for it is hard, if not impossible. Whether a specific server is vulnerable depends on a complex combination of rate limit settings on the server and whatever appliances and services stand between it and an attacking botnet. The only thing anyone can be sure of at this stage is that without immediate mitigation, any service that supports HTTP/2 connections could be vulnerable.

Mitigations and the future of HTTP/2

If you run an HTTP/2 server, look for product-specific patches and mitigation guidance to configure rate limits that block known malicious traffic patterns by capping the number of concurrent streams. Major providers like Google, AWS, and Cloudflare have also coordinated a response to detect and block attack attempts, as they do for other types of DDoS attacks. Combining such application-layer shielding with patches and configuration updates should be sufficient to keep HTTP/2 servers safe from currently known attacks without a major impact on performance. As a last resort, if you cannot apply suitable patches or use runtime DDoS protection, you may want to consider disabling HTTP/2 altogether – keeping in mind that (to quote Microsoft guidance) this can “significantly influence performance and user experience.”
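
To illustrate what the configuration side of this can look like, here is a hedged example for nginx – the directive names are real, but the values are only illustrative, nginx is just one of many affected servers, and vendor patches plus your provider’s specific guidance should always come first.

```nginx
# Illustrative nginx settings for an HTTP/2 front end (example values only).
http {
    # Cap the number of concurrent streams a client may open on one connection.
    http2_max_concurrent_streams 32;

    # Close connections after a bounded number of requests.
    keepalive_requests 1000;

    # Basic per-client request rate limiting as an additional backstop.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=50r/s;

    server {
        listen 443 ssl http2;
        limit_req zone=perip burst=100 nodelay;
        # ... certificates, locations, and upstreams omitted ...
    }
}
```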

HTTP/2 has long attracted criticism for being something of a rushed effort and a missed opportunity to properly address deep underlying issues with request pipelining and multiplexing. Considering that they exploit this very functionality, the Rapid Reset attacks seem to validate these concerns. Many of the shortcomings are addressed by the HTTP/3 protocol, which was published as a proposed standard in 2022 and, though not yet widely used, is already supported by most major web servers and browsers. Seeing as attacks against HTTP/2 are likely to continue and evolve, moving to HTTP/3 definitely seems the way of the future.

Top 5 application security misconfigurations
https://www.invicti.com/blog/web-security/top-5-application-security-misconfigurations/ | Thu, 12 Oct 2023 13:04:20 +0000

Misconfigurations are a major avenue for web application attacks. No matter how secure your code is, a misconfigured runtime environment can still render your app vulnerable. This Cybersecurity Awareness Month, here are the top five categories of application security misconfigurations.

As part of Cybersecurity Awareness Month, CISA has published a list of the top 10 network security misconfigurations found during red and blue team assessments and in actual incident responses. To make sure application security doesn’t get left out, we’ve decided to follow up with our own list of common application security misconfigurations – but since top 10 lists have received some bad press for being little more than clickbait, we’ll stick to just five of the most important categories.

In broad terms, an application security misconfiguration is any security flaw directly caused by the way an application or its environment is set up, not by any vulnerability in the application itself. For example, if an application is not vulnerable in a development environment but becomes vulnerable once deployed to production, you most likely have a security misconfiguration on your hands. With that definition in place and keeping in mind there is plenty of overlap between the categories, let’s dive into the top 5 application security misconfigurations.

Misconfiguration #1: Vulnerable tech stack components

Any web application is merely the outermost layer of a technology stack that goes right down to the operating system. Depending on its vintage and architecture, a web tech stack may include a web server, application server, database server, web framework, dynamic dependencies, and more. Unless all the runtime components are properly maintained, a missing patch or security update may provide attackers with an opening to exploit a known vulnerable product version and potentially compromise your system without touching the application itself (for instance, via remote code execution by the application server).

Read more about the dangers of outdated web technologies

Misconfiguration #2: Missing or insufficient access controls

Many data breaches happen not because an attacker broke in but because they found something out in the open – exposed cloud storage buckets, sensitive files, and forgotten APIs are all fair game. While ensuring proper access control at multiple levels is a major requirement for secure application development, it must also be a part of deployment and operations, especially as application components become more and more distributed. For example, a misconfigured web server may allow attackers to download the application source code, revealing intellectual property and making it easier to find vulnerabilities by directly analyzing the code.
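
As one small, hedged example of what this looks like at the web server level, the nginx snippet below refuses direct requests for version control data, environment files, and leftover backups that commonly end up deployed next to application code (the patterns are illustrative, not an exhaustive blocklist):

```nginx
# Illustrative nginx rules against casual source code and secret disclosure.
server {
    # Refuse requests for dotfiles and dot-directories such as .git/ or .env.
    location ~ /\. {
        deny all;
    }

    # Refuse common backup and dump suffixes left behind by deployments.
    location ~* \.(bak|old|swp|sql)$ {
        deny all;
    }
}
```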

Read more about the dangers of unauthenticated APIs

Misconfiguration #3: Default or development configurations

Development environments have very different requirements compared to production. Getting as much error information as possible is crucial, and security measures will often be disabled for debugging (or they simply won’t exist yet). With this in mind, many components default to less secure but more verbose settings intended to ease development, and locking them down should be a routine part of the deployment process. Unless properly hardened to minimize the attack surface and data exposure, components may leak excessive information to attackers or expose resources or user accounts that shouldn’t be accessible at all.
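
To give one concrete, hedged illustration of locking down development defaults, here is what a few production overrides might look like for a Django application (the setting names are real Django settings, but the framework choice and the values are just an example – the same principle applies to any stack):

```python
# settings_production.py - illustrative production overrides for a Django app.
# Setting names are real Django settings; hosts and addresses are placeholders.

DEBUG = False                           # never return stack traces or config details to visitors
ALLOWED_HOSTS = ["app.example.com"]     # reject requests with unexpected Host headers

SESSION_COOKIE_SECURE = True            # send session cookies over HTTPS only
CSRF_COOKIE_SECURE = True
SECURE_SSL_REDIRECT = True              # redirect plain HTTP requests to HTTPS

ADMINS = [("Ops team", "ops@example.com")]  # error details go to operators, not to users
```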

Read more about web application hardening

Misconfiguration #4: Missing or incorrect HTTP security headers

We’ve written a lot about HTTP security headers in the past, and with good reason, as they are one of the easiest ways to stop entire classes of web attacks without touching a single line of application code. Among several common headers, the two definite must-haves are Content Security Policy (CSP) headers to minimize exposure to cross-site scripting and the HTTP Strict Transport Security (HSTS) header to enforce encrypted communications and thus prevent man-in-the-middle attacks. While setting them is a fundamental best practice, misconfiguring your security headers can be a risk in itself – from a false sense of security when your CSP rules don’t do what you expected, to making your entire domain inaccessible due to a bad HSTS header.
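
As a minimal, hedged example (assuming an nginx front end), the two must-have headers can be added with a couple of directives – the policy values below are deliberately conservative placeholders that need to be tailored and tested against your application before they go anywhere near production:

```nginx
# Illustrative nginx configuration for the two must-have security headers.
server {
    # Start HSTS with a short max-age while validating, then increase it gradually.
    add_header Strict-Transport-Security "max-age=86400; includeSubDomains" always;

    # A restrictive baseline CSP; loosen it deliberately, directive by directive.
    add_header Content-Security-Policy "default-src 'self'; object-src 'none'; frame-ancestors 'self'" always;
}
```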

Read our technical white paper about HTTP security headers

Misconfiguration #5: Excessive process privileges

Privilege escalation is usually the first goal of any attacker who manages to gain an initial foothold on your server. In order to minimize the options available to malicious actors, application hardening should include making sure that all the processes in your stack are running with the minimum necessary privileges and (if possible and appropriate) are separated to reduce the risk of lateral movement. For example, for development on a local machine, it might be quick and easy to run all your servers as root with full file system access – but if done in a production environment, it would allow total system compromise from a single successful command injection.
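
For example, on a Linux host managed by systemd, a handful of unit file directives go a long way toward least privilege for an application server process. This is a hedged sketch – the service, user, and path names are placeholders, and the exact set of sandboxing options depends on what your application actually needs:

```ini
# /etc/systemd/system/example-app.service
# Illustrative least-privilege unit file; names and paths are placeholders.
[Unit]
Description=Example application server

[Service]
# Run as a dedicated unprivileged account instead of root.
User=appuser
Group=appgroup
ExecStart=/opt/example-app/bin/server
# Block privilege escalation via setuid/setgid binaries.
NoNewPrivileges=true
# Mount most of the file system read-only for this service and isolate /tmp.
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
# The only location the application may write to.
ReadWritePaths=/var/lib/example-app

[Install]
WantedBy=multi-user.target
```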

Read more about privilege escalation

Raising awareness of application security fundamentals

Preventing application security misconfigurations might not get the same attention as chasing down the latest media-friendly vulnerabilities, yet it is a fundamental part of secure development and operations. If you want to run secure software, you must start with an application that leaves development without known vulnerabilities and then put it in a hardened and tested runtime environment. Having only one or the other won’t work – you need to have both and test both.

Read more about the scope of different approaches to application security testing

Invicti Security Achieves ISO 27001:2022 Accreditation, Continuing a Dedicated Commitment to Information Security
https://www.invicti.com/blog/news/invicti-security-achieves-iso-27001-2022-accreditation-for-all-products/ | Wed, 04 Oct 2023 13:00:00 +0000

Invicti Security has attained ISO 27001:2022 certification for all of its industry-leading dynamic application security testing (DAST) products. The achievement demonstrates Invicti’s commitment to ensuring information security and data protection across all its systems and products and for all its customers.

AUSTIN, Texas (Oct. 4, 2023) – Invicti Security, the leading dynamic application security testing (DAST) company, is proud to announce its successful attainment of the ISO 27001:2022 certification for all its products. This achievement demonstrates Invicti’s dedication to information security and data protection, underscoring the organization’s commitment to protecting sensitive information, maintaining data integrity, and providing clients and stakeholders with the highest level of trust.

The International Organization for Standardization (ISO) is recognized worldwide for setting standards to ensure the quality, safety, and efficiency of products, services, and systems across various industries. ISO 27001:2022 specifically focuses on Information Security Management Systems (ISMS), offering a comprehensive framework for organizations to establish, implement, maintain, and continually improve their information security practices.

To earn this prestigious accreditation, Invicti underwent a rigorous evaluation process that included comprehensive audits and assessments of its information security management systems, policies, procedures, and controls. The successful certification demonstrates the company’s ability to:

  1. Identify and assess information security risks
  2. Implement robust information security controls
  3. Continually monitor and improve the effectiveness of its ISMS
  4. Safeguard sensitive data and protect against security breaches

Matthew Sciberras, CISO and VP of Information Security and IT at Invicti Security, expressed pride in this achievement, stating: “Our team has worked tirelessly to achieve ISO 27001:2022 certification, and this accomplishment reflects our unwavering commitment to safeguarding the sensitive information entrusted to us. This certification reinforces our clients’ trust in our ability to protect their data and reaffirms our position as a leader in the application security sector.”

The ISO 27001:2022 accreditation aligns with Invicti’s overarching purpose to propel the world forward by securing every web application and API while upholding the highest ethical standards and ensuring the security and confidentiality of data.

About Invicti Security

Invicti Security – which acquired and combined DAST leaders Acunetix and Netsparker – is on a mission: application security with zero noise. An AppSec leader for more than 15 years, Invicti provides best-in-DAST solutions that enable DevSecOps teams to continuously scan web applications, shifting security both left and right to identify, prioritize and secure a company’s most important assets. Our commitment to accuracy, coverage, automation, and scalability helps mitigate risks and propel the world forward by securing every web application. Invicti is headquartered in Austin, Texas, and has employees in over 11 countries, serving more than 4,000 organizations around the world. For more information, visit our website or follow us on LinkedIn.

###

Media Contact:

Kate Bachman
Invicti Security
kate.bachman@invicti.com

Hacking the hackers: Borrowing good habits from bad actors
https://www.invicti.com/blog/web-security/borrowing-good-habits-from-bad-actors-announcing-ebook/ | Mon, 02 Oct 2023 13:29:42 +0000

Cybercriminals are smart, quick, and relentless. If we want to outsmart them, it’s imperative that we pay attention to their behaviors and use these hacker skills more efficiently than the bad guys do every single day.

In a digitized world where information is both a valuable asset and a potential target, malicious hackers are a constant threat – and often loom larger than life. It’s easy to think of cybercriminals as shadowy supervillains when, in reality, they’re merely highly motivated and unscrupulous people using the specialized tools at their disposal to work smarter, not harder. By combining tools and skills with the habits of a persistent attacker mindset, they can efficiently breach security systems, steal sensitive data, and disrupt critical infrastructures. 

Scaled up to global levels, that efficiency becomes a huge and costly problem. It’s estimated that by 2025, cybercrime will cost the world economy some $10.5 trillion a year – the most significant transfer of wealth in human history. Unless we can all find a way to build security that proactively keeps attackers at bay, threat actors will only escalate their efforts to wreak havoc for enterprises, government organizations, and even entire nations.

But what if you could turn the tables on cyber adversaries by embracing some of their habits and building them into your own DevSecOps strategies? If we can understand how bad actors apply their skills and mindset to outsmart us, we can harness the most effective habits to outhack the hackers and protect our digital assets more effectively. Read our free eBook to learn how:

Good habits of bad actors that give them an edge

Malicious hackers operate in an environment where time and information are precious. Using as much intel as they can gather, they can set up attacks to exploit vulnerabilities swiftly and stealthily within a narrow window of opportunity. They often succeed because they’re relentless, motivated, and resourceful. They will use anything they need to get the job done, from dedicated tools and pre-packaged exploits on the dark web to their own skills and proven operating procedures. 

Here are a few hacker habits that can help the bad guys stay one step ahead – and that you can turn to your advantage:

  • They map out, monitor, and understand the entire target environment, including who has access to what systems and data within an organization, so they can better pinpoint their targets. Attackers also gather every scrap of public and non-public information about the targeted systems, people, and security tools. Armed with this intelligence, they can exploit security flaws to penetrate your systems and then escalate access to go deeper – and cause even more damage.
  • They share knowledge and tools to work smarter, not harder. Knowledge-sharing allows attackers to stay on the technical cutting edge and also serves as a way to train junior cybercriminals on historical knowledge about vulnerabilities, attack techniques, and approaches that have proven successful. Underground communities and marketplaces make it easier for malicious hackers to quickly develop and adapt tools and skills, helping them become experts in specific fields.
  • They verify everything to ensure they have the best information. Outsmarting their victims is a top priority for bad actors, so they strive to question, verify, and improve all the information they have. That way, they know they’re always operating with the best possible intel and the most suitable tools to break or sidestep your existing defenses – a situation you could be oblivious to if you don’t have complete visibility of your attack exposure.

To counter these battle-tested attacker habits, we need to cultivate our own AppSec hacks. Proactively hacking the hackers by maximizing coverage, efficiency, and accuracy in a continuous process is vital to prevent the bad guys from finding weak spots before you do. It’s the only way to outpace the attackers and get your guard up before they can land the next punch. 

By anticipating their tactics, understanding their motives, and proactively implementing measures to thwart their advances, we can give ourselves a better chance of safeguarding sensitive data and the systems that process it – and make sure we’re the ones staying one step ahead in the ever-evolving cybersecurity landscape.

Read our new eBook, Good Habits of Bad Actors, for more hacker habits and AppSec practices that you can start using to your advantage right now.

Invicti’s VP of Engineering Kalpana Tummala Honored with SC Media’s Women in IT Award
https://www.invicti.com/blog/news/invicti-vp-of-engineering-kalpana-tummala-honored-with-sc-media-women-in-it-award/ | Wed, 27 Sep 2023 13:00:00 +0000

Invicti VP of Engineering and Program Management, Kalpana Tummala, has been recognized in the Women to Watch category at SC Media’s annual Women in IT awards.

AUSTIN, Texas (Sep. 27, 2023) – SC Media, in partnership with its flagship company CyberRisk Alliance (CRA), unveiled the winners of its annual Women in IT Security program – with Invicti’s VP of Engineering and Program Management, Kalpana Tummala, recognized in the Women to Watch category.

Celebrating its tenth year, SC Media’s Women in IT Security program highlights the need for workforce diversity and underscores the advantages that women can bring to cybersecurity. Candidates are nominated by their peers and then selected by SC Media’s editorial staff, who place them into one of four categories. In the Women to Watch category, nominees are recognized for positively impacting the community as “drivers of the industry’s next wave of growth and innovation,” according to SC Media.

“I am honored to be one of the many experienced nominees recognized in the Women to Watch category,” Tummala said. “The gender gap in IT, specifically cybersecurity, has historically sent the message that women are not as capable as men when it comes to big-picture issues and technical problem-solving. In reality, that simply isn’t the case, and we’re just as capable of shaping the future of IT.”

Women represent just 24% of the global cyber workforce, research from Forrester shows, which presents an urgent need for better representation but also ample opportunity for women to succeed in a typically male-dominated industry. With more than 22 years of experience in engineering and leadership, Tummala has seen first-hand the struggles of this underrepresentation for women in IT – and the many ways that they can bring unique perspectives to the table when given an opportunity to thrive. 

“There is a mountain of untapped potential for women in cybersecurity, specifically engineers. Women are naturally great problem-solvers and communicators who can think quickly under pressure, which is vital for fast-paced industries,” Tummala commented. “In my experience, women are more risk-averse and capable of architecting strategies that protect their teams and their organizations. Those are invaluable skills that we need more of in cybersecurity.”

To see the full list of awardees, visit: https://www.scmagazine.com/news/congratulations-to-our-2023-sc-media-women-in-it-security-honorees

About Invicti Security

Invicti Security – which acquired and combined DAST leaders Acunetix and Netsparker – is on a mission: application security with zero noise. An AppSec leader for more than 15 years, Invicti provides best-in-DAST solutions that enable DevSecOps teams to continuously scan web applications, shifting security both left and right to identify, prioritize, and secure a company’s most important assets. Our commitment to accuracy, coverage, automation, and scalability helps mitigate risks and propel the world forward by securing every web application. Invicti is headquartered in Austin, Texas, and has employees in over 11 countries serving more than 4,000 organizations around the world. For more information, visit our website or follow us on LinkedIn.

About SC Media

SC Media is the essential resource for cybersecurity professionals – the flagship information brand of CyberRisk Alliance and the gateway to content from Security Weekly, CRA Business Intelligence, Infosec World and SC Events. These resources offer an unparalleled range of foresight, learning and collaboration – news-analysis and enterprise reporting; practitioner-led podcasts and videos; research, data and product reviews; events, conferences and training; and much more. Through these resources and our authoritative network of faculty and contributors, we convene and engage the cyber community, to share insight with, by and for security practitioners and leaders.

About CyberRisk Alliance

CyberRisk Alliance (CRA) is a business intelligence company serving the high growth, rapidly evolving cybersecurity community with a diversified portfolio of services that inform, educate, build community and inspire an efficient marketplace. Our trusted information leverages a unique network of journalists, analysts and influencers, policymakers and practitioners. CRA’s brands include SC Media, Security Weekly, InfoSec World, Cybersecurity Collaboration Forum, our research unit CRA Business Intelligence, and the peer-to-peer CISO membership network, Cybersecurity Collaborative. More information is available at http://cyberriskalliance.com/.

###

Media Contact:

Kate Bachman
Invicti Security
kate.bachman@invicti.com

NIST Cybersecurity Framework gets user-friendly: Upcoming changes in CSF v2.0
https://www.invicti.com/blog/web-security/upcoming-changes-in-nist-cybersecurity-framework-v2/ | Fri, 22 Sep 2023 13:00:00 +0000

The NIST CSF is widely used to build security programs in government and business organizations but was not originally intended as a general-purpose cybersecurity framework. We examine the public draft of the upcoming CSF v2.0 to see how NIST is making the framework more universal, user-friendly, and practical.

The NIST cybersecurity framework is the de facto standard for building and structuring cybersecurity strategies and activities – but that’s not how it started out, and not what it’s really called. The document in question is the Framework for Improving Critical Infrastructure Cybersecurity, currently at version 1.1. In August 2023, NIST published a draft version of its proposed successor, now simply called The Cybersecurity Framework (CSF) – and unlike the current version, the draft comes with a variety of practical implementation examples.

A framework driven by executive orders

Back in 2013, the Obama administration issued an executive order calling for a standardized cybersecurity framework to describe and structure activities and methodologies related to securing critical infrastructure. In response, the National Institute of Standards and Technology (NIST) developed its Framework for Improving Critical Infrastructure Cybersecurity. While originally intended for organizations managing critical infrastructure services in the US private sector, it became widely used by public and private organizations of all sizes and is commonly known as just the NIST cybersecurity framework.

Nearly a decade later and hot on the heels of the SolarWinds and Colonial Pipeline attacks, the Biden administration issued its own executive order on cybersecurity in 2021. Now concerned with the security of all federal systems and their software supply chains, the order (among other things) obligated NIST to prepare and issue suitable guidance. Based on this order and related activities, NIST has revisited its existing framework specifically to make it easier to apply regardless of industry or size of organization.

According to NIST, the stated purpose of the revision is to “reflect current usage of the Cybersecurity Framework, and to anticipate future usage as well.” As part of this effort, the official name is being changed and the language simplified and refocused on practical usability. Most importantly, implementation examples have been added to the previously dry and theoretical document to illustrate how the framework items could translate into real actions.

Governance leads the list of changes

Looking at the CSF v2.0 public draft, the most prominent change is that we now have six core cybersecurity functions, with the Govern function joining the existing quintet of Identify, Protect, Detect, Respond, and Recover. This is in line with the shift away from protecting critical infrastructure and towards wider applicability, where each organization needs to start by understanding its unique operating context and defining risk management expectations and strategies. Specifically, the Govern function breaks out into the following categories:

  • Organizational Context
  • Risk Management Strategy
  • Cybersecurity Supply Chain Risk Management
  • Roles, Responsibilities, and Authorities
  • Policies, Processes, and Procedures
  • Oversight

Note that while the Govern function itself is new in v2.0, it mostly incorporates existing outcomes (subcategories) that have been moved out of other functions (mainly Identify) and into a new home that highlights the importance of top-down planning and oversight.

Examples at last

The existing NIST CSF is famously dry and theoretical, being originally intended as an aid for creating and managing highly formalized strategies and processes related to securing critical infrastructure. Its popularity as a general-purpose framework saw organizations picking, mixing, and interpreting the abstract outcomes to arrive at actual controls and actions to implement. Based on community feedback and in line with its expanded usage, CSF v2.0 provides implementation examples for each outcome.

The new examples make it much easier not only to implement outcomes but also just to read the document, helping you understand each outcome and see how it could apply in your specific situation. To illustrate, here’s one of the subcategories in the CSF draft under the new Govern function, category Organizational Context (GV.OC):

GV.OC-05: Outcomes, capabilities, and services that the organization depends on are determined and communicated

When read on its own, this is a very generic statement that could be interpreted (and misinterpreted) in many ways. Helpfully, there are now two examples of specific actions that fall under this subcategory:

Ex1: Create an inventory of the organization’s dependencies on external resources (e.g., facilities, cloud-based hosting providers) and their relationships to organizational assets and business functions

Ex2: Identify and document external dependencies that are potential points of failure for the organization’s critical capabilities and services

While they only scratch the surface, the examples do make it much easier to start thinking along the right lines to map out your external dependencies and understand their security implications for your specific organization.

Getting familiar with the NIST CSF v2.0 draft

The current document is still a public draft and open for community feedback, so there may be more changes before the final version lands in early 2024. Seeing as the implementation examples are both the biggest and the most subjective addition, it’s likely they will see modifications or additions compared to the draft. We will cover the official v2.0 on the blog once it is released, so watch this space for a deeper dive into applying the cybersecurity framework to web application security.

Compared to the current framework, the upcoming NIST CSF v2.0 promises to be much more practical and easier to apply in any organization. Considering its great value for building and maintaining a cybersecurity program, this can only be good news for federal agencies and commercial organizations alike.

For anyone who wants to get familiar with the new framework without digging through the full document, NIST has prepared a helpful reference tool as an interactive way to browse the updated functions, categories, subcategories, and examples.

Surviving the API apocalypse: How to defeat zombie APIs
https://www.invicti.com/blog/web-security/zombie-shadow-api-security/ | Thu, 14 Sep 2023 13:00:00 +0000

Lurking in the shadowy corners of your environment, zombie APIs can bring unnecessary risk by providing attackers with unseen and untested points of entry. Baking anti-zombie practices into your AppSec strategy is no longer a nice-to-have but a requirement if you want to keep a lid on the risks and headaches that forgotten APIs can bring.

In the world of software development, application programming interfaces (APIs) are everywhere. Whether you’re building microservice-based applications or maintaining monolithic architectures, chances are you have services running and you’re exposing and calling their associated APIs in the background. They’re a critical part of software development, and nearly two-thirds of developers spend more than 10 hours every week working with APIs – with 32% spending over 20 hours a week.

Because APIs are so plentiful in web application development and functionality, they’re a prime target for attackers. Palo Alto’s latest report on API security, Securing the API Attack Surface, found that just 25% of respondents accurately inventory API usage, and 28% lack visibility and control around security during the development of APIs.

Throwing yet another wrench into the mix, many organizations are plagued by so-called zombie APIs – endpoints or entire APIs that have been forgotten or overlooked, usually after they became outdated. Sitting there unmaintained and exposed to the world without updates, patches, or security testing, such lurking APIs carry significant security risks. And similar to the zombies we see on TV, these forgotten friends-turned-foes can be a serious pain for your DevSecOps teams.

How zombie and shadow APIs bring a plague of risk to your security strategy

Zombie APIs are often discussed alongside shadow APIs. While both can lead to similar security headaches, shadow APIs are actively used and often even developed – except they live outside the organization’s best practices and governance. Shadow APIs are often discovered alongside zombie APIs when organizations work to cover more of their attack surface and discover otherwise unknown assets. Together with rogue APIs, they form the unholy trinity of API security:

  • Shadow API: any undocumented and unmonitored API used in your applications (including untracked use of a third-party API)
  • Zombie API: any unmaintained and untracked API that is still accessible in production (often an old version)
  • Rogue API: any API that provides unauthorized access to data or operations (created with malicious intent or caused by security flaws)

All these types of surprise APIs present a common problem that organizations need to keep an eye on. As more businesses incorporate more APIs into their environments, they can inadvertently contribute to API sprawl that risks leaving behind zombie APIs – and also shadow APIs, if they don’t enforce watertight API inventory procedures. 

The move toward API-first application architectures and the rapid pace of API creation means the sprawl will only worsen for some organizations. Neglecting to maintain and secure APIs can lead to some serious consequences if threat actors get your endpoints in their sights. For example, cybercriminals might use your APIs to:

  • Exploit more serious vulnerabilities and gain deeper access to an application.
  • Steal sensitive data and use that information to execute other attacks, like phishing.
  • Execute full-scale attacks on related services and applications to disrupt service.
  • Gain entry to unauthorized administrative areas of a website or application.

An attack resulting from subpar API security can lead to critical data exposure, financial loss, and lasting damage to customer trust. Fortunately, there are best practices and tools that organizations can implement within their own security strategies to ensure they’re catching those zombie APIs before they snowball into a security apocalypse.

Defeating zombie APIs before the plague can spread 

When it comes to securing your APIs and API endpoints, it’s important that you first change your mindset around APIs and understand that they’re a critical part of your security posture. If you don’t know how many APIs you have, what endpoints they provide, and what the status is for each one, you can’t possibly understand your full threat exposure and all of the risks you’re facing. You can avoid creeping APIs by putting your best foot forward on asset discovery and management while also running regular and consistent scans for deeper intelligence on your environment. 
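
As a toy, hedged illustration of the kind of inventory check that dedicated discovery tooling automates, the sketch below flags old API versions that still respond in production even though they are absent from the current OpenAPI definition. The host, file name, and version prefixes are hypothetical, and the third-party requests and PyYAML packages are assumptions on our part:

```python
# Illustrative zombie-endpoint check: flag retired API versions that still answer
# in production. Assumes: pip install requests pyyaml; names below are placeholders.
import requests
import yaml

BASE_URL = "https://api.example.com"      # placeholder production host
CURRENT_SPEC = "openapi-v2.yaml"          # placeholder current API definition
OLD_VERSION_PREFIXES = ["/v1", "/v1.1"]   # versions believed to be retired

with open(CURRENT_SPEC) as f:
    spec = yaml.safe_load(f)
documented_paths = set(spec.get("paths", {}))  # paths tracked in the current spec

for prefix in OLD_VERSION_PREFIXES:
    for path in documented_paths:
        candidate = prefix + path         # e.g. /v1/users for a documented /users
        try:
            resp = requests.get(BASE_URL + candidate, timeout=5)
        except requests.RequestException:
            continue                      # unreachable is fine; we only flag live answers
        if resp.status_code != 404:
            print(f"Possible zombie endpoint: {candidate} responded with {resp.status_code}")
```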

Follow security best practices around discovery and complete coverage. It’s critical that as APIs try to sprawl across your digital landscape, you’re staying on top of where everything lives and how secure each asset is: 

  • Use web asset discovery to find everything you have out in the wild, keeping a running inventory of all your applications and the APIs they expose.
  • Conduct regular reviews and audits of your security tools, configurations, and workflows to spot areas for improvement. 
  • Document everything related to APIs, from development to maintenance to security testing, and ensure DevSecOps teams have access to the documentation.

Build security into the software development lifecycle with a focus on APIs. When you ensure that security is a routine part of your development workflow, you can catch more issues before they reach production: 

  • Use dynamic application security testing (DAST) to cover your entire attack surface (including APIs) regardless of technology or availability of source code.
  • Build agile security into the coding process so that scanning in development and production becomes a standard procedure. 
  • Select security tools that cover all major API types and definitions with accurate and automatic authentication. 

With a thoughtful and efficient combination of the right tools and best practices, zombie APIs don’t have to sneak up on you in the dark. When API security becomes a routine and automated part of your AppSec program, undead endpoints don’t get a look in anymore – and your development projects don’t have to put the brakes on innovation to let security catch up.

Watch our on-demand webinar API Security Decoded: Insights into Emerging Trends and Effective AppSec Strategies to learn more.

PCI DSS v4.0 makes integrated application security a compliance requirement
https://www.invicti.com/blog/web-security/pci-dss-v4-update/ | Fri, 08 Sep 2023 11:14:48 +0000

Version 4.0 of the PCI DSS will become active on March 31, 2024, with some requirements remaining optional for a further year. Organizations that need to maintain compliance with PCI requirements for protecting sensitive payment card information should take note of the updates well ahead of time, especially where they relate to application security.

Before the Payment Card Industry Data Security Standard (PCI DSS) was created around 2004, consumers and merchants alike were plagued by many fragmented payment systems. It was a constant headache and source of risk – especially when one credit card company’s policies violated another’s, mandated different security controls, or simply weren’t following guidelines as thoroughly as they should have been. When the PCI Security Standards Council (PCI SSC) fully formed and released compliance guidelines for the industry, merchants of all sizes finally had a common baseline for protecting payment account data throughout the payment lifecycle while enabling more secure technology solutions. 

The original PCI DSS v1.0 was released in 2004 and has seen several major overhauls, with v3.2.1 being the current active version. In 2022, nearly 20 years after the first release, v4.0 was published in an effort to keep pace with rapid advances in technology and dynamic changes to the security landscape. The latest update brings fresh cybersecurity guidelines for organizations that need to secure their web apps and protect payment card data.

PCI DSS changes include tighter protocols for securing web apps

Version 4 of the PCI Data Security Standard takes a stricter approach to web application security as a condition of PCI compliance, no matter the size of the organization. Quite a few changes were made between v3.2.1 and v4.0 to restructure the standard and bring it into line with the current security realities of payment processing ecosystems. Alongside more general requirements for anti-phishing and anti-malware measures as well as network security, several new or updated guidelines relate specifically to application security:

  • Implement multi-factor authentication (MFA) throughout the cardholder data environment (CDE)
  • Don’t hard-code passwords used in applications and systems accounts
  • Use automated technical solutions for detecting and preventing web-based attacks, such as web application firewalls (WAFs)
  • Perform authenticated vulnerability scanning
  • Prevent common application vulnerabilities by using suitable methods and tools already during development (aka shifting left)
  • Run external and internal vulnerability scans at least once every three months and after every significant change

Of note is requirement 6.4.2, which becomes mandatory in March 2025 and requires organizations to “deploy an automated technical solution for public-facing web applications.” Once in force, it will replace the option provided in requirement 6.4.1 to only perform periodic manual web application reviews without automated measures. The change should encourage organizations to begin the process of understanding their risk and implementing automated tools to reduce it in a continuous process. 

Several requirements either list or imply the need for dynamic vulnerability scanning. In the examples of vulnerabilities to be prevented or mitigated already during development, requirement 6.2.4 lists a number of security flaws that are typically identified using dynamic testing. This includes all types of injection vulnerabilities (notably SQL injection and command injection), client-side vulnerabilities like cross-site scripting (XSS) and cross-site request forgery (CSRF), insecure API access, and security misconfigurations. What’s more, all of section 11.3 is devoted to internal and external vulnerability scans. Requirements include scanning both periodically and after every significant change, resolving all high and critical vulnerabilities, and rescanning all fixes to ensure they are effective.
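
To pick just one of the flaw classes listed above, the sketch below contrasts the string-building anti-pattern behind SQL injection with the parameterized query that requirement 6.2.4 effectively expects developers to use. It is a minimal illustration using Python’s built-in sqlite3 module – the schema and values are placeholders, not anything prescribed by the standard:

```python
# Minimal illustration of parameterized queries as an SQL injection control.
# Uses only the Python standard library; the schema below is a placeholder.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cardholders (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO cardholders (email) VALUES ('alice@example.com')")

user_input = "alice@example.com' OR '1'='1"   # hostile input a scanner would simulate

# Vulnerable pattern (do not use): string concatenation lets input rewrite the query.
# conn.execute("SELECT * FROM cardholders WHERE email = '" + user_input + "'")

# Safe pattern: the driver binds the value as data, never as SQL.
rows = conn.execute("SELECT * FROM cardholders WHERE email = ?", (user_input,)).fetchall()
print(rows)  # [] because the injection attempt matches no real email address
```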

Another important update is requirement 6.3.2, which also takes full effect in March 2025 and covers patch management. In this requirement for bespoke and custom software, organizations must maintain an inventory of their assets so they know the full extent of their attack surface. In practice, this could be achieved through asset discovery and management, by running software composition analysis (SCA), and by maintaining software bills of materials (SBOMs) for all applications.

How to prepare your web security program for PCI DSS compliance 

Paying lip service to compliance requirements is never a good idea, especially when it comes to security. Doing only the bare minimum needed for security certification can create a false sense of security and put the entire organization at risk. For payment processors in particular, a comprehensive security strategy that takes compliance requirements as its baseline is the best way to reduce the risk of security incidents and breaches when handling sensitive financial data and transactions.

Here are five best practices for covering web application security as part of your PCI DSS compliance efforts:   

  • Build security into application and process design and architecture. This includes following secure design and coding practices, running and maintaining runtime protection measures such as WAFs, keeping up with security updates, and embedding application security testing into the development process by shifting left.
  • Make accurate vulnerability scanning a continuous process within operations and development. Apart from being explicitly mandated in the new PCI DSS version, vulnerability scans can do double duty, minimizing your current attack exposure on the one hand and preventing new vulnerabilities from being implemented on the other.
  • Keep a handle on access control to protect data across your web apps and APIs. Proper access control to back-end systems and front-end applications is a must for any organization that processes sensitive cardholder data, but with the vast majority of data operations now performed via APIs, you also need to ensure (and then test) that your API endpoints also enforce correct authentication and authorization. 
  • Ensure your vulnerability management covers both publicly reported issues (CVEs) and flaws in your custom code. PCI DSS v4.0 specifically mandates that while you need to keep up with external vulnerability reports and ensure your scans incorporate them, you also need to minimize vulnerabilities in new or customized software, in practice requiring you to both scan for vulnerable components and test for security weaknesses.
  • Automate security testing as far as possible to maximize efficiency. The updated standard requires the use of automated security tools alongside any manual reviews and tests, so it is crucial to minimize the noise generated by any automated scanners in your toolset. Features like automatic vulnerability verification can help your teams focus on actionable issues without distractions and false alarms. 

Following these best practices for securing your web apps and software should have your organization in good shape to prepare for formal certification for any PCI DSS version. For specific requirements, keep in mind that there is a strict implementation timeline for moving to v4.0:

[Implementation timeline graphic – source: https://blog.pcisecuritystandards.org/countdown-to-pci-dss-v4.0]

As of this writing, we are still in a transition period where v3.2.1 is active, and v4.0 is only recommended. As we move closer to the deadlines in March of 2024 and then 2025 (for the full set of requirements), integrating best practices and more modern tooling into your software development lifecycle today will lay the foundation for a successful compliance process tomorrow.  

How Invicti can help with PCI DSS compliance

Invicti provides out-of-the-box scan profiles and reports for web vulnerabilities covered by PCI Data Security Standard requirements. We also work with a third-party ASV (Approved Scanning Vendor) to provide one-click PCI DSS compliance confirmation for web applications. To learn how Invicti can be your partner in achieving and maintaining PCI DSS compliance, contact our sales team.

DAST tools are only as good as their setup and support
https://www.invicti.com/blog/web-security/dast-tool-setup-support/ | Thu, 31 Aug 2023 13:00:00 +0000

For all the differences between the DAST tools on the market today, scanner configuration and optimization can make or break any product. Even the best tool needs to be set up correctly to test every corner of your unique application environment – and to get there quickly and efficiently, you need rock-solid support from your vendor.

In the testing tool corner of the security industry, it’s easy to get caught up in comparing features, prices, and vendor claims across products and forget that tools don’t run themselves – they’re used by people who need to get a job done. Especially in the realm of dynamic application security testing (DAST), any scanning tool needs to be optimized to best match your unique environment and business needs.

The right setup and ongoing support can make a huge difference to the quality and usefulness of results. If your vendor can guide you through deployment and optimization, you will start seeing real value almost immediately.

Getting results and value in hours versus weeks

Proving the value of investments in security tools is notoriously difficult, especially when it comes to security testing. Without tangible results in a realistic timeframe, automated tools like DAST risk becoming a compliance item to tick off the list without regard to actual impact on security. Like any other tool, DAST needs to be set up correctly. If it’s not configured for your environment, even the best tool might miss some assets that should be getting tested – and a mediocre tool may find nothing at all because it can’t get in.

The combination of a good product, good setup, and good support is what determines the time to value. Even a technically good product that isn’t backed by the right support and documentation may leave your teams with a steep learning curve and many weeks of trial, error, and manual tweaking before you start to see value. But when product, setup, and support meet in the right place, your first security improvements could start coming in within hours of your first scan.

Common speedbumps in setting up scanning

At Invicti, we work closely with our customers, from initial onboarding to everyday support and feature requests for our industry-leading DAST solutions. Based on our experience, here are three crucial areas where less advanced scanners can stumble – and also where a few minutes of expert guidance can save many hours of DIY setup and exponentially improve the quality of your results:

  • Knowing what to test: Deciding on the scope of DAST scans is crucial to ensure you’re testing everything you need. Otherwise, whatever tests you run could be skipping critical assets, potentially leaving them vulnerable to attack. Invicti incorporates an asset discovery service and an advanced crawler to identify as many potential scan targets as possible. When set up properly, these pre-scan features show you your attack surface and help prioritize assets for testing.
  • Authentication: There are few web applications and even fewer APIs that are fully accessible without authentication and usually also authorization. Basic vulnerability scanners often struggle to access and test restricted assets or lack the automation features to scan them without user interaction. Setting up authentication is one of the first steps in bringing Invicti customers on board – and once set up, the Invicti solution can run authenticated scans fully automatically.
  • Performance and scope optimization: Getting a DAST tool working is only the first step to getting the best possible results from it. Each customer environment is unique, so the Invicti support team helps customers constantly optimize their setup to maximize performance and scope. This translates into faster scans, more accurate results, and often even customized solutions to scan bespoke applications that most scanners can’t test at all.

Going from scan results to actual fixes

For most DAST scanners, delivering the scan results is where the job ends, and anything after that is someone else’s problem. In fact, many users don’t expect a DAST tool to do anything more. But Invicti was built with automation and integration in mind, so its functionality also includes a wealth of workflow integration features that can be set up to efficiently feed scan results into an existing development pipeline. You don’t need security experts to run an advanced DAST solution – once set up and integrated into your workflows, it can run all by itself and be easily managed even by non-security personnel.

Invicti customer support can help to gradually expand the scope of integration until DAST runs fully automatically as a silent coworker. At this stage, you are optimizing not only application security testing but your entire development and testing process. And with Invicti’s proof-based scanning and remediation guidance in vulnerability reports, you’re seeing clear security benefits with added confidence in the results, as real security vulnerabilities are found and closed with every ticket.

Read our case study to learn how much time Park ‘N Fly saves with integrated Invicti DAST

Shortcut to DAST success: Tag-teaming with your vendor

Nobody knows your application environment better than your own team, but nobody knows the product like the vendor’s team. The fastest road to success and value is to combine the two: have the vendor guide your internal experts through the setup and optimization process while drawing on your team’s intimate knowledge of the applications and process flows involved. That way, your employees can focus on doing their core jobs rather than setting up and optimizing scans.

The right DAST backed by reliable onboarding and vendor support can be all you need to transition to an efficient and effective DevSecOps process. So when looking at DAST products, remember to ask about the onboarding process and vendor support – and when looking at Invicti, remember to ask about our Guided Success offering.

The post DAST tools are only as good as their setup and support appeared first on Invicti.

5 fundamental differences between DAST and penetration testing https://www.invicti.com/blog/web-security/5-differences-dast-vs-penetration-testing/ Thu, 24 Aug 2023 13:00:00 +0000 https://www.invicti.com/?p=47110 Automated vulnerability scanning with DAST tools and manual penetration testing are two distinct approaches to application security testing. Though the two are closely related and sometimes overlap, they differ (among other things) in scope, efficiency, and the types of security vulnerabilities found.

The post 5 fundamental differences between DAST and penetration testing appeared first on Invicti.

In cybersecurity, it can be tempting to fall into checklist mode, if only for the peace of mind of ticking off the compliance items required to minimize security risk. In web application security specifically, some organizations still treat a periodic manual penetration test or vulnerability assessment as sufficient to tick their “application security testing” box – but is penetration testing enough to truly cover that area? And what about all the automated testing methods out there (aka the AST zoo)?

This post attempts to clear up some of the confusion around the relative merits of automated and manual approaches to dynamic application security testing (DAST) – and show that it’s not an either-or proposition.

Strictly speaking, all types of security testing that probe a running app from the outside (black-box testing) qualify as DAST, whether manual or automated. In practice, the term DAST usually refers to automated vulnerability scanning, while manual black-box testing is called penetration testing (or pentesting for short).

Difference #1: Web asset coverage

When testing to determine your actual exposure to attacks, ideally you need to know and test your entire web attack surface. While penetration testers are theoretically able to test any asset that might also be available to attackers, manual testing is time-consuming and in practice usually limited in scope to a smaller subset of your environment. This could mean only testing business-critical apps or focusing on new and changed assets.

A good-quality DAST tool, on the other hand, can run automated scans on any number of assets – preferably on your entire web environment. Similar to pentesting, DAST can find not only vulnerabilities resulting from security flaws in your own code but also vulnerabilities in third-party libraries and APIs, as well as purely runtime issues like security misconfigurations and vulnerable tech stack components. This is in contrast to static application security testing (SAST), where you analyze source code without running it, so you can only uncover potential vulnerabilities – and only when you have the code.

Difference #2: Speed and cost

Apart from practical limitations of scope, penetration testing is far slower than a DAST scan, both in terms of actual time taken and in terms of process efficiency. Every test you run has to be commissioned in advance and carries an associated cost, so relying purely on pentesters for application security testing can get cumbersome and expensive. And if you’re unable to test everything, and test it often, the time gaps between pentests can translate into gaps in your security posture.

With an accurate DAST solution under your belt, you can run what amounts to basic automated pentesting as often as you need; some Invicti customers scan their entire environment on a daily schedule. Whether in production or development, you can run scans whenever you want at no additional cost and without waiting on anything or anyone. This is especially important in an agile DevSecOps process, where stopping a sprint to wait for security testing results is not a realistic option. Because a scanner mainly finds what pentesters would consider obvious vulnerabilities, fixing these simpler issues is much faster than, say, addressing a major security flaw in business logic.

Read our case study to learn how bringing vulnerability testing in-house with Invicti DAST allowed one customer to cut their external pentesting costs by 80%

Difference #3: Depth and breadth of testing

There’s no question that an experienced pentester can go deeper and exploit more complex security vulnerabilities than any automated tool ever could. But, again, this takes time and cannot be applied equally to your entire web environment. In fact, that’s not the original purpose of pentesting – as the name implies, a penetration test is primarily intended to check if it’s possible for anyone to break into a system, so it doesn’t provide a full picture of your security.

You can think of a DAST solution as a way of setting and maintaining your security baseline. A good vulnerability scanner can run hundreds of automatic security checks per web asset and (if set up properly) do it across your entire environment at a scale and speed unattainable with manual testing. In fact, most penetration testers start work by running a vulnerability scanner to see what they’re working with and where to focus their efforts. In addition, with a mature solution like Invicti, the automated tests incorporate years of security research expertise across multiple web technologies and attack techniques, going far beyond the skill set of any single tester.

Difference #4: Ease of remediation

Finding security gaps is the short-term goal of security testing – but the long-term goal is to fill those gaps. Pentesting focuses on finding ways into your applications, so while the results of a penetration test provide information about the current resilience of an IT environment, they might not make it any easier to address the identified issues. This is especially true when testing originates in the sphere of information security with little to no integration with application development teams, who simply get a report about exploited vulnerabilities and are left to their own devices to fix them.

While many DAST tools can be equally unhelpful, especially when run as standalone scanners, some DAST solutions are designed specifically to integrate with the software development life cycle (SDLC) and aid remediation. In the case of Invicti, this starts with a rich set of out-of-the-box integrations with popular issue trackers, CI/CD pipelines, and collaboration platforms. To ensure that automated workflows are not flooded with false positives, Invicti uses proof-based scanning to automatically verify the majority of common vulnerabilities. That way, developers get confirmed and actionable tickets directly in their issue tracker – each complete with detailed technical information and remediation guidance.
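
To give a concrete, if simplified, picture of what SDLC integration can mean, here is a sketch of a CI gate step: it triggers a scan through a hypothetical DAST API, polls until the scan finishes, and fails the build if any confirmed high-severity vulnerabilities are reported. Endpoint paths, field names, and the severity threshold are assumptions, not a specific product’s API.

```python
import os
import sys
import time
import requests

API_BASE = "https://dast.example.com/api/v1"   # hypothetical endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['DAST_API_TOKEN']}"}

# Kick off a scan against the freshly deployed test environment.
scan = requests.post(
    f"{API_BASE}/scans",
    json={"profile_id": os.environ["SCAN_PROFILE_ID"]},
    headers=HEADERS,
    timeout=30,
).json()

# Poll until the scan finishes (simplified: no overall timeout handling).
while True:
    status = requests.get(f"{API_BASE}/scans/{scan['id']}", headers=HEADERS, timeout=30).json()
    if status["state"] in ("completed", "failed"):
        break
    time.sleep(60)

# Gate the pipeline on confirmed high-severity findings only.
confirmed_high = [
    v for v in status.get("vulnerabilities", [])
    if v.get("confirmed") and v.get("severity") == "high"
]
if confirmed_high:
    print(f"Build blocked: {len(confirmed_high)} confirmed high-severity issue(s).")
    sys.exit(1)
print("No confirmed high-severity issues found.")
```

Gating only on confirmed findings is what makes this kind of automation workable: developers are never blocked by results that still need manual verification.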

Difference #5: Types of vulnerabilities found

Both DAST and pentesting will find many of the same fundamental web vulnerabilities, like SQL injection or cross-site scripting (XSS) – but that’s where the similarities end. Manual testers, whether pentesters or bounty hunters, excel at finding business logic vulnerabilities that automated scanners can’t detect because they don’t understand application logic. This includes such security flaws as insufficient authentication or authorization, where a certain resource is accessible to an attacker even though it shouldn’t be. Penetration testers can also use their expertise and intuition to combine multiple vulnerabilities into complex chains to mimic real-world attacks.

Where a DAST solution can’t improvise like a human, it wins out on persistence, consistency, and sheer volume. If you have several dozen XSS vulnerabilities across your environment, for example, a penetration test might only report a handful of them and leave it to your developers to find and fix all similar input sanitization failures. A good DAST scanner, on the other hand, will report most or all of these security issues, providing your development teams with an actual task list rather than general recommendations. DAST tools also come with a far greater variety of test attacks and payloads than could realistically be used in purely manual testing – and again, they can throw them at any number of assets.
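
For a sense of what that volume means in practice, the toy sketch below probes a single query parameter with a handful of XSS payload variants and flags responses that reflect the payload unencoded. A real scanner does vastly more (context-aware payload selection, output encoding analysis, DOM-based checks, crawling every parameter of every page), and the target URL and payload list here are purely illustrative.

```python
import requests

TARGET = "https://app.example.com/search"   # illustrative target
PARAM = "q"

# A tiny sample of payload variants; real scanners use far larger, context-aware sets.
payloads = [
    '<script>alert(1)</script>',
    '"><img src=x onerror=alert(1)>',
    "'><svg onload=alert(1)>",
]

for payload in payloads:
    resp = requests.get(TARGET, params={PARAM: payload}, timeout=15)
    # Naive reflection check: a real scanner also analyzes where and how the input is reflected.
    if payload in resp.text:
        print(f"Possible reflected XSS with payload: {payload}")
```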

Keeping your web apps and APIs secure goes beyond DAST vs. penetration testing

Cyberattacks are now a permanent feature of all cloud-based operations, and building up resilience is crucial to keep them from turning into data breaches. As application architectures and deployment modes get ever more distributed and complex, it’s no longer enough to rely only on perimeter defenses like web application firewalls – first and foremost, the underlying application itself needs to be secure. Any AppSec program worth its salt should incorporate a layered and comprehensive approach to security testing, using the right testing methods at the right time to minimize the number of application vulnerabilities at every stage of development and operations.

DAST solutions are unique among AppSec testing tools in that they can cover both information security (to scan your organization’s own attack surface) and application security (to test the apps you’re developing and running). Combined with the sheer scale of testing and the ability to test all web assets regardless of tech stack or access to source code, this makes DAST a foundational component of any cybersecurity program. Use DAST to bring testing in-house and fix everything you can, and only then call in the security experts and ethical hackers as part of a penetration test or bug bounty program.

As a final thought, remember the recent MOVEit Transfer crisis? (If not, we’ve covered it here and here.) The resulting attacks that ultimately affected hundreds of organizations were only possible because malicious hackers combined several simple and normally inaccessible vulnerabilities into a devastating attack chain. Just like a penetration tester, the attackers used their human ingenuity to devise an attack path – but if those basic vulnerabilities had been found by automated scanning at earlier stages of the development process, all those MOVEit Transfer data breaches might not have happened.

The post 5 fundamental differences between DAST and penetration testing appeared first on Invicti.
