In part two of our series on Executive Order (EO) 14028, this article focuses on the minimum testing standards for software vendors and developers. Software vulnerabilities are extremely common in commercial applications: as many as 76% of applications have at least one security flaw. With most businesses and government agencies offering services through web applications, a single failure to secure a web application can have serious consequences. To help address this, NIST has published standards for application testing that developers can use to ensure their applications are secure before they are released to market, and even after launch. This article explains each of these points, along with NIST's recommendations for building applications that are secure by design.
Threat modeling is the process of identifying potential threats to your application based on its function. For example, an internet-facing web application faces different threats than an internal one. You do this by creating an abstraction of the system, building profiles of potential attackers, identifying their goals and methods, and finally planning your countermeasures.
Threat modeling should typically be done in the planning phase of the Software Development Lifecycle (SDLC) so that you can plan what security features you will need to counteract those threats. As your application changes or adds new features, you should repeat the process, taking these new elements into account.
For web applications, some common threats are cross-site scripting, SQL injection, cross-site request forgery, directory traversal, and malicious file uploads, to name a few. Rather than trying to look up threats individually, we recommend using threat modeling frameworks that catalog the threats affecting certain types of applications. Some popular options to consider are MITRE, OWASP, Microsoft's STRIDE framework, and the TRIKE methodology.
The first recommendation NIST gives for testing is to perform automated testing for verification of your software. This can be anything from a simple script to commercial software that is specifically designed to find security vulnerabilities. NIST recommends that you use automated verification to achieve the following goals:
- Ensure that static analysis does not report new weaknesses
- Run tests consistently
- Check results accurately
- Minimize the need for human effort and expertise
Automated testing has the benefit of not requiring skilled man-hours, can be performed repeatedly as new code is written, and in many cases is more accurate and consistent than manual code review. It should be performed whenever the source code is updated.
Static analysis is any check of code for vulnerabilities that does not involve running the code itself. NIST identifies two main ways this should be performed. The first is using a code scanner to identify security bugs: you leverage software applications to review the code for potential security vulnerabilities (supplemented, to a lesser extent, by manual code reviews).
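To make the idea concrete, here is a toy static-analysis check in Python: it parses source code into an abstract syntax tree without ever executing it, and flags string literals passed as a `password` argument. This is a minimal sketch for illustration only; it has nothing like the breadth of a real scanner.

```python
import ast

def find_hardcoded_passwords(source: str) -> list[int]:
    """Toy static-analysis pass: flag calls where a string literal is
    passed as a 'password' keyword argument, without running the code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if (kw.arg == "password"
                        and isinstance(kw.value, ast.Constant)
                        and isinstance(kw.value.value, str)):
                    findings.append(node.lineno)
    return findings

print(find_hardcoded_passwords('connect(host="db", password="hunter2")'))  # [1]
```

Because the check works on the syntax tree, it flags the literal even in code paths that would never execute in a test run, which is exactly the advantage static analysis has over dynamic analysis.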
The second element is looking for hardcoded secrets: things like passwords, encryption keys, and access keys. Hard-coded secrets are very problematic because attackers can leverage them to gain initial access and later move laterally into the core infrastructure of a company. Drawing on its expertise in this area, GitGuardian has thoroughly documented why this is such a problem, how common this security vulnerability is, and why both 'classic' static analysis and code reviews fail to prevent it.
Secrets are often unwittingly posted to public GitHub because they are buried in the Git history, making them freely available for anyone on the internet to find. Even with a private repository, you are still revealing those secrets to anyone in the company who has access to it. This can lead to misuse by an employee, and if a single account is compromised, it hands an attacker several secrets at once.
Focus on hard-coded credentials
One commonly overlooked aspect of secrets detection is scanning for both specific and generic secrets. This is best explained with an example:
```
AWS_ACCESS_KEY_ID = A3T6AKIAFJKR45SAWS5Z
AWS_SECRET_ACCESS_KEY = hjshnk5ex5u34565d4654HJKGjhz545d89sjkjak

...

connect_to_db(host="184.108.40.206", port=8130, username="root", password="m42ploz2wd")
```
As you can see, in the first section it is very clear what those secrets are used for, while in the second it is not. Many secret detection tools are configured to detect only specific, well-known key formats and would miss the generic credentials in the second snippet, leaving them freely accessible to unauthorized users.
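The difference between the two detector families can be sketched with a pair of regular expressions. The patterns below are simplified illustrations for this article, not GitGuardian's actual detection rules:

```python
import re

# Specific detector: AWS access key IDs have a documented prefix and
# fixed length, so a tight pattern catches them with few false positives.
AWS_KEY_ID = re.compile(r"\b(A3T[A-Z0-9]|AKIA|ASIA)[A-Z0-9]{16}\b")

# Generic detector: look for assignments of long quoted values to
# password-like names. Far fuzzier, but it catches the second snippet.
GENERIC_SECRET = re.compile(
    r"(password|passwd|secret|api_key)\s*=\s*[\"']([^\"']{8,})[\"']",
    re.IGNORECASE)

def scan(text: str) -> list[str]:
    """Return every candidate secret found by either detector."""
    hits = [m.group(0) for m in AWS_KEY_ID.finditer(text)]
    hits += [m.group(0) for m in GENERIC_SECRET.finditer(text)]
    return hits
```

Run against the example above, the first pattern finds the AWS key ID and the second finds `password="m42ploz2wd"`; a tool shipping only the first family would report one secret instead of two.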
GitGuardian's continuous monitoring of public GitHub has shown that almost half of leaked secrets fall into the generic category, which makes detecting them essential for effective protection.
Lastly, there is the problem of false positives and false negatives. An excess of false positives leads to alert fatigue: people become so accustomed to seeing unimportant alerts that they start ignoring all of them and fail to investigate the ones that matter. You also want to keep false negatives low so that you can be confident you are detecting every exposed secret.
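The trade-off is the familiar precision/recall one. With illustrative numbers (these are made up for the example, not measured figures), a noisy scanner can drown a team even while catching most real secrets:

```python
def detector_stats(true_pos: int, false_pos: int, false_neg: int):
    """Precision: what share of raised alerts are real secrets.
    Recall: what share of the real secrets were caught."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Hypothetical scanner: 400 alerts raised against a codebase that
# actually contains 100 real secrets.
precision, recall = detector_stats(true_pos=80, false_pos=320, false_neg=20)
print(f"precision={precision:.2f}, recall={recall:.2f}")  # precision=0.20, recall=0.80
```

A detector like this catches 80% of real secrets, yet four out of five alerts are noise; that is the ratio that trains people to stop reading alerts.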
As you may know, GitGuardian specializes in detecting both generic and specific secrets across public and private repositories. Our detection systems have been tuned to combine a low false-alert rate with a low number of missed secrets; if you would like to try our product, you can find more information here.
Dynamic analysis is when you perform security checks at runtime, that is, while the application is executing. This can reveal errors that are not apparent during static analysis. Here is what the NIST standards suggest for dynamic analysis:
Built-in checks and protection: The first step is to run the application with the built-in checks and protections found in the programming language itself. For software written in languages that are not memory safe, you should use techniques that enforce memory safety.
Next is the use of black box test cases, which are tests performed with no knowledge of the inner workings of an application. They are not based on the implementation or a piece of code; rather, they are based on functional requirements. Black box test cases should also include negative tests, which check that even invalid input cannot cause unexpected errors. Examples of cases to test include denial of service, overload attempts, boundary analysis, and functional specifications or requirements.
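As a sketch, here is what a black-box negative test can look like. It is written purely from a hypothetical functional requirement ("quantities are integers from 1 to 1,000,000; anything else must be rejected with a `ValueError`, never crash or silently pass"), with no reference to the implementation. `parse_quantity` is a stand-in so the test is runnable:

```python
def parse_quantity(raw: str) -> int:
    """Stand-in implementation of the function under test."""
    value = int(raw)                     # raises ValueError on non-numeric input
    if not 0 < value <= 1_000_000:
        raise ValueError(f"quantity out of range: {value}")
    return value

# Negative test: every invalid input must fail cleanly with ValueError.
invalid_inputs = ["-1", "0", "1000001", "abc", "", "12.5"]
for raw in invalid_inputs:
    try:
        parse_quantity(raw)
    except ValueError:
        pass                             # expected: rejected cleanly
    else:
        raise AssertionError(f"accepted invalid input: {raw!r}")
print("all negative cases rejected")
```

Note that the test exercises boundary values (`0`, `1000001`) as well as malformed input; none of the cases depend on knowing how `parse_quantity` is written.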
You can also use code-based structural test cases: tests based on the specifics of the code itself. Here you look at the implementation and design tests that ensure this piece of the software performs as expected.
Consider this example: "Suppose the software is required to handle up to one million items. The programmer may decide to implement the software to handle 100 items or fewer in a statically allocated table but dynamically allocate memory if there are more than 100 items. For this implementation, it is useful to have cases with exactly 99, 100, and 101 items in order to test for bugs in switching between approaches."
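A runnable sketch of that example, with a hypothetical `ItemStore` that models the static-table/dynamic-allocation switch-over NIST describes:

```python
class ItemStore:
    """Hypothetical implementation: a fixed table for up to 100 items,
    switching to dynamic allocation beyond that."""
    STATIC_LIMIT = 100

    def __init__(self):
        self._static = []      # models the statically allocated table
        self._dynamic = None   # models the dynamically allocated path

    def add(self, item):
        if self._dynamic is None and len(self._static) < self.STATIC_LIMIT:
            self._static.append(item)
        else:
            if self._dynamic is None:          # the switch-over point
                self._dynamic = list(self._static)
                self._static = []
            self._dynamic.append(item)

    def count(self):
        return len(self._dynamic) if self._dynamic is not None else len(self._static)

# Structural tests aimed exactly at the switch-over: 99, 100, 101 items.
for n in (99, 100, 101):
    store = ItemStore()
    for i in range(n):
        store.add(i)
    assert store.count() == n, f"lost items at n={n}"
```

The three test sizes come straight from reading the implementation; a black-box tester who only knew "handles up to one million items" would have no reason to probe around 100.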
NIST recommends that these tests achieve at least 80% statement coverage to ensure that the application is secure.
Application Fuzzers: Fuzzers are applications that feed large numbers of randomized inputs into a program in order to discover bugs. They deliberately provide unusual inputs that a manual tester probably would not think to try, which can surface unexpected errors.
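A minimal fuzzer can be only a few lines. The sketch below hammers a hypothetical `parse_record` function with random printable strings and records any exception other than the one documented, acceptable failure mode. Real fuzzers such as AFL or libFuzzer are far more sophisticated (coverage-guided mutation, corpus management), but the principle is the same:

```python
import random
import string

def parse_record(line: str) -> tuple[str, int]:
    """Hypothetical function under test: parses 'name:count' records."""
    name, _, count = line.partition(":")
    return name.strip(), int(count)

# Minimal fuzzer: generate randomized inputs and flag any failure
# that is not the documented ValueError for malformed records.
random.seed(0)
unexpected = []
for _ in range(10_000):
    length = random.randint(0, 20)
    fuzz = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_record(fuzz)
    except ValueError:
        pass                        # documented, acceptable failure mode
    except Exception as exc:        # anything else is a bug worth investigating
        unexpected.append((fuzz, exc))
print(f"{len(unexpected)} unexpected failures")
```

Each input that triggers an unexpected exception is kept alongside the exception itself, so a failing case can be replayed and minimized into a regression test.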
Web App Scanners: If your software will be connected to the Internet then you should run a web app scanner to identify potential web app vulnerabilities. As with an application fuzzer it will generate inputs and monitor for unusual behavior.
Checking Included Software
While third-party code lets you add functionality easily, it comes with security risks: any code from outside your company has not necessarily been properly vetted for security bugs. Be sure to apply the same static and dynamic analysis techniques to any third-party code you use, so that you do not introduce security bugs into your application. Companies such as Veracode offer services for evaluating third-party code, or even full applications you may be considering buying and integrating into your company's software development process.
Finding and fixing bugs in your live applications
As people use your application, it is inevitable that some of them will find security bugs, and these need to be fixed quickly, before hackers can exploit them. One way to be proactive in finding these bugs is to have regular security audits performed by professional penetration testers. Alternatively, you can create a bug bounty program and have security researchers examine your application for security bugs on a regular basis. Once bugs are found, it is up to the development team to create fixes and release patches quickly.
Going beyond Software Verification
Software Development Practices
NIST advocates that developers should go beyond just software verification and implement practices that will ensure that the application is secure by design. They identified four main practices for creating secure software in their white paper “Mitigating the Risk of Software Vulnerabilities by Adopting a Secure Software Development Framework (SSDF)”:
1) Prepare the Organization: ensure that the organization’s people, processes and technology are prepared at the organization level.
2) Protect the Software: implement features and controls to protect the software from tampering and unauthorized access.
3) Produce Well-Secured Software: produce software that has minimal vulnerabilities in its releases.
4) Respond to Vulnerabilities: identify vulnerabilities in software releases, respond quickly to address those vulnerabilities and take action to prevent similar vulnerabilities from occurring in the future.
Software Installation and Operation Practices
Even if the software itself is secure, if its installation, operation or maintenance is improperly implemented this can introduce vulnerabilities. Here are some common examples of potential risks in this area:
- Configuration Files: Because software applications and networking parameters differ, most applications allow you to change their settings, and unexpected settings can lead to security vulnerabilities. To prevent this, software releases should ship with secure default settings and restrict how far users can deviate from those defaults.
- File Permissions: This refers to file ownership and permissions to read, write, execute and delete files. Security can be compromised if the mechanisms responsible for it can be modified or removed. To prevent this, file permissions should be determined using the principle of least privilege to limit the possibility of unexpected actions leading to security vulnerabilities.
- Network Configuration: This refers to the security measures that are implemented when building and installing computers and network devices. The network should be configured to prevent unauthorized access to software applications. Verification of this should be done routinely to ensure that any misconfigurations and unauthorized access to applications will be caught.
- Operational Configuration: This refers to components that depend on the software product, or on which the software product depends; common examples for source code are compilers and interpreters. Any such dependency related to the security of the software must be verified to be configured properly, or the security of the software may be compromised.
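The configuration-files point above can be sketched in code: ship secure defaults and validate any override against an allowed range instead of trusting whatever is supplied. The setting names and ranges here are illustrative assumptions, not NIST requirements:

```python
import json

# Secure defaults shipped with the release (illustrative values).
SECURE_DEFAULTS = {
    "tls_enabled": True,
    "session_timeout_minutes": 15,
    "max_login_attempts": 5,
}

# How far a deployer is allowed to deviate from the defaults.
ALLOWED_RANGES = {
    "session_timeout_minutes": range(1, 61),
    "max_login_attempts": range(1, 11),
}

def load_config(raw_json: str) -> dict:
    """Apply user overrides on top of secure defaults, rejecting
    unknown settings and out-of-range values."""
    config = dict(SECURE_DEFAULTS)
    for key, value in json.loads(raw_json).items():
        if key not in config:
            raise ValueError(f"unknown setting: {key}")
        if key in ALLOWED_RANGES and value not in ALLOWED_RANGES[key]:
            raise ValueError(f"{key}={value} outside allowed range")
        config[key] = value
    return config

print(load_config('{"session_timeout_minutes": 30}'))
```

An operator can relax the session timeout within a sanctioned window, but cannot silently disable a safeguard or set a value the developers never anticipated.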
Additional Software Assurance Technology
NIST highlights the fact that while software verification has improved drastically over the years, improvements are still being made in the area of software assurance. They highlight the following near-term advances that may be leveraged to improve security assurance:
- Applying machine learning to reduce false positives from automated security scanning tools and to increase the vulnerabilities that these tools can detect
- Adapting tools designed for automated web interface tests, e.g. Selenium, to produce security tests for applications
- Improving scalability of model-based security testing for complex systems
- Improving automated web-application security assessment tools
- Applying observability tools to provide security assurance in cloud environments
- Adapting current security testing to achieve cloud service security assurance
This installment of our EO 14028 series gives you an overview of what software vendors need to do to ensure their products are secure. It begins with threat modeling to identify the potential threats to your application, along with attackers' methods and goals. Following that, you should use automated testing to perform both static and dynamic analysis of your application for any signs of security bugs. You should be diligent in vetting any third-party code you introduce into the project, and lastly, be sure to review your application periodically for security bugs after release and fix them quickly.
As part of your security testing, you should be using tools to find any hardcoded secrets (passwords, access keys, encryption keys, etc.). If these are left in your repository or accidentally posted to GitHub, they can easily lead to data breaches and company hacks. Studies show that over two million secrets were found on GitHub in 2020, a year-over-year increase of more than 20%.
If that is a concern for you, consider using our tools to monitor both your public and private code repos. And if you enjoyed this article and would like to read more like it, subscribe to our newsletter below for more free content.