There was a time when applications were built just to complete the task at hand. These days, though, developers often need to focus just as much on how secure the application is. If you’ve written code, you’ve probably been asked to check for the most common security loopholes and fix them.
Almost every large organization has a dedicated QA and security department. Even so, it’s important that you, as a developer, understand the importance of security. This is doubly true for startups and other organizations that don’t have a dedicated security or QA team.
How can you increase awareness of security and implement useful, practical, effective solutions? Hold regular security testing sessions. In this post, you’ll learn about the major issues you should consider while running a security testing session.
What is security testing?
Security testing is the process of finding vulnerabilities or security loopholes in the application or system you’re testing. Almost every application available today has multiple users, and these folks use shared resources. For this reason, you must take care that your code isn’t vulnerable to attacks.
So, how can you know whether your code is vulnerable? Well, you can wait for someone to exploit the weakness and misuse it. But that’s not what you’d want. Another way to identify these weaknesses is by—you guessed it—running your code through security tests.
Security testing won’t catch everything; there are always vulnerabilities that slip through, and every new deploy can bring a security flaw. However, security testing can help you identify and catch many vulnerabilities early on, which will reduce the number of vulnerabilities that make it to production.
Running security testing sessions
How you run a security testing session determines how effective the process is. Security testing can involve different teams, but to make things simple, let’s consider two major types: development teams and quality assurance (QA) teams.
The more you focus on security while building the application, the fewer fixes you’ll have to make later. Development teams have to implement security features during the development phase, while QA teams do so during a separate testing phase. We’ll explore the role of QA teams later in this post, but let’s examine development teams’ role first, since their work comes earlier in the process.
To make your job easier, a set of principles for security testing already exists. When you’re developing the application and running a security testing session, it’s important to focus on these principles. Now, let’s have a look at them in detail.
Security testing principles
The security testing principles will help you ensure that your application is free from the most obvious threats. Let’s focus on the six most important principles:
Confidentiality

Confidentiality is all about maintaining data privacy. For example, your bank details—such as your credit card number, your banking password, and so on—are sensitive data. You wouldn’t want this data to be available to anybody else. Similarly, you have to make sure your code protects sensitive data from getting into the hands of the wrong people.
You can use account isolation to keep data private to one particular user. How you test for account isolation depends on the architecture of the application. One common way of testing account isolation is to try to access the information of some other account.
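As a sketch of what such a test can look like, here's a minimal in-memory example of an account-isolation check. The names (`fetch_record`, `AccessDenied`) are hypothetical, not from any specific framework; in a real application the same test would be run against your API with two different accounts' credentials.

```python
class AccessDenied(Exception):
    pass

# Each record is owned by exactly one user.
RECORDS = {
    "rec-1": {"owner": "alice", "data": "alice's bank details"},
}

def fetch_record(record_id, requesting_user):
    """Return a record only if the requesting user owns it."""
    record = RECORDS[record_id]
    if record["owner"] != requesting_user:
        raise AccessDenied(f"{requesting_user} may not read {record_id}")
    return record["data"]

def test_account_isolation():
    # The owner can read their own data...
    assert fetch_record("rec-1", "alice") == "alice's bank details"
    # ...but another account's attempt must be rejected.
    try:
        fetch_record("rec-1", "mallory")
        return False  # isolation is broken: no exception was raised
    except AccessDenied:
        return True   # isolation holds

assert test_account_isolation()
```

The important part is the failure case: the test deliberately tries to read another account's data and passes only if that attempt is refused.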
Integrity

Let’s say you’ve written code to send messages. Now, you’re using your code to send your bank account number to a client so that she pays you for the work you’ve done. It would be bad if your account number was altered and a wrong account number was sent to the client, right? Then the money would go to some other person while you’re waiting for the payment. This simple example shows the importance of integrity in applications.
You have to make sure your code transfers accurate data and doesn’t allow unauthorized users to make changes to the data. You can consider integrity to be another word for accuracy.
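One common way to detect tampering in transit is a message authentication code. Here's a minimal sketch using Python's standard `hmac` module, assuming a shared secret key (which in practice would come from secure configuration, not be hard-coded):

```python
import hashlib
import hmac

# Assumed shared secret; in production, load this from secure config.
KEY = b"shared-secret-key"

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(sign(message), tag)

msg = b"pay account 12345"
tag = sign(msg)

assert verify(msg, tag)                       # untampered message passes
assert not verify(b"pay account 99999", tag)  # altered message is rejected
```

If anyone alters the account number in transit, the tag no longer matches and the receiver knows the data lost its integrity.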
Availability

When you develop an application and deploy it, many people and organizations are likely to use it. Therefore, you have to make sure that your application is available to the users trying to access it. After all, if you can’t use an application when you want to use it, it might as well not exist at all.
Authentication

If you want to send an email from your email address, then you have to authenticate yourself by using a password. You can use the email functions only after identifying yourself.
Similarly, you have to allow only specific users who identify themselves to use the features of the application that they’re permitted to use. One of the most common ways of doing this is by using a login mechanism.
After the user logs in, you still have to check that the user can access only the features he or she is supposed to. In poorly programmed web applications, the user interface lists only the pages the user can view, and the assumption is that the user will click only the links on the page. But there’s a chance that the user changes the URL and reaches restricted pages or paths directly. Therefore, you have to enforce the authentication check on every page and path.
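A common way to enforce the check on every handler, rather than trusting the links shown in the UI, is a decorator. This is a minimal sketch; the session dictionary and handler names are hypothetical, but the pattern is the same one web frameworks use for their "login required" decorators:

```python
import functools

class Unauthorized(Exception):
    pass

def login_required(handler):
    """Reject any request whose session has no logged-in user."""
    @functools.wraps(handler)
    def wrapper(session, *args, **kwargs):
        if not session.get("user"):
            raise Unauthorized("login required")
        return handler(session, *args, **kwargs)
    return wrapper

@login_required
def account_page(session):
    return f"account settings for {session['user']}"

# Logged-in access works:
assert account_page({"user": "alice"}) == "account settings for alice"

# Typing the URL directly with no session must fail:
try:
    account_page({})
    reached = True
except Unauthorized:
    reached = False
assert reached is False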
If you’re using different applications or application programming interfaces (APIs), or if you have integrations, then you’ll have to use access tokens for authentication. You have to make sure that these tokens are unique so that there are no security breaches or authentication conflicts. This process is known as token unicity.
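As a sketch of how you might generate such tokens, here's an example using Python's standard `secrets` module. `token_urlsafe` draws from a cryptographically secure random source, so collisions are already overwhelmingly unlikely; the explicit registry check below is a belt-and-braces illustration of the unicity requirement, not something most systems need:

```python
import secrets

# Registry of tokens already handed out (hypothetical; a real system
# would store these server-side, e.g. hashed in a database).
issued = set()

def new_token() -> str:
    """Return a fresh, unique access token."""
    while True:
        token = secrets.token_urlsafe(32)  # 32 random bytes, URL-safe
        if token not in issued:
            issued.add(token)
            return token

tokens = [new_token() for _ in range(10_000)]
assert len(tokens) == len(set(tokens))  # every token is unique
```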
Authentication is mainly used in applications that have user-specific functions—for example, Facebook. But if you’re building a public or generic-user application such as a search engine, then including authentication isn’t compulsory.
Authorization

If you’re building an application that different users with different roles will use in various ways, then you have to implement logic to provide access rights. And you have to make sure that nobody can bypass these rights—for instance, by taking editing privileges when you’ve granted them viewing privileges only.
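At its simplest, such access-rights logic is a mapping from roles to permitted actions, checked on the server for every request. The role and action names below are illustrative:

```python
# Server-side role-to-permission mapping (illustrative names).
PERMISSIONS = {
    "viewer": {"view"},
    "editor": {"view", "edit"},
    "admin":  {"view", "edit", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    """Check a role's rights; unknown roles get no permissions."""
    return action in PERMISSIONS.get(role, set())

assert is_allowed("viewer", "view")
assert not is_allowed("viewer", "edit")   # a viewer can't grab editing rights
assert is_allowed("admin", "delete")
```

The security test here mirrors the prose: attempt an action the role was never granted and confirm it's refused, no matter what the client sends.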
Non-repudiation

Non-repudiation means that you can’t deny something that you’ve done. For instance, if you’re sending some money to a client, then the application or the medium of money-transfer will have this information. This means you can’t later deny that you sent the money.
When you’re building an application, you have to consider logging user actions. This is essential when there’s a dispute between parties about actions done through your application.
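One way to make such a log trustworthy in a dispute is to make it tamper-evident. Here's a sketch of a hash-chained audit log: each entry embeds a hash of the previous one, so rewriting history after the fact breaks the chain. This is a simplified illustration; a production system would also protect the log store itself and likely sign entries.

```python
import hashlib
import json
import time

log = []  # in-memory audit log (a real system would persist this)

def entry_hash(entry: dict) -> str:
    payload = {k: entry[k] for k in ("user", "action", "time", "prev")}
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()

def record_action(user: str, action: str) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"user": user, "action": action, "time": time.time(), "prev": prev}
    entry["hash"] = entry_hash(entry)
    log.append(entry)

def chain_intact() -> bool:
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev or entry["hash"] != entry_hash(entry):
            return False
        prev = entry["hash"]
    return True

record_action("alice", "sent $100 to bob")
record_action("bob", "confirmed receipt")
assert chain_intact()

# An after-the-fact attempt to deny the transfer breaks the chain.
log[0]["action"] = "sent $1 to bob"
assert not chain_intact()
```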
Now that you know the roles of the development team in security testing, let’s take a look at the roles of the QA team. If you’re part of a QA team, then you’ll have to test the application in different ways to make sure all potential threats are addressed.
Security testing techniques
Of course, there are various approaches to security testing, but let’s discuss the most common ones that OWASP lists.
- Manual inspection and review
- Threat modeling
- Code review
- Penetration testing
Manual inspection and review
You can divide this technique into two categories: reviewing the design and inspecting the working application.
In the first category, you’ll be looking at the design and architecture of the application to find security flaws. Here, you’ll check the flow of the application, see if authentication and authorization are designed properly, and so on.
In the second category, you’ll do the security check by using the application yourself. You’ll check if the application is working as expected or not, try to do something that you aren’t allowed to and then see if that works, and so on.
This technique gives you a chance to check the application for custom use cases. But the problem is that you’ll need good skills and a sharp eye to notice the flaws. Also, this method is time-consuming.
Threat modeling

Threat modeling is the process of identifying potential threats or weak points in your code and trying to fix those flaws. First, you’ll search for weak points in your application where there could be security attacks. Then you’ll have to figure out what kinds of attacks someone can perform on your application. For example, if you’re using a SQL database, then there’s a chance that your application is vulnerable to SQL injections. Or if you have a web application, it might be vulnerable to cross-site scripting. Finally, you’ll have to implement measures to prevent or mitigate the threats.
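The SQL injection case can be demonstrated and mitigated in a few lines. This sketch uses an in-memory SQLite database to show why string concatenation is the weak point and why parameterized queries are the standard countermeasure:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Classic injection payload supplied as "user input".
malicious = "nobody' OR '1'='1"

# Vulnerable: concatenating input into the query leaks every row.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{malicious}'").fetchall()
assert leaked == [("s3cret",)]

# Mitigated: the driver binds the value, so the payload is just a
# (nonexistent) name and matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)).fetchall()
assert safe == []
```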
Code review

With this technique, you’ll look at the source code of the application and check if there are any security issues in that code. In many poorly programmed applications, especially web applications, developers hard-code sensitive data such as credentials with no protection at all. There’s a high chance that security testing tools won’t catch these issues, and that’s why source code review is so important.
Penetration testing

You might have heard of penetration testing (also known as ethical hacking). In this technique, you’ll try to find vulnerabilities in the code and then try to exploit them. If you successfully exploit them, then you’ll know what kind of attacks work, and you can fix them.
There are two types of penetration testing: external and internal. In external pentesting, you’ll test the security of the application through the assets that are exposed to the outside world. Implementing the security principles in your application covers much of what external pentesting targets. In internal pentesting, however, you’ll test how far a breach can spread after a hacker gets into the system.
The advantage of penetration testing is that you’ll look at the application and code from a hacker’s perspective, which is actually very effective.
How much time does a security testing session take?
How much time should you expect to expend on one of these sessions? It depends on the size of your application and how you plan on testing it. For example, consider a simple C program that takes input from the user and prints it out on the screen. The main threat that’s possible here is a buffer overflow attack. You can test this and fix it within an hour. On the other hand, if you consider an application like Facebook, it’d take months just to finish the primary test. So how much time it takes depends on the size of the application and the number of integrations.
You can follow two approaches for testing different-sized applications. For tiny applications, the development team and the QA team can sit together, check for potential threats, and discuss the best possible countermeasures. This would take a maximum of a day or two.
For large-scale or complex applications, the development team and the QA team can run security tests individually and create a report. The time for this again depends on the size, but typically it should be around a month maximum. After running the tests and creating reports, both the teams can sit together and discuss the issues and their solutions.
The bottom line
Security is one of the major concerns of an application developer. Even though there are ethical hackers who specifically work on helping improve security, it’s important for anyone who’s building an application to know about security. This CTO security checklist can help you make sure you’re doing everything you should be doing.
When you’re running a security session, you have to keep in mind the principles of security testing. Also, use various techniques to make sure your application and code have good security.
Security testing will never be perfect, but it’s a smart practice for reducing the number of vulnerabilities that make it to production. Application security management platforms like Sqreen, which monitor and protect your applications in production, can then help you identify and protect against the vulnerabilities that do make it through.
This post was written by Omkar Hiremath. Omkar uses his BE in computer science to share theoretical and demo-based learning on various areas of technology, like ethical hacking, Python, blockchain, and Hadoop.