Right-Sizing

Application Security is just one of many important attributes that a software system should have. Of course you need to fulfill basic functional requirements – there is no point in securing software that isn’t useful – and more generally you also need to address a range of “ilities” such as reliability, usability, extensibility, maintainability, and scalability. Application security will be competing with these requirements for resources. Developer time is likely the most important of these resources, but your security program is also going to spend hard cash on training, tools, and security expert services.

What is at Risk?

Risk is best considered from the perspective of the organization that will actually run the software. With a modern web-based service this might be the same as the development organization. In other cases a customer might run the software in an environment they control.

Product Market Space

Some market spaces such as finance and medical are inherently more sensitive to security concerns.

Specific Product Role

What exactly does the software do? For example, software that manages the configuration of an implantable defibrillator is likely more sensitive than software that manages an EKG display, even though both are medical devices.

Data Sensitivity

Does the software manage data that is inherently high value such as credit card details?

Theft of Service

Does your software run in an environment where there is a risk of theft of the underlying compute or network resources? Is it a component of a service that might be subject to theft?

Denial or Disruption of Service

Denial of service is generally thought of as the risk that the service being delivered by the software might be shut down, but we should also consider the possibility that a flaw in our software might enable an attacker to shut down other services.

What will it take?

Technology Stack & Architecture

Most application software rides on top of several layers of other software components, each of which has its own security considerations. The more complex your environment, the more difficult it will be to tackle security. For example, if you develop in four different languages, you need tools that cover each of them.

External Interfaces

Attacks are usually instigated via an exposed interface so the nature of these interfaces plays an important role in securing a system. For example there is a large body of art associated with attacking and defending web-based user interfaces.

Technical Deployment Environment

Is your software deployed on embedded devices, on cloud instances (with or without containers), or as an app installed in a “bring your own OS” environment? Or possibly several of these? Each of these environments has different security considerations.

Current Status

How far along the “security maturity” path is the organization now? If you are just getting started you have a lot more work to do than if you are looking to buff up an existing approach.

Other Factors

Team Size

It will be harder for a small team to implement a substantial security activity. For example, there is a certain amount of work required to use any tool effectively. With a larger team, the effort is more easily amortized.

Organizational Commitment

Is the organization really committed to application security? Is there a clear executive owner? Is there a budget? Will individuals be held responsible?

Stakeholder Interactions

Stakeholders, especially external customers, may make specific demands on your security plan. They are also likely to want to see an externally presentable view of the plan itself. Internal stakeholders may demand to understand implementation details, cost, and justifications.

User Community

Who are the actual users of your software? Is it “anyone on the internet”? Users of specific embedded devices? Corporate users behind a firewall?

Risk Tolerance

This is the tolerance of the software development organization to security risks in the deployed systems. If your software is part of a web-based service, as so many are today, you are essentially your own customer. If, on the other hand, you sell the software or a device it runs on, someone else is the customer and is directly in the line of attack. The risk then is the business risk of loss of reputation, penalties imposed by contracts, or even lawsuits.

Scaling and Tuning

“Right-Sizing” is a catchy title and it is important. At the end of the day there will be a cost for this effort and we want to ensure that we spend the right amount. But the total size of the effort is just the roll-up of different costs – the trick is to figure out the best way to spend the limited resources given what is at risk, the specific challenges of securing it, and other important factors.

With all of these variables there is no single cook-book solution. Focusing first on the notion of “scale,” we can identify three different “intensity levels” for an application security program. Scale is thus a tool to help get us in the right ballpark; beyond that we will need to make a series of decisions based on the considerations above.

The Low Intensity program can be followed like a kitchen recipe. It works because it focuses on the most common and obvious items. Beyond that, your team will need to bring some security expertise to the table to help guide program tuning.

The intensity discussions below build on each other with topics re-addressed only as necessary.

Low Intensity

  • Identification of security champion
  • Web-based training for developers & architects
  • Static code analysis with a focus on ‘high-value’ flaws such as buffer overruns in C++, missing input sanitization, and HTML/web exploits. This leverages the code-quality adjacency: you should probably be running these tools anyway.
  • Commercial security scanning. This will be sensible if the final result is a running application (as opposed to a software library). It is particularly important when the application is bundled with the operating environment, such as on an embedded device or a cloud-computing instance.
  • Targeted composition analysis. The goal here is to identify software components that are embedded at build time and are therefore invisible to a scanner. Depending on your technology stack, it may be possible to rely on tools already present, such as npm audit (a minimal build-gate sketch follows this list).
  • Vulnerability resolution policy. Document your timeline for addressing different levels of vulnerabilities.
  • Minimal internal “security” communication. A few pages on an internal wiki summarizing what is being done is probably sufficient.
  • Manual code inspection process
  • Reactive vulnerability management. The team is using tools to help identify ‘obvious’ vulnerabilities but is not aggressively working to root out every potential vulnerability.
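
The composition-analysis and vulnerability-resolution items above can be wired into the build as a simple gate. The following is a minimal sketch, assuming a Node.js project and the metadata.vulnerabilities summary that recent versions of npm audit --json emit; the blocking severities are placeholders you would align with your own resolution policy, and the JSON layout should be checked against the npm version you actually run.

```python
#!/usr/bin/env python3
"""Minimal sketch of a composition-analysis build gate built on `npm audit --json`.

Assumes a Node.js project; the JSON layout (metadata.vulnerabilities) is what
recent npm versions emit, but verify it against your own npm version.
"""
import json
import subprocess
import sys

# Hypothetical policy: which severities fail the build.
# Tie this to your vulnerability resolution policy.
BLOCKING_SEVERITIES = {"high", "critical"}

def main() -> int:
    # npm audit exits non-zero whenever it finds anything, so inspect the JSON
    # rather than the exit code (and don't use check=True).
    result = subprocess.run(
        ["npm", "audit", "--json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout or "{}")
    counts = report.get("metadata", {}).get("vulnerabilities", {})

    blocking = {sev: n for sev, n in counts.items()
                if sev in BLOCKING_SEVERITIES and n > 0}
    if blocking:
        print(f"Blocking vulnerabilities found: {blocking}")
        return 1

    print(f"Composition check passed: {counts}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Keeping the policy in one small script means the threshold can be reviewed and changed in one place rather than being scattered across build configurations.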

Medium Intensity

  • Web-based training for broad set of staff involved with product creation and deployment
  • Lab based training for developers
  • In-house security expertise with a reactive focus. These individuals help the development team assess vulnerabilities and develop remediation. They also keep an eye on the outside world for new vulnerabilities, new tools, etc.
  • More aggressive use of security scanners. For example using multiple commercial products, archiving scan results, scanning deployed versions of software, etc.
  • Integration of security tools into dev-ops pipelines
  • Formalize security issue tracking. For example by extending the defect tracking tool with appropriate attributes.
  • High-visibility vulnerability management process. What does your team do when a new branded vulnerability is on the front page of the Wall Street Journal and your customers are calling?
  • External vulnerability messaging policy and process
  • File-system-wide composition analysis. This leverages an adjacency: you should probably be doing this for open source license management anyway.
  • More fleshed out internal messaging
  • Aggressive vulnerability resolution. While it is certainly important to say what you do and do what you say, what your stakeholders are looking for is rapid turnaround.
  • Development of externally presentable security messaging
  • Dynamic testing
  • Fuzz testing (see the harness sketch after this list)
  • Container testing
  • Ad-hoc threat modeling
  • Other processes and policies as dictated by business needs
  • Occasional commercial assessment and/or penetration testing
  • Moving towards a proactive vulnerability management approach
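
To make the fuzz-testing item concrete, here is a minimal, standard-library-only harness sketch. parse_config is a hypothetical stand-in for whatever code in your product consumes untrusted input; in practice a coverage-guided fuzzer (AFL, libFuzzer, atheris, and the like) will find far more than this blind random loop, but the basic shape is the same: generate inputs, call the target, and treat any unexpected exception as a finding.

```python
#!/usr/bin/env python3
"""Minimal ad-hoc fuzz harness sketch (illustrative only)."""
import random

def parse_config(data: bytes) -> dict:
    # Placeholder target: any function that consumes untrusted input.
    text = data.decode("utf-8")  # may raise UnicodeDecodeError
    return dict(line.split("=", 1) for line in text.splitlines() if line)

def random_input(max_len: int = 256) -> bytes:
    length = random.randint(0, max_len)
    return bytes(random.randint(0, 255) for _ in range(length))

def fuzz(iterations: int = 10_000) -> None:
    expected = (UnicodeDecodeError, ValueError)  # failures we consider "handled"
    for i in range(iterations):
        data = random_input()
        try:
            parse_config(data)
        except expected:
            continue  # input rejected cleanly; that's fine
        except Exception as exc:
            # Anything else is a potential robustness bug worth filing.
            print(f"iteration {i}: unexpected {type(exc).__name__}: {exc!r}")
            print(f"input: {data!r}")

if __name__ == "__main__":
    fuzz()
```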

High Intensity

  • Internally developed training
  • Active internal technical evangelism; for example ‘capture the flag’ contests
  • Active internal non-technical outreach to management and other stakeholders
  • Participation in industry conferences and forums
  • Formal and consistent threat modeling starting early in design
  • Internal expertise for penetration testing
  • Frequent external assessment/penetration testing
  • Integration of security tools’ “policy” mechanisms into development processes
  • Formal mechanisms to mitigate both local and upstream supply chain attacks (one narrow example is sketched after this list)
  • The team is fully committed to a proactive vulnerability management approach
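
The supply chain item covers a lot of ground (dependency pinning, artifact signing, provenance for your own build outputs). As one narrow illustration, the sketch below verifies third-party build inputs against pinned SHA-256 digests. The paths and digests are placeholders; in practice the pin list would live in version control and change only through review.

```python
#!/usr/bin/env python3
"""Sketch of one small supply-chain control: pinning build inputs to known hashes.

The file name and digest below are placeholders for illustration only.
"""
import hashlib
import sys

# Hypothetical pin list: artifact path -> expected SHA-256 digest.
PINNED_ARTIFACTS = {
    "third_party/libwidget-1.4.2.tar.gz":
        "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_all() -> int:
    failures = 0
    for path, expected in PINNED_ARTIFACTS.items():
        actual = sha256_of(path)
        if actual != expected:
            print(f"MISMATCH {path}: expected {expected}, got {actual}")
            failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if verify_all() else 0)
```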

What’s Next?

The next step is to draft an outline of your own security program! If you are already doing some of this you might just be filling in a few cracks. A key driver will be understanding why you are even looking at creating or improving a security program:

  • It is simply the right thing to do. This may be coupled with some executive “encouragement.”
  • You are getting beaten up by external stakeholders
  • The marketplace you are in simply demands it. No point in waiting to get beaten up…

A lot of the line items have to do with Tools and Services and with Process and Policy. Those pages and others from the Details section will help you flesh out your program.