Last month I wrote about culture change and getting software development teams on board with the need to create secure software. Once an organization has accomplished that task, the next challenge is:
“OK, now what do we tell the developers to do?”
Figuring out what to tell the developers to do is not as easy as telling them to “write secure code.” If they knew how to do that in the first place, the organization probably wouldn’t need a software security program. But almost all developers went through college computer science or software engineering programs that teach little about software security. They may have seen a little cryptography, or access control lists in an operating systems course, but nothing about the memory corruption or SQL injection errors that can make software vulnerable. So it’s up to the software security organization not only to motivate the developers, but also to teach them what to do.
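To make that gap concrete, here is a minimal sketch of a SQL injection bug and its parameterized fix, in Python against an in-memory SQLite database (the `users` table and function names are hypothetical, invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_vulnerable(name):
    # Building SQL by string concatenation lets attacker-controlled
    # input change the structure of the query itself.
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # A parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # [('alice',)] -- the classic payload leaks every row
print(find_user_safe(payload))        # [] -- no user is literally named that
```

The fix is mechanical, which is exactly the kind of specific, teachable guidance the rest of this article argues for.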
When I was at Microsoft, we initially faced the problem of “what to do” at scale during the Windows Security Push in 2002. We had to teach about 8,500 Windows developers what to do as they tried to find and remove security vulnerabilities in the code base that would become Windows Server 2003. We compressed our answers to that question into a four-hour training course that was taught to 1,000 people at a time by Michael Howard, Jason Garms and Chris Walker (concentrating on developers, program managers and testers, respectively).
For the most part, our training got it right. We had static analysis tools that would find code-level vulnerabilities; we knew which APIs were error-prone and dangerous, which coding constructs to avoid, and which areas of software design deserved careful review for security problems.
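To illustrate what “error-prone and dangerous API” guidance looks like in practice (the Windows-era examples were C runtime functions such as strcpy; here is an analogous pair in Python, chosen for brevity, with a made-up untrusted input):

```python
import ast

untrusted = "[1, 2, 3]"  # imagine this string arrived over the network

# Dangerous: eval executes whatever code the string contains, so
# an input like "__import__('os').system(...)" would run a command.
risky = eval(untrusted)

# Safer drop-in for parsing literals: rejects anything executable.
safe = ast.literal_eval(untrusted)
assert risky == safe == [1, 2, 3]

try:
    ast.literal_eval("__import__('os').getcwd()")
except ValueError:
    print("literal_eval rejected executable input")
```

Banning the dangerous form and prescribing the safe one is guidance a developer can follow without any security intuition.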
But we also made some mistakes.
Thinking like a hacker
In retrospect, I think the most significant mistakes were a result of our attempting to make the development teams “think like a hacker.” Michael, Jason and Chris (and the other members of the security team) were and are security experts, and it was natural for us to try to impart insights that developers would need if they were going to review the design of a system or component and intuitively say “that’s a place to look for vulnerabilities.”
The first example that I still remember was our approach to threat modeling. We told development teams to draw a data flow diagram of their system or component and then “think up threats” that could result in one of the STRIDE effects (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege). After thinking up a list of threats, the teams were supposed to come up with ways to mitigate the most important ones. If the team included someone with real security skills, the approach worked OK. If not, it just led to frustration because of the vague guidance we provided.
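Later STRIDE practice replaced “think up threats” with a mechanical rule: each kind of data flow diagram element gets checked against the threat categories that apply to it. A minimal sketch of that idea in Python, assuming an abbreviated version of the commonly published STRIDE-per-element mapping and a hypothetical three-element diagram:

```python
# STRIDE-per-element: each kind of data-flow-diagram element is
# mechanically paired with the threat categories that apply to it.
# This mapping is an abbreviated version of the published table.
STRIDE_PER_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"],
    "data_store": ["Tampering", "Information Disclosure",
                   "Denial of Service"],
    "data_flow": ["Tampering", "Information Disclosure",
                  "Denial of Service"],
}

def enumerate_threats(dfd_elements):
    """Yield (element, threat) pairs for every applicable category."""
    for name, kind in dfd_elements:
        for threat in STRIDE_PER_ELEMENT[kind]:
            yield name, threat

# Hypothetical diagram: a browser talking to a web process
# that reads a user database.
diagram = [("browser", "external_entity"),
           ("web app", "process"),
           ("user db", "data_store")]

for element, threat in enumerate_threats(diagram):
    print(f"{element}: consider {threat}")
```

The point is that the enumeration requires no intuition at all; the security judgment is spent on mitigating the results, not on dreaming them up.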
The second example involved asking test teams to build what amounted to code-level penetration tests that would exploit design weaknesses. We showed the teams some examples and told them to go off and find weaknesses on their own. The result was a few interesting bugs, nowhere near enough to justify the effort expended.
The lesson I learned from our early mistakes was that not every engineer can, or will, think like a hacker. If the engineers are motivated and you give them concrete guidance on what to look for, they can do an effective job. But just telling engineers to think like a hacker doesn’t cut it. David LeBlanc, one of my colleagues from the Microsoft days, says, “Developers want to do the right things, but you have to tell them what to do and be very specific.”
Today’s threat modeling tools and processes help engineers model their designs and look for specific security problems. And we ask testers to tailor and run tools such as fuzzers that find vulnerabilities automatically. So there’s been a lot of improvement since those early days. But I still hear some folks who are building software security teams say “we’re going to teach our engineers to think like hackers,” and I now tell them that there are more effective and productive ways to use the engineers’ time and talents.
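A fuzzer can be sketched in a few lines, which is part of why it beats asking every tester to hand-craft exploits. The sketch below mutates one byte of a valid input at a time and records unexpected crashes; `parse_record` and its planted length-check bug are hypothetical, invented for this example:

```python
import random

def parse_record(data: bytes) -> str:
    """Hypothetical parser with a planted bug: it trusts a length byte."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    # Bug: the declared length is never checked against the actual size,
    # so a lying length byte causes an unhandled IndexError.
    return bytes(data[1 + i] for i in range(length)).decode("latin-1")

def fuzz(parser, seed: bytes, iterations: int = 1000) -> list:
    """Mutate a valid seed input and record unexpected exceptions."""
    rng = random.Random(42)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(iterations):
        data = bytearray(seed)
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)  # flip one random byte
        try:
            parser(bytes(data))
        except ValueError:
            pass  # graceful rejection is the expected behavior
        except Exception as exc:
            crashes.append((bytes(data), exc))  # anything else is a bug
    return crashes

crashes = fuzz(parse_record, b"\x03abc")
print(f"found {len(crashes)} crashing inputs")
```

Real fuzzers (coverage-guided tools, for instance) are far smarter about choosing mutations, but the division of labor is the same: the tool supplies the hostile creativity, and the engineer supplies a valid seed and a definition of “crash.”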
This article is published as part of the IDG Contributor Network.