Black hat knowledge for white hat programmers [closed]

At the end of the day, nothing the 'black hats' know is inherently criminal knowledge; what matters is how the knowledge is applied. A deep understanding of any technology is valuable to a programmer: it's how we get the best out of a system. These days you can get by without knowing the depths, since more and more frameworks, libraries and components have been written using exactly that knowledge to save you from having to know everything yourself, but it's still good to dig down from time to time.


I'm coming in late on this, as I just heard about it on the podcast. However, I'll offer my opinion as someone who has worked on the security team of a software company.

We actually took developer education very seriously, and we'd give as many teams of developers as possible basic training in secure development. Thinking about security really does require a shift from the normal development mindset, so we'd try to get developers into a how-to-break-things frame of mind. One prop we used was one of those home safes with the digital keypad. We'd let developers examine it inside and out to try to come up with a way of breaking into it. (The solution was to put pressure on the handle while giving the safe a sharp bash on the top, which would cause the bolt to bounce on its spring in the solenoid.)

While we wouldn't teach them specific black-hat techniques, we would talk about the implementation errors that cause those vulnerabilities -- especially things they might not have encountered before, like integer overflows, or compilers optimising out function calls (such as a memset intended to clear a password from memory). We also published a monthly internal security newsletter that invited developers to spot security-related bugs in small code samples, and the results showed just how much they would miss.
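
To make the memset point concrete, here is a minimal C sketch; the function names and the volatile-pointer workaround are my illustration, not material from that training. Because the buffer is never read after the wipe, dead-store elimination entitles an optimising compiler to remove the plain memset; forcing the call through a volatile function pointer (or using platform helpers such as C11's memset_s or BSD/glibc's explicit_bzero) defeats that:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example: the secret would really come from a vault. */
    static int check_password(const char *attempt) {
        char secret[64] = "hunter2";
        int ok = (strcmp(attempt, secret) == 0);
        /* secret is never read again, so the compiler may legally
         * delete this call, leaving the password in memory. */
        memset(secret, 0, sizeof secret);
        return ok;
    }

    /* Workaround: a volatile function pointer the compiler cannot
     * prove is memset, so the store cannot be treated as dead. */
    static void *(*const volatile secure_memset)(void *, int, size_t) = memset;

    static int check_password_fixed(const char *attempt) {
        char secret[64] = "hunter2";
        int ok = (strcmp(attempt, secret) == 0);
        secure_memset(secret, 0, sizeof secret);  /* survives optimisation */
        return ok;
    }

    int main(void) {
        printf("%d %d\n", check_password("hunter2"),
                          check_password_fixed("wrong"));
        return 0;
    }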

We also tried to follow Microsoft's Security Development Lifecycle, which involved getting developers to talk through the architecture of their products, identify the assets, and work out the possible ways an attacker could get at those assets.

As for the security team, most of whom were former developers, understanding black-hat techniques was very important to us. One of our responsibilities was receiving security alerts from third parties, and knowing how difficult it would be for a black hat to exploit a given weakness was an important part of the triage and investigation processes. And yes, on occasion that has involved me stepping through a debugger to calculate the memory offsets of vulnerable routines and patching binary executables.

The real problem, though, is that a lot of this was beyond developers' abilities. Any reasonably sized company is going to have many developers who are good enough at writing code, but just do not have the security mindset. So my answer to your question is this: expecting all developers to have black-hat knowledge would be an unwelcome and detrimental burden, but somebody in your company should have that knowledge, whether it be a security audit and response team, or just senior developers.


I'm going to go out on a limb here and say something a bit heretical:

  • You really need to talk to the sysadmin/network folks who secure machines for a living. These folks deal with break-in attempts every day and are always on the lookout for potential exploits that could be used against them. For the most part, ignore the "motivation" aspect of how attackers think, as the days of "hacking for notoriety" are long gone; focus instead on methodology. A competent admin will be able to demonstrate this easily.

When you write a program, you are presenting what is (hopefully) a seamless, smooth interface to ${whatever-else-accepts-your-programs-I/O}. In this case it may be an end-user, or it may be another process on another machine, but it doesn't matter: ALWAYS assume that the "client" of your application is potentially hostile, whether it's a machine or a person.

Don't believe me? Try writing a small app that takes sales orders from salespeople, with a company rule you need to enforce through that app - a rule the salespeople are constantly trying to get around so they can make more money. This little exercise alone will demonstrate how a motivated attacker - in this case, the intended end-user - will actively search for ways to exploit flaws in the logic, or to game the system by other means. And these are trusted end-users!
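
To make that concrete, here's a rough C sketch of the server side of such an app; the 10% discount cap and the field names are invented for illustration. The essential move is that the server validates the request and recomputes the money itself, rather than trusting totals computed by the salesperson's client:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical order line: unit_price is looked up server-side,
     * never taken from the client. */
    struct order_line {
        double unit_price;
        int    quantity;
        double discount_pct;  /* what the salesperson asked for */
    };

    #define MAX_DISCOUNT_PCT 10.0  /* the company rule being gamed */

    static bool validate_line(const struct order_line *l) {
        if (l->quantity <= 0)
            return false;
        if (l->discount_pct < 0.0 || l->discount_pct > MAX_DISCOUNT_PCT)
            return false;  /* reject outright; don't silently clamp */
        return true;
    }

    static double line_total(const struct order_line *l) {
        /* Recompute on the server; ignore any client-sent total. */
        return l->unit_price * l->quantity * (1.0 - l->discount_pct / 100.0);
    }

    int main(void) {
        struct order_line l = { 19.99, 3, 25.0 };  /* over-discounted */
        if (!validate_line(&l))
            puts("rejected: discount exceeds policy");
        else
            printf("total: %.2f\n", line_total(&l));
        return 0;
    }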

Multiplayer online games are in a constant war against cheaters because the server software typically trusts the client; and in all cases, the client can and will be hacked, with players gaming the system as a result. Think about this - here we have people who are simply enjoying themselves, and they will go to extreme lengths to gain the upper hand in an activity that doesn't even involve making money.
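
The standard defence is an authoritative server that sanity-checks everything the client claims. Here's a minimal C sketch with an assumed speed limit; real games check far more than this, but the shape is the same:

    #include <math.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_SPEED 7.0  /* game units per second (assumed) */

    struct vec2 { double x, y; };

    /* The client reports a new position; the server checks it against
     * the fastest legal movement instead of trusting the client. */
    static bool plausible_move(struct vec2 from, struct vec2 to, double dt) {
        double dx = to.x - from.x, dy = to.y - from.y;
        double dist = sqrt(dx * dx + dy * dy);
        return dist <= MAX_SPEED * dt * 1.1;  /* 10% tolerance for jitter */
    }

    int main(void) {
        struct vec2 a = { 0.0, 0.0 }, b = { 100.0, 0.0 };
        /* A speed-hacked client claims to cover 100 units in 0.5 s. */
        puts(plausible_move(a, b, 0.5) ? "accepted" : "rejected: too fast");
        return 0;
    }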

Just imagine, then, the motivation of a professional bot herder who makes a living this way... writing malware so they can use other people's machines as revenue generators, renting their botnets out to the highest bidder for massive spam floods... yes, this really does happen.

Regardless of motivation, the point remains: your program can, and at some point will, come under attack. It's not enough to protect against buffer overflows, stack smashing, stack execution (attacker code is loaded onto the stack as data, and the return address is then pointed into it, so the "data" gets executed), data execution, cross-site scripting, privilege escalation, race conditions, or other "programmatic" attacks, although doing so certainly helps. In addition to your "standard" programmatic defenses, you'll also need to think in terms of trust, verification, identity, and credentials - in other words, dealing with whatever provides your program's input and whatever consumes its output. For example, how does one defend against DNS poisoning from a programmatic perspective? And some things can't be solved in code at all - getting your end-users not to hand their passwords to coworkers is one example.
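
For the first couple of items on that list, the root cause is usually an unchecked copy into a fixed-size buffer. A tiny C illustration (the buffer size and input are arbitrary):

    #include <stdio.h>
    #include <string.h>

    /* Classic stack-smashing setup: strcpy writes past the end of buf,
     * clobbering the saved return address. (Left uncalled below.) */
    static void vulnerable(const char *input) {
        char buf[16];
        strcpy(buf, input);  /* no bounds check */
        printf("%s\n", buf);
    }

    /* Bounded alternative: snprintf truncates instead of overflowing. */
    static void safer(const char *input) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", input);
        printf("%s\n", buf);
    }

    int main(void) {
        const char *attack = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA";
        safer(attack);
        (void)vulnerable;  /* defined for illustration, deliberately not run */
        return 0;
    }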

Incorporate those concepts into a methodology for security, rather than into a "technology". Security is a process, not a product. When you start thinking about what's "on the other side" of your program, and about the methods you can employ to mitigate those issues, it becomes much clearer what can go right - and what can go horribly wrong.


To a large extent. You need to think like a criminal, or you're not paranoid enough.


To what extent do you think an honest programmer needs to know the methods of malicious programmers?

You need to know more than they do.