C++ Programming: 10 Anti-Hacker Tips
As a C++ programmer, you need to learn the things you should do in your C++ code to avoid writing programs that are vulnerable to hackers. This article also describes features that you can enable if your operating system supports them, such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP).
Don't make assumptions about user input
Programmer's tunnel vision is okay during the early development phase. At some point, however, the programmer (or, better yet, some other programmer who had nothing to do with the development of the code) needs to sit back and forget about the immediate problem. She needs to ask herself, "How will this program react to illegal input?"
Here are some of the rules for checking input:
Make no assumptions about the length of the input.
Don't accept more input than you have room for in your fixed-length buffers (or use variable-size buffers).
Check the range of every numerical value to make sure that it makes sense.
Check for and filter out special characters that may be used by a hacker to inject code.
Don't pass raw input on to another service, such as a database server.
Perform all the same checks on the values returned from remote services; the hacker may not be on the input side, but on the response side.
Handle failures gracefully
Your program should respond reasonably to failures that occur within the program. For example, if your call to a library function returns a nullptr, the program should detect this and do something reasonable.
Reasonable here is to be understood fairly liberally. The program does not need to sniff around to figure out exactly why the function didn't return a reasonable address. It could be that the request was for way too much memory due to unreasonable input. Or it could be that the constructor detected some type of illegal input.
It doesn't matter. The point is that the program should restore its state as best it can and set up for the next bit of input without crashing or corrupting existing data structures such as the heap.
Maintain a program log
Create and maintain runtime logs that allow someone to reconstruct what happened in the event of a security failure. (Actually, this is just as true in the event of any type of failure.) For example, you probably want to log every time someone signs into or out of your system.
You'll definitely want to know who was logged into your system when a security event occurred: this is the group that's most at risk of a security loss, and the first group to examine when looking for culprits. In addition, you'll want to log any system errors, which would include most exceptions.
A real-world production program contains a large number of calls that look something like the following:
log(DEBUG, "User %s entered legal password", sUser);
This is just an example. Every program will need some type of log function. Whether or not it's actually called log() is immaterial.
Follow a good development process
Every program should follow a well thought out, formal development process. This process should include at least the following steps:
Collect and document requirements, including security requirements.
Adhere to a coding standard.
Undergo unit testing.
Conduct formal acceptance tests that are based on the original requirements.
In addition, peer reviews should be conducted at key points to verify that the requirements, design, code, and test procedures are high quality and meet company standards.
Implement good version control
Version control is a strange thing. It's natural not to worry about version 1.1 when you're under the gun to get version 1.0 out the door and into the waiting users' outstretched hands. However, version control is an important topic that must be addressed early because it must be built into the program's initial design and not bolted on later.
One almost trivial aspect of version control is knowing which version of the program a user is using. When a user calls up and says, "It does this when I click on that," the help desk really needs to know which version the user is running. He could be describing a problem in his version that's already been fixed in the current version.
Authenticate users securely
User authentication should be straightforward: The user provides an account name and a password, and your program looks the account name up in a table and compares the passwords. If the passwords match, the user is authenticated. But when it comes to antihacking, nothing is that simple.
First, never store the passwords themselves in the database. This is called storing them in the clear and is considered very bad form. It's far too easy for a hacker to get his hands on the password file. Instead, save off a secure transform of the password.
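The flow looks roughly like the following sketch: generate a random salt per user, store only the salt plus the transformed password, and repeat the transform at login for comparison. Important caveat: `std::hash` is used here only to keep the example self-contained; it is NOT a cryptographic hash, and a real system must use a slow, salted password-hashing function such as Argon2 or bcrypt. All names (`makeSalt`, `userTable`, and so on) are illustrative.

```cpp
#include <functional>
#include <map>
#include <random>
#include <string>

struct Credential {
    std::string salt;   // random per-user salt, stored alongside the hash
    std::size_t hash;   // transform of salt + password; never the password
};

std::map<std::string, Credential> userTable;  // stand-in for the account database

std::string makeSalt() {
    // Production code needs a cryptographically secure RNG here.
    std::random_device rd;
    std::string salt;
    for (int i = 0; i < 16; ++i) salt += "0123456789abcdef"[rd() % 16];
    return salt;
}

void storeUser(const std::string& name, const std::string& password) {
    std::string salt = makeSalt();
    // NOT secure: std::hash is a placeholder for Argon2/bcrypt.
    userTable[name] = { salt, std::hash<std::string>{}(salt + password) };
}

bool authenticate(const std::string& name, const std::string& password) {
    auto it = userTable.find(name);
    if (it == userTable.end()) return false;
    return std::hash<std::string>{}(it->second.salt + password)
        == it->second.hash;
}
```

The salt ensures that two users with the same password get different stored values, which defeats precomputed lookup tables.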
Manage remote sessions
You can make certain assumptions when all of your application runs on a single computer. For one thing, once the user has authenticated himself, you don't need to worry about him being transformed into a different person. Applications that communicate with a remote server can't make this assumption: a hacker who is listening on the line can wait until the user authenticates himself and then hijack the session.
What can the security-minded programmer do to avoid this situation? You don't want to repeatedly ask the user for his password just to make sure that the connection hasn't been hijacked. The alternative solution is to establish and manage a session. You do this by having the server send the remote application a session cookie once the user has successfully authenticated himself.
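Server-side, the idea can be sketched as issuing a random token after authentication and checking it on every subsequent request. The names below are illustrative; a production token must come from a cryptographically secure random source, carry an expiry, and travel only over an encrypted (TLS) connection so a listener cannot simply read it off the wire.

```cpp
#include <random>
#include <string>
#include <unordered_map>

// Maps session token -> authenticated user. Stand-in for real session storage.
std::unordered_map<std::string, std::string> sessions;

// Called once the user has successfully authenticated.
std::string issueSession(const std::string& user) {
    // mt19937_64 is a placeholder; production code needs a CSPRNG.
    static std::mt19937_64 rng{std::random_device{}()};
    std::string token;
    for (int i = 0; i < 32; ++i) token += "0123456789abcdef"[rng() % 16];
    sessions[token] = user;
    return token;  // sent back to the remote application as the session cookie
}

// Called on every subsequent request carrying the cookie.
bool validateSession(const std::string& token, const std::string& user) {
    auto it = sessions.find(token);
    return it != sessions.end() && it->second == user;
}
```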
Obfuscate your code
Code obfuscation is the act of making the executable as difficult for a hacker to understand as possible.
The logic is simple. The easier it is for a hacker to understand how your code works, the easier it will be for the hacker to figure out vulnerabilities.
The single easiest step you can take is to make sure that you only ever distribute a Release version of your program, one that does not include debug symbol information. When you first create the project file, be sure to specify that both a Debug and a Release configuration should be created.
Never, ever, distribute versions of your application with symbol information included.
Sign your code with a digital certificate
Code signing works by generating a secure hash of the executable code and combining it with a certificate issued by a valid certificate authority. The process works like this: The company creating the program must first register itself with one of the certificate authorities.
Once the certificate authority is convinced that My Company is a valid software entity, it issues a certificate. This is a long number that anyone can use to verify that the holder of this certificate is the famous My Company of San Antonio.
Use secure encryption wherever necessary
Like any good warning, this admonition has several parts. First, "Use encryption wherever necessary." This tends to bring to mind thoughts of communicating bank account information over the Internet, but you should think more generally than that.
Data that's being communicated, whether over the Internet or over some smaller range, is known generally as Data in Motion. Data in Motion should be encrypted unless it would be of no use to a hacker.
Data stored on the disk is known as Data at Rest. This data should also be encrypted if there is a chance of the disk being lost, stolen, or copied. Businesses routinely encrypt the hard disks on their company laptops in case a laptop gets stolen at the security scanner in the airport or left in a taxi somewhere.
Small portable storage devices such as thumb drives are especially susceptible to being lost — data on these devices should be encrypted.