To make matters worse, the frameworks can often leave you, and any who come after you, stranded with pretty code that's difficult to understand, revise, or extend.
As Mike Morton, another programmer, explains, "They carry you 90 percent of the way up the mountain in a sedan chair, but that's all. If you want to do the last 10 percent, you'll need to have thought ahead and brought oxygen and pitons."
Many of the worst security bugs appear when developers assume the client device will do the right thing. Code shipped to run in a browser, for example, can be modified by whoever controls that browser to do anything at all. If the developer doesn't double-check all of the data coming back, anything can go wrong.
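The defensive habit this suggests can be sketched in a few lines. Everything here is hypothetical -- the handler name, the form fields, the catalog -- but the pattern is the point: every value coming back from the client gets re-validated on the server, and anything with consequences (like a price) comes from the server's own records, never from the client.

```python
# Sketch: never trust values the client sends back, even values the
# server itself rendered into the page. All names here are hypothetical.

ALLOWED_QUANTITIES = range(1, 100)

def lookup_price(item: str):
    # Stand-in for a real catalog lookup on the server side.
    return {"widget": 5}.get(item)

def handle_order(form: dict) -> str:
    """Re-validate every client-supplied field before acting on it."""
    try:
        quantity = int(form.get("quantity", ""))
    except ValueError:
        return "error: quantity is not a number"
    if quantity not in ALLOWED_QUANTITIES:
        return "error: quantity out of range"
    # The price comes from the server's own catalog. A hidden form
    # field holding the price can be rewritten by any client.
    price = lookup_price(form.get("item", ""))
    if price is None:
        return "error: unknown item"
    return f"charged {price * quantity}"
```

The same checks belong on every entry point, because the browser-side validation that usually accompanies them is advisory at best.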
One of the simplest attacks relies on the fact that some programmers just pass along the client's data to the database, a process that works well until the client decides to send along SQL instead of a valid answer. If a website asks for a user's name and adds the name to a query, the attacker might type in the name x; DROP TABLE users;. The database dutifully assumes the name is x and then moves on to the next command, deleting the table filled with all of the users.
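The standard fix is to bind the client's value as a query parameter instead of splicing it into the SQL string. A minimal sketch using Python's built-in sqlite3 module shows the hostile name being stored as harmless data:

```python
import sqlite3

# Sketch of the fix: pass the client's value as a bound parameter so
# "x; DROP TABLE users;" is treated as data, not as a second statement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def add_user(name: str) -> None:
    # The ? placeholder lets the driver quote the value; the attacker's
    # semicolon never reaches the parser as a statement separator.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

add_user("x; DROP TABLE users;")
# The table survives, holding the hostile string as an ordinary row.
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Had the insert been built with string concatenation and executed as a script, the same input could have dropped the table.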
There are many other ways that clever people can abuse the trust of the server. Web polls are invitations to inject bias. Buffer overruns continue to be one of the simplest ways to corrupt software.
To make matters worse, severe security holes can arise when three or four seemingly benign holes are chained together. One programmer may let the client write a file assuming that the directory permissions will be sufficient to stop any wayward writing. Another may open up the permissions just to fix some random bug. Alone there's no trouble, but together, these coding decisions can hand over arbitrary access to the client.
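One way to keep a client-controlled write from wandering is to validate the resolved path yourself instead of leaning on directory permissions that a colleague may later loosen. This is a sketch, and the upload directory is hypothetical; the idea is to resolve the candidate path and confirm it still lives under the directory you meant to expose.

```python
import os

# Sketch: before honoring a client-supplied filename, resolve it and
# check it stays inside the directory you intended to expose, rather
# than trusting directory permissions alone. UPLOAD_DIR is hypothetical.
UPLOAD_DIR = "/srv/uploads"

def safe_path(filename: str) -> str:
    root = os.path.realpath(UPLOAD_DIR)
    # realpath collapses "../" tricks and symlinks before we compare.
    candidate = os.path.realpath(os.path.join(root, filename))
    if os.path.commonpath([candidate, root]) != root:
        raise ValueError("refusing to write outside the upload directory")
    return candidate
```

A filename like `../../etc/passwd` resolves to a path outside the root and is rejected, whatever the directory's permission bits happen to be that day.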
Sometimes too much security can lead paradoxically to gaping holes. Just a few days ago, I was told that the way to solve a problem with a particular piece of software was just to "chmod 777" the directory and everything inside it. Too much security ended up gumming up the works, leaving developers to loosen strictures just to keep processes running.
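The alternative to "chmod 777" is to work out the narrowest set of bits the process actually needs. A small sketch, with the mode chosen purely for illustration: the owner gets full access and the group gets read and traverse, while the world gets nothing.

```python
import os
import stat

# Sketch: instead of "chmod 777", grant only what the processes need.
# Here a directory gets owner read/write/traverse plus group
# read/traverse -- mode 0o750 -- rather than world-everything.
def restrict(path: str) -> None:
    os.chmod(path, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)  # 0o750
```

Working out the right mode takes a few minutes longer than 777, but it keeps the security model intact instead of abandoning it.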
Web forms are another battleground, one where a little trust can save you in the long run. Not only do bank-level security, long personal-data questionnaires, and email-address confirmations discourage people from participating even on gossip-related sites, but having to protect that data once it is culled and stored can be far more trouble than it's worth.
Because of this, many Web developers are looking to reduce security as much as possible, not only to make it easy for people to engage with their products but also to save them the trouble of defending more than the minimum amount of data necessary to set up an account.
My book, "Translucent Databases," describes a number of ways that databases can store less information while providing the same services. In some cases, the solutions will work while storing nothing readable.
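One flavor of the idea can be sketched with the standard library: instead of storing a sensitive value in the clear, the database keeps a one-way, salted hash of it. The service can still match incoming records against stored ones, but nothing readable sits on disk. The parameters below are illustrative, not a recommendation.

```python
import hashlib
import os

# Sketch of the "translucent" idea: store a salted one-way hash of
# sensitive data so the service can still match records without ever
# keeping the readable value. Cost parameters here are illustrative.
def opaque_token(secret: str, salt: bytes) -> bytes:
    # scrypt is deliberately expensive, frustrating brute-force reversal.
    return hashlib.scrypt(secret.encode(), salt=salt, n=2**14, r=8, p=1)

salt = os.urandom(16)
stored = opaque_token("alice@example.com", salt)   # what the DB keeps
# A later lookup recomputes the token and compares; the plaintext
# address never needs to be stored anywhere.
match = opaque_token("alice@example.com", salt) == stored
```

If the table leaks, the attacker gets tokens, not addresses -- which is often all the "defense" the data ever needed.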
Worried about security? Just add some cryptography. Want your databases to be backed up? Just push the automated replication button. Don't worry. The salesman said, "It just works."
Computer programmers are a lucky lot. After all, computer scientists keep creating wonderful libraries filled with endless options to fix what ails our code. The only problem is that the ease with which we can leverage someone else's work can also gloss over complex issues or, worse, introduce new pitfalls into our code.
Cryptography is a major source of weakness here, says John Viega, co-author of "24 Deadly Sins of Software Security: Programming Flaws and How to Fix Them." Far too many programmers assume they can link in the encryption library, push a button, and have iron-clad security.
But the reality is that many of these magic algorithms have subtle weaknesses, and avoiding these weaknesses requires learning more than what's in the Quick Start section of the manual. To make matters worse, simply knowing to look beyond the Quick Start section assumes a level of knowledge that goes beyond what's covered in the Quick Start section, which is likely why many programmers are entrusting the security of their code to the Quick Start section in the first place. As the philosophy professors say, "You can't know what you don't know."
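One concrete example of a weakness that never makes the Quick Start section: checking an authentication tag with `==`. An ordinary comparison stops at the first differing byte, which leaks timing information an attacker can exploit. A sketch using Python's standard hmac module (the key is illustrative):

```python
import hashlib
import hmac

# Sketch of a subtlety beyond the Quick Start: verifying a MAC with
# "==" stops at the first mismatched byte, leaking timing information.
# The key below is illustrative, not a real secret.
KEY = b"demo-key-not-for-production"

def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest takes the same time no matter where the first
    # mismatch falls, closing the timing side channel.
    return hmac.compare_digest(sign(message), tag)
```

Nothing in the happy path distinguishes the two comparisons; the difference only shows up when someone is measuring response times, which is exactly the kind of weakness the manual's opening pages never mention.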
Then again, making your own yogurt, slaughtering your own pigs, and writing your own libraries just because you think you know a better way to code can come back to haunt you.
"Grow-your-own cryptography is a welcome sight to attackers," Viega says, noting that even the experts make mistakes when trying to prevent others from finding and exploiting weaknesses in their systems.
So, whom do you trust: yourself or so-called experts who also make mistakes?
The answer falls in the realm of risk management. Many libraries don't need to be perfect, and a library off the shelf is still more likely to be better than the code you would write yourself. The library includes routines written and optimized by a group. They may make mistakes, but the larger process can eliminate many of them.
Programmers love to be able to access variables and tweak many parts of a piece of software, but most users can't begin to even imagine how to do it.
Take Android. The last time I installed a software package for my Android phone, I was prompted to approve five or six ways the software could access my information. Granted, the Android team has created a wonderfully fine-grained set of options that let me allow or disallow software based on whether it requires access to the camera, tracks my location, or any of a dozen other options. But placing the onus on users to customize functionality they do not fully understand can invite disaster in the form of inadvertent security holes and privacy violations, not to mention software that can prove too frustrating or confusing to be of use to its intended market.
The irony is that, despite being obsessed with feature checklists when making purchasing decisions, most users can't handle the breadth of features offered by any given piece of software. Too often, extra features clutter the experience, rendering the software difficult to navigate and use.
Some developers decide to avoid the trouble of too many features by offering exactly one solution. Gmail is famous for offering only a few options that the developers love. You don't have folders, but you can tag or label mail with words, a feature that developers argue is even more powerful.
This may be true, but if users don't like the idea, they will look for ways to work around these limitations -- an outcome that could translate into security vulnerabilities or the rise of unwanted competition. Searching for this happy medium between simple and feature-rich is an endless challenge that can prove costly.
One of the trickiest challenges for any company is determining how much to share with the people who use the software.
John Gilmore, co-founder of one of the earliest open source software companies, Cygnus Solutions, says the decision not to distribute code works against the integrity of that code and is one of the easiest ways to discourage innovation and, more important, to keep bugs from being uncovered and fixed.
"A practical result of opening your code is that people you've never heard of will contribute improvements to your software," Gilmore says. "They'll find bugs (and attempt to fix them); they'll add features; they'll improve the documentation. Even when their improvement has been amateurly done, a few minutes' reflection by you will often reveal a more harmonious way to accomplish a similar result."
The advantages run deeper. Often the code itself grows more modular and better structured as others recompile the program and move it to other platforms. Just opening up the code forces you to make the information more accessible, understandable, and thus better. As others make the small tweaks needed to share the code, they feed the results back into the code base.
Millions of open source projects have been launched, and only a tiny fraction ever attract more than a few people to help maintain, revise, or extend the code. In other words, W.P. Kinsella's "if you build it, they will come" doesn't always produce practical results.
While openness can make it possible for others to pitch in and, thus, improve your code, the mere fact that it's open won't do much unless there's another incentive for outside contributors to put in the work. Passions among open source proponents can blind some developers to the reality that openness alone doesn't prevent security holes, eliminate crashing, or make a pile of unfinished code inherently useful. People have other things to do, and an open pile of code must compete with hiking, family, bars, and paying jobs.
Opening up a project can also add new overhead for communications and documentation. A closed source project requires solid documentation for the users, but a good open source project comes with extensive documentation of the API and road maps for future development. This extra work pays off for large projects, but it can weigh down smaller ones.
Too often, code that works some of the time is thrown up on SourceForge in hopes that the magic elves will stop making shoes and rush to start up the compiler -- a decision that can derail a project's momentum before it truly gets started.
Opening up the project can also strip away financial support and encourage a kind of mob rule. Many open source companies try to keep some proprietary feature in their control; this gives them some leverage to get people to pay to support the core development team. The projects that rely more on volunteers than paid programmers often find that volunteers are unpredictable. While wide-open competitiveness and creativity can yield great results, some flee back to where structure, hierarchy, and authority support methodical development.
This story, "12 Programming Mistakes to Avoid" was originally published by InfoWorld.