If you look around, you can find a lot of advice on managing risk and keeping code quality high in software projects. But there's a catch for some of us: the vast majority of this advice is aimed at developers, managers, and testers working in medium or large organizations. When your staff consists of one-third management and architects, one-third developers, and one-third testers, you can make some assumptions about workflow and division of labor. But those assumptions won't hold true when you're working for a very small company. In the ultimate case - where you are the entire software department of a small company, or even a one-man micro-ISV - you're forced to handle your own risks and quality. If you're in that situation, read on for some ideas.
Raising the Quality Bar
So you're cranking along, in the zone, writing great code. Yup, we all do that. Alas, no matter how great a coder you are, the reality is that you probably don't write absolutely perfect code. If you plan to ship this code to someone else to use (say, a client or a customer), you need to figure out how to keep the code quality high. What can you do that won't distract you too much from the coding focus?
No matter how confident you are that you know what you're doing, resist the temptation to dive right in and start writing code without thinking. Any serious project should have a list of requirements, and every line of code should contribute to fulfilling those requirements. If you don't know what your application is supposed to do, how will you know when you're done? If your IDE lets you maintain a to-do list, that's a good place to do simple requirements tracking. Alternatively, you may want to use a task management or bug-tracking application for this purpose.
Use the testing tools built into your IDE. These days, just about every IDE will let you create and execute unit tests with minimal effort as you're coding. Automated unit tests won't catch everything, but if you get in the habit of writing them as you go along you'll end up with an excellent "smoke testing" suite: the tests that tell you if the code is absolutely on fire with smoke pouring out of the windows. This is a good starting point for more serious quality control.
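For instance, a minimal smoke test using Python's built-in unittest module might look like the sketch below. The parse_price function is a made-up example standing in for your own code; the point is just how little ceremony a check like this requires.

```python
import unittest

# Hypothetical function under test -- a stand-in for your real code.
def parse_price(text):
    """Convert a price string like '$1,234.50' to a float."""
    return float(text.replace("$", "").replace(",", ""))

class SmokeTests(unittest.TestCase):
    """Quick 'is anything on fire?' checks, run on every build."""

    def test_simple_price(self):
        self.assertEqual(parse_price("$19.99"), 19.99)

    def test_thousands_separator(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

if __name__ == "__main__":
    unittest.main()
```

Run this after every change; a failure here means something basic broke, which is exactly what a smoke suite is for.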
Speaking of bug-tracking, do it. No excuses. There are several good free bug-tracking systems available these days, others that offer free licenses for a single user, and a plethora of commercial alternatives. When you discover something wrong with your code, track it. When you think of something that might go wrong with future code, or a feature that still needs to be implemented, track that too. The more things you get down in a database somewhere, the fewer distractions you will have cluttering your mind. This means better code and less chance of things falling through the cracks.
Remember that unit-testing doesn't catch everything. Test-driven development advocates will tell you otherwise, but I'm skeptical that even great tests will catch all possible emergent and interactive behavior. Set aside some time to just play with the application. Bang on it. Try absurd input. Try to break it. Remember that your users won't read the manual or put in perfect data every time. During testing, you shouldn't either.
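One way to "try absurd input" systematically is a small randomized loop that throws garbage at a function and only tolerates graceful rejection. This is a sketch, not a real fuzzer, and validate_age is a hypothetical stand-in for one of your own input-handling routines:

```python
import random
import string

# Hypothetical validator under test: should return an int in range,
# or raise ValueError -- any other exception is a bug.
def validate_age(text):
    age = int(text)  # raises ValueError on non-numeric garbage
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

random.seed(42)  # reproducible "absurd input"
for _ in range(1000):
    # Random length, random printable characters -- the kind of thing
    # a real user might paste in by accident.
    garbage = "".join(
        random.choices(string.printable, k=random.randint(0, 20))
    )
    try:
        validate_age(garbage)
    except ValueError:
        pass  # rejecting bad input cleanly is the desired behavior
    # Any other exception propagates and ends the run -- that's a find.

print("survived 1000 rounds of garbage input")
```

Even a crude loop like this will surface crashes that hand-picked test cases miss, because it has none of your assumptions about what input "should" look like.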
If at all possible, get someone else to test your application. This is the single best thing you can do as a lone wolf: other testers won't have your blind spots. Perhaps you can trade testing time with another developer in another small company, or convince your significant other that a night on the town is worth a few hours of pretending to appreciate your development skills. If you simply must do your own testing, try to set code aside for a few days between writing it and testing it, so you'll have a chance to forget exactly how it works. That increases your chance of doing something stupid and breaking things. (And remember, when you're in testing mode, breaking things is good.)
Widening Your Focus
Keeping your code quality high is important - but if that's all you're worried about, you're missing the big picture. There are many things that can get in the way between a great idea and shipping software. If you're a lone wolf, any one of these risks, from a tool vendor going out of business to an inability to solve some technical problem, can completely destroy your business. Dealing with these risks is the job of risk management.
If you've never had to do any risk management, your first impulse might be to just worry (or hide). But in fact, you can be much more systematic than that. It's convenient to break risk management up into two steps: risk assessment and risk control.
The first step in risk management is simply to understand the risks to your project. Probably the easiest way to do this is to just brainstorm a list of all the things that could go badly wrong between now and the delivery of working software to your customer. Consider four types of serious risks:
- Those that could destroy the project entirely.
- Those that could have a substantial negative impact on cost.
- Those that could have a substantial negative impact on schedule.
- Those that could have a substantial negative impact on quality.
I recommend developing your initial list by brainstorming - that is, don't censor yourself, but write down every risk that pops into your mind. You'll have a chance to rate the risks soon enough, but you can't rate a risk that you don't think of. On the other hand, try to be at least moderately realistic. It's probably not worth including asteroid strikes and alien invasions on your list of risks for a software project.
When you've come up with as complete a list of risks as possible, you may feel overwhelmed. It's not unusual to have twenty, thirty, or fifty real threats to the successful completion of a modestly-sized software project, from vendors going out of business to hardware failure to inability to code critical sections with sufficient performance. Don't panic. It's not likely that all of the risks are equally important, so the next step is to come up with a numeric ranking of risks so you know what you should really worry about (and control).
My own preferred method for ranking is to assign two numbers to each risk: a probability and a cost. You can express the cost as the amount of money you'll lose if the risk comes to pass, or as the amount of schedule slip you'll be forced to accept, so long as you use the same units for every risk. Will these numbers be perfectly accurate? Probably not, but if you're an experienced developer it's likely that they'll be relatively accurate, and that's all that matters. That is, a risk that you rate at 10% will be roughly twice as likely to happen as one that you rate at 5%, and a 4-week-slip risk will be roughly twice as drastic as a 2-week-slip risk.
Now multiply the probability by the cost to get the expected impact of each risk, and sort the risks by impact. Focus on the 3, or 5, or 10 risks at the top of the list. Figure 1 shows what this technique looks like in Excel.
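If you'd rather script the ranking than build a spreadsheet, the calculation is a few lines of Python. The risks and numbers below are made-up placeholders, not a real project's list:

```python
# Each risk: (description, probability, cost in weeks of schedule slip).
# These entries are illustrative -- substitute your own brainstormed list.
risks = [
    ("Component vendor goes out of business", 0.05, 8),
    ("Can't hit performance target in core module", 0.30, 3),
    ("Hard disk failure loses work", 0.10, 1),
    ("Key requirement misunderstood", 0.20, 4),
]

# Expected impact = probability * cost; sort worst-first.
ranked = sorted(
    ((prob * cost, desc) for desc, prob, cost in risks),
    reverse=True,
)

for impact, desc in ranked:
    print(f"{impact:4.2f} weeks  {desc}")
```

With these sample numbers, the 30%-likely performance risk (0.90 expected weeks of slip) outranks the scarier-sounding but unlikely vendor failure (0.40), which is exactly the kind of reordering that makes the exercise worthwhile.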