Before you start writing code, you should, at a minimum, nail down some sort of feature or task set.
Even on one-person projects, you need to identify how you are going to work - that way you have a baseline to measure against, and you will be able to see if your prescribed method for building software works. If you do not have a process (however informal), you cannot proclaim much about the success or failure of your project from a management perspective.
Write a short statement describing what your software will do, and why - at least at this stage of the game - to help keep you on track.
Another good starting technique is to brainstorm with mind maps. Start with your central idea, branch off into other relevant areas, and then put down important notes about each of those areas.
Using a simple program like MS Excel, you can fairly easily track projects of up to 5 to 10 people. Do not forget the purpose of scheduling - it is not to have a schedule, but to help ensure that your software is delivered on time, on budget, and with the desired level of functionality and quality. Again, if the time spent maintaining the schedule is more than the time being saved by having the schedule, then guess what - you are wasting your time.
Feature / Task Name | Original Estimate (days) | Current Estimate (days) | Elapsed Time (days) | Assigned To |
Holiday | 10 | 10 | 3 | Andrew |
Add new promotions | 0.5 | 0.75 | 0.5 | Andrew |
Debugging / QA Defects | 3 | 3 | 1.25 | Roy |
Deployment | 1 | 1 | 0 | Ryan |
Quite simply: which best practices will you use to write your software, test it, deploy it, maintain it and document it? Below is a brief outline of three well-known software methodologies. I have a strong preference for agile methods, mostly because of their strong emphasis on quality assurance and automation.
One process leads into another. Figure out all requirements, develop the complete architecture, design the entire system, implement the entire system, test the entire system and deploy the entire system.
The primary drawback is that talking about a system and actually experiencing it are two different things. It is very likely that during implementation and testing you will uncover new requirements, conflicting requirements and just plain invalid ones.
This process works well in other engineering domains because the cost of manufacturing is much greater than the cost of design (i.e. constructing a building takes a lot more time and effort than architecting it). Whereas software is manufactured at the push of a button (ctrl-c, ctrl-v).
Just like in plain English, incremental refers to building small subsets of the system (i.e. increments). Then you iterate through the entire lifecycle again to piece those increments together (the iterative process). The best-known I/I process is RUP (Rational Unified Process). Incremental, iterative methodologies are like small, somewhat overlapping, waterfall methodologies.
Agile methodologies are best described by the following high-level values, with each agile implementation having its own rules and processes.
Requirement changes are an unfortunate reality of software development. This is because the best solution for a particular problem isn't always immediately obvious. Delivering a rough version early in the development cycle allows for changes to be identified and implemented without the risk of delaying the project. The first iteration of the site will be delivered to FHC within three weeks. Additional iterations will occur every two weeks.
Not only is interaction encouraged, but it is mandated. Keeping in constant dialogue ensures that everything remains on track.
Extensive options and preferences are ways of avoiding making tough design decisions. We deliver simple and pragmatic solutions.
Adding or modifying features always carries the risk of breaking existing features. Automated testing alleviates this problem by having the computer routinely run a series of tests, written by us, to ensure that everything still works as intended. This safety net allows us to make changes without the fear of unknowingly breaking something.
In the simplest terms, an architecture should be your highest-level breakdown of a system. Usually architecture artefacts convey decisions that are hard to change. Architecture has little to do with programming, whereas designs can (and typically do) involve coding.
Three high-level approaches include breadth first, depth first or a breadth-depth mixture.
Rough in the entire application, then fill in the detail gaps later. You can quickly see if your architecture is missing anything and you can demo the overall flow of the system early. But, your customers may mistake the shell as the entire application, and may expect the final system earlier than planned.
Attempt to implement one piece of the application at a time. You will fully understand the application at a component level. But, you will be missing the big picture.
To users, the user interface is the application, therefore it usually makes sense to build it breadth first - much like a prototype or tracer bullet. Then, for each UI element, you can go in depth and fully build that component.
Microsoft does it: Alpha, Beta 1, Beta 2, Release Candidate, Release. Extreme Programming (under the Agile branch) likes to deliver a series of small but complete systems (i.e. not full of bugs on the assumption that QA will clean them up later). Release early, release often, and always promote high quality. Agile also pushes fixed iterations of a few weeks (to make metrics tracking easy), and software releases are comprised of one or more iterations (usually 3 or 4).
The mere fact that you have a source code control system in place is great. You should be familiar with the concepts of commit, update, add and delete. Your repository acts like a backup of your code over time, but little more.
You understand that your product will have multiple development branches, as well as the latest and greatest trunk. You should be familiar with trunk, branch, merge and label.
You have triggers and sensors listening to your repository for changes and reacting with several processes like auto-deployment, automated testing, applying version numbers and tracking software metrics.
I have used several repositories including: a shared drive, CVS, Visual SourceSafe, Rational ClearCase and Subversion. Hands down, I prefer Subversion (aka SVN) in combination with the TortoiseSVN client - both are free and both are excellent, even for managing everyday documents.
Anything that would make you cry if you lost it
A year from now, you should be able to rebuild today's version (let's say version 1.05) using the artefacts stored in your SCC.
Assertions within your code can be used to verify the state of your program, and are most often used to check pre- and post-conditions.
Debug.Assert(anInput != null, "The input should not be null");
Assertions represent internal issues and should never be used as client-facing errors. In general, avoid involved or tricky code - no brownie points for being overly clever in the real world - we prefer code that works, and can be maintained.
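The same pre- and post-condition idea can be sketched in Python with `assert` statements (the `apply_discount` function and its rules are a made-up illustration, not from the original text):

```python
def apply_discount(price, discount):
    """Return the price after applying a fractional discount."""
    # Preconditions: verify the program state before doing any work.
    assert price >= 0, "price should not be negative"
    assert 0 <= discount <= 1, "discount should be a fraction between 0 and 1"

    result = price * (1 - discount)

    # Postcondition: a discount can never increase the price.
    assert result <= price, "discounted price should not exceed the original"
    return result

print(apply_discount(100.0, 0.25))  # 75.0
```

As with Debug.Assert above, these checks guard internal invariants during development; they are not a substitute for user-facing error handling.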
Exceptions are another method of handling problems in your software. Exceptions represent exceptional situations that cannot be avoided. Examples include running out of memory, or being unable to find a necessary external resource. Exceptions are usually thrown within the core of your application and consumed by the user interface.
Exceptions involve the following components: a throw statement that raises the exception, a try/catch block that consumes it, and (optionally) a finally block for cleanup.
There are several default exceptions available. Only use custom exceptions if (a) no existing class represents your unique situation, or (b) you want to pass additional information with the exception (known as nesting an exception - maybe you experienced an IO exception, but you wanted to add a note that the network was down, hence the file is not available).
Do not use exceptions as an alternative to other control-of-flow statements. For example, do not throw an exception to signal the end of a process, but do throw an exception if you were unable to write a file to disk.
Even if only for debugging purposes, pass along a simple message outlining what happened.
This is the process of wrapping *lower* level, specific exceptions inside your *higher* level exceptions. By chaining exceptions you will not lose important debugging information like the stack trace.
Built-in examples include InvalidOperationException, ArgumentException and NullReferenceException.
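The nesting/chaining idea above can be sketched in Python, where `raise ... from` preserves the original stack trace (the `ConfigError` class and file name are hypothetical):

```python
class ConfigError(Exception):
    """Hypothetical custom exception carrying higher-level context."""

def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        # Chain the low-level IO error so its stack trace is preserved,
        # while adding the higher-level note about what we were doing.
        raise ConfigError(f"could not load configuration from {path}") from e

try:
    load_config("does-not-exist.cfg")
except ConfigError as e:
    print(type(e.__cause__).__name__)  # FileNotFoundError
```

The original low-level exception stays reachable through `__cause__`, so nothing is lost when the UI consumes only the high-level error.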
Noise comments describe the obvious in a painfully duplicated manner. Comments should not describe the language, but the reasoning behind the code. The following are very, very bad examples of comments - please do not write comments like this.
// if the counter is greater than 1
if (x > 1)

OR

// Telephone Number property
public string TelephoneNumber { get { return _telephoneNumber; } }
Overly formal comments repeat a lot of information that is already readily available. Please minimize comments that include items like the file name (just look in your editor to see what file you are editing), the history (this information is available in your source code control system), or the method name and inputs (they are right there; if there is ambiguity, rename them). This information is not only a duplicate, but it is also not as reliable as the actual source of the information (i.e. your source code control is much better at tracking versions than you are).
Placeholder comments can be used to signal well-known items to be addressed at a later date. Your IDE may give you programming support to locate these placeholders, but do not overly rely on them. Examples include:
//HACK: should be in English and French
//TODO: check inputs for out-of-bounds values
//POSTPONE: do not implement until the data access tier is complete
Summary comments provide a high level view of the class / method and intent comments describe the why of your code, not the how.
// if no filename was specified, then derive it from the source url
if (filename == "" || filename == null)
{
    filename = sourceUrl + "default.txt";
}
Great comments can go bad because they are not tightly coupled to the code and can easily get out of date. Your first goal should be to strive for self-documenting code.
/* GOOD VARIABLE NAME */
$numberOfSearchResults = count($allResults);

/* BAD VARIABLE NAME REQUIRING STALE COMMENT */
// the number of search result entries
$num = count($allResults);
Your second goal should be to reduce noise and duplicate summary comments. And finally, you should work hard to maintain intent comments and relevant summary comments.
Unit Testing validates a software system at its most basic level and in preferred isolation. The xUnit automated testing framework is implemented for almost every language out there. Keys to unit testing are that (a) the tests are short, (b) the tests run quickly, (c) the tests are run often, and (d) the tests act as documentation for the code they test.
Integration Testing involves multiple tiers of your application (for example, how components communicate between one another). This type of testing can be automated using the xUnit framework from above for non UI integration tests. UI based integration tests can be achieved using xUnit* derivatives, or replay tools like Selenium, and Watir.
System Testing is end-to-end testing in a production like environment. A product like Selenium can be a great help.
Beta Testing is manual testing whereby you give your software to a subset of your desired audience and have them try out your software.
Acceptance Testing is the process whereby your application is verified from your customer's standpoint. Tools like FIT and Selenium can help automate this process.
Developers should be involved in all of the types of testing above. At the very least, your project should have a 100% passing unit test policy - where the system is not considered stable unless all of the automated tests are passing.
Within a .Net environment an available testing framework is nUnit.
Developers will use the graphical interface during coding. Simply select your application .dll (or .exe) and choose which tests you would like to run.
The command-line utility is used primarily during automated activities (e.g. in combination with NAnt). Simply state which .dll (or .exe) you wish to test after the nunit-console.exe command:
c:\temp> nunit-console.exe BankMachine.dll
When you write assertions, be sure to put the expected value first.
[Test]
public void AbsoluteValue()
{
    Assert.AreEqual(4, Math.Abs(-4));
    Assert.AreEqual(3, Math.Abs(3));
}
Testing exceptions is easy, simply add the ExpectedException attribute.
[Test, ExpectedException(typeof(FileNotFoundException))]
public void OpenMissingFile()
{
    File.OpenRead("filedoesnotexist.txt");
}
Here we write our tests to describe behaviour - for example, I want addTwoPlusTwo() to return 4, and isPositive(-2) to return false - even though addTwoPlusTwo() and isPositive() do not exist yet.
Ensure your tests fail (because you have not implemented the code yet). Then get them to pass.
Okay, the desired functionality is there, but you might have unnecessary duplication, or other code smells. Fix them.
Well, after you have made the code pretty, you need to ensure that you did not break anything. Repeat these steps for all features / tasks.
Or, quite simply red-green-refactor.
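The red-green-refactor loop can be sketched with the text's own isPositive() example, here using Python's unittest as a stand-in for nUnit:

```python
import unittest

# Red: write the tests first, describing behaviour that does not exist yet.
class TestIsPositive(unittest.TestCase):
    def test_negative_number_is_not_positive(self):
        self.assertFalse(is_positive(-2))

    def test_positive_number_is_positive(self):
        self.assertTrue(is_positive(3))

# Green: the simplest implementation that makes the tests pass.
def is_positive(n):
    return n > 0

# Refactor: clean up any duplication or smells, then re-run to stay green.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIsPositive)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Running the suite after every change is what makes the final step safe: if the refactoring broke anything, the bar goes red immediately.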
The effects of TDD include
Refactoring is a process of improving your code without (really) changing its behaviour. For example, you can optimize an algorithm, group utility functions together, improve variable names, even change your HTML to be CSS based.
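A tiny, hypothetical example of behaviour-preserving refactoring - better names and a more idiomatic shape, with the result unchanged:

```python
# Before: unclear names, but working code.
def f(xs):
    n = 0
    for x in xs:
        if x > 0:
            n += 1
    return n

# After: same behaviour, clearer names, idiomatic implementation.
def count_positive_numbers(values):
    return sum(1 for value in values if value > 0)

# The refactoring is safe because both versions agree on every input.
assert f([1, -2, 3]) == count_positive_numbers([1, -2, 3]) == 2
```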
Passive code generators dump code into your project and then bugger off - the generator never modifies or updates the code. For example, code generation wizards (answer a few questions and out pops some code). After the initial generation it is up to you to maintain the code.
Active code generators maintain the code they generate. You can tweak the generator input and regenerate your code (but never edit the generated code directly).
There are several styles of active code generators including:
Code generators are useful for database access, and UI code, to create API and other code related documentation, as well as write functions for web services.
When you are deciding on code generation tools, you should look for:
Simply understand what could go wrong with your project. Once you have brainstormed your list (e.g. software is buggy, UI is too complex, application is slow), assign a probability that the risk event will occur, and a cost (in weeks, for example) to deal with the risk.
Risk | Probability | Cost (Weeks) | Impact (Prob. x Cost) |
Too busy to work on project | 50% | 6 | 3 |
Hard-drive failure | 10% | 12 | 1.2 |
Software too buggy to ship | 20% | 3 | 0.6 |
Building burns down | 1% | 20 | 0.2 |
Multiplying the probability of occurrence by the cost gives the risk impact; the higher this number, the more important it is to address the risk.
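Using the numbers from the table above, the impact calculation and top-risks ranking might be sketched as:

```python
# Risks taken from the table above: (name, probability, cost in weeks).
risks = [
    ("Too busy to work on project", 0.50, 6),
    ("Hard-drive failure", 0.10, 12),
    ("Software too buggy to ship", 0.20, 3),
    ("Building burns down", 0.01, 20),
]

# Impact = probability x cost; sort descending to build the top-risks list.
ranked = sorted(
    ((name, prob * cost) for name, prob, cost in risks),
    key=lambda item: item[1],
    reverse=True,
)

for name, impact in ranked:
    print(f"{name}: {impact:.1f}")
```

Note that a likely-but-cheap risk (too busy, impact 3.0) outranks a catastrophic-but-rare one (building burns down, impact 0.2), which is exactly the point of the multiplication.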
Risk | Management Steps (to mitigate risks) |
Hard-drive failure | Hard drive data will be backed up nightly. Central repository will ensure programmers' code is stored in one location |
Software too buggy to ship | Test-driven development. Continuous Integration. 100% Green bar rule. |
Maintain the top 5 risks list. With this list, you can easily see which risks are growing in importance and which are regressing.
Just like in the medical community, bug fixes must be prioritized based on the impact the fix will have. The primary elements to consider include: how long it will take to fix the bug, the impact when the bug occurs, and how often the bug occurs.
Bug | Effort to Fix | Impact | Frequency |
UI does not refresh username on change | 0.5 days | Medium | Low |
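One possible way to turn those three elements into a sortable priority score (the numeric scale and formula are illustrative, not from the text):

```python
# Map the qualitative ratings to numbers (illustrative scale).
SCALE = {"Low": 1, "Medium": 2, "High": 3}

def priority(effort_days, impact, frequency):
    # Higher impact and frequency raise priority; more effort to fix lowers it.
    return SCALE[impact] * SCALE[frequency] / effort_days

# The example bug from the table above.
print(priority(0.5, "Medium", "Low"))  # 4.0
```

Any monotonic combination of the three elements would do; the point is to make triage decisions comparable rather than ad hoc.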
When bugs are resolved, you have a few choices:
The entire bug process of reporting, triage, fixing, retesting and closing can be very political. Bug tracking will only work if your ultimate goal is to ship high-quality code - any other reasons and your tracking system will probably fail.
When reporting bugs, at a minimum include (1) what happened, (2) what you think should have happened, and (3) steps to reproduce the issue.
Store all issues in the bug tracking database because (a) managers should have a single place to view work items and monitor workload, and (b) developers can use the bug tracking system to prioritize their work.
Logging is about following your application during its use, without you being there, so that if something goes wrong you know what really happened. Logging during development can help when your team is testing the system internally. The log can help clarify the order of events (especially for asynchronous inputs like external feeds or tickers).
The basic goal of logging during development is to get information that is not available from your debugger. Once your application has shipped, logging becomes crucial for chasing down client issues. An end user's goal is not to test the system (but rather to use it), so they usually do not write very good bug reports. It is much easier to ask your clients to submit a system log than to try and re-trace their steps from memory.
It should be easy to turn logging on (when you want to watch for bugs) and off (during normal use to not affect performance).
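With Python's standard logging module, for example, turning logging on and off is a one-line level change (the logger name and messages are illustrative):

```python
import logging

logger = logging.getLogger("myapp")
logging.basicConfig(format="%(levelname)s %(name)s: %(message)s")

# Off in normal use: only warnings and worse are emitted.
logger.setLevel(logging.WARNING)
logger.debug("fetching external feed")   # suppressed

# On when chasing down a client issue.
logger.setLevel(logging.DEBUG)
logger.debug("fetching external feed")   # now emitted
```

Because suppressed messages are filtered before any formatting work is done, leaving the calls in shipped code costs very little when logging is off.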
You should consider logging:
Each week, all team members should report:
The Peter Principle (from Dr. Laurence J. Peter) states that every employee tends to rise to his or her level of incompetence. You typically get promoted when a job is well done, but rarely will you get promoted for doing a poor job. So, when you stop getting promoted, it is because you are no longer doing a good job.
Can you create a shrink-wrap-ready version of the software? To answer that question is to understand your build process. To answer it by stating "click here" is to have mastered it.
A build process promotes simplicity. You will be using a tool that not only executes all of the steps to create your application, but also acts as a definitive document outlining those steps. And, because it is automated, your build process is very reliable.
Here is a very generic idea of what your build process should involve (and should automate):
Integrate early and often. Continuous Integration (CI) is the process of automatically integrating your software components whenever one of them changes (for example, when a developer checks in code). CI is a process that (1) listens for changes to your source code repository, (2) executes a build if changes are noticed (using your fancy build process from above), and (3) maintains the build results.
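Those three CI steps can be sketched as a polling loop; `repository_revision()` and `run_build()` are hypothetical stand-ins for your repository query and build tool:

```python
def repository_revision():
    """Hypothetical stand-in for asking the repository for its latest revision."""
    return 42

def run_build():
    """Hypothetical stand-in for invoking the automated build (compile + tests)."""
    return True  # True means the build passed

def continuous_integration(poll_seconds=60, iterations=1):
    last_seen = None
    results = []
    for _ in range(iterations):
        revision = repository_revision()        # (1) listen for changes
        if revision != last_seen:
            passed = run_build()                # (2) build on change
            results.append((revision, passed))  # (3) maintain the results
            last_seen = revision
        # A real loop would sleep poll_seconds between iterations.
    return results

print(continuous_integration())  # [(42, True)]
```

Real CI servers replace the polling loop with repository hooks or triggers, but the listen / build / record cycle is the same.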
A lot of issues can arise when writing software for someone else, so it is important to be properly protected with a good software contract. Issues that arise with clients include:
Your contract should include:
By default, you get copyright. But you can put your work in the public domain if you want - you will be giving up ownership completely, and others can take it, use it and profit from it without giving you credit or compensation.
At the time this book was published, there were about 50 different OSI-approved Open Source licenses.
Known as an infectious / viral licence. If you use GPL, then your code must also be GPL.
Known as the Berkeley Software Distribution license. It is not viral. The primary tenets include:
There is more latitude than GPL, but you are able to take from the community without giving back. For example, Microsoft uses BSD licensed code in some of their network software.
Written by lawyers and very hard to read. GPL forbids combining GPL with proprietary code whereas MPL expressly allows it.
Microsoft's answer to open source (so not too popular). Parts of an application's source code can be opened up for specific purposes (for example, debugging).
Things to think about when you draft your own license
Do not write your own installer
When choosing an installer, consider cost, functionality (can it do all the steps for your install, or just some of them), customization, Windows Installer support (a built-in API for logging, uninstalling, cleanup and install-on-demand), delivery mechanism (does it support website downloads?), and integration with your development environment.
Some tips to consider when delivering your software