A software regression is an issue introduced during the development cycle that breaks a feature that was previously working as designed. Let's walk through a concrete example of a software regression.
Say I have the .NET "Hello World!" web service up and running and working as designed. A new feature has been requested: add a method to the web service that takes my name as input and returns the message "Hello World, I'm Shawn Owston!". I develop the new method, build, deploy and start testing. The first time I test it, the new "Hello World, I'm Shawn Owston!" method works...hooray! But then, when I try to use the old method, it throws an exception!
The new feature is working, which is a good thing, but the originally released feature is now broken. That's regression.
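A minimal C# sketch of how that can happen, assuming the service is a simple class with one method per operation. The class name, method names and the specific bug are my own illustration, not code from the actual service:

// Hypothetical sketch -- illustrates how adding a feature can break an existing one.
using System;

public class HelloWorldService
{
    // New shared helper introduced for the personalized greeting.
    private string Greet(string name)
    {
        // Works for the new method, but throws when name is null.
        return string.Format("Hello World, I'm {0}!", name.Trim());
    }

    // Original method, "helpfully" rewritten to reuse the new helper.
    // Before the change it simply returned "Hello World!"; now it throws.
    public string HelloWorld()
    {
        return Greet(null); // NullReferenceException -- the regression
    }

    // New method: works exactly as requested.
    public string HelloWorldWithName(string name)
    {
        return Greet(name);
    }
}

public static class Program
{
    public static void Main()
    {
        var svc = new HelloWorldService();
        Console.WriteLine(svc.HelloWorldWithName("Shawn Owston")); // hooray, it works
        Console.WriteLine(svc.HelloWorld());                       // boom: the old feature is broken
    }
}

The new code path was never exercised with the old method's input, so the previously released feature breaks while the new one shines.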
The first method to manage regression is to NOT introduce it in the first place! This requires architectural design, review and a shared understanding among everyone involved before writing code, to ensure that the best framework, tools and coding standards are used to build a new feature. Remember that "Agile" doesn't equate to "don't design anything", an interpretation that leads some pundits to object to the methodology. You can plan architectural designs in sprints and properly resource them as required...big and small architectural designs alike.
The second method to manage regression is for each developer to be thorough and pay attention to what they are doing at all times, especially in a multi-developer team environment. This covers the entire gamut of the development cycle, from proper branch management in source control to proper check-in/check-out procedures and proper automated build systems. It also means that developers must be conscious of the other team members who depend on the quality of their work. Do NOT check incomplete work into the build branch your QA team is testing against daily, and do NOT take shortcuts or introduce known "hacks" into the software without communicating the issue to QA or Product Management to see if it's appropriate even in the short term.
The third method is, of course, to implement automated testing on your software. There are many automated testing suites, like QTP and SilkTest, that allow you to "record" a user workflow as a script and automatically run these scripts against your software to PASS/FAIL the features on every build or on a scheduled basis. These only test the "functional" aspect of the software, though...there are non-functional aspects where regression can wreak havoc on your customer experience; i.e. PERFORMANCE. Yes, the user workflow may work, but if it now takes the software 10 times longer to perform the workflow, you have issues. Luckily, there is automated load testing software...HP LoadRunner being my preference. In the long run, automated testing saves a great deal of time and money by reducing the amount of human QA that needs to be performed. It also catches issues early rather than allowing regression issues to pile up into an insurmountable and unmanageable quantity.
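As a sketch of what this looks like below the UI level, here is a hedged example of a couple of hand-written regression tests for the hypothetical HelloWorldService above, in NUnit style (the expected strings are my own assumptions). The first test documents the original contract, so it fails against the broken version of the service and catches the regression on the very next build:

// Minimal regression suite for the hypothetical service above, run on every build.
using NUnit.Framework;

[TestFixture]
public class HelloWorldRegressionTests
{
    [Test]
    public void OriginalHelloWorld_StillWorks()
    {
        var svc = new HelloWorldService();
        // Guards the previously released feature; fails loudly if it regresses.
        Assert.AreEqual("Hello World!", svc.HelloWorld());
    }

    [Test]
    public void HelloWorldWithName_ReturnsPersonalizedGreeting()
    {
        var svc = new HelloWorldService();
        Assert.AreEqual("Hello World, I'm Shawn Owston!",
                        svc.HelloWorldWithName("Shawn Owston"));
    }
}

The recorded-workflow suites mentioned above serve the same purpose at the UI level; the principle is identical, namely that every old feature keeps a test that must pass on every build.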
The fourth method is to test, test, test and retest the software. There is no substitute for skilled QA and humans who know the workflow, expected performance and overall "usability" of the software.
Regressions will always occur; the methods above are only ways to avoid and discover them. So once the issues exist, how do you manage them in an agile software project?
In our projects, we make sure to tag all regression issues with a "regression" label in our bug tracking system. From the Product Management side of the house, these are ALWAYS release-stopper issues. Refuse to release software in which features that used to work are now broken.
We also make sure all regression issues bubble up to the top of the priority list and are resolved in the very next sprint. We plan and resolve the issue immediately. If the QA cycle is working as designed, the issue was introduced only in the previous sprint, so resolving it immediately is important: the code is still "fresh" in the developer's head, and it reduces the chance that the regression will be "built upon" and cause further issues downstream as more features are added that depend on the broken code.
From all of the above, you can see that managing regression is a harmony of planning well so you don't create the regression in the first place, discovering any issue in a timely manner, and resolving the issue "just in time" so you never face a pile of regression issues that end up shipping to the market because you're out of development time.
Saturday, August 15, 2009
Maintaining a Schedule with an Agile Software Development Methodology
As a commercial software vendor, it is EXTREMELY important to maintain a SCHEDULE and deliver software on the date we have communicated to the market. This is critical because we have existing customers who pay software maintenance and expect, on a regular basis:
1. bug fixes
2. minor feature improvement/enhancements to existing features
3. resolutions to any workflow issues they may have reported
4. new features at major releases
It is also critical to deliver major new features to the market on time so the sales force can meet market demand and capture the sale! If you deliver a feature a year after another vendor has released the equivalent feature, you put your sales force at a disadvantage; they are playing catch-up against the competition.
ERDAS targets two releases per year (a minor release in Q1 and a major release in Q4).
It is far too easy to get trapped in "release date creep" in an Agile Software Development Methodology. There are several reasons for this...
1. The very nature of the methodology is user-experience driven, not architecture driven like a waterfall methodology.
2. It is easy to "not" do architecture or take the "big picture" into account before implementing a use case. This sometimes bakes "technical debt" into the software because the final implementation doesn't "fit" a holistic architecture, which forces you to refactor later to "clean up" the architecture and/or create harmony across similar use cases.
3. The use case may be implemented, but performance and non-functional requirements were not accounted for...the focus was solely on whether the actor in the system could complete the use case. For example, the use case works on Oracle but not PostgreSQL, or it works in IE6 but not Firefox.
In short, analyzing "what it will take to implement a use case" when planning a sprint can be difficult for very large feature sets and for feature sets that span multiple tiers of the software (e.g. Database, Server, Middleware, Clients).
In previous blog posts, I mentioned our software development teams' use of the "Spike" in our methodology to flush out technical unknowns and document architectural requirements before implementing use cases. The Spike has been a great tool for our development teams.
Besides flushing out architecture and technology before implementing use cases, it's important to step back and look at the release cycle as a whole, and at its stages, to ensure that the software meets not only the user experience but also the non-functional categories of quality, performance and OS/DB/App Server support.
Rather than developing use cases "just in time" on a sprint-by-sprint basis, our Product Management teams develop all the use cases for a release cycle BEFORE the release cycle begins. This allows the development team to understand the system under design as a whole, and allows the Product Management teams to clearly and concisely present what we expect the system actors to be able to accomplish when the software is released. Building a "Release" backlog of use cases enables the development teams to consider architecturally dependent use cases, understand the software as a whole and choose appropriate technology to meet all of the use cases, rather than picking technology on the fly during sprints.
We also provide ample time at the end of the release cycle for software stabilization (bug fixes and improvements that raise quality and ensure the software meets performance and the non-functional requirements). Completion of the new features and stabilization signifies a "Feature Complete" state, where the teams agree that the software could be released to the market.
We're still not "done" at that point! The software goes through a final QA pass that it must clear, then Acceptance Testing and BETA; finally, if AT and BETA do not reveal any critical bugs, the software goes into a box and is delivered to the market.
In short, if you are new to the Agile Methodology, MAKE A DATE on which you expect the software to be released, MAKE A PLAN that will enable your development team to meet that date, and MAKE ROOM to test the quality and performance of the software BEFORE releasing it to the market!
Friday, March 28, 2008
Agile Software Development Methodology - The Sprint
I wanted to take the opportunity to talk about the benefit of sprints (or iterations) in the Agile Software Development Methodology...not from a developer perspective, but from a Product Management perspective. Our team sprints are two weeks. This means we analyze the priority of features, estimate user stories based on that priority and task the team with two weeks' worth of work to achieve a user experience by the end of the sprint.
What I like best about sprints is the ability to "change gears" every two weeks! Any enterprise product needs the ability to support project work and deliver high-priority features for clients to "capture the sale". As the Product Manager, I'm the person with "boots on the ground". I get to talk directly with clients, sales teams and distributors to hear their requirements for the software. Although it's impossible to make every feature request a priority, the methodology allows us to change priorities (every two weeks) so the software can demonstrate a feature that is a time-critical request for a client.
There is a fine line that must be walked here...you don't want to impact the top priority features for a release, but there is plenty of room to achieve "quick wins" with existing large clients and prospective future clients.
The sprint allows me to reprioritize a "quick win" feature request to fit within the sprint and deliver a thin slice of the user experience so the feature can be demonstrated in two weeks.
Clients (and sales staff) really like to see their feature requests demonstrated within two weeks! It shows that we develop the software to meet the client's business needs and allows us to be responsive to cash-flow potential. The interesting phenomenon is that, most of the time, the client is fine with the feature being "productized" and completed by the scheduled release date. Simply demonstrating that we are responsive to their needs shows a commitment to solving their business problems.
The sprint is also a great thermometer for measuring progress. At the end of every two weeks, I calculate a team velocity that accounts for the number of bugs resolved and the number of user stories estimated and completed in the sprint. It gives us a continuous read on our ability to estimate features, to rapidly address bugs and to complete user-facing features. Analyzing velocity allows us to continuously evaluate the planned feature set for the release and effectively manage resources, money and time!
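As a rough illustration of that bookkeeping, here is a hedged C# sketch of a per-sprint velocity report. The post doesn't define an exact formula, so the fields, numbers and the "estimation accuracy" measure below are purely illustrative:

// Hypothetical per-sprint velocity report -- data and metrics are illustrative only.
using System;

class SprintResult
{
    public string Sprint;
    public int StoryPointsPlanned;
    public int StoryPointsCompleted;
    public int BugsResolved;
}

class VelocityReport
{
    static void Main()
    {
        SprintResult[] sprints =
        {
            new SprintResult { Sprint = "Sprint 12", StoryPointsPlanned = 26, StoryPointsCompleted = 21, BugsResolved = 9 },
            new SprintResult { Sprint = "Sprint 13", StoryPointsPlanned = 25, StoryPointsCompleted = 24, BugsResolved = 5 },
        };

        double totalVelocity = 0;
        foreach (SprintResult s in sprints)
        {
            // Velocity = completed story points; estimation accuracy = completed vs. planned.
            double accuracy = 100.0 * s.StoryPointsCompleted / s.StoryPointsPlanned;
            Console.WriteLine("{0}: velocity {1} pts, estimation accuracy {2:F0}%, bugs resolved {3}",
                              s.Sprint, s.StoryPointsCompleted, accuracy, s.BugsResolved);
            totalVelocity += s.StoryPointsCompleted;
        }

        // A rolling average helps judge whether the planned feature set for the release is realistic.
        Console.WriteLine("Average velocity: {0:F1} pts/sprint", totalVelocity / sprints.Length);
    }
}

Tracked sprint over sprint, numbers like these are what let you continuously re-plan the release against available resources, money and time.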