Friday, April 16, 2010

Moving On...

As many of you already know, today is my last day at ERDAS. I will be moving on to other new and exciting opportunities.

I want to thank everybody at ERDAS...what an amazing group of people! How such a small group gets so much done is truly remarkable. I also want to thank all the distributors and resellers globally.

I look forward to promoting ERDAS technologies in the market in the future.

Thursday, February 25, 2010

Managing Regression Issues in Agile Software Methodology

A software regression is an issue introduced into the software during the development cycle that breaks a "feature" that was previously working as designed. Let's walk through a concrete example of a software regression.

Say I have the .NET "Hello World!" web service up and running and working as designed. A new feature has been requested: add a method to the web service that takes my name as user input and returns the message "Hello World, I'm Shawn Owston!". You develop the new method, build, deploy and start testing. The first time you test it, the new "Hello World, I'm Shawn Owston!" method works...hooray! But then when you try to use the old method, it throws an exception!
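The service in the story is .NET, but the failure mode is language-neutral. Here is a minimal sketch in plain Python (all function names are hypothetical) of how adding the personalized method can break the original one by rerouting it through shared code:

```python
# Version 1: the originally released method, working as designed.
def hello_v1():
    return "Hello World!"

# Version 2: to add the personalized feature, a developer routes BOTH
# methods through a shared formatter that assumes a name is always given...
def _format_greeting(name):
    return "Hello World, I'm " + name + "!"   # raises TypeError if name is None

def hello_with_name(name):
    # The new feature: works fine when a name is supplied.
    return _format_greeting(name)

def hello_v2(name=None):
    # The old no-argument method: now throws an exception.
    return _format_greeting(name)
```

The new method passes its first test, while the previously working no-argument call now raises a TypeError: exactly the regression described above.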

Now the new feature is working, and that's a good thing, but the originally released feature is broken. That's a regression.

The first method to manage regression is to NOT introduce it in the first place! This requires architectural design, review and a shared understanding among everyone involved before writing code, to ensure that the "best" framework, tools and coding standards are used to build a new feature. Remember that "Agile" doesn't equate to "don't design anything", an interpretation some pundits use to object to the methodology. You can plan architectural designs in sprints and resource them properly as required...big and small architectural designs alike.

The second method to manage regression is for each developer to be thorough and pay attention to what they are doing "at all times", especially in a multi-developer team environment. This covers the entire gamut of the development cycle: proper branch management in source control, proper check-in/check-out procedures and proper automated build systems. It also means that developers must be conscious of the other team members who depend on the quality of their work. Do NOT check incomplete work into the build branch that your QA team tests daily, and do NOT take shortcuts or introduce known "hacks" into the software without communicating the issue to QA or Product Management to see if it's acceptable even in the short term.

The third method is, of course, to implement automated testing on your software. There are many automated testing suites, like QTP and SilkTest, that allow you to "record" a user workflow as a script and automatically run these scripts against your software so that features PASS/FAIL on every software build or simply on some scheduled basis. These only test the "functional" aspect of the software, though...there are non-functional aspects where regression can wreak havoc on your customer experience; i.e. PERFORMANCE. Yes, the user workflow may work, but if it now takes the software 10 times longer to perform the workflow, you have issues. Luckily, there is automated load testing software...HP LoadRunner being my preference. In the long run, automated testing saves a great deal of time and money by reducing the amount of human QA that needs to be performed. It also catches issues early rather than allowing regression issues to pile up into an insurmountable and unmanageable quantity.
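Tools like QTP, SilkTest and LoadRunner record real user workflows; as a tiny stand-in for the idea (plain Python, hypothetical names, and an arbitrary timing budget rather than a real baseline), a regression suite checks both the functional result and a performance threshold on every build:

```python
import time

def run_workflow():
    # Stand-in for the recorded user workflow under test.
    return "Hello World!"

def test_functional():
    # FAILs if the feature stops working as designed.
    assert run_workflow() == "Hello World!"

def test_performance():
    # FAILs if the workflow gets drastically slower. The 0.5 s budget
    # is purely illustrative, not a measured baseline.
    start = time.perf_counter()
    run_workflow()
    assert time.perf_counter() - start < 0.5

if __name__ == "__main__":
    test_functional()
    test_performance()
    print("PASS")
```

Run on every build (or on a schedule), a failing assertion flags a functional or performance regression the day it is introduced rather than weeks later.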

The fourth method is to test, test, test and retest the software. There is no supplement for having skilled QA and humans that know the workflow, expected performance and overall "usability" of the software.

Regression will always occur; these are only ways to avoid it and to discover it. So once the issues exist, how do you "manage" them in an agile software project?

In our projects, we make sure to label all regression issues with a "regression" label in our bug tracking system. From the Product Management side of the house, these are ALWAYS Release Stopper issues. Refuse to let the software be released with features that used to work.
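A real bug tracker would implement this with its own labels and queries; as a minimal sketch of the policy (hypothetical in-memory issue records), every issue carrying the "regression" label gets escalated to release stopper:

```python
def escalate_regressions(bugs):
    """Promote every regression-labelled bug to a release stopper."""
    for bug in bugs:
        if "regression" in bug["labels"]:
            bug["priority"] = "release-stopper"
    return bugs

# Hypothetical issue records pulled from the tracker.
bugs = [
    {"id": 101, "labels": {"regression"}, "priority": "normal"},
    {"id": 102, "labels": {"enhancement"}, "priority": "normal"},
]
escalate_regressions(bugs)
```

The point is that the escalation is automatic and unconditional: a regression label alone is enough to block the release.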

We also make sure all regression issues bubble up to the top of the priority list and are resolved in the very next sprint. We plan and resolve the issue immediately. If the QA cycle is working as designed, the issue was introduced in the previous sprint, so resolving it immediately matters: the code is still "fresh" in the developer's head, and it reduces the chance that the regression will be "built upon" and cause further issues downstream as more features are added that depend on the broken code.

From all of the above, you can see that managing regression is a harmony of a good plan for not creating the regression in the first place, discovering issues in a timely manner and resolving them "just in time" to avoid a pile-up of regression issues that might otherwise be forced out to the market because you're out of development time.

Wednesday, February 3, 2010

Simulate the 501st user in a live webinar on Imagery through WMS

The recorded webinar "Prove It: Can ERDAS Really Deliver Terabytes of Imagery Faster than the Competition?" has been posted to the ERDAS Website at the following URL:

This download contains the webinar recording. The PowerPoint slides used during the presentation and the LoadRunner Performance Report generated during it, covering the server's usage and throughput during the load test, will be posted tomorrow.

This webinar demonstrates, LIVE, an ERDAS APOLLO server under a load test of 500 concurrent users, followed by live use of a web client against that same server (simulating being the 501st user). It also tours several publicly available web sites that showcase our technology.

I had to cut the questions short at the end of the recording; we fielded another 20 minutes of questions that were not recorded (I wanted the recording to be 60 minutes, and it ended up at 75 because of questions).

Next time, I'm going to reproduce the demonstration with 1000 users to try to put a heavier load on the server. 37% resource usage is nothing!