Friday, April 16, 2010

Moving On...

As many of you already know, today is my last day at ERDAS. I will be moving on to other new and exciting opportunities.

I want to thank everybody at ERDAS...what an amazing group of people! How such a small group gets so much done is truly impressive. I also want to thank all of the distributors and resellers globally.

I look forward to promoting ERDAS technologies in the market in the future.

Thursday, February 25, 2010

Managing Regression Issues in Agile Software Methodology

A software regression is an issue introduced into the software during the development cycle that breaks a "feature" that was previously working as designed. Let's walk through a "concrete" example of a software regression issue.

Say I have the .NET "Hello World!" web service up and running and working as designed. A new feature has been requested: add a new method to the web service that accepts my name as user input and returns the message "Hello World, I'm Shawn Owston!". You develop the new method, build, deploy and start testing. The first time you test it, the new "Hello World, I'm Shawn Owston!" method works...hooray! But then when you try to use the old method, it throws an exception!

The new feature is working, and that's a good thing, but the originally released feature is now broken. That's regression.
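To make the example more concrete, here is a minimal sketch of what such a web service might look like. The class and method names (HelloService, HelloWorld, HelloWorldFor) are purely illustrative, not from any actual code base:

using System.Web.Services;

[WebService(Namespace = "http://example.com/hello")]
public class HelloService : WebService
{
    // The originally released method -- the one that must keep working.
    [WebMethod]
    public string HelloWorld()
    {
        return "Hello World!";
    }

    // The new feature: a personalized greeting.
    // A careless change here (say, to a helper or shared state that
    // HelloWorld() also uses) is exactly how the old method ends up
    // throwing an exception -- a regression.
    [WebMethod]
    public string HelloWorldFor(string name)
    {
        return string.Format("Hello World, I'm {0}!", name);
    }
}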

The first method to manage regression is to NOT introduce it in the first place! This requires architectural design, review and a shared understanding among everyone involved in the development before writing code, to ensure that the "best" framework, tools and coding standards are used to build a new feature. Remember that "Agile" does not equate to "don't design anything," an interpretation some pundits use to object to the methodology. You can plan architectural designs in sprints and resource them properly as required...big and small architectural designs alike.

The second method to manage regression is for each developer to be thorough and pay attention to what they are doing "at all times", especially in a multi-developer team environment. This covers the entire gamut of the development cycle, from proper branch management in source control to proper check-in/check-out procedures and proper automated build systems. It also means that developers must be "conscious" of the other team members who depend on the quality of their work. Do NOT check incomplete work into the build branch that your QA team is testing daily, and do NOT take shortcuts or introduce known "hacks" into the software without communicating the issue to QA or Product Management to see whether it's appropriate even in the short term.

The third method is, of course, to implement automated testing on your software. There are many automated testing suites, like QTP and SilkTest, that allow you to "record" a user workflow as a script and automatically run those scripts against your software so that features PASS/FAIL on every build, or simply on some scheduled basis. These only test the "functional" aspect of the software, though...there are non-functional aspects where regression can wreak havoc on your customer experience; i.e. PERFORMANCE. Yes, the user workflow may work, but if it now takes the software 10 times longer to perform the workflow, you have issues. Luckily, there is automated load testing software...HP LoadRunner being my preference. In the long run, automated testing saves a great deal of time and money by reducing the amount of human QA that needs to be performed. It also catches issues early rather than allowing regression issues to pile up into an insurmountable and unmanageable quantity.
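As a toy illustration of the PASS/FAIL idea, here is a rough sketch of an automated check that exercises both of the "Hello World" methods from the earlier example and also fails if a request gets too slow. Real suites like QTP, SilkTest and LoadRunner record and replay full workflows and do far more; the URLs and thresholds below are hypothetical:

using System;
using System.Diagnostics;
using System.Net;

class RegressionSmokeTest
{
    static int Main()
    {
        // Hypothetical endpoints and thresholds -- adjust to your deployment.
        var checks = new[]
        {
            new { Name = "HelloWorld (released)", Url = "http://localhost/HelloService.asmx/HelloWorld", MaxMs = 500 },
            new { Name = "HelloWorldFor (new)",   Url = "http://localhost/HelloService.asmx/HelloWorldFor?name=Shawn", MaxMs = 500 }
        };

        int failures = 0;
        foreach (var check in checks)
        {
            var timer = Stopwatch.StartNew();
            try
            {
                new WebClient().DownloadString(check.Url);           // functional check: does it still respond?
                timer.Stop();
                bool slow = timer.ElapsedMilliseconds > check.MaxMs; // non-functional check: is it still fast?
                Console.WriteLine("{0}: {1} ({2} ms)", check.Name, slow ? "FAIL (too slow)" : "PASS", timer.ElapsedMilliseconds);
                if (slow) failures++;
            }
            catch (WebException ex)
            {
                Console.WriteLine("{0}: FAIL ({1})", check.Name, ex.Message);
                failures++;
            }
        }
        return failures;   // non-zero exit code fails the scheduled run or build
    }
}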

The fourth method is to test, test, test and retest the software. There is no substitute for skilled QA people who know the workflows, expected performance and overall "usability" of the software.

Regression will always occur; the above are only ways to avoid it and to discover it. So once the issues exist, how do you "manage" them in an agile software project?

In our projects, we make sure to tag all regression issues with a "regression" label in our bug tracking system. From the Product Management side of the house, these are ALWAYS Release Stopper issues. Refuse to let the software be released while features that used to work are broken.

We also make sure all regression issues bubble up to the top of the priority list and are resolved in the very next sprint. We plan and resolve the issue immediately. If the QA cycle is working as designed, the issue was introduced in the previous sprint, so resolving it immediately is important: the code is still "fresh" in the developer's head, and it reduces the chance that the regression will be "built upon" and cause further issues downstream as more features are added that may depend on the broken code.
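As a sketch of that triage rule, assuming a simple in-memory issue model rather than any particular bug tracking system's API:

using System.Collections.Generic;
using System.Linq;

class Issue
{
    public string Key;
    public int Priority;                  // lower number = higher priority
    public List<string> Labels = new List<string>();
}

static class RegressionTriage
{
    // Regressions bubble to the top of the next sprint's backlog...
    public static IEnumerable<Issue> NextSprintOrder(IEnumerable<Issue> backlog)
    {
        return backlog
            .OrderByDescending(i => i.Labels.Contains("regression"))
            .ThenBy(i => i.Priority);
    }

    // ...and block the release while any of them remain open.
    public static bool IsReleasable(IEnumerable<Issue> openIssues)
    {
        return !openIssues.Any(i => i.Labels.Contains("regression"));
    }
}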

From all of the above, you can see that managing regression is a combination of a good plan for not creating regression issues in the first place, discovering issues in a timely manner, and resolving them "just in time" to avoid a pile-up of regression issues that may end up being released to the market because you're out of development time.

Wednesday, February 3, 2010

Simulate the 501st user in a live webinar on Imagery through WMS

The recorded webinar "Prove it: Can ERDAS Really Deliver Terabytes of Imagery Faster than the Competition?" has been posted to the ERDAS website at the following URL:


http://www.erdas.com/Resources/Webinars/ArchivedWebinars/tabid/175/currentid/3334/objectid/3334/default.aspx


This download contains the webinar recording. The PowerPoint slides used during the presentation, along with the LoadRunner performance report on the server's usage and throughput during the load test, will be posted tomorrow.

This webinar demonstrates, LIVE, an ERDAS APOLLO server with a load test running 500 concurrent users against it, followed by live use of a web client against the same server (simulating being the 501st user). It also demonstrates several publicly available web sites that showcase our technology.
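For anyone curious about the mechanics of the "501st user" idea, here is a very rough sketch. LoadRunner does this properly, with controlled ramp-up, think time and full reporting; the endpoint, layer and numbers below are placeholders, not the actual webinar configuration:

using System;
using System.Diagnostics;
using System.Net;
using System.Threading;

class FiveHundredAndFirstUser
{
    // Placeholder WMS GetMap request -- not the actual webinar endpoint.
    const string GetMap =
        "http://example.com/apollo/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap" +
        "&LAYERS=imagery&STYLES=&SRS=EPSG:4326&BBOX=-180,-90,180,90" +
        "&WIDTH=800&HEIGHT=600&FORMAT=image/jpeg";

    static volatile bool running = true;

    static void Main()
    {
        const int simulatedUsers = 500;

        // Background load: each thread plays one "user" requesting maps in a loop.
        for (int i = 0; i < simulatedUsers; i++)
        {
            new Thread(() =>
            {
                var client = new WebClient();
                while (running) client.DownloadData(GetMap);
            }) { IsBackground = true }.Start();
        }

        Thread.Sleep(30000);   // let the load ramp up and stabilize

        // The "501st user": one timed, interactive request while the server is under load.
        var timer = Stopwatch.StartNew();
        new WebClient().DownloadData(GetMap);
        Console.WriteLine("501st user GetMap time: {0} ms", timer.ElapsedMilliseconds);

        running = false;
    }
}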


I had to cut the questions short at the end of the recording, and we fielded another 20 minutes of questions that were not recorded (I wanted the recording to be 60 minutes and it ended up at 75 because of the questions).


I'm going to reproduce the demonstration with 1000 users next time and try to put a heavier load on the server. 37% resource usage is nothing!


Thursday, December 17, 2009

ERDAS APOLLO 2010 Proves Itself as the "Fastest" Raster Imagery Server

Chris Tweedie of our ERDAS Australia Region has recently reproduced the FOSS4G raster imagery performance benchmark tests using ERDAS APOLLO to compare the results and has published them on his blog:

Here are the Results of ERDAS APOLLO 2010 using the FOSS4G Test Scenario

We've all been really busy with the recent APOLLO 2010 release, but Chris made it a priority to get these numbers completed and published. Nice work, Chris!

Chris did a great job of providing graphs of image format performance for each map server, as well as graphs of throughput and response times for each format on each map server, so the results can be compared by server or by format/server.

Some analysis of the results:

1. The ECW imagery format is a purpose-built, performance-oriented imagery format. The throughput of ERDAS APOLLO with the ECW format is 2.5 times that of the open source servers at a 150-user load, and it can support a much higher load than 150 users, which wasn't represented in this test scenario. The throughput curve still had a steep positive slope at 150 users, whereas the other servers' throughput had already peaked at the 150-user measurement.

2. ERDAS APOLLO outperformed each open source server on every image format in the test, with the highest throughput (transactions/second) and the lowest average response times (the time it takes to get the image onto your map).

ERDAS will be providing several webinars and whitepapers on the subject of "Raster Imagery Performance" to showcase ERDAS APOLLO 2010 and its performance capabilities.

Keep in mind that these numbers do not even showcase our faster imagery protocols that we provide with ERDAS APOLLO....

ECWP and our Optimized Tile Delivery are MUCH faster and support much higher user loads than the WMS protocol (THOUSANDS of users on a standard server). In short, we can produce maps MUCH FASTER than the numbers published here, which used the "slowest" protocol provided by APOLLO 2010.

Good work again Chris...

Thursday, October 15, 2009

How Does ERDAS Support Open STANDARDS?

ERDAS expends a large effort to support Open Standards and, in the end, interoperability with other software implementations that also support open standards. We invest in the development and implementation of Open Standards because, as an organization, we believe in our software products working seamlessly with any other geospatial package on the market and in providing our customers with a large variety of deployment and design options for geospatial solutions. We also support open standards to be a compelling and viable option in ANY geospatial system under design, regardless of an organization's existing systems "vendor". We measure each component and feature we develop against any pertinent existing IT and geospatial standard to ensure that we maintain a high degree of interoperability.

So "how" does ERDAS support Open Standards then?

We support the entire array of IT and spatial standards. On the IT side of the house, we support a variety of operating systems, credential stores (LDAP, Active Directory, DB, etc.), application servers, databases, chip-sets and virtualization environments (both hardware and OS) and, of course, the industry web service standards of WSDL/SOAP/UDDI.

On the geospatial side of things, we support the web mapping standard WMS, the gridded data delivery service WCS, the vector feature delivery service WFS, the open catalog service CSW, the map context standard WMC, plus WRS, URN, OWS Common, Filter, and on and on and on....
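For readers less familiar with the acronyms, the snippet below sketches the kind of key-value-pair requests a standards-compliant server answers. The endpoint URL and layer/type names are placeholders; the parameters follow the published WMS and WFS specifications:

using System;

class OgcRequestExamples
{
    static void Main()
    {
        string baseUrl = "http://example.com/apollo/ows";   // placeholder service endpoint

        // WMS: ask the server which layers it offers...
        string wmsGetCapabilities = baseUrl +
            "?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetCapabilities";

        // ...then ask it to render a map from one of them.
        string wmsGetMap = baseUrl +
            "?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap" +
            "&LAYERS=imagery&STYLES=&SRS=EPSG:4326&BBOX=-180,-90,180,90" +
            "&WIDTH=512&HEIGHT=256&FORMAT=image/png";

        // WFS follows the same pattern for vector features; WCS (gridded data)
        // and CSW (catalog search) requests work the same way.
        string wfsGetFeature = baseUrl +
            "?SERVICE=WFS&VERSION=1.1.0&REQUEST=GetFeature&TYPENAME=roads";

        Console.WriteLine(wmsGetCapabilities);
        Console.WriteLine(wmsGetMap);
        Console.WriteLine(wfsGetFeature);
    }
}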

We also participate in the development of standards within the OGC standards body. There is a "cost" to participating in these standards development bodies, one that ERDAS is more than willing to pay to ensure the standards meet our customers' use cases and to bring to the table a wealth of industry-proven "knowledge" so that, in the end, the standards meet market needs.

Why support Open Standards?

We do this because it's important to our existing customers and a market-driving initiative in the geo-market space. We do it because the standards are becoming "mature" and capable of meeting customer use cases (not just prototypes, but actually used in production environments). We also do it to proliferate the standards within the industry and prove the capabilities of the standards within extremely high-volume, rapidly changing production environments.

Why point these facts out?

There's been quite a bit of "noise" conflating Open Standards with "Open Source", which can be confusing to non-IT decision makers, and a propensity among some open source pundits to argue that "commercial" = "proprietary" = "vendor lock-in" = an advantage for open source. This is definitely not the case in the market today, as ERDAS is not the only vendor ensuring a high degree of open standard support and interoperability.

This "noise" rises from the organization and productization of the disperate open source projects in the geo-space and to create competitive marketing against the existing commercial products. It is now officially a "vendor" option to customers. ERDAS stands firm in out position in the market and our capabilities to provide highly interoperable, entire end-to-end geospatial processing and analysis chains with market proven maturity and market leading segment to PROVE it.

Wednesday, October 14, 2009

ESRI Bails on FOSS4G Performance Shootout

ESRI has withdrawn from the FOSS4G Performance Shootout. No official statement has been made on the Performance mailing list yet as to the reason why, just that they will no longer continue with the exercise.

Why would ESRI pull out so late in the game?? The performance numbers were supposed to be completed this week?? Why put in so many WEEKS of effort getting the system set up and dealing with all the "issues" encountered during the setup and testing period, only to "bail" right at the end??

I've been following the mailing list quite closely, and I have a couple of observations:

1. There was a lot of disorganization in the coordination of the hardware topology, the testing procedures, the data configuration, what data to use, what tests would be performed and "who" was responsible for what! It was a total free-for-all with no scheduling, no clear responsibility and probably none of the "hand holding" necessary to keep ESRI engaged. It was left to ESRI to meander through the minefield of issues presented by the data, changes to the system, test scripts and hardware.

If you're ESRI and you feel like you just jumped into a circus, would you continue to parade in line with the rest of the show?

2. I am EXTREMELY glad I didn't waste any resources on attempting to participate in what should have been an EXTREMELY interesting and engaging exercise.

I highly recommend that an independent organization provide: a non-biased, pre-approved methodology; a pre-approved dataset, with the opportunity to put the data in a vendor-recommended format (which every customer usually does anyway); a capable and diverse hardware set to limit non-functional constraints; proven performance testing software and a methodology that measures LOAD, not just throughput; and the right to review the results and control whether they are published. Do that, and there will be very high participation from ALL the available vendors.

And this is even after the publication announcing the "shootout", which I felt was aimed at throwing ESRI under the bus anyway.

What a disaster...

Thursday, October 8, 2009

WorldView II Successfully Launched!!

So the WorldView II sensor is in orbit! I can't wait to get some of the products into the APOLLO and the Web Processing Service!!

The sensor has a short-wave IR band that REALLY is needed for lots of algorithms that today can only use LANDSAT TM and MODIS imagery...but WorldView II will do it at 46 cm!

I will definitely be showing that data off on the demo site as soon as we gain access to it!

Friday, September 18, 2009

Geospatial Performance Benchmarks "Apples to Apples"

I get a lot of requests for performance benchmarks of APOLLO vs. lots of other systems. We provide extremely analytical and detailed performance results for our server with every release. A couple of issues always crop up in any performance benchmark:

1. Features - first of all, NO competitive product can DO what APOLLO can do...so I find myself either A: dumbing down APOLLO and the test just to be able to do an "apples to apples" comparison, or B: not making it "apples to apples" at all.

2. Return on Investment - ROI on software is not just performance, but HOW LONG IT TOOK TO SET UP, ADMINISTER and GET the feature operational in a production scenario!! I find myself spending a HUGE part of my time just getting the competitive software to "work" at all in order to run the test.

I've been following the FOSS4G Web Mapping Shootout announced for the 2009 conference. I get a really HUGE chuckle because their "shootout" couldn't be performed on a more CARTOON set of data or a more NON-REALISTIC use case. I don't know ONE client that requires only one image and a handful of vector data sets (3, to be precise).

Our smallest benchmark has 459 seven-band images...choke on that, Open Source.

They should call it a "water gun fight" instead of a "Shootout".

Also, what will NOT be collected in the "shootout" is how long it took them to set up the system and service-enable the data...how many "WEEKS" are you willing to struggle with that?

PERFORMANCE is about ROI on the investment and, of course, the ability of the system to handle a user load. Weigh both when you're looking at the numbers!!