Saturday, August 15, 2009

Maintaining a Schedule with an Agile Software Development Methodology

As a commercial software vendor, it is EXTREMELY important to maintain a SCHEDULE and deliver software on the date we communicate to the market. This is critical because we have existing customers who pay software maintenance and expect on a regular basis:

1. bug fixes
2. minor improvements and enhancements to existing features
3. resolutions to any workflow issues they may have reported
4. new features at major releases

It is also critical to deliver major new features to the market on time so the sales force can meet market demand for the feature and capture the sale! If you deliver a feature a year after another software vendor has already released the equivalent feature, you put your sales force at a disadvantage, playing catch-up against the competition.

ERDAS targets two releases per year (a minor release in Q1 and a major release in Q4).

It is far too easy to get trapped in "release date creep" in an Agile Software Development Methodology. There are several reasons for this...

1. The very nature of the methodology is user-experience driven, not architecture driven like a waterfall methodology
2. It is easy to "not" do architecture or take the "big picture" into account before implementing a use case. This sometimes leads to "technical debt" in the software when the final implementation does not "fit" into a holistic architecture. The result is a need to refactor the software later to "clean up" the architecture and/or create harmony across similar use cases.
3. The use case may be implemented, but the resulting performance and non-functional requirements were not accounted for...only the ability of the actor to accomplish the use case in the system was focused upon. For example, the use case works on Oracle, but not PostgreSQL, or the use case works in IE6, but not Firefox.

In short, analyzing "what it will take to implement a use case" when planning a sprint can be difficult for very large feature sets and for feature sets that span multiple tiers of the software (i.e. database, server, middleware, clients).

In previous blog posts, I mentioned our software development teams' use of the "Spike" in our methodology to flush out technical unknowns and document architectural requirements before implementing use cases. This practice in the Agile Methodology has been a great tool for our development teams.

Besides fleshing out the architecture and technology before implementing use cases, it's important to take a step back and look at the release cycle as a whole, and the stages of the release cycle, to ensure that the software meets not only the user experience, but also the non-functional categories of quality, performance and OS/DB/App Server needs.

Rather than develop the use cases "just in time" on a sprint-by-sprint basis, our Product Management teams develop all the use cases for a release cycle BEFORE the release cycle begins. This allows the development team to understand the system under design as a whole, and it allows the Product Management teams to clearly and concisely present to the development teams what we expect the system actors to be able to accomplish when the software is released. Building a "Release" backlog of use cases really enables the development teams to consider architecturally dependent use cases, understand the software as a whole and choose appropriate technology to meet all of the use cases, rather than bolting technology on mid-sprint.

We also provide ample time at the end of the release cycle for software stabilization (bug fixes and improvements that build quality into the software and ensure it meets performance and the non-functional requirements). Completion of the new features and stabilization signifies a "Feature Complete" state of the software, where the teams agree that the software could be released to the market.

We're still not "done" at that point!! The software goes through a final QA that it must pass to be released to the market, then Acceptance Testing (AT) and BETA. Finally, if AT and BETA do not reveal any critical bugs, the software goes into a box and is delivered to the market.

In short, if you are new to the Agile Methodology, MAKE A DATE on which you expect the software to be released, MAKE A PLAN that will adequately enable your development team to meet that date and MAKE ROOM to test the quality and performance of the software BEFORE releasing it to the market!

Friday, August 14, 2009

ERDAS APOLLO 2010 OGC Web Processing Service (WPS)

The ERDAS APOLLO 2010 release coming this October will have an OGC Web Processing Service (WPS). This OGC standard is not as well known as the Web Mapping Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS) or Catalog Service for the Web (CSW), so here's some information about this service and what ERDAS has done to build the most powerful WPS available on the market!

What is the Web Processing Service?

"The WPS standard defines an interface that facilitates the publishing of geospatial processes and makes it easier to write software clients that can discover and bind to those processes. Processes include any algorithm, calculation or model that operates on spatially referenced raster or vector data. Publishing means making available machine-readable binding information as well as human-readable metadata that allows service discovery and use.

A WPS can be used to define calculations as simple as subtracting one set of spatially referenced data from another (e.g., determining the difference in influenza cases between two different seasons), or as complicated as a hydrological model. The data required by the WPS can be delivered across a network or it can be made available at the server. This interface specification provides mechanisms to identify the spatially referenced data required by the calculation, initiate the calculation, and manage the output from the calculation so that the client can access it.

The OGC's WPS standard will play an important role in automating workflows that involve geospatial data and geoprocessing services." Quoted from the OGC Web Site

What is the WPS in APOLLO?

The WPS in APOLLO is the interoperable web service exposed by the server that lets client applications discover, describe, publish and execute geoprocesses. At a higher level, the WPS is integrated into a web-based geoprocessing workflow where end users can execute extremely powerful WPS processes through a web client "on demand".
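For the curious, here is a minimal sketch of what that discovery looks like on the wire, using the standard WPS 1.0.0 KVP operations (GetCapabilities and DescribeProcess). The endpoint URL is a hypothetical placeholder, not an actual APOLLO address:

```python
import requests
import xml.etree.ElementTree as ET

WPS_URL = "http://example.com/apollo/wps"  # hypothetical endpoint

# GetCapabilities lists every process the service offers
caps = requests.get(WPS_URL, params={"service": "WPS", "request": "GetCapabilities"})
root = ET.fromstring(caps.content)

# In WPS 1.0.0, process identifiers live in the OWS Common 1.1 namespace
OWS = "{http://www.opengis.net/ows/1.1}"
identifiers = [el.text for el in root.iter(OWS + "Identifier")]
print("Available processes:", identifiers)

# DescribeProcess returns the inputs and outputs one process expects
desc = requests.get(WPS_URL, params={
    "service": "WPS",
    "version": "1.0.0",
    "request": "DescribeProcess",
    "identifier": identifiers[0],
})
print(desc.text)
```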

To highlight the consumer user's use case: they will be able to navigate anywhere in the map, discover the WPS processes they have been granted the security rights to execute, select a process, discover the PROPER data to execute the process, and execute it "on the fly", receiving output "data" that can be immediately mapped and downloaded once the WPS process has completed.
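As a sketch of what the web client is doing behind the scenes, a standard WPS 1.0.0 KVP Execute request might look like the following. The process identifier and input name here are hypothetical examples for illustration, not actual APOLLO process names:

```python
import requests

WPS_URL = "http://example.com/apollo/wps"  # hypothetical endpoint

resp = requests.get(WPS_URL, params={
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "NDVI",                         # hypothetical process name
    # KVP DataInputs syntax: name=value pairs separated by semicolons
    "datainputs": "inputImage=landsat_scene_42",  # hypothetical input
    "storeexecuteresponse": "true",               # run asynchronously
    "status": "true",                             # request a status document
})
# The ExecuteResponse XML contains a statusLocation URL that the client
# polls until the output data is ready to be mapped and downloaded.
print(resp.text)
```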

The WPS in APOLLO is EXTREMELY powerful because we have enabled the IMAGINE Spatial Modeler Engine within the WPS process execution framework! So what does that mean...

In short, Analyst actors in the system will be capable of graphically designing complex spatial models and algorithms in the IMAGINE Spatial Modeler, creating chained spatial model workflows, and publishing these workflows to the APOLLO WPS for execution by consumer end users!!! This means that a single WPS process bundles a full geoprocessing model (hydrologic models, change detection models, terrain analysis and portrayal...in fact, any gridded data processing model), not just a simple mathematical or pixel process!

We've added many "bells and whistles" to integrate the MASSIVE catalog of data that exists in APOLLO and make it VERY EASY for end users to know what data "loads" a WPS process. During the publishing workflow of a model from IMAGINE, the analyst user stores, for each model input, a CSW query against the catalog. That query provides the end user a "list" of valid data in the catalog that satisfies the model input requirements (e.g. is a multispectral image with NIR and red bands, or is terrain of > 10 meter resolution, etc.). The web client executes these CSW queries (along with a spatial domain query based on where the user is in the map) to display a list of valid inputs for wherever the end user is looking at the map!!
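Here is a minimal sketch of the kind of catalog query the web client might run, using standard CSW 2.0.2 KVP parameters with a CQL constraint. The endpoint and the constraint itself are assumptions for illustration, not APOLLO's actual catalog schema:

```python
import requests

CSW_URL = "http://example.com/apollo/csw"  # hypothetical endpoint

resp = requests.get(CSW_URL, params={
    "service": "CSW",
    "version": "2.0.2",
    "request": "GetRecords",
    "typeNames": "csw:Record",
    "resultType": "results",
    "ElementSetName": "summary",
    "constraintLanguage": "CQL_TEXT",
    "constraint_language_version": "1.1.0",
    # Hypothetical constraint: multispectral records within the current
    # map extent (the BBOX comes from where the user is looking in the map)
    "constraint": ("csw:AnyText LIKE '%multispectral%' "
                   "AND BBOX(ows:BoundingBox, -84.5,33.6,-84.2,33.9)"),
})
print(resp.text)  # matching catalog records, i.e. valid inputs for the model
```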

Combine the extreme power and ease of CREATING complex geoprocesses in the Spatial Modeler with the interoperable web service for executing those processes, and with a user experience that makes them EASY for a 'non remote sensing' web client user to run, and you have a secure, CONSUMER-based geoprocessing platform over the web!

Get ready for my WPS demonstrations coming soon!!! They're going to knock your socks off!