Thursday, December 17, 2009

ERDAS APOLLO 2010 Proves Itself as the "Fastest" Raster Imagery Server

Chris Tweedie of our ERDAS Australia Region has recently reproduced the FOSS4G raster imagery performance benchmark tests using ERDAS APOLLO to compare the results, and has published them on his blog:

Here are the Results of ERDAS APOLLO 2010 Using the FOSS4G Test Scenario

We've all been really busy with the recent APOLLO 2010 release, but Chris made it a priority to get these numbers completed and published. Nice work Chris!

Chris did a great job of providing graphs of image format performance for each map server, as well as graphs of throughput and response times for each format on each map server, so the results can be compared by server or by format.

Some analysis of the results:

1. ECW is an imagery format purpose-built for performance. The throughput of ERDAS APOLLO with the ECW format is 2.5 times that of the open source servers at a 150-user load, and APOLLO can support a much higher load than 150 users, which wasn't represented in this test scenario. The throughput curve still had a steep positive slope at 150 users, while the other servers' throughput had already peaked at the 150-user measurement.

2. ERDAS APOLLO outperformed each open source server on every image format in the test, with the highest throughput (transactions/second) and lowest average response times (the time it takes to get the image onto your map).

ERDAS will be providing several webinars and whitepapers on the subject of "Raster Imagery Performance" to showcase ERDAS APOLLO 2010 and its performance capabilities.

Keep in mind that these numbers do not even showcase the faster imagery protocols we provide with ERDAS APOLLO....

ECWP and our Optimized Tile Delivery are MUCH faster and support much higher user loads than the WMS protocol (THOUSANDS of users on a standard server). In short, we can produce maps MUCH FASTER than the numbers published here, which used the "slowest" protocol that APOLLO 2010 provides.
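For reference, a WMS request like those used in the benchmark is just an HTTP GET with key/value parameters. Here's a minimal sketch in Python of building one; the endpoint and layer name below are hypothetical, not taken from the actual test:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, width=512, height=512,
                   srs="EPSG:4326", fmt="image/png"):
    """Build a WMS 1.1.1 GetMap request URL (KVP encoding)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only.
url = wms_getmap_url("http://example.com/apollo/wms",
                     ["cherokee_ortho"], (-84.6, 34.0, -84.3, 34.3))
```

Each such request returns one rendered map image, which is why throughput (requests served per second) and average response time are the two numbers the benchmark reports.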

Good work again Chris...

Thursday, October 15, 2009

How Does ERDAS Support Open STANDARDS?

ERDAS expends a large effort to support open standards and, in the end, interoperability with other software implementations that also support them. We invest in the development and implementation of open standards because, as an organization, we believe in our software products working seamlessly with any other geospatial package on the market, and in providing our customers with a large variety of deployment and design options for geospatial solutions. We also support open standards so that we are a compelling and viable option in ANY geospatial system under design, regardless of an organization's existing system vendor. Finally, we measure each component and feature we develop against any pertinent existing IT and geospatial standard to ensure that we maintain a high degree of interoperability.

So "how" does ERDAS support Open Standards then?

We support the entire array of IT and spatial standards. On the IT side of the house, we support a variety of operating systems, credential stores (LDAP, Active Directory, databases, etc.), application servers, databases, chipsets and virtualization environments (both hardware and OS) and, of course, the industry web service standards WSDL/SOAP/UDDI.

On the geospatial side of things, we support the web mapping standard WMS, the gridded data delivery service WCS, the vector feature delivery service WFS, the open catalog service CSW, the map context standard WMC, plus WRS, URN, OWS Common, Filter Encoding, and so on....
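To make "supporting a standard" concrete: the first thing any standards-based client does is read a server's capabilities document to discover what it offers. A minimal sketch of that step for WMS (the embedded XML fragment and layer names are hypothetical, trimmed for illustration):

```python
import xml.etree.ElementTree as ET

# A trimmed, hypothetical WMS 1.1.1 capabilities fragment.
CAPS = """<WMT_MS_Capabilities version="1.1.1">
  <Capability>
    <Layer>
      <Layer><Name>cherokee_ortho</Name><Title>2006 Ortho Imagery</Title></Layer>
      <Layer><Name>cherokee_parcels</Name><Title>2008 Parcels</Title></Layer>
    </Layer>
  </Capability>
</WMT_MS_Capabilities>"""

def list_layers(caps_xml):
    """Return (name, title) pairs for every named layer in a capabilities doc."""
    root = ET.fromstring(caps_xml)
    # Only layers with a direct <Name> child are requestable; group layers are skipped.
    return [(layer.findtext("Name"), layer.findtext("Title"))
            for layer in root.iter("Layer") if layer.findtext("Name")]

layers = list_layers(CAPS)
```

Because the document format is standardized, this same discovery code works against any conformant WMS server, which is the interoperability payoff being described.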

We also participate in the development of standards within the OGC standards body. There is a "cost" to participating in these standards development bodies, one ERDAS is more than willing to pay, both to ensure the standards meet our customers' use cases and to bring to the table a wealth of industry-proven knowledge so that, in the end, the standards meet market needs.

Why support Open Standards?

We do this because it's important to our existing customers and a market-driving initiative in the geo-market space. We do it because the standards are becoming mature and capable of meeting customer use cases (not just prototypes, but actually used in production environments). We also do it to proliferate the standards within the industry and prove their capabilities in extremely high volume, rapidly changing production environments.

Why point these facts out?

There's been quite a bit of "noise" conflating open standards with "open source", which can be confusing to non-IT decision makers, and a propensity among some open source pundits to argue that "commercial" = "proprietary" = "vendor lock-in" = an advantage for open source. This is definitely not the case in the market today, as ERDAS is not the only vendor ensuring a high degree of open standard support and interoperability.

This "noise" arises from the organization and productization of the disparate open source projects in the geo-space to create competitive marketing against the existing commercial products. Open source is now officially a "vendor" option for customers. ERDAS stands firm in our position in the market and our capability to provide highly interoperable, end-to-end geospatial processing and analysis chains, with market-proven maturity and a market-leading segment to PROVE it.

Wednesday, October 14, 2009

ESRI Bails on FOSS4G Performance Shootout

ESRI has withdrawn from the FOSS4G Performance Shootout. No official statement has been announced on the Performance Mailing List as to the reason why yet, just that they will no longer continue with the exercise.

Why would ESRI pull out so late in the game? The performance numbers were supposed to be completed this week! Why put in so many WEEKS of effort getting the system set up, and deal with all the "issues" encountered during the setup and testing period, only to "bail" right at the end?

I've been following the mailing list quite closely, and I have a couple of observations:

1. There was a lot of disorganization in the coordination of the hardware topology, testing procedures, data configuration, what data to use, what tests would be performed and WHO was responsible for what! It was a total free-for-all, with no scheduling, no clear responsibility and probably none of the "hand holding" necessary to keep ESRI engaged. It was left to ESRI to meander through the minefield of issues presented by the data, changes to the system, test scripts and hardware.

If you're ESRI and you feel like you just jumped into a circus, would you continue to parade in line with the rest of the show?

2. I am EXTREMELY glad I didn't waste any resources attempting to participate in what should have been an EXTREMELY interesting and engaging exercise.

I highly recommend that an independent organization provide: a non-biased, pre-approved methodology; a pre-approved dataset, with the opportunity to put the data in a vendor-recommended format (which every customer usually does anyway); a capable and diverse hardware set to limit the non-functional constraints; proven performance testing software and a methodology that measures LOAD, not just throughput; and the right to review and control whether the results are published. Under those conditions, there would be very high participation from ALL the available vendors.

And this is even after the publication announcing the "shootout", which I felt was aimed at throwing ESRI under the bus anyway.

What a disaster...

Thursday, October 8, 2009

WorldView II Successfully Launched!!

So the WorldView II sensor is in orbit! I can't wait to get some of the products into APOLLO and the Web Processing Service!!

The sensor has additional spectral bands that are REALLY needed for lots of algorithms that could previously only use LANDSAT TM and MODIS imagery...but WorldView II will do it at 46 cm!

I will definitely be showing that data off on the demo site as soon as we gain access to it!

Friday, September 18, 2009

Geospatial Performance Benchmarks "Apples to Apples"

I get a lot of requests for performance benchmarks of APOLLO vs. lots of other systems. We provide extremely analytical and detailed performance results for our server with every release. A couple of issues always crop up in any performance benchmark:

1. Features - first of all, NO competitive product can DO what APOLLO does, so I find myself either A) dumbing down APOLLO and the test just to be able to do an "apples to apples" comparison, or B) not making it "apples to apples" at all.

2. Return on Investment - ROI on software is not just performance, but HOW LONG IT TOOK TO SET UP, ADMINISTER and GET the feature operational in a production scenario!! I find myself spending a HUGE part of my time just getting the competitive software to "work" enough to do the test.

I've been following FOSS4G's Web Mapping Shootout announced for their 2009 conference. I get a really HUGE chuckle because their "shootout" couldn't be performed on a more CARTOON set of data and a more NON-REALISTIC use case. I don't know ONE client that requires one image and a handful of vector data sets (3, to be precise).

Our smallest benchmark has 459 seven-band images...choke on that, Open Source.

They should call it a "water gun fight" instead of a "Shootout".

Also, what will NOT be collected in the "shootout" is how long it took them to set up the system and service-enable the data. How many "WEEKS" are you willing to struggle with that?

PERFORMANCE is about ROI on the investment and, of course, the ability of the system to handle a user load. Weigh both when you're looking at the numbers!!


ERDAS APOLLO 2010 was released at the beginning of this week!! For the ERDAS Enterprise Products, we have a very detailed BETA Web Page providing procedures, documentation, short video tutorials (*new*), announcements and FAQs. It's not too late; contact your ERDAS Sales Rep to participate!!

The first week has been a GREAT week! The Geoprocessing workflow feedback has been excellent!! What I'm particularly VERY PROUD of in this release is not just that we have an interoperable WPS that can perform AMAZINGLY COMPLEX spatial workflows within a single process, but that the WORKFLOW of the WPS is so smooth and easy to use. From the publishing experience for analyst users (THANKS IMAGINE TEAM), to the management of processes on the server, to the execution of models through a thin client (web browser) by end users who don't know imagery or remote sensing AT ALL...the end-to-end use case is AWESOME!!

If your end users need ADVANCED spatial modeling and INTEROPERABLE delivery of data and spatial modeling products...AND you're looking at MAN-YEARS of customization of your existing proprietary system, or NEVER-ENDING full-blown core development on an extremely limited, disconnected OPEN SOURCE project (because these are the only other options that exist today), ERDAS IS YOUR SOLUTION!!

Saturday, August 15, 2009

Maintaining a Schedule with an Agile Software Development Methodology

As a commercial software vendor, it is EXTREMELY important to maintain a SCHEDULE and deliver software on the date that we communicate to the market that the software will be released. This is critical because we have existing customers who pay software maintenance and expect on a regular basis:

1. bug fixes
2. minor feature improvement/enhancements to existing features
3. resolutions to any workflow issues they may have reported
4. new features at major releases

It is also critical to deliver new major features to the market on time, to enable the sales force to meet demand and capture the sale! If you deliver a feature a year after another software vendor has already released the equivalent feature, you put your sales force at a disadvantage, as they are playing catch-up against the competition.

ERDAS targets two releases per year (a minor release in Q1 and a major release in Q4).

It is far too easy to get trapped in "release date creep" in an Agile Software Development Methodology. There are several reasons for this...

1. The very nature of the methodology is user-experience driven, not architecture driven like a waterfall methodology
2. It is easy to "not" do architecture or take the "big picture" into account before implementing a use case. This sometimes leads to the inclusion of "technical debt" in the software when the final implementation doesn't "fit" into a holistic architecture. This results in the need to refactor the software at a later time to "clean up" the architecture and/or create harmony across similar use cases.
3. The use case may be implemented, but the resulting performance and non-functional requirements were not accounted for...only the ability of the actor to complete the use case was focused upon. For example, the use case works on Oracle but not PostgreSQL, or works in IE6 but not Firefox.

In short, analyzing "what it will take to implement a use case" when planning a sprint can be difficult for very large feature sets, and for feature sets that span multiple tiers of the software (i.e. database, server, middleware, clients).

In previous blog posts, I mentioned our own software development teams' use of the "Spike" in our methodology to flesh out technical unknowns and document architectural requirements before implementing use cases. This process has been a great tool for our development teams within the Agile methodology.

Besides technically fleshing out architecture and technology before implementing use cases, it's important to take a step back and look at the release cycle as a whole, and its stages, to ensure that the software meets not only the user experience but also the non-functional categories of quality, performance and OS/DB/app server needs.

Rather than develop the use cases "just in time" on a sprint by sprint basis, our Product Management Teams develop all the use cases for a release cycle BEFORE the release cycle begins. This allows the development team to understand the system under design as a whole and also for the Product Management teams to clearly and concisely present to the development teams what we expect the system actors to be able to accomplish when the software is released. Building a "Release" backlog of use cases really enables the development teams to consider architectural dependent use cases, understand the software as a whole and choose appropriate technology to meet all of the use cases, not just incorporate technology at run time during sprints.

We also provide ample time at the end of the release cycle for software stabilization (bug and improvement issues that provide quality to the software and ensure the software meets performance and the non functional requirements). Completion of the new features and stabilization signifies a "Feature Complete" state of the software, where the teams agree that the software could be released to the market.

We're still not "done" at that point!! The software goes through a final QA pass, which it must pass to be released, then Acceptance Testing and BETA; finally, if AT and BETA do not reveal any critical bugs, the software goes into a box and is delivered to the market.

In short, if you are new to the Agile Methodology, MAKE A DATE that you expect the software to be released, MAKE A PLAN that adequately will enable your development team to meet that date and MAKE ROOM to test the quality and performance of the software BEFORE releasing it to the market!

Friday, August 14, 2009

ERDAS APOLLO 2010 OGC Web Processing Service (WPS)

The ERDAS APOLLO 2010 release coming this October will have an OGC Web Processing Service (WPS). This OGC standard is not as well known as the Web Mapping Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS) or Catalog Service for the Web (CSW), so here's some information about this service and what ERDAS has done to build the most powerful WPS available on the market!

What is the Web Processing Service?

"The WPS standard defines an interface that facilitates the publishing of geospatial processes and makes it easier to write software clients that can discover and bind to those processes. Processes include any algorithm, calculation or model that operates on spatially referenced raster or vector data. Publishing means making available machine-readable binding information as well as human-readable metadata that allows service discovery and use.

A WPS can be used to define calculations as simple as subtracting one set of spatially referenced data from another (e.g., determining the difference in influenza cases between two different seasons), or as complicated as a hydrological model. The data required by the WPS can be delivered across a network or it can be made available at the server. This interface specification provides mechanisms to identify the spatially referenced data required by the calculation, initiate the calculation, and manage the output from the calculation so that the client can access it.

The OGC's WPS standard will play an important role in automating workflows that involve geospatial data and geoprocessing services." Quoted from the OGC Web Site

What is the WPS in APOLLO?

The WPS in APOLLO is the interoperable web service exposed by the server that lets client applications discover, publish and execute geoprocesses. At a higher level, the WPS is integrated into a web-based geoprocessing workflow where end users can execute extremely powerful WPS processes through a web client "on demand".

Highlighting the consumer user's use case: they will be able to navigate anywhere in the map, discover the WPS processes they have been granted the security right to execute, select a process, discover the PROPER data with which to execute it, execute it "on the fly" and receive output "data" that can be immediately mapped and downloaded after the WPS process has completed.
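Under the hood, "execute a process" means the web client posts a WPS 1.0.0 Execute request to the server. Here's a rough sketch of how such a request body is assembled; the process name and input identifiers are made up for illustration and are not actual APOLLO process names:

```python
# Sketch of a WPS 1.0.0 Execute request body with literal inputs.
EXECUTE_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<wps:Execute service="WPS" version="1.0.0"
    xmlns:wps="http://www.opengis.net/wps/1.0.0"
    xmlns:ows="http://www.opengis.net/ows/1.1">
  <ows:Identifier>{process}</ows:Identifier>
  <wps:DataInputs>{inputs}</wps:DataInputs>
  <wps:ResponseForm>
    <wps:RawDataOutput><ows:Identifier>result</ows:Identifier></wps:RawDataOutput>
  </wps:ResponseForm>
</wps:Execute>"""

INPUT_TEMPLATE = ("<wps:Input><ows:Identifier>{name}</ows:Identifier>"
                  "<wps:Data><wps:LiteralData>{value}</wps:LiteralData></wps:Data>"
                  "</wps:Input>")

def build_execute(process, inputs):
    """Fill the Execute template with one Input element per (name, value) pair."""
    body = "".join(INPUT_TEMPLATE.format(name=k, value=v)
                   for k, v in inputs.items())
    return EXECUTE_TEMPLATE.format(process=process, inputs=body)

# Hypothetical change-detection process with two scene inputs.
request = build_execute("NDVI_Change", {"scene_before": "landsat_1999",
                                        "scene_after": "landsat_2008"})
```

The client then POSTs this XML to the WPS endpoint and receives the process output (or a reference to it) in the response.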

The WPS in APOLLO is EXTREMELY powerful because we have enabled the IMAGINE Spatial Modeler Engine within the WPS process execution framework! So what does that mean...

In short, analyst actors in the system will be capable of graphically designing complex spatial models and algorithms in the IMAGINE Spatial Modeler, creating chained spatial model workflows, and publishing these workflows to the APOLLO WPS for execution by consumer end users!!! This means that a single WPS process bundles a full geoprocessing model (i.e. hydrologic models, change detection models, terrain analysis and portrayal, in fact any gridded data processing model), not just a simple mathematical or pixel process!

We've added many "bells and whistles" to integrate the MASSIVE catalog of data that exists in APOLLO and make it VERY EASY for end users to know what data "loads" a WPS process. During the publishing workflow of a model from IMAGINE, the analyst stores, for each model input, a CSW query against the catalog that provides the end user a "list" of valid data within the catalog that satisfies the model input requirements (i.e. is a multispectral image with NIR and red bands, or is terrain of better than 10-meter resolution, etc.). The web client executes these CSW queries (along with a spatial domain query based on where the user is in the map) to display a list of valid inputs for wherever the end user is looking at the map!!
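The stored-query idea above can be sketched as a CSW GetRecords request that combines the analyst's saved constraint with the user's current map extent. The endpoint and the CQL property names below are illustrative assumptions, not APOLLO's actual catalog schema:

```python
from urllib.parse import urlencode

def csw_getrecords_url(base_url, stored_constraint, bbox):
    """Sketch of a CSW 2.0.2 GetRecords KVP request: the stored model-input
    constraint is ANDed with a bounding-box filter for the current map view."""
    cql = "{} AND BBOX(ows:BoundingBox, {})".format(
        stored_constraint, ",".join(str(v) for v in bbox))
    params = {
        "service": "CSW",
        "version": "2.0.2",
        "request": "GetRecords",
        "typeNames": "csw:Record",
        "constraintLanguage": "CQL_TEXT",
        "constraint": cql,
        "resultType": "results",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical stored constraint: "multispectral imagery with at least 4 bands".
url = csw_getrecords_url("http://example.com/apollo/csw",
                         "type = 'multispectral' AND bands >= 4",
                         (-84.6, 34.0, -84.3, 34.3))
```

The records returned are exactly the catalog entries valid as inputs for that process at that map location, which is what populates the end user's pick list.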

Combine the extreme power and ease of CREATING complex geoprocesses in the Spatial Modeler, the interoperable web service for executing them, and the EASE of the user experience for a "non remote sensing" web client user, and you have a secure, CONSUMER-oriented geoprocessing platform over the web!

Get ready for my WPS demonstrations coming soon!!! It's going to blow your socks off!

Wednesday, June 3, 2009

Get a Preview of What ERDAS Is Working On For the Next Release!!

We released the ERDAS Labs website today, which provides a "peek" at what we are working on for this September's ERDAS 2010 Software Suite!

I am really excited about the ERDAS IMAGINE 2010 Ribbon Interface, LPS eATE and of course, the Web Processing Service (WPS)!!

ERDAS Labs was created to let our existing and potential customers see what we are developing, through feature synopses and demo videos. The site elicits comments and direct developer feedback for each showcased feature!

Check out just a few of the ERDAS 2010 major features!

Tuesday, June 2, 2009

Do you need data to go??

Gridded Data Provisioning Solution built on ERDAS APOLLO Image Manager...

Check it out!

Thursday, May 28, 2009

The Spike - Quality Software and Agile Software Development

There is a balance between delivering use cases to the market and maintaining overall software quality in an Agile software development project. It's extremely easy to get "trapped" into the "tunnel vision" of providing user-facing features as fast as possible, and to quickly encounter architectural deficiencies and technical debt that impact the overall quality and performance of the software.

Agile does not mean NO architectural design! Many teams just learning agile have difficulty understanding how design "fits" into the use case or user story methodology.

Here at ERDAS, we utilize planned "spikes" and design sessions, scheduling them in iterations before implementation of the user/system interaction, to ensure that the resulting implementation builds soundly upon our architecture.

A SPIKE is a "technology" discovery process. It can be a research project into technologies or algorithms; an evaluation, benchmark or prototype of technologies to find a "best fit"; or a discovery process over existing algorithms and architecture to provide man-day estimates or a "Level of Effort" for completing some use case. We effectively use SPIKES to address the "unknown" or "uncertain", dedicate time to make it known, determine how long it will take to satisfy a use case, and always report the results of every spike in the Iteration Review.

We also enforce a "time capped" rule on spikes. This rule allocates a fixed amount of time to "discover" what we want to know. If a blocking issue is encountered along the way, we can always increase the duration of the spike, but we very seldom do. Time capping the spike enables detailed planning, ensuring we avoid "creep" in a discovery process and stay on schedule.

OpenGeo Team is "faster and better" than anyone else in the world???

I really get a kick out of the self-promotional proclamation on the OpenGeo Team page that the OpenGeo Team is "...faster and better than anyone else in the world..." at solving geospatial problems.

I recommend a "geospatial" Academic Decathlon!

ERDAS has been solving "real world" National Mapping Agency geospatial workflows for decades now. How are you "faster" and/or "better" than the world-class geospatial scientists, remote sensing scientists and developers at ERDAS?

Should we measure this based on software revenue? Possibly an 'apples to apples' comparison of products and satisfied use cases? "Challenge" each team with a use case to satisfy (FULLY!!)? How about number of supported sensors and formats? Or what about a third party review of resumes?

That's just a ridiculous statement, guys...

Thursday, May 21, 2009

The Price of "FREE" Open Source Software has really become Expensive!!

I was looking at the OpenGeo Version Matrix, and the price to "buy in" to open source geospatial software has really become crazy! It appears the line between capitalist and geospatial philanthropist has really become blurred. It's more expensive to buy into open source than to purchase COTS software today!

$70,000 for 300 hours of service!!!!!!!!!!! OMG!

I run into so many clients who are "hamstrung" by open source solutions, being funnelled into a bottomless money pit. No doubt, the "hook" to lure people into the evaluation stage is there with the "free" pitch, but the REALITY of what it will take to really meet requirements smacks you in the face immediately.

The "try it...figure out what you really want...then pay me 70K" open source business model is a bit crazy.

Always remember, you buy into it, YOU MAINTAIN it for the rest of your life. OUCH!

The market is begging for a vendor to pick up the ball here...luckily, ERDAS is HERE!

Try the "out of the box" SDI that works, backed by a WORLD CLASS development, support and product management team and real-world PRODUCTIZED features, and evaluate the difference for yourself!


Can somebody calculate an ROI for me immediately!

Tuesday, May 19, 2009

Cherokee County, Georgia...the MOST MAPPED SPOT ON EARTH??

For the ERDAS Enterprise Products Public Demo Site, we've been very fortunate to leverage our data vendor business partners and the Cherokee County GIS Team to collect lots of vector, terrain and imagery data for this area of interest, and to serve this data through our enterprise products. For one, ERDAS global headquarters is in Norcross, Georgia, which puts it in close proximity to Cherokee County; and second, it's just a pretty nice place (that has gone through massive change over the past 10 years).

Let's take an inventory of the data that we've collected:

1. LANDSAT 1, 4-5 and 7 scenes from 1973-2008 (multispectral and panchromatic)
2. Digital Ortho Quarter Quads from 1999
3. Airborne 2006 high resolution ortho imagery
4. IKONOS imagery from 2000-2008 (multispectral and panchromatic)
5. USGS Digital Raster Graphics at 1:24K, 1:100K and 1:250K
6. SPOT scenes from 1999-2008 (multispectral and panchromatic)
7. National Land Cover Dataset from 1992 and 2001
8. 2008 vectors of roads, parcels, land lots, zoning, buildings, etc. from Cherokee County, Georgia

on and on and on and on....

Note that all of the imagery and terrain is being served from a SINGLE web service endpoint (of course, you only gain access to the public layers by clicking on this).

The vectors are being hosted from an Oracle 11g database with the Spatial option, with no proprietary middleware, proprietary SDK or proprietary data model required. JUST Oracle Spatial, please!
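That "no middleware" point means any client can query the vectors with plain SQL against Oracle Spatial. A rough sketch of a window query (the table and column names are hypothetical stand-ins for the demo schema):

```python
# Build an Oracle Spatial window query using SDO_FILTER with an
# optimized-rectangle window geometry (etype 1003, interpretation 3).
def parcels_in_window_sql(table, geom_col, minx, miny, maxx, maxy, srid=4326):
    """Return SQL selecting features whose geometry intersects the window."""
    return (
        "SELECT parcel_id, {g} FROM {t} "
        "WHERE SDO_FILTER({g}, SDO_GEOMETRY(2003, {srid}, NULL, "
        "SDO_ELEM_INFO_ARRAY(1, 1003, 3), "
        "SDO_ORDINATE_ARRAY({minx}, {miny}, {maxx}, {maxy}))) = 'TRUE'"
    ).format(g=geom_col, t=table, srid=srid,
             minx=minx, miny=miny, maxx=maxx, maxy=maxy)

sql = parcels_in_window_sql("cherokee_parcels", "geom",
                            -84.6, 34.0, -84.3, 34.3)
# The statement can be executed with any Oracle client library,
# e.g. cursor.execute(sql) via cx_Oracle; no vendor SDK in the path.
```

No application-tier object model is needed to get at the geometry, which is the interoperability argument being made.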

All of this data over all of these timeframes in one location made me ponder: is Cherokee County, Georgia now the MOST MAPPED AREA IN THE WORLD?!

We are using our business relationships to collect even more data over the area, so stay tuned and see how many "sensors" we can collect over a single area!!

There are some really EXCITING FEATURES coming in the ERDAS APOLLO 2010 release this September, so GET READY to see Cherokee County, Georgia like you've never seen it before!!!

Friday, May 15, 2009

Is the Geospatial World devoid of Performance Benchmarks??

Why are performance benchmarks so elusive for enterprise geospatial software? Every Request for Proposals for enterprise geospatial software requires some level of "performance" to be reported. Commonly, RFPs request the expected number of concurrent users, the throughput and "load" the system can handle, and indirectly a hardware set to be recommended based on a purported number of users of the system.

Here at ERDAS, we invested in HP LoadRunner to design an enterprise performance testing system that is the best I've ever seen in my history with enterprise software. Unlike many vendors, we don't report "predictive" numbers; we produce and report ACTUAL performance numbers on real-world enterprise systems under design! We also use the testing setup internally to determine "things to improve" and the impact of individual features (i.e. portrayal or reprojection) against a known baseline. Just to be forewarned, the setup was not cheap and was not easy to implement properly, but the stability, flexibility, repeatability of test "scenarios" and the RESULTS produced are AWESOME!
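To make the two headline metrics concrete, here is a toy illustration (not LoadRunner, just a sketch) of how a load test arrives at throughput and average response time: N concurrent users each fire requests in a loop, every request is timed, and the totals are divided out. The `send_request` callable is a stand-in for a real map request:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(send_request, users=10, requests_per_user=20):
    """Run `users` concurrent workers, each issuing `requests_per_user`
    requests, and report throughput (transactions/second) and the
    average response time across all requests."""
    durations = []  # list.append is thread-safe in CPython

    def worker():
        for _ in range(requests_per_user):
            t0 = time.perf_counter()
            send_request()
            durations.append(time.perf_counter() - t0)

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(worker)
    elapsed = time.perf_counter() - start  # pool exit waits for all workers

    total = users * requests_per_user
    return {"throughput_tps": total / elapsed,
            "avg_response_s": sum(durations) / len(durations)}

# Stand-in for a real WMS GetMap call over HTTP.
stats = run_load_test(lambda: time.sleep(0.001))
```

Real tools like LoadRunner add scenario scripting, ramp-up schedules and resource monitoring on top of this basic shape, but the reported numbers mean the same thing.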

All I know is that I "FROTH" at the opportunity to stand APOLLO up against any system on the market today! We've "handily" beaten several competitors in "head to head" evaluations and always meet our documented performance results. I attribute this to our investment in performance testing setups and the performance test scenarios that are required to pass before every release. The testing setup has proven INVALUABLE in succinctly diagnosing performance issues, ensuring our performance at release time meets our standards, and reporting to customers the performance they should expect...and then MEETING that expectation!

Wednesday, May 13, 2009

ESRI vs. OGC Community

ESRI is really pushing hard on proliferating their own PROPRIETARY services with their 9.3.x Server offering, recommending them over standards-based interoperable web services to the GIS community "at large".

I am also PUBLICLY stating that their support for the OGC services (especially CONSUMING them in their clients) is VERY WEAK functionally as a feature set, and the performance is very, very poor. The ERDAS APOLLO Image Manager Web Client is a much better user experience and is faster at consuming OGC services than ArcMap!!

Let's put ourselves in their shoes and dwell on why this would be.

ESRI currently holds the largest market share in the GIS domain, and they have every intention of keeping it. For the market leader, making the OGC services actually work equivalently to their proprietary services has the potential of marginalizing and commoditizing feature sets in the GIS market, leaving "opportunity" to those who support the interoperable services. Forcing the customer to "have" to use proprietary services and SDKs to meet their use case is also in ESRI's interest, as it creates vendor lock-in on both the server and client side. It's quite easy to say that the OGC services aren't "rich" enough to provide the use cases that clients need when their only experience with them is extremely limited and the performance is very slow (as experienced in their software today). They also have no interest in a "governing" body controlling technology decisions and/or application profiles on the technical side.

OGC services, on the other hand, need to provide the user experience and the PERFORMANCE that proprietary services do. In my opinion, this can only be provided by the geospatial vendors. The open source projects don't have the wealth of domain experience, existing codebase and market experience to do this. They will of course provide a user experience, but at very poor performance.

Enter...VENDORS SUPPORTING THE STANDARDS AND DOING IT RIGHT! ERDAS has supported the OGC standards in an extremely MEANINGFUL and HIGH PERFORMANCE manner. Our OGC services are CITE certified, and "under the hood" we provide the depth and richness of format support, sensor model support, workflow and an out-of-the-box end user experience, in a single product, that is expected of a commercial vendor.

If you really want to see the OGC services FLY on TERABYTES worth of heterogeneous data with real-world use cases...the APOLLO Enterprise Suite is what you're looking for.

Tuesday, May 12, 2009

The ESRI Geodatabase Proprietary cluster

I usually don't complain, but this time I've had it up to my eyeballs with the inability to work with the ESRI geodatabase without using their proprietary SDKs. I've developed with ArcObjects for over a decade now, so it's not a matter of "complexity"; it's simply an issue of a total lack of interoperability!

The "marketecture" on their website speaks of interoperability and IT standards, yet they don't allow anybody to access the data stored in their PROPRIETARY storage format...say one thing, do another.

Don't get me wrong, I'm a huge fan of the FEATURES of the geodatabase, but I've had it with having to use ArcObjects to work with what should simply be free flowing GI.

So there is supposed to be a published specification for the "file" geodatabase in the 9.4 release. Great...but what about the DB-persisted "enterprise" geodatabase? It must only be a "simple feature" specification, as all the behavior of objects lives in the application tier. I'm looking forward to implementing the real Simple Feature Specification on top of whatever specification they provide....ughhh.

The "marketecture" should read, "We are totally interoperable...with ourselves only"!!!! (picture that caveat disclaimer in very small type...said under my breath as a reality check).

Monday, April 13, 2009

Call for Content

This blog has a variety of readers interested in different disciplines: enterprise, product management and methodology, and ERDAS product-related content. This is an open post to request from you, the reader, what subjects you would like to hear or read about.

Just add your comment to this post and I will make sure to reply to your request.

SOAP vs. REST in Geospatial Applications

It's interesting to see the decision-making process at organizations evaluating geospatial technologies. One area where I see a lot of "effort" from IT and developer stakeholders in geospatial software purchases is the form, function and ROI of REST versus SOAP. This is being driven on the enterprise side of the house, where the choice of which technology to invest in is essentially theirs.

The ESRI Developer Summit keynote provides a good technology overview by David Chappell on "SOAP vs. REST". For people at the decision-maker level trying to understand the technology and its "keywords", or end users trying to get a better understanding of the technology, this is a good starting point.

Two areas that are always analyzed and heavily weighed in any evaluation are INTEROPERABILITY and SECURITY, and this is where SOAP clearly becomes the "leader of the pack" among today's technologies. These two categories are scrutinized with any technology. The requirement for proprietary REST client interfaces is a real drawback of the technology. Why would I want to integrate a proprietary interface into everything that consumes geospatial data, and take on a proprietary client integration effort just to 'work' with my GI and services? The pushback in that process is definitely "loud and clear" from the market. In terms of security, you CAN secure REST endpoints; that is not a problem. But the lack of any standard for doing so, and/or the client interfaces' ability to properly handle security realms, must be accounted for; you shouldn't have to 'poke' at it to find out if it works. Again, same theme: proprietary is a big blocking issue there.

Mr. Chappell raises a good point in that SOAP is not "easy" to manage from the client side of the house, as handling XML isn't the easiest or most elegant task. Fortunately, in the GI space everybody already supports the standardized services (WMS, WCS, WFS, etc.), so standard interoperable client interfaces that handle the "effort" are widely proliferated and highly available; I think that point is rather 'moot'.
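To make that concrete, here is a minimal sketch of what a standardized OGC request looks like on the wire: a WMS GetMap call is just a plain KVP-encoded URL. The server endpoint and layer name below are made up for illustration, but the parameters (SERVICE, VERSION, REQUEST, LAYERS, BBOX, WIDTH, HEIGHT, FORMAT) come from the OGC WMS specification and work the same against any compliant server; no proprietary client interface required.

```python
from urllib.parse import urlencode

def build_getmap_url(base_url, layers, bbox, width, height,
                     fmt="image/png", srs="EPSG:4326"):
    """Build a WMS 1.1.1 GetMap request URL using standard KVP encoding."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),      # comma-separated layer names
        "STYLES": "",                    # default styles
        "SRS": srs,                      # spatial reference system
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer; any WMS-compliant server accepts this shape.
url = build_getmap_url("http://example.com/wms", ["imagery"],
                       (-180, -90, 180, 90), 512, 256)
print(url)
```

Any standards-aware client (a browser, a desktop GIS, a thin web viewer) can issue this request without vendor-specific glue code, which is exactly the interoperability argument above.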

Don't get me wrong, REST is a GREAT technology; it simply needs some standards bodies to define how and where it will be used in order to get the same ROI and proliferation as the well-defined SOAP services of today. This process is firing up with the OGC right now. Hope they conjure up something useful... :)

Friday, April 10, 2009

EAIM 2009 R2 Released

The ERDAS APOLLO Image Manager 2009 R2 has officially been RELEASED!! The performance improvements are incredible! HUGE Volumes of imagery in their original formats delivered even faster! Please request the EAIM Performance Benchmarks Whitepaper to see the numbers...I'm very, very happy with the results.

The new high-performance catalog with a CSW interface has made a huge difference in all aspects of the software: not only in the user experience (WMS and CSW catalog search response times), but also in the ability of a single server to handle a large load of web clients, concurrent GI crawlers and CPU-intensive jobs (i.e. Clip, Zip and Ship).
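Because the catalog speaks standard CSW, a search is just another HTTP request. As a rough illustration (the endpoint URL below is hypothetical; the parameter names are from the CSW 2.0.2 KVP binding), this is the kind of GetRecords call any CSW client would issue against the catalog:

```python
from urllib.parse import urlencode

def build_csw_getrecords_url(base_url, max_records=10):
    """Build a CSW 2.0.2 GetRecords request URL using standard KVP encoding."""
    params = {
        "service": "CSW",
        "version": "2.0.2",
        "request": "GetRecords",
        "typeNames": "csw:Record",       # query the generic Dublin Core record type
        "elementSetName": "summary",     # return summary-level metadata
        "resultType": "results",         # return actual records, not just a hit count
        "maxRecords": max_records,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical CSW endpoint; the same request shape works on any compliant catalog.
url = build_csw_getrecords_url("http://example.com/csw")
print(url)
```

The response is an XML document of matching metadata records, so the same request works whether the catalog holds ten datasets or millions.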

GET READY for the market previews of the ERDAS APOLLO Web Processing Service (WPS)!!!! I've never been so excited about a software feature before!! A preview will be coming in the NEW ERDAS Labs website soon, so stay tuned!!