Why are performance benchmarks so elusive for enterprise geospatial software? Every Request for Proposals (RFP) for enterprise geospatial software requires some level of "performance" to be reported. Commonly, RFPs ask for the expected number of concurrent users, the throughput and "load" the system can handle, and, indirectly, a recommended hardware configuration sized for the purported number of users the system will serve.
Here at ERDAS, we invested in HP LoadRunner to build an enterprise performance testing system that is the best I've ever seen in my history with enterprise software. Unlike many vendors, we don't report "predictive" numbers; we produce and report ACTUAL performance numbers on real-world enterprise systems under design! We of course use the testing setup internally to identify things to improve and to measure the impact of individual features (e.g., portrayal or reprojection) against a known baseline. Fair warning: the setup was not cheap, and it was not easy to figure out how to implement properly, but the stability, flexibility, and repeatability of the test "scenarios" and the RESULTS produced are AWESOME!
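LoadRunner scenarios are proprietary and far richer than anything I can show here, but if you want a feel for what a concurrent-user test actually measures, here is a minimal Python sketch of the idea: N simulated users hammer a map service while we record throughput and latency. The endpoint URL and parameters are hypothetical placeholders, not ERDAS APOLLO specifics.

```python
# Minimal sketch of a concurrent-user load test. The WMS URL below is a
# hypothetical placeholder, NOT an actual ERDAS APOLLO endpoint.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

WMS_URL = ("http://example.com/erdas-apollo/vector/WMS"
           "?SERVICE=WMS&REQUEST=GetMap&VERSION=1.1.1"
           "&LAYERS=demo&SRS=EPSG:4326&BBOX=-180,-90,180,90"
           "&WIDTH=256&HEIGHT=256&FORMAT=image/png")

CONCURRENT_USERS = 20
REQUESTS_PER_USER = 50

def simulated_user(_):
    """One virtual user: issue requests back to back, recording each latency."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(WMS_URL, timeout=30) as resp:
            resp.read()  # drain the body so the server does the full work
        latencies.append(time.perf_counter() - start)
    return latencies

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_latencies = [lat for user in pool.map(simulated_user, range(CONCURRENT_USERS))
                     for lat in user]
elapsed = time.perf_counter() - start

print(f"requests:   {len(all_latencies)}")
print(f"throughput: {len(all_latencies) / elapsed:.1f} req/s")
print(f"median:     {statistics.median(all_latencies) * 1000:.0f} ms")
print(f"p95:        {sorted(all_latencies)[int(len(all_latencies) * 0.95)] * 1000:.0f} ms")
```

A real scenario layers on think time, ramp-up schedules, mixed request types, and server-side monitoring, which is exactly where a tool like LoadRunner earns its price tag.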
All I know is that I "FROTH" at the opportunity to stand APOLLO up against any system on the market today! We've "handily" beaten several competitors in head-to-head evaluations, and we always meet our documented performance results. I attribute this to our investment in performance testing setups and to the performance test scenarios that are required to pass before every release. The testing setup has proven INVALUABLE for quickly diagnosing performance issues, ensuring performance at release time is up to our standards, and reporting to customers the performance they should expect...and then MEETING that expectation!
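The "required to pass before every release" part is conceptually simple, even if the infrastructure behind it isn't. Here's an illustrative sketch of a release gate, with invented numbers, comparing a scenario's measured throughput against the previous release's baseline:

```python
# Illustrative release gate: fail the build if throughput regresses past a
# tolerance. All values here are invented for the example.
BASELINE_THROUGHPUT = 120.0   # req/s recorded for the previous release
TOLERANCE = 0.05              # allow 5% run-to-run noise

measured = 118.4              # would come from a run like the sketch above

if measured < BASELINE_THROUGHPUT * (1 - TOLERANCE):
    raise SystemExit(f"FAIL: {measured:.1f} req/s is below baseline "
                     f"{BASELINE_THROUGHPUT:.1f} req/s ({TOLERANCE:.0%} tolerance)")
print(f"PASS: {measured:.1f} req/s within tolerance of baseline")
```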