Web Services are not slow when used wisely, which boils down to choosing
the right architecture (the hardest part) and the right implementation
(things change often). Performance benchmarks are certainly interesting
for internal teams tracking progress in a continuous-build mode, but
since there's no industry-standard benchmark (à la SPEC) at this point,
any such benchmark results should be taken with a grain of salt. That
doesn't mean they're wrong, just that you can't compare results directly
across benchmarks. With that in mind, several Web Services-related
benchmarks have popped up lately discussing the relative merits of the
main Java Web Services stacks…
First, four days ago, the WSO2 folks (the main force behind Axis 2, as I
understand it) published a
benchmark comparing Axis 2 and Codehaus XFire. It didn't take
long before the XFire camp replied (via the BileBlog).
Now the GlassFish JAX-WS team is releasing
their own numbers (testing started long before WSO2 published
their results) vs. Axis 2. These tests use the same hardware and the
same driver talking to both Axis 2 and JAX-WS 2.1, and they show better
results across the board, from 30% to 100% better than Axis 2:
As discussed in the post comments, the JAX-WS 2.1 benchmark uses
Java 5 (update 10). A move to Java 6 improves GlassFish's performance
by a further 7-10%. The VM tuning certainly involved fewer options
than in WSO2's tests.
Obviously, in any Web Services benchmark, the binding part plays a
major role. JAXB 2 (the standard Java EE 5 API used by JAX-WS) has
really been kicking butt in terms of flexibility, performance, and
adoption (JBoss, BEA, and others are using it straight out of
GlassFish). Axis 2's support for JAXB is not ready yet, so the test was
run using XMLBeans, which shows good sustained marshalling/unmarshalling
performance on bigger and more complex data sets.
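To give a feel for what the binding layer does on every request, here is a rough sketch of the JAXB 2 programming model, using the 2007-era javax.xml.bind API (bundled with Java 6; a separate jar on other JDKs). The `Order` payload type is invented for the example; real services derive such classes from the WSDL/schema:

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.StringWriter;

public class JaxbSketch {

    // Hypothetical payload type; real services generate these from the schema.
    @XmlRootElement(name = "order")
    public static class Order {
        public int id;
        public String item;
    }

    public static String marshal(int id, String item) throws Exception {
        Order o = new Order();
        o.id = id;
        o.item = item;
        // JAXBContext is thread-safe but costly to build; servers and
        // benchmarks create it once and reuse it across requests.
        JAXBContext ctx = JAXBContext.newInstance(Order.class);
        Marshaller m = ctx.createMarshaller();
        StringWriter out = new StringWriter();
        m.marshal(o, out);
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(marshal(42, "widget"));
    }
}
```

The expensive step is building the `JAXBContext`, which is why sustained marshalling throughput (context reused, only marshal/unmarshal on the hot path) is what these benchmarks actually measure.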
Contrasting this with the WSO2
results: I had never heard of ADB
before, and I don't know how many people are using it, but it seems to
have schema limitations, which raises the question of what happens when
a contract-first approach with generic data (a protocol needs to be
data-independent) starts using XML Schema constructs that break ADB. Also,
granting that Axis 2 beats XFire in the WSO2 test, the Axis TPS figures
seem pretty low compared to what the JAX-WS team measured on Axis.
Maybe the server utilization isn't high enough (more important than
saturating the gigabit network, IMHO) and they're hitting some Tomcat
(HTTPd?) listener limitation. GlassFish has Grizzly.
Note that JAX-WS 2.1 is the version integrated into GlassFish v2,
which will shortly be in beta, with a final release before JavaOne. This
will package into a single product the JAX-WS 2.1 implementation together
with its natural extension WSIT,
which brings implementations of many WS-\* specifications (as shown here)
and great Microsoft WCF interop.
Of course, these results are by no means final:
– while some fairly big messages were exercised in the JAX-WS 2.1
benchmark, MTOM/XOP (more interoperable) and FastInfoset (better
performance) were not used. These could help achieve even better results.
– when XFire and Axis 2 get to JAX-WS conformance (they both have
plans to), it will be interesting to compare them all using the
same programming model.
– it would also be interesting to see how these numbers all compare to
– this all reminds me of Systinet's Web Services engine. Whatever happened to it?
– test with your own messages/payloads/architecture/load: this is the only benchmark that
really matters.
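For context on the MTOM point above: with the JAX-WS 2.1-era javax.* APIs, enabling MTOM/XOP on the server side amounts to a single annotation. The class and method below are invented for illustration:

```java
import javax.jws.WebService;
import javax.xml.ws.soap.MTOM;

// Hypothetical endpoint; @MTOM asks the runtime to negotiate MTOM/XOP
// so byte[] parameters travel as raw binary attachments rather than
// inline base64 text, which is what helps on larger payloads.
@MTOM
@WebService
public class AttachmentService {
    public int store(byte[] payload) {
        // Echo the payload size back, just to keep the sketch self-contained.
        return payload.length;
    }
}
```

Whether the wire actually uses MTOM is still negotiated with the client, which is part of what makes it the more interoperable option.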
JAX-WS 2.1 is not only about performance; it's also about developer ease
of use (stateful
Web Services, Maven support,
integration with multiple application frameworks including Spring,
just to name a few). See this
blog for more info. Want to test-drive JAX-WS 2.1? Download
the standalone bits from http://jax-ws.dev.java.net
or get GlassFish
v2 M4. In both cases, extensive and up-to-date documentation
can be found here.
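As a taste of that ease of use, a JAX-WS service is just an annotated POJO published on a lightweight embedded HTTP server; no deployment descriptor needed. The service name and port below are made up for the sketch:

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical one-class service showing the JAX-WS POJO programming model.
@WebService
public class Hello {
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Publish, print the WSDL location, then stop immediately;
        // a real service would of course keep running.
        Endpoint endpoint = Endpoint.publish("http://localhost:8765/hello", new Hello());
        System.out.println("WSDL at http://localhost:8765/hello?wsdl");
        endpoint.stop();
    }
}
```

The runtime generates the WSDL and the SOAP plumbing from the annotations, which is the programming model the conformance point above is about.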