

Embedding Cassandra into your JavaEE AppServer

Recently I started some tests with Apache Cassandra. I was honestly impressed by its ability to scale horizontally and therefore handle increasing load. Among its main virtues are:

  • Ability to replace failed nodes without downtime
  • Complete absence of a single point of failure: every node is functionally identical

not to mention a few other features.
But still there is one point that bothers me: I still see it and its API as a foundation thing. Something like JGroups turned into. JGroups is largely used nowadays but rarely directly by the application developer. You use it indirectly when you use a clustered JBoss or JBoss TreeCache or Infinispan.
One of the responsibilities that still lies with the application developer is failover capacity. When your application connects to a Cassandra database node, it still needs to fetch the rest of the cluster's nodes through the JMX API; otherwise, if that node fails, your application won't know how to reconnect even though the Cassandra cluster itself is still up and healthy. Another possibility (and in fact a recommendation even if you do retrieve this node list) is to keep a list of reliable servers to use as bootstrap connection servers. But remember Murphy's Law: even all the servers on this list may go down, so you still need to retrieve the whole node list in order to fail over.
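The bookkeeping described above can be sketched as follows. This is a minimal illustration, not the connector itself: the node names are placeholders, and the `isUp` check stands in for actually opening a Thrift connection and querying the ring.

```java
import java.util.List;
import java.util.function.Predicate;

// Hedged sketch of the failover bookkeeping a JCA connector would need:
// try a bootstrap list first, and refresh the full ring from whichever
// node answers.
public class NodeFailover {

    // Returns the first reachable node, or null if every node is down.
    static String firstReachable(List<String> nodes, Predicate<String> isUp) {
        for (String node : nodes) {
            if (isUp.test(node)) {
                return node;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> bootstrap = List.of("cass1:9160", "cass2:9160", "cass3:9160");
        // Pretend cass1 is down; a real check would open a Thrift connection.
        Predicate<String> isUp = n -> !n.startsWith("cass1");
        String chosen = firstReachable(bootstrap, isUp);
        System.out.println(chosen); // cass2:9160
        // After connecting, the connector would ask 'chosen' for the full
        // node list (via JMX) and replace 'bootstrap' with it.
    }
}
```

Note that this list maintenance has to run continuously, which is exactly the duplicated work discussed below.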
So, in summary, we have something like what is depicted in the following picture:

Cassandra Regular usage thru JCA and Thrift

Note that there is an inherent complexity in this solution: the JCA connector becomes responsible for keeping a list of failover nodes, something a Cassandra instance already does, so we end up violating the DRY principle.
But what alternatives do we have?

The StorageProxy API

It turns out that the CassandraDaemon class (the main one responsible for starting up a Cassandra node) does only a few things, which we can embed into our application, or rather into our JCA connector, since in a JavaEE server that is the only place where you should be spawning threads and opening files directly from the filesystem.
In fact, those few steps are properly described on the Cassandra wiki.
If you then took what we could call the regular approach, you'd spawn the embedded Cassandra and connect to localhost in order to make the application talk to the Cassandra server. But you'd end up with unnecessary network communication that could hurt performance by increasing latency. It turns out this can be avoided too, by using the (not so stable) StorageProxy API.
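As a rough illustration of that in-process path, here is a sketch along the lines of the 0.6-era example from the wiki. Treat the class and method names as assumptions: they have changed between Cassandra versions, and this fragment only compiles against the Cassandra jars of that era.

```java
// Sketch only: assumes the Cassandra 0.6-era embedded API from the wiki.
// Start the storage layer in-process (here, inside the JCA connector)...
System.setProperty("storage-config", "/path/to/cassandra/conf");
StorageService.instance.initServer();

// ...then write a column through StorageProxy, with no Thrift round-trip.
RowMutation mutation = new RowMutation("Keyspace1", "some-key");
mutation.add(new QueryPath("Standard1", null, "name".getBytes()),
		"value".getBytes(), System.currentTimeMillis());
StorageProxy.mutate(Arrays.asList(mutation));
```

Reads follow the same pattern through StorageProxy's read methods, again without leaving the JVM.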
By taking all the steps described above, you'd end up with a much simpler architecture, like the one below:

Cassandra Embedded into JavaEE Server

With this architecture you are shielded from the complexity of failing over; Cassandra handles it automatically for you. You could then argue: what if I need to scale the Cassandra layer independently? No problem! You can resort to a hybrid architecture like the one below:
Cassandra Embedded with Extra Nodes

To achieve this, you only need to give these extra nodes the address of some of the JavaEE servers. That way, the extra standalone nodes can communicate with the Cassandra daemons inside the AppServer and become part of the Cassandra cluster.
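Concretely, that wiring is done through the seed list in the extra nodes' configuration. In the 0.6-era storage-conf.xml it would look roughly like this (the hostnames are placeholders, and newer Cassandra versions moved this setting into cassandra.yaml):

```xml
<!-- storage-conf.xml on the extra standalone nodes (sketch) -->
<Seeds>
    <!-- point at JavaEE servers running an embedded Cassandra daemon -->
    <Seed>javaee-server-1.example.com</Seed>
    <Seed>javaee-server-2.example.com</Seed>
</Seeds>
```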

Memory mapped TTransport

My first attempt at doing this with Cassandra involved implementing a TTransport class that would send commands to the server worker thread over a byte buffer and receive responses in a similar fashion. I tried this first because I was simply unaware that the StorageProxy API existed, but later I thought it could also work around that API's lack of stability (as noted on the Apache wiki page). It turned out not to be such an easy task.
I thought of having a CassandraMMap class that would act like the CassandraDaemon class but would differ in the TThreadPoolServer initialization, as below:

		this.serverTransp = new MMapServerTransport();
		TThreadPoolServer.Options options = new TThreadPoolServer.Options();
		options.minWorkerThreads = 64;
		this.serverEngine = new TThreadPoolServer(new TProcessorFactory(
				processor), serverTransp, inTransportFactory,
				outTransportFactory, tProtocolFactory, tProtocolFactory,
				options);

The same MMapServerTransport instance would be handed to the client through a getter method in order to open client connections, as follows:

		TTransport tr = mmap.getServerTransp().getClientTransportFactory()
				.getTransport(null);
		TProtocol proto = new TBinaryProtocol(tr);
		Cassandra.Client client = new Cassandra.Client(proto);

Requests through getTransport would be queued on the server using the class below, and a TTransport for the server side would be returned upon acceptImpl:

public class MMapTransportTuple {
	private TByteArrayOutputStream server2Client;
	private TByteArrayOutputStream client2Server;
	private TTransport clientTransport;
	private TTransport serverTransport;

	public MMapTransportTuple(int size) {
		server2Client = new TByteArrayOutputStream(size);
		client2Server = new TByteArrayOutputStream(size);
		clientTransport = new MMapTransport(server2Client, client2Server);
		serverTransport = new MMapTransport(client2Server, server2Client);
		// some code omitted for brevity
	}
}

This class would be responsible for binding the memory buffers from client to server and vice-versa.
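The cross-wiring idea itself can be demonstrated with plain JDK piped streams. This is a self-contained toy, not the Thrift classes above: each side writes into the buffer the other side reads from.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;

// Toy demonstration of the cross-wired duplex idea behind MMapTransportTuple:
// the client's output stream is the server's input stream, and vice versa.
public class DuplexPipeDemo {

	public static void main(String[] args) throws IOException {
		PipedOutputStream clientOut = new PipedOutputStream();
		PipedInputStream serverIn = new PipedInputStream(clientOut);
		PipedOutputStream serverOut = new PipedOutputStream();
		PipedInputStream clientIn = new PipedInputStream(serverOut);

		// Client sends a request; the server reads it...
		clientOut.write("ping".getBytes(StandardCharsets.UTF_8));
		byte[] request = new byte[4];
		serverIn.read(request);

		// ...and answers on the other buffer.
		serverOut.write("pong".getBytes(StandardCharsets.UTF_8));
		byte[] response = new byte[4];
		clientIn.read(response);

		System.out.println(new String(request, StandardCharsets.UTF_8));  // ping
		System.out.println(new String(response, StandardCharsets.UTF_8)); // pong
	}
}
```

A real transport would of course need blocking semantics and thread-safe handoff between the client thread and the server worker thread, which is where the difficulty discussed below comes from.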
The last class involved in this implementation would be the MMapTransport:

public class MMapTransport extends TTransport {

	private TByteArrayOutputStream readFrom;

	private TByteArrayOutputStream writeTo;

	public MMapTransport(TByteArrayOutputStream readFrom,
			TByteArrayOutputStream writeTo) {
		this.readFrom = readFrom;
		this.writeTo = writeTo;
	}

	// read() and write() would operate on the respective buffer,
	// which points to a different buffer on the client and server
	// TTransport instances...
}

But this turned out to be harder than I first thought, and as time became scarce, I'll stick with the StorageProxy API approach for now.


Inbound JCA Connectors Introduction

After some posts about outbound JCA connectors, let's have a look at the concepts behind an inbound JCA connector.

The first question I always hear is: "So, the server (the legacy application) connects a client socket to a server socket on my J2EE server, right?", and I always answer: "it depends". People tend to think that the terms inbound and outbound refer to the TCP/IP connection, but in fact they refer to the flow of the process. In an outbound connector, our system is the actor: if you trace the flow of the call, you'll notice it leaves our system toward the legacy one; we could perhaps say outgoing. In an inbound connector, on the other hand, the actor is the legacy system, which triggers an incoming message that starts outside our system and goes all the way into it.

Let’s see a sequence diagram of an inbound connector to make things clearer:

JCA Inbound Connector Sequence Diagram

As you can see in the diagram, the component that ties the Application Server, the legacy system and the J2EE Application is the ActivationSpec.

Usually, instance configuration such as port, host, and other instance-related data is stored on the ActivationSpec. You may also place configuration on the ResourceAdapter itself, but remember that any configuration placed on the ResourceAdapter is shared across all the ActivationSpecs deployed on that ResourceAdapter.
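To make that concrete, an ActivationSpec is essentially a JavaBean that the container validates. A minimal sketch would look like the following; the host/port properties and class name are invented for this example, while the interface and exception come from javax.resource.spi:

```java
// Sketch of a minimal ActivationSpec carrying per-endpoint configuration.
// The host/port properties are invented for this illustration.
public class LegacyListenerActivationSpec implements javax.resource.spi.ActivationSpec {

	private javax.resource.spi.ResourceAdapter adapter;
	private String host;
	private int port;

	public void validate() throws javax.resource.spi.InvalidPropertyException {
		if (port <= 0) {
			throw new javax.resource.spi.InvalidPropertyException("port must be positive");
		}
	}

	public javax.resource.spi.ResourceAdapter getResourceAdapter() { return adapter; }
	public void setResourceAdapter(javax.resource.spi.ResourceAdapter ra) { this.adapter = ra; }

	public String getHost() { return host; }
	public void setHost(String host) { this.host = host; }
	public int getPort() { return port; }
	public void setPort(int port) { this.port = port; }
}
```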

The class actually responsible for handling enterprise events is ResourceAdapter-dependent and is initialized during the endpointActivation method call on the ResourceAdapter. These classes are usually implemented either as a Thread or as implementations of the Work interface, submitted to the container for execution through the WorkManager instance. If you opt for a plain Thread, remember to daemonize it, otherwise your application server won't be able to shut down properly.
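Putting that together, the activation side can be sketched as below. The endpointActivation signature is the standard javax.resource.spi contract; LegacyEventListenerWork is a hypothetical Work implementation, and workManager is assumed to have been obtained from BootstrapContext.getWorkManager() during the adapter's start():

```java
// Sketch: on endpointActivation the adapter schedules a Work that listens
// for legacy events and drives them into the application endpoint.
public void endpointActivation(MessageEndpointFactory factory, ActivationSpec spec)
		throws ResourceException {
	Work listener = new LegacyEventListenerWork(factory, spec); // hypothetical class
	workManager.scheduleWork(listener);
}
```

Using the WorkManager rather than a raw Thread keeps the listener under the container's thread management, which is why it is the generally preferred option.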

Over the next few weeks I'll be posting some insights on how to implement an inbound connector using JCA.
