20 May 2010

Designing for Extreme Transaction Processing – Memento Pattern

Applications with huge transaction processing requirements or tight response times always demand careful architectural design. Special care must be taken in how data is partitioned so that load can be better parallelized. This task can be even trickier if some of the data isn't suitable for proper partitioning. Being able to partition data is an essential requirement for some of the elastic frameworks around – some even demand that data be local to the node processing the request, while others still work without locality, but with a significant performance drop. The negative impact of data that is hard to partition and group can be mitigated at the cost of increased memory usage: it is always possible to increase replication, forcing data onto multiple nodes to avoid serialization upon request, since such tools lack the concept of data gravitation found in earlier distributed caches.
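To make the locality idea concrete, here is a minimal sketch (my own illustration, not the eXtreme Scale API) of key-based routing: the same key always hashes to the same node, so requests can be processed where their data lives.

```java
import java.util.List;

// Hypothetical sketch of partition routing: a stable key-to-node mapping
// lets a request be dispatched to the node that owns the data, keeping
// processing local instead of pulling (serializing) data across the grid.
public class PartitionRouter {
    private final List<String> nodes;

    public PartitionRouter(List<String> nodes) {
        this.nodes = nodes;
    }

    /** Same key always maps to the same node while membership is stable. */
    public String nodeFor(String key) {
        // floorMod keeps the index non-negative even for negative hash codes
        int idx = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(idx);
    }

    public static void main(String[] args) {
        PartitionRouter router =
                new PartitionRouter(List.of("nodeA", "nodeB", "nodeC"));
        // Requests for the same customer always land on the same node.
        String first = router.nodeFor("customer:42");
        String second = router.nodeFor("customer:42");
        System.out.println(first.equals(second)); // true
    }
}
```

A real grid would use consistent hashing so that adding or removing a node remaps only a fraction of the keys; the simple modulo above is just the smallest illustration of the locality contract.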
Billy Newport (the mind behind IBM eXtreme Scale) proposed a classification scheme for the different styles of Extreme Transaction Processing (XTP) systems he identified:

  • Type 1: Non-partitionable data models – Applications in this category usually perform ad hoc queries that can span an unpartitionable amount of data, which leaves scaling up as the only option.
  • Type 2a: Partitionable data models with limited need for scaling out – Applications in this category already have means of partitioning data, and thus load, but only in a limited fashion. Hence these applications can still be built using regular application servers backed by sharded databases, elastic NoSQL databases, or IMDGs, but won't use a Map/Reduce pattern for processing requests.
  • Type 2b: Partitionable data models with huge need for scaling out – Finally, this category is composed of applications that present means of partitioning data as in Type 2a, but instead of being exposed to limited load, Type 2b applications are pushed to the limit. They usually evolved from Type 2a applications that ran into the scalability limits of traditional architectures, moving from regular application servers to Map/Reduce frameworks. It is worth noting that it is possible to scale to similar load levels with traditional architectures based on regular application servers, but they will usually require more human resources for administration as well as more hardware infrastructure.

Among the common items of a classic Transaction Processing (TP) system that must be avoided in an XTP system are two-phase commit resources. This is not to say that you can't have a two-phase commit resource as part of an XTP system, but each one must be carefully evaluated, and excess will certainly compromise system performance.
Billy presented (at the eXtreme Scale "meet the experts" session at IBM Impact 2010) an excellent example of a scenario where an excess of two-phase commit resources could undermine the performance of an e-commerce solution. In his example, the site's checkout process would, as part of a single transaction, scan the whole shopping cart and, for each product in the cart, perform an update on a fictional inventory database as well as on a few other two-phase commit databases from other departments. If any of the items were out of stock, the transaction would be rolled back and an error presented to the user. Obviously this hypothetical system wouldn't scale much, since the cost of the transaction's long time span, combined with the number of resources involved, would be tremendous – not to mention the huge load on the transaction manager.
Instead of this rather obvious approach, Billy suggested that the updates could be chained and combined with the Memento design pattern: the updates would be applied sequentially and, if any of them failed, the Memento pattern would be used to revert the changes already applied. With this approach the contention on all the databases involved would be minimal, while the application requirement would still be fulfilled.
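The chained-updates-with-compensation idea can be sketched as follows. This is a minimal, hypothetical illustration (in-memory stores stand in for the departmental databases; class and method names are my own): each update captures a memento of the prior state, and on failure the mementos are replayed in reverse to undo what was already applied.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory inventory standing in for a departmental database.
class InventoryStore {
    private final Map<String, Integer> stock = new HashMap<>();

    InventoryStore(Map<String, Integer> initial) {
        stock.putAll(initial);
    }

    int get(String sku) {
        return stock.getOrDefault(sku, 0);
    }

    /** Decrements stock and returns a memento that restores the old value. */
    Runnable decrement(String sku, int qty) {
        int before = get(sku);
        if (before < qty) {
            throw new IllegalStateException("out of stock: " + sku);
        }
        stock.put(sku, before - qty);
        return () -> stock.put(sku, before); // memento: captured prior state
    }
}

public class CheckoutSketch {
    /** Applies updates one by one; on failure, replays mementos in reverse. */
    static boolean checkout(InventoryStore store, Map<String, Integer> cart) {
        Deque<Runnable> mementos = new ArrayDeque<>();
        try {
            for (Map.Entry<String, Integer> item : cart.entrySet()) {
                mementos.push(store.decrement(item.getKey(), item.getValue()));
            }
            return true; // every short, independent update succeeded
        } catch (IllegalStateException outOfStock) {
            // Compensate in reverse order instead of holding one long
            // transaction across every resource.
            while (!mementos.isEmpty()) {
                mementos.pop().run();
            }
            return false;
        }
    }

    public static void main(String[] args) {
        InventoryStore store = new InventoryStore(Map.of("book", 2, "pen", 0));
        Map<String, Integer> cart = new HashMap<>();
        cart.put("book", 1);
        cart.put("pen", 1); // pen is out of stock, so the checkout is reverted
        System.out.println(checkout(store, cart)); // false
        System.out.println(store.get("book"));     // 2 (restored)
    }
}
```

Each `decrement` is a short, local operation, so no resource is locked for the duration of the whole checkout; the trade-off, as with any compensation scheme, is that intermediate states are briefly visible to concurrent readers.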
This is just one of the many points that need to be carefully considered when designing XTP systems.
