
FinnOne NEO: Clustering
What is a Cluster
Cluster: A group of servers that acts as a single system.
Why Clustering
Scalability (Load Balancing): Scalability means that your application can handle more load/requests by adding servers.
Load balancing means that all servers are active and new requests are distributed among them.
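The request distribution described above can be sketched minimally. This is an illustration with hypothetical class names, not NEO code; in the actual deployment Apache HTTPD does this at the HTTP layer.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin load-balancing sketch (hypothetical names, not NEO code):
// every node is active, and incoming requests are spread across them in turn.
class RoundRobinBalancer {
    private final List<String> nodes;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinBalancer(List<String> nodes) { this.nodes = nodes; }

    // Pick the node that should serve the next request.
    String pick() {
        return nodes.get(Math.floorMod(next.getAndIncrement(), nodes.size()));
    }
}

public class BalancerDemo {
    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(List.of("node1", "node2"));
        System.out.println(lb.pick()); // node1
        System.out.println(lb.pick()); // node2
        System.out.println(lb.pick()); // node1 again: load is shared evenly
    }
}
```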

High Availability (Fail-Over): HA means that your application will be available without interruption.
Fail-over means that if an active server stops responding, all its requests are directed to another server: the system 'fails over' to another active node.
Types of Clustering
Horizontal Clustering: In a horizontally clustered environment, the cluster-enabled application is deployed on multiple physical machines.
Each machine is available for requests.

Horizontal clusters offer protection against hardware failure, increase efficiency, and provide load balancing and process failover. However, since many physical machines are involved, installation and maintenance costs increase proportionally.

Vertical Clustering: In vertical clustering, multiple application server instances are hosted on the same physical machine.

This type of clustering provides increased efficiency, load balancing and process failover. However, if the hardware fails, there may be no ready alternative.
Clustering in FinnOne NEO
Type of Cluster: Horizontal
Load Balancer: Software Based (Apache HTTPD Server)
HA : Yes (using Load Balancer)
DB Cluster: No
Sticky Sessions: Yes (using Apache HTTPD Server)
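The source does not show the load balancer configuration itself; a minimal illustrative Apache HTTPD mod_proxy_balancer fragment for such a setup (host names, ports, and route names are assumptions) could look like this:

```apache
# Illustrative only: node hosts/ports and route names are assumptions.
<Proxy "balancer://neo-cluster">
    # One BalancerMember per cluster node; 'route' ties into sticky sessions.
    BalancerMember "http://node1:8080" route=node1
    BalancerMember "http://node2:8080" route=node2
    # Pin a user's session to one node via the JSESSIONID cookie/path parameter.
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass        "/neo" "balancer://neo-cluster/neo"
ProxyPassReverse "/neo" "balancer://neo-cluster/neo"
```

For stickiness to work, each application server typically appends its route name to the session ID (for example, Tomcat's jvmRoute).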
Pictorial Representation of Cluster
Architectures in Clustering
State Replication: We have multiple service instances on cluster nodes and synchronize the state by replicating it.

This architecture has an advantage in fail-over but a disadvantage in performance.

Hub-and-Spoke (Master-Slave): In this architecture, only one instance of a service runs on the master cluster node, and child nodes invoke the master node's services, for instance through RMI method calls.

This architecture has an advantage in performance but a disadvantage in fail-over capability.
For NEO: State Replication
In FinnOne NEO, we completely rely on STATE REPLICATION
architecture.
For NEO, states are defined in DB, Application Context and
Cache.

DB Sharing: Yes (Single DB Instance)
Cache Sharing: Yes
Application Context Sharing: No (Not needed in present
scenario)
For NEO: Master Slave
In FinnOne NEO, the Master-Slave concept is used in limited areas like EOD/BOD, Spring-based schedulers, etc.
RMI method calls are used to perform Master-Slave calls.

EOD can be initiated from the single master node only.
EOD execution load is distributed across all the nodes, including the master and the slave nodes.
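The Master-Slave RMI pattern described above can be sketched as follows. The interface, method names, and work split are assumptions for illustration, not the actual FinnOne NEO API; the sketch runs master and slave in one JVM for simplicity.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hedged sketch of a Master-Slave RMI call (names are assumptions, not NEO code).
interface EodService extends Remote {
    // A slave node asks the single master instance for its share of EOD work.
    String assignBatch(String nodeName) throws RemoteException;
}

class EodServiceImpl extends UnicastRemoteObject implements EodService {
    EodServiceImpl() throws RemoteException { super(); }

    @Override
    public String assignBatch(String nodeName) throws RemoteException {
        return "batch-for-" + nodeName; // the master decides the work split
    }
}

public class MasterSlaveDemo {
    static String runDemo(int port) throws Exception {
        // Master node: host the only service instance and register it.
        Registry registry = LocateRegistry.createRegistry(port);
        EodServiceImpl impl = new EodServiceImpl();
        registry.rebind("EodService", impl);

        // Slave node: look up the master's stub and invoke it over RMI.
        EodService master = (EodService) LocateRegistry.getRegistry(port).lookup("EodService");
        String batch = master.assignBatch("slave-1");

        // Unexport the remote objects so the demo JVM can exit cleanly.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
        return batch;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo(1099)); // batch-for-slave-1
    }
}
```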
Focus Points in NEO Clustering
Locking: Application locks should be shared across the different cluster nodes. If DB locks are used, they are cluster-safe.

Caches: Persistent CACHE objects must be shared across the cluster. This is achieved via INFINISPAN along with JGROUPS.

Schedulers: Spring-based schedulers are executed on a specific master node, controlled by the spring profile scheduler-node.
QUARTZ schedulers are DB-based. As the DB is shared, these schedulers are implicitly clustered.
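The scheduler-node gating can be illustrated with a plain-Java analogue. This is not the actual NEO code: in NEO the check is a Spring profile; here a plain string stands in for the contents of spring.profiles.active.

```java
// Plain-Java analogue (not the actual NEO code) of gating Spring-based schedulers
// by the "scheduler-node" profile: scheduled jobs start only on the node that
// was launched with that profile active.
public class SchedulerGate {
    static boolean isSchedulerNode(String activeProfiles) {
        // Stand-in for Spring's profile check against spring.profiles.active.
        return activeProfiles != null && activeProfiles.contains("scheduler-node");
    }

    public static void main(String[] args) {
        // Only the designated master node would start its Spring-based schedulers.
        String profiles = System.getProperty("spring.profiles.active", "");
        if (isSchedulerNode(profiles)) {
            System.out.println("scheduler-node profile active: starting schedulers");
        } else {
            System.out.println("not the scheduler node: schedulers stay disabled");
        }
    }
}
```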

Logging: Logs are not shared. There are individual logs for each node.

User Sessions: User sessions are not replicated, but the SESSION REGISTRY is shared using the Infinispan cache. Authentication data is stored and shared at the DB level.
Caching in NEO Clustering
In FinnOne NEO we use the following types of caching.

Hibernate L2 Cache
Custom Cache (L3 Cache)
Lucene Indexes
XML Based Config: Hibernate L2 Cache
A new profile, app-server-cluster-provided, is added in framework-persistence-context.xml.
The default Hibernate Infinispan cache configuration is overridden via the JPA property hibernate.cache.infinispan.cfg.
This property points to neutrino-cluster-infinispan.xml, the file holding the L2 cache configuration.
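The exact bean layout of framework-persistence-context.xml is not shown in the source; a typical way such a JPA property is wired into a Spring persistence context (the surrounding bean structure is an assumption) looks like:

```xml
<!-- Illustrative only: the surrounding bean layout is assumed.
     hibernate.cache.infinispan.cfg is a standard Hibernate property
     pointing at the Infinispan configuration file for the L2 cache. -->
<property name="jpaProperties">
    <props>
        <prop key="hibernate.cache.use_second_level_cache">true</prop>
        <prop key="hibernate.cache.infinispan.cfg">neutrino-cluster-infinispan.xml</prop>
    </props>
</property>
```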
XML Based Config: Hibernate L2 Cache
Configurations for L2 Caching neutrino-cluster-infinispan.xml

Sample Named Cache for Entity Types:

<namedCache name="entity">
<clustering mode="replication">
<stateTransfer fetchInMemoryState="false" timeout="20000000" />
<sync replTimeout="20000" />
</clustering>
<locking isolationLevel="READ_COMMITTED" concurrencyLevel="1000"
lockAcquisitionTimeout="15000" useLockStriping="false" />
<eviction maxEntries="1000000" strategy="LRU" />
<expiration maxIdle="-1" wakeUpInterval="5000" />
<transaction transactionMode="TRANSACTIONAL" autoCommit="false"
lockingMode="OPTIMISTIC" />
</namedCache>

Other Hibernate L2 Named Caches:

entity/replicated-entity/entity-repeatable
local-query/replicated-query
timestamps
XML Based Config: Hibernate L2 Cache
Configurations for L2 Caching neutrino-cluster-infinispan.xml

The transport layer is loaded within the global tag below.
The property configurationFile loads the file containing the JGROUPS CLUSTER TOPOLOGY configuration for the L2 cache.
The file neutrino-jgroups.xml is loaded for the JGROUPS TOPOLOGY configuration.

<global>
<transport distributedSyncTimeout="50000" clusterName="neutrino-cluster">
<properties>
<property name="configurationFile" value="neutrino-jgroups.xml" />
</properties>
</transport>
<globalJmxStatistics enabled="false" allowDuplicateDomains="true" />
</global>
XML Based Config: Hibernate L2 Cache
JGROUPS CLUSTER TOPOLOGY configurations are defined within neutrino-jgroups.xml.
The sample XML contains a JGROUPS configuration based on TCP transport (a UDP option is also available).
Custom Cache : NeutrinoCache
The custom cache has been completely restructured to support clustering.

NeutrinoCache is introduced. It is a wrapper around the Infinispan Advanced Cache.
CacheManager is introduced. It is used for creating multiple NeutrinoCaches within a CacheManager bean.
CacheService is introduced. This service is used across the application code to get a specific NeutrinoCache instance.
An XML-based Infinispan configuration file is introduced for every CacheManager.
A corresponding JGROUPS configuration file is introduced for each CacheManager's Infinispan config file.

For GA 2.0, the number of CacheManager instances per module is:
CAS: 1
LMS: 2
LMS Common: 1
FW: 1
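The NeutrinoCache / CacheManager / CacheService layering described above can be sketched as follows. All method bodies and the ConcurrentHashMap backing are assumptions for illustration: the real NeutrinoCache wraps Infinispan's AdvancedCache and the real classes are Spring-managed beans.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of the NeutrinoCache layering (bodies are assumptions, not NEO code).
// The real NeutrinoCache wraps Infinispan's AdvancedCache, not a ConcurrentHashMap.
class NeutrinoCache {
    private final Map<Object, Object> backing = new ConcurrentHashMap<>();
    Object get(Object key) { return backing.get(key); }
    void put(Object key, Object value) { backing.put(key, value); }
}

class CacheManager {
    // One CacheManager owns several named caches (cf. the namedCaches bean property).
    private final Map<String, NeutrinoCache> namedCaches = new ConcurrentHashMap<>();
    NeutrinoCache getCache(String name) {
        return namedCaches.computeIfAbsent(name, n -> new NeutrinoCache());
    }
}

class CacheService {
    // Application code asks this service for a specific NeutrinoCache instance.
    private final Map<String, CacheManager> managers = new ConcurrentHashMap<>();
    CacheService() { managers.put("FW_CACHE", new CacheManager()); }
    NeutrinoCache getNeutrinoCache(String managerId, String cacheName) {
        return managers.get(managerId).getCache(cacheName);
    }
}
```

Usage would then be a single call such as getNeutrinoCache("FW_CACHE", "SESSION_REGISTRY_PRINCIPALS_CACHE"), keeping application code unaware of the underlying Infinispan configuration.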
Custom Cache : NeutrinoCache
Sample code snippet for FW_CACHE using CacheManager.

The bean for FW_CACHE (CacheManager) is created in framework-services-context.xml.

<bean id="FW_CACHE" class="com.nucleus.finnone.pro.cache.common.CacheManager">
    <property name="namedCaches">
        <set>
            <value>SESSION_REGISTRY_PRINCIPALS_CACHE</value>
            <value>SESSION_REGISTRY_SESSION_IDS_CACHE</value>
            <!-- Set of Named Caches -->
        </set>
    </property>
    <!-- Infinispan config XML file name -->
    <property name="configFileName" value="neutrino-infinispan-fw-cache.xml" />
</bean>
Custom Cache : NeutrinoCache
Sample neutrino-infinispan-fw-cache.xml for FW_CACHE CacheManager.
Clustering: Lucene Indexes on FileSystem/Infinispan
Lucene indexes are created across the application. They can be shared across the cluster either on the file system or in Infinispan.

For Infinispan:
neutrino-hibernatesearch-infinispan.xml is introduced. It overrides the default configuration provided by Hibernate/Infinispan.
A TCP-based JGROUPS configuration file is also introduced.

For FileSystem:
Within framework-persistence-context.xml, the JPA property
hibernate.search.default.exclusive_index_use is introduced and set to FALSE.
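The wiring of this property is not shown in the source; as a hedged illustration, it would sit among the JPA properties like this (surrounding structure assumed):

```xml
<!-- Illustrative only: placed among the JPA properties in
     framework-persistence-context.xml. With exclusive index use off, each
     node releases the Lucene index lock after writing, so a filesystem
     index on shared storage can be used by every cluster node. -->
<prop key="hibernate.search.default.exclusive_index_use">false</prop>
```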
Deployment on AS (Weblogic/JBOSS)
Deployment is now controlled from a single Admin Console.
A single copy of the WAR is maintained on the Admin Server.
The Weblogic/JBOSS cluster must be created first.
