April 28, 2022
Liferay Portal has a variety of implementations out there, from small applications to large-scale enterprise portals. Whenever one server isn't sufficient to serve the high-traffic needs of your portal, you can scale your Liferay portal by adding additional servers.
When a group of servers and other resources act like a single system and enable high availability, it is termed "clustering". It is mainly required for parallel processing, fault tolerance, load balancing, and handling high traffic on the application.
With the release of Liferay Portal CE 7.0 GA5+, we have some good news: clustering is back in the Community Edition. In Liferay, it is fairly simple to cluster multiple machines, to cluster multiple VMs on a single machine, or any mixture of the two.
Now let's see how to achieve this. To implement it, we need two or more instances of Liferay running on a single VM or on separate VMs. Once Liferay Portal is installed on more than one application server node, a few changes need to be made. Here are some basics/rules:
Unless noted otherwise, add the properties below to portal-ext.properties in the Liferay Home directory.
1) Database configuration: Each node should be configured with a data source or JNDI connection that points to one Liferay database (or a database cluster) which all the nodes share. JNDI is the recommended configuration.
jdbc.default.jndi.name=jdbc/LiferayPool
<Resource
    name="jdbc/LiferayPool"
    auth="Container"
    type="com.mchange.v2.c3p0.ComboPooledDataSource"
    factory="org.apache.naming.factory.BeanFactory"
    driverClass="net.sourceforge.jtds.jdbc.Driver"
    jdbcUrl="jdbc:jtds:sqlserver://DB.Server.IP:Port/dxpportal"
    user="root"
    password="***"
    maxPoolSize="100"
    minPoolSize="10"
    acquireIncrement="5" />
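If JNDI is not available in your setup, the same data source can instead be defined directly in portal-ext.properties. Below is a minimal sketch using the same hypothetical SQL Server host, database name, and credentials as the Resource above; adjust the driver, URL, and credentials to your environment.

# Direct JDBC configuration in portal-ext.properties (alternative to JNDI).
# Host, port, database name, and credentials are placeholders.
jdbc.default.driverClassName=net.sourceforge.jtds.jdbc.Driver
jdbc.default.url=jdbc:jtds:sqlserver://DB.Server.IP:Port/dxpportal
jdbc.default.username=root
jdbc.default.password=***

Whichever style you choose, configure it identically on every node so that all nodes share the one database.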
2) Data folder configuration: Liferay's Documents and Media library can mount several repositories; users work with the default mounted Liferay repository. Liferay Portal can use one of several different store implementations. In a clustered environment, all nodes must point to the same data folder to store Documents and Media files.
Let's take an example of a local file system store configuration for two nodes on the same machine, where both nodes point to the same directory.
dl.store.impl=com.liferay.portal.store.file.system.FileSystemStore
rootDir=/opt/LiferayPortal_node1/data/document_library
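Note that in Liferay 7 the dl.store.impl property goes in portal-ext.properties, while the FileSystemStore's rootDir is set through an OSGi configuration file. Below is a minimal sketch assuming the directory above; the file name is derived from the FileSystemStoreConfiguration PID, so verify it against your exact Liferay version.

# [Liferay Home]/osgi/configs/com.liferay.portal.store.file.system.configuration.FileSystemStoreConfiguration.config
# Both nodes must point at the same shared directory.
rootDir="/opt/LiferayPortal_node1/data/document_library"

For nodes on separate machines, this directory would typically live on a shared mount (NFS/SAN) visible to every node.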
Testing: start the nodes sequentially, upload a document on one node, and verify it appears on the other node.
3) Search clustering: The search engine should be separate from the Liferay server. Liferay supports Elasticsearch and Solr; either can be configured in a clustered environment.
For more information on how to configure Elasticsearch or Solr with Liferay, refer to the official Liferay documentation. Liferay Portal ships with the Elasticsearch engine by default.
Once the Liferay Portal servers and the search engine have been configured as a cluster, change Liferay Portal's search connection from embedded mode to remote mode. On the first connection, the two sets of clustered servers exchange the full list of IP addresses; if a node goes down, the proper failover protocols take over. Queries and index updates can then continue to be sent to all remaining nodes.
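The switch to remote mode is itself an OSGi configuration. Below is a hedged sketch for the Elasticsearch adapter that ships with Liferay 7.0, assuming a remote Elasticsearch node reachable at es.host:9300; the configuration PID and property names vary across adapter versions, so confirm them in System Settings before relying on this.

# [Liferay Home]/osgi/configs/com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config
operationMode="REMOTE"
transportAddresses="es.host:9300"
clusterName="LiferayElasticsearchCluster"

Every Liferay node in the cluster should carry the same search configuration.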
Now restart the Liferay servers and perform index replication by navigating to Control Panel > Configuration > Server Administration > Resources and executing "Reindex all search indexes".
Testing: add some web content and search for it from both nodes.
4) Enabling Cluster Link: Enabling Cluster Link automatically activates distributed caching. Cached data is generated once and replicated to the other servers in the cluster. The cache is distributed across the Liferay Portal nodes running concurrently, and enabling Cluster Link can improve performance dramatically.
cluster.link.enabled=true
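Cluster Link is built on JGroups and defaults to UDP multicast for node discovery. If multicast is blocked on your network (common in cloud environments), the control and transport channels can be pointed at a unicast TCP stack instead. Below is a hedged sketch of the relevant portal-ext.properties entries; the bind address is a placeholder, and tcp.xml is a JGroups stack definition you must supply on the classpath.

cluster.link.enabled=true
# Optional: pin the network interface Cluster Link binds to (placeholder IP).
cluster.link.bind.addr["cluster-link-control"]=192.168.1.10
cluster.link.bind.addr["cluster-link-udp"]=192.168.1.10
# Optional: replace the default UDP multicast stack with unicast TCP.
cluster.link.channel.properties.control=tcp.xml
cluster.link.channel.properties.transport.0=tcp.xml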
5) Hot deploy: Plugins/modules must be deployed to all nodes of the cluster. If you are not using centralized server-farm deployment, each plugin/module must be placed in the Liferay Portal deploy folder on every node. To avoid any inconsistency, Liferay needs the same patches installed across all nodes.
6) Application server configuration: Configure the Tomcat servers for cluster awareness so that HTTP sessions are replicated between nodes. In each node's conf/server.xml, add a Cluster element inside the Engine or Host element:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="224.0.0.5" port="45564" frequency="500" dropTime="3000"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto" port="4000" autoBind="100"
                  selectorTimeout="5000" maxThreads="6"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
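Two companion settings are usually required for Tomcat session replication to take effect, sketched below with hypothetical node names: each node's Engine element needs a unique jvmRoute when a sticky-session load balancer (for example mod_jk) fronts the cluster, and the web application must be marked distributable.

<!-- conf/server.xml on node 1; use jvmRoute="node2" on the second node. -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">

<!-- web.xml of the Liferay web application: enable session replication. -->
<distributable/>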
Now restart the servers sequentially and verify cache and session replication.
Zeenesh Patel