This document describes how to set up an Alfresco v4.2 repository server cluster. It does not cover non-clustering-specific setup; please refer to the main documentation for that. It also does not describe how to set up Alfresco Share.
Content store, e.g. NFS server
Hazelcast mancenter server
Repository cluster set up instructions
By default, all enterprise servers connected to the same database will form a repository cluster.
Follow these steps for each server in the cluster:
Install and configure repository server – follow your normal procedure for deploying Alfresco (alfresco.war) into the Servlet Container of your choice (e.g. Apache Tomcat). In addition, ensure that:
The content store is available to all members of the cluster (e.g. an NFS server mounted locally and referenced by the dir.root property)
Each cluster member is configured to access the same database, using the same database properties in alfresco-global.properties
Deploy a Solr indexing server for use across the cluster and configure each member’s properties to utilize this Solr server.
Ensure port 5701 (the default clustering port) is accessible on each repository server by all the other repository servers in the cluster.
It is not normally necessary to specify which network interface clustering should use; however, in some circumstances (e.g. multiple network interface cards) the wrong interface may be chosen. In that case, provide a wildcarded (e.g. 10.50.*.*) or exact (e.g. 192.168.1.100) IP address of the interface to use. The advantage of a wildcarded address is that the same configuration may be used on multiple servers without local changes. The Java property to set is alfresco.cluster.interface (optional)
Set the Java property hazelcast.jmx=true to activate Hazelcast's own JMX reporting (optional)
Set the cluster password with the Java property alfresco.hazelcast.password (recommended for security reasons)
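Putting the steps above together, a minimal alfresco-global.properties for each cluster member might look like the following sketch. All hostnames, paths, and credentials below are illustrative examples, not values prescribed by this document:

```
# Shared content store (e.g. an NFS mount present on every member)
dir.root=/mnt/alfresco/alf_data

# Same database connection properties on every member (example values)
db.driver=org.postgresql.Driver
db.url=jdbc:postgresql://db.example.com:5432/alfresco
db.username=alfresco
db.password=secret

# Shared Solr indexing server used by all members (example values)
index.subsystem.name=solr
solr.host=solr.example.com
solr.port=8080

# Optional clustering settings
alfresco.cluster.interface=10.50.*.*
alfresco.hazelcast.password=clusterpass
```

Since all enterprise servers pointing at the same database form a cluster by default, the clustering entries at the end are only needed when you want to pin the network interface or secure the cluster with a password.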
Starting the cluster
In many cases it is not necessary to apply any clustering-specific configuration: simply starting the servers will result in a cluster. Suppose you have two cluster members at IP addresses 10.244.50.101 and 10.244.50.102. Upon starting the first member, a log message similar to the following should be seen:
2013-08-05 17:06:31,794 INFO [cluster.core.ClusteringBootstrap] [Thread-3] Cluster started, name: MainRepository-2c0aa5c6-e38a-4f64-bd29-1a7cf9894350
2013-08-05 17:06:31,797 INFO [cluster.core.ClusteringBootstrap] [Thread-3] Current cluster members:
  10.244.50.101:5701 (hostname: repo1.local)
This shows that a cluster name has been automatically generated, based on the repository name (MainRepository) and a UUID (a random, unique identifier). The cluster has then been started and the cluster members listed; at this point only one member is present.
During startup of the second member, log entries similar to the following should be shown:
2013-08-05 17:06:58,350 INFO [cluster.core.ClusteringBootstrap] [Thread-3] Cluster started, name: MainRepository-2c0aa5c6-e38a-4f64-bd29-1a7cf9894350
2013-08-05 17:06:58,353 INFO [cluster.core.ClusteringBootstrap] [Thread-3] Current cluster members:
  10.244.50.102:5701 (hostname: repo2.local)
  10.244.50.101:5701 (hostname: repo1.local)
The same cluster name is shown, followed by the current member list; both members are now in the cluster.
Testing the cluster
The quickest and easiest way to test the cluster is via the new Admin Console.
Cluster information is then available by clicking the 'Repository Server Clustering' link, or by visiting the URL http(s)://<repository-host>:<port>/alfresco/service/enterprise/admin/admin-clustering
Here you will find information about the current cluster members, as well as a button at the bottom of the page labelled 'Validate Cluster'. Click it to run a quick test that checks that communications are available between each pair of cluster members.
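Independently of the Admin Console, you can verify at the operating-system level that the clustering port is reachable from one member to another. The following sketch uses bash's /dev/tcp facility; the hostname repo1.local is taken from the example log output above and should be replaced with a real member of your cluster:

```shell
# Probe TCP port 5701 (the default clustering port) on another member.
# Replace repo1.local with the cluster member you want to test from this machine.
host=repo1.local
port=5701
if timeout 3 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
  echo "clustering port ${port} reachable on ${host}"
else
  echo "clustering port ${port} NOT reachable on ${host}"
fi
```

Run this from each member against every other member; any "NOT reachable" result points at a firewall or routing problem that will prevent the cluster from forming.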
Summary of properties
The most common clustering-related properties are shown below. All of these properties are optional.
alfresco.cluster.interface
Specifies a particular network interface to use for clustering. May be wildcarded, e.g. 10.50.*.* means "attempt to bind to the interface having an IP address beginning 10.50.".
alfresco.cluster.nodetype
Not normally used. Human-friendly description of the cluster member, as shown in JMX under "non-clustered servers". This is useful for naming non-clustered servers, such as a transformation server that is attached to the same database as the cluster but does not participate in it (i.e. has alfresco.cluster.enabled=false).
alfresco.hazelcast.password
Password used by the cluster members to access/join the Hazelcast cluster.
alfresco.hazelcast.port
Specifies the port to use for clustering.
alfresco.hazelcast.autoinc.port
If set to true, Hazelcast will try several times to find a free port, starting at the value of alfresco.hazelcast.port. Not recommended.
alfresco.hazelcast.mancenter.enabled
If enabled, the server will push statistics and other useful information to Hazelcast's "mancenter" dashboard application.
alfresco.hazelcast.mancenter.url
The URL where the mancenter application may be found (alfresco.hazelcast.mancenter.enabled must be true for this to have any effect).
Hazelcast dashboard (“mancenter”) set up instructions
The Hazelcast diagnostics and reporting application, named mancenter, is a useful addition to an Alfresco repository cluster. It may be installed in any servlet container and could, for example, be installed on the same server as the load balancer (though you probably would not do this in a production environment).
Install a servlet container, e.g. Apache Tomcat.
Deploy the mancenter.war file to the servlet container
A data directory must be present and writeable by the user that the servlet container runs as. To specify the location of the directory, set the Java property hazelcast.mancenter.home, e.g. add -Dhazelcast.mancenter.home=/home/tomcat7/mancenter_data to the CATALINA_OPTS environment variable.
Set the repository property alfresco.hazelcast.mancenter.enabled=true to enable mancenter use.
Ensure that the repository servers can reach the mancenter server at the stated URL (e.g. configure appropriate firewall rules); the cluster members will push cluster information updates to this URL.
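As a concrete sketch of the data-directory step above, the directory can be created and wired into Tomcat's startup options as follows. The path used here is an example; any location writable by the user running the servlet container will do:

```shell
# Create a data directory for mancenter and point Hazelcast at it.
# The path is an example; use any directory writable by the Tomcat user.
MANCENTER_HOME=/tmp/mancenter_data
mkdir -p "$MANCENTER_HOME"
export CATALINA_OPTS="$CATALINA_OPTS -Dhazelcast.mancenter.home=$MANCENTER_HOME"
echo "$CATALINA_OPTS"
```

In a real installation you would typically place these lines in Tomcat's setenv.sh (or your service's environment file) so the property is set on every start, rather than exporting it by hand.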