This page is obsolete.
The official documentation is at: http://docs.alfresco.com
This page provides detailed instructions for installing an Alfresco CE cluster that serves both internal and external network users, from the database setup through the clustered, load-balanced configuration.
Before installing Alfresco you need a DB instance for Alfresco to connect to and populate. As the SQL administrator, create the ALFRESCO user identified by a password of your choosing. Change tablespace locations as necessary. These are the minimum grants necessary.
sql>
CREATE USER 'ALFRESCO' PROFILE 'DEFAULT' IDENTIFIED BY '
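For reference, a minimal sketch of a typical Oracle user creation and grant set; the [!ALFRESCO_DB_PASSWORD!] placeholder, the tablespace names and the exact grant list are assumptions, so adjust to your site:
CREATE USER ALFRESCO IDENTIFIED BY [!ALFRESCO_DB_PASSWORD!]
  DEFAULT TABLESPACE USERS
  TEMPORARY TABLESPACE TEMP
  QUOTA UNLIMITED ON USERS
  PROFILE DEFAULT
  ACCOUNT UNLOCK;
-- Commonly sufficient grants for Alfresco to create and manage its schema:
GRANT CONNECT, RESOURCE TO ALFRESCO;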
Create a SINGLE working Alfresco instance first; later we'll create three clones, changing just four files.
You don't need a lot of disk, or fast disks for that matter; the workload is CPU/IO intensive. The operating system install will depend on your needs and environment. That being said, I highly suggest the following partition layout, with at LEAST the /alf and /alf/tmp partitions required:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/sda1 2039 559 1377 29% /
/dev/sda2 - SWAP
/dev/sda3 6115 1371 4745 23% /usr
/dev/sda5 510 109 401 22% /tmp
/dev/sda6 1020 57 963 6% /var
/dev/sda7 2039 75 1965 4% /var/log
/dev/sda8 510 17 494 4% /home
/dev/sda9 4077 373 3704 10% /alf
/dev/sdb1 11900 32 11254 1% /alf/tmp
The /alf/tmp partition uses the EXT2 file system. It happens to be on a dedicated drive in our configuration. This tmp area is for the OpenOffice/Alfresco document conversions and the repository index rebuilds. In our experience 4GB was the MINIMUM for the temp storage area. As it is temporary, we formatted it with the EXT2 filesystem to eke out some performance.
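For reference, a sketch of creating and mounting that partition, assuming the /dev/sdb1 device from the layout above and a noatime mount option (both assumptions; adjust to your hardware):
mkfs.ext2 /dev/sdb1
mkdir -p /alf/tmp
echo '/dev/sdb1  /alf/tmp  ext2  defaults,noatime  1 2' >> /etc/fstab
mount /alf/tmp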
In preparing for HA/LB with the LVS we will assign TWO network addresses. One will be the server's REAL address while the other will be for the LVS service. LVS DIRECT requires NO-ARP for the virtual address. Unfortunately you can't have a single NIC serving both an ARPing real address and a non-ARPing alias address; the NIC is either all ARP or no ARP. Hence the need for a second physical NIC.
Configure the Real IP Address on ETH0 (/etc/sysconfig/network-scripts/ifcfg-eth0). Season to taste.
DEVICE=eth0
BOOTPROTO=static
BROADCAST=X.X.X.255
IPADDR=X.X.X.X
NETMASK=255.255.255.0
NETWORK=X.X.X.0
ONBOOT=yes
NOZEROCONF=yes
USERCTL=no
Configure the Virtual IP Address on ETH1 (/etc/sysconfig/network-scripts/ifcfg-eth1). Season to taste. Note: the most important setting is ARP=no! This will be the 'shared' address for all the Alfresco clustered servers.
DEVICE=eth1
ARP=no
KEEPALIVE=yes
BOOTPROTO=static
IPADDR=X.X.X.X
NETMASK=255.255.255.0
NETWORK=X.X.X.0
TYPE=Ethernet
NOZEROCONF=yes
USERCTL=no
ONBOOT=yes
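After restarting the network service you can sanity-check that the alias interface really has ARP disabled; the NOARP flag should appear in the interface flags:
service network restart
ip addr show eth1
ip link show eth1    # flags should include NOARP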
In our organization we used RedHat ES5 Update0 i386. We also removed all extra and unnecessary RPM packages.
Since there are so many dependencies when installing ImageMagick, install yum and let it do the work (87+ packages).
rpm -i yum-3.0.1-5.el5.noarch.rpm python-elementtree-1.2.6-5.i386.rpm python-sqlite-1.1.7-1.2.1.i386.rpm rpm-python-4.4.2-37.el5.i386.rpm yum-metadata-parser-1.0-8.fc6.i386.rpm python-urlgrabber-3.1.0-2.noarch.rpm expat-1.95.8-8.2.1.i386.rpm m2crypto-0.16-6.el5.1.i386.rpm
Install createrepo:
rpm -i createrepo-0.4.4-2.fc6.noarch.rpm
You'll have to copy the RPMs from the RedHat CDs/DVD to a local directory or an NFS share, then reference/mount that collection to create a local repo.
createrepo /mnt/es5/U0/Expanded
Create a local.repo YUM config file
vi /etc/yum.repos.d/local.repo
Insert the local.repo config settings:
[localrepo]
name=Fedora Core $releasever - My Local Repo
baseurl=file:///mnt/es5/U0/Expanded
enabled=1
gpgcheck=0
#gpgkey=file:///path/to/you/RPM-GPG-KEY
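To confirm yum picks up the new repository (the package query here is just an example):
yum clean all
yum list ImageMagick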
For JAAS authentication, install the Kerberos libraries. This is the minimum; if you want to test the configuration you'll also need kinit, which is in another krb RPM package.
rpm -i krb5-libs-1.5-17
Configure Kerberos (/etc/krb5.conf) for your environment. Here is an EXAMPLE config that we utilize.
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = [!YOUR_AD_DOMAIN!]
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
[realms]
[!YOUR_AD_DOMAIN!] = {
kdc = [!MICROSOFT_KRB_SERVER!]:88
admin_server = [!MICROSOFT_KRB_SERVER!]:749
default_domain = [!YOUR_AD_DOMAIN!]
}
[domain_realm]
.ad.company.com = [!YOUR_AD_DOMAIN!]
ad.company.com = [!YOUR_AD_DOMAIN!]
.company.com = [!YOUR_AD_DOMAIN!]
company.com = [!YOUR_AD_DOMAIN!]
[kdc]
profile = /var/kerberos/krb5kdc/kdc.conf
[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}
If you need to test the Kerberos authentication configuration install the appropriate krb rpm package.
rpm -i krb5-workstation-1.5-17.i386.rpm
Then run kinit and supply just the username. If the config is correct, the 'domain' will be appended. If the config is bad or misconfigured, errors will be reported.
kinit AD_USERNAME
Password for AD_USERNAME@COMPANY.COM:
If the config is good and the password was correct, a Kerberos ticket should be cached. Run klist.
klist
You should see:
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: AD_USERNAME@COMPANY.COM
Valid starting Expires Service principal
08/31/07 17:40:25 09/01/07 03:40:29 krbtgt/COMPANY.COM@COMPANY.COM
renew until 09/01/07 17:40:25
Kerberos 4 ticket cache: /tmp/tkt0
klist: You have no tickets cached
Now install ImageMagick from the local repo:
yum install ImageMagick
Alfresco seems to make calls to imconvert. So, make a symbolic link.
ln -s /usr/bin/convert /usr/bin/imconvert
Get the Java 6.0u2 JDK from [Sun]. Agree to the license and install the RPM.
rpm -i jdk-6u2-linux-i586.rpm
alternatives --remove java
alternatives --install /usr/bin/java java /usr/java/default/jre/bin/java 1
vi /etc/profile.d/java.sh
export PATH=$PATH:/usr/java/default/jre/bin:/usr/java/default/bin
export JAVA_HOME=/usr/java/default
export JRE_HOME=/usr/java/default/jre
export CLASSPATH=$CLASSPATH:/usr/java/default/lib/ojdbc14.jar
. /etc/profile.d/java.sh
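A quick sanity check that the new JDK is active and the environment was picked up:
java -version
echo $JAVA_HOME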
Alfresco with the Oracle DB depends on the Java JAR file ojdbc14.jar. You can get it from [Oracle]. Copy/move the jar file into /usr/java/default/lib/, then link it into /usr/java/default/jre/lib/:
ln -s /usr/java/default/lib/ojdbc14.jar /usr/java/default/jre/lib/ojdbc14.jar
OpenOffice will operate in headless mode but needs a framebuffer to connect to, so install Xvfb. Xvfb must be started BEFORE OpenOffice starts.
yum install Xvfb
Get the current OpenOffice Linux suite [OOo_2.2.1_LinuxIntel_install_en-US.tar.gz]
You'll be running OpenOffice in a HEADLESS configuration, but for the initial configuration/registration you'll need an available X session.
cd /usr/local/src
tar xvfzp OOo_2.2.1_LinuxIntel_install_en-US.tar.gz
cd OOF680_m18_native_packed-1_en-US.9161/RPMS
rm openoffice.org-gnome-integration-2.2.1-9161.i586.rpm openoffice.org-kde-integration-2.2.1-9161.i586.rpm
rpm -Uvih *.rpm
export DISPLAY=
Go through the registration and set the TEMPORARY file location to the big /alf/tmp partition. In the OO GUI go to Tools > Options > Paths: Temporary Files and change /tmp to '/alf/tmp/soffice'. That's it. You can close it up.
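If the soffice temporary directory does not exist yet, create it now (ownership of /alf/tmp is handed to the tomcat user in a later step):
mkdir -p /alf/tmp/soffice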
This script will start Xvfb and wait for it prior to starting soffice in headless mode.
vi /etc/init.d/soffice
#!/bin/sh
#
# soffice
#
# chkconfig: 345 98 11
# description: Starts and stops the soffice non-interactive document transformation
# Source function library.
. /etc/rc.d/init.d/functions
XVFB=/usr/bin/Xvfb
SOFFICE=/opt/openoffice.org2.2/program/soffice.bin
SOFFICE_TMP=/alf/tmp/soffice
KILLER=/usr/bin/killall
case "$1" in
start)
#
# Start Soffice
#
echo -n 'Starting Xvfb for SOFFICE: '
$XVFB :1 -screen 0 800x600x16 -fbdir /tmp > /dev/null 2>&1 &
sleep 3s
echo_success
echo
echo -n "Clearing SOFFICE ($SOFFICE_TMP): "
rm -rf $SOFFICE_TMP/*
echo_success
echo
echo -n 'Starting SOFFICE: '
$SOFFICE -invisible -accept='socket,host=localhost,port=8100;urp;' -display :1 > /dev/null 2>&1 &
echo_success
echo
;;
startnow)
#
# Start Soffice
#
echo -n 'Starting Xvfb for SOFFICE: '
$XVFB :1 -screen 0 800x600x16 -fbdir /tmp > /dev/null 2>&1 &
echo_success
echo
echo -n 'Starting SOFFICE: '
$SOFFICE -invisible -accept='socket,host=localhost,port=8100;urp;' -display :1 > /dev/null 2>&1 &
echo_success
echo
;;
stop)
#
# Stop Soffice
#
echo -n 'Stopping SOFFICE: '
$KILLER -q -s TERM soffice.bin
echo_success
echo
echo -n 'Stopping Xvfb for SOFFICE: '
$KILLER -s TERM Xvfb
echo_success
echo
;;
*)
echo "Usage: $0 {start|startnow|stop}"
exit 1;;
esac
exit 0
Add the init script and disable it. Due to system dependencies it must be started from rc.local.
chkconfig --add soffice
chkconfig soffice off
Java must be installed and functional before installing Apache Tomcat!
We prefer Apache Tomcat from [source] rather than RPMs. If you use RPMs then you'll need to accommodate the path differences. We used apache-tomcat-6.0.13.
cd /usr/local
tar xvfzp
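The rest of this guide refers to /usr/local/tomcat. Assuming the tarball unpacked to /usr/local/apache-tomcat-6.0.13 (the directory name follows the version above), a symlink keeps that path stable:
ln -s /usr/local/apache-tomcat-6.0.13 /usr/local/tomcat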
Add/Create the TOMCAT user and group. Season to taste.
echo 'tomcat:x:52:52:Tomcat:/usr/local/tomcat:/sbin/nologin' >> /etc/passwd
echo 'tomcat:x:52:' >> /etc/group
Unpack the service scripts that help manage tomcat more easily
cd /usr/local/tomcat/bin
tar xfvzp jsvc.tar.gz
cd jsvc-src
sh ./configure
make
Change ownership and put logs in /var/log.
chown -R tomcat.tomcat /usr/local/tomcat
rm -rf /usr/local/tomcat/logs
mkdir /var/log/tomcat
chown -R tomcat.tomcat /var/log/tomcat
ln -s /var/log/tomcat /usr/local/tomcat/logs
Create the Tomcat temporary directory if it doesn't already exist.
mkdir /alf/tmp/tomcat
Change ownership on /alf/tmp to tomcat user.
chown -R tomcat.tomcat /alf/tmp
Replace the contents of /usr/local/tomcat/conf/server.xml with this minimal set:
<Server debug='0' port='8005' shutdown='SHUTDOWN'>
<Service name='Alfresco-Service'>
<Connector protocol='AJP/1.3' address='[!REAL_SERVER_IP_ADDRESS!]'
port='8009' minProcessors='15' maxProcessors='200'
enableLookups='false' redirectPort='8443' emptySessionPath='true'
acceptCount='10' debug='0' connectionTimeout='60000'
useURIValidationHack='false' URIEncoding='UTF-8'/>
<Engine jvmRoute='fresco1' debug='0' name='Alfresco-Engine'>
<Realm className='org.apache.catalina.realm.MemoryRealm' />
<Host name='[!YOUR_FQDN_WEB!]' appBase='/alf/alfy'
debug='0' autoDeploy='false' unpackWARs='false'>
<Context docBase='' path=''/>
<Valve className='org.apache.catalina.valves.FastCommonAccessLogValve' directory='logs'
prefix='alfresco_access.' suffix='.log' pattern='common' resolveHosts='false'/>
</Host>
</Engine>
</Service>
</Server>
NOTE: java.io.tmpdir is changed to /alf/tmp/tomcat
vi /etc/init.d/tomcat
#!/bin/sh
#
# tomcat Utilizing the jsvc start and stop the servlet engine.
#
# chkconfig: 345 99 10
# description: Starts and stops the tomcat servlet engine.
# Source function library.
. /etc/rc.d/init.d/functions
##############################################################################
#
# Copyright 2004 The Apache Software Foundation.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################
#
# Small shell script to show how to start/stop Tomcat using jsvc
# If you want to have Tomcat running on port 80 please modify the server.xml
# file:
#
#
# <Connector className='org.apache.catalina.connector.http.HttpConnector'
# port='80' minProcessors='5' maxProcessors='75'
# enableLookups='true' redirectPort='8443'
# acceptCount='10' debug='0' connectionTimeout='60000'/>
#
# That is for Tomcat-5.0.x (Apache Tomcat/5.0)
#
# Adapt the following lines to your configuration
JAVA_HOME=/usr/java/default/jre
CATALINA_HOME=/usr/local/tomcat
DAEMON_HOME=/usr/local/tomcat/bin/jsvc-src
TOMCAT_USER=tomcat
TOMCAT_GROUP=tomcat
TOMCAT_PID=/var/run/jsvc.pid
TMP_DIR=/alf/tmp/tomcat
CATALINA_OPTS=
CLASSPATH=$JAVA_HOME/lib/ojdbc14.jar:$JAVA_HOME/lib/tools.jar:$CATALINA_HOME/bin/commons-daemon.jar:$CATALINA_HOME/bin/bootstrap.jar
CATALINA_WORK_DIR=$CATALINA_HOME/work
# To get a verbose JVM
#-verbose \
# To get a debug of jsvc.
#-debug \
WAIT_MAX=2
case "$1" in
start)
#
# Start Tomcat
#
# Wait for SOFFICE 2 times...
WAIT=1
while [ $WAIT -le $WAIT_MAX ];
do
echo -n "Waiting for SOFFICE ($WAIT/$WAIT_MAX 1min):"
SOFFICE=`netstat -an | grep 8100`
if [ -n "$SOFFICE" ]; then
echo_success
TOMCAT_CONTINUE=TRUE
let WAIT=$WAIT_MAX+1
else
echo_failure
TOMCAT_CONTINUE=FALSE
let WAIT+=1
sleep 1m
fi
echo
done
echo -n 'Starting Tomcat: '
if [ "$TOMCAT_CONTINUE" = "FALSE" ]; then
echo_failure
echo
exit 1
fi
# Tuned for low pause times and high throughput i-cms
# For monitoring...
#-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:-TraceClassUnloading \
#-Xms2304m -Xmx2304m -Xmn768m -Xss2048k -XX:+AggressiveOpts\
#-Xmx1024m -Xms1024m -XX:+UseLargePages -XX:LargePageSizeInBytes=1m \
#-Xmx1536m \
$DAEMON_HOME/jsvc \
-user $TOMCAT_USER \
-home $JAVA_HOME \
-jvm server \
-Xmx1536m \
-Dcatalina.home=$CATALINA_HOME \
-Djava.io.tmpdir=$TMP_DIR \
-outfile $CATALINA_HOME/logs/catalina.out \
-errfile '&1' \
$CATALINA_OPTS \
-cp $CLASSPATH \
org.apache.catalina.startup.Bootstrap
sleep 2s
if [ -f $TOMCAT_PID ]; then
echo_success
else
echo_failure
fi
echo
;;
stop)
#
# Stop Tomcat
#
echo -n 'Stopping Tomcat: '
PID=`cat $TOMCAT_PID`
kill $PID 2>/dev/null 1>/dev/null
if ps -p "$PID" > /dev/null 2>&1; then
echo_failure
else
echo_success
rm -f $TOMCAT_PID > /dev/null 2>&1
fi
echo
;;
flush)
#
# Stop Tomcat
#
echo -n 'Stopping Tomcat: '
PID=`cat $TOMCAT_PID`
kill $PID 2>/dev/null 1>/dev/null
if ps -p "$PID" > /dev/null 2>&1; then
echo_failure
else
echo_success
rm -f $TOMCAT_PID > /dev/null 2>&1
fi
echo
echo -n "Flushing Tomcat CACHE ($CATALINA_WORK_DIR) : "
rm -rf $CATALINA_WORK_DIR
if [ -e $CATALINA_WORK_DIR ]; then
echo_failure
echo
exit 1;
fi
mkdir $CATALINA_WORK_DIR
chown $TOMCAT_USER.$TOMCAT_GROUP $CATALINA_WORK_DIR
if [ ! -e $CATALINA_WORK_DIR ]; then
echo_failure
echo
exit 1;
fi
echo_success
echo
;;
*)
echo "Usage: $0 {start|stop|flush}"
exit 1;;
esac
exit 0
Add the init script and disable it. Due to system dependencies it must be started from rc.local.
chkconfig --add tomcat
chkconfig tomcat off
Because of the soffice/Xvfb dependency, wait until all other rc scripts have run. Insert the following into /etc/rc.d/rc.local:
service soffice start
service tomcat start
Running Alfresco/Tomcat as the tomcat user prevents binding to ports lower than 1024. To work around this, local NAT rules with iptables allow connections to the standard CIFS and FTP ports while Alfresco/Tomcat runs as a non-root user. I'm assuming you have iptables installed; if not, you know the drill. CAUTION: This clears any existing iptables setting. If you already have an iptables policy, SAVE IT, add these NAT rules to it, and reload.
Clear iptables
iptables -F
iptables -X
The NAT rules redirecting the standard (privileged) ports to the high ports Alfresco listens on:
iptables -t nat -A PREROUTING -p tcp --dport 445 -i eth0 -j REDIRECT --to-port 2445
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 445 -j REDIRECT --to-port 2445
iptables -t nat -A PREROUTING -p tcp --dport 139 -i eth0 -j REDIRECT --to-port 2139
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 139 -j REDIRECT --to-port 2139
iptables -t nat -A PREROUTING -p udp --dport 137 -i eth0 -j REDIRECT --to-port 2137
iptables -t nat -A OUTPUT -p udp -d 127.0.0.1 --dport 137 -j REDIRECT --to-port 2137
iptables -t nat -A PREROUTING -p udp --dport 138 -i eth0 -j REDIRECT --to-port 2138
iptables -t nat -A OUTPUT -p udp -d 127.0.0.1 --dport 138 -j REDIRECT --to-port 2138
iptables -t nat -A PREROUTING -p tcp --dport 21 -i eth0 -j REDIRECT --to-port 2021
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 21 -j REDIRECT --to-port 2021
To save state between restarts:
service iptables save
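A quick way to confirm the redirects are in place:
iptables -t nat -L -n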
We will not be using the Alfresco user database for authentication. Instead we will be importing users via LDAP from the Microsoft AD and relying on NTLM passthru and JAAS/Kerberos authentication.
Get the community source WAR file [alfresco-community-war-2.1.0.tar.gz]
Create the working directory /alf/alfy and unzip the alfresco.war file.
cd /alf
tar xvfzp alfresco-community-war-2.1.0R1.tar.gz
mkdir alfy
cd alfy
unzip ../alfresco.war
chown -R tomcat.tomcat /alf/*
Create the lucene indexes directory. This will NOT be shared in the cluster. The lucene indexes must exist locally for each cluster member server.
mkdir /alf/alf_data
Create the repository directory. This will later be moved to the NFS server and automounted amongst the Alfresco cluster member servers.
mkdir /alf/alf_data_cluster
To enable the multi-lingual menus in the Alfresco web client, copy the translation extensions. This is optional, but a very nice touch.
cp -ax /alf/extensions/messages/* /alf/alfy/WEB-INF/classes/alfresco/messages
These are important! Most examples are shown as diff -u output. Custom configuration files are created in the alfresco/extension sub-directory.
Redirect logging to a better location (in WEB-INF/classes/log4j.properties).
log4j.appender.File.File=/var/log/tomcat/alfresco.log
Accommodate using the Oracle DB.
@@ -1,7 +1,7 @@
#
# Hibernate configuration
#
-hibernate.dialect=org.hibernate.dialect.MySQLInnoDBDialect
+hibernate.dialect=org.hibernate.dialect.Oracle9Dialect
hibernate.jdbc.use_streams_for_binary=true
hibernate.show_sql=false
DON'T change user.name.caseSensitive=false.
Reflect the repository directory changes.
@@ -1,11 +1,10 @@
# Directory configuration
-dir.root=./alf_data
+dir.root=/alf/alf_data
-dir.contentstore=${dir.root}/contentstore
-dir.contentstore.deleted=${dir.root}/contentstore.deleted
-
-dir.auditcontentstore=${dir.root}/audit.contentstore
+dir.contentstore=/alf/alf_data_cluster
+dir.contentstore.deleted=${dir.contentstore}/contentstore.deleted
+dir.auditcontentstore=${dir.contentstore}/audit.contentstore
# The location for lucene index files
dir.indexes=${dir.root}/lucene-indexes
Change the Index Recovery mode to AUTO. It will likely need to be set to FULL on the first startup.
@@ -14,7 +13,8 @@
dir.indexes.lock=${dir.indexes}/locks
# The index recovery mode (NONE, VALIDATE, AUTO, FULL)
-index.recovery.mode=VALIDATE
+index.recovery.mode=AUTO
+#index.recovery.mode=FULL
# Change the failure behaviour of the configuration checker
system.bootstrap.config_check.strict=true
Reflect the Oracle DB settings and email host settings.
@@ -70,16 +70,17 @@
# Database configuration
db.schema.update=true
-db.driver=org.gjt.mm.mysql.Driver
+db.driver=oracle.jdbc.OracleDriver
db.name=alfresco
-db.url=jdbc:mysql:///${db.name}
-db.username=alfresco
+db.url=jdbc:oracle:thin:@
Reflect a valid default FROM email address.
@@ -87,7 +88,7 @@
mail.encoding=UTF-8
# Set this value to 7bit or similar for Asian encoding of email headers as required
mail.header=
-mail.from.default=alfresco@alfresco.org
+mail.from.default=
Due to a quirk in Alfresco 2.1 CE and LDAP imports with the Quartz scheduler, change autoStartup to true.
@@ -18,7 +18,7 @@
</property>
<property name='autoStartup'>
- <value>false</value>
+ <value>true</value>
</property>
</bean>
Disable GuestLogins
@@ -155,7 +155,7 @@
<ref bean='authenticationManager' />
</property>
<property name='allowGuestLogin'>
- <value>true</value>
+ <value>false</value>
</property>
</bean>
Disallow creating missing people; we get them from the LDAP import. The default settings for handling duplicate usernames were sufficient for our needs. Unless you know what you are doing with them, I'd leave the settings for processing duplicates alone.
@@ -216,7 +216,8 @@
<property name='createMissingPeople'>
- <value>${server.transaction.allow-writes}</value>
+
+ <value>false</value>
</property>
<property name='userNamesAreCaseSensitive'>
<value>${user.name.caseSensitive}</value>
Added a Default Home Folder to match Personal Home Folders under User Homes. Change the location of Personal Home Folders to reside in /Company Home/User Homes.
@@ -290,12 +291,18 @@
</property>
</bean>
+ <bean name='defaultHomeFolderProvider' class='org.alfresco.repo.security.person.UIDBasedHomeFolderProvider'>
+ <property name='homeFolderManager'>
+ <ref bean='homeFolderManager' />
+ </property>
+ </bean>
+
<bean name='personalHomeFolderProvider' class='org.alfresco.repo.security.person.UIDBasedHomeFolderProvider'>
<property name='serviceRegistry'>
<ref bean='ServiceRegistry' />
</property>
<property name='path'>
- <value>/${spaces.company_home.childname}</value>
+ <value>/${spaces.company_home.childname}/${spaces.user_homes.childname}</value>
</property>
<property name='storeUrl'>
<value>${spaces.store}</value>
Add additional users from AD for administrative privileges.
<property name='adminUsers'>
<set>
<value>admin</value>
<value>administrator</value>
<value>!ADUSERNAME!</value>
</set>
</property>
This critical extension enables the import of users and groups from Microsoft Active Directory via LDAP. It may take some patience to adjust this file to your needs. Once settled, though, make sure clearAllChildren=false is set for user imports; it will minimize interruptions when users log in during import jobs. For user accounts that don't exist yet, home directories named after the username will be created under 'Company Home/User Homes'. If you want them created somewhere else you'll need to modify the cm:homeFolderProvider.
The personQuery can trip you up, but this works well for Microsoft AD. It only imports users that have givenName, surname and email values in their accounts. So, if you aren't seeing users that you expect, the user may be lacking one or more of these values.
<property name='personQuery'>
<value></value>
</property>
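If you need a starting point, a hypothetical AD filter matching the description above (an assumption, not necessarily the original query; note the XML-escaped ampersand):
<property name='personQuery'>
<value>(&amp;(objectClass=user)(givenName=*)(sn=*)(mail=*))</value>
</property>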
The groupQuery is also tricky, but this works well for our purposes. Rather than clutter the Alfresco group management with groups that are of no relevance, we only import the Alfresco groups. Modify to meet your needs.
<property name='groupQuery'>
<value></value>
</property>
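Likewise, a hypothetical starting point that only picks up groups whose names begin with 'alfresco' (an assumed naming convention; adjust to yours):
<property name='groupQuery'>
<value>(&amp;(objectClass=group)(cn=alfresco*))</value>
</property>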
Make sure the searchBase is correct for your environment.
<property name='searchBase'>
<value>dc=[!COMPANY!],dc=[!TLD!]</value>
</property>
Per the notes in the file and from our experience this setting is accurate.
<property name='userIdAttributeName'>
<value>sAMAccountName</value>
</property>
vi /alf/alfy/WEB-INF/classes/alfresco/extension/ldap-authentication-context.xml
<beans>
<bean name='authenticationDao' class='org.alfresco.repo.security.authentication.DefaultMutableAuthenticationDao' >
<property name='allowDeleteUser'>
<value>true</value>
</property>
</bean>
<bean id='ldapInitialDirContextFactory' class='org.alfresco.repo.security.authentication.ldap.LDAPInitialDirContextFactoryImpl'>
<property name='initialDirContextEnvironment'>
<map>
<entry key='java.naming.factory.initial'>
<value>com.sun.jndi.ldap.LdapCtxFactory</value>
</entry>
<entry key='java.naming.provider.url'>
<value>ldap://[!MICROSOFT_LDAP_SERVER_ADDRESS!]:389</value>
</entry>
<entry key='java.naming.security.authentication'>
<value>simple</value>
</entry>
<entry key='java.naming.security.principal'>
<value>[!LDAP_READ_USER!]</value>
</entry>
<entry key='java.naming.security.credentials'>
<value>[!LDAP_READ_USER_PASSWORD!]</value>
</entry>
</map>
</property>
</bean>
<bean id='ldapPeopleExportSource' class='org.alfresco.repo.security.authentication.ldap.LDAPPersonExportSource'>
<property name='personQuery'>
<value></value>
</property>
<property name='searchBase'>
<value>dc=[!COMPANY!],dc=[!TLD!]</value>
</property>
<property name='userIdAttributeName'>
<value>sAMAccountName</value>
</property>
<property name='LDAPInitialDirContextFactory'>
<ref bean='ldapInitialDirContextFactory'/>
</property>
<property name='personService'>
<ref bean='personService'></ref>
</property>
<property name='namespaceService'>
<ref bean='namespaceService'/>
</property>
<property name='attributeMapping'>
<map>
<entry key='cm:userName'>
<value>sAMAccountName</value>
</entry>
<entry key='cm:firstName'>
<value>givenName</value>
</entry>
<entry key='cm:lastName'>
<value>sn</value>
</entry>
<entry key='cm:email'>
<value>mail</value>
</entry>
<entry key='cm:organizationId'>
<value>o</value>
</entry>
<entry key='cm:homeFolderProvider'>
<null/>
</entry>
</map>
</property>
<property name='attributeDefaults'>
<map>
<entry key='cm:homeFolderProvider'>
<value>personalHomeFolderProvider</value>
</entry>
</map>
</property>
</bean>
<bean id='ldapGroupExportSource' class='org.alfresco.repo.security.authentication.ldap.LDAPGroupExportSource'>
<property name='groupQuery'>
<value></value>
</property>
<property name='searchBase'>
<value>dc=[!COMPANY!],dc=[!TLD!]</value>
</property>
<property name='userIdAttributeName'>
<value>sAMAccountName</value>
</property>
<property name='groupIdAttributeName'>
<value>cn</value>
</property>
<property name='groupType'>
<value>group</value>
</property>
<property name='personType'>
<value>person</value>
</property>
<property name='LDAPInitialDirContextFactory'>
<ref bean='ldapInitialDirContextFactory'/>
</property>
<property name='namespaceService'>
<ref bean='namespaceService'/>
</property>
<property name='memberAttribute'>
<value>member</value>
</property>
<property name='authorityDAO'>
<ref bean='authorityDAO'/>
</property>
</bean>
<bean id='ldapPeopleTrigger' class='org.alfresco.util.TriggerBean'>
<property name='jobDetail'>
<bean id='ldapPeopleJobDetail' class='org.springframework.scheduling.quartz.JobDetailBean'>
<property name='jobClass'>
<value>org.alfresco.repo.importer.ImporterJob</value>
</property>
<property name='jobDataAsMap'>
<map>
<entry key='bean'>
<ref bean='ldapPeopleImport'/>
</entry>
</map>
</property>
</bean>
</property>
<property name='startDelay'>
<value>480000</value>
</property>
<property name='repeatInterval'>
<value>4500000</value>
</property>
<property name='scheduler'>
<ref bean='schedulerFactory' />
</property>
</bean>
<bean id='ldapGroupTrigger' class='org.alfresco.util.TriggerBean'>
<property name='jobDetail'>
<bean id='ldapGroupJobDetail' class='org.springframework.scheduling.quartz.JobDetailBean'>
<property name='jobClass'>
<value>org.alfresco.repo.importer.ImporterJob</value>
</property>
<property name='jobDataAsMap'>
<map>
<entry key='bean'>
<ref bean='ldapGroupImport'/>
</entry>
</map>
</property>
</bean>
</property>
<property name='startDelay'>
<value>180000</value>
</property>
<property name='repeatInterval'>
<value>14400000</value>
</property>
<property name='scheduler'>
<ref bean='schedulerFactory' />
</property>
</bean>
<bean id='ldapPeopleImport' class='org.alfresco.repo.importer.ExportSourceImporter'>
<property name='importerService'>
<ref bean='importerComponentWithBehaviour'/>
</property>
<property name='transactionService'>
<ref bean='transactionComponent'/>
</property>
<property name='authenticationComponent'>
<ref bean='authenticationComponent'/>
</property>
<property name='exportSource'>
<ref bean='ldapPeopleExportSource'/>
</property>
<property name='storeRef'>
<value>${spaces.store}</value>
</property>
<property name='path'>
<value>/${system.system_container.childname}/${system.people_container.childname}</value>
</property>
<property name='clearAllChildren'>
<value>false</value>
</property>
<property name='nodeService'>
<ref bean='nodeService'/>
</property>
<property name='searchService'>
<ref bean='searchService'/>
</property>
<property name='namespacePrefixResolver'>
<ref bean='namespaceService'/>
</property>
<property name='caches'>
<set>
<ref bean='permissionsAccessCache'/>
</set>
</property>
</bean>
<bean id='ldapGroupImport' class='org.alfresco.repo.importer.ExportSourceImporter'>
<property name='importerService'>
<ref bean='importerComponentWithBehaviour'/>
</property>
<property name='transactionService'>
<ref bean='transactionComponent'/>
</property>
<property name='authenticationComponent'>
<ref bean='authenticationComponent'/>
</property>
<property name='exportSource'>
<ref bean='ldapGroupExportSource'/>
</property>
<property name='storeRef'>
<value>${alfresco_user_store.store}</value>
</property>
<property name='path'>
<value>/${alfresco_user_store.system_container.childname}/${alfresco_user_store.authorities_container.childname}</value>
</property>
<property name='clearAllChildren'>
<value>true</value>
</property>
<property name='nodeService'>
<ref bean='nodeService'/>
</property>
<property name='searchService'>
<ref bean='searchService'/>
</property>
<property name='namespacePrefixResolver'>
<ref bean='namespaceService'/>
</property>
<property name='caches'>
<set>
<ref bean='userToAuthorityCache'/>
<ref bean='permissionsAccessCache'/>
</set>
</property>
</bean>
</beans>
This important extension enables the JAAS user authentication mechanism for the Alfresco WEB interface.
vi /alf/alfy/WEB-INF/classes/alfresco/extension/jaas-authentication-context.xml
<beans>
<bean id='authenticationComponent'
class='org.alfresco.repo.security.authentication.jaas.JAASAuthenticationComponent'>
<property name='realm'>
<value>[!YOUR_AD_DOMAIN!]</value>
</property>
<property name='jaasConfigEntryName'>
<value>Alfresco</value>
</property>
</bean>
<bean id='alfDaoImpl' class='org.springframework.transaction.interceptor.TransactionProxyFactoryBean'>
<property name='proxyInterfaces'>
<value>
org.alfresco.repo.security.authentication.MutableAuthenticationDao
</value>
</property>
<property name='transactionManager'>
<ref bean='transactionManager' />
</property>
<property name='target'>
<bean class='org.alfresco.repo.security.authentication.ntlm.NullMutableAuthenticationDao' />
</property>
<property name='transactionAttributes'>
<props>
<prop key='*'>${server.transaction.mode.default}</prop>
</props>
</property>
</bean>
</beans>
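The jaasConfigEntryName above must match a JAAS login configuration entry named 'Alfresco'. A minimal sketch using the Sun JVM's Kerberos login module follows; the file location and the java.security registration shown here are assumptions, so adjust for your JVM:
vi /usr/java/default/jre/lib/security/java.login.config
Alfresco {
   com.sun.security.auth.module.Krb5LoginModule sufficient;
};
Then reference it from /usr/java/default/jre/lib/security/java.security:
login.config.url.1=file:${java.home}/lib/security/java.login.config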
This extension will enable CIFS, FTP and NTLM Passthru to the Microsoft AD server. The Alfresco WEB interface is authenticating with JAAS.
vi /alf/alfy/WEB-INF/classes/alfresco/extension/file-servers-custom.xml
<alfresco-config area='file-servers'>
<config evaluator='string-compare' condition='CIFS Server' replace='true'>
<serverEnable enabled='true'/>
<host name='[!SERVER_NAME!]' domain='[!YOUR_DOMAIN_NAME!]'/>
<comment>CIFS</comment>
<sessionDebug flags='Negotiate,NetBIOS,State,Tree,Search,Info,File,FileIO,Tran,Echo,Errors,IPC,Lock,Pkttype,Dcerpc,Statecache,Notify,Streams,Socket'/>
<bindto>[!SERVER_REAL_IP_ADDRESS!]</bindto>
<broadcast>X.X.X.255</broadcast>
<tcpipSMB port='2445' platforms='linux'/>
<netBIOSSMB bindto='[!SERVER_REAL_IP_ADDRESS!]' sessionPort='2139' namePort='2137' datagramPort='2138' platforms='linux'/>
<hostAnnounce interval='5'/>
<WINS>
<primary>[!WINS_SERVER_IP_ADDRESS!]</primary>
</WINS>
</config>
<config evaluator='string-compare' condition='FTP Server' replace='true'>
<serverEnable enabled='true'/>
<debug flags='File,Search,Error,Directory,Info,DataPort'/>
<port>2021</port>
<bindto>[!SERVER_REAL_IP_ADDRESS!]</bindto>
<rootDirectory>/Alfresco</rootDirectory>
</config>
<config evaluator='string-compare' condition='NFS Server' replace='true'>
<serverEnable enabled='false'/>
<enablePortMapper/>
<rpcAuthenticator>
<userMappings>
<user name='admin' uid='0' gid='0'/>
<user name='auser' uid='501' gid='501'/>
</userMappings>
</rpcAuthenticator>
</config>
<config evaluator='string-compare' condition='Filesystems' replace='true'>
<filesystems>
<filesystem name='Alfresco'>
<store>workspace://SpacesStore</store>
<rootPath>/app:company_home</rootPath>
<urlFile>
<filename>__Alfresco.url</filename>
<webpath>https://[!YOUR_WEB_FQDN!]/</webpath>
</urlFile>
<offlineFiles/>
<desktopActions>
<global>
<path>alfresco/desktop/Alfresco.exe</path>
<webpath>https://[!YOUR_WEB_FQDN!]/</webpath>
</global>
<action>
<class>org.alfresco.filesys.smb.server.repo.desk.CheckInOutDesktopAction</class>
<name>CheckInOut</name>
<filename>__CheckInOut.exe</filename>
</action>
<action>
<class>org.alfresco.filesys.smb.server.repo.desk.JavaScriptDesktopAction</class>
<name>JavaScriptURL</name>
<filename>__ShowDetails.exe</filename>
<script>alfresco/desktop/showDetails.js</script>
<attributes>anyFiles</attributes>
<preprocess>copyToTarget</preprocess>
</action>
</desktopActions>
</filesystem>
<avmfilesystem name='AVM'>
<virtualView/>
</avmfilesystem>
</filesystems>
</config>
<config evaluator='string-compare' condition='Filesystem Security' replace='true'>
<authenticator type='passthru'>
<Server>[!MICROSOFT_AD_SERVER_ADDRESS!]</Server>
<Domain>[!YOUR_DOMAIN_NAME!]</Domain>
</authenticator>
</config>
</alfresco-config>
This extension will keep the search index refreshed as documents are added/removed from the repository.
vi /alf/alfy/WEB-INF/classes/alfresco/extension/index-tracking-context.xml
<beans>
<bean id='indexTrackerTrigger' class='org.alfresco.util.TriggerBean'>
<property name='jobDetail'>
<bean class='org.springframework.scheduling.quartz.JobDetailBean'>
<property name='jobClass'>
<value>org.alfresco.repo.node.index.IndexRecoveryJob</value>
</property>
<property name='jobDataAsMap'>
<map>
<entry key='indexRecoveryComponent'>
<ref bean='indexTrackerComponent' />
</entry>
</map>
</property>
</bean>
</property>
<property name='startDelay'>
<value>300000</value>
</property>
<property name='repeatInterval'>
<value>10000</value>
</property>
<property name='scheduler'>
<ref bean='schedulerFactory' />
</property>
</bean>
<bean
id='indexTrackerComponent'
class='org.alfresco.repo.node.index.IndexRemoteTransactionTracker'
parent='indexRecoveryComponentBase'>
<property name='remoteOnly'>
<value>true</value>
</property>
</bean>
</beans>
This enables the multi-lingual features and overrides the default from-email address and the maximum search results. This is also where custom aspects will be added.
vi
<alfresco-config>
<config>
<client>
<from-email-address>[!ALFRESCO_ADMIN_EMAIL_ADDRESS!]</from-email-address>
<search-max-results>300</search-max-results>
</client>
</config>
<config evaluator='string-compare' condition='Languages'>
<languages>
<language locale='ca_ES'>Catalan</language>
<language locale='da_DK'>Danish</language>
<language locale='de_DE'>German</language>
<language locale='es_ES'>Spanish</language>
<language locale='el_GR'>Greek</language>
<language locale='fr_FR'>French</language>
<language locale='it_IT'>Italian</language>
<language locale='ja_JP'>Japanese</language>
<language locale='nl_NL'>Dutch</language>
<language locale='pt_BR'>Portuguese (Brazilian)</language>
<language locale='ru_RU'>Russian</language>
<language locale='fi_FI'>Finnish</language>
<language locale='tr_TR'>Turkish</language>
<language locale='zh_CN'>Simplified Chinese</language>
</languages>
</config>
</alfresco-config>
In preparation for Alfresco in a cluster, the cache mechanism must be set to synchronize with the other members. Thankfully, with EHCache and multicasting the servers can join/exit without always re-configuring. Cache synchronization allows changes made on one Alfresco server to be evident on the other member servers. So, in the highly unlikely event that the same user is logged on to two separate Alfresco instances, any changes in the user's state will be reflected on all member servers. (There is a slight update delay, but it is hardly noticeable.)
Cache synchronization is TIME SENSITIVE! Make sure that the member servers keep a precise time agreement between them. I'll assume that NTP is configured and running on each respective server.
@@ -3,294 +3,658 @@
<diskStore
path='java.io.tmpdir'/>
-
+ <cacheManagerPeerProviderFactory
+ class='net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory'
+ properties='peerDiscovery=automatic, multicastGroupAddress=[!MULTICAST_GROUP_ADDRESS!], multicastGroupPort=[!MULTICAST_GROUP_PORT!]'/>
+
+ <cacheManagerPeerListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory'
+ properties='port=40001, socketTimeoutMillis=90000'/>
+
<defaultCache
maxElementsInMemory='5000'
eternal='true'
timeToIdleSeconds='0'
timeToLiveSeconds='0'
- overflowToDisk='false'
- >
-
+ overflowToDisk='false'>
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+
+
</defaultCache>
<cache
name='org.hibernate.cache.StandardQueryCache'
maxElementsInMemory='50'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.hibernate.cache.UpdateTimestampsCache'
maxElementsInMemory='2000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.NodeImpl'
maxElementsInMemory='10000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.QNameEntityImpl'
maxElementsInMemory='100'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.NodeStatusImpl'
maxElementsInMemory='10000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.NodeImpl.aspects'
maxElementsInMemory='10000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.NodeImpl.properties'
maxElementsInMemory='10000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.ChildAssocImpl'
maxElementsInMemory='200000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.NodeAssocImpl'
maxElementsInMemory='5000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.StoreImpl'
maxElementsInMemory='100'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.VersionCountImpl'
maxElementsInMemory='100'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.AppliedPatchImpl'
maxElementsInMemory='100'
timeToLiveSeconds='300'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.DbAccessControlListImpl'
maxElementsInMemory='1000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.DbAccessControlListImpl.entries'
maxElementsInMemory='1000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.DbAccessControlEntryImpl'
maxElementsInMemory='5000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.DbPermissionImpl'
maxElementsInMemory='500'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.DbAuthorityImpl'
maxElementsInMemory='10000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.domain.hibernate.DbAuthorityImpl.externalKeys'
maxElementsInMemory='5000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.audit.hibernate.AuditConfigImpl'
maxElementsInMemory='2'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.audit.hibernate.AuditDateImpl'
maxElementsInMemory='2'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.audit.hibernate.AuditSourceImpl'
maxElementsInMemory='2000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.attributes.AttributeImpl'
maxElementsInMemory='5000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.attributes.ListEntryImpl'
maxElementsInMemory='2000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.attributes.MapEntryImpl'
maxElementsInMemory='2000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.attributes.GlobalAttributeEntryImpl'
maxElementsInMemory='1000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.avm.AVMNodeImpl'
maxElementsInMemory='5000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.avm.AVMStoreImpl'
maxElementsInMemory='100'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.avm.VersionRootImpl'
maxElementsInMemory='200'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.avm.ChildEntryImpl'
maxElementsInMemory='10000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.avm.HistoryLinkImpl'
maxElementsInMemory='200'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.avm.MergeLinkImpl'
maxElementsInMemory='200'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.avm.AVMNodePropertyImpl'
maxElementsInMemory='2000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.avm.AVMStorePropertyImpl'
maxElementsInMemory='500'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.repo.avm.AVMAspectNameImpl'
maxElementsInMemory='1000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.cache.parentAssocsCache'
maxElementsInMemory='10000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.cache.userToAuthorityCache'
maxElementsInMemory='10000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.cache.permissionsAccessCache'
maxElementsInMemory='50000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.cache.nodeOwnerCache'
maxElementsInMemory='20000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.cache.personCache'
maxElementsInMemory='1000'
eternal='true'
- overflowToDisk='false'
- />
+ overflowToDisk='false'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
<cache
name='org.alfresco.cache.ticketsCache'
maxElementsInMemory='1000'
eternal='true'
- overflowToDisk='true'
- />
+ overflowToDisk='true'>
+ <cacheEventListenerFactory
+ class='net.sf.ehcache.distribution.RMICacheReplicatorFactory'
+ properties='replicateAsynchronously=true, replicatePuts=true,
+ replicateUpdates=true, replicateUpdatesViaCopy=true,
+ replicateRemovals=true'/>
+
+ <bootstrapCacheLoaderFactory
+ class='net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory'
+ properties='bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000'/>
+ </cache>
The Keepalived LVS cluster provides the high availability and load balancing amongst the Alfresco member servers. This scenario is a DIRECT LVS (LVS-DR) configuration: packets are delivered to a member server, and that member server then replies DIRECTLY to the client. This differs from LVS NAT and keeps the Keepalived servers from becoming a bottleneck; it also greatly simplifies communicating with each member server, since LVS NAT requires special routes for specific services such as backup, mail and database servers. LVS DIRECT is achieved by having all cluster members use a 'shared' IP address that is ARPed ONLY by the Keepalived servers; on each member server the 'shared' address is configured in NO-ARP mode. If the member servers ARPed the address, clients could bypass the Keepalived servers and defeat the HA/LB benefits.
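As a sketch only, the matching keepalived.conf service definition uses lb_kind DR so the member servers answer clients directly. The addresses (X.X.X.V for the shared virtual address, X.X.X.A/X.X.X.B for the members), port and check values below are placeholder assumptions; season to taste.
virtual_server X.X.X.V 80 {
    delay_loop 6
    # wlc = weighted least connections; pick whichever lb_algo you prefer
    lb_algo wlc
    # DR = DIRECT routing, no NAT rewriting on the way back to the client
    lb_kind DR
    protocol TCP
    real_server X.X.X.A 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server X.X.X.B 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}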
You don't need a lot of disk, or fast disks for that matter; the Keepalived servers are network/IO intensive.
A wealth of detail is assumed here. Install and configure two RedHat ES servers. Each Keepalived server routes traffic, with the pair acting as master and backup for each other so that the HA/LB layer is itself highly available.
One interface on each server is dedicated to a private network between the two for VRRP synchronization and heartbeat. Should either server go down, all traffic and current sessions are already replicated to the surviving server and no one is the wiser.
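A minimal vrrp_instance sketch for that pairing, assuming eth1 is the private heartbeat link and bond0 (the bonded public interface described next) carries the shared address; the instance name, router id, priorities and password are illustrative only:
vrrp_instance VI_ALF {
    # MASTER here, BACKUP in the peer server's config
    state MASTER
    # private heartbeat/synchronization interface
    interface eth1
    virtual_router_id 51
    # use a lower priority (e.g. 100) on the peer
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        # the 'shared' service address, brought up on the public bond
        X.X.X.X dev bond0
    }
}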
The remaining two network interfaces are bonded for redundancy, lowered latency and increased bandwidth.
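A rough RHEL 5 style sketch of that bonding, assuming eth2 and eth3 are the two public-facing NICs; the bonding mode shown is an assumption, so pick whichever mode your switches support:
# /etc/modprobe.conf
alias bond0 bonding
# mode=0 (balance-rr) assumed here; miimon enables link monitoring
options bond0 miimon=100 mode=0

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=X.X.X.X
NETMASK=255.255.255.0
ONBOOT=yes
USERCTL=no

# /etc/sysconfig/network-scripts/ifcfg-eth2 (repeat for eth3)
DEVICE=eth2
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
USERCTL=no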
I've lost track of the specific RPM requirements for compiling Keepalived. Suffice it to say you will need the appropriate GCC and kernel development RPMs, plus any other libraries that the keepalived configure script identifies.
Assuming keepalived-1.1.13.tar.gz is located in /usr/local/src, untar and configure the source:
cd /usr/local/src
tar xvfzp keepalived-1.1.13.tar.gz
cd keepalived-1.1.13
./configure --prefix=/usr/local/kad --with-kernel-dir=/usr/src/kernels/2.6.9-42.EL-i686
Compile the keepalived daemons and other binaries.
make
If everything was successful you can install Keepalived:
make install
Add packet-marking (iptables mangle) rules so that the firewall marks match the fwmark values referenced in keepalived.conf.
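The original rules aren't reproduced here; as a hedged sketch, fwmark-based marking generally looks like the following, where X.X.X.X is the shared virtual address and the mark value 1 must agree with the fwmark used in keepalived.conf:
# mark HTTP and HTTPS traffic for the virtual address with the same fwmark
iptables -t mangle -A PREROUTING -d X.X.X.X/32 -p tcp --dport 80 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -d X.X.X.X/32 -p tcp --dport 443 -j MARK --set-mark 1
# persist the rules across reboots (RHEL)
service iptables save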
The Apache HTTPD/mod_jk server cluster provides easier administration and manipulation of the web services fronting the Alfresco member servers.
You don't need a lot of disk or fast disks for that matter.
A wealth of detail is assumed here. Install and configure two RedHat ES servers. Each Apache HTTPD server is a member of the Keepalived LVS for HA/LB; the web servers are not synchronized and are unaware of each other in the cluster.
Network interface ALIASES will be configured to accommodate SSL and other unique server addressing.
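A minimal mod_jk sketch for fronting the Alfresco members, assuming the Alfresco Tomcats expose AJP on port 8009 and using illustrative worker names (alf1/alf2); each Tomcat's jvmRoute must match its worker name for sticky sessions to work:
# /etc/httpd/conf/workers.properties
worker.list=alfbalancer
worker.alf1.type=ajp13
worker.alf1.host=X.X.X.X
worker.alf1.port=8009
worker.alf2.type=ajp13
worker.alf2.host=X.X.X.X
worker.alf2.port=8009
worker.alfbalancer.type=lb
worker.alfbalancer.balance_workers=alf1,alf2
worker.alfbalancer.sticky_session=1

# httpd.conf (or per-virtual-host) fragment
JkWorkersFile /etc/httpd/conf/workers.properties
JkMount /alfresco alfbalancer
JkMount /alfresco/* alfbalancer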
Now that the servers are working with JAAS and passthru authentication on CIFS, enable NTLM passthru for the web interface.
Uncomment/enable the NTLM filter sections in the web application's web.xml:
@@ -73,27 +73,26 @@
<filter>
<filter-name>Authentication Filter</filter-name>
- <filter-class>org.alfresco.web.app.servlet.AuthenticationFilter</filter-class>
+ <filter-class>org.alfresco.web.app.servlet.NTLMAuthenticationFilter</filter-class>
</filter>
<filter>
<filter-name>WebDAV Authentication Filter</filter-name>
- <filter-class>org.alfresco.repo.webdav.auth.AuthenticationFilter</filter-class>
+ <filter-class>org.alfresco.repo.webdav.auth.NTLMAuthenticationFilter</filter-class>
</filter>
<filter>
<filter>
@@ -107,7 +106,6 @@
</filter-mapping>
-
+
<filter-mapping>
<filter-name>WebDAV Authentication Filter</filter-name>
Note: There is a 'bug' with the authenticator. You'll need to re-compile with this fix to allow browsers on Windows to authenticate properly.
Create custom images specific to the company branding efforts. Because the size attributes are later removed from the JSP files you have some latitude, but I suggest keeping the images reasonable and close to the original dimensions, especially the 32x32 logo PNG file.
Copy the files to EACH of the Alfresco cluster member servers. I've chosen 'alt' as the directory name, but you may name it something else; it must reside under /alf/alfy, the webserver root, to be referenced correctly by web browsers.
mkdir /alf/alfy/alt
Example filenames:
company1_login.png
company1_logo32.png
company2_login.png
company2_logo32.png
...
For each custom company branding virtual host directive block, add the Rewrite rules. Obviously change /alt/companyX_login.png and /alt/companyX_logo32.png to reflect the company-specific file names and locations. Also, change the alt directory if you named it something else.
RewriteEngine on
RewriteCond %{REQUEST_URI} ^/images/logo/AlfrescoLogo200.png$
RewriteRule ^/images/logo/AlfrescoLogo200.png$ /alt/companyX_login.png [PT]
RewriteCond %{REQUEST_URI} ^/images/logo/AlfrescoLogo32.png$
RewriteRule ^/images/logo/AlfrescoLogo32.png$ /alt/companyX_logo32.png [PT]
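To verify a rewrite from a client, a quick curl check (the hostname is a placeholder) should return the branded PNG rather than the stock Alfresco logo:
# -I fetches only the response headers; a 200 with an image/png content type is a good sign
curl -I http://companyX.example.com/images/logo/AlfrescoLogo200.png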
Remove the width and height size references and set align='center', e.g.:
<td colspan=2 align='center' >
Remove raise_issue from the web GUI:
-<td><nobr><a href='http://www.alfresco.com/services/support/issues/' target='new'><h:outputText value='#{msg.raise_issue}' /></a></nobr>
-<td width=8>
Author: James B. Crocker
EMail: james.crocker@menasha.com
This work is licensed under a Creative Commons Attribution-Share Alike 3.0 License.