I installed ACS 6.0 as a cluster (2 nodes) on a two-node Windows Server 2016 failover cluster (in VMware). The servers didn't take the load (user sessions) although I gave them a lot of RAM (96 GB) and CPU (16 cores). The Tomcat JVM has 64 GB. Has anyone installed an ACS 6 cluster that runs fine and supports about 200-300 users? What is your solution?
If you are using virtual machines, make sure that your RAM and CPU resources are really (or nearly) dedicated to your cluster, at least the RAM. For 200-300 potential users the setup seems oversized, as Angel commented initially, but if you have a huge number of documents in your repository, the SOLR setup may need a lot of JVM and CPU resources. In any case, some basic profiling should help you diagnose it (Support Tools or jvisualvm can show your JVM usage and CPU load).
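As a starting point for that profiling, the stock JDK tools are often enough to see whether the Tomcat JVM itself is struggling. A sketch follows; the `<tomcat-pid>` placeholder and the GC log path are assumptions for illustration, not Alfresco defaults:

```shell
# List running JVMs to find the process id of the Tomcat/Alfresco JVM.
jps -l

# Sample heap occupancy and GC activity of that JVM every 5 seconds;
# a constantly full old generation or frequent full GCs point to heap trouble.
jstat -gcutil <tomcat-pid> 5000

# Alternatively, enable GC logging at startup (Java 11+ unified logging
# syntax) and review the pauses after the fact; the log path is illustrative.
export JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:file=/var/log/alfresco/gc.log:time,uptime"
```

If the GC figures look healthy while the system is still slow, the bottleneck is more likely the database, the storage or Solr than the repository JVM.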
You should consider running SOLR (Search Services) in a dedicated virtual machine and putting the SOLR indices on SSD disks, if you don't do so already. Appropriate storage and database resources help too, and normally the database is the more relevant of the two. Finally, I use Ubuntu or another Linux-based OS in my setups when I can, but this is probably less important if you are a Windows administrator.
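On such a dedicated Solr VM, the sizing usually goes into `solr.in.sh`. The values below are illustrative assumptions, not recommendations; the general idea is to keep the Solr heap modest and leave the rest of the RAM to the OS page cache that serves the index files:

```shell
# solr.in.sh (illustrative values; adjust to your own index size)
SOLR_HEAP="16g"                            # fixed JVM heap for Solr
SOLR_OPTS="$SOLR_OPTS -Dsolr.solr.home=/mnt/ssd/solrhome"  # keep indices on SSD
```

The SSD mount point here is a hypothetical example path, not a product default.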
Judging from my current installation (4.2), it can reach 100 concurrent users and works fine, no problem at all. That installation is on a physical server with Windows 2012 R2 (128 GB RAM, 32 CPU cores, 64 GB for Tomcat). The number of documents is about 2,500,000. We don't use the Share interface, only Alfresco Explorer. The new installation is an upgrade to 6.0.1 with the same load, but after 30 concurrent sessions Alfresco becomes slow and unresponsive.
I don't understand what could cause that.
I'm asking that question because I have never heard or read of anyone using my kind of architecture, i.e. a two-node Windows 2016 failover cluster with a CSV drive (Cluster Shared Volume) for the contentstore. I don't know whether I should suspect my Alfresco installation or my OS solution.
If you're talking about clustering, the Alfresco support team may be a better audience ;-)
100 concurrent users shouldn't be a problem, depending on the usage scenario, the number of nodes and the customizations. We have single-instance (no cluster) customers with >500 concurrent, actively working users and >20 million document nodes on a 3-VM installation (Alfresco, Solr, Postgres) with 3x 8 cores, and it is very fast.
You should know that a cluster will always cost performance and will never increase it (except some very uncommon read only scenarios).
Anyway, independent of whether you use a cluster or not: you should focus on common tuning and architecture best practices, which can't be fully covered in a forum thread.
* More RAM and (v)CPUs are not always better. More RAM may cause more CPU load and may just hide the real problems.
* Run and tune the db, the repo/share tier and Solr on separate systems.
* In most cases the transformations, the db and especially their storage are the bottlenecks. If you have the knowledge and resources, making the db very fast may be the easier part. Avoid shared SAN LUNs and try to run at least the db on exclusive, cluster-aware, SSD-backed storage. Same for the Solr index.
* It's easier to tune and find bottlenecks on Linux than on Windows.
* Never run Docker on Windows in production; it will not scale. Docker in production should only be an option if you run it directly on bare-metal Linux systems (not VMs), or, if you know what you're doing, with only one tier (alfresco, db, solr) per VM. Our customers don't have the teams required for that. For the large systems mentioned before we run even 6.0 / 6.2 in ESX VMs without any Docker involved.
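The first bullet can be made concrete with a trivial sanity check. The numbers below are the ones from the original post, and the "heap at most half of RAM" threshold is just a common rule of thumb, not an Alfresco requirement:

```shell
#!/bin/sh
# Warn when the Tomcat heap takes more than half of physical RAM,
# leaving too little for the OS file cache, Solr and transformations.
HEAP_GB=64   # -Xmx of the repository JVM (example from the thread)
RAM_GB=96    # physical RAM of the node (example from the thread)

if [ $((HEAP_GB * 2)) -gt "$RAM_GB" ]; then
  echo "WARN: ${HEAP_GB}g heap on a ${RAM_GB}g box leaves little room for anything else"
fi
```

With these example numbers the check fires: a 64 GB heap on a 96 GB node leaves only 32 GB for everything else, which is one plausible reason the new cluster stalls under load.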