I'm testing Activiti performance with a slightly tweaked version of Joram Barrez's activiti-benchmark project from GitHub.
The Activiti version is 6.0.0.
I'm using MariaDB 10.2 running in a Docker container as the database backend.
My tweaked version uses a different workflow than the ones used by Joram.
This workflow contains 10 service tasks and 10 script tasks.
All tasks are executed sequentially (no parallel gateways or the like).
The service tasks are implemented by a subclass of TaskActivityBehavior, so they are executed asynchronously.
The script tasks don't do anything besides creating a single variable: var x = 0;
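For reference, each of those script tasks is declared roughly like this in the process definition (the id and name are placeholders, assuming the default JavaScript script engine):

```xml
<scriptTask id="scriptTask1" name="Set x" scriptFormat="javascript">
  <script>var x = 0;</script>
</scriptTask>
```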
I adapted Joram's approach for completing user tasks to my needs, in order to trigger my service tasks.
I tested two scenarios:
A: The service tasks don't set any process variables
B: The service tasks set 100 process variables per service task call
The tests were executed with up to 20 threads, i.e. up to 20 processes running in parallel.
When many process variables are set while many processes are running in parallel,
the execution time per process increases sharply.
After further analysis, I found that setting a lot of variables leads to a lot of database calls.
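If I read the runtime schema correctly, each persisted variable becomes its own row in ACT_RU_VARIABLE_, so scenario B issues on the order of 100 extra INSERTs per service task invocation, roughly along these lines (column list abbreviated, values are placeholders):

```sql
-- One row per variable per execution:
INSERT INTO ACT_RU_VARIABLE_ (ID_, NAME_, TYPE_, EXECUTION_ID_, PROC_INST_ID_, LONG_)
VALUES ('...', 'var0', 'integer', '...', '...', 0);
-- ... repeated for var1 through var99
```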
Here is what I'd like to know:
1. Is my observation correct, that setting many process variables causes the increase in average execution time for a process?
2. Is there a way to set process variables with less overhead?
3. Is there a suggestion for how many processes to start simultaneously with workflows of the described size (10 service tasks, 10 script tasks)?
I know this heavily depends on the hardware Activiti runs on and probably also on the database, but maybe there is a rule of thumb?
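Regarding question 2, one workaround I'm considering is packing many logical variables into a single serialized blob, so the engine persists one row instead of 100. Here is a minimal, Activiti-free sketch of the packing step (the class name and the execution.setVariable call in the comment are my own illustration, not part of the benchmark code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

public class VariablePacking {

    // Serialize a map of logical variables into one byte[], so the
    // engine stores a single blob variable instead of one row each.
    static byte[] pack(Map<String, Object> vars) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new HashMap<>(vars));
        }
        return bos.toByteArray();
    }

    // Restore the logical variables from the blob.
    @SuppressWarnings("unchecked")
    static Map<String, Object> unpack(byte[] blob) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(blob))) {
            return (Map<String, Object>) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> vars = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            vars.put("var" + i, i);
        }
        byte[] blob = pack(vars);
        Map<String, Object> restored = unpack(blob);
        System.out.println(restored.size());        // 100
        System.out.println(restored.get("var42"));  // 42
        // In the service task this would be a single call:
        // execution.setVariable("packedVars", blob);
    }
}
```

The obvious trade-off is that the individual values are no longer queryable or usable in expressions inside the process, so this only helps for variables that are pure payload.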
I'd like to add my benchmark results but can't find a way to attach a file to this post.
Thanks in advance,