While I would need a lot more detail (such as queue size, load, etc.) to provide a direct answer to your question, one wonders why you don't set your "modifyService" task to run synchronously if it is meant to fire immediately after "timerintermediatecatchevent1". If you make it synchronous, the timer event will be picked up (timers sit on the job queue) and "modifyService" will be invoked immediately, on the same thread.
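To illustrate, here is a minimal sketch of what the synchronous variant might look like in the process XML. The element IDs come from this thread; the timer duration and the delegate class are assumptions for the example:

```xml
<!-- Timer catch event, as named in this thread -->
<intermediateCatchEvent id="timerintermediatecatchevent1">
  <timerEventDefinition>
    <!-- PT5M is a placeholder duration -->
    <timeDuration>PT5M</timeDuration>
  </timerEventDefinition>
</intermediateCatchEvent>
<sequenceFlow id="flow1" sourceRef="timerintermediatecatchevent1"
              targetRef="modifyService"/>
<!-- No async attribute here: the task executes synchronously, on the
     same Job Executor thread that fired the timer -->
<serviceTask id="modifyService" name="Modify Service"
             activiti:class="com.example.ModifyServiceDelegate"/>
```

Because there is no asynchronous continuation between the timer and the task, the engine cannot interleave other work between them.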
Like I said, this doesn't answer your question, but you have to be careful about using DB timestamps: the order of insertion may not actually match the order of execution due to transaction boundaries (especially when you have more than one job scheduler associated with the same database).
When you mention "queue size", do you mean the state of the AcquireAsyncJobsDueRunnable thread? I can't give you specific numbers, but this scenario occurs during "rush hours" for our application.
We set modifyService as asynchronous because we want it executed by the general thread pool dedicated to async tasks. We don't want to burden the "timer thread" with this work, because it can sometimes take a long time (it calls another web service).
About DB timestamps: we checked the application logs as well, and they confirm this strange behavior; the task was executed after the end event of the process.
What information do you need to diagnose the problem in more detail? Would the BPMN definition be helpful? This problem is hurting us, so we want to find and eliminate it as soon as possible.
I'm still a little confused as to why you won't set the modifyService task to synchronous. By binding it to the same thread as the timer task (which is essentially what synchronous means) you are still using the Job Executor thread pool. True, it doesn't use the pool associated with async tasks, but your problem goes away; and if you are concerned about performance, you can separate the nodes that process async and timer jobs. The fact that this happens during "rush hours" indicates that you need to tune your job executor.
Another option is to "block" the end event with a signal catch event that is triggered by a signal throw event placed immediately after modifyService.
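A rough sketch of that signal pattern, assuming hypothetical IDs (`modifyDone`, `throwModifyDone`, `catchModifyDone`, `endevent1`) since I don't have your actual definition:

```xml
<!-- Signal declared at the <definitions> level -->
<signal id="modifyDone" name="modifyDone"/>

<!-- Immediately after the async modifyService: announce completion -->
<intermediateThrowEvent id="throwModifyDone">
  <signalEventDefinition signalRef="modifyDone"/>
</intermediateThrowEvent>

<!-- On the path to the end event: wait until the signal arrives -->
<intermediateCatchEvent id="catchModifyDone">
  <signalEventDefinition signalRef="modifyDone"/>
</intermediateCatchEvent>
<sequenceFlow id="flowToEnd" sourceRef="catchModifyDone"
              targetRef="endevent1"/>
<endEvent id="endevent1"/>
```

The catch event parks the token before the end event, so the process cannot complete until the async branch has thrown the signal.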
I attached a screenshot showing the process definition. As I said, modifyService should be executed right after the timer, and in most cases it is. I don't disagree with your proposed fix, but I would like to know what is causing this behavior: is it our fault, or is it an engine problem? We have also observed this kind of behavior in other processes where service tasks are located one after the other.
The reason for marking the task as async is to establish clear transaction boundaries and limit a rollback-and-retry to a single task. When we designed the process without that, the engine retried several tasks before the faulty one.
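For context, this is the transaction-boundary pattern we rely on; the task IDs and delegate classes below are made up for illustration:

```xml
<!-- Each async continuation commits the engine's work so far and
     schedules the next task as a separate job in its own transaction.
     If taskB fails, only taskB's transaction rolls back and is
     retried; taskA's committed work is untouched. -->
<serviceTask id="taskA" name="Task A" activiti:async="true"
             activiti:class="com.example.TaskADelegate"/>
<sequenceFlow id="flowAB" sourceRef="taskA" targetRef="taskB"/>
<serviceTask id="taskB" name="Task B" activiti:async="true"
             activiti:class="com.example.TaskBDelegate"/>
```

Without the async attributes, both tasks execute in one transaction, so a failure in taskB rolls back and re-executes taskA as well.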