Reading is slow when multiple ContentStores are enabled

liviu_ioan
Member II

Hello,

I have an Alfresco One v5.1.3 installation configured with 6 ContentStores (all of them S3ContentStores).

From time to time, reading is very slow (40-50 seconds).

I debugged the issue. The problem is that when a read occurs, every ContentStore is interrogated for the requested object (in the order defined by contentStoreSelector). Reading is not slow every time, because these lookups are cached (a cache inside the Alfresco code, I mean).

Important note 1: the S3 storage itself is responding very quickly to requests.

Important note 2: the Alfresco S3 Connector has the number of retries set to 5, so every ContentStore is actually interrogated 6 times; in the worst case a single uncached read therefore fires dozens of S3 requests, which is why reading becomes very slow (40-50 seconds). The number of retries is hardcoded in the connector, with no way to configure it from the outside.

More detailed & technical information:

  1. When a read occurs, selectReadStore is called (AbstractRoutingContentStore class). You can see the iteration I'm talking about in selectReadStore, as well as the caching at the beginning of that method (a rough sketch follows this list).
  2. When a write occurs, selectWriteStore is called (StoreSelectorAspectContentStore class). In selectWriteStore, the proper ContentStore is selected using cm:storeName.
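
To make the behaviour easier to follow, here is a rough, paraphrased sketch of that read-side lookup (this is not the actual Alfresco source; the class and field names are simplified for illustration):

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.alfresco.repo.content.ContentStore;

    // Paraphrased sketch of the read-side lookup in AbstractRoutingContentStore.selectReadStore();
    // not the actual Alfresco source, names simplified for illustration.
    public class RoutingReadLookupSketch
    {
        private final List<ContentStore> stores;   // in contentStoreSelector order
        private final Map<String, ContentStore> storesByContentUrl = new ConcurrentHashMap<>();

        public RoutingReadLookupSketch(List<ContentStore> stores)
        {
            this.stores = stores;
        }

        public ContentStore selectReadStore(String contentUrl)
        {
            // Cache hit: the store that served this URL before is reused directly.
            ContentStore cached = storesByContentUrl.get(contentUrl);
            if (cached != null)
            {
                return cached;
            }
            // Cache miss: every store is asked whether it holds the URL, in order.
            // With S3-backed stores each exists() call is a remote request, and the
            // connector's retries multiply the cost for every store that does not hold it.
            for (ContentStore store : stores)
            {
                if (store.isContentUrlSupported(contentUrl) && store.exists(contentUrl))
                {
                    storesByContentUrl.put(contentUrl, store);
                    return store;
                }
            }
            return null; // no store holds the content
        }
    }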

Q1: Why is cm:storeName not used by selectReadStore, too? Why does Alfresco iterate through the ContentStores when cm:storeName already makes it clear which ContentStore holds the requested object?

Q2: Is there anything we can do to speed things up (besides using just one S3ContentStore, so that no iteration is needed)? The slow reading seems to happen quite often, despite the cache in selectReadStore (I have not run enough tests to tell you exactly how often).

Thank you.

Regards,

Liviu

4 Replies
afaust
Master

Re: Reading is slow when multiple ContentStores are enabled

Regarding your Q1: During read, the ContentStore implementation only has access to the content URL, not the context of the node that is being accessed. For this reason, there is no way to access the cm:storeName property and optimise the lookup. Unfortunately, all content stores in default Alfresco (and the Enterprise-only S3 module) use the same content URL protocol and path structure, so there is no additional information in the URL itself to help in differentiating stores and optimising lookup.

In my custom implementation of various stores as part of my simple-content-stores addon, I encountered the various issues that this non-differentiation may cause quite early, and decided to allow configuration of distinctive protocols for each store. With that in hand, the iterative lookup in a routing content store was optimised by checking if the store actually supports the URL before doing the (potentially costly) HTTP request.
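
To illustrate the idea, here is a rough sketch of the approach (not actual code from the addon; the protocol prefixes and class names are made up):

    import java.util.LinkedHashMap;
    import java.util.Map;

    import org.alfresco.repo.content.ContentStore;

    // Sketch of the protocol-based routing idea: each store gets its own content URL
    // protocol, so the router can rule stores out with a cheap string check instead of
    // a remote exists() call. Prefixes and names are illustrative only.
    public class ProtocolAwareRoutingSketch
    {
        // e.g. "s3store1://" -> store1, "s3store2://" -> store2 (hypothetical prefixes)
        private final Map<String, ContentStore> storesByProtocol = new LinkedHashMap<>();

        public void register(String protocolPrefix, ContentStore store)
        {
            storesByProtocol.put(protocolPrefix, store);
        }

        public ContentStore selectReadStore(String contentUrl)
        {
            for (Map.Entry<String, ContentStore> entry : storesByProtocol.entrySet())
            {
                if (contentUrl.startsWith(entry.getKey()))
                {
                    // Only this store can hold the URL, so at most one remote check is needed.
                    ContentStore store = entry.getValue();
                    return store.exists(contentUrl) ? store : null;
                }
            }
            return null; // URL does not match any configured store
        }
    }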

Regarding your Q2: Unfortunately, Alfresco tends to hard-code some of the internals that other people may find useful to configure. Since you cannot reduce the number of attempts, the only thing you could do is change the configured size of the cache. The biggest problem is that the first access to every piece of content that has not been accessed yet will still be slow, unless you add some code to pre-load them. Also, as soon as your system is restarted, the cache is reset to an empty state.
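
If pre-loading is an option for you, the warm-up could be as simple as the following rough sketch; how the list of "hot" nodes is obtained (a query, audit data, etc.) is left open, and the class itself is purely illustrative:

    import java.util.List;

    import org.alfresco.model.ContentModel;
    import org.alfresco.service.cmr.repository.ContentReader;
    import org.alfresco.service.cmr.repository.ContentService;
    import org.alfresco.service.cmr.repository.NodeRef;

    // Rough warm-up sketch: resolving a reader for a node forces the routing store to
    // locate (and cache) the backing store for its content URL, so later reads are fast.
    public class ContentLookupWarmer
    {
        private final ContentService contentService;

        public ContentLookupWarmer(ContentService contentService)
        {
            this.contentService = contentService;
        }

        public void warmUp(List<NodeRef> hotNodes)
        {
            for (NodeRef nodeRef : hotNodes)
            {
                ContentReader reader = contentService.getReader(nodeRef, ContentModel.PROP_CONTENT);
                if (reader != null)
                {
                    // exists() is enough to trigger the store lookup; the bytes are not read.
                    reader.exists();
                }
            }
        }
    }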

Depending on your content access patterns, you may benefit from configuring a CachingContentStore in front of the routing content store. This allows you to cache S3 content on your local server disk and short-circuit the access operations for already cached content. This cache would also survive a system restart, though of course it may require a substantial amount of storage to cover all the frequently accessed content. This store also supports a quota manager that can refuse to cache large files and trigger cleanup of old cache files once a set limit is reached.
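
Just to illustrate what such a cache buys you, here is a conceptual sketch of the short-circuit; this is not the real CachingContentStore (which you would configure rather than write yourself), only the idea behind it:

    import java.io.File;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.alfresco.repo.content.ContentStore;
    import org.alfresco.service.cmr.repository.ContentReader;

    // Conceptual sketch only: content already copied to local disk never touches S3 again.
    // This is NOT the Alfresco CachingContentStore implementation, just the idea behind it.
    public class LocalDiskCacheSketch
    {
        private final ContentStore backingStore;   // the routing store over the S3 stores
        private final File cacheRoot;               // local disk cache directory
        private final Map<String, File> cachedFiles = new ConcurrentHashMap<>();

        public LocalDiskCacheSketch(ContentStore backingStore, File cacheRoot)
        {
            this.backingStore = backingStore;
            this.cacheRoot = cacheRoot;
        }

        public File getContentFile(String contentUrl)
        {
            return cachedFiles.computeIfAbsent(contentUrl, url -> {
                // Cache miss: this still pays the full routing + S3 cost once,
                // but every subsequent read is served from local disk.
                ContentReader reader = backingStore.getReader(url);
                File target = new File(cacheRoot, String.valueOf(Math.abs(url.hashCode())));
                reader.getContent(target);
                return target;
            });
        }
    }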

In any case, you should create a ticket with Alfresco Support, since we are talking about Enterprise-only functionality here...

liviu_ioan
Member II

Re: Reading is slow when multiple ContentStores are enabled

"Unfortunately, all content stores in default Alfresco (and the Enterprise-only S3 module) use the same content URL protocol and path structure, so there is no additional information in the URL itself to help in differentiating stores and optimising lookup."

Exactly what I thought: the useless interrogations could be avoided if the contentUrl had some information embedded in it (e.g., the name of the ContentStore/storage). And yes, the selectReadStore method receives a String contentUrl as an argument, while selectWriteStore receives a ContentContext.
Q1: Why is this? Why does selectReadStore receive just a String contentUrl, with no ContentStore information attached to it?

Q2: OK, so you're saying that if I use the indicated add-on, the exists() method returns quickly because the paths differ from one ContentStore to another?
(This way, checking whether a file belongs to a specific ContentStore becomes fast, without needing to talk to the actual storage.)
In short, by using the add-on our reads become fast, so our issue is solved. Am I correct?

Q3: The add-on looks very useful, indeed. How well tested is it? Is it safe to use it in production?

OK, so the cache used in selectReadStore can be configured. I do not know if this is truly helpful.

Q4: OK, so you're saying I can actually put a CachingContentStore in front of the RoutingStore?
I also thought about using a cache, but I was under the impression that a caching store could only be placed in front of the individual S3ContentStores (which would not help, because the wrong ContentStores would still be iterated).

Still, when a cache miss takes place, we get those painful 50 seconds (even with the cache in front of the RoutingStore).

afaust
Master

Re: Reading is slow when multiple ContentStores are enabled

Q1: Why is that? Well, the only answer I can give is: it was implemented that way by Alfresco in the API. Since Alfresco does not consider this to be part of the public API, there is very little chance this will change, even if you create a JIRA enhancement request for it.

Q2: Correct, that is the point. But be aware that the addon currently does not provide an S3 connector, though one is generally planned for "whenever I have time".

Q3: It is an open source addon that I have developed in my free time. I have used parts of it in production at customers with 30+ million documents, but typically before those parts were integrated into the addon. The addon itself has not been used in any customer production environment yet. As with any open source / 3rd-party addon, due diligence needs to be done with proper testing in your test / QA environments before putting it into production.

With regards to testing: I have tested it in various Alfresco versions on my own environments. Formal test coverage with unit tests etc. is non-existent at the moment and also part of the plan for "whenever I have time".

Q4: Yes, you can put a caching content store in front of any other content store, though of course that only makes sense for potentially slow stores, or ones where the data transfer may incur additional costs (think S3 pricing for GET requests, or the infrequent access service level).

Yes, if the cache misses you will still get the painful 50-second delay - the caching content store is only meant to reduce the number of occurrences, especially if the same set of files is frequently accessed (in every use case there is typically a core working set of documents that gets accessed multiple times within a day).

liviu_ioan
Member II

Re: Reading is slow when multiple ContentStores are enabled

OK, so if we want to use the add-on, we need to implement our own S3ContentStore, because it has to be 'compliant' with the rest of the ContentStore classes from the add-on.
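
For what it's worth, a bare-bones skeleton of such a store could start roughly like the sketch below; the class name, the protocol prefix and everything marked TODO are made up, and the actual S3 access is left out entirely:

    import org.alfresco.repo.content.AbstractContentStore;
    import org.alfresco.repo.content.UnsupportedContentUrlException;
    import org.alfresco.service.cmr.repository.ContentReader;

    // Hypothetical skeleton of a custom S3-backed store with a distinctive content URL
    // protocol, so a routing store can rule it in or out without a remote call.
    // Class name and protocol prefix are made up; all S3 access is left as TODO.
    public class CustomS3ContentStore extends AbstractContentStore
    {
        private static final String PROTOCOL = "myS3v1://";   // one distinctive prefix per store

        @Override
        public boolean isContentUrlSupported(String contentUrl)
        {
            // Cheap check that lets a routing store skip this store for foreign URLs.
            return contentUrl != null && contentUrl.startsWith(PROTOCOL);
        }

        @Override
        public boolean isWriteSupported()
        {
            return false; // TODO: return true once an S3-backed ContentWriter exists
        }

        @Override
        public ContentReader getReader(String contentUrl)
        {
            if (!isContentUrlSupported(contentUrl))
            {
                throw new UnsupportedContentUrlException(this, contentUrl);
            }
            // TODO: map the URL to a bucket/key and return a reader backed by the AWS SDK
            throw new UnsupportedOperationException("S3 read access not implemented in this sketch");
        }
    }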