Upgrade from 3.3.1 to 4.2.2: Lots of Documents
04-21-2015 11:43 AM
Hi
We have what we consider a large Alfresco 3.3.1 installation holding approximately 50 million documents. It runs on a single Linux machine and uses DB2 for its database.
I am not sure how to proceed with the upgrade…
Looking at the differences between 3.3.1 and 4.2.2, it seems there are a number of upgrade steps that re-create the big ALF tables:
TABLE_NAME           ROW COUNT
ALF_NODE_ASPECTS     320 million
ALF_NODE_PROPERTIES  250 million
ALF_NODE              60 million
ALF_CHILD_ASSOC       50 million
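(For anyone comparing against their own system: counts of this magnitude can be read from DB2's catalog statistics instead of running full table scans. A minimal sketch, assuming an ALFRESCO schema name and reasonably fresh RUNSTATS:)

-- Row counts from catalog statistics; CARD is -1 if RUNSTATS never ran.
-- The 'ALFRESCO' schema name is an assumption.
SELECT TABNAME, CARD
FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'ALFRESCO'
  AND TABNAME IN ('ALF_NODE_ASPECTS', 'ALF_NODE_PROPERTIES',
                  'ALF_NODE', 'ALF_CHILD_ASSOC');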
I don't think our current installation has enough DB2 transaction log space to actually execute the DB2 upgrade scripts.
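For what it's worth, the current log headroom can be checked, and the secondary log allocation raised, before attempting anything. A sketch only, assuming DB2 9.x or later and a connection to the Alfresco database:

-- Current log configuration, via the SYSIBMADM.DBCFG admin view
SELECT NAME, VALUE
FROM SYSIBMADM.DBCFG
WHERE NAME IN ('logfilsiz', 'logprimary', 'logsecond');

-- Raise secondary logs before the upgrade (LOGSECOND is updatable online;
-- LOGPRIMARY + LOGSECOND must stay within DB2's 256-log-file limit).
-- LOGSECOND -1 enables infinite logging but requires archive logging.
CALL SYSPROC.ADMIN_CMD('UPDATE DB CFG USING LOGSECOND 200 IMMEDIATE');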
Any hints? Do other people have installations of a similar size?
How did they upgrade, and how long did it take? Do we need to create a new production installation and then transfer the deltas after the process has completed?
Thanks in advance
DrD
1 Reply
04-22-2015 05:59 AM
In particular… 3.3/org.hibernate.dialect.DB2Dialect/node-prop-serializable.sql:
INSERT INTO t_alf_node_properties
  ( node_id, actual_type_n, persisted_type_n,
    boolean_value, long_value, float_value, double_value,
    string_value, serializable_value,
    qname_id, list_index, locale_id )
SELECT node_id, actual_type_n, persisted_type_n,
       boolean_value, long_value, float_value, double_value,
       string_value, serializable_value,
       qname_id, list_index, locale_id
FROM alf_node_properties;
This is a complete copy of 247 million rows, which comes to about 47 GB.
Does that sound feasible?
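Run as-is, that whole copy has to fit in a single logged unit of work. If the stock script can be adapted, one common DB2 workaround is to LOAD from a cursor instead, which bypasses row-level logging entirely. A sketch only, not the stock upgrade step; the ALFRESCO database name is an assumption, and NONRECOVERABLE means the table cannot be rolled forward afterwards, so take a backup once it completes:

-- DB2 CLP sketch: replace the logged INSERT...SELECT with a cursor LOAD.
-- NONRECOVERABLE skips logging; back up the database afterwards.
CONNECT TO ALFRESCO;
DECLARE c1 CURSOR FOR
  SELECT node_id, actual_type_n, persisted_type_n,
         boolean_value, long_value, float_value, double_value,
         string_value, serializable_value,
         qname_id, list_index, locale_id
  FROM alf_node_properties;
LOAD FROM c1 OF CURSOR INSERT INTO t_alf_node_properties NONRECOVERABLE;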