The data distribution is normal for an Alfresco system. alf_node_properties is almost always going to be the largest table, and because of the indices required to support transactional metadata queries (TMQs), the combined size of those indices will always exceed the size of the actual data.
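You can verify this on your own system by comparing table and index sizes directly. A sketch, assuming the default Alfresco table names (the PostgreSQL functions and the MS SQL Server procedure shown are standard, but adjust schema qualifiers to your installation):

```sql
-- PostgreSQL: raw table data vs. combined index size
SELECT pg_size_pretty(pg_relation_size('alf_node_properties')) AS table_data,
       pg_size_pretty(pg_indexes_size('alf_node_properties'))  AS index_data;

-- MS SQL Server: reserved, data and index sizes for the same table
EXEC sp_spaceused 'alf_node_properties';
```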
Alfresco does not officially support table partitioning: the techniques differ between the supported databases, and it would take extreme effort to validate all upgrade scripts against every possible partitioning scheme. Since you are using MS SQL, you must be running Alfresco Enterprise with an active support subscription. If you were to partition your tables, you would run the risk of no longer being supported for any future issues you may have with the database.
Technically speaking though, it is definitely possible, and I have done this in a couple of (PostgreSQL-based) systems. The contents of alf_node_properties can be partitioned quite nicely on the qname_id column, with partitions laid out along the semantic lines of the content model.
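To illustrate the idea (and only as an illustration: this is NOT supported by Alfresco), a minimal sketch using PostgreSQL declarative list partitioning on qname_id. The concrete qname_id values below are placeholders; the real values would have to be looked up in alf_qname for your content model, and the abbreviated column list would have to match the original alf_node_properties definition:

```sql
-- Sketch only, not a supported setup. The primary key must include the
-- partition key (qname_id), which the natural key here already does.
CREATE TABLE alf_node_properties_part (
    node_id    BIGINT  NOT NULL,
    qname_id   BIGINT  NOT NULL,
    list_index INTEGER NOT NULL,
    locale_id  BIGINT  NOT NULL,
    -- ... remaining value columns as in the original alf_node_properties ...
    PRIMARY KEY (node_id, qname_id, list_index, locale_id)
) PARTITION BY LIST (qname_id);

-- One partition per group of semantically related properties
-- (qname_id values are placeholders - resolve them via alf_qname)
CREATE TABLE alf_node_properties_audit
    PARTITION OF alf_node_properties_part FOR VALUES IN (24, 25, 26);
CREATE TABLE alf_node_properties_custom
    PARTITION OF alf_node_properties_part FOR VALUES IN (101, 102);
CREATE TABLE alf_node_properties_default
    PARTITION OF alf_node_properties_part DEFAULT;
```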
In terms of "archiving", it depends on your specific requirements. Does the historic data need to remain accessible after archiving? If so, I am afraid there is no approach that would reduce the data in your database, except perhaps cleaning up your version histories. If the content no longer needs to be accessible, you could of course export any sub-structures you no longer need and delete those nodes from the system.
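To gauge how much of your node volume actually lives in the version store (and thus how much a version-history cleanup could save), a sketch against the standard Alfresco schema, grouping node counts by store:

```sql
-- Node counts per store: the version store (identifier 'version2Store')
-- often holds a large share of the rows in alf_node and alf_node_properties.
SELECT s.protocol, s.identifier, COUNT(*) AS node_count
FROM alf_node n
JOIN alf_store s ON s.id = n.store_id
GROUP BY s.protocol, s.identifier
ORDER BY node_count DESC;
```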
By default, Alfresco only supports export as an Alfresco Content Package (ACP), which bundles metadata, ACLs and content (no version history / no process information) and can be used to re-import the data. Be aware though that an ACP is just a ZIP file: it is created in the temporary-file space (and is therefore limited by the space available there), and the export runs as a single, sequential process. ACPs and the tooling around them have not been designed for high-volume export/import.
There is no support in Alfresco for having part of the data in a separate database. Of course if a specific database supports distributing individual tables across a database server farm, provides a transparent / transactionally consistent view, and supports distributed foreign keys / referential integrity, then again, it would be technically possible.
I don't believe many customers have needed to split their data so far (including customers with far more data / documents than in your specific case).