Do you need to sync all 200,000 documents, or only a few of them? If you are syncing all 200,000, the replication service is definitely not the right tool.
Also, 200,000 is probably too many docs to put in a single ACP file, although it depends on how big the documents are. If you want to use ACP, you will probably have to do many smaller batches.
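To make the batching idea concrete, here is a minimal sketch of splitting a big list of node refs into chunks, each of which could then be exported to its own ACP file. The `Batcher` class and the batch size are illustrative, not anything from Alfresco; tune the size to how big your documents are.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative helper: split a large list of node refs into smaller
// batches, one ACP export per batch. Not an Alfresco API.
public class Batcher {
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            // subList gives a view of items from i up to (but not including)
            // the end of this batch
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }
}
```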
I recommend an asynchronous approach. At a high level, you put messages on a queue or an event stream, and an integration server monitors the queue/stream and replicates the changes to another server. I've done this with ActiveMQ. I wrote a behavior that watches for the changes I'm interested in and puts a message on a queue that essentially says, "This object changed". Then, over on my other Alfresco server, I have a listener subscribed to the queue. When it sees a message, it grabs the node reference from the message, calls the source Alfresco server to fetch the object, and persists it into the repo.
Similarly, when an object is deleted, a message goes on the queue; the target server sees that message and deletes its copy of the object.
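The update and delete flows above can be sketched roughly like this. To keep it self-contained, an in-memory `BlockingQueue` stands in for the ActiveMQ queue; in the real setup the producer side is an Alfresco behavior using a JMS client, and the consumer runs on the integration/target server. All class and method names here are illustrative, not Alfresco or JMS APIs.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the one-way sync pattern: behavior -> queue -> listener.
// A BlockingQueue stands in for the ActiveMQ broker.
public class SyncSketch {
    // Message that essentially says "this object changed (or was deleted)".
    record ChangeEvent(String sourceNodeRef, String action) {}

    static final BlockingQueue<ChangeEvent> queue = new LinkedBlockingQueue<>();

    // Producer side: the behavior fires on update/delete and enqueues an event.
    static void onNodeUpdated(String nodeRef) {
        queue.add(new ChangeEvent(nodeRef, "UPDATED"));
    }

    static void onNodeDeleted(String nodeRef) {
        queue.add(new ChangeEvent(nodeRef, "DELETED"));
    }

    // Consumer side: the listener on the target server processes one event,
    // either fetching the changed object from the source or deleting its
    // local copy. Returns a description of what it did.
    static String handleNext() {
        ChangeEvent evt = queue.poll();
        if (evt == null) {
            return "no pending events";
        }
        return switch (evt.action()) {
            case "UPDATED" -> "fetched " + evt.sourceNodeRef() + " from source and persisted it";
            case "DELETED" -> "deleted local copy of " + evt.sourceNodeRef();
            default -> "ignored " + evt.action();
        };
    }

    public static void main(String[] args) {
        onNodeUpdated("workspace://SpacesStore/abc-123");
        onNodeDeleted("workspace://SpacesStore/def-456");
        System.out.println(handleNext());
        System.out.println(handleNext());
    }
}
```

The point of the pattern is that the two servers never block on each other: the source just fires events, and the target catches up at its own pace.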
In my example, I didn't care about node refs changing between the two servers. I just stored the originating node reference as a property on the target server so I could track back to the source.
In your case, instead of fetching the object you could trigger an export into an ACP file, then on the target server, import the ACP to preserve the node reference.
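Here is a tiny sketch contrasting those two node-ref strategies. The maps stand in for the target repository; nothing here is an Alfresco API, and the idea of a "source ref" property is just how I modeled it in my example.

```java
import java.util.HashMap;
import java.util.Map;

// Two ways to relate target nodes back to source nodes. Illustrative only.
public class RefStrategies {
    // Strategy 1 (my example): the target assigns a new node ref and stores
    // the originating ref as a property so you can track back to the source.
    static final Map<String, String> sourceRefProperty = new HashMap<>();

    static String persistWithNewRef(String sourceRef, String newTargetRef) {
        sourceRefProperty.put(newTargetRef, sourceRef); // track back to source
        return newTargetRef;
    }

    // Strategy 2 (ACP-style import): the target keeps the original node ref,
    // so no mapping property is needed.
    static String importPreservingRef(String sourceRef) {
        return sourceRef; // same ref on both servers
    }
}
```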
My example does one-way sync, which it sounds like would work for you. The folks at Parashift have a product that does two-way synchronization leveraging event streams and Apache Camel. Here is a blog post they wrote about it: Stream Processing with Alfresco – Parashift. They did a presentation at DevCon in Zaragoza, but I can't seem to find that presentation at the moment.
If you want to try using streams instead of queues, you might take a look at a project I have on GitHub that writes events to Apache Kafka from a behavior. It's just something I was playing with, so it is not production-hardened code.
Anyway, I hope that gives you some useful ideas.