If it's just a test server, I strongly recommend copying the data on demand rather than replicating it online. You will have to set up scripts to do this for you. It's usually simple: dump the database as if you were making a backup, move the dump to the appropriate place, and restore it on the other server. You probably don't need more than that. There are several ways to get the same result.
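A minimal sketch of that dump-and-restore flow, assuming hypothetical host names (`prod-server`, `test-server`), a database called `mydb`, and that you have access as the `postgres` user — adjust everything to your environment:

```shell
# Dump the production database in custom format (-Fc), which is
# compressed and restorable with pg_restore.
pg_dump -h prod-server -U postgres -Fc mydb > /tmp/mydb.dump

# Restore it on the test server, dropping existing objects first
# so repeated runs replace the old copy.
pg_restore -h test-server -U postgres -d mydb --clean --if-exists /tmp/mydb.dump
```

Put those two commands in a script and run it from cron whenever you want the test server refreshed.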
But if you have a real reason to replicate, there are a few options.
In fact, most solutions can replicate in only one direction, at least as an option; I would say most can *only* do it in one direction.
It seems you need at most log shipping. I think it's the simplest option. The PostgreSQL documentation covers it. You also have streaming replication, but in your case that would be overkill.
Log shipping has low overhead on the primary server and works fine without much complication. It does not offer high availability or immediate failover, nor can it be used for load balancing, but that does not seem to be what you need.
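Roughly, log shipping is configured like this — a sketch, assuming an archive directory at `/mnt/wal_archive` shared between the servers (the path is an example, not a requirement):

```shell
# --- On the primary, in postgresql.conf ---
# wal_level = replica
# archive_mode = on
# archive_command = 'cp %p /mnt/wal_archive/%f'

# --- On the standby ---
# restore_command = 'cp /mnt/wal_archive/%f %p'
# On PostgreSQL 12+ this goes in postgresql.conf, together with an
# empty standby.signal file in the data directory to enter standby mode:
touch /var/lib/postgresql/data/standby.signal
```

The standby then continuously replays the archived WAL segments from the primary; it lags by however long each segment takes to fill and ship.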
You also have Slony. I think it's a cannon to kill a sparrow, but it's an option.
Another similar option is Bucardo.
There are other solutions that will not suit you (and I don't think they are good ideas anyway), or that I don't know. One of them is PgPool-II, which offloads the primary server so it only has to meet production demand, but it requires additional hardware to coordinate sending queries to the servers; that's no advantage for you.
Other solutions can be used independently of the database: operating-system-level utilities that replicate content. But I doubt they are any better.