However, I have re-started using rsync to back up some key local folders to my NFS server (10.1.1.40) via the autofs-mounted folder (/nfs) and am getting very poor performance.
which I haven't tried yet, as I am letting my data back up at the moment.
So my question, while I wait for my backup to complete: is there a definitive approach for using rsync via /nfs, or should I avoid it and stick to using ssh, as some blogs have suggested?
On 05/16/2017 05:36 AM, kitman wrote:
>
> In a previous thread http://tinyurl.com/ma5dops I was introduced to
> autofs and have been happy ;).
>
> However, I have re-started using rsync to back up some key local folders
> to my NFS server (10.1.1.40) via the autofs-mounted folder (/nfs) and
> am getting very poor performance.
>
> Using
>
> Code:
> --------------------
> rsync -vlrptz --progress --delete ~/data/ /nfs/main/data/
> --------------------
>
> results in very slow/poor transfer speeds, on the order of tens of
> kilobytes per second, and sometimes it even locks my system up.
>
> If I used rsync with ssh as I had done in the past
>
> Code:
> --------------------
> rsync -vlrptz --progress --delete ~/data/ -e "ssh" root@10.1.1.40:/main/data/
> --------------------
>
> the transfer speed is on the order of tens of megabytes per second and
> quite robust.
I do not have experience with rsync and NFS playing together, but a couple
of thoughts come to mind:
Using '-z' (compress) does not make sense to me; the transfer of data
is happening via NFS, so the data is only being compressed and decompressed
again locally to move it between rsync processes on the same machine, while
the NFS traffic itself goes out uncompressed. Chances are very good that it
adds no value, since the files on the target will not be stored compressed.
I’d test without --delete to see if that impacts performance
significantly.
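Something like this, keeping the same paths from your original command but dropping both -z and --delete, would isolate that:

Code:
--------------------
rsync -vlrpt --progress ~/data/ /nfs/main/data/
--------------------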
It may be interesting to see what is happening on the wire. When you
only see KB/s rates, is that because rsync is looking for new work to do, or
because the actual transfer of a big file is slow? I have no idea why
transferring a big file would be that slow, particularly after you disable
the needless compression, but that would be an interesting clue.
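If you want an actual trace, something along these lines would capture the NFS traffic for later inspection (the interface name eth0 is an assumption; substitute your own):

Code:
--------------------
# capture traffic to/from the NFS server (port 2049) into a file for wireshark
tcpdump -i eth0 -w nfs-trace.pcap host 10.1.1.40 and port 2049
--------------------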
Good luck.
I had assumed that the -z option is meant for transfers over a network rather than local ones, and that it reduces bandwidth at the cost of a bit of extra CPU.
> It may be interesting to see what is happening on the wire.
If by 'wire' you mean using a tool like iftop rather than relying on rsync's own reporting, then there is a difference: while rsync reports ~500 kB/s, iftop reports ~4 Mbit/s. (Those two figures are roughly consistent, since rsync reports bytes per second while iftop reports bits.)
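For reference, this is roughly how iftop can be pointed at just this traffic (again, the interface name is a guess for my setup):

Code:
--------------------
# iftop shows rates in bits per second by default
sudo iftop -i eth0 -f "host 10.1.1.40"
--------------------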
I repeated the trial using three methods: rsync over the autofs/NFS share, rsync over ssh into the remote server, and finally turning on the rsync daemon on the remote machine and using rsync:// modules.
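Roughly, the three invocations were of this form (the daemon module name "data" below is just a placeholder for whatever is configured on the server):

Code:
--------------------
# 1. over the autofs/NFS mount
rsync -vlrpt --progress --delete ~/data/ /nfs/main/data/
# 2. over ssh
rsync -vlrpt --progress --delete -e ssh ~/data/ root@10.1.1.40:/main/data/
# 3. against the rsync daemon
rsync -vlrpt --progress --delete ~/data/ rsync://10.1.1.40/data/
--------------------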
The bottom line is that the transfer rate for all three methods, as measured on the 'wire', is about 4 Mbit/s without the -z option, though when I did try -z it seemed to give a very slight improvement.
I am not sure why I experienced an NFS issue with the remote server, but I did reboot it in the meantime, before the new tests. Oh well.
I am still curious why some bloggers suggested not using rsync over NFS, but at the end of the day I think I will stick with ssh until I understand the rsync daemon a little better.
What rsize and wsize are you using to mount the NFS share? Try different sizes: start from 32768 for both values and ramp up to 1 MB if necessary.
Are you mounting it with async? If not, do so.
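In an autofs map that might look something like this (the export path /main is inferred from the earlier ssh command; adjust to match the real export):

Code:
--------------------
# e.g. in /etc/auto.nfs: larger buffers plus async writes
main  -fstype=nfs,rw,async,rsize=32768,wsize=32768  10.1.1.40:/main
--------------------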
I have not set any rsize/wsize on the client, so I assume I am using a default of 4096. The async option is not set on the server either. I may experiment with those items soon.
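To check what the client has actually negotiated, something like this should show the effective values:

Code:
--------------------
# per-mount NFS options, including the rsize/wsize actually in effect
nfsstat -m
# or, more crudely:
grep nfs /proc/mounts
--------------------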
Actually, "wire" means getting a packet trace using Wireshark, tcpdump, or whatever is available.
Anyway, to make an educated guess, please show the NFS export options on the server and the mount options on the client.
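For example:

Code:
--------------------
# on the server
cat /etc/exports
# on the client
mount | grep 10.1.1.40
--------------------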
The NFS share is set up via openmediavault and only has these options set:
subtree_check,secure,no_root_squash
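Note that since neither sync nor async appears in that list, the export presumably falls back to the nfs-utils default, which is sync. Adding async on the server would look something like this (the path and client range here are placeholders, since openmediavault generates the real file):

Code:
--------------------
# hypothetical /etc/exports entry
/export/main  10.1.1.0/24(rw,subtree_check,secure,no_root_squash,async)
--------------------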
Since I started this thread I have successfully got the rsync daemon running on the openmediavault server and am happy using the rsync:// technique to back up my data files.
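For anyone finding this thread later, a minimal daemon setup is roughly as follows; the module name and path are illustrative rather than my exact openmediavault configuration:

Code:
--------------------
# /etc/rsyncd.conf on the server
[data]
    path = /main/data
    read only = no
    uid = root
    gid = root

# then, from the client:
rsync -vlrpt --progress --delete ~/data/ rsync://10.1.1.40/data/
--------------------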