Help please setting up Luckybackup (rsync) to networked NAS

My objective is to have a backup of my home directory on my /home/'me' partition using Luckybackup.
I have set up a directory and permissions on my remote NAS and enabled NFS. I know the static IP address of the NAS, so as far as I know the NAS should be able to receive my data, and I am now trying to set up Luckybackup.
At this point, possibly because of the firewall, I cannot see this or any other NFS share, and I do not know whether I need to set up an NFS share on my machine or just connect to the shared folder on the NAS. I am also stuck on how to enter the correct address for the remote folder. Please can somebody guide me in the right direction.

The easiest way is to mount the NFS share outside of what you want to back up, e.g. on /NAS-data, to back up your /home to. Once the NFS share is mounted like that, simply select /home as the source and /NAS-data as the destination.
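A minimal sketch of that mount-then-local-backup approach; the NAS address, export path and mountpoint below are placeholders, and the script only prints the commands so you can review them before running them as root:

```shell
#!/bin/sh
# Placeholders -- adjust to your own setup:
NAS=192.168.1.10            # NAS IP address (or hostname, if it resolves)
EXPORT=/share/W530_backup   # directory exported by the NAS
MNT=/NAS-data               # mountpoint, outside /home

# Print the commands to run as root:
echo "mkdir -p $MNT"
echo "mount -t nfs $NAS:$EXPORT $MNT"
# Once mounted, the backup itself is an ordinary local rsync:
echo "rsync -aH /home/ $MNT/"
```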

Luckybackup is just a frontend to rsync; the web has numerous examples, which you can use to set up luckybackup for syncing to the NAS directly.
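For the direct route, what luckybackup generates boils down to a single rsync over SSH. A hedged sketch, with placeholder user, address and paths (again only printed, not executed; note the remote destination appears exactly once, as user@host:/path):

```shell
#!/bin/sh
# Placeholders -- adjust to your own setup:
SRC=/home/alastair/
DEST="admin@192.168.1.10:/share/W530_backup/"   # user@NAS-address:export-path

# Print the direct-to-NAS command for review:
echo "rsync -aH --progress $SRC $DEST"
```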

Hi and thanks for the pointers. Luckybackup makes the front end easy, but I have been lost in all the bling within the QNAP NAS. I am getting there slowly. Will call again if I get really stuck!
Thanks again,

Sorry, but I am really stuck. Not helped by the QNAP advice, which tends to mask the Linux commands, so I have been going back to the CLI when I can and started over. I can SSH into the NAS from my laptop OK and from there can navigate to the target directory that is to receive my backup. One small step. I have had to use the IP address to get there, not the hostname, which doesn't work.

I have then set up Luckybackup using what I thought I had done before, and here is the command created by the Luckybackup GUI which I am running:

rsync -h --progress --stats -r -tgo -p -l -D --update --exclude=**/*cache*/ --exclude=**/*Cache*/ --exclude=**~ --exclude=**/*Trash*/ --exclude=**/*trash*/ --exclude=/alastair/pCloudDrive/ --exclude=/alastair/Email_Archives/ --protect-args /home/alastair/ admin@192.168.---.---:admin@192.168.---.---:/share/W530_backup/

The first question is why is the target address included twice?

Second question is: what do the error messages mean?

  rsync: change_dir "/root/admin@" failed: No such file or directory (2) 


  rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1674) [Receiver=3.1.3] 

I am still trying, but my trials produce even more error messages. However, if I ran the command in a console and deleted the repeated target address, it seemed to work. Is the issue with the way I have used Luckybackup, or with the QNAP settings?

If you log in on the NAS with SSH as admin, check which directory you land in (e.g. with pwd).


Let's call that /home/admin, for example. Then use something like

rsync ...options here..... admin@IPaddress:/home/admin/....

But it would be better to have a special user for this. It looks like you land in /root, and you shouldn't do that.

A much easier way would be to mount the NFS share.

Hi and thanks for the help. I think I understand, but for clarification: the idea is to mount an NFS share of the intended backup directory on the NAS inside my home directory (/home/alastair/) and then copy everything from that home directory that I wish to keep as a backup into the mounted directory. Is that right? (Avoiding a recursive loop!)

I shall start a new thread on the Luckybackup problem which has never been a problem before and means I have made a mistake along the way.

Many thanks once more.

I think you are mixing up things and as a consequence are focusing on the wrong subject.
IMHO the subject should be: how and where do I mount the NFS exports of my NAS? Because after you have succeeded in that, it is all part of the one and only directory tree that a Unix/Linux system has. You can then forget where it physically is and use the space on the NAS in the same way as any other disk space on your system (by using the path to it). Which again means that you (as system manager) can make the user (probably again you, but a different being to the system) owner of space there. And then you (as user) can copy to and from it whatever you like, with the tools you like. This is NOT a Luckybackup or rsync problem, but a "how do I access the space on the NAS" problem.

When NFS mounting the QNAP, backup is as easy as a local rsync. See:

You are correct of course, but the whole point of NFS mounting is that those "remote file systems" behave as if they were local. Thus it is logical that you can back up "as easily as locally". Same with all other backup tools, self-invented or ready-made, that work locally.

Those tools “never worry” about something being an NFS mount or a local mount or being a mount at all. And neither should their users.

Basically yes. But it depends. There was some enhanced experience in the early nineties when migrating from Apollo / Domain/OS to HP 9000 Series 700 / HP-UX. An Apollo could manipulate the exports of a 715/50 with their file manager. However moving a folder to trash on the Apollo would perform the equivalent of ‘reboot -f’ on the 715/50. :\

Hi and thanks to Henk and Karlmistelberger,
the advice and links are much appreciated. I am enjoying the learning experience which also underlines the fact that there are often many solutions when using Linux, some not as good as others. I also appreciate understanding preferred solutions rather than quick fixes. Much to think on here.

Sorry, I became sidetracked by Luckybackup, but once my short-term needs have been resolved I do need to work hard on backup systems and policies, as what had been a private pastime has suddenly become rather serious, with 24 GB of data and growing daily.


Well, maybe I wasn't clear enough. The options you have can basically be narrowed down to these two:

  1. Mount the NFS share from the NAS on /some/mountpoint on your client and use Luckybackup/rsync to back up to /some/mountpoint.
  2. Use Luckybackup/rsync to back up directly over the network to the NAS

FWIW, I would not mount the share in /home/alastair, but rather on /NAS-shares, i.e. outside /home.

A mountpoint somewhere higher up can be recommended (but I would never use the word “share”, it is an NFS export ;)).
And when it is for the benefit of the user (so, no system backup by root, but one by the user for the user), I would recommend that the user make a symbolic link from somewhere inside that user's home directory to the mount point.

And of course, root will have to take care of the correct fstab entry and of the creation of the mountpoint (using YaST > NFS client might help you to do this correctly and not forget something). But root should not forget to make that user the owner of the mount point, else the user will have no access.
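For reference, the pieces root sets up look roughly like this; the NAS address and paths are placeholders, and YaST > NFS client writes the fstab line for you:

```
# /etc/fstab entry (one line; placeholder address and paths):
192.168.1.10:/share/W530_backup  /NAS-data  nfs  nfsvers=4,_netdev  0  0

# after "mount /NAS-data", give the user access:
#   chown alastair /NAS-data
```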

OK, so I am going for an NFS connection between the NAS and this laptop.
I have created a directory /usr/local/nas3_data and am using YaST to set up an NFS mount of the remote directory on the NAS.

I have used YaST before to do this, but on this occasion I received the following error:

  command '/bin/mount -t nfs '' '/usr/local/nas3_data'' failed:

  Job for rpc-statd.service failed because the service did not take the steps
  required by its unit configuration.
  See "systemctl status rpc-statd.service" and "journalctl -xe" for details.
  mount.nfs: rpc.statd is not running but is required for remote locking.
  mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
  mount.nfs: mounting failed, reason given by server: No such file or directory

  exit code:

What next please?
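For what it's worth, the two halves of that message point at different things: rpc.statd only matters for NFSv3 file locking, while the final "No such file or directory" comes from the NAS and usually means the export path in the mount command is wrong. A sketch of what to try (the address and paths are placeholders; the script only prints the commands, run them as root yourself):

```shell
#!/bin/sh
# Placeholder NAS address -- substitute your own:
NAS=192.168.1.10

# rpc.statd is only needed for NFSv3 locking. Either start it:
echo "systemctl start rpc-statd.service"
# ...or mount with '-o nolock', or force NFSv4, which does not use statd:
echo "mount -t nfs -o nfsvers=4 $NAS:/share/W530_backup /usr/local/nas3_data"
```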

Have you read this guide?

Recommend that you use NFSv4 if possible as well.

And is there any proof that W530_backup is an export on the NAS? Is that the complete path of that directory?
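One way to check that from the laptop, assuming the nfs-client package is installed (the address is a placeholder; the script only prints the command):

```shell
#!/bin/sh
# Placeholder NAS address -- substitute your own:
NAS=192.168.1.10

# showmount lists the directories the NAS actually exports; the path you
# give to YaST/mount must match one of those lines exactly:
echo "showmount -e $NAS"
```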

And of course, what exactly did you fill in in YaST > NFS client?

We cannot look over your shoulder.

Hi deano, Well I read the link a couple of times and had a look at the NAS settings too. I have now selected NFSv4 and turned off the other version to ensure I am using the right kit.
When using Yast I was able to enter the IP address of the NAS server (not Hostname) and then select the directory from the list offered which found my desired backup data.
I then selected Force NFSv4 as you suggested and browsed to my intended local mount point /usr/local/nas3_data. I left the options at their defaults and accepted OK. I then went to the NFS Settings tab but was not offered a chance to open the firewall. I have:

Some firewalld services are not available:

  • nfs (Not available)

These services must be defined in order to configure the firewall.

Unfortunately I couldn't find the help I needed in the link you sent, but am keen to learn. I can confirm that the NAS NFS server is working and exporting my directory; I just cannot connect to it.

Hi Henk,
I filled in the IP address of the NAS and found and selected the exported directory using the popup window. AFAIK this shows that the directory is being exported, but when I try to accept and close the window I get the error message, although this time it is shorter:

  command '/bin/mount -t nfs -o 'nfsvers=4' '' '/usr/local/nas3_data'' failed:

  mount.nfs: mounting failed, reason given by server: No such file or directory

  exit code:

Your expert guidance will be needed once more I fear!