Backing up ~/ Directory

Hello, I wasn’t quite sure where this topic belongs, but I guess it is most relevant to installation since it has to do with recovery and re-installation.

I have salvaged a 140 GB HDD from a netbook, and I want to use it as a secondary back-up medium.

I want to copy EVERYTHING from my ~/ directory weekly and give each copy a date.

I mean everything, including hidden files. Is there a right method to do this? I tried cp -rf ~/*.* ./ with a terminal open from the drive, but it didn’t copy the hidden files.

Normal users do not see hidden files by default, so cp does not copy them either. Personally I use rsync, via the Luckybackup GUI front end.

You can include or exclude directories. Note that you use directory/*** to designate them. The advantage of rsync is that only changed files are copied.
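
For example, something along these lines (an untested sketch; the Documents directory and the /mnt/backup destination are just placeholders for your own setup):

# copy only ~/Documents; the *** matches the directory plus everything inside it
rsync -a --include='Documents/***' --exclude='*' ~/ /mnt/backup/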

Well, I’ll look into rsync.

I tried cp -rf as root, and it seems to copy everything.

You want to add -p to preserve the attributes of the files. But that assumes copying to a Linux file system; if you copy to a Windows file system, you lose all those attributes.

From man cp


-p     same as --preserve=mode,ownership,timestamps

       --preserve[=ATTR_LIST]
              preserve the specified attributes (default:  mode,ownership,timestamps),  if  possible  additional  attributes:  context,
              links, xattr, all

If you don’t preserve them, then restore can be a problem.

If the backup is on a Windows FS, then it is better to tar the files first, since tar does preserve the attributes.
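
Something along these lines, for instance (untested; the /mnt/backup path is just an example, and it puts the date into the file name as the OP wanted):

# -c create, -p preserve permissions, -z gzip, -f output file;
# -C "$HOME" . archives the home directory itself, dotfiles included
tar -cpzf /mnt/backup/home-$(date +%F).tar.gz -C "$HOME" .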

Hi,

Like the previous posts say, there are tools created to back up just the way you want. I would just like to point out that hidden files (dotfiles) are not matched by shell globbing by default, but there is a builtin shell option in bash to include them.

Compare the output of the following commands inside a directory:

echo *

vs

shopt -s dotglob; echo *

Since shopt is a bash shell builtin, one can get help on it by using

help shopt

Personally I use “dar” for that. Note that “dar” is not part of a standard install, though it is in the repos. It writes an archive file: a single file containing copies of everything backed up. The destination (archive file) can have the date as part of the file name, though it will also be there as the timestamp on the archive file.
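
A basic call looks roughly like this (an untested sketch; the destination path is just an example):

# -c creates an archive with the given base name, -R sets the directory to back up
dar -c /mnt/backup/home-$(date +%F) -R "$HOME"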

*.* is probably not what you want. When you want to address all files, just * is needed. Using *.* will only address files with at least one . (dot) somewhere in the name.

But, indeed, that will skip file names starting with a . (dot). You need to address them explicitly with .* as well. Thus your command then will change to

cp -rf ~/* ~/.*

I do not know why you added the ./ there. What that is depends of course on what your working directory is at that moment, which you did not tell us; thus we do not know and can not give any advice on its effects.

But the better way to do this might be

dotglob=yes cp -rf ~/*

because

dotglob
If set, bash includes filenames beginning with a `.’ in the results of pathname expansion.

> But the better way to do this might be
>
> dotglob=yes cp -rf ~/*

Hi,

dotglob is part of the shell globbing features, at least in bash. If one runs the builtin command

shopt

it should show all the shell options and their status (on or off).

To turn on the glob in question one would run

shopt -s xxxxx

where xxxxx is the glob

To turn off the glob one would run

shopt -u xxxxx

That is true in both an interactive shell and in a script.

See

help shopt 

The dotglob=yes is just an assignment: you are assigning the value yes to the variable dotglob. It does not activate the globbing feature of the shell.
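
What was probably intended is something like this (a minimal sketch; the destination directory is just a placeholder):

shopt -s dotglob           # make * match dotfiles too
cp -rfp ~/* /mnt/backup/   # now the hidden files are included
shopt -u dotglob           # restore the default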

I am not quite sure what your message is. Is the statement I suggested incorrect, in the sense that it does not do what the OP and I intend: to expand the path in the command (~/*) including the names starting with a dot?

Hi,

Yes, because there is a difference between

shopt -u dotglob; echo cp -rf ~/*

and

shopt -s dotglob; echo cp -rf ~/*

That is quite possible, but I used neither of those.

Ok, I think I understand now what you mean. You mean that my dotglob command is wrong. And it is. Thank you for pointing that out to me.

I also admit that I missed your earlier post with a correct command. Shame on me :shame:

In fact I skipped some posts after finding out that most were offering other backup methods to the OP (which is not bad of course, it puts the question in a broader perspective) and I skipped yours on the way, again sorry.

I observed a certain lack of understanding in the OP’s first post, using patterns like *.*, which is most probably not what he wants (to me it seems as if he studied computer science on MS-DOS). And I tried to teach him a bit about patterns in path globbing. Then at the end of it I made the mistake of adding that wrong statement. I tried it here first, but seemingly in the wrong way, making myself even more ridiculous.

BTW, your example with two echo commands will only show a difference if there is at least one file with a name starting with a dot in the working directory.

Alrighty,

Here’s what I am doing so far. I open a terminal at the mount point, log in as root, then do


cp -rf /usr/SJL/* ./

The drive is set up with an NTFS file system, so I will re-format it to ext4 (which is the format of my openSUSE OS),

then try

 cp -rfp /usr/SJL/* ./

Now that some of you have mentioned backup software: the extra drive is larger than my SSD containing openSUSE. The reason that I wish to copy only ~/ is that it would take VERY LONG to copy the entire drive.

Which software is recommended for cloning the entire SSD to the HDD and synchronizing? In addition, something that can be accessed from Windows in order to clone back to another SSD?

So this is the idea:
current = SSD0, extra HDD = HDD, future = SSD1.

SSD0->HDD cloned and weekly synchronized.

HDD->SSD1: cloned in case of catastrophic failure, and accessible from a Windows 7 computer, as my “stationary” laptop only has W7 Pro. (I have yet to experience such a failure with openSUSE once the system has stabilized after one week; the first week goes through a LOT of customization, and that is the window where I can and do break the system.)

Also, hcvv, you are most correct. I was in my first computer class when I was in grade 2 in Korea. I started with MS-DOS, then HTML, made my first website in grade 2, and slowly transitioned my way to “modern” computing; I am now in my final year of university in Canada.

When you made your way to “modern” computing, I hope you understand that *.* is not part of it. rotfl!

Your whole story is a bit confusing to me. When you say that you do as root

cp -rf /usr/SJL/* ./

then you copy to a relative path, relative to the working directory. But as you do not give any indication what the working directory (./) is, we can not give any advice on the usefulness of this command.

Isn’t it far better to start from what you want instead of starting from some uncertain commands? You should find out what you want to make backups for. When you only want to cater for a user (you?) by accident destroying/removing a file, then a backup to the same file system might be OK. When you want to be able to restore from a broken disk, you should of course back up to another physical disk (of course with a Linux file system). When you are afraid of a fire in the system, then you should use removable mass storage and put that somewhere else in the house. When afraid of your house burning down, you should store the backup media elsewhere in town, etc.

Also, decide what to back up (some of this interacts with the above). When it is only about the data of one or more users, the user(s) could do that themselves. You could also, as a system manager, offer to do that for your users (as root). When you want to back up system data, like the configuration files in /etc, that should be done as root.

Etc. Designing a backup policy is not trivial.

So please first try to explain what you want to achieve. When that is to make a copy of a directory and all that is beneath it to another directory, and to keep that synchronised say every week (or at some other interval), then you should look into using rsync. It will copy all files only the first time it is called. After that it will only copy changed files and (with the --delete option) will remove deleted files from the destination. Thus, normally, after the first copy is done, it will only take a fraction of the time.
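
A minimal sketch of such a weekly call (untested; /mnt/backup is just an example mount point, assumed to carry a Linux file system):

# -a preserves attributes, --delete mirrors removals to the destination
rsync -a --delete ~/ /mnt/backup/home-backup/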

There are many possibilities. I, e.g., every week first make a copy of the existing backup in a way that I always have the status of a maximum of 10 weeks back. That means that even after I made a backup and I need a file as it was three weeks ago, that is still possible. You could look into rsnapshot (http://rsnapshot.org/); it is in the OSS repository. (It is a script only, thus it is easy to get the basics out of it: it uses cp -al for making the generation copy and rsync for the backup itself. You can program it yourself as I did, but you can of course also use it as it is.)
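
In essence the generation scheme boils down to something like this (a much simplified, untested sketch; week.0 and week.1 are just example names, and a real script would rotate the older generations first):

cp -al /mnt/backup/week.0 /mnt/backup/week.1   # hard-link copy of the last backup (cheap)
rsync -a --delete ~/ /mnt/backup/week.0/       # refresh the newest generation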

Your story about different disks is a bit bewildering. You are not copying disks (or partitions); you are copying files. And for backups, it is of course better to copy them to a different file system than to copy them on the same file system (see above).

On 2015-09-04 10:26, hcvv wrote:

> Your story about different disks is a bit bewildering. You are not
> copying disks (or partitions); you are copying files. And for backups,
> it is of course better to copy them to a different file system than to
> copy them on the same file system (see above).

If it were about copying disks or partitions, I would use Clonezilla, which has its own boot media.

For copying files I would use rsync, running from the openSUSE rescue XFCE media, because it can verify the copy to be exact.

rsnapshot is quite good, especially for doing this periodically.

I would not use cp.


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

I will give Clonezilla a try.

Sorry, I usually have time to write here past midnight, after I’m finished at school and work. My mind is usually out the window by then.

By ./ I meant the directory I am in after opening the terminal at the mounted drive.