Unless you’ve been living under a rock or never touch a Windows system, you’ll know that probably the worst type of malware has been hitting Windows for the last 4 years or so… The malware searches out all personal and data files on your local machine and then all files in your accessible network shares, encrypting your files as it goes… And then, when it’s finished, it tells you to pay some number of bitcoins to get the key to decrypt your files. Businesses have been driven to close up shop when they can’t pay the ransom, or the key is broken, or no key is sent. There is only one known “proper” solution, and that is to wipe the machine completely and restore from backup, and you know how diligent people and businesses are at making regular backups and testing them.
So, how are they getting into a Linux system?
Encrypting wouldn’t be that difficult to do on a Linux system; all that’s needed is to target the Linux system with the appropriate utilities.
The linked article above mentions compromising Redis, which is an attack vector I read about elsewhere with a different payload/objective.
Redis is a useful message-queueing app (think of a networking buffer to smooth out network anomalies) with minimal security, if any is configured at all (often it’s just not configured, because it adds overhead that decreases performance, and Redis is generally considered not a target). No one should be exposing Redis to the Internet (shame on you if you are), but it may be exposed behind the firewall on the private network, potentially accessible by some other compromised machine (Windows?) on the network.
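As a rough illustration of what “exposed with no authentication” means in practice, here is a minimal Python sketch (standard library only; the host and port are just placeholder examples) that sends a plain-text PING to a Redis port and reports whether the server answers without any password. A “+PONG” reply means anyone who can reach that port can issue commands.

    import socket

    def redis_answers_unauthenticated(host="192.168.1.10", port=6379, timeout=3):
        """Return True if a Redis server at host:port executes PING with no AUTH."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(b"PING\r\n")   # Redis accepts inline (plain-text) commands
                reply = s.recv(64)
        except OSError:
            return False                 # closed, filtered, or timed out
        # "+PONG"  -> the command ran with no authentication at all
        # "-NOAUTH" or "-DENIED" -> requirepass or protected-mode is in effect
        return reply.startswith(b"+PONG")

    if __name__ == "__main__":
        if redis_answers_unauthenticated():
            print("Redis answered without a password: set bind, requirepass and protected-mode.")
        else:
            print("No unauthenticated Redis reply from that host/port.")

If that prints the first message for a box on your private network, any other compromised machine on the same network can read and rewrite whatever that Redis instance holds.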
Anyway,
Just because we run Linux and are immune from the large majority of malware that targets Windows, don’t believe we are completely invulnerable. In fact, we can be vulnerable to application-level attacks, which often aren’t specific to the OS and hardware underneath.
Verified backups that are disconnected from your systems are the only real defence against this sort of attack. The verified and disconnected parts are important. I learned the verified-backup lesson the hard way many years ago.
1) Main CPU important data - current work, archives, config files and misc info - backed up automatically to a versioned cloud service. I use SpiderOak, as its encryption is done on the client side. It’s nice, as long as you don’t forget your password.
2) Another CPU, used mostly as a work backup machine and turned on about twice a week, that syncs automatically with everything that is backed up on the main CPU. This machine has a very limited set of applications - no multimedia, no games, very few third-party (community) applications.
3) A third (space-limited) CPU that syncs to the current-work directory on the main machine.
Each CPU runs a different openSUSE version (13.1, 13.2, 42.1, etc.), so an infected package probably won’t affect all three CPUs at the same time.
So, when opening a synced file on any of the other CPUs, if it works OK, that’s an indication that there’s no ransomware on the main machine. If there is, it’s a case of checking the versioned backups to see when the files started being encrypted and, after cleaning the infected machine - most probably the main one - redownloading the important files in their last clean versions.
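For what it’s worth, a very crude way to automate that “does it still look like my data?” check is to measure how random the bytes of each synced file look; properly encrypted data sits close to 8 bits of entropy per byte. Below is a minimal Python sketch along those lines (the paths and threshold are just examples, and compressed archives or media files will also score high, so a hit only means “look at this one by hand”).

    import math, os, sys
    from collections import Counter

    def entropy_bits_per_byte(data):
        """Shannon entropy of a byte string, in bits per byte (max 8.0)."""
        if not data:
            return 0.0
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def scan(root, sample=64 * 1024, threshold=7.9):
        """Print files under 'root' whose first 'sample' bytes look near-random."""
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        h = entropy_bits_per_byte(f.read(sample))
                except OSError:
                    continue
                if h >= threshold:
                    print("%5.2f  %s" % (h, path))

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else os.path.expanduser("~/work"))

Opening a handful of recently changed files by hand, as described above, remains the simpler sanity check; a script like this only helps when there are too many files to eyeball.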
I think this is reasonable insurance against the problem. What do you think?
Of course, the cloud thing can be exchanged for a local “cloud” system and a couple of spare, old CPUs. But I like having one of the backups offsite, and the power bill for running that would not be much less than the annual subscription.
The current crop of Ransomware implementations seem to be able to also encrypt everything you have placed in “The Cloud” . . .
Assuming that this system “pulls” ‘everything’, there is the possibility that the Ransomware will also be “pulled”.
May I suggest that a “very selective” ‘pull’ be used.
See point 2).
Given that the Ransomware runs in User-Space, the fact that it’s Linux probably doesn’t help.
Versioned Backups were always a good method to recover from catastrophic issues, and they’ll possibly remain the “best method” - provided one accepts that the likelihood of losing the “last 5 minutes” or “the last day’s” or “the last week’s” work and/or changes is quite high (nothing is perfect).
Please note that “rsync” is often not configured to provide versioned Backups . . .
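For anyone wondering what “versioned” looks like with plain rsync: its --link-dest option lets each run create a new dated snapshot while hard-linking unchanged files to the previous snapshot, so old versions survive and only changed files cost extra space. A minimal sketch, driven from Python; the paths and the “latest” symlink name are assumptions for the example.

    import os, subprocess, datetime

    SOURCE = os.path.expanduser("~/work/")   # note the trailing slash
    BACKUP_ROOT = "/backup/work"             # e.g. a mount that is detached afterwards

    def snapshot():
        stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
        dest = os.path.join(BACKUP_ROOT, stamp)
        latest = os.path.join(BACKUP_ROOT, "latest")
        cmd = ["rsync", "-a", "--delete"]
        if os.path.exists(latest):
            cmd.append("--link-dest=" + latest)   # hard-link files unchanged since last run
        cmd += [SOURCE, dest]
        subprocess.run(cmd, check=True)
        # repoint the "latest" symlink at the snapshot just written
        tmp = latest + ".tmp"
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(dest, tmp)
        os.replace(tmp, latest)

    if __name__ == "__main__":
        snapshot()

The snapshot target should of course live on media that is detached (or at least not writable by the backed-up machine) once the run finishes; otherwise the ransomware can simply encrypt the snapshots too.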
AFAIK rw (ransomware) first decrypts what is sent out of the system for some time, so you don’t notice it. For example, if you send a file by e-mail or copy it to an external unit, the file is decrypted first. If not, as you say, then the rw will be noticed much earlier, thus making recovery easier - something the rw doesn’t usually want. According to many security reports, some rw stay hidden for months.
An “instant” rw would obviously be less damaging, as long as you have backup.
Why would you assume so? Also, you’d have to assume that the executable is not only “pulled in” but executed as well. Makes no sense to me.
Of course. That’s what “… important data - current work, archives, config files and misc info…” means. No executables there. SpiderOak’s backup selection is very granular; you can specify down to file level, not just directories. It would be dumb to back up executables, and even dumber - and certainly messy, given the different openSUSE versions - if they were system executables.
I think it may help exactly because it’s Linux. Executables come from different official/semi-official repos, supposedly safer than downloading w32 shareware from an unknown site. Although it’s a moot point, since no executables are pulled in, you may argue that a repo may be compromised (like Mint last January). However, I don’t think it’s probable that two or three different openSUSE version repos will be compromised at the same time, and not all machines are updated at the same time. In fact, perhaps one of the backup CPUs should only be updated very rarely, limited to security updates only. That’s how it usually pans out here, although not intentionally. Something to think about.
That is managed by SpiderOak. I’ve on occasion recovered an older version with info that was deleted from the most recent version; it works quite well.
I suppose a local setup would use rsync, directly or as a backend. Owncloud, perhaps.
> malcolmlewis;2795625 Wrote:
> > third party security (soon to be driverless cars…).
> Knowing computers, programming, and related tech:
> … scares the absolute ******* out of me!!!
About fifteen years or so ago, when I was still working, we had a talk from a retired NASA worker who regaled us with tales of fly-by-wire aircraft incidents. On one occasion, the instrument displays all went dark and then lit up again with the comforting words, 'Please wait . . . '. On another occasion, a pilot on approach found that when he tried to make a left turn, the aircraft went right; when he tried to descend, the aircraft climbed. He still managed to land the aircraft safely by making the opposite actions to what would seem to have been needed. Subsequent exhaustive tests were unable to reproduce the effects.
He also mentioned how NASA had investigated how much effort was required to debug code and how effective it was. It found that in the early stages, little effort was needed in order to get a good benefit, but eventually a stage was reached where, no matter how much effort was put into reducing the number of bugs, no progress could be made. The limit was 7 bugs per 1,000 lines of code.
One of the programs I used to be responsible for had been running for twenty years or so, many times a day, 24x7x365. It then failed over Christmas. Although the program had been revised and updated many times over the twenty years, the ‘bug’ responsible for the failure had been there from day one.
–
Graham Davis, Bracknell, Berks., UK.
openSUSE 42.1; KDE Plasma 5.8.0; Qt 5.7.0; Kernel 4.1.31;
AMD Phenom II X2 550 Processor; Sound: ATI SBx00 Azalia (Intel HDA);
Video: nVidia GeForce 210 (Driver: nouveau)
Which means that most versioned backup schemes will not help at all in recovering from the catastrophe.
Perhaps the only solution is to use a defined and strictly controlled file-name scheme with a checksum of the file data of the known, listed, defined file-name set – products such as “Tripwire” (there is an open-source version and an O’Reilly book) offer protection in this direction.
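A much simplified, Tripwire-like illustration of that idea: record SHA-256 sums for a defined set of paths once, then re-run later and list anything changed, added or missing. This is only a Python sketch (the watched paths and the baseline file name are examples), and the baseline itself has to be kept somewhere the attacker cannot rewrite.

    import hashlib, json, os, sys

    WATCHED = [os.path.expanduser("~/work"), os.path.expanduser("~/archives")]
    BASELINE = os.path.expanduser("~/integrity-baseline.json")

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def snapshot():
        """Map every file under the watched paths to its SHA-256 sum."""
        sums = {}
        for root in WATCHED:
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    p = os.path.join(dirpath, name)
                    try:
                        sums[p] = sha256(p)
                    except OSError:
                        pass
        return sums

    if __name__ == "__main__":
        if sys.argv[1:] == ["init"]:
            with open(BASELINE, "w") as f:
                json.dump(snapshot(), f, indent=1)
        else:
            with open(BASELINE) as f:
                old = json.load(f)
            new = snapshot()
            for p in sorted(set(old) | set(new)):
                if p not in new:
                    print("MISSING ", p)
                elif p not in old:
                    print("NEW     ", p)
                elif old[p] != new[p]:
                    print("CHANGED ", p)

The real Tripwire (and AIDE) also record ownership, permissions and inode data, and protect the baseline cryptographically; the sketch just shows the principle.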
Only if you don’t regularly look at the data replicated to the other machines. That’s assuming the other machines are not all compromised, of course; but I think that is improbable if the machines are set up and used more or less as I described (note I’m absolutely no expert in security). If you do look, and any file is encrypted, you can, in a reasonably easy way, check which machine is compromised, deal with that (full new install from verified images) and recover most of your data.
But if the machine gets compromised, new file data will already be created/modified encrypted, and the checksum will just reflect that. You’d have to check unchanged (older) files, I think. If such a system got popular, someone would create a rw that only encrypts data from the moment it is installed, leaving the older data alone. Better than nothing, I suppose.
Yes, this is exactly the problem – constantly inspecting stored (persistent) data for unexpected and/or unauthorised changes.
As well as constantly checking that only expected and/or authorised processes are currently executing on the machine.
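As a toy illustration of the “only expected processes” part, one could walk /proc and compare process names against an allowlist; the list below is purely an example and far too short for any real system.

    import os

    # Example allowlist only - a real machine runs many more legitimate processes.
    ALLOWED = {"systemd", "bash", "sshd", "rsyslogd", "cron", "kthreadd"}

    def running_processes():
        """Yield (pid, name) for every process visible in /proc (Linux-specific)."""
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open("/proc/%s/comm" % pid) as f:
                    yield pid, f.read().strip()
            except OSError:
                continue   # process exited while we were scanning

    if __name__ == "__main__":
        for pid, name in running_processes():
            if name not in ALLOWED:
                print("unexpected process %s (pid %s)" % (name, pid))

In practice this is exactly the kind of policing that the mandatory access control frameworks mentioned next do properly, per process and per file, rather than by name.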
This is where things such as SELinux and AppArmor need to be considered, and decisions have to be made as to how these approaches should be configured – the following Wikipedia page gives an introduction to the thinking behind the two approaches: <https://en.wikipedia.org/wiki/Security-Enhanced_Linux>
SELinux can potentially control which activities a system allows each user, process and daemon, with very precise specifications.
Another point is that the mechanisms used by Ransomware are not unknown – the only difference from earlier system attacks is that the people behind the Ransomware have figured out how to make money with the attacks – it’s a business . . .
Earlier system attacks were also sometimes business attacks but, with the purpose of knocking competitors out of business – and therefore increasing revenue (for the attacker) – also business.
Or, they were only interested in stealing the stored data and then once the data had been obtained, disabling the attacked systems – possibly also a business . . .
The people behind SELinux have only one goal – system security – don’t let attacks obtain and/or destroy your stored data . . .
It obviously depends on what you’re doing. Once in a while works for me; it doesn’t have to be constant. If it does, just script it. I’m wary of these one-size-fits-all authoritative “solutions”. Every case is different.
OTOH, I haven’t seen a virus or trojan since Windows 95, IIRC, when I was a completely clueless noob, and I consider Linux - at least with my conservative usage pattern - much safer. After all, I’m not running any WAN-exposed server…
I’d not be surprised if there’s a lot of FUD in all this threat rigmarole. It’s easy to shout “you’re under attack!”, “you may be compromised!”, “no one is safe!”, “Do this, use that!”. And whatever you do, it’s never enough (yeah, I know, the journey-not-destination thing. Sigh.).
Many end with “buy my product…”. A big industry indeed.
Many of the people who are involved in the security (encryption) business would possibly tend to agree with you on this point – maybe Bruce Schneier’s thoughts on this issue could be interesting for the readers of this thread: <https://www.schneier.com/>. I recommend that one browses Bruce’s “Crypto-Gram” newsletter (started 1998) for his thoughts on issues such as this theme: <https://www.schneier.com/crypto-gram/>.