There seems to be a bit of misinformation or at least miscommunication in this thread.
First off - ‘ZRam,’ formerly known as compcache, is a compressed RAM disk - that is to say, it creates a virtual block device which can be used for ‘storage’ but is backed by RAM. One of the uses for this is as “swap” for a system that doesn’t have a physical swap device - such as an embedded system, or a system with a limited-write-life SSD / flash memory. (zram is used in some editions of Android, to shoehorn more functionality into older phones by kinda-sorta providing more total memory without needing to kill the flash storage with swap.)
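For the curious, here’s roughly what setting up a zram swap device looks like. This is a hedged sketch, not a recipe - the device name, size, and compression algorithm are assumptions, and module parameters vary by kernel version:

```shell
# Load the zram module (on older kernels this was the 'compcache' patchset)
sudo modprobe zram

# Configure /dev/zram0 as a 2G device (zramctl ships with util-linux;
# 2G and lzo are arbitrary choices here)
sudo zramctl /dev/zram0 --algorithm lzo --size 2G

# Format and enable it as swap, just like any other block device
sudo mkswap /dev/zram0
sudo swapon /dev/zram0
```

The key point: after `swapon`, the kernel treats it like any other swap device, even though it’s really just compressed RAM.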
ZRam comes at a performance cost, of course - the act of compressing (and decompressing) data as it goes into and out of ‘swap’ (again, actually a block of RAM) uses CPU - and sometimes it may not actually compress down all that well.
ZSwap is similar, but it’s designed around being used on a system that DOES have a physical swap partition / file / whatever backing it. Per the Arch wiki, “zswap is a compressed RAM cache for swap devices.” So, you get another tier between main memory and swap - which is ‘compressed cache.’ (This is funny to me, given the former name of ZRam being compcache)
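Since zswap lives in the kernel rather than behind a block device, you enable it with kernel parameters instead of `mkswap`/`swapon`. A sketch (sysfs paths per the mainline kernel; the tuning values are just examples):

```shell
# Check whether zswap is currently enabled
cat /sys/module/zswap/parameters/enabled

# Toggle it at runtime
echo 1 | sudo tee /sys/module/zswap/parameters/enabled

# To enable it at boot instead, add something like this to the
# kernel command line (values here are illustrative, not recommendations):
#   zswap.enabled=1 zswap.max_pool_percent=20
```

Your existing disk swap stays in place; zswap just sits in front of it as the compressed cache tier.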
Now, what I used - for a while - was ZRam as a swap device, set to a higher priority than my ‘normal’ swap device. But as you can read on the link below, this is actually not optimal: the priority system isn’t intelligent enough to keep the most recently / frequently used data in the highest-priority swap. Instead, once the fast zram swap fills up, anything new gets written to the disk / secondary swap, regardless of which pages need to be read/written more often.
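For reference, this is how that priority setup is expressed (device names are assumptions for illustration):

```shell
# Higher number = higher priority; zram gets used first
sudo swapon -p 100 /dev/zram0
sudo swapon -p 10  /dev/sda2   # disk-backed swap as the fallback

# Inspect the result
swapon --show      # or: cat /proc/swaps
```

And that’s exactly the problem described above: priority only controls *where new pages go first*, not which pages stay in the fast tier once zram is full.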
There’s a surprisingly good breakdown of this over on AskUb____.
I haven’t actually tested out ZSwap, but it seems like it would be a better option for a “normal” system. That being said, one user’s performance testing on Ubuntu 13.10 showed sub-optimal results - zswap actually worsened overall performance - but he or she was using an artificially small memory size (1GB) for testing, which I suspect makes a pretty big difference.
The long and the short of this is that ZSwap and ZRam will both make use of CPU time / processing power to ‘shrink’ your memory pages, in one way or another.
And, really, if you’re not hitting swap all that often, none of these will probably make much of a difference! I’m currently of the ‘decrease swappiness and use an SSD for most of your applications’ school of thought. Lower swappiness means less of your memory will be free for disk cache, but while cache is great, if your disk is a nice fast SSD, losing some of it isn’t going to be as much of a problem, IMO.
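If you want to go the lower-swappiness route, it’s a one-liner (the value 10 here is just an example, not a recommendation for every workload):

```shell
# See the current value (the kernel default is usually 60)
cat /proc/sys/vm/swappiness

# Change it for the running system
sudo sysctl vm.swappiness=10

# Make it persist across reboots
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```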
But, your mileage will totally vary depending on what type of load you’ve got on your system!
Also, please remember to check your disk schedulers - I don’t know where openSUSE is in terms of CFQ/NOOP/DEADLINE autoselecting appropriately, but use the latter two on your VMs, and probably also for systems using SSDs!
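Checking and changing the scheduler per-device is easy to do at runtime (`sda` is an assumed device name; substitute your own):

```shell
# The active scheduler is shown in brackets, e.g. noop deadline [cfq]
cat /sys/block/sda/queue/scheduler

# Switch the running system to deadline
echo deadline | sudo tee /sys/block/sda/queue/scheduler

# To make it the boot-time default, add 'elevator=deadline'
# to the kernel command line
```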