HP 8570p laptop - 16 GB of RAM and swap

This is driving me absolutely nuts lately.

I changed the swappiness to 10 and still it uses swap with 16 GB of RAM!

Granted, it’s only a little swap used (20 to 180 MB) at times, but it SLOWS response down when it does this.

How can I fix this without getting rid of swap?

There is no valid reason for this that I can find.

G. Dixon

Hi
Sounds like something else; I run with:


vm.swappiness=1
vm.vfs_cache_pressure=50
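For reference, these can be made persistent across reboots with a sysctl drop-in; the file name below is just a convention (any `.conf` file in that directory works):

```
# /etc/sysctl.d/90-swap-tuning.conf
vm.swappiness = 1
vm.vfs_cache_pressure = 50
```

Apply without rebooting via `sudo sysctl --system`.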

What desktop environment?

I run cache pressure at 50 also.

I’m going to try 1 and see what happens. I just don’t want to turn it off totally, but I may if I don’t get this irritation to stop.

I run KDE

Oops, I thought I had swappiness at 10, but I had already changed it to 1, so that’s not the solution.

Hi
Desktop environment? Filesystem? HDD or SSD, and if so, what brand/model?

duplicate post

As I stated, KDE. Swap does not belong on an SSD, so it’s on an HDD; the main system is on the SSD. Swap, files, movies, docs, and music are saved to the HDD.

I did try swap on the SSD and it didn’t slow down like an HDD does, but again, swap does not belong on an SSD.

I use ext4 for everything.

EDIT:

The SSD is a Western Digital 1 TB and the HDD is a 5400 RPM WD Blue.

Hi
I run all my systems with swap on an SSD, no issues and thousands of hours on some of them. I only use HDDs for backup.

Could be the swap transition to the rotating disk; it’s likely using bfq. Maybe look at setting the HDD scheduler to none, or tweaking it for ext4.

I’m running WD’s on this Tumbleweed system;


pinxi -Dxxz
Drives:    Local Storage: total: 1.36 TiB used: 704.83 GiB (50.4%) 
           ID-1: /dev/nvme0n1 vendor: Western Digital model: WDS250G1B0C-00S6U0 size: 232.89 GiB speed: 15.8 Gb/s lanes: 2 
           serial: <filter> 
           ID-2: /dev/sda vendor: Western Digital model: WDS250G2B0B-00YS70 size: 232.89 GiB speed: 6.0 Gb/s serial: <filter> 
           ID-3: /dev/sdb vendor: Western Digital model: WD10JPVX-60JC3T0 size: 931.51 GiB speed: 6.0 Gb/s serial: <filter> 

How do you have separate schedulers for one SSD and one HDD in a laptop?

I have the scheduler set to deadline currently.

EDIT:

Of course it is the transition to a 5400 RPM HDD, but it should not be using any swap with 16 GB of RAM regardless.

Hi
Consider switching to mq-deadline for the SSD and bfq for the HDD (it’s automatic and the default these days).

Use a udev rule; this is what I used to use. It’s the default now, so you can use whatever you like…


# Add multi-queue scheduler support
# Also add to grub kernel options
# scsi_mod.use_blk_mq=1
# Filename: /etc/udev/rules.d/61-bfq-scheduler.rules

# SSD
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"

# Rotating
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"

Install smem to see what/who etc. is using swap.
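If smem isn’t installed yet, a rough per-process view of swap use can also be scraped straight from /proc; this is just a sketch, not a replacement for smem:

```shell
#!/bin/sh
# List processes with nonzero swap usage, largest first (values in kB).
for d in /proc/[0-9]*; do
    swap=$(awk '/^VmSwap:/ {print $2}' "$d/status" 2>/dev/null)
    if [ -n "$swap" ] && [ "$swap" -gt 0 ]; then
        name=$(tr -d '\0' < "$d/cmdline" 2>/dev/null | cut -c1-60)
        printf '%8s kB  %s\n' "$swap" "${name:-[kernel thread]}"
    fi
done | sort -rn
```

If nothing is printed, no process currently has pages in swap.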

Is it the default for Tumbleweed to use both, as in one for the SSD and one for the HDD, or do I have to set it up after choosing mq-deadline in YaST? Will it automagically apply bfq for the HDD?

I’m still wrapping my head around using udev and systemd after all the years of simple text files to administer or change to one’s liking.

I’ve been running Tumbleweed for over 2 years (maybe even 3) on this laptop now without a fresh install, just zypper dup, so my defaults on many things probably don’t match what a fresh install would look like.

Ahhh, OK, I see: bfq and mq-deadline are not in the YaST kernel choices, so I would have to figure out how to make the changes manually for both.

Do I not have the choices because my original install is so old?

More research to do.

EDIT:

I installed smem, and that’s a start to see what’s what. One change at a time so I know what fixes what, or makes it not fixed :slight_smile:

Hi
It’s enabled by default, so unless you still have something hardcoded (e.g. via a udev rule or kernel option)?

I see;


cat /sys/block/sd[ab]/queue/scheduler
[mq-deadline] kyber bfq none <ssd>
mq-deadline kyber [bfq] none <backup disk>

cat /sys/block/nvme0n1/queue/scheduler
[mq-deadline] kyber bfq none

Here is what i get.

gregory@hp-8570p:~> cat /sys/block/sd[ab]/queue/scheduler
[mq-deadline] kyber bfq none
[mq-deadline] kyber bfq none
gregory@hp-8570p:~> /sys/block/nvme0n1/queue/scheduler
bash: /sys/block/nvme0n1/queue/scheduler: No such file or directory
gregory@hp-8570p:~>

Hi
So it looks like something is setting the HDD to mq-deadline rather than bfq, which it should be… that may be part of the issue… Since you don’t have an NVMe device, the second command isn’t needed :wink:

In the past did you create a udev rule down in /etc/udev/rules.d?
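A quick way to hunt for such an override; these are the standard locations, but check wherever your setup puts custom rules:

```shell
#!/bin/sh
# Any udev rule that touches the I/O scheduler?
grep -rs scheduler /etc/udev/rules.d/ /usr/lib/udev/rules.d/ \
    || echo "no scheduler rule found"
# Any bootloader-set kernel option (e.g. elevator=) forcing it?
cat /proc/cmdline
```

Distro-shipped rules live under /usr/lib/udev/rules.d, so a hit there is normal; a hit under /etc/udev/rules.d is a local override.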

No, I never made changes outside of YaST with the kernel parameters, and that was to deadline.

My main drive sda is a WD Blue 1 TB SSD and sdb is the WD 1 TB Blue HDD.

I moved the HDD from the sda position to sdb when I added the SSD a few months ago, and put swap on the HDD.

I’m thinking that having such an old original install is the cause.

EDIT:

I installed smem, so the next time it happens I will know what is causing it, or at least what is using swap.

Hi
Via YaST, can you revert your setting and let the system use the defaults? I’m assuming you have it set in the bootloader kernel options?

The only choices there are none, cfq, noop, and deadline, and yes, that’s where I chose deadline. Revert back to what?

EDIT:

I have it set in kernel settings.

Hi
Can you set it to nothing/blank? Where in YaST is it set (screenshot)?

Finally got a handle on it.

It turns out that adjusting swappiness and cache pressure is the solution, but the usual recommendations of swappiness = 10 and cache pressure = 50 are misleading.

I ended up with swappiness = 1 (the lower the number, the less likely swap is used).

Cache pressure ended up at 150, and this is where it is misleading. The higher the number, the more aggressively the kernel reclaims the filesystem cache (dentries and inodes), with the usual default being 100. Changing to 50, as many recommend, means it works less aggressively, and I needed it to be a bit more aggressive in my situation.
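For anyone following along, the live values can be checked (and changed temporarily, as root) at runtime before committing anything to config files; a sketch:

```shell
#!/bin/sh
# Read the current live values
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/vfs_cache_pressure
# Try a value without persisting it (needs root; reverts on reboot):
#   sysctl vm.vfs_cache_pressure=150
```

Runtime changes like this make it easy to experiment for a few days before writing the winners into /etc/sysctl.d.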

I’m going to try swappiness at 10 and adjust cache pressure downward over the next couple of weeks to find the sweet spot. Most likely it will be between 100 and 150, I’m guesstimating.

Thanks for the suggestions early in this game. I have my scheduler set to blank/none in YaST kernel settings and will play around with that after I find a sweet spot with swappiness and cache pressure. One thing at a time so I know what caused the problem!

I’ll keep this thread updated with my results.