Cpuset files in /dev renamed?

I’m reviving an old product, and the existing code uses cpusets. I’m (99%) sure that the old scripts worked with Leap 15.5, but I’m now working with Leap 15.6, and a number of the cpuset files don’t exist:

  • /dev/cpuset/<setName>/cpuset.cpus
  • /dev/cpuset/<setName>/cpuset.cpu_exclusive
  • /dev/cpuset/<setName>/cpuset.mems
  • /dev/cpuset/<setName>/cpuset.sched_load_balance
  • /dev/cpuset/<setName>/cpuset.mem_hardwall

However, when I look inside /dev/cpuset/<setName>, I see all of these files, but without the cpuset. prefix. This is strange; all the docs I can find (including the kernel source installed by sudo zypper install kernel-source) use the cpuset. prefix.

Have these files been (recently) renamed, removing the cpuset. prefix? Or can I take it that the files I see (e.g. cpu_exclusive, sched_load_balance) can be used as if they were the documented files (cpuset.cpu_exclusive, cpuset.sched_load_balance)?
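
For concreteness, here's the kind of thing the old scripts do, shown against the sys set from the listings below (the CPU values here are made up for illustration):

echo 2-3 > /dev/cpuset/sys/cpuset.cpus           # documented name (works on 15.5)
echo 1 > /dev/cpuset/sys/cpuset.cpu_exclusive    # documented name (works on 15.5)
echo 2-3 > /dev/cpuset/sys/cpus                  # what 15.6 apparently wants
echo 1 > /dev/cpuset/sys/cpu_exclusive           # what 15.6 apparently wants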

More info. First, a listing of the full /dev/cpuset/sys directory (where sys is a cpuset name) on a Leap 15.6 machine:

devuser@product:/dev/cpuset/sys> ls -al
total 0
drwxr-xr-x 2 root root 0 Feb 18 11:25 .
dr-xr-xr-x 5 root root 0 Feb 18 11:25 ..
-rw-r--r-- 1 root root 0 Feb 18 11:29 cgroup.clone_children
-rw-r--r-- 1 root root 0 Feb 18 11:29 cgroup.procs
-rw-r--r-- 1 root root 0 Feb 18 11:29 cpu_exclusive
-rw-r--r-- 1 root root 0 Feb 18 11:29 cpus
-r--r--r-- 1 root root 0 Feb 18 11:29 effective_cpus
-r--r--r-- 1 root root 0 Feb 18 11:29 effective_mems
-rw-r--r-- 1 root root 0 Feb 18 11:29 mem_exclusive
-rw-r--r-- 1 root root 0 Feb 18 11:29 mem_hardwall
-rw-r--r-- 1 root root 0 Feb 18 11:29 memory_migrate
-r--r--r-- 1 root root 0 Feb 18 11:29 memory_pressure
-rw-r--r-- 1 root root 0 Feb 18 11:29 memory_spread_page
-rw-r--r-- 1 root root 0 Feb 18 11:29 memory_spread_slab
-rw-r--r-- 1 root root 0 Feb 18 11:29 mems
-rw-r--r-- 1 root root 0 Feb 18 11:29 notify_on_release
-rw-r--r-- 1 root root 0 Feb 18 11:29 sched_load_balance
-rw-r--r-- 1 root root 0 Feb 18 11:29 sched_relax_domain_level
-rw-r--r-- 1 root root 0 Feb 18 12:41 tasks
devuser@product:/dev/cpuset/sys> 

And here is the same directory on a Leap 15.5 machine:

devuser@product:/dev/cpuset/sys> ls -al
total 0
drwxr-xr-x 2 root root 0 Feb 18 14:07 .
dr-xr-xr-x 3 root root 0 Feb 18 14:03 ..
-rw-r--r-- 1 root root 0 Feb 18 14:07 cgroup.clone_children
-rw-r--r-- 1 root root 0 Feb 18 14:07 cgroup.procs
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.cpu_exclusive
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.cpus
-r--r--r-- 1 root root 0 Feb 18 14:07 cpuset.effective_cpus
-r--r--r-- 1 root root 0 Feb 18 14:07 cpuset.effective_mems
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.mem_exclusive
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.mem_hardwall
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.memory_migrate
-r--r--r-- 1 root root 0 Feb 18 14:07 cpuset.memory_pressure
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.memory_spread_page
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.memory_spread_slab
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.mems
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.sched_load_balance
-rw-r--r-- 1 root root 0 Feb 18 14:07 cpuset.sched_relax_domain_level
-rw-r--r-- 1 root root 0 Feb 18 14:07 notify_on_release
-rw-r--r-- 1 root root 0 Feb 18 14:07 tasks
devuser@product:/dev/cpuset/sys> 

So, it's exactly the same set of files, except that the thirteen files Leap 15.5 prefixes with cpuset. appear in Leap 15.6 with that prefix removed.

Weird…

And, more info. Here's how I created the cpuset directory on the Leap 15.6 system, starting from a freshly-rebooted machine:

devuser@product:~> ls /dev/cpuset
ls: cannot access '/dev/cpuset': No such file or directory
devuser@product:~> sudo mkdir /dev/cpuset
[sudo] password for root: 
devuser@product:~> sudo mount -t cpuset none /dev/cpuset
devuser@product:~> sudo mkdir /dev/cpuset/sys
devuser@product:~> ls /dev/cpuset/sys
cgroup.clone_children  mem_exclusive       mems
cgroup.procs           mem_hardwall        notify_on_release
cpu_exclusive          memory_migrate      sched_load_balance
cpus                   memory_pressure     sched_relax_domain_level
effective_cpus         memory_spread_page  tasks
effective_mems         memory_spread_slab
devuser@product:~> 
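
One way to check which options the kernel actually applied (my assumption, so verify: the cpuset filesystem type is just shorthand for a cgroup mount, and the noprefix option should then show up in the mount table):

grep cpuset /proc/mounts
# I'd expect something along the lines of:
# none /dev/cpuset cgroup rw,relatime,cpuset,noprefix 0 0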

It looks like mounting a cpuset filesystem in 15.6 defaults to the noprefix mount option, whereas 15.5 mounted it without that option. So, I'm going back to 15.5 and will be working there. Once I get it all up and running I'll consider switching back to 15.6.
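
Before giving up on 15.6 entirely, one thing I may still try: mounting the v1 cpuset controller explicitly as a cgroup filesystem, without noprefix, which (if my reading is right) should bring the documented cpuset.* names back. Untested sketch:

sudo umount /dev/cpuset
# mount only the cpuset controller, v1 style; no noprefix option,
# so the files should keep their cpuset. prefix:
sudo mount -t cgroup -o cpuset none /dev/cpuset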

@dtgriscom Leap 15.5 is EoL, no more support…

Like SLES, SLED is based on openSUSE Tumbleweed and shares a common codebase with openSUSE Leap.

Please read the documentation “Shielding Linux Resources” for “SUSE Linux Enterprise Real Time” and migrate to cset (cgroups v1) or systemd (cgroups v2).

The cset command is a Python application that provides a command line front-end for the Linux cpusets functionality. Working with cpusets directly can be confusing and slightly complex. The cset tool hides that complexity behind an easy-to-use command line interface.
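
As a rough sketch, migrating the sys set above to cset could look like this (flags per the cset man page; the PID and program path are placeholders, so verify on your system before relying on it):

# create a cpuset named "sys" on CPUs 2-3, memory node 0:
sudo cset set --cpu=2-3 --mem=0 --set=sys
# move an already-running process into it:
sudo cset proc --move --pid=1234 --toset=sys
# or launch a program directly inside it:
sudo cset proc --set=sys --exec /path/to/product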

The cset utility supports the cpuset controller only on the v1 hierarchy (legacy or hybrid in systemd lingo). On a system with the unified (v2) hierarchy, cset is not supported and the cpuset controller can be used via systemd.
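
Under the unified hierarchy, the same pinning is expressed as systemd unit properties instead; a sketch, assuming a hypothetical product.service (property names per systemd.resource-control(5)):

# pin the service to CPUs 2-3 and memory node 0, for this boot only:
sudo systemctl set-property --runtime product.service AllowedCPUs=2-3 AllowedMemoryNodes=0
# drop --runtime to make the change persistent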

Read:

for information about kernel control groups v1 and v2 (cgroups).

You can find here:

some basic instructions for the daily usage of cgroups v2. Use a German-to-English translator like deepl.com.
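
For orientation, the raw cgroup v2 equivalent of the old /dev/cpuset steps looks roughly like this (a sketch only; on a systemd-managed system you would normally work inside a delegated subtree rather than directly under the root):

# enable the cpuset controller for children of the root cgroup:
echo +cpuset | sudo tee /sys/fs/cgroup/cgroup.subtree_control
# create a group and configure its CPUs and memory nodes:
sudo mkdir /sys/fs/cgroup/sys
echo 2-3 | sudo tee /sys/fs/cgroup/sys/cpuset.cpus
echo 0 | sudo tee /sys/fs/cgroup/sys/cpuset.mems
# move the current shell into the new group:
echo $$ | sudo tee /sys/fs/cgroup/sys/cgroup.procs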

You should read the release notes of:

before you upgrade your (open-)SUSE installation.

You should read, for example, this release notes entry before you upgrade your openSUSE installation:

5.5.8 systemd uses cgroup v2 by default (https://www.suse.com/releasenotes/x86_64/SUSE-SLES/15-SP6/index.html#jsc-PED-1447)

SUSE Linux Enterprise Server 15 SP6 changes default cgroup mode to unified (cgroup v2). Hybrid mode can be enabled using a boot parameter for workloads that depend on cgroup v1, see https://documentation.suse.com/sles/15-SP6/html/SLES-all/cha-tuning-cgroups.html#sec-cgroups-hybrid-hierarchy.

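If the product really does depend on cgroup v1, the tuning guide linked in that entry describes restoring hybrid mode via a boot parameter; roughly (parameter name from memory, so check it against the guide):

# in /etc/default/grub, append to GRUB_CMDLINE_LINUX_DEFAULT:
#   systemd.unified_cgroup_hierarchy=0
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# then reboot into hybrid (cgroup v1) mode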

Be aware of the end-of-life dates:
