Cephadm bootstrap failure on Tumbleweed ARM with Pi 4.

Hello All,

I’m a newbie to openSUSE and Ceph. I’m experimenting with Ceph on a Raspberry Pi 4. It’s easy enough to add the Ceph Tumbleweed-ARM repo and run the commands. However, the cephadm bootstrap command fails right after it pulls down the docker.io image. I’ve searched the internet and the forums, but can’t seem to make any headway. Cephadm is pulling down the docker.io/ceph/daemon-base:latest-master-devel image. I believe it’s failing because the image may be built for x86/amd64 rather than ARM?

Unfortunately, I’m unable to find decent documentation or examples for using Cephadm on the Pi 4. Again, I’m a newb. So I have a few questions:

  1. Is there a way to direct Cephadm to use a specific Ceph-arm branch in the docker.io repo?
  2. Any suggestions on a site about how to use Cephadm with examples? I find the man pages a little lacking for my knowledge base. I’m still learning.
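On question 1, the closest thing I’ve found so far is that cephadm accepts a global --image option before the subcommand, so in principle you can point it at any image/tag. I don’t know of a confirmed arm64 tag for ceph/daemon-base to put there, so this is only a sketch of what I think the invocation would look like (echoed rather than run, with placeholder values):

```shell
# Sketch only: cephadm takes a global --image option before the
# subcommand. MON_IP is a placeholder, and I do not know of a
# published arm64 tag for ceph/daemon-base to substitute here.
MON_IP=192.168.1.10
IMAGE=docker.io/ceph/daemon-base:latest-master-devel
echo "cephadm --image $IMAGE bootstrap --mon-ip $MON_IP"
```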

I know that this is a development version and that bugs and problems will arise. I’m also hoping that it’s something easy that I missed. I’ve posted the output below if that helps.

Many Thanks!


localhost:~ # cephadm bootstrap --mon-ip
This is a development version of cephadm.
For information regarding the latest stable release:
Verifying podman|docker is present…
Verifying lvm2 is present…
Verifying time synchronization is in place…
Unit chronyd.service is enabled and running
Repeating the final host check…
podman|docker (/usr/bin/podman) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 88b723c6-2cdf-11eb-bd67-dca632807381
Verifying IP port 3300 …
Verifying IP port 6789 …
Mon IP is in CIDR network
Pulling container image docker.io/ceph/daemon-base:latest-master-devel
Extracting ceph user uid/gid from container image…
Non-zero exit code 1 from /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=docker.io/ceph/daemon-base:latest-master-devel -e NODE_NAME=localhost --entrypoint stat docker.io/ceph/daemon-base:latest-master-devel -c %u %g /var/lib/ceph
stat:stderr standard_init_linux.go:219: exec user process caused: exec format error
Traceback (most recent call last):
File "/usr/sbin/cephadm", line 5859, in <module>
r = args.func()
File "/usr/sbin/cephadm", line 1248, in _default_image
return func()
File "/usr/sbin/cephadm", line 2717, in command_bootstrap
(uid, gid) = extract_uid_gid()
File "/usr/sbin/cephadm", line 1912, in extract_uid_gid
raise RuntimeError('uid/gid not found')
RuntimeError: uid/gid not found

I’m pretty sure architecture matters here, particularly for a low-level technology like Ceph, which deals in storage and networking. It might matter less for higher-level software that lives at the Application layer. The “exec format error” in your output is the kernel refusing to run a binary built for a different CPU architecture, which fits your x86/amd64 suspicion.
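You can confirm the mismatch directly. Assuming podman is installed (your output shows it is), compare the host architecture against what the pulled image was built for:

```shell
# Host CPU architecture: Tumbleweed-ARM on a Pi 4 normally reports aarch64
uname -m

# Architecture the pulled image was built for (prints e.g. amd64 if it
# is an x86-64-only build); falls back quietly if the image is absent
podman image inspect docker.io/ceph/daemon-base:latest-master-devel \
  --format '{{.Architecture}}' 2>/dev/null || echo "image not available locally"
```

If the two values disagree, every binary in the container will fail with exactly the “exec format error” shown in your traceback.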

For instance, on the following Docker Hub page for Ceph you can filter images by architecture, including ARM and ARM64. Since Tumbleweed-ARM on the Pi 4 is a 64-bit (aarch64) build, ARM64 is most likely the variant you want.
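If the image does publish an ARM64 variant, you can ask podman for that architecture explicitly (note this is an assumption about your podman version: older releases call the flag --override-arch rather than --arch, and the pull fails if no arm64 manifest exists for the tag):

```shell
# Request the arm64 variant of the image explicitly. This only works
# if the tag actually publishes an arm64 manifest; the || branch keeps
# the command from failing hard if it does not (or podman is absent).
podman pull --arch arm64 docker.io/ceph/daemon-base:latest-master-devel \
  || echo "no arm64 manifest for this tag (or podman not installed)"
```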


I wonder if it might be easier or more certain for you to simply create your own container, then you’d know what’s in it.
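Something along these lines, as an untested sketch: building natively on the Pi guarantees the result matches the host architecture. The opensuse/tumbleweed base image does publish aarch64 builds, but the package set below is an assumption — adjust it to whatever cephadm actually expects inside the container:

```shell
# Write a minimal Containerfile on an openSUSE base, then build it
# natively on the Pi so the image matches the host architecture.
# Package names are assumptions; adjust for what cephadm needs.
cat > Containerfile <<'EOF'
FROM opensuse/tumbleweed
RUN zypper --non-interactive install ceph ceph-base
EOF
podman build -t localhost/ceph-arm64:dev . \
  || echo "podman not available; Containerfile written for inspection"
```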

There isn’t too much useful info about Ceph on openSUSE but there is the following Wiki page someone created…


You’ll find more information about deploying Ceph on SLES. Often, if you set aside the installation, deployment and default-configuration steps, SLES articles describe well how to use the technology.


It’s been a long time since I’ve looked at Ceph and I’m sure there have been significant changes, but I think you’ll probably need to figure out how to deploy more than just what you asked…
And, I also suspect that learning from a container-based deployment isn’t ideal. I find that containers have their own setup issues that can get in the way of learning, and what’s in the container will of course affect everything. I instead recommend installing from scratch (maybe following the openSUSE Wiki), and if you’re learning on a budget (as most of us are), that you use a full virtualization solution at first, like VirtualBox or VMware, instead of Docker containers. You can still set up your RPi as a Ceph node. Then, once you’re somewhat familiar with setting things up under virtualization, you can attempt the same using containers.



Just plowing through it. Lots to learn. It’s kinda complicated. At least more complicated than mdadm.