I’m not sure how easily systemd-nspawn can do what you are trying to do; are you following any kind of guide?
My personal use of systemd-nspawn has been purely as a chroot replacement, and it works really well.
If you are trying to launch an entire container, I’d instead recommend Docker, which is the newest and most widely recommended way to run Linux containers.
Here are links to what I wrote about Docker for 13.1.
All should apply to 13.2 as well.
The only modification I haven’t yet made to my wiki pages: I now recommend installing from the Main Update repository instead of the Virtualization repo… after all, you would probably want to run “zypper up” to get the latest packages anyway… http://en.opensuse.org/User:Tsu2#Docker
If you have any other questions about running systemd-nspawn or Docker, I’d recommend you post in the Virtualization or Applications forums instead of the Networking forum.
OK, this is a systemd-nspawn feature I haven’t used before, but after reading the MAN pages it looks like this new(?) networking implementation is very similar to what exists in Docker, which differs slightly from traditional virtual networking.
First, regarding macvlan…
It has been part of the standard Linux kernel for many years now. If you <really> want to implement it explicitly, it should still be possible to configure manually… a search for “/etc/network/interfaces macvlan” returns many results that can be tried. IIRC macvlan’s special feature is support for wireless networking, but even so, nowadays it might be possible to use <any> kind of networking without touching macvlan directly (other methods may use it for you without special configuration).
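For example, a manual macvlan setup with iproute2 looks something like this (a minimal sketch; “eth0” and the address are assumptions, substitute your own NIC and subnet):

    # create a macvlan interface on top of eth0 in bridge mode
    ip link add link eth0 name macvlan0 type macvlan mode bridge
    # assign an address and bring it up (example values only)
    ip addr add 192.168.100.50/24 dev macvlan0
    ip link set macvlan0 up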
But, IMO based on the nspawn MAN pages, it looks like networking can be configured without major issues.
I would recommend what is possibly the easiest approach…
Configure a Linux Bridge device. There are many ways to do this; it doesn’t matter which one you use. Once created, the bridge is available to <any> virtualization technology, including Linux containers and now, it seems, systemd-nspawn. Various ways to create one include:
YaST - Very easy: from “Network Devices” add a “New” “Bridge” and follow the instructions. Creates bridges with default “br0”-style names.
Command line - Using brctl or the ip command; see the sketch after this list.
Libvirt - Using vm manager, you create bridges with “virbr0”-style names. The advantage of libvirt is the ability to set options easily: DHCP, various network configurations including NAT/Bridge/Private, and more.
VBox, VMware Workstation, etc. - All come with management utilities that easily create and manage virtual networks. So, for instance, if you <already> have one of these installed, just use the bridge devices already created.
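As promised above, here is a command-line sketch (assuming the physical NIC is eth0; adjust to your hardware):

    # create a bridge and attach the physical NIC to it
    brctl addbr br0
    brctl addif br0 eth0
    # bring the bridge up; note that the host's IP configuration
    # should then move from eth0 to br0
    ip link set br0 up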
Whatever method you use to create a bridge device, you will likely want to use the brctl command to display and manage devices from the command line. It is completely cross-virtualization: it displays and manages all bridge devices no matter how or where they were created.
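For example (interface names are just examples):

    brctl show              # list all bridges and their attached interfaces
    brctl addif br0 vnet0   # attach an interface to a bridge
    brctl delif br0 vnet0   # detach it again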
Once the bridge device is created, you only need to configure your nspawn container to use it. Note that any number of containers and virtual machines can share the same bridge device; you don’t necessarily need a separate bridge for each machine. I’d have to look more closely at the nspawn implementation to verify it’s consistent with general practice, where the bridge device exists only on the HostOS side, describes only the network, and carries no address configuration itself. If that is the case, then the network interface inside the container holds the actual network configuration (address, routes, default gateway, and more).
More than likely, one of the following options should then work on the nspawn command line:
--network-interface=
--network-bridge=
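A minimal sketch of an invocation, assuming a bridge named br0 already exists on the host and the container tree lives under /var/lib/machines/mycontainer (both names are assumptions):

    # boot the container with a veth pair whose host side is added to br0
    systemd-nspawn -D /var/lib/machines/mycontainer --network-bridge=br0 -b

If the MAN pages are right, the container-side end of the veth pair shows up inside the container as “host0”, and that is the interface you give an address (statically or via DHCP).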
This is all not new to me; I know how to create a macvlan interface and the other things. I also know Docker and its networking downsides (yes, pipework exists, but it needs something inside the container, so the container is no longer really universal), as well as its problems with integration into system managers like systemd. That (and other such things) is why I’m trying systemd-nspawn, which looks like a better fit for my needs than Docker.
In theory --network-veth and --network-bridge should work, but they don’t; I can reproduce this on SLES 12 as well (though I only have an evaluation license).
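For reference, the kind of invocation I’m testing looks something like this (the container path is just an example):

    # br0 exists and is up on the host
    systemd-nspawn -D /srv/containers/test --network-bridge=br0 -b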