From cb05e9a9ae6e9774d546a121c70bb8deac27bf29 Mon Sep 17 00:00:00 2001 From: Peter Boy Date: Feb 09 2022 14:37:06 +0000 Subject: Updated various files to version 35, ready to get published --- diff --git a/docs/modules/ROOT/assets/attachments/nspawn-autostart-libvirt-hook.tgz b/docs/modules/ROOT/assets/attachments/nspawn-autostart-libvirt-hook.tgz new file mode 100644 index 0000000..5676003 Binary files /dev/null and b/docs/modules/ROOT/assets/attachments/nspawn-autostart-libvirt-hook.tgz differ diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc index 5a59ebb..ea7b6f9 100644 --- a/docs/modules/ROOT/nav.adoc +++ b/docs/modules/ROOT/nav.adoc @@ -6,5 +6,7 @@ * xref:server-virtualization.adoc[Virtualization] ** xref:virtualization-install.adoc[Add Virtualization Support] ** xref:virtualization-vmcloud.adoc[VM Installation Using Cloud Image] +* xref:server-containerization.adoc[Containerization] +** xref:container-nspawn-install.adoc[Container systemd-nspawn installation] * xref:server-faq.adoc[FAQ] * xref:server-communicating.adoc[Getting in Touch] diff --git a/docs/modules/ROOT/pages/container-nspawn-install.adoc b/docs/modules/ROOT/pages/container-nspawn-install.adoc new file mode 100644 index 0000000..2eb84b8 --- /dev/null +++ b/docs/modules/ROOT/pages/container-nspawn-install.adoc @@ -0,0 +1,529 @@ += Container systemd-nspawn – Installation
Peter Boy; Jan Kuparinen
:page-authors: {author}, {author_2}

[sidebar]
****
Author: Peter Boy (pboy) | Creation Date: 2022-02-09 | Last update: 2022-02-09 | Related Fedora Release(s): 34,35
****

The systemd-nspawn container runtime is part of the systemd system software. It was split off into its own package, systemd-container, some time ago.

The prerequisite is a fully installed basic system. A standard interface of the host to the public network is assumed, via which the container receives independent access (its own IP). In addition, an interface for an internal, protected network between containers and host is assumed, usually a bridge. It may be a virtual network within the host, e.g. libvirt's virbr0, or a physical interface connecting multiple hosts.

But of course a container can also be operated with other variations of a network connection, or even without a network connection at all.

== 1. Setting up the nspawn container infrastructure

1. *Create a container storage area*
+
The systemd-nspawn tools like machinectl look for containers in `/var/lib/machines` first. This directory is also created during the installation of the programs if it does not exist.
+
Following the Fedora Server storage scheme, create a logical volume, create a file system, and mount it to `/var/lib/machines`. The tools can take advantage of Btrfs capabilities, so Btrfs is a suitable filesystem in this case.
If you don't want to follow the Fedora Server rationale, skip this step.
+
[source,]
----
 […]# dnf install btrfs-progs
 […]# lvcreate -L 20G -n machines {VGNAME}
 […]# mkfs.btrfs -L machines /dev/mapper/{VGNAME}-machines
 […]# mkdir /var/lib/machines
 […]# vim /etc/fstab
 (insert)
 /dev/mapper/{VGNAME}-machines  /var/lib/machines  auto  defaults  0 0

 […]# mount -a
----
2. *Check and, if necessary, correct the SELinux labels*
+
Ensure that the directory belongs to root and can only be accessed by root (this should already have been done by the installer).
+
[source,]
----
[…]# restorecon -vFr /var/lib/machines
[…]# chown root:root /var/lib/machines
[…]# chmod 700 /var/lib/machines
----
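+
Assuming the systemd-container package is already installed at this point, a quick sanity check of the new storage area can't hurt (not part of the original steps):
+
[source,]
----
 […]# findmnt /var/lib/machines
 […]# machinectl list-images
----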
3. *Add configuration for nspawn to the `/etc/systemd` directory*
+
[source,]
----
 […]# mkdir /etc/systemd/nspawn
----

== 2. Creating an nspawn container

=== 2.1 Creating a container directory tree

The creation of a container filesystem or the provision of a corresponding image is treated as "out of scope" by systemd-nspawn. There are a number of alternative options. By far the easiest and most efficient way is simply to use the distribution-specific bootstrap tool, DNF in the case of Fedora, in the container's directory. This is the recommended procedure.

1. Creating a Btrfs subvolume with the name of the container
+
[source,]
----
 […]# cd /var/lib/machines
 […]# btrfs subvolume create {ctname}
----
2. Creating a minimal container directory tree
+
**__Fedora 34 / 35__**
+
[source,]
----
 […]# dnf --releasever=35 --best --setopt=install_weak_deps=False --installroot=/var/lib/machines/{ctname}/ \
install dhcp-client dnf fedora-release glibc glibc-langpack-en glibc-langpack-de iputils less ncurses passwd systemd systemd-networkd systemd-resolved vim-default-editor
----
+
F34 installs 165 packages (247M) and allocates 557M in the file system.
F35 installs 174 packages (270M) and allocates 527M in the file system.
+
**__CentOS 8-stream__**
+
First create a separate CentOS repository file (e.g. /root/centos8.repo) and import the CentOS keys. On this basis, perform a standard installation using DNF.
+
[source,]
----
 […]# vim /root/centos8.repo

 [centos8-chroot-base]
 name=CentOS-8-Base
 baseurl=http://mirror.centos.org/centos/8/BaseOS/x86_64/os/
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
 #
 [centos8-chroot-appstream]
 name=CentOS-8-stream-AppStream
 #baseurl=http://mirror.centos.org/$contentdir/$stream/AppStream/$basearch/os/
 baseurl=http://mirror.centos.org/centos/8-stream/AppStream/x86_64/os/
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
 #
 [epel8-chroot]
 name=Epel-8
 baseurl=https://ftp.halifax.rwth-aachen.de/fedora-epel/8/Everything/x86_64/
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8

 […]# dnf install http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os/Packages/centos-gpg-keys-8-2.el8.noarch.rpm

 […]# rpm -Uvh --nodeps https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

 […]# dnf -c /root/centos8.repo --releasever=8-stream --best --disablerepo=* --setopt=install_weak_deps=False --enablerepo=centos8-chroot-base --enablerepo=centos8-chroot-appstream --enablerepo=epel8-chroot --installroot=/var/lib/machines/{ctname} install centos-release dhcp-client dnf glibc-langpack-en glibc-langpack-de iproute iputils less passwd systemd systemd-networkd vim-enhanced

----
+
This installs 165 packages that occupy 435M in the file system.
The message `install-info: File or directory not found for /dev/null` appears several times. The cause is that the `/dev` file system is not yet initialized at this point. You may safely ignore the message.

=== 2.2 Configuration and commissioning of the container

1. Setting the password for root
+
This requires temporarily setting SELinux to permissive, otherwise passwd will not make any changes. Don't forget to switch back to enforcing mode afterwards.
+
[source,]
----
 […]# setenforce 0
 […]# systemd-nspawn -D /var/lib/machines/{ctname} passwd
 […]# setenforce 1
----
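+
To confirm that SELinux is back in enforcing mode (a quick check, not part of the original steps):
+
[source,]
----
 […]# getenforce
 Enforcing
----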
2. Provision of network interfaces for the container within the host
+
If only a connection to an internal, protected network is needed (replace the host bridge interface name accordingly):
+
[source,]
----
 […]# vim /etc/systemd/nspawn/{ctname}.nspawn
 (insert)
 [Network]
 Bridge=vbr6s0
----
+
If a connection to the external, public network is also required, two corresponding interfaces must be provided; a mac-vlan on the host's public interface is used for the external connection (again, replace the host interface names accordingly).
+
[source,]
----
 […]# vim /etc/systemd/nspawn/{ctname}.nspawn
 (insert)
 [Network]
 MACVLAN=enp4s0
 Bridge=vbr6s0
----

3. Configuration of the connection to the internal network within the container
+
[source,]
----
[…]# vim /var/lib/machines/{ctname}/etc/systemd/network/20-host0.network
 (insert)
 # {ctname}.localnet
 # internal network interface via bridge
 # static configuration, no dhcp defined
 [Match]
 Name=host0*

 [Network]
 DHCP=no
 Address=10.10.10.yy/24
 #Gateway=10.10.10.10

 LinkLocalAddressing=no
 IPv6AcceptRA=no
----
+
If the internal network is also to be used for external access via NAT, the gateway entry must be uncommented. Otherwise, leave it commented out!

4. Optionally, configure an additional connection to the public network via mac-vlan
+
In this case, the gateway entry _must_ remain commented _out_ in the configuration of the internal network, as mentioned in item 3.
+
[source,]
----
 […]# vim /var/lib/machines/{ctname}/etc/systemd/network/10-mv.network
 (insert)
 # {ctname}.sowi.uni-bremen.de
 # public interface via mac-vlan
 # static configuration, no dhcp available
 [Match]
 Name=mv-enp*

 [Link]
 ARP=True

 [Network]
 DHCP=no

 # IPv4 static configuration, no DHCP configured!
 Address=134.102.3.zz/27
 Gateway=134.102.3.30
 # without Destination specification
 # treated as default!
 #Destination=

 # IPv6 static configuration
 Address=2001:638:708:f010::zzz/64
 IPv6AcceptRA=True
 Gateway=2001:638:708:f010::1
 # in case of issues possible workaround
 # cf https://github.com/systemd/systemd/issues/1850
 #GatewayOnlink=yes

 [IPv6AcceptRA]
 UseOnLinkPrefix=False
 UseAutonomousPrefix=False
----
+
Don't forget to adjust interface names and IP addresses accordingly!

5. Boot the container and log in
+
Check whether the container boots without error messages.
+
[source,]
----
 […]# systemd-nspawn -D /var/lib/machines/{ctname} -b
 OK Spawning container {ctname} on /var/l…01.
 OK …
 {ctname} login:
----
6. Checking the status of systemd-networkd
+
If inactive, enable and start the service.
+
[source,]
----
 […]# systemctl status systemd-networkd
 …
 […]# systemctl enable systemd-networkd
 […]# systemctl start systemd-networkd
 […]# systemctl status systemd-networkd
----
7. Check whether all network interfaces are available
+
[source,]
----
 […]# ip a
----
8. Check for correct routing
+
[source,]
----
 […]# ip route show
----
9. Configure the default DNS search path
+
Specify a search domain to be appended to a plain hostname without a domain part, usually the internal network domain name, e.g. example.lan. Adjust the config file according to the pattern below:
+
[source,]
----
 […]# vim /etc/systemd/resolved.conf

 [Resolve]
 ...
 #dns.quad9.net
 #DNS=
 #FallbackDNS=
 #Domains=
 Domains=example.lan
 #DNSSEC=no
 ...
----
10. Check whether name resolution is configured correctly
+
[source,]
----
 […]# ls -al /etc/resolv.conf
 lrwxrwxrwx. 1 root root 39 Dec 29 12:15 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
----
+
If the file is missing or is a plain text file, correct it.
+
[source,]
----
 […]# cd /etc
 […]# rm -f resolv.conf
 […]# ln -s ../run/systemd/resolve/stub-resolv.conf resolv.conf
 […]# ls -al /etc/resolv.conf
 […]# cd
----
+
Ensure that the systemd-resolved service is enabled.
+
[source,]
----
 […]# systemctl status systemd-resolved
----
+
Enable the service if necessary.
+
[source,]
----
 […]# systemctl enable systemd-resolved
----

11. Set the intended hostname
+
[source,]
----
 […]# hostnamectl
 […]# hostnamectl set-hostname {hostname}
----
12. Terminate the container
+
Press ^] three times in quick succession.
+
[source,]
----
 […]# ^]^]^]
 Container terminated by signal KILL.
----

== 3. Starting the container as a system service for productive operation

1. Booting the container using systemctl
+
In this step, a separate UID/GID range is automatically created for the container.
+
[source,]
----
 […]# systemctl enable systemd-nspawn@{ctname}
 […]# systemctl start systemd-nspawn@{ctname}
 […]# systemctl status systemd-nspawn@{ctname}
----
+
On the first boot after installing systemd-container, a SELinux bug currently (Fedora 34/35) blocks execution. The solution is to fix the SELinux label(s).
+
* Select the SELinux tab in Cockpit, preferably before booting the container for the first time.
* There, the AVCs are listed and solutions are offered, such as:
+
`type=AVC msg=audit(1602592088.91:50075): avc: denied { search } for pid=35673 comm="systemd-machine" name="48865" dev="proc" ino=1070782 scontext=system_u:system_r:systemd_machined_t:s0 tcontext=system_u:system_r:unconfined_service_t:s0 tclass=dir permissive=0`
+
The proposed solution is roughly as follows:
+
[source,]
----
[…]# ausearch -c 'systemd-machine' --raw | audit2allow -M my-systemdmachine
[…]# semodule -i my-systemdmachine.pp
----
* The operation must be repeated until no SELinux error is reported and the container starts as a service.
+
Alternatively, the SELinux CLI tools can be used, which suggest the same solutions.

2. Enable automatic start of the container at system startup
+
[source,]
----
 […]# systemctl enable systemd-nspawn@{ctname}
 […]# systemctl status systemd-nspawn@{ctname}
----

3. Log in to the container
+
[source,]
----
 […]# setenforce 0
 […]# machinectl login {ctname}
----
+
When machinectl is called with parameters for the first time, an SELinux bug (Fedora 34/35) also blocks execution. The correction is done in the same way as for the container start.

4. Completing and finalizing the container configuration
+
Within the container, perform the other designated software installations and customizations.
+
In the case of a CentOS 8-stream container, the EPEL repository should be installed (`dnf install epel-release`) so that systemd-networkd is provided with updates.

5. Logging off from the container
+
After finishing all further work inside the container, press ^] three times in quick succession (Mac: ^6 three times) to exit the container, and reactivate SELinux.
+
[source,]
----
 […]# setenforce 1
----
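The container should now also appear in the machine registry; a quick cross-check (not part of the original procedure):

[source,]
----
 […]# machinectl list
 […]# machinectl status {ctname}
----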
=== 3.1 Autostart of the container on reboot of the host

An autostart of the container in the "enabled" state fails on Fedora 35 and older. The cause can be seen in a status query after rebooting the host, which shows an error message like the following example:

[source,]
----
 […]# systemctl status systemd-nspawn@{ctname}
 systemd-nspawn[802]: Failed to add interface vb-{ctname} to bridge vbr6s0: No such device
----

This means that systemd starts the container before all required network interfaces are available.

==== Resolution for (physical) interfaces managed by NetworkManager

1. The service file requires an amendment (Bug #2001631). In the [Unit] section, add the target `network-online.target` at the end of both the `Wants=` and the `After=` line. The file must then look like this (the commented-out `###` marker lines only highlight the insertions and are not part of the file):
+
[source,]
----
 […]# systemctl edit systemd-nspawn@ --full
 ...
 [Unit]
 Description=Container %i
 Documentation=man:systemd-nspawn(1)
 Wants=modprobe@tun.service modprobe@loop.service modprobe@dm-mod.service network-online.target
 ### ^^^^^^^^^^^^^^^^^^^^^
 PartOf=machines.target
 Before=machines.target
 After=network.target systemd-resolved.service modprobe@tun.service modprobe@loop.service modprobe@dm-mod.service network-online.target
 ### ^^^^^^^^^^^^^^^^^^^^^
 RequiresMountsFor=/var/lib/machines/%i
 ...
----
+
The "@" character after `nspawn` is important! Make the insertions in the editor that opens and save the file.

2. Then execute
+
[source,]
----
 […]# systemctl daemon-reload
----

At the next reboot, the containers will be started automatically.

==== Resolution for virtual interfaces managed by libvirt

For such interfaces (usually the bridge virbr0) the addition mentioned above does not help. The container must be started by a script in an extra step after the libvirt initialization is complete. For this you can use a hook that libvirt provides.

[source,]
----
[…]# mkdir -p /etc/libvirt/hooks/network.d/
[…]# vim /etc/libvirt/hooks/network.d/50-start-nspawn-container.sh
(INSERT)
#!/bin/bash
# Check the nspawn containers defined in /var/lib/machines and
# start every container that is enabled.
# The network-online.target in the systemd-nspawn@ service file
# does not (yet) wait for libvirt managed interfaces.
# We need to start them explicitly when the libvirt bridge is ready.

# $1 : network name, e.g. default
# $2 : one of "start" | "started" | "port-created"
# $3 : always "begin"
# see https://libvirt.org/hooks.html

set -o nounset

network="$1"
operation="$2"
suboperation="$3"

ctdir="/var/lib/machines"
ctstartlog="/var/log/nspawn-ct-startup.log"

echo " P1: $1 - P2: $2 - P3: $3 @ $(date) "
echo " " > $ctstartlog
echo "=======================================" >> $ctstartlog
echo " Begin $(date) " >> $ctstartlog
echo " P1: $1 - P2: $2 - P3: $3 " >> $ctstartlog

if [ "$network" == "default" ]; then
  if [ "$operation" == "started" ] && [ "$suboperation" == "begin" ]; then
    for file in "$ctdir"/* ; do
      echo "Checking: $file " >> $ctstartlog
      echo " Filename: $(basename "$file") " >> $ctstartlog
      echo " Status: $(systemctl is-enabled systemd-nspawn@$(basename "$file") ) " >> $ctstartlog

      if [ "$(systemctl is-enabled systemd-nspawn@$(basename "$file") )" == "enabled" ]; then
        echo " Starting Container $(basename "$file") ... " >> $ctstartlog
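        # Start the enabled container now that the libvirt bridge exists;
        # startup errors can be inspected on the host with
        # 'journalctl -u systemd-nspawn@<container name>'.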
" >> $ctstartlog + systemctl start systemd-nspawn@$(basename $file) + echo "Container $(basename $file) started" >> $ctstartlog + fi + done + fi +fi + +[…]# chmod +x /etc/libvirt/hooks/network.d/50-start-nspawn-container.sh +---- +You may also use the link:{attachmentsdir}/nspawn-autostart-libvirt-hook.tgz[attached script] instead of typing. + +== 4. Troubleshooting + +=== 4.1 RPM DB problem in a CentOS 8-stream container on Fedora host + +For dnf / rpm queries the error message is displayed: +`warning: Found SQLITE rpmdb.sqlite database while attempting bdb backend: using sqlite backend` + +The cause is that Fedora's dfn, which is used for the installation, uses sqlite while CentOS/RHEL use the Berkeley (bdb) format. + +Check configuration within the running container: +[source,] +---- + […]# rpm -E "%{_db_backend}" +---- +The output must be `bdb`. Then fix it executing +[source,] +---- + […]# rpmdb --rebuilddb +---- + +=== 4.2 Error message dev-hugepages + +You will find message such as +[source,] +---- +dev-hugepages.mount: Mount process exited, code=exited, status=32/n/a +dev-hugepages.mount: Failed with result 'exit-code'. +[FAILED] Failed to mount Huge Pages File System. +See 'systemctl status dev-hugepages.mount' for details. +---- +DFN installs this by default, but it is not applicable inside a container. It is a general kernel configuration that cannot be changed by a container (at least as long as it is not configurable within namespaces). + +The messages can be safely ignored. + +=== 4.3 Package update may fail + +Some packages, e.g. the `filesystem` package, may not get updated in a container (error message "Error: Transaction failed"), see also https://bugzilla.redhat.com/show_bug.cgi?id=1548403 and https://bugzilla.redhat.com/show_bug.cgi?id=1912155. + +Workaround: Run before update: +[source,] +---- + […]# echo '%_netsharedpath /sys:/proc' > /etc/rpm/macros.netshared +---- +When an update has already been performed, execute this command and update the package again. + +As of Fedora 35, the bug should be fixed. + diff --git a/docs/modules/ROOT/pages/server-containerization.adoc b/docs/modules/ROOT/pages/server-containerization.adoc new file mode 100644 index 0000000..a16be4c --- /dev/null +++ b/docs/modules/ROOT/pages/server-containerization.adoc @@ -0,0 +1,98 @@ += Containerization +Peter Boy; Jan Kuparinen +:page-authors: {author}, {author_2} + +[sidebar] +**** +Author: Peter Boy (pboy) | Creation Date: 2022-02-09 | Last update: 2022-02-09 | Related Fedora Version(s): 34,35 +**** + +Since some years "Container" are on everyone's lips. It's a prominent subject of public dicsussion. Complete operating systems are rebuilt to serve primarily as runtime environments for containers. And in public discussion "container" are mostly equated with "Docker". It is hard to find software that is not at least also offered as a Docker image. And it didn't take long for the disadvantages of such a monopolization to become apparent, e.g. in the form of serious security risks. + +As we learn time and time again, one size does not fit all. A number of the advantages of containerization are widely agreed upon. But the needs and requirements in IT are so diverse that not all of them can be optimally realized by one implementation. Therefore, there are alternative container implementations with different application profiles. And containerization is not always helpful either. 
diff --git a/docs/modules/ROOT/pages/server-containerization.adoc b/docs/modules/ROOT/pages/server-containerization.adoc new file mode 100644 index 0000000..a16be4c --- /dev/null +++ b/docs/modules/ROOT/pages/server-containerization.adoc @@ -0,0 +1,98 @@ += Containerization
Peter Boy; Jan Kuparinen
:page-authors: {author}, {author_2}

[sidebar]
****
Author: Peter Boy (pboy) | Creation Date: 2022-02-09 | Last update: 2022-02-09 | Related Fedora Version(s): 34,35
****

For some years now, "containers" have been on everyone's lips, a prominent subject of public discussion. Complete operating systems are rebuilt to serve primarily as runtime environments for containers. In public discussion, "container" is mostly equated with "Docker". It is hard to find software that is not at least also offered as a Docker image. And it didn't take long for the disadvantages of such a monopolization to become apparent, e.g. in the form of serious security risks.

As we learn time and time again, one size does not fit all. A number of the advantages of containerization are widely agreed upon. But the needs and requirements in IT are so diverse that not all of them can be optimally realized by one implementation. Therefore, there are alternative container implementations with different application profiles. And containerization is not always helpful either.

*Fedora Server supports and allows several alternatives that can be used depending on the local context and/or the user's requirement profile.*

== Containerization options in Fedora Server

A common feature of all container systems is the sharing of the host kernel and the use of kernel facilities (e.g. cgroups and namespaces) to achieve a certain mutual isolation and autonomy.

They differ in implementation, architecture principles, toolset, runtime environment and community. A rough classification is the distinction between "system containers" and "application containers", roughly determined by the existence and scope of an init system.

=== Podman

Its characteristics are

* Application container
* Security enhancement: no root privileges and no central controlling daemon required
* Optimized for the interaction of multiple, coordinated containers (a "pod"), each dedicated to a specific task and cooperating with others to accomplish a complex overall task (e.g. customer management with a connection to a specific database system). This reinforces the architecture principle: one and only one application per container.
* Container images binary-compatible with Docker, mutually usable
* Free open source software

Podman is *natively supported by Fedora Server* and the recommended solution for application containers.

=== Docker

Its characteristics are

* Application container
* Dependent on a daemon with root privileges
* Huge trove of pre-built containers for all sorts of software
* Mixture of a free community edition and a commercial product

Docker releases its own Community Edition for various distributions. Therefore there is *no native support* for Fedora Server, but a *vendor repository* is maintained for Fedora.

=== LXC (libvirt)

Its characteristics are

* System container, a kind of "lightweight virtual machine"
* Administration of container runtimes supported by libvirt's management routines for virtual machines as well as by the Cockpit libvirt module
* Container runtime solely dependent on kernel facilities, no toolset of its own
* Creation of a container disk image is not considered a task of libvirt, but a matter for the administrator. It includes composing an XML-based descriptor file. Therefore, the toolset is rated as somewhat "rough".
* Free open source software

Libvirt LXC is *natively supported by Fedora Server* (via libvirt, the default virtualization tool).

=== LXC (Linux Containers)

Its characteristics are

* System container
* One of the first implementations of containers and the "progenitor" of the (meanwhile emancipated) Docker and Podman
* Complete toolset, container images, community. Its designated successor is LXD (see next).
* Free open source software

Linux Containers' LXC is *natively supported by Fedora Server* (in its LTS versions).

=== LXD (Linux Containers)

Its characteristics are

* System container, a kind of "lightweight virtual machine"
* LXC with an advanced toolset
* Complete, versatile toolset, including container images, and an active, supportive community
* Free open source software

LXD is *not natively supported* by Fedora Server, but a *COPR project* is available. Additionally, there is *vendor support* for Fedora via a third-party package manager.
=== systemd-nspawn container

Its characteristics are

* System container as a "lightweight virtual machine", also configurable as a kind of application container (with a stub init system)
* Toolset highly integrated into systemd system management, and thus a strong simplification of administration and maintenance
* Both technically stringent and systematic documentation as well as stringent naming and structuring of the toolset, which facilitates administration
* Rather new development and currently still somewhat rough SELinux support (so far its weakest point)
* Free open source software

The systemd-nspawn container is *natively supported by Fedora Server*.

=== Linux-VServer

It requires a modified kernel and is *not supported by Fedora Server*.

=== OpenVZ

It uses its own customized version of RHEL/CentOS and is *not supported by Fedora Server*.

diff --git a/docs/modules/ROOT/pages/server-installation.adoc b/docs/modules/ROOT/pages/server-installation.adoc index 01b2e55..90ab090 100644 --- a/docs/modules/ROOT/pages/server-installation.adoc +++ b/docs/modules/ROOT/pages/server-installation.adoc @@ -2,11 +2,6 @@ Peter Boy; Kevin Fenzi; Jan Kuparinen :page-authors: {author}, {author_2}, {author_3} -[NOTE] -==== -**Status:** published -==== - [sidebar] **** Author: Peter Boy (pboy) | Creation Date: 2021-04-05 | Last update: 2022-02-09 | Related Fedora Version(s): 34,35 diff --git a/docs/modules/ROOT/pages/server-virtualization.adoc b/docs/modules/ROOT/pages/server-virtualization.adoc index 142d98f..fee3105 100644 --- a/docs/modules/ROOT/pages/server-virtualization.adoc +++ b/docs/modules/ROOT/pages/server-virtualization.adoc @@ -4,16 +4,18 @@ Fredrik Arneving; Peter Boy; Jan Kuparinen [sidebar] **** -Author: Peter Boy (pboy) | Creation Date: 2021-03-31 | Last update: N/A | Related Fedora Version(s): 33,34 +Author: Peter Boy (pboy) | Creation Date: 2021-03-31 | Last update: 2022-02-09 | Related Fedora Version(s): 34,35 ****

-For more than a decade now, PC hardware has been powerful enough to run not only one operating system, but many of them simultaneously. This inspired software developers to create virtualisation solutions for the (Intel) PC architecture, as was already common on mainframes at the time.

+For more than a decade now, server hardware has been powerful enough to run not only one operating system, but many of them simultaneously. This inspired software developers to create virtualisation solutions for affordable (Intel) server architecture, as was already common on mainframes at the time.

-Virtualization means that several complete and probably different operating systems – the virtual guest machines (guest VM) – run on one and the same hardware, even if intended for a different hardware architecture (full virtualization). In any case, each guest VM uses its own kernel und operates as far as possible independently and isolated from each other and the host system. The basis is a 'hypervisor' that manages hardware resources and assigns them to the individual operating systems rsp. virtuel guest machines.

+Virtualization means that several complete and probably different operating systems – the virtual guest machines (guest VMs) – run on one and the same hardware, even if intended for a different hardware architecture (full virtualization). In any case, each guest VM uses its own kernel and operates as far as possible independently of and isolated from the other guests and the host system. The basis is a 'hypervisor' that manages hardware resources and assigns them to the individual operating systems, i.e. the virtual guest machines.
 Containers are an alternative often discussed. Here, several containers use the kernel of the host system simultaneously. The mutual isolation is lower, but the performance overhead is also lower.

-== XEN
+== Virtualization options in Fedora
+
+=== XEN

 XEN was the first virtualisation technique suitable for inclusion in an open source distribution such as Fedora. This solution first boots a hypervisor that manages resource access and then subsequently loads the distribution kernel, which in the case of XEN takes on a special management function (Dom0). This technology is called a 'Type 1' hypervisor and is the basis of many commercial virtualisation solutions, especially in the mainframe world.

@@ -23,7 +25,7 @@ Fedora was an early adoptor of XEN but preferred later KVM as virtualization sol

 *XEN* is still *natively supported by Fedora Server*

-== KVM
+=== KVM

 The development of KVM began later, when the issues with kernel additions were largely settled. It also benefited from being a true open source project, which made it easier to integrate modules into the kernel.

@@ -31,34 +33,82 @@ Technically, KVM follows a different approach as XEN. It requires an already ins

 Fedora was an early adopter of **KVM/Qemu** and it is *natively supported by Fedora Server*.

-== Fedora recommended Virtualization
+=== Fedora recommended Virtualization

 Fedora Project decided some time ago to replace XEN as the preferred virtualization and use KVM as the default instead. Due to the 'Type 2'-like concept of KVM, Fedora Server Edition is first installed and configured as usual. System software required for virtualization is not automatically installed. In a second step, virtualization capabilities are added (this can be chosen as an option during installation to combine both into one step on the surface. However, a subsequent, precisely fitting installation is very much preferable).

+== Adding Virtualization Support
+This step includes the xref:virtualization-install.adoc[installation of the libvirt software and further configuration] steps. For example, external connectivity must be set up for virtual machines, e.g. through a virtual bridge or mac-vlan. Often, an internal network is also required for protected communication between the virtual machines or with the host system.
+After Fedora Server is enabled for virtualization, one or more virtual machines can be installed. This might be Fedora Server, or any other Linux distribution.

-=== Adding Virtualization
+== Adding Virtual Machines

-This step includes an installation of the libvirt software and further configuration steps. For example, external connectivity must be set up for virtual machines, e.g. through a virtual bridge. Often, an internal network is also required for protected communication between the virtual machines or with the host system.
-After Fedora Server is enabled for virtualization, one or more virtual machines can be installed. This might be Fedora Server, or any other distribution.

-=== Adding Virtual Machines
+There are two common ways to install virtual machines:

-There are two common ways to install virtual machines:
-- Use of the distribution-specific installation program with the help of specialized utilities
-- Installation of cloud (base) images, a variant of the operating system optimized for virtualization.
+- Use of the __distribution's native installer__, often a bootable ISO file originally intended for burning to DVD or transferring to a USB stick. A utility boots this ISO file into an initially temporary virtual machine that runs the native installer, permanently setting up the virtual machine with the desired system.
- Use of a _virtual disk image_ containing a preconfigured, ready-to-boot operating system, often optimized for virtualization. A special case is cloud (base) images.

=== Distribution's native installer

With this method, the targeted distribution's own installation procedure is executed in a virtual machine. In the case of Fedora, this is Anaconda, which most administrators are already familiar with from the host server installation. The learning curve is therefore very shallow.

In most cases, the installation procedure is __interactive__, as in the case of Anaconda on Fedora Server installation media. Advantages are the familiarity of administrators with the tool, the customizability in all the details that the installation of each distribution offers, and the perfect replication of all the features and capabilities that are offered.

The big disadvantage is that this process is very time-consuming. If several VMs have to be installed or you only need a test instance "quickly", this is not an efficient solution.

As the tool to run the distribution's native installer, Fedora recommends __Cockpit__, a comfortable web-based graphical administration tool. An alternative is Virt-Manager, also a graphical utility. However, it must be installed on the local workstation (Linux only) and then works via an SSH connection. Execution on Fedora Server itself is not supported, as Fedora Server is designed to be "headless", i.e. without a graphical user interface.

Experienced administrators can also initialize an installation via the command line using VNC and virt-install, as sketched below. However, if you don't have a routine for this and no stock of configuration snippets to build on, this is also quite time-consuming and, moreover, error-prone.
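For illustration, such a CLI-initiated installation might look roughly like this (a sketch; names, sizes and paths are hypothetical, and options vary between virt-install versions):

[source,]
----
 […]# virt-install --name {vmname} --memory 4096 --vcpus 2 \
        --disk size=20 \
        --cdrom /var/lib/libvirt/boot/Fedora-Server-dvd.iso \
        --os-variant fedora35 --graphics vnc --noautoconsole
----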
=== Virtual disk image

The basic idea is not to always create a virtual disk anew when a new VM is to be installed, which would mean going through the same configuration steps in Anaconda over and over again and letting Anaconda copy the same RPM packages one by one from the installation media to the virtual disk. Instead, the administrator resorts to a preconfigured, bootable generic disk image and "imports" it in its entirety, unchanged, into the virtual machine as a virtual hard disk in one go.

This import is unproblematic, since the virtualized central hardware elements of a QEMU/KVM Linux virtual machine of the same architecture hardly differ. Any remaining differences can be adjusted in the course of post-installation tasks. In this way, an installation that would otherwise easily take about a quarter of an hour is reduced to a few minutes.

==== Custom virtual disk images

The easiest way to obtain a suitable virtual disk image is to back up a copy of the disk image created by Anaconda during the initial or a follow-up VM installation.

To do this, the administrator could copy the virtual disk file created by Anaconda from the `__~/libvirt/images/__` directory to, for example, the `__~/libvirt/boot/__` directory, the Installation Media pool. The appropriate opportunity is after completion of the installation, before the first boot, and prior to any post-install steps. As many VMs as desired can then import this (custom) image as their own disk.

Of course, each of these VMs uses the same initial configuration, especially the same root password and the same administrative user. A static configuration of the Ethernet interfaces would also be impractical, because it leads to conflicts at the beginning, unless the original VM is stopped temporarily. Therefore, the initial Ethernet configuration should use DHCP in any case, even if DHCP is not available. At least the internal network then uses DHCP and results in a working configuration.

All of this is a bit cumbersome and not optimal, but the initial post-install stage can adjust all this. And it is the only way to get a perfect resemblance of Fedora Server Edition so far.

==== Virtual disk image repositories

There are several offerings available:

* *virt-builder*
+
Install guestfs-tools (F35; libguestfs-tools-c in F34) and use virt-builder to create a partly customized disk image (e.g. root password) that you can import using virsh (see the sketch after this list).
+
You get a disk image file pretty quickly, and importing it into KVM is easy and fast as well.
+
Contrary to the description in virt-builder, you will not get a faithful virtualized Fedora Server: Cockpit is not installed, the filesystem is without LVM, many software packages are missing (vim, tar, sshfs, etc.), the firewall is without the Fedora Server zone - just to name a few.
+
The qcow2 image file created by default has a capacity of 10 GiB and also occupies 10 GiB in the file system - so the qcow2 format's thin-provisioning capability goes unused. The image is generated on an external server and imported as a binary.
+
With all the effort we put into Fedora to build even the smallest RPM package completely from source on Fedora infrastructure, it seems pretty absurd to use a complete VM binary from a third-party source or to rely on an external binary image.
+
Overall, this is not a suitable solution.

* *Image Builder Tool*
+
The tool creates an elaborate image file. But in the current implementation it requires almost as much effort as an Anaconda installation. And as in the case of virt-builder, a binary is imported from an external source.
+
So it is a solution of similarly limited suitability.
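For reference, such a virt-builder based creation might look roughly like this (a sketch; the `fedora-35` template name follows the virt-builder catalog, while the VM name and password are hypothetical):

[source,]
----
 […]# dnf install guestfs-tools
 […]# virt-builder fedora-35 \
        --output /var/lib/libvirt/images/{vmname}.qcow2 \
        --root-password password:{secret} --hostname {vmname}
----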
-==== Cloud Base Images +In addition, most distributions also provide a "cloud base image", a base system built like the specific cloud images, but without the respective configuration and administrative tools. For example, it supports the "no-cloud" option for cloud-init. These are installable (i.e. "imported") in an autonomous server as a VM. -A special feature of cloud images is a configuration for a specific runtime environment and purpose with the help of a special program, cloud-init. The necessary information is provided by the cloud system, e.g. Open Stack. These are not readily available on an autonomous server. With Fedora 33 and version 3 of the virt-install program, the installation and use of cloud images and cloud-init has been greatly simplified. +It is very different to what extent these cloud base images resemble an autonomous server of the respective distribution. In the case of Fedora, there is intentionally a fairly low resemblance to Fedora Server Edition. This path might be considered to be of limited usefulness. -Installation is now accomplished with a single and simple invocation of the virt-install CLI program. Currently, no support is available through a graphical or web-based program. \ No newline at end of file +Installation is accomplished on the command line with a single and simple invocation of the virt-install program. \ No newline at end of file