User guide for osbuild-composer
osbuild-composer is a service for building customized operating system images (currently only Fedora and RHEL). These images can be used with various virtualization software such as QEMU, VirtualBox, and VMware, and also with cloud computing providers like AWS, Azure, or GCP.
This guide contains instructions on installing the osbuild-composer service and its basic usage.
If you want to fix a typo, or even contribute new content, the sources for this webpage are hosted in the osbuild/guides GitHub repository.
For Red Hatters, the internal guides can be found here.
Basic concepts
osbuild-composer works with the concept of blueprints. A blueprint is a description of the final image and its customizations. A customization can be:
- an additional RPM package
- an enabled service
- a custom kernel command line parameter
and many others. See the Blueprint reference for more details.
An image is defined by its blueprint and image type, which is for example qcow2 (QEMU Copy On Write disk image) or AMI (Amazon Machine Image).
Finally, osbuild-composer also supports upload targets, which are cloud providers where an image can be stored after it is built. See the Uploading cloud images section for more details.
Example blueprint
name = "base-image-with-tmux"
description = "A base system with tmux"
version = "0.0.1"
[[packages]]
name = "tmux"
version = "*"
The blueprint is in TOML format.
Image types
osbuild-composer supports various types of output images. To see all supported types, run this command:
$ composer-cli compose types
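The exact list depends on the host distribution and version; run the command to see what your system supports. On Fedora, for example, the output typically includes entries such as the following (an illustrative, partial list):
ami
fedora-iot-commit
openstack
qcow2
vhd
vmdk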
Installation
To get started with osbuild-composer on your local machine, you can install the CLI interface or the Web UI, which is part of the Cockpit project.
CLI interface
For CLI only, run the following command to install necessary packages:
$ sudo dnf install osbuild-composer composer-cli
To enable the service, run this command:
$ sudo systemctl enable --now osbuild-composer.socket
Verify that the installation works by running composer-cli:
$ sudo composer-cli status show
If you prefer to run this command without sudo privileges, add your user to the weldr group:
$ sudo usermod -a -G weldr <user>
$ newgrp weldr
Web UI
If you prefer the Web UI interface, known as Image Builder, install the following package:
$ sudo dnf install cockpit-composer
and enable the cockpit and osbuild-composer services:
$ sudo systemctl enable --now osbuild-composer.socket
$ sudo systemctl enable --now cockpit.socket
Managing repositories
There are two kinds of repositories used in osbuild-composer:
- Custom 3rd party repositories - use these to include packages that are not available in the official Fedora or RHEL repositories.
- Official repository overrides - use these if you want to download base system RPMs from a location other than the official repositories, for example from a custom mirror in your network. Keep in mind that this will disable the default repositories, so the mirror must contain all necessary packages!
Custom 3rd party repositories
These are managed using composer-cli (see the manpage for a complete reference). To add a new repository, create a TOML file like this:
id = "k8s"
name = "Kubernetes"
type = "yum-baseurl"
url = "https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64"
check_gpg = false
check_ssl = false
system = false
and add it using composer-cli sources add <file-name.toml>. Verify its presence using composer-cli sources list and its content using composer-cli sources info <id>.
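Assuming the example above is saved as k8s.toml (the file name is arbitrary), the whole workflow might look like this:
$ composer-cli sources add k8s.toml
$ composer-cli sources list
$ composer-cli sources info k8s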
Using sources with specific distributions
A new optional field has been added to the repository source format: a list of distribution strings that the source will be used with when depsolving and building images.
Sources with no distros will be used with all composes. If you want to use a source only for a specific distribution, set the distros list to the distribution name(s) to use it with.
For example, a source that is only used when depsolving or building Fedora 32:
check_gpg = true
check_ssl = true
distros = ["fedora-32"]
id = "f32-local"
name = "local packages for fedora32"
system = false
type = "yum-baseurl"
url = "http://local/repos/fedora32/projectrepo/"
This source will be used for any requests that specify fedora-32; for example, listing packages for fedora-32 will include this source, but listing packages for the host distribution will not.
Official repository overrides
osbuild-composer does not inherit the system repositories located in /etc/yum.repos.d/. Instead, it has its own set of official repositories defined in /usr/share/osbuild-composer/repositories. To override the official repositories, define overrides in /etc/osbuild-composer/repositories. This directory is meant for user-defined overrides and the files located there take precedence over those in /usr.
The configuration files are not in the usual "repo" format. Instead, they are simple JSON files.
Defining official repository overrides
To set your own repositories, create this directory if it does not exist already:
$ sudo mkdir -p /etc/osbuild-composer/repositories
Based on the system you want to build an image for, determine the name of a new JSON file:
- Fedora 32 - fedora-32.json
- Fedora 33 - fedora-33.json
- RHEL 8.4 - rhel-84.json
- RHEL 9.0 - rhel-90.json
Then, create the JSON file with the following structure (or copy the file from /usr/share/osbuild-composer/ and modify its content):
{
    "<ARCH>": [
        {
            "name": "<REPO NAME>",
            "metalink": "",
            "baseurl": "",
            "mirrorlist": "",
            "gpgkey": "",
            "check_gpg": "",
            "metadata_expire": ""
        }
    ]
}
Specify only one of the following attributes: metalink, mirrorlist, or baseurl. All the remaining fields, such as gpgkey and metadata_expire, are optional.
For example, for building a Fedora 33 image running on x86_64, create /etc/osbuild-composer/repositories/fedora-33.json with this content:
{
"x86_64": [
{
"name": "fedora",
"metalink": "https://mirrors.fedoraproject.org/metalink?repo=fedora-33&arch=x86_64",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBF4wBvsBEADQmcGbVUbDRUoXADReRmOOEMeydHghtKC9uRs9YNpGYZIB+bie\nbGYZmflQayfh/wEpO2W/IZfGpHPL42V7SbyvqMjwNls/fnXsCtf4LRofNK8Qd9fN\nkYargc9R7BEz/mwXKMiRQVx+DzkmqGWy2gq4iD0/mCyf5FdJCE40fOWoIGJXaOI1\nTz1vWqKwLS5T0dfmi9U4Tp/XsKOZGvN8oi5h0KmqFk7LEZr1MXarhi2Va86sgxsF\nQcZEKfu5tgD0r00vXzikoSjn3qA5JW5FW07F1pGP4bF5f9J3CZbQyOjTSWMmmfTm\n2d2BURWzaDiJN9twY2yjzkoOMuPdXXvovg7KxLcQerKT+FbKbq8DySJX2rnOA77k\nUG4c9BGf/L1uBkAT8dpHLk6Uf5BfmypxUkydSWT1xfTDnw1MqxO0MsLlAHOR3J7c\noW9kLcOLuCQn1hBEwfZv7VSWBkGXSmKfp0LLIxAFgRtv+Dh+rcMMRdJgKr1V3FU+\nrZ1+ZAfYiBpQJFPjv70vx+rGEgS801D3PJxBZUEy4Ic4ZYaKNhK9x9PRQuWcIBuW\n6eTe/6lKWZeyxCumLLdiS75mF2oTcBaWeoc3QxrPRV15eDKeYJMbhnUai/7lSrhs\nEWCkKR1RivgF4slYmtNE5ZPGZ/d61zjwn2xi4xNJVs8q9WRPMpHp0vCyMwARAQAB\ntDFGZWRvcmEgKDMzKSA8ZmVkb3JhLTMzLXByaW1hcnlAZmVkb3JhcHJvamVjdC5v\ncmc+iQI4BBMBAgAiBQJeMAb7AhsPBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAK\nCRBJ/XdJlXD/MZm2D/9kriL43vd3+0DNMeA82n2v9mSR2PQqKny39xNlYPyy/1yZ\nP/KXoa4NYSCA971LSd7lv4n/h5bEKgGHxZfttfOzOnWMVSSTfjRyM/df/NNzTUEV\n7ORA5GW18g8PEtS7uRxVBf3cLvWu5q+8jmqES5HqTAdGVcuIFQeBXFN8Gy1Jinuz\nAH8rJSdkUeZ0cehWbERq80BWM9dhad5dW+/+Gv0foFBvP15viwhWqajr8V0B8es+\n2/tHI0k86FAujV5i0rrXl5UOoLilO57QQNDZH/qW9GsHwVI+2yecLstpUNLq+EZC\nGqTZCYoxYRpl0gAMbDLztSL/8Bc0tJrCRG3tavJotFYlgUK60XnXlQzRkh9rgsfT\nEXbQifWdQMMogzjCJr0hzJ+V1d0iozdUxB2ZEgTjukOvatkB77DY1FPZRkSFIQs+\nfdcjazDIBLIxwJu5QwvTNW8lOLnJ46g4sf1WJoUdNTbR0BaC7HHj1inVWi0p7IuN\n66EPGzJOSjLK+vW+J0ncPDEgLCV74RF/0nR5fVTdrmiopPrzFuguHf9S9gYI3Zun\nYl8FJUu4kRO6JPPTicUXWX+8XZmE94aK14RCJL23nOSi8T1eW8JLW43dCBRO8QUE\nAso1t2pypm/1zZexJdOV8yGME3g5l2W6PLgpz58DBECgqc/kda+VWgEAp7rO2A==\n=EPL3\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true
}
]
}
Using repositories that require subscription
osbuild-composer can use subscriptions from the host system if they are configured in the appropriate file in /etc/osbuild-composer/repositories. To enable such a repository, copy the baseurl from /etc/yum.repos.d/redhat.repo and paste it into the JSON repository definition. Then enable RHSM support using "rhsm": true, like this:
{
    "x86_64": [
        {
            "baseurl": "https://localhost/repo",
            "gpgkey": "...",
            "rhsm": true
        }
    ]
}
osbuild-composer will read the /etc/yum.repos.d/redhat.repo file from the host system and use it as a source of subscriptions. The same subscriptions must be available on a remote worker, if used.
Container registry credentials
All communication with container registries is done by the osbuild-worker service. It can be configured via the /etc/osbuild-worker/osbuild-worker.toml configuration file. The file is read only once at service start, so the service needs to be restarted after making any changes.
The configuration file has a containers section with an auth_file_path field, a string referring to the path of a containers-auth.json(5) file to be used for accessing protected resources. An example configuration could look like this:
[containers]
auth_file_path = "/etc/osbuild-worker/containers-auth.json"
For detailed information on the format of the authorization file itself, refer to the corresponding man page: man 5 containers-auth.json.
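After editing the file, restart the worker so the new configuration is picked up. On a typical single-host installation the worker runs as an instantiated systemd unit; the instance number below is an assumption and may differ on your system:
$ sudo systemctl restart osbuild-worker@1.service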
Creating images with the CLI interface
An image is specified by a blueprint and an image type. Unless you specify otherwise, it will use the same distribution and version (e.g. Fedora 33) as the host system. The architecture will always be the same as the one on the host.
Blueprints management using composer-cli
osbuild-composer provides storage for blueprints. To store a blueprint file named blueprint.toml, run this command:
$ composer-cli blueprints push blueprint.toml
To verify that the blueprint is available, list all currently stored blueprints:
$ composer-cli blueprints list
base-image-with-tmux
To display the blueprint you have just added, run the command:
$ sudo composer-cli blueprints show base-image-with-tmux
name = "base-image-with-tmux"
description = "A base system with tmux"
version = "0.0.1"
modules = []
groups = []
[[packages]]
name = "tmux"
version = "*"
Building an image using composer-cli
To build a customized image, start by choosing the blueprint and image type you would like to build. To do so, run the following commands:
$ sudo composer-cli blueprints list
$ sudo composer-cli compose types
and trigger a compose (example using the blueprint from the previous section):
$ composer-cli compose start base-image-with-tmux qcow2
Compose ab71b61a-b3c4-434f-b214-1e16527766ff added to the queue
Note that the compose is assigned a Universally Unique Identifier (UUID), which you can use to monitor the image build progress:
$ composer-cli compose info ab71b61a-b3c4-434f-b214-1e16527766ff
ab71b61a-b3c4-434f-b214-1e16527766ff RUNNING base-image-with-tmux 0.0.1 qcow2 2147483648
Packages:
tmux-*
Modules:
Dependencies:
At this time, the compose is in a "RUNNING" state. Once the compose reaches the "FINISHED" state, you can download the resulting image by running the following command:
$ sudo composer-cli compose results ab71b61a-b3c4-434f-b214-1e16527766ff
ab71b61a-b3c4-434f-b214-1e16527766ff.tar: 455.18 MB
$ fd
ab71b61a-b3c4-434f-b214-1e16527766ff.tar
$ tar xf ab71b61a-b3c4-434f-b214-1e16527766ff.tar
$ fd
ab71b61a-b3c4-434f-b214-1e16527766ff-disk.qcow2
ab71b61a-b3c4-434f-b214-1e16527766ff.json
ab71b61a-b3c4-434f-b214-1e16527766ff.tar
logs
logs/osbuild.log
From the example output above, the resulting tarball contains not only the qcow2 image, but also a JSON file, which is the osbuild manifest (see the Developer guide for more details), and a directory with logs.
For more options, see the help text for composer-cli:
$ sudo composer-cli compose help
Tip: Booting the image with qemu
If you want to quickly run the resulting image, you can use qemu:
$ qemu-system-x86_64 \
-enable-kvm \
-m 3000 \
-snapshot \
-cpu host \
-net nic,model=virtio \
-net user,hostfwd=tcp::2223-:22 \
ab71b61a-b3c4-434f-b214-1e16527766ff-disk.qcow2
Be aware that you must specify a way to access the machine in the blueprint. For example, you can create a user with a known password, set an SSH key, or enable cloud-init to use a cloud-init ISO file.
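For example, a user customization along these lines (the name, password, and key are placeholders) makes the image accessible over SSH and on the console:
[[customizations.user]]
name = "admin"
password = "changeme"
key = "ssh-ed25519 AAAA... user@example.com"
groups = ["wheel"]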
Building OSTree image
This section contains a guide for building OSTree commits. As opposed to the "traditional" image types, these commits are not directly bootable: although they essentially contain a full operating system, they need to be deployed in order to boot. This can, for example, be done via the Fedora installer (Anaconda).
OSTree is a technology for creating immutable operating system images and it is the base for Fedora CoreOS, Fedora IoT, Fedora Silverblue, and RHEL for Edge. For more information on OSTree, see their website.
Overview of the intended result
As mentioned above, osbuild-composer produces OSTree commits, which are not directly bootable. The commits are shipped inside a tarball to make their usage more convenient. In order to deploy them, you will need:
- a Fedora installation ISO - such as netinst (https://getfedora.org/en/server/download/)
- an HTTP server to serve the content of the tarball to the Fedora virtual machine booted from the ISO
- a Kickstart file that instructs Anaconda (the Fedora installer) to use the OSTree commit from the HTTP server
In this guide, a container running Apache httpd will be used as the HTTP server.
The result will look like this:
_________________ ____________________________
| | | |
| |------> | Fedora VM with mounted ISO |
| | | - Anaconda |
| Fedora Host OS | |____________________________|
| | |
| | _______|________________________
| | | |
| |------->| Fedora container running httpd |
|_________________| | serving content of the tarball|
| and the kickstart file |
|________________________________|
Note: If you would like to understand what is inside the tarball, read the upstream OSTree documentation.
Building an OSTree commit
Start by creating a blueprint for your commit. Using your favorite text editor (vi, for example), create a file named fishy.toml with this content:
name = "fishy-commit"
description = "Fishy OSTree commit"
version = "0.0.1"
[[packages]]
name = "fish"
version = "*"
Now push the blueprint to osbuild-composer using composer-cli:
$ composer-cli blueprints push fishy.toml
And start a build:
$ composer-cli compose start fishy-commit fedora-iot-commit
Compose 8e8014f8-4d15-441a-a26d-9ed7fc89e23a added to the queue
Monitor the build status using:
$ composer-cli compose status
And finally when the compose is complete, download the result:
$ composer-cli compose image 8e8014f8-4d15-441a-a26d-9ed7fc89e23a
8e8014f8-4d15-441a-a26d-9ed7fc89e23a-commit.tar: 670.45 MB
Writing a Kickstart file
As mentioned above, the Kickstart file is meant for the Anaconda installer. It contains instructions on how to install the system.
Create a file named ostree.ks with this content:
lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
autopart
reboot
user --name=core --groups=wheel --password=foobar
ostreesetup --nogpg --url=http://10.0.2.2:8000/repo/ --osname=iot --remote=iot --ref=fedora/33/x86_64/iot
For those interested in all the options, you can read Anaconda’s documentation.
The crucial part is on the last line: the ostreesetup command is used to fetch the OSTree commit. As for the IP address, this tutorial uses qemu to boot the virtual machine, and 10.0.2.2 is an address you can use to reach the host system from the guest: User Networking.
Setting up an HTTP server
Now that the kickstart file and OSTree commit are ready, create a container running an HTTP server that serves those files. Start by creating a Dockerfile:
FROM fedora:latest
RUN dnf -y install httpd && dnf clean all
ADD *.tar *.ks /var/www/html/
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Make sure you have everything in the build directory (keep in mind that the UUID is random, so it will be different in your case):
$ ls
8e8014f8-4d15-441a-a26d-9ed7fc89e23a-commit.tar
Dockerfile
ostree.ks
Build the container image:
$ podman build -t ostree .
And run it:
$ podman run --rm -p 8000:80 ostree
Note: You might be wondering why bother with a container when you could just use "python -m http.server". The problem is that OSTree produces far too many requests and the Python HTTP server simply fails to keep up.
Running a VM and applying the OSTree commit
Start by downloading the Netinstall image from here: https://getfedora.org/en/server/download/
Create an empty qcow2 image. That is an image of a hard drive for the virtual machine (VM).
$ qemu-img create -f qcow2 disk-image.img 5G
Run a VM using the hard drive and mount the installation ISO:
$ qemu-system-x86_64 \
-enable-kvm \
-m 3000 \
-snapshot \
-cpu host \
-net nic,model=virtio \
-net user,hostfwd=tcp::2223-:22 \
-cdrom $HOME/Downloads/Fedora-Server-netinst-x86_64-33-1.2.iso \
disk-image.img
Note: To prevent any issues, use the latest stable Fedora as the host OS for this tutorial.
This command instructs qemu (the hypervisor) to:
- Use KVM virtualization (makes the VM faster).
- Increase the memory to 3000 MB (some processes can get memory hungry, for example dnf).
- Snapshot the hard drive image, don't overwrite its content.
- Use the same CPU type as the host.
- Connect the guest to a virtual network bridge on the host and forward TCP port 2223 from the host to the SSH port (22) on the guest (makes it easier to connect to the guest system).
- Mount the installation ISO.
- Use the hard drive image created above.
At the initial screen, use the arrow keys to select the "Install Fedora 33" line and press the TAB key. You'll see a line of kernel command line options appear below, something like:
vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora quiet
Add a space and this string:
inst.ks=http://10.0.2.2:8000/ostree.ks
Resulting in this kernel command line:
vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora quiet inst.ks=http://10.0.2.2:8000/ostree.ks
The IP address 10.0.2.2 is again used here, because the VM is running inside qemu.
Press "Enter"; the Anaconda GUI will show up and automatically install the OSTree commit created above.
Once the system is installed and rebooted, use username "core" and password "foobar" to login. You can change the credentials in the kickstart file.
Building a RHEL for Edge Installer
The following describes how to build a boot ISO which installs an OSTree-based system using the "RHEL for Edge Container" in combination with the "RHEL for Edge Installer" image types. The workflow has the same result as the Building OSTree Image guide with the new image types automating some of the steps.
Note that there are some small differences in this procedure between RHEL 8.4 and RHEL 8.5:
- The names of the image types have changed. In 8.4, the image types were prefixed by rhel-. This prefix was removed in 8.5.
  - The old names rhel-edge-container and rhel-edge-installer still work in RHEL 8.5 as aliases to the new names, however these names are considered deprecated and may be removed completely in future versions.
- The internal port for the container has changed from 80 in RHEL 8.4 to 8080 in RHEL 8.5.
Process overview
- Create and load a blueprint with customizations.
- Build an edge-container (RHEL 8.5) or rhel-edge-container (RHEL 8.4) image.
- Load the image in podman and start the container.
- Create and load an empty blueprint.
- Build an edge-installer (RHEL 8.5) or rhel-edge-installer (RHEL 8.4) image, pointing the ostree-url to http://10.0.2.2:8080/repo/ and setting the ostree-ref to rhel/edge/demo.
The edge-container image type creates an OSTree commit and embeds it into an OCI container with a web server. When the container is started, the web server serves the commit as an OSTree repository.
The edge-installer image type pulls the commit from the running container and creates an installable boot ISO with a kickstart file configured to use the embedded OSTree commit.
Detailed workflow
Build the container and serve the commit
Start by creating a blueprint for the commit. The content below is an example and can be modified to fit your needs. For this guide, we will name the file example.toml.
name = "example"
description = "RHEL for Edge Installer example"
version = "0.0.3"
[[packages]]
name = "vim-enhanced"
version = "*"
[[packages]]
name = "tmux"
version = "*"
[customizations]
[[customizations.user]]
name = "user"
description = "Example User"
password = "$6$uvdfeuHQYM6kUaea$fvvzyu.Z.u89TVCB2tq8UEc52XDFGnAqCo75BX3zu8OzIbS.EKMo/Saammb151sLrdzmlESnpNEPrJ7h5b0c6/"
groups = ["wheel"]
Now push the blueprint to osbuild-composer using composer-cli:
$ composer-cli blueprints push example.toml
And start the container build:
$ composer-cli compose start-ostree --ref "rhel/edge/example" example edge-container
Compose 8e8014f8-4d15-441a-a26d-9ed7fc89e23a added to the queue
The value for --ref can be changed, but it must begin with an alphanumeric character and contain only alphanumeric characters and the /, _, -, and . characters.
Note: In RHEL 8.4, the image type was called rhel-edge-container. It has been renamed to edge-container in 8.5 onwards.
Monitor the build status using:
$ composer-cli compose status
When the compose is FINISHED, download the result:
$ composer-cli compose image 8e8014f8-4d15-441a-a26d-9ed7fc89e23a
8e8014f8-4d15-441a-a26d-9ed7fc89e23a-rhel84-container.tar: 670.45 MB
Load the container image into the local container storage:
$ cat 8e8014f8-4d15-441a-a26d-9ed7fc89e23a-rhel84-container.tar | podman load
Getting image source signatures
Copying blob 82934cd3e69d done
Copying config d11911c3dc done
Writing manifest to image destination
Storing signatures
Loaded image(s): @d11911c3dc4bee46cabd52b91c87f48b8a7d450fadc8cfbeb69e2de98b413521
Tag the image for convenience:
$ podman tag d11911c3dc4bee46cabd52b91c87f48b8a7d450fadc8cfbeb69e2de98b413521 localhost/edge-example
Start the container (note the different internal port numbers between the two versions).
For RHEL 8.4:
$ podman run --rm -d -p 8080:80 --name ostree-repo localhost/edge-example
For RHEL 8.5+:
$ podman run --rm -d -p 8080:8080 --name ostree-repo localhost/edge-example
Note: The -d option detaches the container and leaves it running in the background. You can also remove the option to keep the container attached to the terminal.
Build the installer
Start by creating a simple blueprint for the installer. The blueprint must not have any customizations or packages; only a name, and optionally a version and a description. Add the content below to a file and name it empty.toml:
name = "empty"
description = "Empty blueprint"
version = "0.0.1"
The edge-installer image type does not support customizations or package selection, so the build will fail if any are specified.
Push the blueprint:
$ composer-cli blueprints push empty.toml
Start the build:
$ composer-cli compose start-ostree --ref "rhel/edge/example" --url http://10.0.2.2:8080/repo/ empty edge-installer
Compose 09d98a67-a401-4613-9a5b-b93f8a6e695f added to the queue
Note: In RHEL 8.4, the image type was called rhel-edge-installer. It has been renamed to edge-installer in 8.5 onwards.
The --ref argument must match the one used for the container compose above.
The --url in this case is the address of the container. This tutorial uses qemu to boot the virtual machine, and 10.0.2.2 is an address you can use to reach the host system from the guest: User Networking.
Monitor the build status using:
$ composer-cli compose status
When the compose is FINISHED, download the result:
$ composer-cli compose image 09d98a67-a401-4613-9a5b-b93f8a6e695f
09d98a67-a401-4613-9a5b-b93f8a6e695f-rhel84-boot.iso: 1422.61 MB
The downloaded image can then be booted to begin the installation. If you used the blueprint in this guide, use the username "user" and the password "password42" to log in.
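If you want to try the installer in a VM, you can boot the ISO with qemu in the same way as in the OSTree guide above; the disk image name below is just a placeholder:
$ qemu-img create -f qcow2 edge-disk.qcow2 10G
$ qemu-system-x86_64 \
    -enable-kvm \
    -m 3000 \
    -cpu host \
    -cdrom 09d98a67-a401-4613-9a5b-b93f8a6e695f-rhel84-boot.iso \
    edge-disk.qcow2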
Uploading cloud images
osbuild-composer can upload images to a cloud provider right after they are built. The configuration is slightly different for each cloud provider. See the individual subsections of this documentation.
Uploading an image to AWS
osbuild-composer provides users with a convenient way to upload images directly to AWS right after the image is built. Before you can use this feature, you have to define the vmimport IAM role in your AWS account. See VM Import/Export Requirements in the AWS documentation.
Now, you are ready to upload your first image to AWS. Using a text editor of your choice, create a configuration file with the following content:
provider = "aws"
[settings]
accessKeyID = "AWS_ACCESS_KEY_ID"
secretAccessKey = "AWS_SECRET_ACCESS_KEY"
bucket = "AWS_BUCKET"
region = "AWS_REGION"
key = "OBJECT_KEY"
There are several considerations when filling in values in this file:
- AWS_BUCKET must be in the AWS_REGION.
- The vmimport role must have read access to the AWS_BUCKET.
- OBJECT_KEY is the name of an intermediate S3 object. It must not exist before the upload, and it will be deleted when the process is done.
If your authentication method requires you to also specify a session token, you can put it in the settings section of the configuration file in a field named sessionToken.
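For example, the configuration could then look like this (all values are placeholders):
provider = "aws"
[settings]
accessKeyID = "AWS_ACCESS_KEY_ID"
secretAccessKey = "AWS_SECRET_ACCESS_KEY"
sessionToken = "AWS_SESSION_TOKEN"
bucket = "AWS_BUCKET"
region = "AWS_REGION"
key = "OBJECT_KEY"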
Once everything is configured, you can trigger a compose as usual with an additional image name and cloud provider profile:
$ sudo composer-cli compose start base-image-with-tmux ami IMAGE_KEY aws-config.toml
where IMAGE_KEY will be the name of your new AMI once it is uploaded to EC2.
Uploading an image to an AWS S3 Bucket
osbuild-composer provides users with a convenient way to upload images of all sorts directly to an AWS S3 bucket right after the image is built.
Using a text editor of your choice, create a configuration file with the following content:
provider = "aws.s3"
[settings]
accessKeyID = "AWS_ACCESS_KEY_ID"
secretAccessKey = "AWS_SECRET_ACCESS_KEY"
bucket = "AWS_BUCKET"
region = "AWS_REGION"
key = "OBJECT_KEY"
There are several considerations when filling in values in this file:
- AWS_BUCKET must be in the AWS_REGION.
If your authentication method requires you to also specify a session token, you can put it in the settings section of the configuration file in a field named sessionToken.
Once everything is configured, you can trigger a compose as usual with an additional image name and cloud provider profile:
$ sudo composer-cli compose start base-image-with-tmux qcow2 IMAGE_KEY aws-s3-config.toml
Uploading an image to GCP
osbuild-composer provides users with a convenient way to upload images directly to GCP right after the image is built. Before you can use this feature, you have to provide credentials for the user or service account that you would like to use for uploading images to GCP.
The account associated with the credentials must have at least the following IAM roles assigned:
- roles/storage.admin - to create and delete storage objects
- roles/compute.storageAdmin - to import a VM image to Compute Engine
Now, you are ready to upload your first image to GCP.
Using a text editor of your choice, create a configuration file gcp-config.toml with the following content:
provider = "gcp"
[settings]
bucket = "GCP_BUCKET"
region = "GCP_STORAGE_REGION"
object = "OBJECT_KEY"
credentials = "GCP_CREDENTIALS"
There are several considerations when filling in values in this file:
- GCP_BUCKET must point to an existing bucket.
- GCP_STORAGE_REGION can be a regular Google storage region, but also a dual or multi region.
- OBJECT_KEY is the name of an intermediate storage object. It must not exist before the upload, and it will be deleted when the upload process is done. If the object name does not end with .tar.gz, the extension is automatically added to the object name.
- GCP_CREDENTIALS is the Base64-encoded content of the credentials JSON file downloaded from GCP. The credentials are used to determine the GCP project to upload the image to. Specifying this value in gcp-config.toml may be optional if you use a different mechanism of authenticating with GCP. For more information about the various ways of authenticating with GCP, read the Authenticating with GCP section below.
After everything is configured, you can trigger a compose as usual with an additional image name and cloud provider profile:
sudo composer-cli compose start base-image-with-tmux gce IMAGE_KEY gcp-config.toml
where IMAGE_KEY will be the name of your new GCE image, once it is uploaded to GCP.
Authenticating with GCP
osbuild-composer supports multiple ways of authenticating with GCP.
If osbuild-composer is configured to authenticate with GCP in multiple ways, it uses them in the following order of preference:
- Credentials specified with the composer-cli command in the configuration file.
- Credentials configured in the osbuild-composer worker configuration.
- Application Default Credentials from the Google GCP SDK library, which tries to automatically find a way to authenticate using the following options:
  - If the GOOGLE_APPLICATION_CREDENTIALS environment variable is set, it tries to load and use credentials from the file pointed to by the variable.
  - It tries to authenticate using the service account attached to the resource which is running the code (e.g. a Google Compute Engine VM).
Note that the GCP credentials are used to determine the GCP project to upload the image to. Therefore, unless you want to upload all of your images to the same GCP project, you should always specify credentials with the composer-cli command.
Specifying credentials with the composer-cli command
You need to specify the credentials with the composer-cli command in the provided upload target configuration gcp-config.toml:
provider = "gcp"
[settings]
...
credentials = "GCP_CREDENTIALS"
The GCP_CREDENTIALS value is the Base64-encoded content of the Google account credentials JSON file. The reason for this is that the file is quite large and contains multiple key values; mapping them to the TOML configuration format would require more manual work from the user than encoding the whole file in Base64 and specifying it as a single value.
To get the encoded content of the Google account credentials file with the path stored in the GOOGLE_APPLICATION_CREDENTIALS environment variable, run:
base64 -w 0 "${GOOGLE_APPLICATION_CREDENTIALS}"
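Putting it together, one possible way to generate the whole configuration file is a small shell snippet like this (the bucket, region, and object values are placeholders):
cat > gcp-config.toml <<EOF
provider = "gcp"

[settings]
bucket = "GCP_BUCKET"
region = "GCP_STORAGE_REGION"
object = "OBJECT_KEY"
credentials = "$(base64 -w 0 "${GOOGLE_APPLICATION_CREDENTIALS}")"
EOF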
Specifying credentials in the osbuild-composer worker configuration
You can configure the credentials to be used for GCP globally for all image builds in the worker configuration /etc/osbuild-worker/osbuild-worker.toml:
[gcp]
credentials = "PATH_TO_GCP_ACCOUNT_CREDENTIALS"
Uploading an image to a bucket in a Generic S3 server
osbuild-composer provides users with a convenient way to upload images of all sorts directly to a bucket in a generic S3 server right after the image is built.
Using a text editor of your choice, create a configuration file with the following content:
provider = "generic.s3"
[settings]
endpoint = "S3_SERVER_ENDPOINT"
accessKeyID = "S3_ACCESS_KEY_ID"
secretAccessKey = "S3_SECRET_ACCESS_KEY"
bucket = "S3_BUCKET"
region = "S3_REGION"
key = "OBJECT_KEY"
There are several considerations when filling in values in this file:
- S3_REGION must still be set (e.g. to us-east-1) even if it has no meaning for your S3 server.
- If your server is using HTTPS with a certificate signed by your own CA, you can either pass the CA bundle by setting the field ca_bundle, pointing it to the CA's public certificate, or skip SSL verification by setting skip_ssl_verification to true, as sketched below.
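For example, a configuration for such a server could look like this (the endpoint, credentials, and CA path are placeholders; use either ca_bundle or skip_ssl_verification, not both):
provider = "generic.s3"
[settings]
endpoint = "https://s3.example.com"
accessKeyID = "S3_ACCESS_KEY_ID"
secretAccessKey = "S3_SECRET_ACCESS_KEY"
bucket = "S3_BUCKET"
region = "us-east-1"
key = "OBJECT_KEY"
ca_bundle = "/etc/pki/tls/certs/myca.crt"
# skip_ssl_verification = true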
Once everything is configured, you can trigger a compose as usual with an additional image name and cloud provider profile:
$ sudo composer-cli compose start base-image-with-tmux qcow2 IMAGE_KEY generic-s3-config.toml
Uploading an image to Microsoft Azure
osbuild-composer builds images and delivers them to Microsoft Azure automatically. These images are ready to use with virtual machines in the Azure cloud.
Initial setup
Before you can upload images to Azure with osbuild-composer, your account needs some initial setup. Be sure to complete these steps:
- Create a resource group
- Create a storage account inside the resource group
- Create a storage container within the storage account
- Gather your access keys
For a detailed walkthrough on each step within the Azure portal, review the Build RHEL images for Azure with Image Builder post on the Red Hat Blog.
Make a note of the following items during the setup so you can provide them to osbuild-composer during the build process:
- the name of your storage account
- the name of the storage container inside your storage account
- the access key for your storage account
Deploy
Push a blueprint containing your image configuration and create a new file called azure.toml that contains the information about your Azure storage account:
provider = "azure"
[settings]
storageAccount = "your storage account name"
storageAccessKey = "storage access key you copied in the Azure portal"
container = "your storage container name"
Build and deploy the image to Azure:
composer-cli compose start my_blueprint vhd my_image_key azure.toml
In this example, my_blueprint is the name of the blueprint containing your image configuration. Replace my_image_key with the preferred image name you want to see in Azure. This is the name that appears inside your storage container.
Uploading an image to OCI
osbuild-composer provides users with a convenient way to upload images directly to OCI right after the image is built.
See Managing Custom Images in the OCI documentation (which includes permission details).
Now, you are ready to upload your first image to OCI. Using a text editor of your choice, create a configuration file with the following content:
provider = "oci"
[settings]
user = "OCI_CLI_USER"
tenancy = "OCI_CLI_TENANCY"
fingerprint = "OCI_CLI_FINGERPRINT"
region = "OCI_CLI_REGION"
bucket = "OCI_BUCKET"
namespace = "OCI_NAMESPACE"
compartment = "OCI_COMPARTMENT"
private_key = '''
...
'''
There are several considerations when filling in values in this file:
- OCI_BUCKET must be in the OCI_REGION and must exist before the upload.
Once everything is configured, you can trigger a compose as usual with an additional image name and cloud provider profile:
$ sudo composer-cli compose start BLUEPRINT_NAME oci IMAGE_KEY oci-config.toml
where IMAGE_KEY will be the name of your new OCI image once uploaded.
Uploading a container image to a registry
osbuild-composer can upload a container image, like the RHEL for Edge container, to a registry directly after it has been built.
In order to do so, the container reference and an upload configuration file need to be specified when building a container artifact:
$ sudo composer-cli compose start BLUEPRINT container REFERENCE CONFIG.toml
where BLUEPRINT is the name of the blueprint for the container and REFERENCE is the reference to the container image, like registry.example.com/image:tag. If :tag is omitted, :latest is the default. The CONFIG.toml file must include provider = "container". Other values are optional.
provider = "container" # required
[settings]
tls_verify = false # optional, TLS verification, default: true
username = "USERNAME" # optional, username to use
password = "PASSWORD" # optional, password to use
Instead of specifying username and password directly, a central containers-auth.json(5) file can be used; see Container registry credentials.
Blueprint reference
Blueprints are simple text files in TOML format that describe which packages to install into the image, allowing you to specify package versions. They can also define a limited set of customizations to make to the final image.
A basic blueprint looks like this:
name = "base"
description = "A base system with bash"
version = "0.0.1"
[[packages]]
name = "bash"
version = "4.4.*"
Where:
- The name field is the name of the blueprint. It can contain spaces, but they will be converted to - when it is written to disk. It should be short and descriptive.
- description can be a longer description of the blueprint; it is only used for display purposes.
- version is a semver-compatible version number. If a new blueprint is uploaded with the same version, the server will automatically bump the PATCH level of the version. If the version doesn't match, it will be used as is; for example, uploading a blueprint with version set to 0.1.0 when the existing blueprint version is 0.0.1 will result in the new blueprint being stored as version 0.1.0.
Packages and modules
[[packages]] and [[modules]] entries describe the package names and the matching version glob to be installed into the image.
The package names must match the names exactly, and the versions can be an exact match or a filesystem-like glob of the version using * wildcards and ? character matching.
Currently there are no differences between packages and modules in osbuild-composer. Both are treated like an RPM package dependency.
For example, to install the tmux-2.9a and openssh-server-8.* packages, add this to your blueprint:
[[packages]]
name = "tmux"
version = "2.9a"
[[packages]]
name = "openssh-server"
version = "8.*"
Containers
[[containers]] entries describe the container images to be embedded into the image.
The source field is required and is a reference to a container image at a registry. A tag or digest can be specified; if none is given, the latest tag is used. The name to be used locally can be selected via the name field. Transport layer security can be controlled via the optional tls-verify boolean field; the default is true.
The container is pulled during the image build and stored in the image at the default local container storage location that is appropriate for the image type, so that all supported container tools like podman and cri-o will be able to work with it.
The embedded containers are not started.
To embed the latest fedora container from http://quay.io, add this to your blueprint:
[[containers]]
source = "quay.io/fedora/fedora:latest"
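If you also want to pick the local name or disable TLS verification (for example for a registry with a self-signed certificate), the optional fields described above can be set as well; the registry host here is only an illustration:
[[containers]]
source = "registry.example.com/myimage:v1"
name = "myimage"
tls-verify = false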
To access protected container resources, a containers-auth.json(5) file can be used; see Container registry credentials.
Groups
The [[groups]] entries describe a group of packages to be installed into the image. Package groups are defined in the repository metadata. Each group has a descriptive name, used primarily for display in user interfaces, and an ID, more commonly used in kickstart files. Here, the ID is the expected way of listing a group.
Groups have three different ways of categorizing their packages: mandatory, default, and optional. For the purposes of blueprints, only mandatory and default packages will be installed. There is no mechanism for selecting optional packages.
For example, if you want to install the anaconda-tools group, add the following to your blueprint:
[[groups]]
name="anaconda-tools"
groups is a TOML list, so each group needs to be listed separately, like packages, but with no version number.
Customizations
The [customizations] section can be used to configure the hostname of the final image. For example:
[customizations]
hostname = "baseimage"
This is optional and can be left out to use the defaults.
Kernel command-line arguments
This allows you to append arguments to the bootloader's kernel command line.
For example:
[customizations.kernel]
append = "nosmt=force"
SSH Keys
Set an existing user's ssh key in the final image:
[[customizations.sshkey]]
user = "root"
key = "PUBLIC SSH KEY"
The key will be added to the user's authorized_keys file.
Warning: key expects the entire content of the public key file, traditionally ~/.ssh/id_rsa.pub, but any algorithm supported by the OS is valid.
Additional user
Add a user to the image, and/or set their ssh key. All fields for this section are optional except for the name. The following is a complete example:
[[customizations.user]]
name = "admin"
description = "Administrator account"
password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L..."
key = "PUBLIC SSH KEY"
home = "/srv/widget/"
shell = "/usr/bin/bash"
groups = ["widget", "users", "wheel"]
uid = 1200
gid = 1200
If the password starts with $6$, $5$, or $2b$ it will be stored as an encrypted password. Otherwise it will be treated as a plain text password.
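To generate a hash in the expected format you can use, for example, openssl (openssl passwd -6 is available in OpenSSL 1.1.1 and later); it prompts for the password and prints a $6$... string that can be pasted into the blueprint:
$ openssl passwd -6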
Warning: key expects the entire content of ~/.ssh/id_rsa.pub.
Additional group
Add a group to the image. Name is required and GID is optional:
[[customizations.group]]
name = "widget"
gid = 1130
Timezone
Customizing the timezone and the NTP servers to use for the system:
[customizations.timezone]
timezone = "US/Eastern"
ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"]
The values supported by timezone can be listed by running the command:
$ timedatectl list-timezones
If no timezone is set up, the system will default to UTC. The NTP servers are also optional and will default to the distribution defaults, which are suitable for most uses.
Some image types already have NTP servers set up (for example, the Google cloud image) and they cannot be overridden, because they are required to boot in the selected environment. However, the timezone will be updated to the one selected in the blueprint.
Locale
Customize the locale settings for the system:
[customizations.locale]
languages = ["en_US.UTF-8"]
keyboard = "us"
The values supported by languages can be listed by running the command:
$ localectl list-locales
The values supported by keyboard can be listed by running the command:
$ localectl list-keymaps
Multiple languages can be added. The first one becomes the primary, and the others are added as secondary. You must include one or more languages or keyboards in the section.
Firewall
By default the firewall blocks all access, except for services that enable their ports explicitly, like sshd. The following customization can be used to open other ports or services. Ports are configured using the port:protocol format:
[customizations.firewall]
ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp"]
Numeric ports, or their names from /etc/services, can be used in the ports enabled/disabled lists.
The blueprint settings extend any existing settings in the image templates. Thus, if sshd is already enabled, the ports listed in the blueprint will be added to the list of ports that are already open.
If the distribution uses firewalld, you can specify services listed by firewall-cmd --get-services in a customizations.firewall.services section:
[customizations.firewall.services]
enabled = ["ftp", "ntp", "dhcp"]
disabled = ["telnet"]
Remember that the firewall.services names are different from the names in /etc/services.
Both are optional; if they are not used, leave them out or set them to an empty list []. If you only want the default firewall setup, this section can be omitted from the blueprint.
NOTE: The Google and OpenStack templates explicitly disable the firewall for their environment. This cannot be overridden by the blueprint.
Systemd services
This section can be used to control which services are enabled at boot time. Some image types already have services enabled or disabled in order for the image to work correctly, and this setup cannot be overridden. For example, the ami image type requires the sshd, chronyd, and cloud-init services; without them, the image will not boot. Blueprint services do not replace these services, but are added to the list of services already present in the templates, if any.
The service names are systemd service units. You may specify any systemd unit file accepted by systemctl enable, for example, cockpit.socket:
[customizations.services]
enabled = ["sshd", "cockpit.socket", "httpd"]
disabled = ["postfix", "telnetd"]
Distribution selection with blueprints
The blueprint now supports a distro field that will be used to select the distribution to use when composing images or depsolving the blueprint. If distro is left blank, the host distribution will be used. If you upgrade the host operating system, blueprints with no distro set will build using the new OS.
For example, a blueprint that will always build a fedora-32 image, no matter what version is running on the host:
name = "tmux"
description = "tmux image with openssh"
version = "1.2.16"
distro = "fedora-32"
[[packages]]
name = "tmux"
version = "*"
[[packages]]
name = "openssh-server"
version = "*"
Filesystem Support
Blueprints can be extended to provide filesystem support. Currently the mountpoint and the minimum partition size can be set. Custom mountpoints are currently only supported for RHEL 8.5 and RHEL 9.0. For other distributions, only the root partition is supported, with the size argument being an alias for the image size.
[[customizations.filesystem]]
mountpoint = "/var"
size = 2147483648
In addition to the root mountpoint, /, the following mountpoints and their sub-directories are supported:
/var
/home
/opt
/srv
/usr
/app
/data
Example Blueprint
The following blueprint example will:
- install the tmux, git, and vim-enhanced packages
- set the root ssh key
- add the users: widget, admin, plain, and bart
- add the groups: widget and students
name = "example-custom-base"
description = "A base system with customizations"
version = "0.0.1"
[[packages]]
name = "tmux"
version = "*"
[[packages]]
name = "git"
version = "*"
[[packages]]
name = "vim-enhanced"
version = "*"
[customizations]
hostname = "custombase"
[[customizations.sshkey]]
user = "root"
key = "A SSH KEY FOR ROOT"
[[customizations.user]]
name = "widget"
description = "Widget process user account"
home = "/srv/widget/"
shell = "/usr/bin/false"
groups = ["dialout", "users"]
[[customizations.user]]
name = "admin"
description = "Widget admin account"
password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31LeOUleVK/R/aeWVHVZDi26zAH.o0ywBKH9Tc0/wm7sW/q39uyd1"
home = "/srv/widget/"
shell = "/usr/bin/bash"
groups = ["widget", "users", "students"]
uid = 1200
[[customizations.user]]
name = "plain"
password = "simple plain password"
[[customizations.user]]
name = "bart"
key = "SSH KEY FOR BART"
groups = ["students"]
[[customizations.group]]
name = "widget"
[[customizations.group]]
name = "students"
[[customizations.filesystem]]
mountpoint = "/"
size = 2147483648
Current architecture
This diagram shows the overall architecture, and each sub-section goes into details about the three main components, which are run as independent services.
The metadata defining the service for App-Interface is kept upstream and open as templates for both the osbuild-composer and image-builder components. The tooling used to operate the service is largely open source and publicly accessible, e.g. qontract in the form of qontract-server and qontract-reconcile. The architecture documents in this section comply with the AppSRE contract.
Image Builder CRC API Architecture Document
Service Description
The image-builder API in CRC serves as the public API used either directly by customers or through the CRC UI. Through this API customers can create, manage and view image builds. The service in CRC is responsible for access management, quotas, rate-limiting, etc. In the future it may interact with other services in CRC in order to add value to the image build experience.
The actual image build requests are passed on to composer, which is outside the scope of this document.
Technology Stack
The service is written in Golang, and the list of dependencies can be found in go.mod.
The ubi8/go-toolset:latest container is used as a builder, and ubi8/ubi-minimal:latest to run the binary. The container images are located here: https://quay.io/repository/cloudservices/image-builder.
Components
The service consists of the image-builder app running in CRC and its backing database. If either of these is unavailable, the service does not work at all: new images cannot be built, and historical builds cannot be introspected. Already built images that may be in use by customers are unaffected; only their history and metadata can no longer be queried through the service.
Routes
The public route is /api/v1/image-builder; a detailed list can be found at https://cloud.redhat.com/beta/docs/api/image-builder.
Dependencies
Image builder has the following internal and external dependencies.
Internal
Image Builder relies on 3Scale to set the x-rh-identity header. It uses the header for authentication and quota application. It also uses the account number to map previously made compose requests to that account number.
External
- AWS RDS for data storage. See the section on state.
- Quay as a container registry. Without this, the service cannot be redeployed.
- Github as an upstream repository. Without this, the service cannot be redeployed.
- Gitlab, AWS EC2, and Openstack for upstream testing. Without these, changes to the service cannot land.
Service Diagram
See parent page.
Application Success Criteria
- Customers can queue image builds and view their state.
- Customers can introspect and manage existing builds.
- Quotas are applied according to policy to manage cost of running the service.
- The service is able to make its own functionality discoverable:
  - Enumerate supported features
  - Package search
SLOs
The image builder API has the following SLOs, but we aim to add more and make these stricter as we gain more experience from production. Our SLO targets are defined in App Interface.
Latency
The ratio of requests that are considered significantly fast. The aim is to make it possible to have a responsive UI. The exception is currently the /compose call, which is long-running and for which our SLO targets reflect a higher latency threshold. The UI must be implemented with this in mind.
Stability
The proportion of successful (or unsuccessful due to user error) /compose requests. The aim is for users to be able to reliably queue image builds, even if some retries are required.
State
The service depends on a PostgreSQL database; the default postgres12-rds-1 template is used. The database stores metadata about each build, making it possible to enumerate past builds and to enforce quota limits. If the state is lost, historical data would be lost, but users could still use their images if they have saved the necessary information. The quota calculations would be off, but in the worst case scenario customers would be able to build more images than they are meant to, which would not be a big problem.
Load Testing
Image Builder is currently being load tested on a weekly basis with failure thresholds reflecting the SLIs. The load tests happen against stage CRC. An example can be found here.
More information can be found upstream.
Capacity
The needed capacity might grow a little bit in all directions (DB and number of pods), but any growth should be slow. Limits have been set on memory and CPU usage for the currently running pods. The default Insights limits and quotas are used, which should be more than enough.
Image Builder Composer API Architecture Document
Service Description
The image-builder-composer API, routed via api.openshift.com, serves as a job queue for pending image builds as well as a metadata store for already built images. When an image build is queued via the API, it is turned into a set of jobs that are put on the job queue and that together do the necessary tasks to determine how the image should be built, build the image, upload the image to its destination, and possibly register or import it to its final format.
The image-builder-worker API, routed via api.openshift.com, serves as the other side of the job queue, where jobs can be dequeued to be executed and their results posted.
The actual jobs are executed by workers, which are outside the scope of this document.
Technology Stack
The service is written in Golang, and the list of dependencies can be found in go.mod.
The ubi8/go-toolset:latest container is used as a builder, and ubi8/ubi-minimal:latest to run the binary. The container images are located here: https://quay.io/repository/app-sre/composer.
Components
The service consists of the composer and the composer-worker apps running in an AppSRE managed cluster, and their backing database.
If either composer or the database is unavailable, the service does not work at all: new images cannot be built, and historical builds cannot be introspected. Already built images that may be in use by customers are unaffected; only their history and metadata can no longer be queried through the service.
If the composer-worker API is unavailable, new jobs can be queued and old ones can be queried, but workers will not be able to pick up new jobs until the API is back, and they will not be able to report back results correctly for jobs they finish while the API is down.
Routes
The public routes are /api/image-builder-composer/v2/ and /api/image-builder-worker/v1/; detailed lists can be found at https://api.openshift.com/api/image-builder-composer/v2/openapi and https://api.openshift.com/api/image-builder-worker/v1/openapi.
Dependencies
Composer has the following internal and external dependencies.
Internal
Composer relies on Red Hat SSO for authentication.
External
- AWS RDS for data storage. See the section on state.
- Quay as a container registry. Without this, the service cannot be redeployed.
- Github as an upstream repository. Without this, the service cannot be redeployed.
- Gitlab, AWS EC2, and Openstack for upstream testing. Without these, changes to the service cannot land.
Service Diagram
See parent page.
Application Success Criteria
- Image builds can be queued successfully
- Jobs can be dequeued successfully and correctly
- Jobs are tracked correctly
- The state of historical or in-flight builds can be queried and introspected successfully
State
The service depends on a PostgreSQL database; the default postgres12-rds-1 template is used. The database stores metadata about each build, making it possible to enumerate past builds, and it also functions as the job queue.
If the state is lost, historical data would be lost and pending image builds might never get scheduled, but users could still use their existing images if they have saved the necessary information. Data loss would not affect the ability to schedule new builds.
Load Testing
The Image Builder API in console.redhat.com is currently being load tested on a weekly basis with failure thresholds reflecting the SLIs. The load tests happen against stage CRC, which is backed by composer in api.stage.openshift.com. An example can be found here.
More information can be found upstream.
Capacity
The defaults described in App Interface, 1 cpu 512Mi memory per container running in the default three pods are sufficient, and our expectations are that this will remain sufficient for the next twelve months.
Image Builder Workers Architecture Document
Service Description
The workers are a fleet of (for now) Amazon EC2 instances, responsible for requesting pending jobs from the composer-worker API in AOC, performing jobs as instructed, and reporting back the results. The kinds of jobs are:
- determining build instructions for future image builds
- building images
- uploading images to their destination
- registering images in their target platform
Workers are stateless, apart from their caches, and are hence trivially restartable. To build images, workers need to be running in a VM with kernel access rather than in a container, and in order to upload the results, the workers need the right credentials for each of the possible targets. In order to request new jobs, the workers need to be issued RH credentials.
Technology Stack
The service is written in Golang, and the list of vendored dependencies can be found in go.mod. The underlying tool is written in Python 3.
Both the service and underlying tool are built as RPMs and installed into AMIs. Their dependencies are specified in their respective .spec files:
- https://github.com/osbuild/osbuild/blob/main/osbuild.spec
- https://github.com/osbuild/osbuild-composer/blob/main/osbuild-composer.spec
Components
The service consists of a fleet of workers. If no workers are available, no jobs will be built until workers are again available. Nothing is lost as jobs will stay in the queue, but everything will simply stall.
Routes
The workers expose no routes.
Dependencies
The workers have the following internal and external dependencies.
Internal
- Red Hat SSO for authentication. Without this, the worker cannot request new jobs.
External
- EC2. Without this the workers cannot run.
- EC2, GCP and Azure to upload the respective images. Without this image upload will fail.
- S3 to upload images for download by the user. Without this image upload will fail.
- Packer as a build tool. Without this, the service cannot be redeployed.
- TerraForm as a deployment orchestrator. Without this, the service cannot be redeployed.
- Github as an upstream repository. Without this, the service cannot be redeployed.
- Gitlab, AWS EC2, and Openstack for upstream testing. Without these, changes to the service cannot land.
Service Diagram
See parent page.
Application Success Criteria
The worker fleet is successful if:
- It scales on demand to avoid pending jobs having overly long queue times.
- The jobs are executed in a timely fashion.
- The job error rate (including image builds and uploads) is low.
State
Workers only have ephemeral state.
To optimize build times, workers keep a cache of previously (partially) built or downloaded artifacts. If this cache is lost, it will be recreated on demand with no other loss than extra running time.
Load Testing
Image Builder is currently being load tested on a weekly basis with failure thresholds reflecting the SLIs. The load tests happen against stage CRC. An example can be found here.
The load testing happens against stage, and tests the entire stack, including the workers.
Capacity
Increasing the rate at which workers can handle jobs is easily done by scaling up the ASG.
The workers are also limited by a 2-week image retention period in our cloud accounts. For GCP this means a maximum of 1000 images can be stored at any given time. For AWS the limit is the number of snapshots per region (100k) and the number of images that can be imported concurrently (20). The latter might pose a problem in the future.
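For illustration, a manual capacity bump could be done with the AWS CLI; the auto scaling group name and capacity below are hypothetical:
# Hypothetical ASG name; the real one comes from the Terraform deployment.
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name image-builder-workers \
    --desired-capacity 6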
Developer guide
In this section, you will find a description of the source code in osbuild
organization.
The following scheme describes how separate components communicate with each other:
In the very basic use case where osbuild-composer
is running locally, the "pool of workers" also lives on the user's host machine. The osbuild-composer
and osbuild-worker
processes are spawned by systemd. We don't support any other means of spawning these processes, as they rely on systemd to open sockets, create state directories etc. Additionally, osbuild-worker
spawns osbuild as a subprocess to create the image itself. The whole image building machinery is spawned from a user process, for example, composer-cli
.
Workflow
Git Workflow
Commits
Commits should be easy to read.
The commit message should explain clearly what it's trying to do and why. The following format is common but not required:
<module>: Topic of the commit
Body of the commit, describing the changes in more detail.
The <module>
should point to the area of the codebase (for instance tests
or tools
). The topic
should summarize what the commit is doing.
GitHub truncates the first line if it's longer than 65 characters, which is something to keep in mind as well.
A Fixes #issue-number
can be added to automatically link and close a related issue if it exists.
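For illustration, a (completely hypothetical) commit following this format could look like:
tests: add boot check for qcow2 images

Boot the built qcow2 image with libvirt and verify that it is reachable
over SSH, so that regressions in the image are caught early.

Fixes #1234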
Pull requests
A pull request should be one or more commits that form a coherent unit; it can be rebased, rewritten, or force-pushed until it's fit for merging.
Pull requests are usually opened against the main branch. They should be opened from a developer's own fork to avoid a lot of random branches on the origin.
Each pull request should be reviewed, and the CI should pass.
A pull request can be marked as a draft if it shouldn't be reviewed yet. But once it's ready, do not hesitate to add reviewers. If you're unsure who to add as a reviewer, ask in the IRC channel (#osbuild on Libera Chat).
Once a pull request is ready to be merged, it should be merged via the Rebase and merge
or Squash and merge
option. This avoids merge commits on the main branch.
Branches
Force-pushing to, or rebasing the main branch (or other release branches) is not allowed. Avoid directly pushing (fast-forward) to those branches as well. Commits can always be reverted by opening a new pull request.
Code style guidelines
This depends a little bit on the project and the language. Most of our projects have linters available, so do make use of those.
If unsure on how to format a specific statement, try to look for examples in the code.
General
- No trailing whitespace
- Avoid really long lines where possible (>120 characters)
- Single newline at the end of each file
Golang
This is easy: simply use Gofmt.
Python
Python code should follow the PEP 8 style guide.
Shell
ShellCheck is used to lint shell code.
Javascript
Projects like Cockpit Composer use eslint to enforce style.
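As a rough sketch, running such linters locally could look like this; the paths, and the choice of pycodestyle as a PEP 8 checker, are only illustrative and differ per repository:
# Golang: list files that are not gofmt-formatted
gofmt -l .
# Python: check PEP 8 conformance (pycodestyle is one possible checker)
python3 -m pycodestyle osbuild/
# Shell: lint scripts with ShellCheck
shellcheck tools/*.sh
# Javascript: run the project's eslint setup
npx eslint src/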
osbuild
A CLI tool for building OS images. It takes a manifest as input and produces an image as output. The manifest consists of:
- sources section
- pipeline
In our usual use case, which is tied to Fedora and RHEL and does not apply to other, non-RPM distributions, the sources section contains an org.osbuild.files section, which is a list of RPMs described by their name, hash, and URL for downloading. We do not support metalink at the moment.
This section is very often a source of build failures. This happens because we can only include a single link and RPM repositories are often unstable. Furthermore, we need to set a timeout for the curl download, because we want the build to time out eventually in case the RPMs are unavailable, but it sometimes fails on slow Internet connections as well.
The pipeline consists of a series of stages and ends with an assembler. A stage is our unit of filesystem tree modification and it is implemented as a standalone executable. For example, we have a stage for installing RPM packages, adding a user, enabling systemd service, or setting a timezone.
The difference between a stage and an assembler is that the former takes a read-write filesystem-tree and performs a certain modification to it, whereas the latter takes a read-only filesystem tree and produces an output artifact.
The pipeline can also contain one "nested" pipeline, which does not have an assembler; it is called the "build" pipeline.
High level goals
- reproducibility
- extensibility
The ideal case for building images would be that, given the same input manifest, the output image is always the same no matter what machine was used to build it, where "the same" means binary equivalent. The world of IT is, of course, not ideal, so we define reproducibility as functional equivalence (that is, the image behaves the same when built on different machines) and we limit the set of build machines to those running the same distribution, in the same version, and on the same architecture. That means if you want to build a Fedora 33 aarch64 image, you need a Fedora 33 aarch64 machine.
It is possible to run a RHEL pipeline on Fedora, for example, but we do not test it and therefore we can't promise it will produce the correct result.
The advantage of the stage/assembler model is that any user can extend the tool with their own stage or assembler.
How osbuild works in practice
The following subsections describe how OSBuild tries to achieve the outlined high level goals.
Manifest versions
OSBuild accepts two versions of manifests. Both manifests are plain JSON files. The following sections contain examples of both (note that comments are not allowed in JSON, so the examples below are not actually valid JSON).
Version 1
The version 1 manifest is built around the idea that an artifact is produced by downloading files from the Internet (e.g. RPMs), using them to build and modify a filesystem tree (using stages), and finally using a read-only version of the final filesystem tree as an input to an assembler which produces the desired artifact.
{
# This version contains 2 top-level keys.
# First sources, these get downloaded from a network and are available
# in the stages.
"sources": {},
# Second is a pipeline, which can optionally contain a nested "build"
# pipeline.
"pipeline": {
# The build pipeline is used to create a build container that is
# later used for building the actual OS artifact. This is mostly
# to increase reproducibility and host-guest separation.
# Also note that this is optional.
"build": {
"pipeline": {
"stages": [
{
"name": "",
"options": {}
},
{
"name": "",
"options": {}
}
],
"runner": ""
}
},
# The pipeline itself is a list of osbuild stages.
"stages": [
{
"name": "",
"options": {}
},
{
"name": "",
"options": {}
}
],
# And finally exactly one osbuild assembler.
"assembler": {
"name": "",
"options": {}
}
},
}
Version 2
Version 2 is more complicated because OSBuild needed to cover additional use cases, like an OSTree commit inside of an OCI container; in general, an artifact inside of another artifact. This is why it comes with multiple pipelines.
{
# This version has 3 top-level keys.
# The first one is simply a version.
"version": "2",
# The second one are sources as in version 1, but keep in mind that in this
# version, stages take inputs instead of sources because inputs can be both
# downloaded from a network and produced by a pipeline in this manifest.
"sources": {},
# This time the 3rd entry is a list of pipelines.
"pipelines": [
{
# A custom name for each pipeline. "build" is used only as an example.
"name": "build",
# The runner is again optional.
"runner": "",
"stages": [
{
# The "type" is same as "name" in v1.
"type": "",
# The "inputs" field is new in v2. You can specify what goes to
# the stage. Example inputs are RPMs and OSTree commits from the
# "sources" section, but also filesystem trees built by othe
# pipelines.
"inputs": {},
"options": {}
}
]
},
{
# Again only example name.
"name": "build-fs-tree",
# But this time the pipeline can use the previous one as a build pipeline.
# The name:<something> is a reference format in OSBuild manifest v2.
"build": "name:build",
"stages": []
},
{
"name": "do-sth-with-the-tree",
"build": "name:build",
"stages": [
{
"type": "",
"inputs": {
# This is an example of how to use the filesystem tree built by
# another pipeline as an input to this stage.
"tree": {
"type": "org.osbuild.tree",
"origin": "org.osbuild.pipeline",
"references": [
# This is a reference to the name of the pipeline above.
"name:build-fs-tree"
]
}
},
"options": {}
}
]
},
{
# In v2 the assembler is a pipeline as well.
"name": "assembler",
"build": "name:build",
"stages": []
}
]
}
Components of osbuild
OSBuild is designed as a set of loosely coupled or independent components. This subsection describes each of them separately so that the following section can describe how they work together.
Object Store
The Object Store is a directory (and also the class representing it) that contains multiple filesystem trees. Each filesystem tree lives in a directory whose name is the hash of the pipeline that produced it. In OSBuild, a user can specify a "checkpoint", which stores a particular filesystem tree inside the Object Store.
Build Root
It is a directory in which OSBuild modules (stages and assemblers) are executed. The directory contains a full operating system, composed of multiple things:
- Executables and libraries needed for building the OS artifact (these are either from the host or created in a build pipeline).
- Directory where the resulting filesystem tree resides.
- A few directories bind-mounted directly from the host system (like /dev)
- API sockets for communication between the stage running inside a container and the osbuild process running outside of it (directly on the host).
Sources
Sources are artifacts that are downloaded from the Internet. For example, generic files downloaded with curl
, or OSTree commits downloaded using libostree
.
Inputs
Inputs are a generalization of the concept of sources, but an "input" can be either downloaded, as sources are, or generated using an osbuild pipeline. That means one pipeline can be used as an input for another pipeline, so you can have an artifact inside of an artifact (for example an OSTree commit inside of a container).
APIs
OSBuild allows for bidirectional communication from the build container to the osbuild process running on the host system. It uses Unix-domain sockets and JSON-based communication (jsoncomm
) for this purpose. Examples of available APIs:
- osbuild - provides basic osbuild features like passing arguments to the stage inside the build container or reporting exceptions from the stage back to the host
- remoteloop - helps with setting up loop devices on the host and forwarding them to the container
- sources - runs a source module and returns the result
What happens during simplified osbuild run
This section puts the above concepts into context. It does not aim to describe all the possible code paths. To understand osbuild
properly, you need to read the source code, but it should help you get started.
During a single osbuild
run, this is what usually happens:
- Preparation
- Validate the manifest schema to make sure it is either v1 or v2 manifest
- The Object Store is instantiated, either from an empty directory or from an already existing one, which might contain cached filesystem trees.
- Processing the manifest
- Download sources
- Run all pipelines sequentially
- Processing a pipeline (one of N)
- Check the Object Store for cached filesystem trees and start from there if it already contains a partially built artifact
- Processing a module (stage or assembler)
- Create a BuildRoot, which means initializing a bwrap container, mounting all necessary directories, and forwarding API sockets.
- From the build container, use the osbuild API to get arguments and run the module
- If an assembler is present in the manifest, run it and store the resulting artifact in the output directory (a concrete invocation is sketched below)
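To make this concrete, here is a minimal sketch of such a run, reusing the flags shown in the stage-testing section later in this guide; the manifest path and the checkpointed/exported ID are illustrative:
# Use a persistent store so finished trees are cached and can be reused.
# The ID can be obtained with `osbuild --inspect <manifest>` as shown later.
sudo osbuild \
    --store store/ \
    --checkpoint "$STAGE_OR_PIPELINE_ID" \
    --export "$STAGE_OR_PIPELINE_ID" \
    --output-directory output/ \
    manifest.json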
Issues that do not fit into the high level goals
Bootstrapping the build environment
The "build" pipeline was introduced to improve reproducibility. Ideally, given a build pipeline, one would always get the same filesystem tree. But, to create the first filesystem tree, you need some tools. So, where go you get them from? Of course from the host operating system (OS). The problem with getting tools from the host OS this is that the host can affect the final result.
We've already had this issue many times, because most of the usual CLI tools were not created with reproducibility in mind.
The struggle with GRUB
The standard tooling for installing GRUB does not fit our stage/assembler concept because it wants to modify the filesystem tree and create the resulting artifact at the same time. As a result, we have our own reimplementation of these tools.
Running OSBuild from sources
It is not strictly required to run OSBuild installed from an RPM package. However, if you attempt to run osbuild from the command line in combination with an SELinux stage in the manifest, it will most likely fail. For example:
$ python3 -m osbuild
The cause of the error is a lack of proper SELinux labelling of the python3 executable and of all stages and assemblers. Creating two additional files resolves the problem:
- A new entrypoint, which will soon have the right SELinux label; let's call it osbuild-cli:
#!/usr/bin/python3
import sys

# Thin wrapper so this executable (not the system python3) can carry the
# osbuild SELinux label after relabelling.
from osbuild.main_cli import osbuild_cli as main

if __name__ == "__main__":
    r = main()
    sys.exit(r)
- A script to relabel all the files that need it:
#!/bin/bash
# Relabel the entrypoint and all stage/assembler modules with the osbuild SELinux label.
LABEL=$(matchpathcon -n /usr/bin/osbuild)
echo "osbuild label: ${LABEL}"
chcon "${LABEL}" osbuild-cli
find . -maxdepth 2 -type f -executable -name 'org.osbuild.*' -print0 |
    while IFS= read -r -d '' module; do
        chcon "${LABEL}" "${module}"
    done
Now run the script and use the entrypoint to execute OSBuild from the git checkout.
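For example, a run from the git checkout might then look like this; it assumes osbuild's --libdir option for pointing it at the checkout, and the exported name "assembler" only mirrors the example v2 manifest above:
# Make the entrypoint executable and run it from the checkout root.
chmod +x osbuild-cli
sudo ./osbuild-cli --libdir . --store store/ --output-directory output/ --export assembler manifest.json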
Stage development
Stage unit testing
To update a stage unit test, modify the appropriate test/data/stages/<stage_suffix>/b.mpp.json.
Regenerate testing manifests:
make test-data
You can run osbuild
stage test only for a specific stage:
sudo python3 -m pytest test/run/test_stages.py -k test_<stage_suffix>
Based on the result of the unit test, adjust test/data/stages/<stage_suffix>/diff.json.
Inspecting filesystem tree modified by the stage using unit test manifest
# needed only first time
mkdir -p store/
mkdir -p output/
rm -rf rpmbuild
make rpm
sudo dnf install -y rpmbuild/RPMS/noarch/*.rpm
sudo rm -rf store/*
# This command assumes that the latest pipeline stage, which you want to inspect, has index "1".
# If this is not true, adjust the index in the `jq .pipeline.stages[1].id` call below.
STAGE_ID=$(osbuild --inspect test/data/stages/<stage_suffix>/b.json | jq .pipeline.stages[1].id | tr -d '"')
sudo osbuild --store store/ --checkpoint "$STAGE_ID" --export "$STAGE_ID" --output-directory output/ test/data/stages/<stage_suffix>/b.json
The modified filesystem tree will be located in store/objects/<stage_id>/
Special case - the stage requires additional dependency
If the additional dependency is not present in the build pipeline of the stage test manifest, you'll have to fix it. Modify the appropriate manifest imported in the build pipeline of the b.mpp.json file. This may be e.g. the f34-build.json present in test/data/manifests/. Modify its "mpp" version, e.g. test/data/manifests/f34-build.mpp.json, and run make test-data in the git checkout root.
The osbuild CI runs unit tests inside a special osbuild-ci container. If the stage imports a 3rd-party Python module, then you will have to make sure that this Python module is present in the container image. Adding the dependency to the build pipeline covers only the case when stages are tested, but not other types of unit testing. In order to extend the osbuild-ci image, you need to submit a Pull Request against the OSBuild Containers repository.
osbuild-composer
It is a web service for building OS images. The core of osbuild-composer, which is common to all APIs, is osbuild manifest generation and job queuing. If an operating system is to be supported by osbuild-composer, it needs manifest generation code in the internal/distro directory. So far, we only focus on RPM-based distributions, such as Fedora and RHEL. The queuing mechanism is under heavy development at the moment.
Interfacing with dnf package manager
We use our custom wrapper for dnf, which we simply call dnf-json. Its interface works like this:
- Stdin - takes a JSON object
- Stdout - returns a JSON object
- The return code is used only for dnf-json internal errors, not for errors in the operation specified on the input; those errors are reported in the returned JSON object (a sketch of the call pattern follows below).
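For illustration, a minimal sketch of the call pattern; the request fields below are purely hypothetical (the real schema is defined by dnf-json itself) and the install path may differ:
# Feed a JSON request on stdin, read the JSON response from stdout.
# "command" and "arguments" are made-up field names used for illustration only.
echo '{"command": "depsolve", "arguments": {"package-specs": ["tmux"]}}' \
    | /usr/libexec/osbuild-composer/dnf-json \
    | jq .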
Local API - Weldr
This API comes from the Lorax-composer project. osbuild-composer was created as a drop-in replacement for Lorax, which influenced many design decisions. It uses a Unix-domain socket, so it is meant for local usage only. There are two clients:
- composer-cli
- cockpit-composer (branded as Image Builder in the Cockpit console)
Activate this API by invoking systemctl start osbuild-composer.socket. Systemd will create a socket at /run/weldr/api.socket.
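Because it is plain HTTP over a Unix-domain socket, you can also poke at the API directly with curl, for example querying the status route (assuming the /api/status path used by the CLI clients):
$ sudo curl --unix-socket /run/weldr/api.socket http://localhost/api/status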
Remote APIs - Cloud and Koji
Both are under heavy development.
Latest RPM builds
While developing osbuild and osbuild-composer, it is convenient to download the latest RPM builds directly from upstream. The repositories in the osbuild organization don't use any automation from Copr or Packit. Instead, the RPMs are built directly in the Jenkins CI and stored in AWS under the commit hash, which allows anyone to download precisely the version built from a desired commit.
The URL is specified in the mockbuild.sh
scripts in the osbuild and osbuild-composer repositories:
And the final resulting URL is displayed in the Jenkins output (available only from Red Hat VPN).
Common trap: If you click on a link to a repo, such as:
you will get HTTP 403 because that's a directory and we don't allow directory listing. If you append a known file path, such as repodata/repomd.xml
you will see that the repo is there:
Testing strategy
Let me start with a quote:
As the team obsessed with immutable test dependencies, how could we use ..
One osbuild developer in one PR fixing one more piece of infrastructure which could still change.
TODO: what do we test in each repo
TODO: rpmci, rpmrepo
osbuild-composer
This section provides a basic summary of the various types of testing done for osbuild-composer
. Detailed information about testing can be found in the upstream repository.
Unit tests
There is pretty heavy mocking in the osbuild-composer codebase. The HTTP API is unit-tested without any network communication (there is no socket); only the HTTP requests and responses are tested.
Integration tests
These test cases live under test/cases and each of them is a standalone script. Some of them invoke additional binaries which live under cmd, unless specified otherwise.
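Since each test case is a standalone script, running one locally is roughly a matter of executing it from a git checkout on a disposable test machine (most cases additionally need credentials or environment variables, as noted below), for example:
$ sudo ./test/cases/qemu.sh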
- api.sh [aws|azure|gcp] - tests the Cloud API (running at localhost:443)
  - Provisions osbuild-composer and a locally running remote worker.
  - Creates a compose request and uploads the image to the specified cloud provider. Currently AWS, Azure and GCP are supported.
  - The uploaded image is used for a VM instance in the respective cloud environment, booted, and connected to via SSH. This is currently tested only for AWS and GCP.
  - Requires credentials for the respective cloud provider to work properly.
- aws.sh - uses osbuild-composer "the way we expect our customers to use it". That means it provisions osbuild-composer and uses the Weldr API to build an AMI image and upload it to EC2. Then it uses the aws CLI tool to spawn a VM from the image and makes sure it boots and can be accessed.
  - Requires AWS credentials
- base_tests.sh - runs binaries implemented as part of the osbuild-composer codebase in Golang. It provisions osbuild-composer and then runs the tests in a loop.
  - osbuild-composer-cli-tests - Weldr API tests using composer-cli
    - Executing the composer-cli utility
    - Invoking multiple image builds
  - osbuild-weldr-tests - Weldr API tests using the golang library from internal/client
    - These live directly in the internal directory, which is a bit odd given that all other tests live under cmd/, but there might be a reason for this.
    - They invoke a build of a qcow2 image
  - osbuild-dnf-json-tests - these make sure the interface to dnf still works
    - This binary will execute dnf-json multiple times and it will also run multiple dnf depsolving tasks in parallel. It is possible that it will require a high amount of RAM.
    - My guess would be at least 2GB of memory for a VM running this test.
  - osbuild-auth-tests - make sure the TLS certificate authentication works as expected for the koji API and worker API sockets.
    - A certificate authority is created for these tests and the files are stored in /etc/osbuild-composer-test/ca
    - The certificates live in the standard configuration directory: /etc/osbuild-composer
    - Multiple certificates are created:
      - For osbuild-composer itself (let's say a "server" certificate)
      - For osbuild-worker
      - For a client application, in this case the test binary
      - For kojihub
- image_tests.sh - possibly the most resource-hungry test case. It builds an image for all supported image types for all supported distributions on all supported architectures (note that every distro has a different set of arches and arches have different sets of supported types, e.g. there is no s390x image for AWS because there is no such machine). The "test cases" are defined in test/cases/manifests and they contain a boot type (where to spawn the VM), a compose request (what to ask the Weldr API for), and finally the expected manifest. osbuild-composer should generate the same manifest, build the image successfully, optionally upload it to a cloud provider, boot the image, and finally verify it is running.
  - Requires AWS, OpenStack, and Azure credentials
- koji.sh - runs a koji instance in a container. It sets up certificates and a Kerberos KDC because osbuild-composer uses Kerberos to authenticate with Koji.
- ostree.sh - creates an OSTree commit, boots it, then creates a commit with an upgrade on top of the previous commit and makes sure the VM can upgrade to the new one.
  - Uses libvirt to run the VM
- qemu.sh - creates a qcow2 image and boots it using libvirt.
Leaking resources
The cloud-cleaner binary was created to clean up all artifacts (like images, but also registered AMIs, security groups, etc.) that could be left behind. Not all executables in our CI have proper error handling and clean-up code, and what is even worse, if Jenkins fails and takes down all running jobs, it is possible that the clean-up code will not run even if it is implemented.
Possibly leaking resources:
- api.sh test case:
  - Image uploaded to AWS, Azure or GCP
- aws.sh test case:
  - Image uploaded to EC2
  - VM running in EC2
Releasing
This guide describes the process of releasing osbuild and osbuild-composer to upstream, into Fedora and CentOS Stream.
Clone the release helpers
Go to the maintainer-tools repository, clone it, and run pip install -r requirements.txt to get all the dependencies needed to execute the release.py and update-distgit.py scripts.
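For example, assuming the repository lives under the osbuild GitHub organization:
$ git clone https://github.com/osbuild/maintainer-tools.git
$ cd maintainer-tools
$ pip install -r requirements.txt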
It's also advised to set a GitHub personal access token, otherwise you might run into API usage quotas. Go to Personal access tokens on GitHub and create a new token with scope public_repo
. Now, create a new packit user configuration at ~/.config/packit.yaml
and paste there the following content:
authentication:
github.com:
token: [YOUR_GITHUB_PERSONAL_ACCESS_TOKEN]
Upstream release
Note: Upstream releases are done automatically on a fortnightly alternating schedule, meaning one week we release osbuild and then the next week we release osbuild-composer.
Manual upstream release
Navigate to your local repository in your terminal and call the release.py
script. It will interactively take you through the following steps:
-
Gather all pull request titles merged to
main
since the latest release tag -
Create a draft of the next release tag
While writing the commit message, keep in mind that it needs to conform to both Markdown and git commit message formats; have a look at the commit message for one of the recent releases to get a clear idea of what it should look like.
-
Push your signed git tag to
main
From here on a GitHub composite action will take over and
- Create a GitHub release based on the tag (version and message)
- Bump the version in
osbuild.spec
orosbuild-composer.spec
(and potentiallysetup.py
) - Commit and push this change to
main
so the version is already reflecting the next release
Fedora release
We use packit (see .packit.yml
in the osbuild or osbuild-composer repository respectively or the official packit documentation) to automatically push new releases directly to Fedora's dist-git.
Then our fedora-bot takes over and performs the remaining steps:
- Get a Kerberos ticket by running kinit $USER@FEDORAPROJECT.ORG
- Call fedpkg build to schedule Koji builds for each active Fedora release (or: dist-git branch)
- Update Bodhi with the latest release (a manual equivalent of these steps is sketched below)
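If these steps ever need to be performed manually, the equivalent commands are roughly the following; the branch name is only illustrative:
kinit $USER@FEDORAPROJECT.ORG
fedpkg clone osbuild && cd osbuild
fedpkg switch-branch f40   # repeat for each active Fedora branch
fedpkg build
fedpkg update              # create the Bodhi update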
CentOS Stream / RHEL releases
If you are a Red Hat employee, please continue reading about this in our internal release guide.
Spreading the word on osbuild.org
The last step of releasing a new version is to create a new post on osbuild.org. Just open a PR in osbuild/osbuild.github.io. You can find a lot of inspiration in existing release posts.
Glossary
Term | Explanation |
---|---|
AMI | Amazon Machine Image (image type) |
Blueprint | Definition of customizations in the image |
Compose | Request from the user that produces one or more images. Images in a single compose are, in theory, the same, but for different platforms, such as Azure or AWS. In practice they are slightly different because every cloud platform requires a different package set and system configuration. osbuild-composer running the Weldr API can only create one image at a time, so one compose maps directly to one image build. It can map to multiple image builds when used with other APIs, such as the Koji API. |
Composer API | HTTP API meant as publicly accessible (over TCP). It was created specifically for osbuild-composer and does not support some Weldr features like blueprint management, but adds new features like building different distros and architectures. |
GCP | Google Cloud Platform |
Image Build | One request from osbuild-composer to osbuild-worker. Its result is a single image. |
Image Type | Image file format usually associated with a specific use case. For example: AMI for AWS, qcow2 for OpenStack, etc. |
Manifest | Input for the osbuild tool. It should be a precise definition of an image. See https://www.osbuild.org/man/osbuild-manifest.5 for more information. |
osbuild | Low-level tool for building images. Not meant for end-user usage. |
osbuild-composer | HTTP service for building OS images. |
OSTree | Base technology for immutable OS images: Fedora IoT and RHEL Edge |
Repository overrides | osbuild-composer uses its own set of repository definitions. In case a user wants to use custom repositories, "overrides" can be created in /etc/osbuild-composer |
Weldr API | Local HTTP API used for communication between composer-cli/cockpit-composer and osbuild-composer. It comes from the lorax-composer project. |