Build Reliable Operating System Images
Use Image Builder to create images of your Linux operating system in a dependable way, isolating image creation from your host operating system and producing a well-defined image ready to be deployed.
Image Builder provides both the tools to build custom operating system images and applications to deploy hosted image-building services. At its core, the osbuild project takes on the responsibility of assembling custom operating system images according to the precise needs of the user. The osbuild-composer project builds on top of osbuild and implements an image creation service that can be deployed as a hosted service.
Image Builder on console.redhat.com
Image Builder is available as a managed service as part of Red Hat's Hybrid Cloud Console. It is used during the development of RHEL and Fedora, but is also used by Red Hat's customers and anyone with a Red Hat developer license. It builds customized images of RHEL and CentOS Stream for many footprints (traditional, ostree-based), targets (public clouds, bare metal, etc.), and architectures (x86, ARM, IBM).
Image Builder on premises
Image Builder can also be deployed and self-hosted. In its most basic setup, it will use one osbuild-composer service and one worker running osbuild. The architecture you can build images for will depend on the architecture of the worker, i.e. on an x86 worker you can only build x86 images. You can leverage blueprints to customize your images.
How to contribute
All of our code is open source and on GitHub.
Our developer guide is a great starting point to learn about our workflow, code style and more!
How to reach out to us
- Matrix: #image-builder on fedoraproject.org
- Mailing List: image-builder@redhat.com
- Issues and pull requests: github.com/osbuild
Service architecture
This service is open source, so all of its code is inspectable and can be contributed to.
Click each component in this diagram to get to the hash of the source code currently running in production.
The metadata defining the service for App-Interface is kept upstream and open as templates for both the osbuild-composer and image-builder components. The tooling to operate the service is in large part open source and publicly accessible, e.g. qontract in the form of qontract-server and qontract-reconcile. The architecture documents in this section comply with the AppSRE contract.
How to contribute
Our developer guide is a great starting point to learn about our workflow, code style and more!
If you want to contribute to our frontend or backend, guides are available on how to get the respective stack set up for development.
How to reach out to us
- Matrix: #image-builder on fedoraproject.org
- Mailing List: image-builder@redhat.com
- Issues: Service, On premises, github.com/osbuild
- Pull requests: github.com/osbuild
How open is this service?
🟢 Open assets
- 🟢 The source code is open.
- 🟢 Unit tests are open.
- 🟢 Performance tests are open.
- 🟢 Functional tests are open.
- 🟢 The dependencies are open source.
- 🟢 Deployment metadata is open. [1] [2]
🟢 Contribution workflow
- 🟢 External contributors can follow the same workflow as team members.
- 🟢 The workflow is publicly documented.
- 🟢 Regular contributors can trigger CI.
- 🟢 External contributions are eagerly reviewed.
🟠 Issue tracking and planning
🟢 Documentation
🟠 Communication
- 🟢 There is a public mailing list.
- 🔴 There are public meetings.
🟠 Open Site Reliability Engineering
- 🟢 There is an open status page.
- 🔴 Logging, monitoring, and alerting is open.
- 🔴 Incident management is open.
Image Builder CRC API Architecture Document
Service Description
The image-builder API in CRC serves as the public API used either directly by customers or through the CRC UI. Through this API customers can create, manage and view image builds. The service in CRC is responsible for access management, quotas, rate-limiting, etc. In the future it may interact with other services in CRC in order to add value to the image build experience.
The actual image build requests are passed on to composer, which is outside the scope of this document.
Technology Stack
The service is written in Golang, and the list of dependencies can be found in go.mod.
The ubi8/go-toolset:latest container is used as a builder, and ubi8/ubi-minimal:latest to run the binary. The container images are located here: https://quay.io/repository/cloudservices/image-builder.
Components
The service consists of the image-builder app running in CRC and its backing database. If either of these is unavailable, the service does not work at all: new images cannot be built, and historical builds cannot be introspected. Already built images that may be in use by customers are unaffected; only their history and metadata can no longer be queried through the service.
Routes
The public route is /api/v1/image-builder; a detailed list can be found at https://console.redhat.com/docs/api/image-builder.
Dependencies
Image builder has the following internal and external dependencies.
Internal
Image Builder relies on 3scale to set the x-rh-identity header. It uses the header for authentication and quota application. It also uses the account number to map previously made compose requests to that account.
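As an illustration, the identity is a Base64-encoded JSON document carried in that header. The sketch below uses a made-up account number and a minimal field set; the exact identity fields set by 3scale in production are not defined by this document.
# Encode and decode an example x-rh-identity value (illustrative fields only)
identity='{"identity": {"account_number": "000000", "type": "User"}}'
encoded=$(echo -n "$identity" | base64 -w0)
echo "x-rh-identity: $encoded"
echo "$encoded" | base64 -d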
External
- AWS RDS for data storage. See the section on state.
- Quay as a container registry. Without this, the service cannot be redeployed.
- Github as an upstream repository. Without this, the service cannot be redeployed.
- Gitlab, AWS EC2, and Openstack for upstream testing. Without these, changes to the service cannot land.
Service Diagram
See parent page.
Application Success Criteria
- Customers can queue image builds and view their state.
- Customers can introspect and manage existing builds.
- Quotas are applied according to policy to manage cost of running the service.
- The service is able to make its own functionality discoverable:
  - Enumerate supported features
  - Package search
SLOs
The image builder API has the following SLOs, but we aim to add more and make these stricter as we gain more experience from production. Our SLO targets are defined in App Interface.
Latency
The ratio of requests that are considered significantly fast. The aim is to make it possible to have a responsive UI. The exception is currently the /compose call, which is long-running, so our SLO targets reflect a higher latency threshold. The UI must be implemented with this in mind.
Stability
The proportion of successful (or unsuccessful due to user error) /compose requests. The aim is for users to be able to reliably queue image builds, even if some retries are required.
State
The service depends on a PostgreSQL database; the default postgres12-rds-1 template is used. The database stores metadata about each build, making it possible to enumerate past builds and to enforce quota limits. If the state is lost, historical data would be lost, but users could still use their images if they have saved the necessary information. The quota calculations would be off, but in the worst-case scenario customers would be able to build more images than they are meant to, which would not be a big problem.
Load Testing
Image Builder is currently being load tested on a weekly basis with failure thresholds reflecting the SLIs. The load tests happen against stage CRC. An example can be found here.
More information can be found upstream.
Capacity
The needed capacity might grow a little bit in all directions (DB and number of pods), but any growth should be slow. Currently pods are running, and limits have been set on memory or CPU usage. The default insights limits and quotas are used, which should be more than enough.
Image Builder Composer API Architecture Document
Service Description
The image-builder-composer API, routed via api.openshift.com, serves as a job queue for pending image builds as well as a metadata store for already built images. When an image build is queued via the API it is turned into a set of jobs that are put on the job queue and together do the necessary tasks to determine how the image should be built, build the image, upload the image to its destination, and possibly register or import it to its final format.
The image-builder-worker API, also routed via api.openshift.com, serves as the other side of the job queue, where jobs can be dequeued to be executed and their results posted.
The actual jobs are executed by workers, which are outside the scope of this document.
Multitenancy
osbuild-composer deployments in both api.openshift.com and api.stage.openshift.com support running builds for multiple tenants. Jobs created by a tenant can be picked up only by workers belonging to the same tenant.
The tenant of an API request is currently determined from the JWT token that the API caller used. Specifically, the implementation extracts the tenant ID from the rh-org-id or account_id fields. This is defined in the deployment template.
Internally in osbuild-composer, the tenant ID is prefixed with org- and saved to the jobqueue as a channel; see composer's source code.
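A minimal sketch of the mapping described above (the org ID value here is made up; the org- prefix and channel naming follow composer's behaviour as described in this section):
# Tenant ID taken from the rh-org-id (or account_id) claim of the caller's JWT
org_id="12345"
# composer prefixes it and uses the result as the jobqueue channel for this tenant
channel="org-${org_id}"
echo "${channel}"   # => org-12345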
Technology Stack
The service is written in Golang, and the list of dependencies can be found in go.mod.
The ubi8/go-toolset:latest container is used as a builder, and ubi8/ubi-minimal:latest to run the binary. The container images are located here: https://quay.io/repository/app-sre/composer.
Components
The service consists of the composer and the composer-worker apps running in an AppSRE managed cluster, and their backing database.
If either composer or the database is unavailable, the service does not work at all: new images cannot be built, and historical builds cannot be introspected. Already built images that may be in use by customers are unaffected; only their history and metadata can no longer be queried through the service.
If composer-worker is unavailable, new jobs can be queued and old ones can be queried, but workers will not be able to pick up new jobs until the API is back, and they will not be able to report back results correctly for jobs they finish while the API is down.
Routes
The public routes are /api/image-builder-composer/v2/ and /api/image-builder-worker/v1/; detailed lists can be found at https://api.openshift.com/api/image-builder-composer/v2/openapi and https://api.openshift.com/api/image-builder-worker/v1/openapi.
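For a quick look at the API surface, the published OpenAPI documents can be fetched directly. This assumes the openapi endpoints are reachable without authentication; if that assumption does not hold for your account, add an Authorization header with a valid token.
$ curl -s https://api.openshift.com/api/image-builder-composer/v2/openapi
$ curl -s https://api.openshift.com/api/image-builder-worker/v1/openapi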
Dependencies
Composer has the following internal and external dependencies.
Internal
Composer relies on Red Hat SSO for authentication.
External
- AWS RDS for data storage. See the section on state.
- Quay as a container registry. Without this, the service cannot be redeployed.
- Github as an upstream repository. Without this, the service cannot be redeployed.
- Gitlab, AWS EC2, and Openstack for upstream testing. Without these, changes to the service cannot land.
Service Diagram
See parent page.
Application Success Criteria
- Image builds can be queued successfully
- Jobs can be dequeued successfully and correctly
- Jobs are tracked correctly
- The state of historical or in-flight builds can be queried and introspected successfully
State
The service depends on a PostgreSQL database; the default postgres12-rds-1 template is used. The database stores metadata about each build, making it possible to enumerate past builds, and it also functions as the job queue.
If the state is lost, historical data would be lost, and pending image builds might never get scheduled, but users could still use their existing images if they have saved the necessary information. Data loss would not affect the ability to schedule new builds.
Load Testing
The Image Builder API in console.redhat.com is currently being load tested on a weekly basis with failure thresholds reflecting the SLIs. The load tests happen against stage CRC, which is backed by composer in api.stage.openshift.com. An example can be found here.
More information can be found upstream.
Capacity
The defaults described in App Interface (1 CPU and 512Mi of memory per container, running in the default three pods) are sufficient, and our expectation is that this will remain sufficient for the next twelve months.
Image Builder Workers Architecture Document
Service Description
The workers are a fleet of (for now) Amazon EC2 instances, responsible for requesting pending jobs from the composer-worker API in AOC, performing the jobs as instructed, and reporting back the results. The kinds of jobs are:
- determining build instructions for future image builds
- building images
- uploading images to their destination
- registering images in their target platform
Workers are stateless, apart from their caches, and hence trivially restartable. To build images, workers need to run in a VM with kernel access rather than in a container, and in order to upload the results the workers need the right credentials for each of the possible targets. In order to request new jobs, the workers need to be issued RH credentials.
Technology Stack
The service is written in Golang, and the list of vendored dependencies can be found in go.mod. The underlying tool is written in Python 3.
Both the service and underlying tool are built as RPMs and installed into AMIs. Their dependencies are specified in their respective .spec files:
- https://github.com/osbuild/osbuild/blob/main/osbuild.spec
- https://github.com/osbuild/osbuild-composer/blob/main/osbuild-composer.spec
Components
The service consists of a fleet of workers. If no workers are available, no jobs will be built until workers are again available. Nothing is lost as jobs will stay in the queue, but everything will simply stall.
Routes
The workers expose no routes.
Dependencies
The workers have the following internal and external dependencies.
Internal
- Red Hat SSO for authentication. Without this, the worker cannot request new jobs.
External
- EC2. Without this the workers cannot run.
- EC2, GCP and Azure to upload the respective images. Without this image upload will fail.
- S3 to upload images for download by the user. Without this image upload will fail.
- Packer as a build tool. Without this, the service cannot be redeployed.
- Terraform as a deployment orchestrator. Without this, the service cannot be redeployed.
- Github as an upstream repository. Without this, the service cannot be redeployed.
- Gitlab, AWS EC2, and Openstack for upstream testing. Without these, changes to the service cannot land.
Service Diagram
See parent page.
Application Success Criteria
The worker fleet is successful if:
- It scales on demand to avoid pending jobs having overly long queue times.
- The jobs are executed in a timely fashion.
- The job error rate (including image builds and uploads) is low.
State
Workers only have ephemeral state.
To optimize build times, workers keep a cache of previously (partially) built or downloaded artifacts. If this cache is lost, it is recreated on demand, with no loss other than extra running time.
Load Testing
Image Builder is currently load tested on a weekly basis with failure thresholds reflecting the SLIs. The load tests run against stage CRC and exercise the entire stack, including the workers. An example can be found here.
Capacity
Increasing the rate at which workers can handle jobs is easily done by scaling up the ASG.
The workers are also limited by a 2-week image retention period in our cloud accounts. For GCP this means a maximum of 1000 images can be stored at any given time. For AWS, the limits are the snapshots-per-region limit (100k) and the number of images that can be imported concurrently (20). The latter might pose a problem in the future.
Image Builder Koji integration
This document describes how various instances of the Koji build system can and do integrate with the Image Builder service.
Architecture
osbuild-composer can integrate with a Koji instance as an external Content Generator using the koji-osbuild plugin. The overview of the integration is described in the koji-osbuild project README.
In short, a Koji instance integrates directly with the osbuild-composer API, usually as a separate tenant with a dedicated set of workers.
Technology Stack
The koji-osbuild plugin is implemented in Python, and the list of dependencies can be found in the SPEC file.
Building images via Koji integration
The koji-osbuild plugin allows one to submit image builds via the Koji Hub API using the osbuildImage task. The accepted argument schema of the osbuildImage task is described in the plugin implementation. The koji-osbuild plugin processes the request and submits a new compose request using the osbuild-composer Cloud API. The plugin always sets the koji property in the compose request, signaling to osbuild-composer that the request is coming from a Koji plugin.
Images built as part of a compose request submitted via the Koji plugin are always implicitly uploaded to the respective Koji instance. Since version 10 of the koji-osbuild plugin, images can also be uploaded directly to the appropriate cloud environment, in addition to being uploaded to Koji. More details are below in the Cloud upload section.
The koji-osbuild plugin also supports specifying all image customizations supported by the osbuild-composer Cloud API.
There are currently two easy ways to trigger osbuildImage tasks in Koji:
- koji-cli - the command line client for interacting with Koji. The prerequisite is to install the koji-osbuild-cli plugin. For more information, run koji osbuild-image --help.
- Pungi - a distribution compose tool. Pungi interacts directly with the Koji Hub API and is able to submit osbuildImage tasks as part of a distribution compose. The details of how to configure Pungi to trigger image builds are described in the project documentation.
Cloud upload
Prerequisites
- koji-osbuild version >= 10
- osbuild-composer version >= 58
Details
Images built via the Koji integration can be automatically uploaded to the appropriate cloud environment, in addition to the Koji instance. In order for this to happen, one must provide upload_options when using the osbuildImage task and the integrated osbuild-composer instance must be configured appropriately to be able to upload to the respective cloud environments.
Currently supported upload_options are:
- AWS EC2
- AWS S3
- Azure (as an image)
- GCP
- Container registry
Please note that each image type can be uploaded only to its respective cloud target, represented by upload_options (e.g. the ami image can be uploaded only to AWS EC2, the gce image only to GCP, etc.).
The allowed upload_options schema is defined in the koji-osbuild Hub plugin and currently matches the osbuild-composer Cloud API UploadOptions.
If the compose request contains multiple image requests (meaning that multiple images will be built), the provided upload_options will be used as is for all images (with all its consequences).
All the necessary data to locate the image in the cloud is attached by the koji-osbuild plugin to the image build task in Koji as a compose-status.json file. Below is an example of such a file:
{
"image_statuses": [
{
"status": "success",
"upload_status": {
"options": {
"ami": "ami-02e34403c421dfc17",
"region": "us-east-1"
},
"status": "success",
"type": "aws"
}
}
],
"koji_build_id": 1,
"koji_task_id": null,
"status": "success"
}
Note: Starting with osbuild-composer version 91, the cloud upload target results are also attached to the image output metadata as well as to the build metadata. See the Image output metadata section for more details.
Cloud upload via koji-cli
In order to upload images to the cloud when using koji-cli, one must first create a JSON file with the appropriate upload_options.
Example gcp_upload_options.json:
{
"region": "eu",
"bucket": "my-bucket",
"share_with_accounts": ["alice@example.org"]
}
Then add the --upload-options=gcp_upload_options.json argument to the command line when calling the koji CLI.
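A hedged sketch of such an invocation is shown below; only the --upload-options flag comes from this guide, the remaining arguments are placeholders, so consult koji osbuild-image --help for the exact syntax of your plugin version.
$ koji osbuild-image --upload-options=gcp_upload_options.json <other arguments...>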
Cloud upload via Pungi
In order to upload images to the cloud when using Pungi to trigger image builds, one must specify the upload_options option in the configuration dictionary as described in the project documentation.
Please note that support for cloud upload was merged into the Pungi project after the 4.3.6 release. Therefore, if you want to take advantage of this feature, make sure to use a version higher than 4.3.6.
Type-specific metadata
osbuild-composer attaches extra metadata to a Koji build as well as to each of the outputs attached to a Koji build.
Output metadata
osbuild-composer attaches the following outputs for each of the built images to the build:
- built image
- osbuild manifest
- osbuild logs
All outputs have the (build) type set to image, except for the logs, which don't have any (build) type set and also have no metadata attached. The metadata attached to the image and manifest outputs is described below.
osbuild-composer uses the image type for all image builds via Koji, and so type-specific information is placed into the extra.image map of each output. Note that this is a legacy type in Koji and may be changed to use extra.typeinfo.image in the future. Clients fetching such data should first look for it within extra.typeinfo.image and fall back to extra.image when the former is not available.
Image output metadata
Data attached to the image output as metadata under extra.image:
- arch - architecture of the image
- boot_mode - boot mode of the image. Can be one of:
  - legacy
  - uefi
  - hybrid
  - none
- osbuild_artifact - information about the osbuild configuration used to produce the image
  - export_filename - filename of the image artifact as produced by osbuild
  - export_name - name of the manifest pipeline that was exported to produce the image
- osbuild_version - version of osbuild used to produce the image
- upload_target_results - optional list of cloud upload target results, present if the image build request also asked for the image to be uploaded to a specific cloud environment, in addition to Koji. Each entry in the list contains:
  - name - name of the upload target
  - options - upload-target-specific options with information to locate the image in the cloud environment
  - osbuild_artifact - information about the osbuild configuration used to produce the image for this specific upload target. Technically, osbuild can export multiple different artifacts from the same manifest, but in reality this is not used at this point.
    - export_filename - filename of the image artifact as produced by osbuild
    - export_name - name of the manifest pipeline that was exported to produce the image
Example of image output metadata
The following example shows the metadata attached to an image output under the extra.image key:
{
"arch": "x86_64",
"boot_mode": "hybrid",
"osbuild_artifact": {
"export_filename": "image.raw.xz",
"export_name": "xz"
},
"osbuild_version": "93",
"upload_target_results": [
{
"name": "org.osbuild.aws",
"options": {
"ami": "ami-0d06fff61b0395df0",
"region": "us-east-1"
},
"osbuild_artifact": {
"export_filename": "image.raw.xz",
"export_name": "xz"
}
}
]
}
Manifest output metadata
Data attached to the manifest output as metadata under extra.image:
- arch - architecture of the image produced by the manifest
- info - additional information about the manifest
  - osbuild_composer_version - version of osbuild-composer used to produce the manifest
  - osbuild_composer_deps - list of osbuild-composer dependencies, which could affect the content of the manifest. Each entry in the list contains:
    - path - Go module path of the dependency
    - version - version of the dependency
    - replace - optional Go module path of the replacement module, if the dependency was replaced
      - path - Go module path of the replacement module
      - version - version of the replacement module
Example of manifest output metadata
The following example shows the metadata attached to a manifest output under the extra.image key:
{
"arch": "x86_64",
"info": {
"osbuild_composer_version": "git-rev:f6e0e993919cb114e4437299020e80032d0e40a7",
"osbuild_composer_deps": [
{
"path": "github.com/osbuild/images",
"version": "v0.7.0"
}
]
}
}
Build metadata
The metadata attached by osbuild-composer to the Koji build itself is a compilation of the metadata attached to the individual outputs. The individual output metadata are always nested under the output's filename.
Image output metadata are nested under the extra.typeinfo.image key, while manifest output metadata are nested under the extra.osbuild_manifest key.
Example of build metadata
The following example shows the metadata attached to a Koji build under the extra key:
{
"typeinfo": {
"image": {
"name-version-release.x86_64.raw.xz": {
"arch": "x86_64",
"boot_mode": "hybrid",
"osbuild_artifact": {
"export_filename": "image.raw.xz",
"export_name": "xz"
},
"osbuild_version": "93",
"upload_target_results": [
{
"name": "org.osbuild.aws",
"options": {
"ami": "ami-0d06fff61b0395df0",
"region": "us-east-1"
},
"osbuild_artifact": {
"export_filename": "image.raw.xz",
"export_name": "xz"
}
}
]
}
}
},
"osbuild_manifest": {
"name-version-release.x86_64.raw.xz.manifest.json": {
"arch": "x86_64",
"info": {
"osbuild_composer_version": "git-rev:f6e0e993919cb114e4437299020e80032d0e40a7",
"osbuild_composer_deps": [
{
"path": "github.com/osbuild/images",
"version": "v0.7.0"
}
]
}
}
}
}
Image Builder on premises
osbuild-composer is a service for building customized operating system images (currently only Fedora and RHEL). These images can be used with various virtualization software such as QEMU, VirtualBox, and VMware, and also with cloud computing providers like AWS, Azure, or GCP.
There are two frontends that you can use to communicate with osbuild-composer:
- Cockpit Composer: The web-based management console Cockpit comes bundled with a UI extension to build operating system artifacts. See the documentation of Cockpit Composer for information, or consult the Cockpit Guide for help on general Cockpit questions.
- Command-line Interface: With composer-cli there exists a Linux command-line interface (CLI) to some of the functionality provided by OSBuild. The CLI is part of the Weldr project, a precursor of OSBuild.
This guide contains instructions on installing the osbuild-composer service and its basic usage.
If you want to fix a typo, or even contribute new content, the sources for this webpage are hosted in osbuild/guides GitHub repository.
For Red Hatters, the internal guides can be found here.
| Distribution | cockpit-composer | osbuild | osbuild-composer |
|---|---|---|---|
| 8.10 | 47-1 | 96-1 | 88-1 |
| 8.9 | 47-1 | 93-1 | 88-1 |
| 9.3 | 47-1 | 93-1 | 88-1 |
| 9.4 | 47-1 | 96-1 | 88-1 |
| CentOS Stream 8 | 47-1 | 96-1 | 88-1 |
| CentOS Stream 9 | 47-1 | 96-1 | 88-1 |
| Fedora 37 | 47-1 | 95-1 | 90-1 |
| Fedora 38 | 47-1 | 95-1 | 90-1 |
| Fedora 39 | 47-1 | 96-1 | 90-1 |
| Git | 47 | 96 | 90 |
| Service | n/a | n/a | v90-4-g3a9bcde |
| Workers | n/a | v93 | v90-4-g3a9bcde |
Basic concepts
osbuild-composer works with a concept of blueprints. A blueprint is a description of the final image and its customizations. A customization can be:
- an additional RPM package
- an enabled service
- a custom kernel command line parameter, and many others. See the Blueprint reference for more details.
An image is defined by its blueprint and image type, which is for example qcow2 (QEMU Copy On Write disk image) or AMI (Amazon Machine Image).
Finally, osbuild-composer also supports upload targets, which are cloud providers where an image can be stored after it is built. See the Uploading cloud images section for more details.
Example blueprint
name = "base-image-with-tmux"
description = "A base system with tmux"
version = "0.0.1"
[[packages]]
name = "tmux"
version = "*"
The blueprint is in TOML format.
Image types
osbuild-composer supports various types of output images. To see all supported types, run this command:
$ composer-cli compose types
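The exact list depends on the distribution, architecture, and osbuild-composer version; an illustrative output might look like this:
ami
image-installer
oci
openstack
qcow2
vhd
vmdk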
Installation
To get started with osbuild-composer on your local machine, you can install the CLI interface or the Web UI, which is part of the Cockpit project.
CLI interface
For CLI only, run the following command to install necessary packages:
$ sudo dnf install osbuild-composer composer-cli
To enable the service, run this command:
$ sudo systemctl enable --now osbuild-composer.socket
Verify that the installation works by running composer-cli:
$ sudo composer-cli status show
If you prefer to run this command without sudo privileges, add your user to the weldr group:
$ sudo usermod -a -G weldr <user>
$ newgrp weldr
Web UI
If you prefer the Web UI interface, known as Image Builder, install the following package:
$ sudo dnf install cockpit-composer
and enable the cockpit and osbuild-composer services:
$ sudo systemctl enable --now osbuild-composer.socket
$ sudo systemctl enable --now cockpit.socket
Managing repositories
There are two kinds of repositories used in osbuild-composer:
- Custom 3rd party repositories - use these to include packages that are not available in the official Fedora or RHEL repositories.
- Official repository overrides - use these if you want to download base system RPMs from elsewhere than the official repositories. For example if you have a custom mirror in your network. Keep in mind that this will disable the default repositories, so the mirror must contain all necessary packages!
Custom 3rd party repositories
These are managed using composer-cli (see the manpage for a complete reference). To add a new repository, create a TOML file like this:
id = "k8s"
name = "Kubernetes"
type = "yum-baseurl"
url = "https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64"
check_gpg = false
check_ssl = false
and add it using composer-cli sources add <file-name.toml>. Verify its presence using composer-cli sources list and its content using composer-cli sources info <id>.
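For example, assuming the definition above was saved as k8s.toml (the file name is only an assumption of this example), the whole round trip looks like this:
$ composer-cli sources add k8s.toml
$ composer-cli sources list
$ composer-cli sources info k8s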
Using sources with specific distributions
A new optional field has been added to the repository source format. It is a list of distribution strings that the source will be used with when depsolving and building images.
Sources with no distros will be used with all composes. If you want to use a source only for a specific distro, set the distros list to the distro name(s) to use it with.
For example, a source that is only used when depsolving or building Fedora 32:
check_gpg = true
check_ssl = true
distros = ["fedora-32"]
id = "f32-local"
name = "local packages for fedora32"
type = "yum-baseurl"
url = "http://local/repos/fedora32/projectrepo/"
This source will be used for any requests that specify fedora-32, e.g. listing packages and specifying fedora-32 will include this source, but listing packages for the host distro will not.
Verifying Repository Metadata with GPG
In addition to checking the GPG signature on RPM packages, DNF can check that repository metadata has been signed with a GPG key. You can set up such a repository yourself by signing your repomd.xml file after you have run createrepo_c on your repository. For example:
cd repo/
createrepo_c .
cd repodata/
gpg -u YOUR-GPG-KEY-EMAIL --yes --detach-sign --armor repomd.xml
In order to check this signature you need to tell osbuild-composer which GPG key to use for the check. Set check_repogpg = true in the source, and if the key is available over https, set the gpgkeys entry to the URL for the key, like this:
check_gpg = true
check_ssl = true
id = "custom-local"
name = "signed local packages"
type = "yum-baseurl"
url = "https://local/repos/projectrepo/"
check_repogpg = true
gpgkeys=["https://local/keys/repokey.pub"]
Normally you would want to distribute the key via a separate channel from the RPMs for better security; the above is just an example. You can also embed the whole key into the source's gpgkeys entry. If the entry starts with -----BEGIN PGP PUBLIC KEY BLOCK----- it will be imported directly instead of being fetched from the URL. For example:
check_gpg = true
check_ssl = true
check_repogpg = true
id = "custom-local"
name = "signed local packages"
type = "yum-baseurl"
url = "https://local/repos/projectrepo/"
gpgkeys=["https://remote/keys/other-repokey.pub",
'''-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.10 (GNU/Linux)
mQENBEt+xXMBCACkA1ZtcO4H7ZUG/0aL4RlZIozsorXzFrrTAsJEHvdy+rHCH3xR
cFz6IMbfCOdV+oKxlDP7PS0vWKfqxwkenOUut5o9b32uDdFMW4IbFXEQ94AuSQpS
jo8PlVMm/51pmmRxdJzyPnr0YD38mVK6qUEYLI/4zXSgFk493GT8Y4m3N18O/+ye
PnOOItj7qbrCMASoBx1TG8Zdg8ufehMnfb85x4xxAebXkqJQpEVTjt4lj4p6BhrW
R+pIW/nBUrz3OsV7WwPKjSLjJtTJFxYX+RFSCqOdfusuysoOxpIHOx1WxjGUOB5j
fnhmq41nWXf8ozb58zSpjDrJ7jGQ9pdUpAtRABEBAAG0HkJyaWFuIEMuIExhbmUg
PGJjbEByZWRoYXQuY29tPokBOAQTAQIAIgUCS37FcwIbAwYLCQgHAwIGFQgCCQoL
BBYCAwECHgECF4AACgkQEX6MFo7+On9dgAf9Hi2K1MKcmLkDeSUIXkXIAw0nAzl2
UDGLWEdDqAgFxP6UaCVtOIRCr7z4EDOQoxD7mkdekbH2W5GcTO4h8MQBHYD9EkY7
H/lTKchlFfsmafOoA3Y/tDLPKu+OIfH9Mqn2Mf7wMYGrnWSRNKYgvC5zkMgkhoPU
mSPPHyBabsdS/Kg5ZAf43ac/MXY9V8Mk6zqbBlj6QYqjJ0nBD6vwozrDQ5gJtDUL
mQho13zPn4lBJl9YJVjcgRB2WbzgSZOln0DfV22Seai66vnr5NyaOIw5B9QLSNhN
EaPFswEDLKCsns9dkDuGFX52/Mt/i7JySvwhMBqHElPzWmwCHeY45M8gBYhGBBAR
AgAGBQJLfsbpAAoJECH7Y/6XEsLNuasAn0Q0jB4Ea/95EREUkCFTm9L6nOpAAJ9t
QzwGXhrLFZzOdRWYiWcCQbX5/7kBDQRLfsVzAQgAvN5jr95pJthv2w9co9/7omhM
5rAnr9WJfbMLLiUfPPUvpL24RGO6SKy03aiVTUjlaHc+cGqOciwnNKMCSt+noyG2
kNnAESTDtCivpsjonaFP8jA3TqL0QK+yzBRKJnMnLEY1nWE1FtkMRccXvzi0Z/XQ
VhiWQyTvDFoKtepBFrH9UqWbNHyki22aighumUsW01pcPH2ogSj+HR01r7SfI/y2
EkE6loHQfCDycHmlqYV+X6GZEvf1qu2+EHEQChsHIAxWyshsxM/ZPmx/8e5S3Xmj
l7h/6E9wcsIpvnf504sLX5j4Km9I5HgJSRxHxgRPpqJ2/XiClAJanO5gCw0RdQAR
AQABiQEfBBgBAgAJBQJLfsVzAhsMAAoJEBF+jBaO/jp/SqEH/iArzrfVOhZQGuy1
KmG0+/FdJGqAEHP5HWpsaeYJok1VmhTPZd4IVFBz/bGJYyvsrPU0pJ6QLkdGxNnb
KulJocgkW5MKEL/CRc54ESKwYngigmbY4qLwhS+gB3BJg1TvoHD810MSj4wdxNNo
6JQmFmuoDsLRwaRYbKQDz95XXoGQtmV1o57T05WkLuC5OmHqnWv3rggVC8madpUJ
moUUvUWgU1qyXe3PrgMGFOibWIl7lPZ08nzKXBRvSK/xoTGxl+570AevfVHMu5Uk
Yu2U6D6/DYohtTYp0s1ekS5KQkCJM7lfqecDsQhfVfOfR0w4aF8k8u3HmWdOfUz+
9+2ZsBo=
=myjM
-----END PGP PUBLIC KEY BLOCK-----''']
Notice that gpgkeys can take as many key URLs and keys as you need, not just one. If the signature cannot be found, an error similar to this will be returned:
GPG verification is enabled, but GPG signature is not available.
This may be an error or the repository does not support GPG verification:
Status code: 404 for http://repo-server/fedora/repodata/repomd.xml.asc (IP: 192.168.1.3)
And if the signature is invalid:
repomd.xml GPG signature verification error: Bad GPG signature
You can test the signature of the repository manually by running gpg --verify repomd.xml.asc to help troubleshoot problems.
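For example, from the repodata/ directory of the repository, and assuming the public key is available locally as repokey.pub:
$ gpg --import repokey.pub
$ gpg --verify repomd.xml.asc repomd.xml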
Official repository overrides
osbuild-composer does not inherit the system repositories located in /etc/yum.repos.d/. Instead, it has its own set of official repositories defined in /usr/share/osbuild-composer/repositories. To override the official repositories, define overrides in /etc/osbuild-composer/repositories. This directory is meant for user defined overrides and the files located here take precedence over those in /usr.
The configuration files are not in the usual "repo" format. Instead, they are simple JSON files.
Defining official repository overrides
To set your own repositories, create this directory if it does not exist already:
$ sudo mkdir -p /etc/osbuild-composer/repositories
Based on the system you want to build an image for, determine the name of a new JSON file:
- Fedora 32 - fedora-32.json
- Fedora 33 - fedora-33.json
- RHEL 8.4 - rhel-84.json
- RHEL 9.0 - rhel-90.json
Then, create the JSON file with the following structure (or copy the file from /usr/share/osbuild-composer/ and modify its content):
{
"<ARCH>": [
{
"name": "<REPO NAME>",
"metalink": "",
"baseurl": "",
"mirrorlist": "",
"gpgkey": "",
"check_gpg": "",
"metadata_expire": "",
}
]
}
Specify only one of the following attributes: metalink, mirrorlist, or baseurl. All the remaining fields like gpgkey, metadata_expire, etc. are optional.
For example, for building a Fedora 33 image running on x86_64, create /etc/osbuild-composer/repositories/fedora-33.json with this content:
{
"x86_64": [
{
"name": "fedora",
"metalink": "https://mirrors.fedoraproject.org/metalink?repo=fedora-33&arch=x86_64",
"gpgkey": "-----BEGIN PGP PUBLIC KEY BLOCK-----\n\nmQINBF4wBvsBEADQmcGbVUbDRUoXADReRmOOEMeydHghtKC9uRs9YNpGYZIB+bie\nbGYZmflQayfh/wEpO2W/IZfGpHPL42V7SbyvqMjwNls/fnXsCtf4LRofNK8Qd9fN\nkYargc9R7BEz/mwXKMiRQVx+DzkmqGWy2gq4iD0/mCyf5FdJCE40fOWoIGJXaOI1\nTz1vWqKwLS5T0dfmi9U4Tp/XsKOZGvN8oi5h0KmqFk7LEZr1MXarhi2Va86sgxsF\nQcZEKfu5tgD0r00vXzikoSjn3qA5JW5FW07F1pGP4bF5f9J3CZbQyOjTSWMmmfTm\n2d2BURWzaDiJN9twY2yjzkoOMuPdXXvovg7KxLcQerKT+FbKbq8DySJX2rnOA77k\nUG4c9BGf/L1uBkAT8dpHLk6Uf5BfmypxUkydSWT1xfTDnw1MqxO0MsLlAHOR3J7c\noW9kLcOLuCQn1hBEwfZv7VSWBkGXSmKfp0LLIxAFgRtv+Dh+rcMMRdJgKr1V3FU+\nrZ1+ZAfYiBpQJFPjv70vx+rGEgS801D3PJxBZUEy4Ic4ZYaKNhK9x9PRQuWcIBuW\n6eTe/6lKWZeyxCumLLdiS75mF2oTcBaWeoc3QxrPRV15eDKeYJMbhnUai/7lSrhs\nEWCkKR1RivgF4slYmtNE5ZPGZ/d61zjwn2xi4xNJVs8q9WRPMpHp0vCyMwARAQAB\ntDFGZWRvcmEgKDMzKSA8ZmVkb3JhLTMzLXByaW1hcnlAZmVkb3JhcHJvamVjdC5v\ncmc+iQI4BBMBAgAiBQJeMAb7AhsPBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAK\nCRBJ/XdJlXD/MZm2D/9kriL43vd3+0DNMeA82n2v9mSR2PQqKny39xNlYPyy/1yZ\nP/KXoa4NYSCA971LSd7lv4n/h5bEKgGHxZfttfOzOnWMVSSTfjRyM/df/NNzTUEV\n7ORA5GW18g8PEtS7uRxVBf3cLvWu5q+8jmqES5HqTAdGVcuIFQeBXFN8Gy1Jinuz\nAH8rJSdkUeZ0cehWbERq80BWM9dhad5dW+/+Gv0foFBvP15viwhWqajr8V0B8es+\n2/tHI0k86FAujV5i0rrXl5UOoLilO57QQNDZH/qW9GsHwVI+2yecLstpUNLq+EZC\nGqTZCYoxYRpl0gAMbDLztSL/8Bc0tJrCRG3tavJotFYlgUK60XnXlQzRkh9rgsfT\nEXbQifWdQMMogzjCJr0hzJ+V1d0iozdUxB2ZEgTjukOvatkB77DY1FPZRkSFIQs+\nfdcjazDIBLIxwJu5QwvTNW8lOLnJ46g4sf1WJoUdNTbR0BaC7HHj1inVWi0p7IuN\n66EPGzJOSjLK+vW+J0ncPDEgLCV74RF/0nR5fVTdrmiopPrzFuguHf9S9gYI3Zun\nYl8FJUu4kRO6JPPTicUXWX+8XZmE94aK14RCJL23nOSi8T1eW8JLW43dCBRO8QUE\nAso1t2pypm/1zZexJdOV8yGME3g5l2W6PLgpz58DBECgqc/kda+VWgEAp7rO2A==\n=EPL3\n-----END PGP PUBLIC KEY BLOCK-----\n",
"check_gpg": true
}
]
}
After you have created repository overrides in /etc/osbuild-composer/repositories, you must restart the osbuild-composer service in order for the overrides to take effect.
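For example:
$ sudo systemctl restart osbuild-composer.service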
Using repositories that require subscription
osbuild-composer can use subscriptions from the host system if they are configured in the appropriate file in /etc/osbuild-composer/repositories. To enable such a repository, copy the baseurl from /etc/yum.repos.d/redhat.repo and paste it into the JSON repository definition. Then allow RHSM support using "rhsm": true, like this:
{
"x86_64": [
{
"baseurl": "https://localhost/repo",
"gpgkey": "...",
"rhsm": true
}
]
}
osbuild-composer will read the /etc/yum.repos.d/redhat.repo file from the host system and use it as a source of subscriptions. The same subscriptions must be available on a remote worker, if used.
Container registry credentials
All communication with container registries is done by the osbuild-worker service. It can be configured via the /etc/osbuild-worker/osbuild-worker.toml configuration file. It is read only once at service start, so the service needs to be restarted after making any changes.
The configuration file has a containers section with an auth_file_path field, a string referring to the path of a containers-auth.json(5) file to be used for accessing protected resources. An example configuration could look like this:
[containers]
auth_file_path = "/etc/osbuild-worker/containers-auth.json"
For detailed information on the format of the authorization file itself, refer to the corresponding man page: man 5 containers-auth.json.
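A minimal sketch of such a setup is shown below; registry.example.com is a placeholder and the auth value is simply the Base64 encoding of user:password, so substitute your own registry and credentials:
sudo tee /etc/osbuild-worker/containers-auth.json > /dev/null << 'EOF'
{
  "auths": {
    "registry.example.com": {
      "auth": "dXNlcjpwYXNzd29yZA=="
    }
  }
}
EOF
# Restart the worker service so the configuration is re-read
# (the exact unit name may differ on your installation).
sudo systemctl restart osbuild-worker@1.service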
Creating images with the CLI interface
An image is specified by a blueprint and an image type. Unless you specify otherwise, it will use the same distribution and version (e.g. Fedora 33) as the host system. The architecture will always be the same as the one on the host.
Blueprints management using composer-cli
osbuild-composer provides storage for blueprints. To store a blueprint.toml blueprint file, run this command:
$ composer-cli blueprints push blueprint.toml
To verify that the blueprint is available, list all currently stored blueprints:
$ composer-cli blueprints list
base-image-with-tmux
To display the blueprint you have just added, run the command:
$ sudo composer-cli blueprints show base-image-with-tmux
name = "base-image-with-tmux"
description = "A base system with tmux"
version = "0.0.1"
modules = []
groups = []
[[packages]]
name = "tmux"
version = "*"
Building an image using composer-cli
To build a customized image, start by choosing the blueprint and image type you would like to build. To do so, run the following commands:
$ sudo composer-cli blueprints list
$ sudo composer-cli compose types
and trigger a compose (example using the blueprint from the previous section):
$ composer-cli compose start base-image-with-tmux qcow2
Compose ab71b61a-b3c4-434f-b214-1e16527766ff added to the queue
Note that the compose is assigned a Universally Unique Identifier (UUID), which you can use to monitor the image build progress:
$ composer-cli compose info ab71b61a-b3c4-434f-b214-1e16527766ff
ab71b61a-b3c4-434f-b214-1e16527766ff RUNNING base-image-with-tmux 0.0.1 qcow2 2147483648
Packages:
tmux-*
Modules:
Dependencies:
At this time, the compose is in a "RUNNING" state. Once the compose reaches the "FINISHED" state, you can download the resulting image by running the following command:
$ sudo composer-cli compose results ab71b61a-b3c4-434f-b214-1e16527766ff
ab71b61a-b3c4-434f-b214-1e16527766ff.tar: 455.18 MB
$ fd
ab71b61a-b3c4-434f-b214-1e16527766ff.tar
$ tar xf ab71b61a-b3c4-434f-b214-1e16527766ff.tar
$ fd
ab71b61a-b3c4-434f-b214-1e16527766ff-disk.qcow2
ab71b61a-b3c4-434f-b214-1e16527766ff.json
ab71b61a-b3c4-434f-b214-1e16527766ff.tar
logs
logs/osbuild.log
From the example output above, the resulting tarball contains not only the qcow2 image, but also a JSON file, which is the osbuild manifest (see the Developer Guide for more details), and a directory with logs.
For more options, see the help text for composer-cli:
$ sudo composer-cli compose help
Tip: Booting the image with qemu
If you want to quickly run the resulting image, you can use qemu:
$ qemu-system-x86_64 \
-enable-kvm \
-m 3000 \
-snapshot \
-cpu host \
-net nic,model=virtio \
-net user,hostfwd=tcp::2223-:22 \
ab71b61a-b3c4-434f-b214-1e16527766ff-disk.qcow2
Be aware that you must specify a way to access the machine in the blueprint. For example, you can create a user with a known password, set an SSH key, or enable cloud-init to use a cloud-init ISO file.
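A minimal sketch of such a customization appended to the blueprint is shown below; the user name, password hash, and SSH key are placeholders, and the field names follow the Blueprint reference:
cat >> blueprint.toml << 'EOF'

[[customizations.user]]
name = "admin"
# Placeholder hash; generate your own, e.g. with `openssl passwd -6`
password = "$6$REPLACE_WITH_YOUR_HASH"
# Placeholder public key
key = "ssh-ed25519 AAAAexamplekey admin@example.com"
groups = ["wheel"]
EOF
Remember to push the updated blueprint with composer-cli blueprints push before starting a new compose.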
Building OSTree image
This section contains a guide for building OSTree commits. As opposed to the "traditional" image types, these commits are not directly bootable; although they basically contain a full operating system, they need to be deployed in order to boot. This can, for example, be done via the Fedora installer (Anaconda).
OSTree is a technology for creating immutable operating system images and it is a base for Fedora CoreOS, Fedora IoT, Fedora Silverblue, and RHEL for Edge. For more information on OSTree, see their website.
Overview of the intended result
As mentioned above, osbuild-composer produces OSTree commits which are not directly bootable. The commits are inside a tarball to make their usage more convenient. In order to deploy them, you will need:
- Fedora installation ISO - such as netinst (https://getfedora.org/en/server/download/)
- HTTP server to serve the content of the tarball to the Fedora virtual machine booted from the ISO
- Kickstart file that instructs Anaconda (Fedora installer) to use the OSTree commit from the HTTP server
In this guide, a container running Apache httpd will be used as the HTTP server.
The result will look like this:
 _________________          ____________________________
|                 |        |                            |
|                 |------->| Fedora VM with mounted ISO |
|                 |        |  - Anaconda                |
| Fedora Host OS  |        |____________________________|
|                 |
|                 |         ________________________________
|                 |        |                                |
|                 |------->| Fedora container running httpd |
|_________________|        | serving content of the tarball |
                           | and the kickstart file         |
                           |________________________________|
Note: If you would like to understand what is inside the tarball, read the upstream OSTree documentation.
Building an OSTree commit
Start by creating a blueprint for your commit. Using your favorite text editor, create a file named fishy.toml with this content:
name = "fishy-commit"
description = "Fishy OSTree commit"
version = "0.0.1"
[[packages]]
name = "fish"
version = "*"
Now push the blueprint to osbuild-composer using composer-cli:
$ composer-cli blueprints push fishy.toml
And start a build:
$ composer-cli compose start fishy-commit fedora-iot-commit
Compose 8e8014f8-4d15-441a-a26d-9ed7fc89e23a added to the queue
Monitor the build status using:
$ composer-cli compose status
And finally when the compose is complete, download the result:
$ composer-cli compose image 8e8014f8-4d15-441a-a26d-9ed7fc89e23a
8e8014f8-4d15-441a-a26d-9ed7fc89e23a-commit.tar: 670.45 MB
Writing a Kickstart file
As mentioned above, the Kickstart file is meant for the Anaconda installer. It contains instructions on how to install the system.
Create a file named ostree.ks with this content:
lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
autopart
reboot
user --name=core --groups=wheel --password=foobar
ostreesetup --nogpg --url=http://10.0.2.2:8000/repo/ --osname=iot --remote=iot --ref=fedora/33/x86_64/iot
For those interested in all the options, you can read Anaconda’s documentation.
The crucial part is on the last line. Here, the ostreesetup command is used to fetch the OSTree commit. As for the IP address: this tutorial uses qemu to boot the virtual machine, and 10.0.2.2 is an address which you can use to reach the host system from the guest: User Networking.
Setting up an HTTP server
Now that the kickstart file and OSTree commit are ready, create a container running an HTTP server that serves those files. Start by creating a Dockerfile:
FROM fedora:latest
RUN dnf -y install httpd && dnf clean all
ADD *.tar *.ks /var/www/html
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Make sure you have everything in the build directory (keep in mind that the UUID is random, so it will be different in your case):
$ ls
8e8014f8-4d15-441a-a26d-9ed7fc89e23a-commit.tar
Dockerfile
ostree.ks
Build the container image:
$ podman build -t ostree .
And run it:
$ podman run --rm -p 8000:80 ostree
Note: You might be wondering why bother with a container when you can just use "python -m http.server". The problem is that OSTree produces way too many requests and the Python HTTP server simply fails to keep up with OSTree.
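To check that the container actually serves the expected content, you can fetch the kickstart file and the OSTree repository configuration from the host; this assumes the commit tarball extracts into a repo/ directory, which is what the kickstart above expects:
$ curl http://localhost:8000/ostree.ks
$ curl http://localhost:8000/repo/config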
Running a VM and applying the OSTree commit
Start with downloading the Netinstall image from here: https://getfedora.org/en/server/download/
Create an empty qcow2 image. That is an image of a hard drive for the virtual machine (VM).
$ qemu-img create -f qcow2 disk-image.img 5G
Run a VM using the hard drive and mount the installation ISO:
$ qemu-system-x86_64 \
-enable-kvm \
-m 3000 \
-snapshot \
-cpu host \
-net nic,model=virtio \
-net user,hostfwd=tcp::2223-:22 \
-cdrom $HOME/Downloads/Fedora-Server-netinst-x86_64-33-1.2.iso \
disk-image.img
Note: To prevent any issue, use the latest stable Fedora host OS for this tutorial.
This command instructs qemu (the hypervisor) to:
- Use KVM virtualization (makes the VM faster).
- Increase memory to 3000MB (some processes can get memory hungry, for example dnf).
- Snapshot the hard drive image, don't overwrite its content.
- Use the same CPU type as the host uses.
- Connect the guest to a virtual network bridge on the host and forward TCP port 2223 from the host to the SSH port (22) on the guest (makes it easier to connect to the guest system).
- Mount the installation ISO.
- Use the hard drive image created above.
At the initial screen, use the arrow keys to select the "Install Fedora 33" line and press the TAB key. You'll see a line of kernel command line options appear below. Something like:
vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora quiet
Add a space and this string:
inst.ks=http://10.0.2.2:8000/ostree.ks
Resulting in this kernel command line:
vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=Fedora quiet inst.ks=http://10.0.2.2:8000/ostree.ks
The IP address 10.0.2.2
is again used here, because the VM is running inside Qemu.
Press "Enter", the Anaconda GUI will show up and automatically install the OSTree commit created above.
Once the system is installed and rebooted, use the username "core" and password "foobar" to log in. You can change the credentials in the kickstart file.
Building a RHEL for Edge Installer
The following describes how to build a boot ISO which installs an OSTree-based system using the "RHEL for Edge Container" in combination with the "RHEL for Edge Installer" image types. The workflow has the same result as the Building OSTree Image guide with the new image types automating some of the steps.
Note that there are some small differences in this procedure between RHEL 8.4 and RHEL 8.5:
- The names of the image types have changed. In 8.4, the image types were prefixed by rhel-. This prefix was removed in 8.5.
  - The old names rhel-edge-container and rhel-edge-installer still work in RHEL 8.5 as aliases to the new names, however these names are considered deprecated and may be removed completely in future versions.
- The internal port for the container has changed from 80 in RHEL 8.4 to 8080 in RHEL 8.5.
Process overview
- Create and load a blueprint with customizations.
- Build an edge-container (RHEL 8.5) or rhel-edge-container (RHEL 8.4) image.
- Load image in podman and start the container.
- Create and load an empty blueprint.
- Build an edge-installer (RHEL 8.5) or rhel-edge-installer (RHEL 8.4) image, pointing the ostree-url to http://10.0.2.2:8080/repo/ and setting the ostree-ref to rhel/edge/demo.
The edge-container image type creates an OSTree commit and embeds it into an OCI container with a web server. When the container is started, the web server serves the commit as an OSTree repository.
The edge-installer image type pulls the commit from the running container and creates an installable boot ISO with a kickstart file configured to use the embedded OSTree commit.
Detailed workflow
Build the container and serve the commit
Start by creating a blueprint for the commit. The content below is an example and can be modified to fit your needs. For this guide, we will name the file example.toml.
name = "example"
description = "RHEL for Edge Installer example"
version = "0.0.3"
[[packages]]
name = "vim-enhanced"
version = "*"
[[packages]]
name = "tmux"
version = "*"
[customizations]
[[customizations.user]]
name = "user"
description = "Example User"
password = "$6$uvdfeuHQYM6kUaea$fvvzyu.Z.u89TVCB2tq8UEc52XDFGnAqCo75BX3zu8OzIbS.EKMo/Saammb151sLrdzmlESnpNEPrJ7h5b0c6/"
groups = ["wheel"]
Now push the blueprint to osbuild-composer using composer-cli:
$ composer-cli blueprints push example.toml
And start the container build:
$ composer-cli compose start-ostree --ref "rhel/edge/example" example edge-container
Compose 8e8014f8-4d15-441a-a26d-9ed7fc89e23a added to the queue
The value for --ref can be changed, but it must begin with an alphanumeric character and contain only alphanumeric characters and the symbols /, _, -, and . (period).
Note: In RHEL 8.4, the image type was called rhel-edge-container. It has been renamed to edge-container in 8.5 onwards.
Monitor the build status using:
$ composer-cli compose status
When the compose is FINISHED, download the result:
$ composer-cli compose image 8e8014f8-4d15-441a-a26d-9ed7fc89e23a
8e8014f8-4d15-441a-a26d-9ed7fc89e23a-rhel84-container.tar: 670.45 MB
Load the container image into podman:
$ cat 8e8014f8-4d15-441a-a26d-9ed7fc89e23a-rhel84-container.tar | podman load
Getting image source signatures
Copying blob 82934cd3e69d done
Copying config d11911c3dc done
Writing manifest to image destination
Storing signatures
Loaded image(s): @d11911c3dc4bee46cabd52b91c87f48b8a7d450fadc8cfbeb69e2de98b413521
Tag the image for convenience:
$ podman tag d11911c3dc4bee46cabd52b91c87f48b8a7d450fadc8cfbeb69e2de98b413521 localhost/edge-example
Start the container (note the different internal port numbers between the two versions)
For RHEL 8.4:
$ podman run --rm -d -p 8080:80 --name ostree-repo localhost/edge-example
For RHEL 8.5+:
$ podman run --rm -d -p 8080:8080 --name ostree-repo localhost/edge-example
Note: The -d option detaches the container and leaves it running in the background. You can also remove the option to keep the container attached to the terminal.
Build the installer
Start by creating a simple blueprint for the installer. The blueprint must not have any customizations or packages; only a name, and optionally a version and a description. Add the content below to a file and name it empty.toml:
name = "empty"
description = "Empty blueprint"
version = "0.0.1"
The edge-installer image type does not support customizations or package selection, so the build will fail if any are specified.
Push the blueprint:
$ composer-cli blueprints push empty.toml
Start the build:
$ composer-cli compose start-ostree --ref "rhel/edge/example" --url http://10.0.2.2:8080/repo/ empty edge-installer
Compose 09d98a67-a401-4613-9a5b-b93f8a6e695f added to the queue
Note: In RHEL 8.4, the image type was called rhel-edge-installer. It has been renamed to edge-installer in 8.5 onwards.
The --ref argument must match the one from the rhel-edge-container compose.
The --url in this case is the IP address of the container. This tutorial uses qemu to boot the virtual machine, and 10.0.2.2 is an address which you can use to reach the host system from the guest: User Networking.
Monitor the build status using:
$ composer-cli compose status
When the compose is FINISHED, download the result:
$ composer-cli compose image 09d98a67-a401-4613-9a5b-b93f8a6e695f
09d98a67-a401-4613-9a5b-b93f8a6e695f-rhel84-boot.iso: 1422.61 MB
The downloaded image can then be booted to begin the installation. If you used the blueprint in this guide, use the username "user" and password "password42" to log in.
Uploading cloud images
osbuild-composer can upload images to a cloud provider right after they are built. The configuration is slightly different for each cloud provider. See individual subsections of this documentation.
Uploading an image to AWS
osbuild-composer provides users with a convenient way to upload images directly to AWS right after the image is built. Before you can use this feature, you have to define a vmimport IAM role in your AWS account. See VM Import/Export Requirements in the AWS documentation.
Now, you are ready to upload your first image to AWS. Using a text editor of your choice, create a configuration file with the following content:
provider = "aws"
[settings]
accessKeyID = "AWS_ACCESS_KEY_ID"
secretAccessKey = "AWS_SECRET_ACCESS_KEY"
bucket = "AWS_BUCKET"
region = "AWS_REGION"
key = "OBJECT_KEY"
There are several considerations when filling values in this file:
- AWS_BUCKET must be in the AWS_REGION
- AWS_BUCKET must be created in AWS prior to running the script
- The vmimport role must have read access to the AWS_BUCKET, please see this guide on how to do so: How to create vmimport role
- OBJECT_KEY is the name of an intermediate S3 object. It must not exist before the upload, and it will be deleted when the process is done.
If your authentication method requires you to also specify a session token, you can put it in the settings section of the configuration file in a field named sessionToken.
Once everything is configured, you can trigger a compose as usual with an additional image name and cloud provider profile:
General Syntax
$ sudo composer-cli compose start <blueprint_name> ami IMAGE_KEY aws-config.toml
where IMAGE_KEY will be the name of your new AMI, once it is uploaded to EC2.
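If the target bucket does not exist yet, one way to create it ahead of time, assuming the AWS CLI is installed and configured with credentials for the same account, is:
$ aws s3 mb s3://AWS_BUCKET --region AWS_REGION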
Uploading an image to an AWS S3 Bucket
osbuild-composer provides users with a convenient way to upload images of all sorts directly to an AWS S3 bucket right after the image is built.
Using a text editor of your choice, create a configuration file with the following content:
provider = "aws.s3"
[settings]
accessKeyID = "AWS_ACCESS_KEY_ID"
secretAccessKey = "AWS_SECRET_ACCESS_KEY"
bucket = "AWS_BUCKET"
region = "AWS_REGION"
key = "OBJECT_KEY"
There are several considerations when filling values in this file:
- AWS_BUCKET must be in the AWS_REGION
If your authentication method requires you to also specify a session token, you can put it in the settings section of the configuration file in a field named sessionToken.
Once everything is configured, you can trigger a compose as usual with an additional image name and cloud provider profile:
$ sudo composer-cli compose start base-image-with-tmux qcow2 IMAGE_KEY aws-s3-config.toml
Uploading an image to GCP
osbuild-composer provides users with a convenient way to upload images directly to GCP right after the image is built. Before you can use this feature, you have to provide credentials for the user or service account that you would like to use for uploading images to GCP.
The account associated with the credentials must have at least the following IAM roles assigned:
- `roles/storage.admin` - to create and delete storage objects
- `roles/compute.storageAdmin` - to import a VM image to Compute Engine
Now, you are ready to upload your first image to GCP.
Using a text editor of your choice, create a configuration file gcp-config.toml
with the following content:
provider = "gcp"
[settings]
bucket = "GCP_BUCKET"
region = "GCP_STORAGE_REGION"
object = "OBJECT_KEY"
credentials = "GCP_CREDENTIALS"
There are several considerations when filling values in this file:
- `GCP_BUCKET` must point to an existing bucket.
- `GCP_STORAGE_REGION` can be a regular Google storage region, but also a dual or multi region.
- `OBJECT_KEY` is the name of an intermediate storage object. It must not exist before the upload, and it will be deleted when the upload process is done. If the object name does not end with `.tar.gz`, the extension is automatically added to the object name.
- `GCP_CREDENTIALS` is the Base64-encoded content of the credentials JSON file downloaded from GCP. The credentials are used to determine the GCP project to upload the image to. Specifying this value in `gcp-config.toml` may be optional if you use a different mechanism of authenticating with GCP. For more information about the various ways of authenticating with GCP, read the Authenticating with GCP section below.
After everything is configured, you can trigger a compose as usual with an additional image name and cloud provider profile:
sudo composer-cli compose start base-image-with-tmux gce IMAGE_KEY gcp-config.toml
where IMAGE_KEY will be the name of your new GCE image, once it is uploaded to GCP.
Authenticating with GCP
osbuild-composer supports multiple ways of authenticating with GCP.
In case the osbuild-composer is configured to authenticate with GCP in multiple ways, it uses them in the following order of preference:
- Credentials specified with the `composer-cli` command in the configuration file.
- Credentials configured in the osbuild-composer worker configuration.
- Application Default Credentials from the Google GCP SDK library, which tries to automatically find a way to authenticate using the following options:
  - If the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set, it tries to load and use credentials from the file pointed to by the variable.
  - It tries to authenticate using the service account attached to the resource which is running the code (e.g. a Google Compute Engine VM).
Note that the GCP credentials are used to determine the GCP project to upload the image to. Therefore, unless you want to upload all of your images to the same GCP project, you should always specify credentials with the
composer-cli
command.
Specifying credentials with the composer-cli
command
You need to specify the credentials with the composer-cli
command in the provided upload target configuration gcp-config.toml
:
provider = "gcp"
[settings]
...
credentials = "GCP_CREDENTIALS"
The GCP_CREDENTIALS
value is the Base64-encoded content of the Google account credentials JSON file. The file is encoded because it is quite large and contains multiple keys; mapping them to the TOML configuration format would require more manual work from the user than encoding the whole file in Base64 and specifying it as a single value.
To get the encoded content of the Google account credentials file with the path stored in GOOGLE_APPLICATION_CREDENTIALS
environment variable, run:
base64 -w 0 "${GOOGLE_APPLICATION_CREDENTIALS}"
Specifying credentials in the osbuild-composer worker configuration
You can configure the credentials to be used for GCP globally for all image builds in the worker configuration /etc/osbuild-worker/osbuild-worker.toml
:
[gcp]
credentials = "PATH_TO_GCP_ACCOUNT_CREDENTIALS"
Uploading an image to a bucket in a Generic S3 server
osbuild-composer
provides the users with a convenient way to upload images, of all sorts, directly to a bucket in a Generic S3 server right after the image is built.
Using a text editor of your choice, create a configuration file with the following content:
provider = "generic.s3"
[settings]
endpoint = "S3_SERVER_ENDPOINT"
accessKeyID = "S3_ACCESS_KEY_ID"
secretAccessKey = "S3_SECRET_ACCESS_KEY"
bucket = "S3_BUCKET"
region = "S3_REGION"
key = "OBJECT_KEY"
There are several considerations when filling values in this file:
- `AWS_REGION` must still be set (e.g. to us-east-1) even if it has no meaning in your S3 server
- If your server is using HTTPS with a certificate signed by your own CA, you can either pass the CA bundle by setting the field `ca_bundle`, pointing it to the CA's public certificate, or skip SSL verification by setting `skip_ssl_verification` to `true` (see the example below)
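For example, one of the following can be added to the `settings` section (the certificate path is illustrative):

[settings]
...
ca_bundle = "/etc/pki/ca-trust/source/anchors/my-ca.crt"
# or, alternatively:
skip_ssl_verification = true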
Once everything is configured, you can trigger a compose as usual with additional image name and cloud provider profile:
$ sudo composer-cli compose start base-image-with-tmux qcow2 IMAGE_KEY generic-s3-config.toml
Uploading an image to Microsoft Azure
osbuild-composer
builds images and delivers them to Microsoft Azure
automatically. These images are ready to use with virtual machines in the
Azure cloud.
Initial setup
Before you can upload images to Azure with osbuild-composer
, your account
needs some initial setup. Be sure to complete these steps:
- Create a resource group
- Create a storage account inside the resource group
- Create a storage container within the storage account
- Gather your access keys
For a detailed walkthrough on each step within the Azure portal, review the Build RHEL images for Azure with Image Builder post on the Red Hat Blog.
Make a note of the following items during the setup so you can provide them to
osbuild-composer
during the build process:
- the name of your storage account
- the name of the storage container inside your storage account
- the access key for your storage account
Deploy
Push a blueprint containing your image configuration and create a new file
called azure.toml
that contains the information about your Azure storage
account:
provider = "azure"
[settings]
storageAccount = "your storage account name"
storageAccessKey = "storage access key you copied in the Azure portal"
container = "your storage container name"
Build and deploy the image to Azure:
composer-cli compose start my_blueprint vhd my_image_key azure.toml
In this example my_blueprint
is the name of the blueprint containing your
image configuration. Replace my_image_key
with the preferred image name you
want to see in Azure. This is the name that appears inside your storage
container.
Uploading an image to OCI
osbuild-composer
provides the users with a convenient way to upload images directly to OCI right after the image is built.
See Managing Custom Images in OCI documentation (includes permissions details).
Now, you are ready to upload your first image to OCI. Using a text editor of your choice, create a configuration file with the following content:
provider = "oci"
[settings]
user = "OCI_CLI_USER"
tenancy = "OCI_CLI_TENANCY"
fingerprint = "OCI_CLI_FINGERPRINT"
region = "OCI_CLI_REGION"
bucket = "OCI_BUCKET"
namespace = "OCI_NAMESPACE"
compartment = "OCI_COMPARTMENT"
private_key = '''
...
'''
There are several considerations when filling values in this file:
- `OCI_BUCKET` must be in the `OCI_REGION` and must exist before the upload
Once everything is configured, you can trigger a compose as usual with additional image name and cloud provider profile:
$ sudo composer-cli compose start BLUEPRINT_NAME oci IMAGE_KEY oci-config.toml
where IMAGE_KEY
will be the name of your new OCI image once uploaded.
Uploading a container image to a registry
osbuild-composer
can upload a container image, like the RHEL for
edge container, to a registry directly after it has been built.
In order to do so, the container reference and an upload configuration file need to be specified when building a container artifact:
$ sudo composer-cli compose start BLUEPRINT container REFERENCE CONFIG.toml
where BLUEPRINT
is the name for the container and REFERENCE
the
reference to the container image, like registry.example.com/image:tag
.
If :tag
is omitted, :latest
is the default. The CONFIG.toml
file
must include provider = "container"
. Other values are optional.
provider = "container" # required
[settings]
tls_verify = false # optional, TLS verification, default: true
username = "USERNAME" # optional, username to use
password = "PASSWORD" # optional, password to use
Instead of specifying username
and password
directly, a central
containers-auth.json(5)
file can be used, see
Container registry credentials.
OpenSCAP Remediation
osbuild-composer
now provides the ability to build security hardened images using the OpenSCAP tool.
This feature is available for RHEL 8.7
(& above) and RHEL 9.1
(& above).
OpenSCAP
The OpenSCAP
tool enables users to scan images for vulnerabilities and then remediate the non-compliances according to
predefined security standards. A limitation of this is that it is not always trivial to fix all issues after the first
boot of the image.
Build-time Remediation
To solve this issue, an osbuild stage runs the OpenSCAP
tool on the filesystem tree while the image is being built. The OpenSCAP
tool runs
the standard evaluation for the given profile and applies the remediations to the image. This process enables the user to build a more completely
hardened image compared to running the remediation on a live system.
OpenSCAP Example
[customizations.openscap]
profile_id = "xccdf_org.ssgproject.content_profile_standard"
datastream = "/usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml"
`osbuild-composer` exposes two fields for the user to customize in the image blueprints:
- The path to the `datastream` instructions (most likely in the `/usr/share/xml/scap/ssg/content/` directory)
- The `profile_id` of the desired security standard
- Install openscap via this command: `dnf install scap-security-guide`
- Use the command `oscap info /usr/share/xml/scap/ssg/content/<security_profile>.xml` to obtain more information, such as the profile ID to use
- The `profile_id` field accepts both the long and short forms, i.e. `cis` or `xccdf_org.ssgproject.content_profile_cis`.
See the below table for supported profiles.
osbuild-composer
will then generate the necessary configurations for the osbuild
stage based on the user
customizations. Additionally, two packages will be added to the image, openscap-scanner
(the OpenSCAP
tool)
& scap-security-guide
(this package contains the remediation instructions).
:warning: Note The remediation stage assumes that the
scap-security-guide
will be used for the datastream. This package is installed on the image by default. If another datastream is desired, add the necessary package to the blueprint and specify the path to the datastream in the oscap config.
Supported profiles
The supported profiles are distro specific, see below:
| | Fedora | RHEL 8.7^ | CS9/RHEL 9.1^ |
|---|---|---|---|
| ANSSI-BP-028 (enhanced) | | x | x |
| ANSSI-BP-028 (high) | | x | x |
| ANSSI-BP-028 (intermediary) | | x | x |
| ANSSI-BP-028 (minimal) | | x | x |
| CIS Level 2 - Server | | x | x |
| CIS Level 1 - Server | | x | x |
| CIS Level 1 - Workstation | | x | x |
| CIS Level 2 - Workstation | | x | x |
| CUI | | x | x |
| Essential Eight | | x | x |
| HIPAA | | x | x |
| ISM Official | | x | x |
| OSPP | x | x | x |
| PCI-DSS | x | x | x |
| Standard | x | | |
| DISA STIG | | x | x |
| DISA STIG with GUI | | x | x |
Third-party Repositories
`osbuild-composer` supports adding packages from third-party repositories and saving the repository customizations to an image. This guide aims to clarify each use case and how to configure `osbuild-composer` and the blueprints accordingly.

Importantly, `osbuild-composer` has two distinct definitions of third-party repositories: payload repositories, which can be used to install third-party packages at build time, and custom repositories, which are used to persist the repository configurations to the image.

This leads to the following use cases:
- Install a third-party package
- Save the third-party repository configurations to the image
- Install a third-party package and save the configurations
1. Install a third-party package
To install a third-party package at build time, it is necessary to enable the required third-party repository as a payload repository. This will not save any of the repository configurations
to the image and will not make the repositories available to users on the system after the image has been built. For further information on how to install and configure osbuild-composer
to use custom repositories for installing third-party packages, continue reading here.
2. Save repository configurations
In the second scenario, to make third-party repository configurations persistent and make the repositories available to users on the system, one would use the blueprint custom repository
configurations to enable this. The repository will be configured and saved to /etc/yum.repos.d
as a .repo
file. GPG keys are not imported at build time, but are imported when first
installing a third-party package from the desired repository. You can find the blueprint reference for custom repositories here.
3. Install a third-party package and save configurations
In this case it is necessary to use a combination of payload repositories and custom repositories in order to achieve the desired outcome. This will ensure that the package is installed during build time and the repository configuration is saved to disk for future use. If the user only needs the package or the configuration file, they can use the appropriate repository type to achieve their goal.
Blueprint Reference
Blueprints are text files in the TOML format that describe customizations for the image you are building.
An important thing to note is that these customizations are not applicable to all image types.
osbuild-composer
currently has no good validation or warning system in place to tell you if a customization in your blueprint is not supported for the image type you're building. The customization may be silently dropped.
A very basic blueprint with just the required attributes at the root looks like:
name = "basic-example"
description = "A basic blueprint"
version = "0.0.1"
Where:
- The `name` attribute is a string that contains the name of the blueprint. It can contain spaces, but they will be converted to `-` when it is imported into `osbuild-composer`. It should be short and descriptive.
- The `description` attribute is a string that can be a longer description of the blueprint and is only used for display purposes.
- The `version` attribute is a string that contains a semver compatible version number. If a new blueprint is uploaded with the same version the server will automatically bump the PATCH level of the version. If the version doesn't match it will be used as is. For example, uploading a blueprint with version set to 0.1.0 when the existing blueprint version is 0.0.1 will result in the new blueprint being stored as version 0.1.0.
You can upload a blueprint with the `composer-cli blueprints push $filename` command; the blueprint will then be usable in `composer-cli compose` under the `name` you gave it, as shown below.
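For example, if the blueprint above is saved as basic-example.toml, it can be pushed and then inspected like this:

$ sudo composer-cli blueprints push basic-example.toml
$ sudo composer-cli blueprints show basic-example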
Blueprints have two main sections, the content and customizations sections.
Distribution selection with blueprints
The blueprint now supports a new distro
field that will be used to select the
distribution to use when composing images, or depsolving the blueprint. If
distro
is left blank it will use the host distribution. If you upgrade the
host operating system the blueprints with no distro
set will build using the
new OS. You can't build an OS image that differs from the host OS that Image Builder runs on.
For example, a blueprint that will always build a Fedora 38 image, no matter what version is running on the host:
name = "tmux"
description = "tmux image with openssh"
version = "1.2.16"
distro = "fedora-38"
[[packages]]
name = "tmux"
version = "*"
[[packages]]
name = "openssh-server"
version = "*"
Content
The content section determines what goes into the image from other sources such as packages, package groups, or containers. Content is defined at the root of the blueprint.
Packages
The packages
and modules
lists contain objects with a name
and optional version
attribute.
- The `name` attribute is a required string and can be an exact match, or a filesystem-like glob using `*` for wildcards and `?` for character matching.
- The `version` attribute is an optional string that can be an exact match or a filesystem-like glob of the version using `*` for wildcards and `?` for character matching. If not provided the latest version in the repositories is used.
Currently there are no differences between packages and modules in osbuild-composer
. Both are treated as an RPM package dependency.
When using a virtual provide as the package name, the version glob should be `*`. Also be aware that you will be unable to `freeze` the blueprint, because the provide will expand into multiple packages with their own names and versions. See the sketch below for an example.
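A minimal sketch, where the virtual provide name is only an example:

[[packages]]
name = "system-release"
version = "*"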
For example, to install tmux-2.9a
and openssh-server-8.*
packages, add this to your blueprint:
[[packages]]
name = "tmux"
version = "2.9a"
[[packages]]
name = "openssh-server"
version = "8.*"
Or in alternative syntax:
packages = [
{ name = "tmux", version = "2.9a" },
{ name = "openssh-server", version = "8.*" }
]
Groups
The `groups` list contains objects with a `name` attribute.
- The `name` attribute is a required string and must match the id of a package group in the repositories exactly.
groups
describes groups of packages to be installed into the image. Package groups are defined in the repository metadata. Each group has a descriptive name used primarily for display in user interfaces and an ID more commonly used in kickstart files. Here, the ID is the expected way of listing a group. Groups have three different ways of categorizing their packages: mandatory, default, and optional. For the purposes of blueprints, only mandatory and default packages will be installed. There is no mechanism for selecting optional packages.
For example, if you want to install the anaconda-tools
group, add the following to your blueprint:
[[groups]]
name = "anaconda-tools"
Or in alternative syntax:
groups = [
{ name = "anaconda-tools" }
]
Containers
The containers
list contains objects with a source
and optional tls-verify
attribute.
These list entries describe the container images to be embedded into the image.
- The `source` attribute is a required string and is a reference to a container image at a registry.
- The `name` attribute is an optional string to set the name under which the container image will be saved in the image. If not specified, `name` falls back to the same value as `source`.
- The `tls-verify` attribute is an optional boolean to disable TLS verification of the source download. By default this is set to `true`.
The container is pulled during the image build and stored in the image at the default local container storage location that is appropriate for the image type, so that all supported container tools like podman
and cri-o
will be able to work with it.
The embedded containers are not started. To start them, you can create systemd unit files or quadlets with the files customization, as sketched at the end of this section.
To embed the latest Fedora container from quay.io, add this to your blueprint:
[[containers]]
source = "quay.io/fedora/fedora:latest"
Or in alternative syntax:
containers = [
{ source = "quay.io/fedora/fedora:latest" },
{ source = "quay.io/fedora/fedora-minimal:latest", tls-verify = false, name = "fedora-m" },
]
To access protected container resources a containers-auth.json(5)
file can be used, see Container registry credentials.
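As a rough sketch of starting an embedded container at boot, the files and directories customizations can place a quadlet unit on the image. The directory, unit name, and unit content below are illustrative and assume the image ships a Podman version with quadlet support:

[[customizations.directories]]
path = "/etc/containers/systemd"
ensure_parents = true

[[customizations.files]]
path = "/etc/containers/systemd/fedora.container"
data = """
[Container]
Image=quay.io/fedora/fedora:latest
Exec=sleep infinity

[Install]
WantedBy=multi-user.target default.target
"""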
Customizations
In the customizations we determine what goes into the image that's not in the default packages defined under Content.
- Hostname
- Kernel Command Line Arguments
- SSH Keys
- Additional Users
- Additional Groups
- Timezone
- Locale
- Firewall
- Systemd Services
- Files and Directories
- Ignition
- Repositories
- Filesystems
- OpenSCAP
Hostname
customizations.hostname
is an optional string that can be used to configure the hostname of the final image:
[customizations]
hostname = "baseimage"
This is optional and can be left out to use the default hostname.
Kernel
Kernel Command-Line Arguments
An optional string that allows appending arguments to the bootloader kernel command line:
[customizations.kernel]
append = "nosmt=force"
SSH Keys
An optional list of objects containing:
- The `user` attribute is a required string and must match the name of a user in the image exactly.
- The `key` attribute is a required string that contains the public key to be set for that user.
Warning: `key` expects the entire content of the public key file, traditionally `~/.ssh/id_rsa.pub`, but any key algorithm supported by the operating system in the image is valid.
Note: If you are adding a user you can add their SSH key in the additional users customization instead.
Set an existing user's SSH key in the final image:
[[customizations.sshkey]]
user = "root"
key = "PUBLIC SSH KEY"
The key will be added to the user's authorized_keys
file in their home directory.
Additional Users
An optional list of objects that contain the following attributes:
- `name` a required string that sets the username.
- `description` an optional string.
- `password` an optional string.
- `key` an optional string.
- `home` an optional string.
- `shell` an optional string.
- `groups` an optional list of strings.
- `uid` an optional integer.
- `gid` an optional integer.
Warning: `key` expects the entire content of the public key file, traditionally `~/.ssh/id_rsa.pub`, but any key algorithm supported by the operating system in the image is valid.
Note: If the password starts with $6$, $5$, or $2b$ it will be stored as an encrypted password. Otherwise it will be treated as a plain text password.
Add a user to the image, and/or set their ssh key. All fields for this section are optional except for the name. The following is a complete example:
[[customizations.user]]
name = "admin"
description = "Administrator account"
password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31L..."
key = "PUBLIC SSH KEY"
home = "/srv/widget/"
shell = "/usr/bin/bash"
groups = ["widget", "users", "wheel"]
uid = 1200
gid = 1200
Additional groups
An optional list of objects that contain the following attributes:
- `name` a required string that sets the name of the group.
- `gid` a required integer that sets the id of the group.
[[customizations.group]]
name = "widget"
gid = 1130
Timezone
An optional object that contains the following attributes:
- `timezone` an optional string. If not provided the UTC timezone is used.
- `ntpservers` an optional list of strings containing NTP servers to use. If not provided the distribution defaults are used.
[customizations.timezone]
timezone = "US/Eastern"
ntpservers = ["0.north-america.pool.ntp.org", "1.north-america.pool.ntp.org"]
The values supported by timezone can be listed by running the command:
$ timedatectl list-timezones
Some image types, such as Google Cloud images, already have NTP servers set up. These cannot be overridden because they are required to boot in the selected environment. However, the timezone will be updated to the one selected in the blueprint.
Locale
An optional object that contains the following attributes to customize the locale settings for the system:
- `languages` an optional list of strings containing locales to be installed.
- `keyboard` an optional string to set the keyboard layout.
Multiple languages can be added. The first one becomes the primary, and the others are added as secondary. You must include one or more languages or keyboards in the section.
[customizations.locale]
languages = ["en_US.UTF-8"]
keyboard = "us"
The values supported by languages can be listed by running the command:
$ localectl list-locales
The values supported by keyboard can be listed by running the command:
$ localectl list-keymaps
Firewall
An optional object containing the following attributes:
- `ports` an optional list of strings containing ports (or port ranges) and protocols to open.
- `services` an optional object with the following attributes containing services to enable or disable for `firewalld`:
  - `enabled` optional list of strings for services to enable.
  - `disabled` optional list of strings for services to disable.
By default the firewall blocks all access, except for services that enable their ports explicitly, such as sshd. The following blueprint can be used to open other ports or services.
Note: Ports are configured using the port:protocol
format; port ranges are configured using portA-portB:protocol
format:
[customizations.firewall]
ports = ["22:tcp", "80:tcp", "imap:tcp", "53:tcp", "53:udp", "30000-32767:tcp", "30000-32767:udp"]
Numeric ports, or their names from /etc/services
can be used in the ports enabled/disabled lists.
The blueprint settings extend any existing settings in the image templates. Thus, if sshd is already enabled, its port list is extended with the ports listed in the blueprint.
If the distribution uses firewalld
you can specify services listed by firewall-cmd --get-services
in a customizations.firewall.services
section:
[customizations.firewall.services]
enabled = ["ftp", "ntp", "dhcp"]
disabled = ["telnet"]
Remember that the firewall.services
are different from the names in /etc/services
.
Both are optional; if they are not used, leave them out or set them to an empty list `[]`. If you only want the default firewall setup, this section can be omitted from the blueprint.
Note: The Google and OpenStack templates explicitly disable the firewall for their environment. This cannot be overridden by the blueprint.
Systemd Services
An optional object containing the following attributes:
- `enabled` an optional list of strings containing services to be enabled.
- `disabled` an optional list of strings containing services to be disabled.
[customizations.services]
enabled = ["sshd", "cockpit.socket", "httpd"]
disabled = ["postfix", "telnetd"]
This section can be used to control which services are enabled at boot time. Some image types already have services enabled or disabled in order for the image to work correctly, and this setup cannot be overridden. For example, the `ami` image type requires the `sshd`, `chronyd`, and `cloud-init` services. Without them, the image will not boot. Blueprint services do not replace these services, but add them to the list of services already present in the templates, if any.
The service names are systemd service units. You may specify any systemd unit file accepted by systemctl enable, for example, cockpit.socket.
Files and directories
You can use blueprint customizations to create custom files and directories in the image. These customizations are currently restricted only to the /etc
directory.
When using the custom files and directories customization, the following rules apply:
- The path must be an absolute path and must be under `/etc` or `/root`.
- There must be no duplicate paths of the same directory.
- There must be no duplicate paths of the same file.
These customizations are not supported for image types that deploy ostree commits (such as edge-raw-image
, edge-installer
, edge-simplified-installer
). The only exception is the Fedora iot-raw-image
image type, which supports these customizations.
Directories
You can create custom directories by specifying items in the customizations.directories
list. The existence of a specified directory is handled gracefully only if no explicit mode
, user
or group
is specified. If any of these customizations are specified and the directory already exists in the image, the image build will fail. The intention is to prevent changing the ownership or permissions of existing directories.
The following example creates a directory /etc/foobar
with all the default settings:
[[customizations.directories]]
path = "/etc/foobar"
mode = "0755"
user = "root"
group = "root"
ensure_parents = false
- `path` is the path to the directory to create. It must be an absolute path under `/etc`. This is the only required field.
- `mode` is the octal mode to set on the directory. If not specified, the default is `0755`. The leading zero is optional.
- `user` is the user to set as the owner of the directory. If not specified, the default is `root`. Can be specified as a user name (string) or as a user id (integer).
- `group` is the group to set as the owner of the directory. If not specified, the default is `root`. Can be specified as a group name (string) or as a group id (integer).
- `ensure_parents` is a boolean that specifies whether to create parent directories as needed. If not specified, the default is `false`.
Files
You can create custom files by specifying items in the customizations.files
list. You can use the customization to create new files or to replace existing ones, if not restricted by the policy specified below. If the target path is an existing symlink to another file, the symlink will be replaced by the custom file.
Please note that the parent directory of a specified file must exist. If it does not exist, the image build will fail. One can ensure that the parent directory exists by specifying it in the customizations.directories
list.
In addition, the following files are not allowed to be created or replaced by policy:
/etc/fstab
/etc/shadow
/etc/passwd
/etc/group
Using the files
customization comes with a high chance of creating an image that doesn't boot. Use this feature only if you know what you are doing. Although the files
customization can be used to configure parts of the OS which can also be configured by other blueprint customizations, this use is discouraged. If possible, users should always default to using the specialized blueprint customizations. Note that if you combine the files customizations with other customizations, the other customizations may not work as expected or may be overridden by the files customizations.
The following example creates a file /etc/foobar
with the contents Hello world!
:
[[customizations.files]]
path = "/etc/foobar"
mode = "0644"
user = "root"
group = "root"
data = "Hello world!"
- `path` is the path to the file to create. It must be an absolute path under `/etc`. This is the only required field.
- `mode` is the octal mode to set on the file. If not specified, the default is `0644`. The leading zero is optional.
- `user` is the user to set as the owner of the file. If not specified, the default is `root`. Can be specified as a user name (string) or as a user id (integer).
- `group` is the group to set as the owner of the file. If not specified, the default is `root`. Can be specified as a group name (string) or as a group id (integer).
- `data` is the plain text contents of the file. If not specified, the default is an empty file.
Note that the data
property can be specified in any of the ways supported by TOML. Some of them require escaping certain characters and others don't. Please refer to the TOML specification for more details.
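For instance, a multi-line TOML string avoids having to escape quotes and newlines; the path and content below are purely illustrative:

[[customizations.files]]
path = "/etc/motd"
data = """
Welcome to the custom image!
This file was created from a blueprint customization.
"""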
Ignition
The customizations.ignition
section allows users to provide Ignition configuration files to be used in edge-raw-image
and edge-simplified-installer
images. Check the RHEL for Edge (r4e
) butane specification for a description of the supported configuration options.
The blueprint configuration can be done either by embedding an Ignition configuration file into the image (only available for edge-simplified-installer
), or providing a provisioning URL that will be fetched at first boot.
ignition.embedded
configuration
[customizations.ignition.embedded]
config = "eyJpZ25pdGlvbiI6eyJ2ZXJzaW9uIjoiMy4zLjAifSwicGFzc3dkIjp7InVzZXJzIjpbeyJncm91cHMiOlsid2hlZWwiXSwibmFtZSI6ImNvcmUiLCJwYXNzd29yZEhhc2giOiIkNiRqZnVObk85dDFCdjdOLjdrJEhxUnhxMmJsdFIzek15QUhqc1N6YmU3dUJIWEVyTzFZdnFwaTZsamNJMDZkUUJrZldYWFpDdUUubUpja2xQVHdhQTlyL3hwSmlFZFdEcXR4bGU3aDgxIn1dfX0="
Add a base64
encoded Ignition configuration in the config
field. This Ignition configuration will be included in the edge-simplified-installer
image.
ignition.firstboot
configuration
[customizations.ignition.firstboot]
url = "http://some-server/configuration.ig"
Add a URL pointing to the Ignition configuration that will be fetched during the first boot in the url
field. Available for both edge-simplified-installer
and edge-raw-image
.
Repositories
Third-party repositories are supported by the blueprint customizations. A repository can be defined and enabled in the blueprints which will then be saved to the /etc/yum.repos.d
directory in an image.
An optional filename
argument can be set; otherwise the repository will be saved using the repository ID, i.e. `/etc/yum.repos.d/<repo-id>.repo`.
.
Please note custom repositories cannot be used at build time to install third-party packages. These customizations are used to save and enable third-party repositories on the image. For more information, or if you wish to install a package from a third-party repository, please continue reading here.
The following example can be used to create a third-party repository:
[[customizations.repositories]]
id = "example"
name="Example repo"
baseurls=[ "https://example.com/yum/download" ]
gpgcheck=true
gpgkeys = [ "https://example.com/public-key.asc" ]
enabled=true
Since no filename is specified, the repo will be saved to /etc/yum.repos.d/example.repo
.
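If a specific filename is preferred, it can be set explicitly; the filename below is only an example:

[[customizations.repositories]]
id = "example"
name = "Example repo"
filename = "example-custom.repo"
baseurls = [ "https://example.com/yum/download" ]
gpgcheck = true
gpgkeys = [ "https://example.com/public-key.asc" ]
enabled = true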
The blueprint accepts the following options:
- `id` (required)
- `name`
- `filename`
- `baseurls` (array)
- `mirrorlist`
- `metalink`
- `gpgkeys` (array)
- `gpgcheck`
- `repo_gpgcheck`
- `priority`
- `ssl_verify`
Note: the baseurls
and gpgkeys
fields both accept arrays as input. One of baseurls
, metalink
& mirrorlist
must be provided
Repository GPG Keys
The blueprint accepts both inline GPG keys and GPG key urls. If an inline GPG key is provided it will be saved to the /etc/pki/rpm-gpg
directory and will be referenced accordingly
in the repository configuration. GPG keys are not imported to the RPM database and will only be imported when first installing a package from the third-party repository.
Filesystems
The blueprints can be extended to provide filesystem support. Currently the `mountpoint` and minimum partition `size` can be set. Custom mountpoints are currently only supported for RHEL 8.5 & RHEL 9.0. For other distributions, only the `root` partition is supported, the size argument being an alias for the image size.
[[customizations.filesystem]]
mountpoint = "/var"
size = 2147483648
In addition to the root mountpoint, /
, the following mountpoints
and their sub-directories are supported:
/var
/home
/opt
/srv
/usr
/app
/data
/tmp
Filesystem customizations are currently not supported for the following image types:
image-installer
edge-installer
(RHEL and CentOS) andiot-installer
(Fedora)edge-simplified-installer
(RHEL and CentOS)
In addition, the following image types do not create partitioned OS images and therefore filesystem customizations for these types are meaningless:
edge-commit
(RHEL and CentOS) andiot-commit
(Fedora)edge-container
(RHEL and CentOS) andiot-container
(Fedora)tar
container
OpenSCAP
From RHEL 8.7
& RHEL 9.1
support has been added for OpenSCAP
build-time remediation. The blueprints accept two fields:
- the `datastream` path to the remediation instructions
- the `profile_id` of the desired security profile
Please see the OpenSCAP page for the list of available security profiles.
[customizations.openscap]
datastream = "/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml"
profile_id = "xccdf_org.ssgproject.content_profile_cis"
Example Blueprints
The following blueprint example will:
- install the `tmux`, `git`, and `vim-enhanced` packages
- set the root ssh key
- add the groups: widget, admin users and students
name = "example-custom-base"
description = "A base system with customizations"
version = "0.0.1"
[[packages]]
name = "tmux"
version = "*"
[[packages]]
name = "git"
version = "*"
[[packages]]
name = "vim-enhanced"
version = "*"
[customizations]
hostname = "custombase"
[[customizations.sshkey]]
user = "root"
key = "A SSH KEY FOR ROOT"
[[customizations.user]]
name = "widget"
description = "Widget process user account"
home = "/srv/widget/"
shell = "/usr/bin/false"
groups = ["dialout", "users"]
[[customizations.user]]
name = "admin"
description = "Widget admin account"
password = "$6$CHO2$3rN8eviE2t50lmVyBYihTgVRHcaecmeCk31LeOUleVK/R/aeWVHVZDi26zAH.o0ywBKH9Tc0/wm7sW/q39uyd1"
home = "/srv/widget/"
shell = "/usr/bin/bash"
groups = ["widget", "users", "students"]
uid = 1200
[[customizations.user]]
name = "plain"
password = "simple plain password"
[[customizations.user]]
name = "bart"
key = "SSH KEY FOR BART"
groups = ["students"]
[[customizations.group]]
name = "widget"
[[customizations.group]]
name = "students"
[[customizations.filesystem]]
mountpoint = "/"
size = 2147483648
[customizations.openscap]
datastream = "/usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml"
profile_id = "xccdf_org.ssgproject.content_profile_cis"
Developer Guide
In this section, you will find a description of the source code in osbuild
organization.
The following scheme describes how separate components communicate with each other:
In the very basic use case where osbuild-composer
is running locally, the "pool of workers" also lives on the user's host machine. The osbuild-composer
and osbuild-worker
processes are spawned by systemd. We don't support any other means of spawning these processes, as they rely on systemd to open sockets, create state directories etc. Additionally, osbuild-worker
spawns osbuild as a subprocess to create the image itself. The whole image building machinery is spawned from a user process, for example, composer-cli
.
General
Glossary
Term | Explanation |
---|---|
AMI | Amazon Machine Image (image type) |
Blueprint | Definition of customizations in the image |
Compose | Request from the user that produces one or more images. Images in a single compose are, in theory, the same, but for different platforms, such as Azure or AWS. In practice they are slightly different because every cloud platform requires a different package set and system configuration. osbuild-composer running the Weldr API can only create one image at a time, so one compose maps directly to one image build. It can map to multiple image builds when used with other APIs, such as the Koji API. |
Composer API | HTTP API meant as publicly accessible (over TCP). It was created specifically for osbuild-composer and does not support some Weldr features like blueprint management, but adds new features like building different distros and architectures. |
GCP | Google Cloud Platform |
Image Build | One request from osbuild-composer to osbuild-worker. Its result is a single image. |
Image Type | Image file format usually associated with a specific use case. For example: AMI for AWS, qcow2 for OpenStack, etc. |
Manifest | Input for the osbuild tool. It should be a precise definition of an image. See https://www.osbuild.org/man/osbuild-manifest.5 for more information. |
osbuild | Low-level tool for building images. Not meant for end-user usage. |
osbuild-composer | HTTP service for building OS images. |
OSTree | Base technology for immutable OS images: Fedora IoT and RHEL Edge |
Repository overrides | osbuild-composer uses its own set of repository definitions. In case a user wants to use custom repositories, "overrides" can be created in /etc/osbuild-composer |
Weldr API | Local HTTP API used for communication between composer-cli/cockpit-composer and osbuild-composer. It comes from the lorax-composer project. |
Workflow
Git Workflow
Commits
Commits should be easy to read.
The commit message should explain clearly what it's trying to do and why. The following format is common but not required:
<module>: Topic of the commit
Body of the commit, describing the changes in more detail.
The <module>
should point to the area of the codebase (for instance tests
or tools
). The topic
should summarize what the commit is doing.
GitHub truncates the first line if it's longer than 65 characters, which is something to keep in mind as well.
A Fixes #issue-number
can be added to automatically link and close a related issue if it exists.
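A made-up example following this convention (module, topic, and issue number are purely illustrative):

tests: Cover the blueprint distro field

Add a regression test that pushes a blueprint with the distro field set
and verifies that depsolving uses the selected distribution.

Fixes #1234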
Pull requests
A pull request should be one or more commits which form a coherent unit, it can be rebased/rewritten/force-pushed until it's fit for merging.
How the PR developed, and the iterations it went through, should not be visible in the git history. The end result counts: a certain amount of commits, each one forming a logical unit of changes. Avoid 'fix-up' commits which tweak previous commits in the PR.
Pull requests should be opened from a developer's own fork to avoid random branches on the origin.
Each pull request should be reviewed, and the CI should pass.
Once a pull request is ready to be merged, it should be merged via the Rebase and merge
or Squash and merge
option. This avoids merge commits on the main branch.
Branches
Force-pushing to, or rebasing the main branch (or other release branches) is not allowed. Avoid directly pushing (fast-forward) to those branches as well. Commits can always be reverted by opening a new pull request.
Code style guidelines
This depends a little bit on the project and the language. Most of our projects have linters available, so do make use of those.
If unsure on how to format a specific statement, try to look for examples in the code.
General
-
No trailing whitespace
-
Avoid really long lines where possible (>120 characters)
-
Single newline at the end of each file
Golang
This is easy, simply use Gofmt.
Python
Python code should follow the PEP 8 style guide.
Shell
ShellCheck is used to lint shell code.
Javascript
Projects like Cockpit Composer use eslint to enforce style.
Releasing
This guide describes the process of releasing osbuild and osbuild-composer to upstream, into Fedora and CentOS Stream.
Clone the release helpers
Go to the maintainer-tools repository, clone the repository and run pip install -r requirements.txt
in order to get all the dependencies to be able to execute the release.py
and update-distgit.py
scripts.
It's also advised to set a GitHub personal access token, otherwise you might run into API usage quotas. Go to Personal access tokens on GitHub and create a new token with scope public_repo
. Now, create a new packit user configuration at ~/.config/packit.yaml
and paste there the following content:
authentication:
github.com:
token: [YOUR_GITHUB_PERSONAL_ACCESS_TOKEN]
Upstream release
Note: Upstream releases are done automatically on a fortnightly alternating schedule, meaning one week we release osbuild and then the next week we release osbuild-composer.
Manual upstream release
Navigate to your local repository in your terminal and call the release.py
script. It will interactively take you through the following steps:
-
Gather all pull request titles merged to
main
since the latest release tag -
Create a draft of the next release tag
While writing the commit message, keep in mind that it needs to conform to both Markdown and git commit message formats; have a look at the commit message for one of the recent releases to get a clear idea of how it should look.
-
Push your signed git tag to
main
From here on a GitHub composite action will take over and
- Create a GitHub release based on the tag (version and message)
- Bump the version in
osbuild.spec
orosbuild-composer.spec
(and potentiallysetup.py
) - Commit and push this change to
main
so the version is already reflecting the next release
Fedora release
We use packit (see .packit.yml
in the osbuild or osbuild-composer repository respectively or the official packit documentation) to automatically push new releases directly to Fedora's dist-git.
Then our fedora-bot takes over and performs the remaining steps:
- Get a kerberos ticket by running
kinit $USER@FEDORAPROJECT.ORG
- Call
fedpkg build
to schedule Koji builds for each active Fedora release (or: dist-git branch) - Update Bodhi with the latest release
CentOS Stream / RHEL releases
If you are a Red Hat employee, please continue reading about this in our internal release guide.
Spreading the word on osbuild.org
The last step of releasing a new version is to create a new post on osbuild.org. Just open a PR in osbuild/osbuild.github.io. You can find a lot of inspiration in existing release posts.
Backporting simple fixes
Overview
Method | Complexity | Time | Notes |
---|---|---|---|
COPR | Easy | Immediate | Unofficial, but the fix is already there |
Unsigned RPM | Easy | 1 week | Unofficial in that it's unsigned |
Async Update to z-stream | Difficult | 2 weeks | Requires sign-off from many and could be rejected |
Batch Update to z-stream | Medium | up to 8 weeks | Red Hat's preferred method |
How to
COPR
- Share the correct COPR package version and URL
Unsigned RPM
- Take the latest spec file from downstream
- Create a downstream patch and add it
- Create a scratch build in Brew
Latest RPM builds
While developing osbuild and osbuild composer it is convenient to download the latest RPM builds directly from upstream. The repositories in the osbuild organization don't use any automation from Copr or Packit. Instead, the RPMs are built directly in the Jenkins CI and stored in AWS under the commit hash which allows anyone to download precisely the version built from a desired commit.
The URL is specified in the mockbuild.sh
scripts in the osbuild and osbuild-composer repositories:
And the final resulting URL is displayed in the Jenkins output (available only from Red Hat VPN).
Common trap: If you click on a link to a repo, such as:
you will get HTTP 403 because that's a directory and we don't allow directory listing. If you append a known file path, such as repodata/repomd.xml
you will see that the repo is there:
Testing strategy
Let me start with a quote:
As the team obsessed with immutable test dependencies, how could we use ..
One osbuild developer in one PR fixing one more piece of infrastructure which could still change.
TODO: what do we test in each repo
osbuild-composer
This section provides a basic summary of the various types of testing done for osbuild-composer
. Detailed information about testing can be found in the upstream repository.
Unit tests
There is pretty heavy mocking in the osbuild-composer codebase.
The HTTP API is unit-tested without any network communication (there is no socket); only the HTTP requests/responses are tested.
Integration tests
These test cases live under test/cases
and each of them is a standalone script. Some of them invoke additional binaries which live under cmd
if not specified otherwise.
-
api.sh [aws|azure|gcp]
- test the Cloud API (running at localhost:443)-
Provisions osbuild-composer and locally running remote worker.
-
Creates a request for compose and uploads the image to specified cloud provider. Currently AWS, Azure and GCP are supported.
-
The uploaded image is used for a VM instance in the respective cloud environment, booted and connected to via SSH. This is currently tested only for AWS and GCP.
-
Requires credentials for the respective cloud provider to work properly.
-
-
aws.sh
Use osbuild-composer "the way we expect our customers to use it". That means provision
osbuild-composer
and use Weldr API to build an AMI image and upload it to EC2. Then use theaws
CLI tool to spawn a VM from the image and make sure it boots and can be accessed.- Requires AWS credentials
-
base_tests.sh
This script runs binaries implemented as part of osbuild-composer codebase in golang. It provisions osbuild-composer and then runs the tests in a loop.
-
osbuild-composer-cli-tests
- Weldr API tests using composer-cli- Executing
composer-cli
utility - Invoke multiple image builds
- Executing
-
osbuild-weldr-tests
- Weldr API tests using golang library frominternal/client
- These live directly in the
internal
directory, which is a bit odd given that all other tests live undercmd/
, but there might be a reason for this. - They invoke a build of a qcow2 image
- These live directly in the
-
osbuild-dnf-json-tests
- These make sure the interface to dnf still works-
This binary will execute
dnf-json
multiple times and it will also run multiplednf
depsolving tasks in parallel. It is possible that it will require a high amount of RAM. -
My guess would be at least 2GB memory for a VM running this test.
-
-
osbuild-auth-tests
- Make sure the TLS certificate authentication works as expected for the koji api and worker api sockets.- A certificate authority is created for these tests and the files are stored in
/etc/osbuild-composer-test/ca
- The certificates live in the standard configuration directory:
/etc/osbuild-composer
- Multiple certificates are created:
- For osbuild-composer itself (let's say a "server" certificate)
- For osbuild-worker
- For a client application, in this case the test binary
- For kojihub
- A certificate authority is created for these tests and the files are stored in
-
-
image_tests.sh
Possibly the most resource-hungry test case. It builds an image for all supported image types for all supported distributions on all supported architectures (note that every distro has a different set of arches, and every arch has a different set of supported image types, e.g. there is no s390x image for AWS because there is no such machine). The "test cases" are defined in
test/cases/manifests
and they contain a boot type (where to spawn the VM), compose request (what to ask Weldr API for), and finally the expected manifest. Osbuild-composer should generate the same manifest, build the image successfully, optionally upload it to a cloud provider, boot the image, and finally verify it is running.- Require AWS, Openstack, and Azure credentials
-
koji.sh
Runs a koji instance in a container. It sets up certificates and Kerberos KDC because osbuild-composer uses Kerberos to authenticate with Koji.
-
ostree.sh
This test case creates an OSTree commit, boots it, then it creates a commit with an upgrade on top of the previous commit and makes sure the VM can upgrade to the new one.
- Uses libvirt to run the VM
-
qemu.sh
Create a qcow2 image and boot it using libvirt.
Leaking resources
The cloud-cleaner binary was created to clean up all artifacts (like images, but also registered AMIs, security groups, etc.) that could be left behind. Not all executables in our CI have proper error handling and clean-up code. What is even worse, if Jenkins fails and takes down all running jobs, it is possible that the clean-up code will not run even if it is implemented.
Possibly leaking resources:
-
api.sh
test case:- Image uploaded to AWS, Azure or GCP
-
aws.sh
test case:-
Image uploaded to EC2
-
VM running in EC2
-
RPM Repository Snapshots
In order to provide a stable base for the tests, the maintainer team created the RPMRepo project that periodically snapshots repositories of selected distributions.
Projects
osbuild
The osbuild project is the heart of Image Builder. It is a command-line tool for building OS images. It takes a manifest as input and produces an image as output. osbuild defines a pipeline description to build arbitrary operating system artifacts out of many small and self-contained stages. Furthermore, it provides an execution engine that will consume a pipeline description, execute the pipeline, and provide the resulting image back to the user. The osbuild interfaces are meant to be used by machines, not humans. Therefore, access to osbuild resources should only be required if you plan to develop new osbuild frontends, debug osbuild failures on your own, or contribute to the osbuild development.
Manifests
The manifest consists of:
- sources section
- pipeline
In our usual use case, which is tied to Fedora and RHEL and not applicable to other non-RPM distros, the sources section contains an `org.osbuild.files` section, which is a list of RPMs described by their name, hash, and URL for downloading. We do not support metalink at the moment.

This section is, very often, a source of build failures. This happens because we can only include a single link and RPM repos are often unstable. Furthermore, we need to set a timeout for the `curl` download, because we want the build to time out eventually in case the RPMs are unavailable, but it sometimes fails on slow Internet connections as well.
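As a rough sketch (the checksum and URL are placeholders), such a sources section looks roughly like this in a version 1 manifest:

"sources": {
  "org.osbuild.files": {
    "urls": {
      "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "https://example.com/repo/Packages/t/tmux-2.9a-1.el8.x86_64.rpm"
    }
  }
}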
The pipeline consists of a series of stages and ends with an assembler. A stage is our unit of filesystem tree modification and it is implemented as a standalone executable. For example, we have a stage for installing RPM packages, adding a user, enabling systemd service, or setting a timezone.
The difference between a stage and an assembler is that the former takes a read-write filesystem-tree and performs a certain modification to it, whereas the latter takes a read-only filesystem tree and produces an output artifact.
The pipeline contains one more "nested" pipeline, which does not have an assembler. It is called a "build" pipeline.
High level goals
- reproducibility
- extensibility
The ideal case for building images would be that, given the same input manifest, the output image would always be the same no matter what machine was used for building it, where "the same" means binary equivalence. The world of IT is, of course, not ideal, so we define reproducibility as functional equivalence (that is, the image behaves the same when built on different machines) and we limit the set of build machines only to those running the same distribution, in the same version, and on the same architecture. That means if you want to build a Fedora 33 aarch64 image, you need a Fedora 33 aarch64 machine.
It is possible to run a RHEL pipeline on Fedora, for example, but we do not test it and therefore we can't promise it will produce the correct result.
The advantage of the stage/assembler model is that any user can extend the tool with their own stage or assembler.
How osbuild works in practice
The following subsections describe how OSBuild tries to achieve the outlined high level goals.
Manifest versions
OSBuild accepts two versions of manifests. Both manifests are plain JSON files. The following sections contain examples of both (note that comments are not allowed in JSON, so the examples below are not actually valid JSON).
Version 1
The version 1 manifest is built around the idea that an artifact is produced by downloading files from the Internet (e.g. RPMs), using them to build and modify a filesystem tree (using stages), and finally using a read-only version of the final filesystem tree as an input to an assembler which produces the desired artifact.
{
# This version contains 2 top-level keys.
# First sources, these get downloaded from a network and are available
# in the stages.
"sources": {},
# Second is a pipeline, which can optionally contain a nested "build"
# pipeline.
"pipeline": {
# The build pipeline is used to create a build container that is
# later used for building the actual OS artifact. This is mostly
# to increase reproducibility and host-guest separation.
# Also note that this is optional.
"build": {
"pipeline": {
"stages": [
{
"name": "",
"options": {}
},
{
"name": "",
"options": {}
}
],
"runner": ""
}
},
# The pipeline itself is a list of osbuild stages.
"stages": [
{
"name": "",
"options": {}
},
{
"name": "",
"options": {}
}
],
# And finally exactly one osbuild assembler.
"assembler": {
"name": "",
"options": {}
}
},
}
Version 2
Version 2 is more complicated because OSBuild needed to cover additional use cases like an OSTree commit inside of an OCI container. In general, that is an artifact inside of another artifact. This is why it comes with multiple pipelines.
{
# This version has 3 top-level keys.
# The first one is simply a version.
"version": "2",
# The second one are sources as in version 1, but keep in mind that in this
# version, stages take inputs instead of sources because inputs can be both
# downloaded from a network and produced by a pipeline in this manifest.
"sources": {},
# This time the 3rd entry is a list of pipelines.
"pipelines": [
{
# A custom name for each pipeline. "build" is used only as an example.
"name": "build",
# The runner is again optional.
"runner": "",
"stages": [
{
# The "type" is same as "name" in v1.
"type": "",
# The "inputs" field is new in v2. You can specify what goes to
# the stage. Example inputs are RPMs and OSTree commits from the
# "sources" section, but also filesystem trees built by othe
# pipelines.
"inputs": {},
"options": {}
}
]
},
{
# Again only example name.
"name": "build-fs-tree",
# But this time the pipeline can use the previous one as a build pipeline.
# The name:<something> is a reference format in OSBuild manifest v2.
"build": "name:build",
"stages": []
},
{
"name": "do-sth-with-the-tree",
"build": "name:build",
"stages": [
{
"type": "",
"inputs": {
# This is an example of how to use the filesystem tree built by
# another pipeline as an input to this stage.
"tree": {
"type": "org.osbuild.tree",
"origin": "org.osbuild.pipeline",
"references": [
# This is a reference to the name of the pipeline above.
"name:build-fs-tree"
]
}
},
"options": {}
}
]
},
{
# In v2 the assembler is a pipeline as well.
"name": "assembler",
"build": "name:build",
"stages": []
}
]
}
Components of osbuild
OSBuild is designed as a set of loosely coupled or independent components. This subsection describes each of them separately so that the following section can describe how they work together.
Object Store
The Object Store is a directory (and the class representing it) that contains multiple filesystem trees. Each filesystem tree lives in a directory whose name is the hash of the pipeline that produced it. In OSBuild, a user can specify a "checkpoint", which stores a particular filesystem tree inside the Object Store.
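For illustration only, a populated store might look roughly like this (the object names below are made up; the real directory names are the IDs derived from the pipelines that produced them):
$ tree -L 1 store/objects
store/objects
├── 0129a8e4   # a checkpointed filesystem tree, named after the pipeline that produced it
└── 7cbf02de   # another cached tree, reused on later runs if the pipeline is unchanged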
Build Root
It is a directory where OSBuild modules (stages and assemblers) are executed. The directory contains a full operating system, which is composed of several things:
- Executables and libraries needed for building the OS artifact (these are either from the host or created in a build pipeline).
- A directory where the resulting filesystem tree resides.
- A few directories bind-mounted directly from the host system (like /dev).
- API sockets for communication between the stage running inside the container and the osbuild process running outside of it (directly on the host).
Sources
Sources are artifacts that are downloaded from the Internet. For example, generic files downloaded with curl
, or OSTree commits downloaded using libostree
.
Inputs
Inputs are a generalization of the concept of sources, but this time an "input" can either be downloaded, as sources are, or generated by an osbuild pipeline. That means one pipeline can be used as an input for another pipeline, so you can have an artifact inside of an artifact (for example, an OSTree commit inside of a container).
APIs
OSBuild allows for bidirectional communication from the build container to the osbuild process running on the host system. It uses Unix-domain sockets and JSON-based communication (jsoncomm
) for this purpose. Examples of available APIs:
- osbuild - provides basic osbuild features like passing arguments to the stage inside the build container or reporting exceptions from the stage back to the host
- remoteloop - helps with setting up loop devices on the host and forwarding them to the container
- sources - runs a source module and returns the result
What happens during simplified osbuild run
This section puts the above concepts into context. It does not aim to describe all the possible code paths; to understand osbuild properly, you need to read the source code, but this overview should help you get started.
During a single osbuild
run, this is what usually happens:
- Preparation
  - Validate the manifest schema to make sure it is either a v1 or a v2 manifest.
  - Instantiate the Object Store, either from an empty directory or from an existing one that might already contain cached filesystem trees.
- Processing the manifest
  - Download sources.
  - Run all pipelines sequentially.
- Processing a pipeline (one of N)
  - Check the Object Store for cached filesystem trees and start from there if it already contains a partially built artifact.
- Processing a module (stage or assembler)
  - Create a BuildRoot, which means initializing a bwrap container, mounting all necessary directories, and forwarding API sockets.
  - From the build container, use the osbuild API to get arguments and run the module.
- If an assembler is present in the manifest, run it and store the resulting artifact in the output directory.
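As a rough sketch, a typical invocation that exercises the steps above looks like this (the paths are examples, and $STAGE_ID stands for an ID obtained from osbuild --inspect, as shown later in this document):
# Build manifest.json, cache ("checkpoint") an intermediate tree, and export the result.
sudo osbuild \
    --store store/ \
    --output-directory output/ \
    --checkpoint "$STAGE_ID" \
    --export "$STAGE_ID" \
    manifest.json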
Issues that do not fit into the high level goals
Bootstrapping the build environment
The "build" pipeline was introduced to improve reproducibility. Ideally, given a build pipeline, one would always get the same filesystem tree. But to create the first filesystem tree, you need some tools. So where do you get them from? From the host operating system (OS), of course. The problem with getting tools from the host OS is that the host can affect the final result.
We've already had this issue many times, because most of the usual CLI tools were not created with reproducibility in mind.
The struggle with GRUB
The standard tooling for installing GRUB does not fit our stage/assembler concept because it wants to modify the filesystem tree and create the resulting artifact at the same time. As a result, we have our own reimplementation of these tools.
Running OSBuild from sources
It is not strictly required to run OSBuild from an installed RPM package. However, if you attempt to run osbuild
from the command line in combination with an SELinux stage in the manifest, it will most likely fail. For example:
$ python3 -m osbuild
The cause of the error is a lack of proper SELinux labelling of the python3
executable and of all stages and assemblers. Creating two additional files resolves the problem:
- A new entrypoint which will get the right SELinux label; let's call it
osbuild-cli
:
#!/usr/bin/python3

import sys

from osbuild.main_cli import osbuild_cli as main


if __name__ == "__main__":
    r = main()
    sys.exit(r)
- A script to relabel all the files that need it:
#!/bin/bash

LABEL=$(matchpathcon -n /usr/bin/osbuild)

echo "osbuild label: ${LABEL}"

chcon ${LABEL} osbuild-cli

find . -maxdepth 2 -type f -executable -name 'org.osbuild.*' -print0 |
    while IFS= read -r -d '' module; do
        chcon ${LABEL} ${module}
    done
Now run the script and use the entrypoint to execute OSBuild from the git checkout.
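For example, assuming the relabelling script above was saved as relabel.sh (an arbitrary name) and both files were made executable:
chmod +x osbuild-cli relabel.sh
./relabel.sh                 # apply the osbuild SELinux label to the entrypoint and modules
sudo ./osbuild-cli --store store/ --output-directory output/ manifest.json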
Stage development
Stage unit testing
To update a stage unit test, modify the appropriate test/data/stages/<stage_suffix>/b.mpp.json
.
Regenerate testing manifests:
make test-data
You can run the osbuild
stage tests for a specific stage only:
sudo python3 -m pytest test/run/test_stages.py -k test_<stage_suffix>
Based on the result of the unit test, adjust test/data/stages/<stage_suffix>/diff.json
Inspecting the filesystem tree modified by the stage using the unit test manifest
# needed only first time
mkdir -p store/
mkdir -p output/
rm -rf rpmbuild
make rpm
sudo dnf install -y rpmbuild/RPMS/noarch/*.rpm
sudo rm -rf store/*
# This command assumes that the pipeline stage you want to inspect has index "1".
# If that is not the case, adjust the index in `jq .pipeline.stages[1].id`.
STAGE_ID=$(osbuild --inspect test/data/stages/<stage_suffix>/b.json | jq .pipeline.stages[1].id | tr -d '"')
sudo osbuild --store store/ --checkpoint "$STAGE_ID" --export "$STAGE_ID" --output-directory output/ test/data/stages/<stage_suffix>/b.json
The modified filesystem tree will be located in store/objects/<stage_id>/
Special case - the stage requires an additional dependency
If the additional dependency is not present in the build pipeline of the stage test manifest, you'll have to add it. Modify the appropriate manifest imported in the build pipeline of the b.mpp.json
file. This may be, e.g., the f34-build.json
present in test/data/manifests/
. Modify its "mpp" version, e.g. test/data/manifests/f34-build.mpp.json
, and run make test-data
in the git checkout root.
osbuild CI runs unit tests inside a special osbuild-ci
container. If the stage imports a 3rd party Python module, then you will have to make sure that this Python module is present in the container image. Adding the dependency to the build pipeline covers only the case when stages are tested, but not other types of unit testing. In order to extend the osbuild-ci
image, you need to submit a Pull Request against the OSBuild Containers repository.
osbuild-composer
It is a web service for building OS images. The core of osbuild-composer
, which is common to all APIs, is osbuild manifest generation and job queuing. For an operating system to be supported by osbuild-composer
, it needs manifest generation code in the internal/distro
directory. So far, we focus only on RPM-based distributions, such as Fedora and RHEL. The queuing mechanism is under heavy development at the moment.
Interfacing with dnf package manager
We use our own custom wrapper for dnf
, which we simply call dnf-json
because its interface works like this:
- Stdin - takes a JSON object
- Stdout - returns a JSON object
- The return code is used only for
dnf-json
internal errors, not for errors in the operation specified on the input; those errors are reported in the returned JSON object.
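A minimal sketch of that convention (the request and response fields are defined by dnf-json itself and are not spelled out here; ./dnf-json stands for wherever the wrapper lives in your installation):
# Feed a request on stdin, read the reply on stdout.
echo "$REQUEST_JSON" | ./dnf-json > response.json
# A non-zero exit status means dnf-json itself failed; errors from the requested
# dnf operation are reported inside response.json instead.
echo "dnf-json exit status: $?"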
Local API - Weldr
This API comes from the Lorax-composer project
. osbuild-composer
was created as a drop-in replacement for Lorax, which influenced many design decisions. It uses a Unix-domain socket, so it is meant for local use only. There are two clients:
- composer-cli / weldr-client
- cockpit-composer (branded as Image Builder in the Cockpit console)
Activate this API by invoking systemctl start osbuild-composer.socket
. Systemd will create a socket at /run/weldr/api.socket
.
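For example, once the socket is active, the CLI client can talk to it (composer-cli is provided by the weldr-client package on newer distributions; run it as root or as a member of the weldr group):
sudo systemctl start osbuild-composer.socket
composer-cli status show        # talks to /run/weldr/api.socket
composer-cli blueprints list    # list the blueprints known to the service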
Remote API - Cloud API
This is the /api/image-builder-composer/v2/
API endpoint. There are currently two clients integrating with osbuild-composer
using this API:
- image-builder, described in more detail in the Image Builder service architecture document.
- koji-osbuild plugin, which integrates
osbuild-composer
with the Koji build system.
Local Cloud API Development
The following instructions assume you are running osbuild-composer in a local
VM on some version of Fedora and that you have the osbuild-composer GitHub
repository available. The VM should be reachable over SSH from the host system. In
these examples, localvm
is used as an alias for the VM's SSH settings in my
~/.ssh/config
file.
Setup Local API Access
The osbuild-composer cloud API listens on port 443, but it requires SSL
certificates in order to authenticate the requests. You can generate the
needed certificates using a slightly modified script from the ./tools/
directory; the system running the script needs to have openssl
installed.
These changes will let you use curl on the VM to POST the composer API JSON request files to the service listening on 127.0.0.1:443.
From the osbuild-composer git repo copy ./tools/gen-certs.sh
and
./test/data/x509/openssl.cnf
to a temporary directory. Edit the
gen-certs.sh
script and replace all of the subjectAltName=
entries with
subjectAltName=IP:127.0.0.1
and generate new certs like so:
./gen-certs.sh /tmp/openssl.cnf /tmp/local-certs/ /tmp/working-certs/
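If you prefer not to edit gen-certs.sh by hand, a sed substitution along these lines can make the subjectAltName change before running the script (illustrative only; review the result, since the exact formatting of the entries may differ between versions):
sed -i 's/subjectAltName=[^"]*/subjectAltName=IP:127.0.0.1/g' gen-certs.sh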
Copy the new certs to the VM:
scp /tmp/local-certs/* localvm:/etc/osbuild-composer/
SSH into the VM, stop any currently running osbuild services, and then start the cloud API socket services by running:
systemctl stop '*osbuild*'
systemctl start osbuild-composer-api.socket osbuild-remote-worker.socket
Make a helper script to POST JSON cloud API requests to the service. Save this
in a file named start-cloudapi
on the VM and make it executable:
#!/usr/bin/sh
curl -v -k --cert /etc/osbuild-composer/client-crt.pem \
    --cacert /etc/osbuild-composer/ca-crt.pem \
    --key /etc/osbuild-composer/client-key.pem \
    https://localhost/api/image-builder-composer/v2/compose \
    --header 'Content-Type: application/json' \
    --data "@$1"
Now you need a simple request to create a guest (qcow2) image. This uses Fedora 38, and
doesn't include gpg key checking. Save this as simple-guest.json
:
{
  "distribution": "fedora-38",
  "image_request": {
    "architecture": "x86_64",
    "image_type": "guest-image",
    "repositories": [
      {
        "name": "fedora",
        "metalink": "https://mirrors.fedoraproject.org/metalink?repo=fedora-38&arch=x86_64",
        "check_gpg": false
      },
      {
        "name": "updates",
        "metalink": "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f38&arch=x86_64",
        "check_gpg": false
      },
      {
        "name": "fedora-modular",
        "metalink": "https://mirrors.fedoraproject.org/metalink?repo=fedora-modular-38&arch=x86_64",
        "check_gpg": false
      },
      {
        "name": "updates-modular",
        "metalink": "https://mirrors.fedoraproject.org/metalink?repo=updates-released-modular-f38&arch=x86_64",
        "check_gpg": false
      }
    ]
  }
}
Use ./start-cloudapi simple-guest.json
to start the build. You should get a JSON response similar to this:
{"href":"/api/image-builder-composer/v2/compose","kind":"ComposeId","id":"f3ac9290-23c0-47b4-bb9e-cadee85d1340"}
This will run the build, but since it doesn't have any upload instructions it
will fail at the upload step and delete the image from the local system.
journalctl -f
will show the progress and the upload error.
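You can also poll the compose status over the same API, reusing the ID returned above (this assumes the v2 composes status endpoint and the same client certificates):
curl -k --cert /etc/osbuild-composer/client-crt.pem \
    --cacert /etc/osbuild-composer/ca-crt.pem \
    --key /etc/osbuild-composer/client-key.pem \
    https://localhost/api/image-builder-composer/v2/composes/f3ac9290-23c0-47b4-bb9e-cadee85d1340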
If you want to upload results to a service, include the upload details in the request. If you want to save the results locally, continue to the next section.
Skip upload and save locally
You can configure osbuild-composer to save the image locally and not try to upload it. This allows you to examine the image, or copy it somewhere to do a test boot of it. This is not enabled normally because there are no provisions for cleaning up the images -- you need to do that manually before your disk runs out of space.
The local_save
upload option is enabled by setting an environment variable
in the osbuild-composer.service
file. You can either edit the file directly,
which will need to be redone every time you update the osbuild-composer RPM,
or you can create a drop-in file by running systemctl edit osbuild-composer.service
and adding these lines:
[Service]
Environment="OSBUILD_LOCALSAVE=1"
You can confirm the change by running systemctl cat osbuild-composer.service
.
Now stop the local osbuild-composer services and start the cloudapi service by
running:
systemctl stop '*osbuild*'
systemctl start osbuild-composer-api.socket osbuild-remote-worker.socket
Make a new composer API request JSON file with the local_save
upload option
set to true. Copy the simple-guest.json
example to local-guest.json
and add
the upload_options
section:
{
  "distribution": "fedora-38",
  "image_request": {
    "architecture": "x86_64",
    "image_type": "guest-image",
    "upload_options": {
      "local_save": true
    },
    "repositories": [ ... SAME AS PREVIOUS EXAMPLE ... ]
  }
}
You can now run ./start-cloudapi local-guest.json
to start the build. You
should get a JSON response similar to this:
{"href":"/api/image-builder-composer/v2/compose","kind":"ComposeId","id":"4674e0d3-ecb3-4cbe-9c31-ca14b7425eaa"}
and monitor the progress with journalctl -f
. When the compose is finished, the result will be saved in
/var/lib/osbuild-composer/artifacts/4674e0d3-ecb3-4cbe-9c31-ca14b7425eaa
Remember to monitor your disk usage; it can fill up quickly if you do not delete old artifact
entries. These are unmanaged, unlike the store used with the Weldr API, so they can be removed manually with a simple rm -rf /var/lib/osbuild-composer/artifacts/*
RPM Repository Snapshots
For reliable continuous development, the OSBuild project employs its own RPM repository snapshots. These snapshots are persistent and immutable. They are meant to be used by test farms, CI systems, and other development tools, in case the official RPM repositories are not suitable.
WARNING: These snapshots are not meant for production use! No guarantee of safety, applicability, or fitness for a particular purpose is made. No security fixes are applied to the repositories!
Target Repositories
The authoritative list of repositories that we target can be found in the
./repo/
subdirectory of the rpmrepo
repository:
https://github.com/osbuild/rpmrepo/tree/main/repo
This directory contains a configuration for each target repository, including the Base-URL that will be sourced for snapshots. The following table gives an overview (possibly outdated) of the repositories we create snapshots for:
| Platform | Version | Architectures                   | Lifetime   |
|----------|---------|---------------------------------|------------|
| Fedora   | 31      | x86_64                          | (obsolete) |
| Fedora   | 32      | x86_64                          | (obsolete) |
| Fedora   | 33      | x86_64                          | 12 months  |
| Fedora   | 34      | x86_64                          | 12 months  |
| Fedora   | 35      | x86_64                          | 12 months  |
| RHEL     | 8.2     | aarch64, ppc64le, s390x, x86_64 | infinite?  |
| RHEL     | 8.3     | aarch64, ppc64le, s390x, x86_64 | infinite?  |
| RHEL     | 8.4     | aarch64, ppc64le, s390x, x86_64 | infinite?  |
| RHEL     | 8.5     | aarch64, ppc64le, s390x, x86_64 | infinite?  |
| RHEL     | 9.0     | aarch64, ppc64le, s390x, x86_64 | infinite?  |
Each target repository has an ID-string that identifies it (which also is the
filename of its target configuration file in the ./repo/
directory). Whenever
a snapshot is created, the snapshot will be identified by that ID-string
suffixed with the date it was created (and possibly some other suffix
identifiers).
An enumeration of all available snapshots of all target repositories can be retrieved via:
$ curl -s https://rpmrepo.osbuild.org/v2/enumerate | jq .
For a given target repository ID-string like el9-x86_64-baseos-n9.0
, the list
of available snapshots can be queried via:
$ curl -s https://rpmrepo.osbuild.org/v2/enumerate/el9-x86_64-baseos-n9.0 | jq .
Usage
We provide an RPM repository for every snapshot, accessible via
rpmrepo.osbuild.org
. The Base URL for a given snapshot is:
https://rpmrepo.osbuild.org/v2/mirror/<storage>/<platform>/<snapshot>/
The parameters are:
| Key        | Value           | Examples                   |
|------------|-----------------|----------------------------|
| <storage>  | public, rhvpn   | public, rhvpn              |
| <platform> | f<num>, el<num> | f33, el8                   |
| <snapshot> | <tag>           | f33-x86_64-devel-20201010  |
The storage key selects the actual data store. Available storage includes public for the anonymous, public storage, and rhvpn for data on Red Hat private infrastructure. The platform key groups the data by platform, which is required for data lifetime management. The snapshot key selects the individual snapshot.
Note that not all data is available on all storage locations and platforms. If you select the wrong combination, you will get 404 replies. As a general rule, you should select the platform based on the snapshot name (e.g., for the snapshot f33-x86_64-devel-20201010 you should use f33 as the platform). As the storage selector, you should use public for all publicly available data and rhvpn for Red Hat internal data.
For instance, to access the F33 snapshot f33-x86_64-fedora-202103231401, use:
https://rpmrepo.osbuild.org/v2/mirror/public/f33/f33-x86_64-fedora-202103231401/
To access the EL8.2 snapshot el8-x86_64-baseos-r8.2-202103231359, use:
https://rpmrepo.osbuild.org/v2/mirror/rhvpn/el8/el8-x86_64-baseos-r8.2-202103231359/
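A snapshot can then be consumed like any other RPM repository, for example via a dnf repo file (a sketch only; the repo ID is arbitrary and gpgcheck=0 is a convenience for test setups, not a recommendation):
sudo tee /etc/yum.repos.d/rpmrepo-f33.repo <<'EOF'
[rpmrepo-f33-fedora]
name=rpmrepo snapshot f33-x86_64-fedora-202103231401
baseurl=https://rpmrepo.osbuild.org/v2/mirror/public/f33/f33-x86_64-fedora-202103231401/
enabled=1
gpgcheck=0
EOF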
Access
By default, all snapshots are publicly available, unless they contain confidential or proprietary data. If you decide to use these snapshots, please contact the OSBuild Developers and give us a short notice, so we can track the users and communicate upcoming changes:
- RPMrepo Issue Tracker: @GitHub