Monday, February 17, 2025

Deploying CoreDNS as a systemd Service

DNS Service - CoreDNS

CoreDNS was developed for use in Kubernetes as a lightweight name server for containerized services. For a small- to medium-sized network, CoreDNS is much simpler to configure and operate than any of the production-quality alternatives.

This post is meant to demonstrate the use of systemd services running as software containers on Fedora CoreOS. However, the technique is applicable to any host that has Podman 4.4.0 or later installed and can be managed by Ansible.

If you are curious about provisioning Fedora CoreOS for this demonstration, see the previous posts in this series for guidance.


Running CoreDNS

The CoreDNS configuration consists of a single configuration file and a set of DNS zone files. The files can be contained in a single directory, commonly /opt/coredns. The program looks for a file called Corefile in the current working directory when it is invoked.

CoreDNS is meant to be run in a software container. Fedora CoreOS provides Podman, a drop-in CLI replacement for Docker, along with the runc container runtime.

DNS servers listen on UDP port 53 for queries. TCP port 53 is used for operations such as zone transfers and responses too large for UDP. By default, CoreDNS listens on all configured interfaces.

The configuration used in this example is meant for use inside a firewalled network. It does not serve queries from devices outside the local network. It is meant to provide a split-DNS view: the local zones are answered authoritatively, and everything else is forwarded to upstream resolvers.

The CoreDNS container image is published at:

docker.io/coredns/coredns:latest

The coredns-server Playbook

The goal of the coredns-server playbook is to install and configure CoreDNS on a set of servers. The servers need to listen for and respond to DNS queries on port 53/UDP on one of a set of listed IPv4 addresses. The service runs in a software container and is managed as a systemd service.

The deployment steps can be grouped into four related sets of tasks:

  1. Switch to static resolver

  2. Configure network interface

  3. Deploy CoreDNS configuration

  4. Configure systemd service

For clarity's sake these four are broken down into separate task files in the coredns-server role. These are detailed in corresponding sections below.

An Ansible playbook is defined in a YAML-formatted file. It is possible to contain the entire playbook in a single file, but it is usually helpful to have the playbook use a role. Roles are re-usable bundles of tasks, files, templates, and handlers that can be shared between playbooks.

---
#
# The playbook creates a DNS server on the target hosts using CoreDNS
# It populates the zone files from files/zones
#
- name: CoreDNS Server
  hosts: dnsservers
  become: true

  vars_files:
    - dns_services.yaml

  roles:
    - coredns-server
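
The playbook runs against a dnsservers host group, which must exist in the Ansible inventory. A minimal inventory sketch might look like the following; the host names match the keys used in dns_services.yaml below, and ansible_user: core reflects the default CoreOS user (adjust both for your environment).

---
all:
  children:
    dnsservers:
      hosts:
        pi4-1:
        pi4-2:
      vars:
        ansible_user: core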

The dns_services.yaml file specifies the parameters for the CoreDNS server. Among these are the locations and zones for the zone files. These reside in files/zones in the Ansible directory. The zone files here are static, follow the RFC standards, and will be familiar to anyone who has configured ISC BIND. They could be produced mechanically from other databases, but that is outside the scope of this project.

Note
The dns_services.yaml file contains global variables that are not defined in the playbook itself. It is stored at the top of the Ansible tree along with the zone files in files/zones.

dns_services.yaml
---
#
# DNS services for the example.org network
#
dns:
  nameservers:
    pi4-1:
      fqdn: ns1.example.org
      ipv4: 192.168.2.10
    pi4-2:
      fqdn: ns2.example.org
      ipv4: 192.168.2.11

  forwarders:
    - 192.168.2.1
    - 4.2.2.1     # Level3 caching DNS server IP address
    - 1.1.1.1     # Cloudflare caching DNS server IP address

  zones:
    - fqdn: example.org
      file: example.org.zone
    - fqdn: lab.example.org
      file: lab.example.org.zone

  search:
    - lan    # mDNS from Google Mesh DNS
    - example.org
    - lab.example.org
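
The zone files themselves are ordinary RFC 1035 master files. A minimal, illustrative files/zones/example.org.zone using the names and addresses from the variables above could look like this (the serial number and hostmaster address are placeholders):

$ORIGIN example.org.
$TTL 3600
@       IN  SOA  ns1.example.org. hostmaster.example.org. (
                 2025021701 ; serial
                 3600       ; refresh
                 900        ; retry
                 604800     ; expire
                 300 )      ; negative caching TTL
        IN  NS   ns1.example.org.
        IN  NS   ns2.example.org.
ns1     IN  A    192.168.2.10
ns2     IN  A    192.168.2.11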

The coredns-server Role

This role encapsulates the process of installing a CoreDNS server on a host. The broad steps are described above.

coredns-server role tree
roles/coredns-server/
├── files
│   └── coredns.container
├── handlers
│   └── main.yaml
├── tasks
│   ├── config_files.yaml
│   ├── main.yaml
│   ├── network.yaml
│   ├── resolver.yaml
│   └── systemd_service.yaml
└── templates
    ├── Corefile.j2
    └── resolv.conf.j2

5 directories, 9 files

The task files are the primary driver of a playbook and role. The rest of the files provide resources that serve the tasks as they are run. The file main.yaml acts as the entry point for the tasks defined in the tasks/ subdirectory. The tasks are defined as if they were part of a playbook, as a YAML list. The main.yaml file refers to a set of smaller task files, grouping the tasks functionally.

---
#
# Coordinate creating a coredns service container
#
- name: Disable systemd-resolved and set static resolver file
  import_tasks: resolver.yaml

- name: Configure and set DNS Listener IP address
  import_tasks: network.yaml

- name: Place the Configuration Files
  import_tasks: config_files.yaml

- name: Prepare Systemd Services
  import_tasks: systemd_service.yaml

Note that the first three sets of tasks are not special for CoreOS. They’re applicable to any DNS service. The final task list is the important one for this series.

Disable Dynamic DNS Resolver Service

Since 2020, with the release of Fedora 33, the local DNS resolver has been systemd-resolved, a daemon integrated with systemd. This daemon listens for local queries and is bound to port 53/UDP. The CoreDNS server needs to bind to the same port, so the systemd-resolved service must be stopped and disabled before CoreDNS can start.

This set of tasks disables the systemd-resolved service and replaces the stock /etc/resolv.conf file with one configured for the target environment.

- name: Disable systemd-resolved - (avoid conflict with coredns)
  service:
    name: systemd-resolved
    state: stopped
    enabled: false

- name: Set static resolver file
  template:
    dest: /etc/resolv.conf
    src: resolv.conf.j2
    owner: root
    group: root
    mode: 0644
    backup: true

resolv.conf.j2
#
# Maintained by Ansible
#
nameserver 127.0.0.1
{% for nameserver in dns.forwarders %}
nameserver {{ nameserver }}
{% endfor %}
search {{ dns.search|join(' ') }}

The resolv.conf file directs local DNS queries to the nameserver on this host first, falling back to the listed forwarders if the local server does not respond.
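
Rendered with the example dns_services.yaml shown earlier, the resulting /etc/resolv.conf on a name server host is:

#
# Maintained by Ansible
#
nameserver 127.0.0.1
nameserver 192.168.2.1
nameserver 4.2.2.1
nameserver 1.1.1.1
search lan example.org lab.example.org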

Set DNS Listener IP Address

The DNS service requires two servers for each domain. The servers must be reachable at fixed, well-known IP addresses because they are the hosts providing name resolution. This step ensures that each server host is listening on one of those two addresses.

This task set finds the default interface on the host and then creates a new connection that attaches to the physical one and answers on the server's listener address. The connection type is macvlan, which allows this interface to be configured manually while the main interface continues to use DHCP for the rest of the network information.

The critical step here is the second one. It creates a virtual interface dedicated to the DNS listener address.

- name: Record interface name(s)
  set_fact:
    default_interface_name: "{{ ansible_default_ipv4.interface }}"
  tags: network

- name: Create macvlan interface for DNS server
  nmcli:
    type: macvlan
    conn_name: coredns
    ifname: coredns
    macvlan:
      mode: 2
      parent: "{{ default_interface_name }}"
    method4: manual
    ip4:
      - "{{ dns.nameservers[ansible_hostname].ipv4 }}/{{ ansible_default_ipv4.prefix }}"
    autoconnect: true
    state: present
  tags: network
  register: macvlan

- name: Restart NetworkManager if needed
  systemd:
    name: NetworkManager
    state: restarted
  when: macvlan.changed is true
  tags: network

This results in three visible changes in the network setup: a new NetworkManager connection, a new ip link, and a new IP address.

$ nmcli --fields connection.id,connection.type,macvlan.parent,macvlan.mode,ipv4.addresses c show coredns
connection.id:                          coredns
connection.type:                        macvlan
macvlan.parent:                         enabcm6e4ei0
macvlan.mode:                           2 (bridge)
ipv4.addresses:                         192.168.2.10/24

$ ip address show coredns
3: coredns@enabcm6e4ei0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:71:b3:d4:46:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.10/24 brd 192.168.2.255 scope global noprefixroute coredns
       valid_lft forever preferred_lft forever

Set CoreDNS Configuration

The system is now able to run a DNS server answering on one of the listener IP addresses specified in the dns_services.yaml data file.

The CoreDNS configuration consists of a single configuration file and a set of zone files. The entire configuration resides in a single directory tree /opt/coredns.

/opt/coredns
/opt/coredns/
├── Corefile
└── zones
    ├── example.org.zone
    └── lab.example.org.zone

2 directories, 3 files

The primary configuration file is the Corefile. It is placed at the root of the /opt/coredns/ tree. When the daemon starts it will use this as the current working directory. It reads the initial config from there.

The Corefile begins with a root (.) server block that caches and forwards queries for zones outside of this network. It then defines the local zones as described in the dns_services.yaml file.

#
# A simple corefile for CoreDNS
#
.:53 {
  cache
  forward . {{ dns.forwarders|join(' ') }}
}

{% for zone in dns.zones %}
{{ zone.fqdn }}:53 {
  file zones/{{ zone.file }}
}
{% endfor %}

For this demonstration the zone files are static text files pulled from the files/zones sub-directory of the Ansible file tree. They will be placed on the target machine in /opt/coredns/zones/. The Corefile contains the zone definitions and loads the files from there.
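
For reference, rendering the template with the example variables produces a Corefile like this in /opt/coredns:

#
# A simple corefile for CoreDNS
#
.:53 {
  cache
  forward . 192.168.2.1 4.2.2.1 1.1.1.1
}

example.org:53 {
  file zones/example.org.zone
}

lab.example.org:53 {
  file zones/lab.example.org.zone
}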

Add systemd Container Service

The final step is the significant one here. So far nothing has been particularly new.

As noted above, CoreDNS is meant to run as a container. Early in 2023, with the release of Podman 4.4, Podman integrated Quadlet, a utility that creates systemd service unit files from a container spec and runs software containers as first-class services. Podman 4.4 or later is available on at least the Debian- and Fedora-derived distributions. Podman is an OS-integrated alternative to Docker. For the purposes of this document, the only important feature is the ability to run standard software containers as systemd services.

The whole point of this series was to get here: Creating a system service on Fedora CoreOS. It appears pretty anticlimactic. It’s rather like painting a room: All the real work is in the preparation. All that’s left to do now is to create one container spec file, reload the systemd daemon and enable/start the service.

- name: Set systemd container file
  copy:
    dest: /etc/containers/systemd/coredns.container
    src: coredns.container
    owner: root
    group: root
    mode: 0644
  register: create_unit

- name: Reload Systemd Units
  systemd_service:
    daemon_reload: true
  notify: Restart CoreDNS Service
  #when: create_unit.changed is true

- name: Enable and Start CoreDNS container
  service:
    name: coredns.service
    state: started
    enabled: true

The container definition is a static file. The Podman components integrated into systemd services take this file and transform it into a systemd service unit file.

[Unit]
Description=CoreDNS Service Container
After=network-online.target

[Container]
Image=docker.io/coredns/coredns:latest

# Expect Corefile and zones/ within the working dir
PodmanArgs=--workdir=/root

PublishPort=53:53/udp
#PublishPort=953:953/udp
#PublishPort=53:53/tcp
#PublishPort=953:953/tcp

# Mount the coredns config dir into the container workingdir
Volume=/opt/coredns:/root

[Install]
# Enable in multi-user boot
WantedBy=multi-user.target default.target

# sudo podman run --detach --rm \
#       --name coredns \
#       --publish 53:53/udp \
#       --volume=/opt/coredns/:/root/ \
#       --workdir=/root \
#       coredns/coredns -conf /root/Corefile

This file is formatted like any other systemd unit file. Only the [Container] section is special to container service operation. That section specifies the location of the service container image and the run-time parameters. The sample above includes the corresponding command to make the mapping from CLI to configuration parameters.

This service starts after the network is online and is enabled for the multi-user target. It listens on port 53/udp; it could be configured for TCP and for DNS over TLS as well if the Corefile configuration calls for it. The unit maps the host /opt/coredns directory to /root inside the container and sets that as the working directory before the container starts. With no arguments, the coredns binary then finds the Corefile in that working directory, as described earlier.
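
Once the .container file is in place and systemd has been reloaded, the generated unit can be inspected and the service checked directly. A quick verification, assuming the service is named coredns.service after the file that defines it:

# Show the systemd unit that Quadlet generated from coredns.container
systemctl cat coredns.service
# Confirm the service is active and the container is running
systemctl status coredns.service
sudo podman ps --filter name=coredns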

Deployment

All the parts are in place now:

  • ✓ Disable systemd-resolved bound to port 53/udp

  • ✓ Configure the nameserver IP address

  • ✓ Place the CoreDNS configuration and zone files

  • ✓ Define a systemd service unit to manage the nameserver process

Confirm the changes to apply
ansible-playbook --check coredns-server-pb.yaml
Deploy the CoreDNS service
ansible-playbook coredns-server-pb.yaml
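
A simple smoke test from any machine on the network confirms both local resolution and upstream forwarding. The addresses and names below are the examples used throughout this post; substitute your own:

# Query one of the new servers for a record in a local zone
dig @192.168.2.10 ns1.example.org +short
# Confirm queries for external domains are forwarded upstream
dig @192.168.2.10 getfedora.org +short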

Operation

Over time the zones being served will need to be updated. Make the needed changes to the zone files (and to dns_services.yaml if zones are added or removed) and then run the playbook with the zones tag.

Update the DNS configuration and content
ansible-playbook --tags zones coredns-server-pb.yaml

With this playbook, changes to the Corefile or zone files will trigger a restart of the coredns service. CoreDNS does include two plugins, reload and auto. The reload plugin tells the daemon to poll the Corefile periodically and to reload when it detects changes. The auto plugin does the same thing for zone files. These can be added later if needed, but the downtime associated with a service restart on a small network is negligible.
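
If zero-restart reloads become worthwhile later, enabling the reload plugin is a small change to the Corefile template. A sketch of the root server block with a 30 second poll interval (the auto plugin would be configured similarly for the zone files):

.:53 {
  cache
  # Poll the Corefile every 30 seconds and reload gracefully on change
  reload 30s
  forward . {{ dns.forwarders|join(' ') }}
}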

To Do

In a larger network with geographically dispersed servers, they would also be set up as primary/secondary and would have zone transfers configured. In this example the network is localized and assumed to be a single site. Since both servers are present it is possible just to update them both at the same time and avoid the complexity of primary/secondary. Adding that would be a reasonable future update.

The CoreDNS container path contains the latest tag and is embedded in the coredns.container systemd file. Ideally the CoreDNS version would be configurable by setting a variable in a file in /etc/sysconfig/coredns. It is not clear if this is possible yet using a Podman quadlet.

Summary

When this procedure is complete there will be two new DNS servers running CoreDNS. They will serve the configured zones and will forward any queries for other domains upstream for resolution. The contents can be updated as needed by updating the zone files, and new zones can be added by editing the dns_services.yaml file and adding new zone files.

The DNS service can be managed on the hosts as a systemd service like any other. Restarts will automatically check for and update the container image. If the host is running Fedora CoreOS, the OS will update and reboot whenever a new OS image is made available. The OS and the CoreDNS service software are decoupled so that there is no possibility of a dependency conflict between them. Both can be rolled back atomically to the last known good version.

The CoreDNS version is allowed to update to the latest version on each restart. If the version must be rolled back, the last known good version can be found in the CoreDNS Releases on GitHub. Update the release tag in the coredns.container file and re-run the playbook to restore service using the required release.

The DHCP servers for the network will need to be configured with the new nameserver information, and any manually configured systems will also need to be updated.

Wednesday, January 29, 2025

Provisioning CoreOS - Intel and Raspberry Pi 4

Provisioning CoreOS

Installing CoreOS on a new system involves just booting a copy of the installation media with the configuration file (produced earlier) embedded. On most systems, the installation media is a USB stick and the target storage is an internal disk or SSD. The CoreOS image is copied bitwise to the destination and is then tuned according to the configuration file. On systems like the Raspberry Pi, which boot from an integrated SD card slot, the bootable media is the target device as well.

It is possible, on systems capable of network boot by DHCP/PXE, to boot in memory over a network and then to install to local disk. In that case the configuration file is retrieved by HTTP(S) from a local web site. This demonstration will only use bootable media.

Both of the procedures shown here are derived from the Fedora CoreOS Documentation, specifically the instructions for Bare Metal Intel and Raspberry Pi 4: EDK2 Combined Fedora CoreOS + EDK2 Firmware Disk. The bare metal procedure is very close to identical. For the Raspberry Pi 4 procedure the individual steps have been scripted into a single command, but that is not strictly necessary.

Bare Metal Installation - Intel

The Bare Metal installation is the most straightforward. It consists of just five steps:

  1. Download CoreOS ISO image

  2. Customize ISO image - Destination Disk and Ignition File

  3. Write ISO to USB Stick

  4. Boot from the USB stick

  5. Boot from the target disk

This example assumes that the Ignition file for the host is coreos-config.ign and the destination disk is /dev/sda. The invocation lets most of the CLI arguments default:

  • architecture: x86_64

  • platform: metal

  • stream: stable

  • format: ISO

The network configuration is allowed to default to DHCP for any NICs that detect link.

# Download the ISO if necessary. The file name is written to stdout
IMAGE_FILE=$(coreos-installer download --format iso)
# Customize the ISO image for the target host
coreos-installer iso customize --dest-device /dev/sda --dest-ignition coreos-config.ign ${IMAGE_FILE}
# Write the ISO to bootable media (/dev/sdX is the USB stick on this workstation, not the target disk)
sudo dd if=${IMAGE_FILE} of=/dev/sdX bs=1M status=progress
# Restore the pristine ISO file for next use
coreos-installer iso reset ${IMAGE_FILE}

Two things to note here about coreos-installer download

  1. It checks if the ISO file exists locally, and if so, checks that it is current before trying to download again.

  2. It writes the filename of the ISO file to stdout on exit. That value can be used in following script lines.

The final step of the list above restores the ISO image for next use. The image file is under 800MB, and so will fit on even very small USB sticks.

At this point booting from the USB will install CoreOS on the target host and disk. Insert the USB stick, boot and use the system boot options to boot from the USB. Observe the installation process from the console. When installation is complete, reboot and remove the USB stick.

Raspberry Pi 4 Installation

Currently only the Raspberry Pi 4 is supported for Fedora CoreOS, and then only using a set of U-Boot or EFI files managed by third parties. The Raspberry Pi 5 can run Fedora with a few minor tweaks, but CoreOS is still waiting for updated EFI and BMC files.

The Raspberry Pi boots from an integrated SD card reader. The CoreOS image is written to the SD card along with the Ignition file. The CoreOS image for aarch64 needs a bootloader, either U-Boot or EFI, to boot correctly. This process installs a set of EFI binaries and auxiliary files into the boot partition to take the place of the typical firmware that other systems would have.

When the SD card is inserted and the system boots for the first time, the kernel and initrd are loaded into memory along with the Ignition file, and the configuration is applied to storage before the disk filesystems are mounted and control is handed to the init process.

The process of writing all the files to the SD card is described in Booting on Raspberry Pi 4 - EDK2: Combined Fedora CoreOS + EDK2 Firmware Disk. First the stock raw CoreOS aarch64 image is written to the SD card. Then the EFI partition is mounted and the EDK2 UEFI firmware is written. At that point the SD card is bootable on a Raspberry Pi 4.

The complete procedure for writing the SD card is provided in the scripts sub-directory: prepare-sd.sh

Connect the SD card to the working system. Make sure that any auto-mounted partitions are unmounted before proceeding. Determine the device path and provide the Ignition file.

prepare-sd.sh
bash scripts/prepare-sd.sh <device path> <ignition file>
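
For example, if the SD card shows up on the workstation as /dev/sdb and the Ignition file from the configuration post is coreos-config.ign (both values are illustrative):

bash scripts/prepare-sd.sh /dev/sdb coreos-config.ign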

As with the Intel media, the final step is to install the SD Card in the Raspberry Pi 4 and power it on. Assuming that the Pi is connected to a network with DHCP and internet access, it will boot, complete the Ignition installation, install Ansible and reboot itself.

In both cases, at that point the new system is reachable by SSH as the core user and it is ready to be managed by Ansible.
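
A quick way to confirm that, using the host name and key pair from the configuration post (both are illustrative values here):

# Log in with the key whose public half was embedded in the Ignition file
ssh -i ~/.ssh/infra-ansible-ed25519 core@infra-01.example.com
# Or, once an inventory is defined, confirm Ansible connectivity
ansible -i inventory.yaml all -m ping -u core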

Finally Ready

With the OS installation complete it becomes possible to start addressing the goal of this series: Deploying containerized network services with Ansible. Keep an eye out for the next post where we’ll configure Ansible and demonstrate that we have connectivity and control of our target hosts.

References

  • coreos-installer
    Usage and arguments for the CoreOS installer binary. This can be run from a live ISO or on a second host to write to the boot media.

  • CoreOS on Bare Metal
    How to install CoreOS on Bare Metal. This includes variants for PXE, and Live ISO installations.

  • CoreOS on Raspberry Pi 4
    How to install CoreOS on Raspberry Pi 4 or 5. This includes instructions for installing EFI boot components that are not present in the Pi boot firmware.

  • Ignition
    Ignition is the engine that applies the provided configuration to a new CoreOS instance on first boot.

  • UEFI-Shell
    A UEFI Shell built from EDK2 sources

  • Raspberry Pi 4 UEFI Firmware Images
    A build of the UEFI-Shell specifically for Raspberry Pi 4

Thursday, January 16, 2025

CoreOS Configuration - Less is the right amount

Configuring CoreOS

There are already a number of good resources for deploying CoreOS to various systems; see the References below. This document focuses on the particulars of configuring CoreOS as a base for small and medium network infrastructure services.

The Principle of Least Config

In keeping with the minimalist philosophy of CoreOS, the configuration will apply only those settings necessary to boot the system and provide remote access and configuration management. The first two are fairly trivial, but the last involves a bit of system gymnastics.

The CoreOS configuration is applied at first boot and is provided to the installer when writing the boot media to storage.

coreos-infra.bu
---
# 1 - Specify the target and schema version
variant: fcos
version: 1.6.0

# 2 - Provide an ssh public key for the core user
passwd:
  users:
    - name: core
      ssh_authorized_keys_local:
        - infra-ansible-ed25519.pub

storage:
  files:

    # 3 - Define the system hostname
    - path: /etc/hostname
      contents:
        inline: |
          infra-01.example.com

    # 4a - A script to overlay the ansible packages and clean up
    - path: /usr/local/bin/install-overlay-packages
      user:
        name: root
      group:
        name: root
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
          if [ -x /usr/bin/ansible ] ; then
            rm /usr/local/bin/install-overlay-packages
            systemctl disable install-overlay-packages
            rm /etc/systemd/system/install-overlay-packages.service
          else
            rpm-ostree install --assumeyes ansible
            systemctl reboot
          fi

systemd:
  units:

    # 4b - Define a one-time service to run at first boot
    - name: install-overlay-packages.service
      enabled: true
      contents: |
        [Unit]
        Description=Install Overlay Packages
        After=systemd-resolved.service
        Before=zincati.service

        [Service]
        Type=oneshot
        ExecStart=/usr/local/bin/install-overlay-packages

        [Install]
        WantedBy=multi-user.target

1 - Butane Preamble

The Butane configuration schema begins with two values that identify the target OS and the schema version itself.

variant: fcos
version: 1.6.0

This indicates that the file targets Fedora CoreOS and the schema version is 1.6.0. This assists the parser in validating the remainder of the configuration against the indicated schema.

2 - Core User - SSH Public Key

CoreOS deploys with two default users, root and core. The root user is not intended for direct login. Neither has a password by default. CoreOS is meant to be accessed by SSH on a network by the core user.

passwd:
  users:
    - name: core
      ssh_authorized_keys_local:
        - infra-ansible-ed25519.pub

The core user already exists so no additional parameters need to be provided. The user definition only specifies a public key file whose contents will be inserted into the authorized_keys file of that user.

The ssh_authorized_keys_local option above consists of a list of filenames on the local machine that will be merged into the ignition file during transformation. The directory containing that file is provided on the butane command line using the --files-dir argument.

3 - Hostname

When you log into a system it’s convenient to see the hostname in the CLI prompts. It’s also good for reviewing logs. The hostname for Fedora is set using the /etc/hostname file.

storage:
  files:

    - path: /etc/hostname
      contents:
        inline: |
          infra-01.example.com

By convention this file contains the fully-qualified domain name of the host, and the hostname is the first element of the FQDN.

4 - Package Overlay - Install Ansible

This is the first place where CoreOS is properly customized. The goal is to automate management of the host and services using Ansible. The Fedora Project is agnostic about the user's choice of configuration management software, so no CM software is installed by default. These two sections create the parts needed to overlay Ansible on first boot and then reboot so that the Ansible package contents are available.

4a - Overlay Script

The first part of this first-boot process is a shell script, placed where it can be executed at boot and removed after use.

    - path: /usr/local/bin/install-overlay-packages
      user:
        name: root
      group:
        name: root
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
          if [ -x /usr/bin/ansible ] ; then
            rm /usr/local/bin/install-overlay-packages
            systemctl disable install-overlay-packages
            rm /etc/systemd/system/install-overlay-packages.service
          else
            rpm-ostree install --assumeyes ansible
            systemctl reboot
          fi

The first half of this section defines the location, ownership and permissions of the file. The second half, under the contents key contains the body of this script.

This script checks to see if the ansible binary is present and executable. If so, then the script removes itself and the systemd service unit file that triggers the script on boot. If ansible is not present, then the script overlays the Ansible RPM and then reboots.

This means that the service, and hence the script, is executed twice. On first boot it runs the installation command and reboots. The second time it detects that ansible is present and then disables and removes itself.

4b - One-time First Boot Service

The CoreOS specification allows the user to define and control the operation of systemd services. This final section defines a service that executes the script previously defined.

systemd:
  units:
    - name: install-overlay-packages.service
      enabled: true
      contents: |
        [Unit]
        Description=Install Overlay Packages
        After=systemd-resolved.service
        Before=zincati.service

        [Service]
        Type=oneshot
        ExecStart=/usr/local/bin/install-overlay-packages

        [Install]
        WantedBy=multi-user.target

This unit file defines when the service should start and what it should do. The service will run after networking is enabled and the DNS systemd-resolved service is running, but before the zincati update service is started. It runs the script defined above but does not detach as it would for a daemon.

As noted, this unit is deleted by the script when it runs the second time and detects the presence of the ansible binary.

Transforming the Butane System Spec

The next step is to transform the Butane file to Ignition. The CoreOS installer places the Ignition file onto the new filesystem so that it is available on first boot, so the file must be provided when the installer is invoked.

The butane binary can be installed on a Fedora system from an RPM, or it can run as a software container. See Getting Started in the Butane documents to decide what works best for you.

butane --pretty --files-dir ~/.ssh < coreos-infra.bu > coreos-infra.ign

This call only takes two parameters:

  • --pretty
    This just pretty prints the JSON output. It’s entirely cosmetic and unnecessary.

  • --files-dir ~/.ssh
    This tells butane where to find any external files, specifically, in this case, the location of the public key file for the core user.
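
If you choose the container route instead of the RPM, an equivalent invocation is sketched below. It uses the quay.io/coreos/butane:release image from the Butane documentation; the /files mount point inside the container is arbitrary and only has to match the --files-dir argument:

podman run --rm --interactive \
    --security-opt label=disable \
    --volume "${HOME}/.ssh:/files:ro" \
    quay.io/coreos/butane:release \
    --pretty --files-dir /files < coreos-infra.bu > coreos-infra.ign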

Running the transformation produces the Ignition file below.

coreos-infra.ign
{
  "ignition": {
    "version": "3.5.0"
  },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGl7GOHs9enyGZ7tTSh8E8G5mE+B9gyVVnz41hRyxbbN Infrastructure Ansible Key"
        ]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "contents": {
          "compression": "",
          "source": "data:,infra-01.example.com%0A"
        }
      },
      {
        "group": {
          "name": "root"
        },
        "path": "/usr/local/bin/install-overlay-packages",
        "user": {
          "name": "root"
        },
        "contents": {
          "compression": "gzip",
          "source": "data:;base64,H4sIAAAAAAAC/3yPPQ7CMAyF95zCiDnkABwFMTipSyOcpMpLK3p71B8hMcBkyX7fZ/t8cj5m5xmDiT3dyL7ITahblzOiV6E7XakNkg1RTftYS2DdQjGjsaots1TlxY4cnvwQGCIsaJJCU+oieDX9Ca9macHtUHfUn/oLpM4xiBGFrPiYbEGr8llC1jIwJVkEdLzydVQVX0ozfTTvAAAA//9VmB3oBgEAAA=="
        },
        "mode": 493
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "contents": "[Unit]\nDescription=Install Overlay Packages\nAfter=systemd-resolved.service\nBefore=zincati.service\n\n[Service]\nType=oneshot\nExecStart=/usr/local/bin/install-overlay-packages\n\n[Install]\nWantedBy=multi-user.target",
        "enabled": true,
        "name": "install-overlay-packages.service"
      }
    ]
  }
}

There are a couple of things to note in this transformation and result. The SSH public key string is merged verbatim from the file. The install-overlay-packages script is compressed and serialized as base64 of a gzip file. The systemd unit file is a JSON string with embedded newlines: \n. Together these make a single configuration file that can be copied around, served over HTTP or other file service without corruption from encoding.
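
To confirm that nothing was lost in the encoding, the embedded script can be extracted and decompressed again. A quick check, assuming jq is installed on the workstation:

jq -r '.storage.files[]
       | select(.path == "/usr/local/bin/install-overlay-packages")
       | .contents.source' coreos-infra.ign \
  | sed 's/^data:;base64,//' \
  | base64 -d | gunzip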

Keep this file handy as it is used as input for the next step.
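
The file can also be validated against the Ignition schema before use. The ignition-validate tool is published as a container image for exactly this purpose:

podman run --rm --interactive quay.io/coreos/ignition-validate:release - < coreos-infra.ign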

References

  • Butane
    The Butane format usage and specifications.

  • Ignition
    The Ignition spec for CoreOS configuration.

  • CoreOS on Bare Metal
    How to install CoreOS on Bare Metal. This includes variants for PXE, and Live ISO installations.

  • CoreOS on Raspberry Pi 4
    How to install CoreOS on Raspberry Pi 4 or 5. This includes instructions for installing EFI boot components that are not present in the Pi boot firmware.

  • systemd one-shot service
    A blog post on the workings of Systemd one-shot service units.

  • coreos-installer
    Usage and arguments for the CoreOS installer binary. This can be run from a live ISO or on a second host to write to the boot media.