#403 Minimal OSTree/ISO image initiative
Closed: Wontfix 6 years ago Opened 6 years ago by jberkus.

Currently, the main user story for Fedora Atomic Host is as a web host for container infrastructure. The OSTree ref and ISOs we maintain serve this user story well, and it drives certain decisions, like keeping the docker and atomic CLIs in the base image (see https://pagure.io/atomic-wg/issue/360).

However, there is a user story we are not satisfying: the user who wants a truly minimal install and is more interested in composing her own ostrees and atomic updates than in container orchestration. This is an important user story because we already have demand for it, especially from the IoT sector.

The steps to pursue this would be:

  1. work with users to refine and specify the requirements for a "minimal" edition
  2. identify and recruit contributors to assist with building it
  3. create and publicize new minimal ref and associated tools

Based on my conversations at conferences, the main requests are these:

  • reduce the size of the base ostree by stripping things out, preferably to less than 300MB installed.
  • automatic rollback when the reboot after an rpm-ostree upgrade fails
  • a complete, well-documented toolchain for building and redistributing custom ostrees, based on either the minimal ref or built "from scratch" (see the compose sketch after this list)
  • support for ARM32 (unlikely at best, but people have asked)
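
As a rough illustration of what the toolchain bullet asks for, here is a minimal sketch of a server-side compose with rpm-ostree. Everything specific in it is an assumption for illustration: the ref name, package set, and /srv/repo path are made up, and a real "minimal" package set is exactly what the requirements work in step 1 needs to pin down.

```sh
# Hypothetical treefile: the ref and package set are illustrative, not a
# vetted "minimal" definition. Assumes a matching fedora-27.repo file sits
# next to the treefile so rpm-ostree can resolve the "repos" entry.
cat > minimal.json <<'EOF'
{
    "ref": "fedora/27/x86_64/minimal",
    "repos": ["fedora-27"],
    "selinux": true,
    "documentation": false,
    "packages": ["kernel", "systemd", "rpm-ostree", "NetworkManager"]
}
EOF

# One-time setup: an archive-mode repo, which can be served over plain HTTP.
ostree --repo=/srv/repo init --mode=archive

# Compose the tree into the repo.
rpm-ostree compose tree --repo=/srv/repo minimal.json
```

A client would then point at wherever /srv/repo is served and switch over with rpm-ostree rebase.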

Metadata Update from @jberkus:
- Issue assigned to sanja
- Issue tagged with: host

6 years ago

The whole time I've been here I've felt a powerful tension: am I here to work on an atomic upgrade system for hosts that are otherwise managed "traditionally" (e.g. where Ansible still works), or am I here to work on containers? The answer, of course, is usually both 😀.

Anyway, let me state this up front: my goal with rpm-ostree is to be a fully general system, in the same way the Linux kernel and systemd are. We should scale up to big servers, and scale down "reasonably" (we could do better here, but let's say systems with more than 64MB of RAM). Further, we should work the same for single-node systems as for fully replicated environments.

This last aspect is where the image/ostree portion becomes particularly interesting; as I've said many times, rpm-ostree package layering is IMO absolutely essential for practical use at the small scale. Using it on my daily-driver laptop in particular is a powerful motivator to fix things. But that same system can just as easily consume a custom server-side build, sharing all of the client-side logic.

Let's briefly compare with the (nominal) competition in this area: https://www.ubuntu.com/core
They're really all about Snap - the push seems to be to make dpkg legacy. rpm-ostree is the polar opposite in that regard; it works with and augments the RPM ecosystem, building image-like updates in with ostree.

As I mentioned here: https://mail.gnome.org/archives/ostree-list/2017-December/msg00003.html
for me, the fact that "rpm-ostree install libvirt" just works is powerful and practical.
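
To make that concrete, here is a sketch of the client-side layering flow (the libvirt example from that thread; any package behaves the same way):

```sh
# Layer libvirt onto the booted base image; this stages a new deployment
# rather than mutating the running system.
rpm-ostree install libvirt

# The pending deployment now lists libvirt under LayeredPackages.
rpm-ostree status

# Reboot into the new deployment; "rpm-ostree rollback" undoes it.
systemctl reboot
```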

But on the other hand, Snap shows where we should be going with system containers IMO.

Circling back to the topic here, an interesting place where these concerns collide is: do people doing custom image builds want to preload Docker/OCI images? And do those stay in sync with the host?

Looking at https://docs.ubuntu.com/core/en/guides/build-device/image-building?from=corepage
they do seem to support bundling snaps, but they don't seem to encourage changing the base Ubuntu Core image?

There have been a number of notable upstream rpm-ostree issues on this in the past, like:
https://github.com/projectatomic/rpm-ostree/issues/442

But I hope we can completely revamp the "custom compose" path after jigdo ♲📦 lands: https://github.com/projectatomic/rpm-ostree/issues/1081
since managing the "images" becomes a lot simpler at that point.

I'm working on a spin that does networking, kinda like OpenWrt.

This looks like a good initial spike, which should generate further spikes and units of work to address the requirements we identify.

We really need to get some input from @pbrobinson on this, to find out what work he has already done and get his advice on moving forward.

Agreed. Closing this.

Metadata Update from @sanja:
- Issue close_status updated to: Wontfix
- Issue status updated to: Closed (was: Open)

6 years ago
