In the 2017-08-02 meeting, modularity was presented as something we should experiment with in order to enable building/testing from a CI/CD pipeline. It was clear in the meeting that there is still general confusion around modularity and what benefits it offers; i.e., modularity is mostly not yet concrete for people. We agreed to come up with questions for the modularity team and invite them to one of our meetings in the next few weeks to discuss the open questions.
This ticket is for gathering open questions for the modularity team so that we are prepared when we have that meeting.
Metadata Update from @dustymabe:
- Issue assigned to jasonbrooks
- Issue tagged with: host
Metadata Update from @dustymabe:
- Issue tagged with: modularity
Some questions from IRC in #atomic:
Q: With modularity, how would an ostree tree be composed? Would there need to be something like modularity-ostree (like rpm-ostree) to compose the tree?
A: Modules are made of RPMs, so something like a new "modularity-ostree" would not be required, however, module support would need to be added to libdnf in order for rpm-ostree to understand modules on the client side.
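As a sketch of what "composing an ostree from a module" could look like, here is a hypothetical rpm-ostree treefile that simply points at the yum repo generated for a module. The ref, repo name, and package list below are illustrative assumptions, not an actual Fedora configuration:

```json
{
    "ref": "fedora/26/x86_64/modular-atomic-host",

    "repos": ["fedora-modular-kubernetes"],

    "packages": ["kernel", "rpm-ostree", "kubernetes-node"],

    "documentation": false
}
```

The point is that nothing new is needed on the compose side: rpm-ostree already consumes yum repos of RPMs, and a module's build output is exactly that.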
<jbrooks> walters, I've been wondering how modularity would interact with ostree -- would there be something like a modularity-ostree, or would it fit into the rpm scheme somehow
<dustymabe> jbrooks: you mean modularity-ostree like rpm-ostree ?
<dustymabe> like a tool?
<dustymabe> jbrooks: i don't think so
<dustymabe> at least not to begin with
<jbrooks> Like, I'm trying to imagine how the tree is composed
<jbrooks> does a module act like a mega rpm?
<dustymabe> jbrooks: still going to take some learning on my part but i think for us module mostly means specific yum repo
<dustymabe> langdon: around?
<dustymabe> jbrooks: langdon might be able to shed some light for us
<dustymabe> i have a meeting in 2 minutes, so might miss some of the conversation
mattdm is here
<langdon> Dustymabe, kinda
<dustymabe> mattdm: \o/
<dustymabe> in terms of atomic host (at least initially) that was my thought on how it would be set up
<dustymabe> the module defines the rpms (including build reqs)
<dustymabe> and then we compose an ostree from that module
<dustymabe> basically just pointing to the yum repo that gets created as the source
dustymabe has 1-1
<drakonis> the boss man is here
<dustymabe> will read discussion after call
<langdon> Basically, Dusty has the right of it
<langdon> so.. jbrooks think of modules kinda like a x-between a metapackage and a yum group.. but with "enforcement" on them moving together
<jbrooks> OK, interesting
<langdon> at the EOD the "things that are built" are "just" rpms.. we decorate with some extra metadata for things like EOL
<jbrooks> So when you install a module, it's sort of like enabling a copr?
<langdon> and.. the module is less bound to its output artifact.. so the artifact is less coupled to rpm .. you could also build something like a container.. or, someday, gems
<langdon> jbrooks, exactly.. except if all the copr repos were crashed together
<jbrooks> But if you created a container from it, that container wouldn't be built out of rpms?
<langdon> technically, two steps when you "dnf module install httpd" it actually enables the repo then installs the module-profile for httpd
<langdon> where, in this case, the module-profile == "dnf install httpd" the metapackage
<langdon> the container would be.. thats why it isn't a great example yet
<langdon> but its a necessary output artifact and the only other, kinda non-rpm, artifact we have today
<jbrooks> OK, that's a lot clearer -- I've been missing this so far, just thinking modules were some other sort of blob of stuff
<langdon> jbrooks, basically it is kinda like a meta-packaging format.. but, underneath, its the same old packaging formats
<jbrooks> That's good, this is what I like about rpm-ostree
<jbrooks> It's still rpms
<langdon> right.. so .. making an ostree from modules would be just like we do today.. except we would want you to add in the modulemd files
<langdon> with the rpms
<langdon> jbrooks, good? im supposed to be cleaning my basement.. so, glad for an excuse to stop.. but i should go back ;)
<walters> if we move the module support into libdnf, then we'd be really close to rpm-ostree understanding it on the client side as well
<jbrooks> langdon, heh, yeah thanks
<langdon> walters, that is the goal .. but we think the libdnf version has higher effort with churn.. so we wanted to do the experimentation in python then move to c when it was closer to done
<walters> yeah understood
<drakonis> the branching introduced by modularity doesn't extend to all packages yet?
<langdon> drakonis, not deployed yet.. this week, i think?
<langdon> wating on the mass rebuild
<langdon> *waiting even
<langdon> jbrooks, drakonis feel free to bring your qs over to #fedora-modularity any time and/or we have a WG meeting or office hours every tuesday at 10am eastern
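The two-step behavior langdon describes above ("dnf module install httpd" first enables the repo content, then installs the module profile) can be sketched with the dnf module subcommands; the stream name "2.4" and profile name "default" here are illustrative, not taken from an actual module definition:

```shell
# Step 1: enable the module stream, which makes its repo content available
dnf module enable httpd:2.4

# Step 2: install the "default" profile -- roughly the metapackage-style
# "dnf install httpd", scoped to the enabled stream
dnf module install httpd:2.4/default
```

Running "dnf module install httpd" directly collapses both steps into one.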
Would modularity work with rpm-ostree package layering?
I just tested this out; grab the fedora-modular.repo from the boltron container, copy it into FAH, disable the fedora and updates repos, and then the following works just fine:

rpm-ostree install postgresql nodejs
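For reference, the repo file dropped into /etc/yum.repos.d would look roughly like this; this is a sketch only, since the real file comes from the boltron container, and the baseurl below is a placeholder, not the actual location of the Boltron repo:

```ini
# /etc/yum.repos.d/fedora-modular.repo -- sketch; baseurl is a placeholder
[fedora-modular]
name=Fedora Modular (Boltron)
baseurl=https://example.com/fedora-modular/$basearch/
enabled=1
gpgcheck=0
```

With the stock fedora and updates repos disabled (enabled=0 in their .repo files), rpm-ostree resolves and layers the modular packages the same way it would any other rpm-md repo.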
Since, today, modularity adds to the rpm-md format rather than replacing it, and dependencies are dynamic, a lot of "modules" will just work. (Though a side effect of this is that modularity makes the "rpm metadata is large" problem worse.)
The converse (installing from Everything on a modular system) gets interesting. That's an active discussion on fedora-devel etc.; I think a lot of things that people will want to use (e.g. tracing/debugging tools with few dependencies) will mostly Just Work too.
One of the benefits of modularity seems to be in allowing us to break out of the traditional fedora versioning scheme, but the modules I've seen are built from fedora packages, and those packages are tied to a particular fedora version. How will a modularity-based atomic host, or module-based system containers for things like kubernetes look/work different than what we have now?
For the containers case, I'd like, for instance, to be able to have a kubernetes module that's installable on fedora atomic 25 or 26 and that could track an arbitrary number of upstream kube versions. Upstream kube is actively supporting 1.5.x, 1.6.x, and 1.7.x -- if we wanted to, we should be able to offer all of those, too. But would those modules contain rpms built for a particular version of fedora, and would the container images be based on a particular version of the base image? Or would there be some kind of unmoored, continuous kube rpm and module-based base image?