#8811 method for copying in ostree content to our ostree repos
Opened 2 months ago by dustymabe. Modified 12 days ago


We need to be able to copy ostree content from our Fedora CoreOS builds into our ostree repos. Today a Fedora release engineer does this manually during a release. We'd like to automate it, driven by fedora-messaging messages.

Since I'll be helping develop and manage this "glue", it would be nice if we could run it in openshift so that I can view logs and such without having to gain more access to fedora releng/infra systems. Currently that would require a writable /mnt/koji inside the openshift app being developed, which is probably a lot to ask.

General architecture:

  • listen for a fedora message
  • download ostree tarball
  • extract tarball
  • ostree pull-local into target ostree repo
  • update ostree summary file
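
A rough sketch of the extract/pull-local/summary steps above, with hypothetical paths and a made-up ref name (the real logic lives in coreos-ostree-importer; this is not its actual code):

```python
import subprocess
import tarfile

def import_commands(src_repo: str, target_repo: str, ref: str) -> list:
    """Build the ostree commands for the pull-local and summary steps.

    All paths and the ref here are illustrative, not the production layout.
    """
    return [
        # Pull commits for `ref` from the extracted source repo into the target.
        ["ostree", "pull-local", f"--repo={target_repo}", src_repo, ref],
        # Regenerate the summary file so clients see the updated ref.
        ["ostree", "summary", f"--repo={target_repo}", "--update"],
    ]

def run_import(tarball: str, workdir: str, target_repo: str, ref: str) -> None:
    # The downloaded tarball is assumed to contain an ostree repo.
    with tarfile.open(tarball) as tf:
        tf.extractall(workdir)
    for cmd in import_commands(workdir, target_repo, ref):
        subprocess.run(cmd, check=True)
```

A fedora-messaging consumer (not shown) would call `run_import()` from its message callback after downloading the tarball.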

We'll probably want a split repo setup where we import most content into a staging repo (like the compose repo we have today) and then sync over part of it when we do an actual release.
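
The release step in that split-repo setup could then be a second, smaller pull: only the refs being released get pulled from staging into prod. A sketch with made-up repo paths (the `run` parameter is there so the command construction can be exercised without ostree installed):

```python
import subprocess

# Hypothetical locations; the real repos live under /mnt/koji.
STAGING_REPO = "/srv/ostree/staging"
PROD_REPO = "/srv/ostree/prod"

def release_refs(refs, run=subprocess.run):
    """Sync the given refs from staging into prod, then refresh the summary."""
    for ref in refs:
        run(["ostree", "pull-local", f"--repo={PROD_REPO}", STAGING_REPO, ref],
            check=True)
    # Update prod's summary file once, after all refs are synced.
    run(["ostree", "summary", f"--repo={PROD_REPO}", "--update"], check=True)
```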


Metadata Update from @mohanboddu:
- Issue tagged with: meeting

2 months ago

from the meeting today:

#info We decided to create a separate volume for just ostree stuff with rw access
and dustymabe will test it in stg first

So we're going to migrate the ostree repos to a separate netapp volume (with the same level of backup support) and grant openshift read/write access to it. There is a wrinkle in that we use two directories:

  1. /mnt/koji/compose/ostree/ - for composing into
  2. /mnt/koji/ostree/ - for prod ostree content (fronted by CDN)

Is it possible to make a single netapp volume have an ostree directory that gets mounted under those locations so that we don't have to change any of our existing consumers? I like the idea of it being a single netapp volume vs multiple as it would be easier to maintain and we get deduplication.
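
For illustration, one way a single export could appear under both locations is to mount the same NFS share at both paths (server and export names below are placeholders, not the real netapp layout):

```
# /etc/fstab sketch: one NFS export, two mount points
ntap.example.org:/fedora_ostree_content  /mnt/koji/compose/ostree  nfs  defaults  0 0
ntap.example.org:/fedora_ostree_content  /mnt/koji/ostree          nfs  defaults  0 0
```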

dustymabe will test it in stg first

I think I would like to test this in stg rather than communishift so that we can get a more realistic test. @kevin, can you create a volume in stg and show me how to mount it?

I can try and set this up on friday (2019-10-04)

OK, we have worked on this quite a bit today. We have a netapp share, fedora-ostree-content, that is shared to the staging openshift via two PVs, fedora-ostree-content-volume-{1,2}, that point at the same NFS share. We were then able to mount those two PVs into pods in two different openshift projects. We can also mount the share under both /mnt/koji/ostree and /mnt/koji/compose/ostree even though it's a single netapp volume.
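
The two-PV arrangement can be sketched like this (one of the pair shown; fedora-ostree-content-volume-2 is identical apart from its name, and the server/path/size values here are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fedora-ostree-content-volume-1
spec:
  capacity:
    storage: 500Gi            # illustrative size
  accessModes:
    - ReadWriteMany           # pods in multiple projects need rw access
  nfs:
    server: ntap.example.org  # placeholder for the real netapp server
    path: /fedora_ostree_content
```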

Next steps:

  • work on making sure permissions are what we expect them to be
    • will need to possibly look at 1 2
  • get the code reviewed and approved for use in Fedora Infra
  • work out a migration strategy for prod content in the ostree directories

I asked and was told I could use this ticket as an RFE for getting the ostree importer into Fedora. Can we please get a security audit of https://github.com/coreos/fedora-coreos-releng-automation/tree/master/coreos-ostree-importer ?

Metadata Update from @dustymabe:
- Issue assigned to kevin

a month ago

Can we please get a security audit of https://github.com/coreos/fedora-coreos-releng-automation/tree/master/coreos-ostree-importer ?

The security audit process is undergoing some changes. In the infrastructure meeting today @kevin volunteered to try to wrangle those changes and get this assigned for review.

The script looks okay as of commit 7e7ffb5fb6729beecbc8e857064296c16bb152f9.
If any significant changes occur, please re-request a security audit.

Also, for future cases: please note that it would help if people tag security audits with the "security" tag, so that they show up in my overviews.

Updated next steps:

  • work on making sure permissions are what we expect them to be
    • will need to possibly look at 1 2
  • work out a migration strategy for prod content in the ostree directories
