#7943 rfr: development-experimental r/o access to koji rpm filesystems
Closed: Fixed 4 years ago by kevin. Opened 4 years ago by fche.

Some Red Hat engineers are working on a prototype system that serves data/metadata about large libraries of RPMs. We would be grateful if we could have some sort of read-only access to the rpm output files of the koji build system (as complete as practical). This could be a shell, or a little fedora VM that can nfs-read-mount the file system(s), where we can build/test/run private services during development.

We'd prefer to develop this stuff close to the fedora build system, so that we don't have to mass-download this content.

If the experiments are successful, a couple of months later we can start talking about it as a real fedora service; if unsuccessful or problematic, well, we'll nuke it. :-)


So, what sort of access to this instance would you need? Just ssh? Or would you want any other services exposed?

Would it need to talk to anything besides the koji nfs mount?

We would need to run this by our security officer, but perhaps a small vm in our staging env would be possible.

Metadata Update from @kevin:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: security

4 years ago

Just ssh would be great, and just ro koji nfs should be enough. (The VM would have to run compilers etc., so it would also need access to the regular fedora download servers.)

VM in staging environment with readonly to prod koji volume sounds like a reasonable thing to do, so from a security point of view, I have no concerns.

I do think we might want to clarify some timelines for "a couple of months later we can start talking about it as a real fedora service; if unsuccessful or problematic, well, we'll nuke it. :-)".

So, @fche, do you have any suggestions for dates by which we can reconsider the VMs? Not saying that at that point we will turn it off, just a scheduled date to check back in and see if we can help progress projects etc? Just to make sure we don't get a case in a year where we have kept this VM around while nobody's using it anymore :).

Metadata Update from @puiterwijk:
- Issue untagged with: security

4 years ago

@puiterwijk re. timeline, I'd say schedule a shutdown and/or reevaluation by end of year.

ok, is there some name we could use for this instance? Once we have that I can set it up and create a fas group for access...

Up to you; we use the term dbgserver or elfutils-dbgserver for the code base that we'll be testing there. We could reuse the elfutils group ACL perhaps.

ok. I have created dbgserver01.stg.phx2.fedoraproject.org. It's a Fedora 30 instance with 4GB of memory, 2 CPUs, and a 30GB disk. It has a read-only mount of production koji on it.
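(For the curious: a read-only NFS mount like that is typically a one-line fstab entry. A sketch, with a hypothetical NFS server hostname and mount point:

```
# /etc/fstab -- read-only NFS mount of the koji volume
# hostname and mount point here are illustrative, not the real infra names
nfs01.example.fedoraproject.org:/mnt/koji  /mnt/fedora_koji  nfs  ro,nosuid,nodev  0 0
```

The ro option is what guarantees the prototype can never write to the production volume.)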

Are you intending to manage this host via our ansible setup? Or do you just want to self-manage whatever you set up on it and only have us do updates, etc.?

I have made a sysadmin-dbgserver group with you as owner. You can add folks to this and they will get access to our bastion hosts.

However, since this is in our staging env, you will also need to add them to the sysadmin-dbgserver group in our staging fas: https://admin.stg.fedoraproject.org/accounts/

https://docs.pagure.org/infra-docs/sysadmin-guide/sops/sshaccess.html has information on how to setup ssh to use the bastion hosts to access the instance.
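(The bastion setup that page describes boils down to an ssh ProxyJump configuration. A minimal sketch, with a placeholder FAS username and the instance name from this ticket:

```
# ~/.ssh/config -- route the staging instance through the Fedora bastion
# "yourfasusername" is a placeholder; see the SOP for the current bastion host
Host bastion.fedoraproject.org
    User yourfasusername

Host dbgserver01.stg.phx2.fedoraproject.org
    User yourfasusername
    ProxyJump bastion.fedoraproject.org
```

With that in place, a plain `ssh dbgserver01.stg.phx2.fedoraproject.org` hops through the bastion automatically.)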

If you want to self manage things, I think we are done.
If you want to manage via our ansible setup, we will need to also add access to our ansible control host / git commit to our ansible repo and set you up permissions to run the ansible playbook.

Let me know if you have any questions or would like to change anything in the setup above.

Awesome, thanks!

We can self-manage the little VM completely (including yum updating etc.), if we have root access on it. At the moment, I can't find any target userid there which accepts my ssh pubkey. The bastion host as per the ssh page is letting me through though.

Oops. I had a typo. Try now (but note that it will be using the ssh key attached to your stg fas account).

Got onto the fche@ account now, thanks a lot. 77TB of koji in all its glory, wow.
We can test almost all our stuff with this unprivileged account.

We'd still need sudo or such to do yummy self-maintenance work and to install development tools there.

Try sudo now (and note that it will use your stg password/token).

Hi, Kevin - I don't think I have a fedora/yubikey token, just the normal fedora password, ssh key, and gpg key. Do you believe this system/access needs to be considered high-security enough to get hold of one right away? Or else can we plop the ssh pubkey into the root authorized_keys temporarily?

no, that would make it different from every single other machine we have.

Can you just add a freeotp token or yubikey slot? Docs at:

https://docs.pagure.org/infra-docs/sysadmin-guide/sops/2-factor.html

I can work with freeotp, but https://admin.stg.fedoraproject.org/totpcgiprovision/ is giving a 500 error :-) The non-.stg. provisioner seems ok.

Hum...oh right... @puiterwijk may not have fixed enrollment in stg yet. ;( Hopefully he can look at it after tomorrow...

@kevin, given that the stg totp system has apparently been down for quite a while, can you suggest any other way of getting some set of packages installed on that VM, so one can start building software on it? For example, if I were to list a bunch of rpms, could we arrange to get them installed, so I don't have to sudo at all for now?

You absolutely can. Please post the list here and I will get them installed.

Sorry for the delay/stall here.

Thanks: the current list is the buildreq of under-development elfutils + a few tools. I hope not to ask for too many more later. :-)

autoconf automake bison bzip2 bzip2-devel flex gcc gcc-c++ gdb gettext git glibc make m4 pkgconfig(libarchive) pkgconfig(libcurl) pkgconfig(libmicrohttpd) pkgconfig(sqlite3) rpmlib(CompressedFileNames) rpmlib(FileDigests) systemd valgrind xz-devel zlib-devel
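(Assuming sudo on the VM, the whole list could go in one dnf transaction. dnf resolves pkgconfig() virtual provides to the owning -devel packages; the parentheses need quoting so the shell doesn't eat them. The rpmlib() entries are rpm feature tags rather than installable packages, so they're omitted. A sketch:

```shell
# Hypothetical one-shot install of the build dependencies above.
# pkgconfig(NAME) provides are resolved by dnf to real packages,
# e.g. pkgconfig(libcurl) -> libcurl-devel; quote them for the shell.
# rpmlib(...) entries are rpm internals and are not installed via dnf.
sudo dnf install autoconf automake bison bzip2 bzip2-devel flex \
    gcc gcc-c++ gdb gettext git glibc make m4 \
    'pkgconfig(libarchive)' 'pkgconfig(libcurl)' \
    'pkgconfig(libmicrohttpd)' 'pkgconfig(sqlite3)' \
    systemd valgrind xz-devel zlib-devel
```

)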

Done. Just let us know if you need anything more. Either re-open this or file a new ticket.

Happy debugging.

Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

4 years ago

Hey, Kevin, thanks! Missed just a few additions I edited in too quietly:

autoconf automake git make

Metadata Update from @kevin:
- Issue status updated to: Open (was: Closed)

4 years ago

Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

4 years ago

The staging 2fa token provisioner is back to working. Please file a new issue if you run into any problems with it.
