#7300 What hostname should Bodhi use to upload content to the registry?
Closed as Fixed 2 years ago. Opened 2 years ago by bowlofeggs.

  • Describe what you need us to do:
@otaylor and I'd like to know what hostname Bodhi should use when uploading content to the registry. We tried a test run with Flatpaks today and received a 503:
requests.exceptions.HTTPError: 503 Server Error: Service Unavailable for url: https://registry.stg.fedoraproject.org/v2/flatpak-runtime/blobs/uploads/

Should that URL work? I was curious whether it is going through a proxy that denies write requests, since that's the same URL the public uses to read containers.
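For reference, the failing URL is the OCI distribution-spec "start blob upload" endpoint: a POST to `/v2/<name>/blobs/uploads/` should return 202 Accepted with a Location header for the upload session. A minimal sketch (the helper name is hypothetical) of how that URL is formed:

```python
def blob_upload_url(registry: str, repository: str) -> str:
    # OCI distribution spec: POST /v2/<name>/blobs/uploads/ starts a
    # blob upload. A 503 here suggests the request never reached a
    # working registry backend.
    return f"https://{registry}/v2/{repository}/blobs/uploads/"

# The URL from the traceback above:
print(blob_upload_url("registry.stg.fedoraproject.org", "flatpak-runtime"))
```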

  • When do you need this? (YYYY/MM/DD)
    Soon would be good.

  • When is this no longer needed or useful? (YYYY/MM/DD)
If we stop using Bodhi, Flatpaks, or Containers.

  • If we cannot complete your request, what is the impact?
    I won't be able to debug the issue.

From roles/httpd/reverseproxy/templates/reversepassproxy.registry-generic.conf

# This is terible, but Docker.
RewriteRule ^/v2/(.*)$ http://oci-registry02:5000/v2/$1 [P,L]
RewriteRule ^/v2/(.*)$ http://localhost:6081/v2/$1 [P,L]

So the relevant (write) methods are sent directly to oci-registry02 rather than through varnish => haproxy => oci-registry{01,02}. But as far as I can tell there isn't an oci-registry02 for staging, hence the 503. My guess is that just changing this to oci-registry01 for staging would make pushing to the staging registry work. Alternatively, Bodhi could be configured to push directly to oci-registry01.

Owen, you are a genius.

I actually worked with Patrick just last week to remove the oci-registry02 on staging, since we don't really need it anymore now that we don't use Gluster.

I think you are correct that we can probably get this working by switching it to 01. We could get away with using 01 in both stg and prod, but doing that would require an FBR (freeze break request), so maybe we can just put a conditional in the template instead.
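One way to conditionalize the template could look like the sketch below. This is only an illustration: the `env == "staging"` variable is an assumption about how the playbook distinguishes staging from production.

```
# Sketch only: assumes an `env` template variable distinguishing staging.
RewriteRule ^/v2/(.*)$ http://{% if env == "staging" %}oci-registry01{% else %}oci-registry02{% endif %}:5000/v2/$1 [P,L]
RewriteRule ^/v2/(.*)$ http://localhost:6081/v2/$1 [P,L]
```

With that in place, production keeps routing writes to oci-registry02 while staging routes them to oci-registry01, with no FBR needed.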

Should note here that oci-registry02 is also referenced for staging in roles/haproxy/templates/haproxy.cfg. haproxy is presumably coping with that, since it's designed to handle unavailable backends, but conditionalizing it too would likely prevent scary messages in a log file somewhere.
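The haproxy side could be conditionalized the same way. A sketch only: the backend and server names here are hypothetical, and it again assumes an `env` template variable.

```
# Sketch only: hypothetical backend stanza for haproxy.cfg.
backend oci-registry
    server oci-registry01 oci-registry01:5000 check
{% if env != "staging" %}
    server oci-registry02 oci-registry02:5000 check
{% endif %}
```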

I just pushed d57f891ade1b78bd422fd49dc8f8858847cf6f13, which should fix this (and doesn't change production at all).


Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

