#7569 the-new-hotness - OpenShift instance can't connect to Bugzilla
Closed: Fixed a year ago by zlopez. Opened a year ago by zlopez.

  • Describe what you need us to do:
    I migrated the-new-hotness to OpenShift, but I got stuck when trying to connect to https://partner-bugzilla.redhat.com. I'm getting connection refused.

Here is the output of curl -v https://partner-bugzilla.redhat.com executed in the pod:

* Rebuilt URL to: https://partner-bugzilla.redhat.com/
*   Trying 10.4.205.30...
* TCP_NODELAY set
* connect to 10.4.205.30 port 443 failed: Connection refused
* Failed to connect to partner-bugzilla.redhat.com port 443: Connection refused
* Closing connection 0
curl: (7) Failed to connect to partner-bugzilla.redhat.com port 443: Connection refused
  • When do you need this? (YYYY/MM/DD)
    As soon as possible

  • When is this no longer needed or useful? (YYYY/MM/DD)
    When OpenShift is no longer a thing

  • If we cannot complete your request, what is the impact?
    the-new-hotness will not connect to bugzilla


Yeah, the problem here is that we get the internal IP since we use internal nameservers, but we can't connect to the internal servers.
So we need to get it to use the external IP address, either via /etc/hosts or more config on the nameservers.

Is this only on staging? I noticed that the production the-new-hotness did not file a bug for the new version of Bodhi I released yesterday, even though Anitya did send a fedmsg. I saw this on IRC:

20:02 <fedora-notif> A new version of "bodhi" has been detected:  "3.13.0" newer than "3.12.0", packaged as "bodhi"  https://release-monitoring.org/project/11224/ (triggered by https://apps.fedoraproject.org/notifications/bowlofeggs.id.fedoraproject.org/irc/45874)

@bowlofeggs
Production isn't in openshift yet and this is probably caused by the bugzilla session expiration. I need to restart the-new-hotness in production, so it will connect again.

@puiterwijk
Do I need to do something on my side?

I think you need a HostAlias for it...

See something like: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/

with the external address: partner-bugzilla.redhat.com has address 209.132.183.72
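
For reference, a minimal sketch of what that would look like at the pod spec level (untested; the container name and image are placeholders, and the IP is the one above, which may change):

spec:
  hostAliases:
  - ip: "209.132.183.72"
    hostnames:
    - "partner-bugzilla.redhat.com"
  containers:
  - name: the-new-hotness         # placeholder
    image: the-new-hotness-image  # placeholder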

Metadata Update from @mizdebsk:
- Issue priority set to: Waiting on Assignee (was: Needs Review)

a year ago

Is there a way to implement this in OpenShift?

I tried https://docs.okd.io/latest/dev_guide/integrating_external_services.html but I'm unable to specify a hostname there.
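
The IP-based pattern in that doc is a selector-less Service plus a manually created Endpoints object, and Endpoints only accept IP addresses, so there is nowhere to put a hostname. Roughly, as a sketch only (the names and port here are guesses, not the actual config):

apiVersion: v1
kind: Service
metadata:
  name: partner-bugzilla
spec:
  ports:
  - port: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: partner-bugzilla      # must match the Service name
subsets:
- addresses:
  - ip: "209.132.183.72"      # only IP addresses are accepted here, not hostnames
  ports:
  - port: 443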

You can use v1.Pod API - search for "hostAliases". If you have troubles configuring this then I can give it a try.

Thanks @mizdebsk I will look at it.

I looked at the v1.Pod API and I can't figure out how this works. You can only set the hostname; every other value is a boolean.
I think this serves as an alias hostname for the created pod, not as an alias for a remote host.

I think you have it with the service setup... almost.

You need to change your code to use the service env: ${SVCNAME}_SERVICE_HOST (i.e., partner-bugzilla_SERVICE_HOST)?

but I am not sure either how to get this to work. We may have to ask OpenShift experts...

Thanks @kevin I will try to ask someone from the openshift team

I looked at the v1.Pod API and I can't figure out how this works. You can only set the hostname; every other value is a boolean.
I think this serves as an alias hostname for the created pod, not as an alias for a remote host.

hostAliases are used to manage /etc/hosts within the container. The API documentation I linked is clear about this:

  • hostAliases (array): HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods.
    • ip (string): IP address of the host file entry.
    • hostnames (array): Hostnames for the above IP address.

@mizdebsk
Damn, I didn't realize there is some JavaScript on the page and you need to click on the hostAliases word to see what is inside.
Then only one question remains: how do I apply this to the existing pod?

@zlopez You don't, you update the DeploymentConfiguration or surrounding objects, and just run a new deployment.
That will replace the current pod with one with the new configuration.

As @puiterwijk said, you should edit the DeploymentConfig and add hostAliases under the Pod spec, next to containers and volumes. Pushing the DeploymentConfig will trigger a new deployment and the old pod will be replaced with a new one, with the hosts injected.
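
Something along these lines, as a sketch only (the name and image are placeholders; existing containers and volumes stay as they are):

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: the-new-hotness            # placeholder
spec:
  template:
    spec:
      hostAliases:                 # pod-spec level, sibling of containers and volumes
      - ip: "209.132.183.72"
        hostnames:
        - "partner-bugzilla.redhat.com"
      containers:
      - name: the-new-hotness      # placeholder
        image: the-new-hotness     # placeholder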

My bad, it's there. I hate javascript.

the-new-hotness is now running in staging OpenShift, thanks everyone.
The only thing missing now is the certificates for fedora-messaging.

I'm closing this.

Metadata Update from @zlopez:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

a year ago
