#991 Migrate "frontdoor" project to new OCP cluster
Closed: Fixed a year ago by martinpitt. Opened a year ago by martinpitt.

CentOS CI - On-boarding

We have a frontdoor project on the current OCP cluster, which will be torn down soon. We use it across the Cockpit/composer/subscription-manager teams and UIs to provide some CI functionality which we cannot host on internal infrastructure.

I know that a few months ago we said that we wouldn't need a tenant on the new OpenShift cluster; but it turns out we could use one after all, as we still have two small but important services running on the current cluster: our GitHub webhook and a prometheus/grafana pair for our metrics. These don't take a lot of resources, and don't need /dev/kvm, but they do need to live on the public internet.

  • How does your project relate to Fedora/CentOS?

All these projects are part of Fedora/CentOS/RHEL. We'd like to keep our current CI machinery around that, as GitHub currently offers nothing simpler/more appropriate than a webhook, and it does not provide any CI metrics at all.

  • Describe your workflow and if you need any special permissions (other than
    admin access to namespace), please tell us and provide a reason for them.

No special needs like serviceaccounts, hw access, or dynamic Jobs. We'd like to run a pod with nginx + RabbitMQ as webhook receiver, and another pod with Prometheus/Grafana. The latter needs a PV (see below). For both we need to be able to create Services and Routes.

We will not run actual tests on this cluster any more (that needs /dev/kvm, which isn't available). But we have internal systems for that.

Note that we'll deploy resources (pods, routes, etc.) ourselves, so the "migrate" in the topic just applies to project creation.
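For illustration, the Services/Routes we have in mind would look roughly like this (a minimal sketch, not our actual manifests; the names and ports are hypothetical):

---
apiVersion: v1
kind: Service
metadata:
  name: webhook-receiver   # hypothetical name
spec:
  selector:
    app: webhook-receiver
  ports:
  - port: 80
    targetPort: 8080       # assumed nginx container port
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: webhook-receiver   # hypothetical name
spec:
  to:
    kind: Service
    name: webhook-receiver
  tls:
    termination: edge      # terminate TLS at the router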

  • Do you need bare-metal/vms checkout capability? (we prefer your workflow
    containerized)

No.

Resources required:
- PVs: 1x 5 GB ReadWriteOnce for prometheus data
- Memory for all pods: 1 GiB peak, expected persistent usage of ~ 200 MB
- CPU: 2 peak, but usually negligible (a few milliCPUs)
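Roughly how these numbers would map onto per-pod resource requests/limits (a sketch only; the even split across the two pods is an assumption):

resources:
  requests:
    memory: 100Mi  # ~200 MB steady-state usage total, split across pods (assumed)
    cpu: 10m       # a few milliCPUs in steady state
  limits:
    memory: 512Mi  # up to 1 GiB peak total across pods
    cpu: "1"       # 2 CPUs peak total across pods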

Project_name: frontdoor
Project_admins:
 - martinpitt@fedoraproject.org
 - mmarusak@fedoraproject.org

Thank you in advance!


Metadata Update from @arrfab:
- Issue tagged with: centos-ci-infra, high-gain, medium-trouble, namespace-request

a year ago

@cgranell : is it ok to onboard frontdoor onto the new AWS OCP cluster? (worth knowing that I'll be on PTO for some days, so I'll only be able to create the needed IPA group and PV once back)

@arrfab we can go ahead with it, but just making it clear that it might take longer than usual to complete this task, as it is PTO season in our teams.

Thanks @cgranell ! There is no particular hurry; as far as I understood, the old CentOS CI cluster will stay around until March 2023, right? We mostly just need the ack for planning, i.e. whether we need to look for some other infra.

Metadata Update from @arrfab:
- Issue assigned to arrfab

a year ago

@martinpitt : as group membership will be synced automatically based on FAS/ACO (including creating the project/namespace), I was wondering (before creating the group in IPA): would it make sense to create a group for cockpit? Or frontend?

Please not "frontend", that has no meaning in our existing project/org/community structure. "cockpit" is fine; "frontdoor" is a bit more generic (but it's a RHEL-internal term). So, come to think of it, I think the group "cockpit" is fine. Thanks!

The ocp-cico-cockpit group was created in IPA (and is thus available in FAS/ACO), and your users were added.
That automatically created the cockpit namespace on the new cluster: https://console-openshift-console.apps.ocp.cloud.ci.centos.org

A 5Gi PV was also created in your namespace; you can reference it as prometheus-data in your PV claim:

  claimRef:
    namespace: cockpit
    name: prometheus-data

Let us know if that works for you, but normally you should be all set. We'll then be able to close this request.

Metadata Update from @arrfab:
- Issue priority set to: Waiting on Reporter (was: Needs Review)

a year ago

Thanks @arrfab ! I confirm that I can log in. I used this resource for claiming the PVC:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
  namespace: cockpit
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

and it worked fine, the PVC is in state "Bound".

I deployed Prometheus/Grafana, they can successfully use the PVC, and it all seems to work.
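For the record, the PVC is consumed in the usual way; roughly like this in the Prometheus pod spec (a simplified sketch, the image and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: prometheus
  namespace: cockpit
spec:
  containers:
  - name: prometheus
    image: quay.io/prometheus/prometheus   # illustrative image
    volumeMounts:
    - name: data
      mountPath: /prometheus               # Prometheus default data dir
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: prometheus-data           # the PVC created above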

Thanks a lot, this was super fast!

Metadata Update from @martinpitt:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

a year ago

For the record, you can delete our old frontdoor project on the old cluster, we fully migrated to the new one. Thank you!

