We have a frontdoor project on the current OCP cluster, which will be torn down soon. We use it in the Cockpit/composer/subscription-manager teams/UI to provide some CI functionality which we cannot host on internal infrastructure.
I know that a few months ago we said we wouldn't need a tenant on the new OpenShift cluster, but it turns out it would be good after all: we still have two small but important services running on the current cluster, our GitHub webhook and a Prometheus/Grafana pair for our metrics. These don't take a lot of resources and don't need /dev/kvm, but they do need to live on the public internet.
All these projects are part of Fedora/CentOS/RHEL. We'd like to keep our current CI machinery around that, as GitHub currently offers nothing simpler/more appropriate than a webhook, and it does not provide any CI metrics at all.
No special needs like serviceaccounts, hardware access, or dynamic Jobs. We'd like to run a pod with nginx + RabbitMQ as the webhook receiver, and another pod with Prometheus/Grafana. The latter needs a PV (see below). For both we need to be able to create Services and Routes.
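To illustrate what "Services and Routes" means in practice, the webhook receiver's exposure could be sketched roughly like this. This is not part of the request, just a hypothetical example: the name `webhook`, the `app` label, and port 8080 are all assumptions, not the actual deployed manifests.

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: webhook            # hypothetical name
spec:
  selector:
    app: webhook           # assumed pod label
  ports:
    - port: 80
      targetPort: 8080     # assumed nginx listen port
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: webhook            # hypothetical name
spec:
  to:
    kind: Service
    name: webhook
  tls:
    termination: edge      # TLS terminated at the router
```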
We will not run actual tests on this cluster any more (that needs /dev/kvm, which isn't available). But we have internal systems for that.
Note that we'll deploy resources (pods, routes, etc.) ourselves, so the "migrate" in the topic just applies to project creation.
No.
Resources required:
- PVs: 1x 5 GB ReadWriteOnce for Prometheus data
- Memory for all pods: 1 GiB peak, expected persistent usage of ~200 MB
- CPU: 2 peak, but usually negligible (a few milliCPUs)
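Translated into Kubernetes resource settings, those figures could be expressed roughly as follows. This is only a sketch under assumptions: the split of the totals across containers and the exact request values are hypothetical, not something stated in the ticket.

```yaml
# Per-container resources; the stated figures are totals across all pods
resources:
  requests:
    memory: "100Mi"   # part of the ~200 MB expected steady-state usage
    cpu: "10m"        # usually negligible (a few milliCPUs)
  limits:
    memory: "512Mi"   # share of the 1 GiB peak across all pods
    cpu: "1"          # share of the 2-CPU peak
```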
Project_name: frontdoor
Project_admins:
- martinpitt@fedoraproject.org
- mmarusak@fedoraproject.org
Thank you in advance!
@mmarusak FYI
Metadata Update from @arrfab:
- Issue tagged with: centos-ci-infra, high-gain, medium-trouble, namespace-request
@cgranell : is it OK to onboard frontdoor into the new AWS OCP cluster? (Worth knowing that I'll be on PTO some days, so I'll only be able to create the needed IPA group and PV once back.)
@arrfab we can go ahead with it, but just to make it clear: it is PTO season in our teams, so it might take longer than usual to complete this task.
Thanks @cgranell ! There is no particular hurry; as far as I understood, the old CentOS CI cluster will stay until March 2023, right? We mostly just need the ack for planning, i.e. whether we need to look for some other infra.
Metadata Update from @arrfab:
- Issue assigned to arrfab
@martinpitt : as it will sync group membership based on FAS/ACO automatically (including creating the project/namespace), I was wondering (before creating the group in IPA): would it make sense to create a group for cockpit? Or frontend?
Please not "frontend", that has no meaning in our existing project/org/community structure. "cockpit" is fine; "frontdoor" is a bit more generic (but it's a RHEL-internal structure). So, come to think of it, I think group "cockpit" is fine. Thanks!
The ocp-cico-cockpit group was created in IPA (and is thus available in FAS/ACO), and your users were added. That automatically created the cockpit namespace on the new cluster: https://console-openshift-console.apps.ocp.cloud.ci.centos.org
A 5Gi PV was also created in your namespace. It is pre-bound to a PVC named prometheus-data via this claimRef:
```yaml
claimRef:
  namespace: cockpit
  name: prometheus-data
```
Let us know if that works for you, but normally you should be all set. We'll then be able to close this request.
Metadata Update from @arrfab:
- Issue priority set to: Waiting on Reporter (was: Needs Review)
Thanks @arrfab ! I confirm that I can log in. I used this manifest to create the PVC:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
  namespace: cockpit
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
and it worked fine, the PVC is in state "bound".
I deployed Prometheus/Grafana, they can successfully use the PVC, and it all seems to work.
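For reference, consuming that PVC from the Prometheus pod would look roughly like this. The container name, image, and mount path below are assumptions for illustration, not the actual deployment:

```yaml
# Pod spec fragment (e.g. inside a Deployment template)
spec:
  containers:
    - name: prometheus
      image: quay.io/prometheus/prometheus   # assumed image
      volumeMounts:
        - name: data
          mountPath: /prometheus             # assumed data directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: prometheus-data           # the PVC created above
```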
Thanks a lot, this was super fast!
Metadata Update from @martinpitt:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)
For the record, you can delete our old frontdoor project on the old cluster; we have fully migrated to the new one. Thank you!