#70 Create a new OCP Cluster for Stream and Linux 8 activities
Closed: Fixed 3 years ago by arrfab. Opened 3 years ago by bstinson.

We want a separate cluster to host the CentOS Stream and CentOS Linux 8 buildsystem, and some other services that are critical to release engineering.

@arrfab mentioned that we might want to investigate doing this on virtual machines spread across 2 (or more) blades in the cody chassis. cody-n[5-10] are already marked in the inventory as available for this role.

Cluster ID: ocp.centos.org

If we can provision the bare minimum number of nodes needed for OCP and add a few more worker nodes, that will be a good start for us. I'll leave instance sizing up to the group to discuss.


Metadata Update from @arrfab:
- Issue priority set to: Next Meetings (was: Needs Review)
- Issue tagged with: centos-common-infra, high-gain, high-trouble

3 years ago

@bstinson : per discussion, do you have some minimal specs you'd like for these nodes (depending on role: control plane or workers) :

  • cpu/cores
  • memory
  • local storage

We can probably resize later if needed, but it's better to start with an estimate (and identify where we can easily deploy this for now).
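
As a reference point, the commonly documented OpenShift 4 minimums (to be adjusted per the group's discussion) are 4 vCPUs / 16 GB RAM for control plane nodes and 2 vCPUs / 8 GB for workers, with ~120 GB of disk for RHCOS. As hypothetical inventory vars (names and values are a sketch, not decided specs):

```yaml
# sketch: commonly cited OCP 4 minimums, not final sizing
controlplane:
  vcpus: 4
  memory_gb: 16
  disk_gb: 120
worker:
  vcpus: 2
  memory_gb: 8
  disk_gb: 120
```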

Metadata Update from @arrfab:
- Issue assigned to arrfab

3 years ago

I can have a look at available resources and see if they meet the minimum requirements for an OpenShift/RHCOS deployment.
We can run a small spike to deploy on KVM guests at least, and then decide which direction to take.

Just to update this ticket for tracking purposes.
At this stage the following steps are marked as "done":

  • configured internal resolvers/DNS for the new zone (with SRV records and wildcards; updated the unbound role)
  • deployed and configured a new haproxy
  • deployed KVM hypervisors (bridges in place)
  • updated (after tests) the ansible playbooks/templates to deploy OCP on KVM guests
  • deployed 3 control plane nodes and 3 workers, all ready
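
For the DNS step above, a minimal zone sketch for the ocp.centos.org cluster domain could look like the following (all addresses are placeholders, not real infra values; the etcd SRV records were required by early OCP 4.x releases):

```
; sketch only: A record targets are placeholders
api.ocp.centos.org.       IN A    <haproxy-frontend-ip>
api-int.ocp.centos.org.   IN A    <haproxy-frontend-ip>
*.apps.ocp.centos.org.    IN A    <haproxy-frontend-ip>
; etcd SRV records (needed by early OCP 4.x releases)
etcd-0.ocp.centos.org.    IN A    <controlplane-0-ip>
_etcd-server-ssl._tcp.ocp.centos.org. IN SRV 0 10 2380 etcd-0.ocp.centos.org.
```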

Next steps:

  • replace the default ingress TLS certs with new ones (signed by a known CA)
  • open the ingress ports to the outside
  • discuss storage needs (NFS shares for Persistent Volumes)
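
For the cert replacement, the documented OCP 4 procedure is roughly to load the new cert as a TLS secret and point the default ingresscontroller at it (paths and the secret name below are placeholders):

```
# placeholders: adjust cert/key paths and secret name to the real ones
oc create secret tls custom-ingress-cert \
    --cert=</path/to/fullchain.crt> --key=</path/to/private.key> \
    -n openshift-ingress
oc patch ingresscontroller.operator default -n openshift-ingress-operator \
    --type=merge -p '{"spec":{"defaultCertificate":{"name":"custom-ingress-cert"}}}'
```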

I pushed a couple of new volumes into the ansible inventory to satisfy the PVs we need: one for /mnt/koji and one for the OCP registry.
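
As a sketch of what such an NFS-backed PV looks like (server, export path, size, and name here are assumptions for illustration, not the actual inventory values):

```yaml
# sketch: all values below are placeholders
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-koji
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs-server-hostname>
    path: /exports/koji
```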

Just as a status update: the ingress certs were replaced via ansible and the cluster is open to the outside world, with OpenID Connect/OAuth configured and working.
The last step is binding NFS for PVs (one for the registry and one dedicated to a "mbox" pod).
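
The OpenID Connect piece is configured through the cluster-scoped OAuth resource; a sketch (the provider name, client ID, secret name, and issuer URL are assumptions) would be:

```yaml
# sketch: identity provider values below are placeholders
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: accounts-centos
      mappingMethod: claim
      type: OpenID
      openID:
        clientID: <client-id>
        clientSecret:
          name: openid-client-secret
        claims:
          preferredUsername:
            - preferred_username
          email:
            - email
        issuer: <issuer-url>
```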

Per discussion with @bstinson: now that the cluster is deployed/ready and has storage, we can close this ticket.

Metadata Update from @arrfab:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

3 years ago
