#1401 Request: Admin access to openshift namespace / jenkins of pagure project
Closed: Fixed a year ago by wombelix. Opened a year ago by wombelix.

The initial request for the pagure namespace: https://pagure.io/centos-infra/issue/665

I'm a Pagure contributor and work on improving the unit tests and CI runtime. To test and develop a new pipeline, I need admin access to https://jenkins-pagure.apps.ocp.cloud.ci.centos.org. My FAS account is wombelix. It looks like @ngompa lost his access to Jenkins at some point; this needs to be restored too.

Please let me know if additional information is required. Thanks!


I would also like to learn about the duffy pool quotas applied to pagure.

To speed up the tests we need to run them in parallel and claim 4 nodes from pool virt-ec2-t2-centos-9s-x86_64 for ~15 minutes in total.
We could also implement logic to use one node from pool metal-ec2-c5n-centos-9s-x86_64 if available.
In that case we can probably give it back after ~10 minutes.
We would then limit the overall number of parallel jobs to 2-3 and add logic to kill a running job if a second one is submitted for the same PR.
Right now we block one node from virt-ec2-t2-centos-8s-x86_64 per CI job for ~1h - 1h 30min on average.
Regarding https://pagure.io/centos-infra/issue/1377 and https://lists.centos.org/hyperkitty/list/ci-users@lists.centos.org/thread/B7JU5S7H2EBKFID6IH4GUB3LU5XOAJ7U/, will the number of nodes in virt-ec2-t2-centos-9s-x86_64 increase when virt-ec2-t2-centos-7-x86_64 and virt-ec2-t2-centos-8s-x86_64 are decommissioned?
I want to find a good balance between a reduced test time in Pagure and reasonable resource usage in the CentOS CI pools.
Details of the planned implementation can be found here: https://pagure.io/pagure/issue/5466
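The claiming strategy above (prefer one bare-metal node if available, otherwise fall back to four small virt nodes) can be sketched roughly as follows. This is only an illustration of the described logic; the pool names come from this ticket, while `plan_claim` and the availability check are hypothetical placeholders, not part of the real Duffy API.

```python
# Hypothetical sketch of the pool-selection logic described above.
# Pool names are from the ticket; everything else is an assumption.

METAL_POOL = "metal-ec2-c5n-centos-9s-x86_64"
VIRT_POOL = "virt-ec2-t2-centos-9s-x86_64"

def plan_claim(metal_available: bool) -> dict:
    """Return the pool and node count to request for one CI run."""
    if metal_available:
        # One bare-metal node, expected to be given back after ~10 minutes.
        return {"pool": METAL_POOL, "quantity": 1, "ttl_minutes": 10}
    # Otherwise claim 4 small virt nodes for ~15 minutes and run tests in parallel.
    return {"pool": VIRT_POOL, "quantity": 4, "ttl_minutes": 15}
```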

Access to the pagure namespace/project on OpenShift is granted through the https://accounts.centos.org/group/ocp-cico-pagure/ group.
So you can ask @zlopez to add/sponsor you eventually? (That grants all admin rights in that namespace.)

Metadata Update from @arrfab:
- Issue assigned to zlopez
- Issue tagged with: authentication, centos-ci-infra, namespace-request

a year ago

> I would also like to learn about the duffy pool quotas applied to pagure.
>
> To speed up the tests we need to run them in parallel and claim 4 nodes from pool virt-ec2-t2-centos-9s-x86_64 for ~15 minutes in total.
> We could also implement logic to use one node from pool metal-ec2-c5n-centos-9s-x86_64 if available.
> In that case we can probably give it back after ~10 minutes.
> We would then limit the overall number of parallel jobs to 2-3 and add logic to kill a running job if a second one is submitted for the same PR.
> Right now we block one node from virt-ec2-t2-centos-8s-x86_64 per CI job for ~1h - 1h 30min on average.
> Regarding https://pagure.io/centos-infra/issue/1377 and https://lists.centos.org/hyperkitty/list/ci-users@lists.centos.org/thread/B7JU5S7H2EBKFID6IH4GUB3LU5XOAJ7U/, will the number of nodes in virt-ec2-t2-centos-9s-x86_64 increase when virt-ec2-t2-centos-7-x86_64 and virt-ec2-t2-centos-8s-x86_64 are decommissioned?
> I want to find a good balance between a reduced test time in Pagure and reasonable resource usage in the CentOS CI pools.
> Details of the planned implementation can be found here: https://pagure.io/pagure/issue/5466

I don't think we plan to bump the number of "ready" nodes in the other pools when c8s/c7 are EOL'ed.
But don't forget that the ready pool doesn't mean the total of available nodes .. that's just the "buffer": as soon as tenants request nodes, the duffy backend worker (through ansible) re-provisions machines to get back to that level in the pool.

I just had a look at the pagure tenant in Duffy and the current quota is the default one for all CI tenants:

  • each session has a maximum TTL of 6h (and you can request multiple machines per session)
  • maximum nodes quota: 10 (independent from running sessions, so it can be 10 sessions with one node each or 1 session with 10 nodes, etc)
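As a quick back-of-the-envelope check (an illustration, not official guidance): combining this default 10-node quota with the plan above of 4 virt nodes per job gives a natural cap on how many CI jobs can run in parallel.

```python
# How the planned parallel jobs fit into the default Duffy quota,
# assuming each job claims 4 virt-ec2-t2-centos-9s-x86_64 nodes.

MAX_NODES = 10      # default per-tenant quota mentioned above
NODES_PER_JOB = 4   # planned claim per CI run

def fits_quota(parallel_jobs: int) -> bool:
    """True if the given number of concurrent jobs stays within the quota."""
    return parallel_jobs * NODES_PER_JOB <= MAX_NODES

for jobs in (2, 3):
    print(jobs, "jobs ->", jobs * NODES_PER_JOB, "nodes, fits:", fits_quota(jobs))
```

Under these assumptions, 2 parallel jobs use 8 nodes and fit, while 3 would need 12 and exceed the default quota; so capping at 2 concurrent jobs (or asking for a quota bump) would be the safer end of the 2-3 range.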

> Access to the pagure namespace/project on OpenShift is granted through the https://accounts.centos.org/group/ocp-cico-pagure/ group.
> So you can ask @zlopez to add/sponsor you eventually? (That grants all admin rights in that namespace.)

Thanks @arrfab, I pinged @zlopez and asked them to sponsor Neal and me.

> I don't think we plan to bump the number of "ready" nodes in the other pools when c8s/c7 are EOL'ed.
> But don't forget that the ready pool doesn't mean the total of available nodes .. that's just the "buffer": as soon as tenants request nodes, the duffy backend worker (through ansible) re-provisions machines to get back to that level in the pool.

Ahh ok, that makes sense, I thought "ready" was the pool limit. OK then we shouldn't have any problems.

> I just had a look at the pagure tenant in Duffy and the current quota is the default one for all CI tenants:
>
>   • each session has a maximum TTL of 6h (and you can request multiple machines per session)
>   • maximum nodes quota: 10 (independent from running sessions, so it can be 10 sessions with one node each or 1 session with 10 nodes, etc)

Cool, that should work then for what we plan to do.

Thanks a lot @zlopez, I can log in :thumbsup:

Metadata Update from @wombelix:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

a year ago
