Moving the CI ppc64le and aarch64 hypervisors to the new CI 172.19.0.0/21 subnet, and reflecting that change in the new authoritative ci.centos.org bind DNS zone.
@arrfab is this work still valid? Can you add acceptance criteria, and, if you know how much work/gain this will bring, add the appropriate tags to this issue?
Metadata Update from @dkirwan: - Issue tagged with: need-more-info
@dkirwan : yes, these hypervisors are still reachable on the old 172.22.6.0/23 subnet but should be moved to the new 172.19.0.0/21 CI subnet, as the goal is to combine all CI infra in the same vlan/subnet. We can probably do that before converting these nodes for cloud.cico use, as I think @siddharthvipul1 still has work to do on the cloud.cico duffy integration.
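As a quick sanity check on the two ranges mentioned above, the stdlib `ipaddress` module can confirm that the old /23 and the new /21 do not overlap, and how much room the new subnet offers. The sample addresses below are illustrative only, not actual hypervisor IPs:

```python
import ipaddress

# The two CI subnets discussed in this ticket.
old = ipaddress.ip_network("172.22.6.0/23")
new = ipaddress.ip_network("172.19.0.0/21")

# Hypothetical addresses, one in each range (for illustration only).
print(ipaddress.ip_address("172.22.6.10") in old)  # True
print(ipaddress.ip_address("172.19.0.10") in new)  # True

# The ranges are fully distinct, so moving a node means a new IP + DNS update.
print(old.overlaps(new))   # False
print(new.num_addresses)   # 2048 addresses available in the /21
```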
Metadata Update from @arrfab: - Assignee reset
Metadata Update from @arrfab: - Issue untagged with: need-more-info - Issue priority set to: Waiting on Assignee - Issue tagged with: low-trouble, medium-gain
Metadata Update from @arrfab: - Issue marked as depending on: #4
Metadata Update from @arrfab: - Issue tagged with: groomed
Metadata Update from @arrfab: - Issue assigned to arrfab
While waiting for cloud.cico and for these nodes to be reinstalled as c7 + opennebula hosts, let's migrate them to their new IPs in DNS and into the new ansible ci inventory.
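The DNS side of the migration amounts to updating A records in the authoritative ci.centos.org zone. A minimal sketch, assuming illustrative addresses within 172.19.0.0/21 (the real assignments live in the zone file itself):

```
; Hypothetical A records in the ci.centos.org zone after the move.
; Addresses shown are examples in 172.19.0.0/21, not the real ones.
p8h1    IN  A   172.19.0.11
aah2    IN  A   172.19.0.21
```

After bumping the zone serial and reloading bind, the nodes resolve to their new subnet addresses.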
Metadata Update from @arrfab: - Issue unmarked as depending on: #4 - Issue untagged with: dev, groomed
The following commits (in filestore and inventory) fix this:
* 531ff7e - (HEAD -> master) Power/aarch64 ip switch to correct subnet. Fixes #15
* 71b9122 - (HEAD -> master, origin/master, origin/HEAD) Added power/aarch64 ci hypervisors in inventory
That also means that p8h{1..3} and aah{2..4} (the Power 8 and aarch64 hypervisors) are now managed under the new ansible ci inventory.
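For reference, grouping those hosts in the new inventory could look roughly like the INI sketch below; the group names and layout are illustrative assumptions, not taken from the actual repo:

```ini
# Hypothetical inventory grouping for the migrated hypervisors.
[ci_hypervisors_power8]
p8h1.ci.centos.org
p8h2.ci.centos.org
p8h3.ci.centos.org

[ci_hypervisors_aarch64]
aah2.ci.centos.org
aah3.ci.centos.org
aah4.ci.centos.org
```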
Metadata Update from @arrfab: - Issue close_status updated to: Fixed - Issue status updated to: Closed (was: Open)