#8748 Migrate Fedora Project communishift to new datacenter
Opened 5 months ago by kevin. Modified 3 months ago

As some of you may know, later this year the Fedora Project is moving entirely out of our main datacenter in Phoenix, AZ, USA (where the communishift cluster is currently located).

We explored various options to migrate the cluster with minimal downtime, but we just don't have the extra hardware or resources to set up a duplicate cluster anywhere currently. So, unfortunately, that means we will need to power off the cluster, ship it to a new datacenter, and bring it back up there.

The estimated timeline for this looks like:

2020-04-13 - power off, derack, pack and ship
2020-04-20 - Arrival at new DC (or before)
2020-04-23 - Unpack, re-rack, cable, network
2020-04-30 - Reconfigure/reinstall
2020-05-08 - fully back in service

Note that these dates are subject to shipping times, the availability of people to rack and set up hardware, and other items beyond our control.

Watch this ticket for date adjustments and progress updates as the move happens.


Communishift is now powered off. Being packed up to ship.

Hardware has arrived at the shipping dock of its new datacenter. Waiting to see how long until it can be unpacked and racked.

Status: hardware is being racked. It will possibly be ready for work on Friday, due to limits on the time people are allowed inside the datacenter under COVID-19 precautions. I am having to look for the personal protective equipment I will need to wear when going to the datacenter.

So, we have run into a few roadblocks. ;(

First, one of our switches died (either due to the move or just bad luck). I think that's been fixed now.

Now we need networking to approve our network setup plan, and then we need a few new cable runs to connect things where we need them to be. Due to COVID-19, staff time at the datacenter is being limited, including for the dcops folks who could do the cabling. Additionally, we have a hard deadline to finish the other datacenter and have been focusing on that.

All that said, we will still try to get this up and running again as soon as we have cycles to do so.
