The PDC-backend hosts were used to run pdc-updater. It is my understanding that this is the only thing they were running.
If that is the case, then since pdc-updater has been deprecated in favor of toddlers and all the handlers of interest from pdc-updater have been ported and deployed as toddlers (cf. #9094), I believe the pdc-backend hosts can be decommissioned and the ansible code for them cleaned up.
no deadline/hurry
I also use it to run mass branching and other PDC-related things (changing the EOL of all components when a release goes EOL), as it is faster than running them from my local machine.
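For illustration, a bulk EOL change of that kind could be scripted against the PDC REST API roughly like this; the endpoint, field names, instance URL and token handling below are my assumptions, not the actual tooling used on the host:

```yaml
# Hypothetical playbook: set a new EOL on all component-branch SLAs of a
# branch once the release goes EOL. Endpoint and field names are assumed
# from the public PDC REST API; only the first page of results is handled.
- hosts: localhost
  gather_facts: false
  vars:
    pdc_url: https://pdc.example.org/rest_api/v1   # placeholder instance
    pdc_token: "{{ lookup('env', 'PDC_TOKEN') }}"  # placeholder token
    branch: f30
    new_eol: "2020-05-26"
  tasks:
    - name: fetch the SLA entries for the branch
      uri:
        url: "{{ pdc_url }}/component-branch-slas/?branch={{ branch }}"
        headers:
          Authorization: "Token {{ pdc_token }}"
        return_content: true
      register: slas

    - name: set the new EOL date on every SLA entry found
      uri:
        url: "{{ pdc_url }}/component-branch-slas/{{ item.id }}/"
        method: PATCH
        headers:
          Authorization: "Token {{ pdc_token }}"
        body_format: json
        body:
          eol: "{{ new_eol }}"
        status_code: 200
      loop: "{{ slas.json.results }}"
```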
From the stand-up today:
and someone can check everything there, then we turn them off (but keep them around) and try to use another host the next time we need something; if it fails, we bring those back?
Metadata Update from @mohanboddu:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: groomed, medium-gain, medium-trouble
So checking https://pagure.io/fedora-infra/ansible/blob/798bf46c821315d003742570f65380f3ee7e0592/f/roles/pdc/backend/tasks/main.yml
What it does is:
- Install pdc-updater
- Add the fedmsg configuration file for pdc-updater so fedmsg-hub runs that consumer
- Create a /etc/pdc.d folder and put a configuration file in there
- Set up a daily cron job that runs pdc-audit, which sends a report about the differences between PDC and the data sources. I'm checking my email but I'm not finding traces of this cron job since April 1st 2020 (I had email going back to January 1st 2020, but I cleaned up the older messages just this morning...).
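For the record, here is a minimal sketch of the kind of tasks that role contains, based on the description above; file names, the handler name and the audit command are assumptions on my part, not copied from the real roles/pdc/backend/tasks/main.yml:

```yaml
# Rough reconstruction of the tasks described above -- file names, the
# handler name and the audit command are assumptions, not copied from the
# real role.
- name: install pdc-updater
  package:
    name: pdc-updater
    state: present

- name: add the fedmsg configuration enabling the pdc-updater consumer
  copy:
    src: pdc-updater.py            # assumed file name
    dest: /etc/fedmsg.d/pdc-updater.py
    mode: "0644"
  notify: restart fedmsg-hub       # assumed handler name

- name: create the /etc/pdc.d folder
  file:
    path: /etc/pdc.d
    state: directory
    mode: "0755"

- name: install the pdc client configuration
  copy:
    src: fedora.json               # assumed file name
    dest: /etc/pdc.d/fedora.json
    mode: "0644"

- name: daily cron job running the pdc audit and mailing the report
  cron:
    name: pdc-audit
    special_time: daily
    user: root
    job: pdc-updater-audit         # assumed command name
```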
@mohanboddu looking at this ^ there doesn't seem to be anything specific in there that you couldn't do, say on pdc-web02 or so.
I am in favor of decommissioning pdc-backend02 and 03, keeping 01 around in case it is needed later (though I very much doubt it will be), and removing the pdc/backend role from ansible.
Metadata Update from @pingou:
- Issue assigned to pingou
So we've agreed to:
Then sometime next week (or the week after):
The (disk) images (LVM volumes) will remain until we need the space. If needed, we could thus mount them again or build a new image from them.
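To spell out what "turn them off but keep the disk images" could look like on the virthost side, here is a rough sketch; the host names and the virthost group are placeholders, and the real steps presumably followed the usual infrastructure SOP rather than an ad-hoc play like this:

```yaml
# Hypothetical play: power off and undefine the guests on their virthost
# while leaving the LVM-backed disk volumes untouched, so they can be
# mounted or reused later if something turns out to be missing.
- hosts: virthost_placeholder
  become: true
  tasks:
    - name: power the guests off
      virt:
        name: "{{ item }}"
        state: destroyed
      loop:
        - pdc-backend02.example.org
        - pdc-backend03.example.org

    - name: remove the libvirt definitions (plain undefine keeps the storage)
      virt:
        name: "{{ item }}"
        command: undefine
      loop:
        - pdc-backend02.example.org
        - pdc-backend03.example.org
```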
This is now all done. ;)
Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)