In changing waiverdb to record waivers against a subject+testcase, rather than a result id, we wrote a schema migration which just drops the old column. At the time we assumed that waiverdb was not yet in use, so there would be no rows in the database.
However it is now in use, so that migration will break existing waivers.
We may need to devise a better migration strategy. For example, we could re-use the same code for looking up the result in ResultsDB and mapping it to an appropriate subject+testcase: #106
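For what it's worth, a rough sketch of that lookup could look something like the following. This is not the actual #106 code; the ResultsDB URL, the subject format, and the function name are all just illustrative, assuming the ResultsDB v2.0 API layout.

```python
import requests

RESULTSDB_API_URL = 'https://resultsdb.example.com/api/v2.0'  # placeholder URL

def result_to_subject_testcase(result_id):
    """Map a ResultsDB result id to a (subject, testcase) pair."""
    resp = requests.get('{0}/results/{1}'.format(RESULTSDB_API_URL, result_id),
                        timeout=30)
    resp.raise_for_status()
    result = resp.json()
    testcase = result['testcase']['name']
    data = result['data']  # ResultsDB stores extra data as lists of values
    item_type = data.get('type', [None])[0]
    if item_type in ('koji_build', 'bodhi_update'):
        subject = {'type': item_type, 'item': data['item'][0]}
    elif 'original_spec_nvr' in data:
        subject = {'original_spec_nvr': data['original_spec_nvr'][0]}
    else:
        # Same policy as the API: anything we don't recognise is an error.
        raise ValueError('Unrecognised result data for result {0}'.format(result_id))
    return subject, testcase
```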
@ralph is this worth worrying about, do you think?
If the number of waivers in Fedora's waiverdb is small, we could do something hackier, like just re-submitting the corresponding waivers by hand after the upgrade.
The alternative would be to devise a more complicated three-stage migration: add the new columns, backfill them from ResultsDB, then drop the old column.
Ideally with some kind of automatically reversible process in case we need to downgrade it...
That is a lot of complexity just to avoid breaking a small amount of existing data. But on the other hand, maybe this is a good opportunity to get some practice at doing a migration like this for an OpenShift app, because we are going to need to be able to do things like this going forward.
Thinking about this this morning... maybe the way to do this is to only drop old columns in a subsequent release, after the initial migration:
See http://threebean.org/img/migration.png
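In Alembic terms the two-release idea could look roughly like this. This is only a sketch: the revision ids, table and column names are placeholders, not the real waiverdb migration.

```python
# Release N -- e.g. migrations/versions/xxxx_add_subject_and_testcase.py
from alembic import op
import sqlalchemy as sa

revision = 'xxxx'        # placeholder revision id
down_revision = 'wwww'   # placeholder

def upgrade():
    # Add the new columns alongside result_id and backfill them in the same
    # transaction; they can then be tightened to NOT NULL. The old result_id
    # column is deliberately left in place for now.
    op.add_column('waiver', sa.Column('subject', sa.Text(), nullable=True))
    op.add_column('waiver', sa.Column('testcase', sa.Text(), nullable=True))

def downgrade():
    # Reversible while result_id still exists: just drop the new columns.
    op.drop_column('waiver', 'testcase')
    op.drop_column('waiver', 'subject')


# Release N+1 -- a separate revision shipped in a later release, applied only
# once no deployed code reads result_id any more:
#
#     def upgrade():
#         op.drop_column('waiver', 'result_id')
#
#     def downgrade():
#         op.add_column('waiver',
#                       sa.Column('result_id', sa.Integer(), nullable=True))
```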
I'll start changing the migration script so that it simply adds the subject column and queries ResultsDB for the corresponding subject for each result id. About that: what should happen if the item type is not "koji_build" or "bodhi_update", and it is not "original_spec_nvr"? In the API we raise an error. What should the migration script do?
Ideally we would want the migration script to exit with an error, leaving the existing data intact. Ideally that would then cause the deployment to fail (so we know there is a bug we need to fix) and roll itself back.
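For the data-migration step itself, something along these lines might work. Again, just a sketch, reusing the hypothetical result_to_subject_testcase() lookup from earlier; assuming the migration runs inside a transaction (as it does with PostgreSQL's transactional DDL), an uncaught exception rolls everything back and the existing rows are left untouched.

```python
from alembic import op
import json
import sqlalchemy as sa

def backfill_subject_testcase():
    """Fill in subject/testcase for every existing waiver, or die trying."""
    conn = op.get_bind()
    rows = conn.execute(sa.text('SELECT id, result_id FROM waiver')).fetchall()
    for row in rows:
        # The hypothetical lookup raises for result data it does not
        # recognise, which aborts the whole migration.
        subject, testcase = result_to_subject_testcase(row.result_id)
        conn.execute(
            sa.text('UPDATE waiver SET subject = :subject, testcase = :testcase '
                    'WHERE id = :id'),
            {'subject': json.dumps(subject), 'testcase': testcase, 'id': row.id},
        )
```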
Right now we are doing the migration on application startup, which is not good: it means each pod in the rolling deployment may try to run the database migration concurrently... I think we need to use some kind of Kubernetes "Job", or the OpenShift equivalent:
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
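For example, with the official kubernetes Python client it might look something like this. The image, command and namespace are placeholders for whatever the real waiverdb deployment uses; an OpenShift pre-deployment lifecycle hook would be another way to run the same one-off migration container.

```python
from kubernetes import client, config

def create_migration_job(namespace='waiverdb', image='waiverdb:latest'):
    """Run the schema migration once, as a Job, instead of in every pod."""
    config.load_incluster_config()  # or load_kube_config() outside the cluster
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name='waiverdb-db-migrate'),
        spec=client.V1JobSpec(
            backoff_limit=0,  # fail fast so a broken migration fails the rollout
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy='Never',
                    containers=[client.V1Container(
                        name='db-migrate',
                        image=image,
                        # Placeholder: whatever command runs the Alembic upgrade.
                        command=['waiverdb', 'db', 'upgrade'],
                    )],
                ),
            ),
        ),
    )
    return client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)
```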
Metadata Update from @gnaponie:
- Issue assigned to gnaponie
PR: https://pagure.io/waiverdb/pull-request/120
Metadata Update from @gnaponie:
- Assignee reset
Related conversation in #121.
See PR #124.
Metadata Update from @ralph:
- Issue status updated to: Closed (was: Open)

Metadata Update from @dcallagh:
- Issue close_status updated to: Fixed
- Issue set to the milestone: 0.7