#1820 Adjust/Drop/Document batched updates policy
Closed: Accepted 5 years ago by churchyard. Opened 6 years ago by kevin.

Bodhi added a batched updates state in recent months. When an update has gotten sufficient karma in updates-testing, it is promoted to 'batched' (or, after sufficient time, the maintainer promotes it to 'batched'). The maintainer can override this and submit it to 'stable' instead.

Batched updates are just kept in that state (still in updates-testing, but not yet in stable) until a weekly Bodhi cron job (currently running Tuesday at 00:00 UTC) submits them for stable. The next push then moves all those updates to stable.

Considerations:

  • dnf ships with a 'dnf-makecache' timer running every hour. When repos are updated, this timer will download metadata on all enabled Fedora machines.
  • Many maintainers bypass batched currently
  • Updates_Policy doesn't mention batched state or what maintainers should/should not do
  • gnome-software currently gathers updates daily and only applies them when there are security updates or it's been a week. (I think this is right, but not sure... perhaps @kalev could chime in and correct this?)

@mattdm came up with the idea for batched; he might have more input on what its goals are.


One proposal for getting it used more (it has only been available since October, by the way) would be to award badges for submitting updates to batched.

gnome-software currently gathers updates daily and only applies them when there are security updates or it's been a week. (I think this is right, but not sure... perhaps @kalev could chime in and correct this?)

Yes, that's correct.

The advantages of batched are that it avoids badgering users with constant updates (most of which aren't relevant to them) as well as reducing the overhead on our mirror network.

I am personally of the opinion that batched should be enforced for all updates that are not marked as urgent security updates, with the option for the user to petition the infrastructure team to push something through earlier with justification (e.g. fixing a bug that is not a security issue but is visibly affecting a lot of users).

Edit: I think we would need to clearly spell out the criteria for these exceptions ahead of time, lest we have an overflow of requests for quick pushes.

To clarify: ANY update marked 'urgent' bypasses the batched state currently. (Security or otherwise).

I would advocate that updates don't necessarily need to be security related to bypass batched without releng approval. A severe bug that eats data or stops a system from booting would also be urgent. This is why the current Bodhi code automatically bypasses batched for autokarma updates if they are marked urgent.

I would advocate that updates don't necessarily need to be security related to bypass batched without releng approval. A severe bug that eats data or stops a system from booting would also be urgent. This is why the current Bodhi code automatically bypasses batched for autokarma updates if they are marked urgent.

Sure, I'll revise my proposal to drop the word "security". I still think that batched should be enforced.

This is basically a small thing which came out of this big discussion a little over a year ago: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/HFTB6Z5DAXZV6O4OGFK4I2GQIE42XMHR/#M5EH2C3CDRZ477PO5ADKAVRGO444P34M
... and has history dating back to Spot's proposal from FUDcon Lawrence.

Goals are:
Make things more predictable for users
Reduce the perception of a constant flood of hard-to-manage change
Encourage non-critical updates to sit in updates-testing for a bit longer
Reduce mirror churn
Reduce metadata churn
Eventually get to a point where we're doing QA on updates as a set (ideally mostly automatically)

Obviously this particular bodhi feature doesn't get us all the way there, but it moves in that direction with relatively little friction. I'd be okay with experimenting with making the policy stronger. There are clearly non-urgent updates not being batched right now.

I think that if people really want the firehose, updates-testing is what they want (and plans to push out updates more quickly or with less testing just make stable more like "testing" without being honest about it).

Metadata Update from @sgallagh:
- Issue tagged with: meeting

6 years ago

dnf ships with a 'dnf-makecache' timer running every hour. When repos are updated, this timer will download metadata on all enabled Fedora machines.

FWIW rpm-ostree doesn't have that timer; and if used in "pure image/ostree" mode, we also don't fetch the repodata at all.

I also don't use systemd in my dev containers, so I don't have that timer there either.

Can you therefore adjust that to "traditional/dnf Fedora machines"?

I think this batching is really unhelpful because:

  • users who want batched updates were already able to do their own batches locally. Now they have to adjust to the weekday Fedora delivers batches on, or incur unwanted additional delays. So we take away the users' ability to schedule the batches themselves. E.g., I can imagine a company wanting to run updates once on Monday morning and then keep working the whole week without disruption from updates. The timing of the Fedora batches goes counter to that.
  • users who do not want batched updates get basically screwed, with no good option to get the updates faster. (The only option is to opt into all of updates-testing, which includes untested updates too.) At the very least, a fast track repository which allows getting the updates as soon as they're batched is needed. (That would also help the use case of the Monday morning updates, and it would also solve the metadata issue because the frequently updated fast track repository would contain only few packages, they'd move from there to stable at each batch.)

Goals are:

I would word these differently:

Make things more predictable for users

Remove choice and control from users.

Reduce the perception of a constant flood of hard-to-manage change

Hide the constant flow of bug fixes from users, and swamp them with a big drop once a week, for which it takes 1-2 hours to read all the update notes (Yes, I actually read the update notes!), as opposed to taking 10 minutes every day.

Encourage non-critical updates to sit in updates-testing for a bit longer

Force an arbitrary delay on updates that are already tested and ready for stable, which is essentially random because it depends only on when in the week the update passes the testing and not on the actual quality of the update.

Reduce mirror churn
Reduce metadata churn

These are kinda valid technical concerns, but there ought to be other solutions for that. (A combination of a batched and a fast track repository would help there. Metadata deltas would be another solution. Solving the metadata issue would also cut much of the churn for mirrors, because the metadata is actually the largest part to sync.)

Eventually get to a point where we're doing QA on updates as a set (ideally mostly automatically)

Doing tests in batches of unrelated updates makes it hard to figure out which of the updates actually caused the regression.

Obviously this particular bodhi feature doesn't get us all the way there, but it moves in that direction with relatively little friction. I'd be okay with experimenting with making the policy stronger. There are clearly non-urgent updates not being batched right now.
I think that if people really want the firehose, updates-testing is what they want (and plans to push out updates more quickly or with less testing just make stable more like "testing" without being honest about it).

This is just not true. I want tested updates as soon as they are tested. I don't want untested updates. And I expect this to be the case for anybody running Fedora on any kind of production machine, even if they want daily updates (as I do).

And batched does not actually ensure more testing, the criteria for pushing to batched are the same as the criteria for pushing to stable. The waiting is purely time-based and depends literally on the day of the week, which is obviously not a reasonable criterion for stability. (The extra testing can be anywhere between 0 and 7 days.) And, even if it is theoretically possible, do updates in batched actually ever get withdrawn from there due to additional testing? I have not seen it happening at all.

Pushing tested updates out more quickly (in fact, the daily pushes were already a compromise, I would love multiple pushes a day or ideally Copr-style instant pushes) is entirely orthogonal to the criteria for when an update can be considered tested.
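The "anywhere between 0 and 7 days" point can be made concrete with a small sketch (a hypothetical helper of my own, assuming day-level granularity and the Tuesday 00:00 UTC push mentioned earlier in the thread):

```python
def days_in_batched(ready_weekday: int) -> int:
    """Whole days an update waits in batched if it becomes ready on the
    given weekday (0 = Monday ... 6 = Sunday), assuming the weekly
    batched -> stable promotion runs early on Tuesday (weekday 1)."""
    return (1 - ready_weekday) % 7

# Ready on Tuesday: promoted the same day, 0 extra days in batched.
# Ready on Wednesday: waits 6 days for the next Tuesday push.
```

With hour-level granularity an update that just misses the Tuesday push waits nearly a full 7 days, which matches the "0 to 7" range: the delay depends only on the calendar, not on the quality of the update.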

This issue will be discussed in the FESCo meeting Friday at 15:00 UTC in #fedora-meeting on irc.freenode.net.

Note that this is a change in time from the previous FESCo meeting time.

To convert UTC to your local time, take a look at
http://fedoraproject.org/wiki/UTCHowto

or run:
date -d '2018-02-16 15:00 UTC'

(The extra testing can be anywhere between 0 and 7 days.)

Agreed, the fact that it can be 0 is a real problem! And it's actually related to the problem that, in general, no one knows for sure what will ship the next day; an update could get withdrawn that was actually required by something else. And that nothing actually tests/gates the final result of each individual update. We've talked about having the Atomic Host (AH) releases be the bodhi batches, but we'd need some thought on how to fix these issues.

In general though I think the batching is a good start - we're honestly never going to stop debating how updates work, the goal should be to improve in general.

I still have not seen any evidence that the time spent in batched, when it is non-zero, is actually used for any kind of additional QA. Have any updates been unpushed from batched so far? If yes, how many?

AGREED: zbyszek will solicit more feedback on fedora-devel (perhaps also from mattdm specifically) (+5, 0, -0)

Metadata Update from @bowlofeggs:
- Issue untagged with: meeting
- Issue assigned to zbyszek

6 years ago

@zbyszek It may be relevant to mention in your solicitation that there is a current proposal to make the metadata files also be delta'd (like drpms, so they'd be much smaller), which might also factor into people's considerations.

I opened a new thread on fedora-devel: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/DGAPOT4JH2XNRYDWXMGSEI54JILUGXH3/

I'd call the results inconclusive. The most interesting proposal is to create a new repo for non-batched updates:

I think a new, additional repo with batched contents should solve a lot of problems here: we'd have both side of the fence happy (regular users: batched updates, developers: continuous stream of updates), and it would also make it possible to actually QA the batched set before it hits the mirrors.

Right now there's no way for QA to only extract batched updates without getting all the rest of updates-testing; if we had that we could actually have people test the batched set of updates before they are pushed out to stable.

I think we should investigate how much effort and what cost to our mirroring infrastructure that would be.

Metadata Update from @zbyszek:
- Issue tagged with: meeting

6 years ago

ACTION: next week, updates will only be pushed to stable when there is an urgent pending-stable update, or on Tuesday after batched promotion. Feedback will be gathered and we will revisit next meeting.

Thanks for the heads-up, from now on all my updates will be marked urgent, and I encourage all maintainers to do the same.

@kkofler you act as if other people participating in the discussion wilfully ignored the arguments you make. But you know that is not true — the issue has been discussed in at least two long threads on fedora-devel, in the last FESCO meeting, and in previous ones. Getting the updates out as soon as possible is just one of the goals, and it conflicts with other goals. In an ideal world we would have better tooling and the whole discussion wouldn't happen, but (at least as of now) we don't, so we have to balance between all the priorities. The idea that was agreed upon in the FESCo meeting is a way to gather facts, how much we could gain if updates were fully batched. It is not a permanent solution, just an experiment to gather data. Please don't defeat this effort.

This quote from the meeting log:

15:44:44 <sgallagh> tyll_: Right. Assuming this passes with little content, the final solution (in my mind) is just enforcing batched.

makes it pretty clear what is going to happen if this one-week test is "successful".

You're really going to actively attempt to subvert FESCo's decision...? Don't you think it's likely we'll just lose the ability to skip batched altogether if maintainers choose to abuse the urgent priority?

@kkofler you act as if other people participating in the discussion wilfully ignored the arguments you make. But you know that is not true — the issue has been discussed in at least two long threads on fedora-devel, in the last FESCO meeting, and in previous ones.

Those mail threads on -devel didn't exactly result in a wave of enthusiastic support. The single articulated advantage that isn't better realized by following your own client-side update policy is nowhere to be seen: the more extensive testing of whole batches likely won't exist for a long time to come. Yet, we're plowing ahead with this batching idea.

This whole process really doesn't look like solving a problem to me. More like deploying a pre-determined solution (batched updates) with little regard to whether it solves an actual problem and how well it does that compared to alternative points in the design space.

How will the success of this one-week test be evaluated? What exactly is its purpose, even?

Those mail threads on -devel didn't exactly result in a wave of enthusiastic support. The single articulated advantage that isn't better realized by following your own client-side update policy is nowhere to be seen: the more extensive testing of whole batches likely won't exist for a long time to come. Yet, we're plowing ahead with this batching idea.

It's not expected (by me at least) that the devel list is representative of all Fedora users. It's fine to get feedback from the developer side, but for the users, we have to think based on what we know and on information we have gathered at the places where users hang out.

This whole process really doesn't look like solving a problem to me. More like deploying a pre-determined solution (batched updates) with little regard to whether it solves an actual problem and how well it does that compared to alternative points in the design space.

@mattdm mentioned the goals above...

How will the success of this one-week test be evaluated? What exactly is its purpose, even?

To ask users (and developers) if it meets the goals above...

Today's batch contains several security fixes: at least: clamav, freexl, glibc, libcdio, perl-Mojolicious, xen, with several CVEs each, plus libXcursor, libXFont, libXFont2, php, unzip with one CVE each, plus libwebp that was not linked with hardening flags. I consider it completely irresponsible to sit on security fixes that way. Especially considering that there was a push yesterday (for an urgent security fix in dhcp) anyway.

Security fixes should never be sent to batched.

@kevin:

It's not expected (by me at least) that the devel list is representative of all Fedora users.

The usual excuse for ignoring devel list feedback. Makes me wonder why you raised the issue there to begin with. Where do you see feedback from users in favor of batching? I haven't seen any.

@mattdm mentioned the goals above...

But his goals are clearly not shared by everyone. See also https://pagure.io/fesco/issue/1820#comment-491776 . Before forcing through a change fulfilling some goals, you first need to ask whether those goals are worthwhile or even desirable to begin with.

How will the success of this one-week test be evaluated? What exactly is its purpose, even?

To ask users (and developers) if it meets the goals above...

This assumes we want to reach those goals to begin with, for which there is no consensus whatsoever.

This is interesting:
clamav – https://bodhi.fedoraproject.org/updates/FEDORA-2018-602b5345fa → security/unspecified
glibc – https://bodhi.fedoraproject.org/updates/FEDORA-2018-1cbdc8cbb8 → security/low
libcdio – https://bodhi.fedoraproject.org/updates/FEDORA-2018-30a8492364 → security/unspecified
xen – https://bodhi.fedoraproject.org/updates/FEDORA-2018-c553a586c8 → security/unspecified
freexl – https://bodhi.fedoraproject.org/updates/FEDORA-2018-2eb691e7d7 → security/medium
php – https://bodhi.fedoraproject.org/updates/FEDORA-2018-a89ccf7133 → security/unspecified

At least for xen and php, it seems that those were important security updates and shouldn't be batched. IMHO there's clearly something to fix here. I think the best option would be to either automatically default to severity=urgent if type=security is selected, or to not batch updates with type=security and unspecified severity. It seems too many maintainers forget to set the severity, probably because it didn't matter in the past.

dhcp – https://bodhi.fedoraproject.org/updates/FEDORA-2018-5051dbd15e → security/urgent

I'm not sure what happened with this one. I guess it was pushed to batched by mistake and then updated or pushed.
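The second variant of this proposal (don't batch security updates whose severity is unspecified) could be sketched like this (a hypothetical function of my own, not actual Bodhi code):

```python
def should_batch(update_type: str, severity: str) -> bool:
    """Return True if the update should wait in the batched state.

    Sketch of the proposed rule: urgent updates always skip batched, and
    security updates with unspecified severity skip it too, on the theory
    that the missing severity means the maintainer simply forgot to set it.
    """
    if severity == "urgent":
        return False
    if update_type == "security" and severity == "unspecified":
        return False
    return True
```

Under this rule, the xen and php updates listed above (security/unspecified) would have gone straight to stable, while security/low and security/medium updates would still have been batched.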

dhcp was actually the one that was not batched. It was pushed on Monday, forcing a refresh of the metadata. Sadly, none of the other security updates (nor any other updates) were pushed together with it.

Hence the conclusion: This batching system totally fails at preventing metadata refreshes and needlessly delays many security updates.

At least for xen and php it seems that those were important security updates and shouldn't be batched. IMHO there's clearly something to fix here. I think the best option would be either automatically default to priority=urgent if type=security is selected, or to not batch updates with type=security and with unspecified priority.

Then you'll guarantee that Kevin will be right that the batching system will fail at preventing metadata refreshes. Security updates get pushed every day or two. If you want them to skip batched, then batched doesn't do its job, and it's hard to see the point of having batched at all.

Right. That's my point. Batching is inherently incompatible with responsible delivery of security updates, so it needs to be dropped.

Not all security updates are urgent in nature.

@zbyszek perhaps a variation of your idea might be simply to disallow "unspecified" for security updates - i.e., force the packager to choose a severity.

One more thing that could help - when we implemented the batching system, it was suggested without contest that it would be good for newpackage updates to go straight to stable. At the time I don't think there was a stated goal of reducing the metadata churn (the only goal in mind was to reduce the "firehose" of updates, and a new package wouldn't be an update), so it seemed reasonable to have them skip batching. We could change that and have them batch as well to reduce the churn.

I think it makes sense to require security updates to specify severity even outside of the batching concerns, so I've opened a ticket about that: https://github.com/fedora-infra/bodhi/issues/2206

This issue might explain why so many security updates end up with "unspecified", as editing updates with the CLI will cause them to become "unspecified": https://github.com/fedora-infra/bodhi/issues/2208

First: I think there are too many choices for urgency, particularly in a world involving batching. I'd like to suggest the following:

  • The UI should carry only two severity types: "Standard" and "Urgent". All updates marked as "standard" will be batched. All updates marked as "Urgent" will skip batched and be pushed to stable immediately.
  • The UI should change to reflect the meaning of the severity, clearly indicating that selecting "urgent" means that the packager is asserting that the update is necessary to resolve a serious issue affecting a large number of Fedora users where time is a critical factor. "Standard" should also clearly state that the update will be batched for the weekly release.

There is realistically no meaning to any of the other severity levels and - as they are listed without indication of what they mean - they are entirely subjective.

I also think that we should state clearly that abuse of the "urgent" field (such as setting it for every update, no matter how trivial) would be grounds for loss of packager privileges, as they would not be abiding by the rules set down by the Fedora Project and would therefore be violating the commitment they made when they were made a packager.

There are good reasons for the batched delivery for a great many users. There are many people out there in the world that are using Fedora for production workloads. Providing them with a public schedule for when changes are going to land is a good way to keep those people happy and encourage new deployments as well. Furthermore, while it hasn't happened yet, it also allows us the opportunity to have a known stop point (say, Monday just after the urgent updates push) to freeze the content for the batched release and run a series of tests against it as a complete unit. This is obviously not in place yet, but that sort of behavior would have this as a dependency.

I also agree with the earlier commenters who have noted that if you are a "live on the cutting edge" sort of person, then you probably should be running with updates-testing enabled anyway. For the record, with the ongoing CI efforts, I think we're getting closer to a world where updates-testing is going to be much closer to something we could call "updates-staging", rather than actual untested content. Remember too that our policy on u-t is that it's meant for changes we intend for stable and that its presence in u-t is not meant to be its first chance for testing; it's the last chance. We need to be more forceful about that. Particularly now that COPR is available, we need to encourage packagers to use THAT space for early testing instead of the updates-testing repository.

@sgallagh:

I also think that we should state clearly that abuse of the "urgent" field (such as setting it for every update, no matter how trivial) would be grounds for loss of packager privileges, as they would not be abiding by the rules set down by the Fedora Project and would therefore be violating the commitment they made when they were made a packager.

If you have to resort to this kind of threat to force packagers to follow your process, that is a clear indication that something is wrong with it.

There are good reasons for the batched delivery for a great many users. There are many people out there in the world that are using Fedora for production workloads. Providing them with a public schedule for when changes are going to land is a good way to keep those people happy and encourage new deployments as well. Furthermore, while it hasn't happened yet, it also allows us the opportunity to have a known stop point (say, Monday just after the urgent updates push) to freeze the content for the batched release and run a series of tests against it as a complete unit. This is obviously not in place yet, but that sort of behavior would have this as a dependency.

You are just repeating the same old arguments from @mattdm that I already debunked a month ago:
https://pagure.io/fesco/issue/1820#comment-491776

I do not see what kind of deployment is really helped by a forced time in the week where updates happen, as opposed to being able to decide on their own, client-side, as had always been the case. Your schedule is not necessarily aligned with the deployment's schedule. What advantage does forcing a schedule on them bring to the users wanting weekly updates?

I also agree with the earlier commenters who have noted that if you are a "live on the cutting edge" sort of person, then you probably should be running with updates-testing enabled anyway. For the record, with the ongoing CI efforts, I think we're getting closer to a world where updates-testing is going to be much closer to something we could call "updates-staging", rather than actual untested content. Remember too that our policy on u-t is that it's meant for changes we intend for stable and that its presence in u-t is not meant to be its first chance for testing; it's the last chance. We need to be more forceful about that. Particularly now that COPR is available, we need to encourage packagers to use THAT space for early testing instead of the updates-testing repository.

That too is just repeating an old argument from @mattdm that I already debunked a month ago, see the last paragraph of:
https://pagure.io/fesco/issue/1820#comment-491776

updates-testing by definition contains updates that are not tested yet. It is not a substitute for timely delivery of stable updates (including security updates, see https://pagure.io/fesco/issue/1820#comment-497833 ). And just because you would like more people to test updates-testing is not a valid reason to force users to be your guinea pigs.

I see only 2 ways out of this crisis:

  • drop batching completely, or
  • introduce an updates-fasttrack repository that carries all the batched updates, as I proposed months ago, and as has been reproposed by others everywhere this issue is being discussed.

I do not understand the opposition to updates-fasttrack. It would make both sides happy.

This is interesting:
clamav – https://bodhi.fedoraproject.org/updates/FEDORA-2018-602b5345fa → security/unspecified
glibc – https://bodhi.fedoraproject.org/updates/FEDORA-2018-1cbdc8cbb8 → security/low
libcdio – https://bodhi.fedoraproject.org/updates/FEDORA-2018-30a8492364 → security/unspecified
xen – https://bodhi.fedoraproject.org/updates/FEDORA-2018-c553a586c8 → security/unspecified
freexl – https://bodhi.fedoraproject.org/updates/FEDORA-2018-2eb691e7d7 → security/medium
php – https://bodhi.fedoraproject.org/updates/FEDORA-2018-a89ccf7133 → security/unspecified
At least for xen and php, it seems that those were important security updates and shouldn't be batched. IMHO there's clearly something to fix here. I think the best option would be to either automatically default to severity=urgent if type=security is selected, or to not batch updates with type=security and unspecified severity. It seems too many maintainers forget to set the severity, probably because it didn't matter in the past.
dhcp – https://bodhi.fedoraproject.org/updates/FEDORA-2018-5051dbd15e → security/urgent
I'm not sure what happened with this one. I guess it was pushed to batched by mistake and then updated or pushed.

So, as I pointed out on IRC, I was on push duty this week and tried to check all updates marked as Security with request=Stable; if they had a security bug marked by the security team as "Important" or higher, I marked the update as Urgent, so as to make sure that everything went out in time to protect users.
The only one where this was needed was the DHCP one, since the Xen one actually hit batched just before batched was promoted to stable on Tuesday, when all batched updates would get pushed anyway.

So we are now going to go the RHEL way, arbitrarily deciding whether a security fix is important enough or not on the users' behalf, and only caring about those rated important? This is quite sad.

Security updates need to go out as quickly as possible.

I noticed a significant benefit to this week's batching experiment - I received a much higher percentage of drpms than I usually do. So in addition to downloading less metadata, I also downloaded less RPM data when I did update (I don't typically update every day). This adds up to even less bandwidth than we thought when we just considered the metadata downloads being reduced.

How are the drpms related to the batching at all?

AGREED: Ask QA about batch testing plans, go back to normal pushes, discuss bodhi/other plans on list, revisit next week (+7, 0, -0)

I see this claim repeated in the meeting log:

15:38:01 <bowlofeggs> one nice thing i've noticed this week is that i get more drpms
15:38:11 <bowlofeggs> that's another subtle benefit of batching

but I still cannot see how batching affects the drpms at all. I got no answer to my question.

As for this bold claim:

15:42:18 <sgallagh> The hardest part of a decision like this is that we will never hear from the people it benefits.

this is the typical silent majority fallacy. I think the most likely explanation for why you never hear from those people is that they simply do not exist. I have not seen any evidence otherwise. And as I explained, users who want to do weekly batching can actually do it better if we do not do it for them because they get to pick their own schedule. So where are those users that like batching?

@bowlofeggs: Sorry if I sound like a broken record, but can you please explain how you think batching leads to more DRPMs?

As far as I know, DRPMs are computed for an update compared to the last n versions of the updated package, for some fixed n. Whether all updates happen in one batch or in separate batches should not affect this at all.

So I would think that the higher percentage of DRPMs that you had in that particular batch must have been a lucky accident and not related to the batching at all.

Or how would the batching affect the availability of DRPMs? (And if there is a reason, can that not be achieved without batching? The batching should not reduce the number of DRPMs to produce except in the rare occasion where the same package is updated multiple times a week.)

Batching slows down the firehose of updates. Some packages update rather frequently. When packages update frequently, they reduce the chances of users getting drpms, since you do not get a drpm if you miss an in-between update. I do not update my system every day, or even every week. Thus, since batching slows down the firehose, I get a higher ratio of drpms.
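This effect can be illustrated with a toy simulation (the numbers and the assumption that a drpm applies only against the immediately previous stable version are mine, for illustration only): a package that builds every 2 days, and a user who updates every 10 days.

```python
def stable_pushes(build_days, batch=False):
    """Days on which a new version of the package lands in stable.

    Without batching every build is pushed as-is; with batching, builds
    are coalesced into a weekly push (the next multiple of 7), and only
    the latest build of each batch ever reaches stable.
    """
    if not batch:
        return list(build_days)
    pushes = {}
    for day in build_days:
        push_day = (day // 7 + 1) * 7
        pushes[push_day] = day        # latest build in the batch wins
    return sorted(pushes)

def drpm_hits(push_days, update_days):
    """Count user updates where a drpm applies, assuming drpms are only
    generated against the immediately previous stable version."""
    installed = None                  # index of the installed version
    hits = 0
    for day in update_days:
        available = [i for i, d in enumerate(push_days) if d <= day]
        if not available or available[-1] == installed:
            continue
        latest = available[-1]
        if installed is not None and latest == installed + 1:
            hits += 1                 # only one version behind: drpm works
        installed = latest
    return hits

builds = list(range(2, 31, 2))        # a build every 2 days for a month
user = [10, 20, 30]                   # user updates every 10 days
daily = drpm_hits(stable_pushes(builds), user)
weekly = drpm_hits(stable_pushes(builds, batch=True), user)
```

With these particular numbers the daily stream never leaves the user only one version behind (no drpm hits), while the weekly batch does once; varying the user's update interval in the model shows how sensitive the outcome is to that interval, which is the crux of the disagreement below.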

@bowlofeggs:

I do not update my system every day, or even every week.

That's your fault then, but…

Thus, since batching slows down the fire hose, I get a higher ratio of drpms.

… that still does not follow. Even if you update less than weekly, whether the updates go out daily or weekly will only make a difference for missing or not missing an update if the two updates go out the same week. Otherwise, n updates in that time period will still be n updates in that time period, no matter whether they get pushed at any weekday or on Tuesday.

So I still think your empirical evidence is just a coincidence and not really a result of batching.

On 03/13/2018 10:21 AM, Kevin Kofler wrote:

That's your fault then, but…

Updating less often than daily is not a fault. Language like this weakens your persuasiveness. I suggest using a friendlier tone in the future.

… that still does not follow. Even if you update less than weekly, whether the updates go out daily or weekly will only make a difference for missing or not missing an update if the two updates go out the same week. Otherwise, n updates in that time period will still be n updates in that time period, no matter whether they get pushed at any weekday or on Tuesday.

So I still think your empirical evidence is just a coincidence and not really a result of batching.

With batching, packages do not update more often than once a week (unless they have a good reason to). In fact, they would most often not update every week either, because of the week-in-testing requirement and the obsoletion system, meaning that most packages would at most update every other week for non-exceptional updates[0]. This means it's easier to "catch them all!" than it is when packages can update rapidly.

[0] There's an open RFE from tibbs to allow packagers to stage updates,
which would allow them to update every week even with forced gating:
https://github.com/fedora-infra/bodhi/issues/2191
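The mechanism being debated here can be made concrete with a toy model (the function and all the numbers are made up for illustration; this is not Bodhi code). With per-release pushes, every build appears in the stable repo; with weekly flushes plus the obsoletion behavior described above, pending updates collapse so only the newest build in each batch window ever ships:

```python
def versions_in_repo(release_days, sync_day, flush_days=None):
    """Days on which a new build of one package appeared in the stable repo.

    flush_days=None models per-release pushes (every build goes out the day
    it is released); a list of flush days models weekly batching, where
    pending updates obsolete each other and only the newest one survives.
    """
    if flush_days is None:
        return [d for d in release_days if d <= sync_day]
    pushed, last = [], None
    for f in flush_days:
        if f > sync_day:
            break
        pending = [d for d in release_days if d <= f and (last is None or d > last)]
        if pending:
            last = max(pending)  # obsoletion: older pending builds never ship
            pushed.append(last)
    return pushed

# A package that releases on days 0, 2, 5, 9 and 12; the user syncs on day 13.
releases, sync = [0, 2, 5, 9, 12], 13
direct = versions_in_repo(releases, sync)            # all 5 builds shipped
weekly = versions_in_repo(releases, sync, [7, 14])   # only the day-5 build shipped
```

Note this only supports the drpm argument when several releases of the same package land inside one batch window; the model takes no position on how often that actually happens in practice.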

Does Bodhi actually obsolete updates in batched if you file a new update for testing? If it does that, IMHO that's a bug in Bodhi. Just as packages queued for stable are not obsoleted, packages already in batched should not get obsoleted either, because they are effectively also queued for stable.

On 03/13/2018 12:09 PM, Kevin Kofler wrote:

Does Bodhi actually obsolete updates in batched if you file a new update for testing? If it does that, IMHO that's a bug in Bodhi. Just as packages queued for stable are not obsoleted, packages already in batched should not get obsoleted either, because they are effectively also queued for stable.

It does, and it makes sense for it to do that because the goal of the
obsoletion system is to avoid two packages in updates-testing at the
same time. batched updates are still in the updates-testing repo.

That is really another argument for stopping the batching, because it breaks getting frequently-changing packages out. If an update is batched for stable, submitting another update to testing must not cancel the batched update. If this is not compatible with how the batching is implemented, then it is impossible to do the right thing, and so the only possible conclusion is to stop batching. (But please note that the frequently proposed fasttrack repository would solve the problem, because the batched update would then be in fasttrack whereas the new one would be submitted to testing, so there would be no conflict there.)

Batching (especially without a fasttrack repository) really causes more problems than it solves.

On 03/13/2018 12:47 PM, Kevin Kofler wrote:

That is really another argument for stopping the batching, because it breaks getting frequently-changing packages out.

The stated goal of batching is to reduce the update churn.

And how is reducing the amount of bug fixes that get released an improvement?

I can understand that there are people who want to update only once a week (though I think that this is best addressed at the client side, without affecting everyone else, and leaving the users the choice of schedule), but making it harder to get out one update per batch for packages such as youtube-dl that release very often is not part of that.

I see only 3 possibilities:
a) push small chunks of ~20-40 updates every day (what we used to do),
b) push huge drops of ~150-300 updates every week (what we do now, I have 296 updates today!),
c) stop fixing bugs – no more "churn", but surely that is not what we want!

IMHO, a) is clearly the best solution.

Do we have statistics on how many of our updates actually have user reported bugs attached to them, vs. how many are just updates because upstream released something new?

I think assuming all filed updates are of the same level of relevance is a mistake. An update that fixes a user reported bug is important to push because it fixes a problem for someone. An update that is an update for the sake of updating can clearly wait.

Upstream will always continue to find and fix issues, however the presence of a fix doesn't automatically equate to a problem users are hitting. They are often corner cases or use-case specific problems. Fedora packagers should continue to incorporate upstream changes quickly, but that does not mean users always need all of those changes ASAP.

On 03/14/2018 01:10 PM, Kevin Kofler wrote:

And how is reducing the amount of bug fixes that get released an improvement?

Reducing the frequency of update pushes is not the same thing as
reducing the number of bug fixes that get released. Hyperbole is not an
effective way to persuade people towards your position.

Bug fixes will simply be collated together, which gives greater
efficiency to end users. Rather than getting two RPMs for two bug fixes,
they might get one RPM for the same two bug fixes now, as an example. In
the end the difference is not the number of bugs that were fixed, but
the number of RPMs that needed to be installed to get them (and the
amount of data that needed to be retrieved could be better too, between
the DRPM and the metadata).

@jwboyer I think we could probably trawl around Bodhi's DB to get answers to that question, though I don't have time ATM to do it now. We could basically count updates that are related to BZs vs ones that are not, but we probably should also filter out the-new-hotness BZs since those are not filed by human users.
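A rough sketch of that count, operating on update records shaped like the JSON Bodhi's web API returns (the field names, and in particular the `reporter` field used to spot the-new-hotness bugs, are assumptions for illustration — in practice that information would need a Bugzilla lookup, not just the Bodhi DB):

```python
def classify_updates(updates, bot_reporters=("upstream-release-monitoring",)):
    """Split updates into 'has a human-filed bug' vs 'version bump only'."""
    with_bug, without_bug = 0, 0
    for u in updates:
        human_bugs = [
            b for b in u.get("bugs", [])
            if b.get("reporter") not in bot_reporters  # filter the-new-hotness
        ]
        if human_bugs:
            with_bug += 1
        else:
            without_bug += 1
    return with_bug, without_bug

# Hypothetical sample data in the assumed shape:
sample = [
    {"alias": "FEDORA-2018-aaaa", "bugs": [{"bug_id": 1554032, "reporter": "someuser"}]},
    {"alias": "FEDORA-2018-bbbb", "bugs": []},
    {"alias": "FEDORA-2018-cccc",
     "bugs": [{"bug_id": 999, "reporter": "upstream-release-monitoring"}]},
]
print(classify_updates(sample))  # (1, 2)
```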

I think assuming all filed updates are of the same level of relevance is a mistake. An update that fixes a user reported bug is important to push because it fixes a problem for someone. An update that is an update for the sake of updating can clearly wait.

OTOH, it can be pretty annoying (at least when it happens repeatedly) to invest the time to properly debug an issue and do the work to write up a high-quality bug report only to find out you were chasing a bug that was already fixed. It's probably my #1 complaint about other distributions that don't update to newer upstream versions in their stable releases but also lack the maintainer resources to consequently backport bugfixes. Some people choose Fedora exactly because we usually stay close to the upstream pulse.

The continued presence of known bugs not only affects users. There are also ecosystem issues to consider, affecting the relationship to upstream projects. As an upstream developer, you'll learn to give lower priority to (or outright ignore) bug reports from users on certain "stable" or "LTS" distributions if they don't come with a reproducer against the latest release or VCS tip. Fedora users are often lucky enough to have their reports considered because the diff between latest Fedora and latest upstream tends to be small.

Slight adjustments to the update process won't destroy this, sure. It's more policies like "updates without critical user-reported bugs have to wait for the next (interim) release" or similar that I'm worried about. To be clear, I am aware that no one actually proposed this. Yet, I can't help but notice the many small steps towards that.

@bowlofeggs:

Reducing the frequency of update pushes is not the same thing as
reducing the number of bug fixes that get released. Hyperbole is not an
effective way to persuade people towards your position.
Bug fixes will simply be collated together, which gives greater
efficiency to end users. Rather than getting two RPMs for two bug fixes,
they might get one RPM for the same two bug fixes now, as an example. In
the end the difference is not the number of bugs that were fixed, but
the number of RPMs that needed to be installed to get them (and the
amount of data that needed to be retrieved could be better too, between
the DRPM and the metadata).

The problem is that this means it takes much longer for the bug fix(es) to reach users, even more than the advertised maximum of 1 week if maintainers are encouraged to batch things beyond that. (And not being able to file a new update for testing while the previous one is batched is more than "encouraging" that: it effectively forces a 2-week minimum delay.)

If I am hitting a bug that is already fixed upstream, I want it fixed now, not in 1 week, not in 2 weeks. (In fact, as a user, I would like even multiple pushes as a day, with hourly update checks in the client.)

If I am hitting a bug that is already fixed upstream, I want it fixed now, not in 1 week, not in 2 weeks. (In fact, as a user, I would like even multiple pushes as a day, with hourly update checks in the client.)

@kkofler I respect that, but I don't think that it is the case for most of our users, and additionally not for a wider user base we'd like to expand into. But, to repeat something from far above: the updates-testing channel seems perfect for people who do want that constant stream. (And I'd love for that to have multiple pushes per day.)

But, to repeat something from far above

So I have to repeat the answer (or actually the 2 answers, because this was repeated not once, but twice!), too:

This is just not true. I want tested updates as soon as they are tested. I don't want untested updates. And I expect this to be the case for anybody running Fedora on any kind of production machine, even if they want daily updates (as I do).

updates-testing by definition contains updates that are not tested yet. It is not a substitute for timely delivery of stable updates (including security updates, see https://pagure.io/fesco/issue/1820#comment-497833 ). And just because you would like more people to test updates-testing is not a valid reason to force users to be your guinea pigs.

I run with updates-testing and don't feel like a "guinea pig".

I guess I don't have any more to add that hasn't already been said. Let's see how the experiment goes.

Yesterday, when turning off my computer, I received the prompt asking me if I want to install updates. Checked yes. My computer rebooted, installed the updates, turned off. All exactly as normal.

Today, when turning on my computer, I clicked the notification to see what had been updated, and discovered it was the package cldr-emoji-animation, and no other packages. This doesn't seem like it was a useful updates push.

@catanzaro and https://bodhi.fedoraproject.org/updates/FEDORA-2018-e1cab23787 has a somewhat disappointing description: "This is an update for 33 alpha version". That doesn't tell me as a user what the update is for, except to make me wonder why we're pushing an alpha version to a stable release (especially given that https://www.unicode.org/repos/cldr/trunk/readme.html says Note: This is a pre-release candidate version of CLDR 33, intended for testing. It is not recommended for production use).

I also updated the batch yesterday and refreshed plasma-pk-updates right now.

I get ceph (I have only the subpackages librados and librbd1 installed) (1 CVE fixed by upstream point release), hplip (upstream point release with unspecified contents), samba (2 CVEs fixed by upstream point release), postgresql (I have only -libs installed) (upstream point release with a changelog link), sane-backends (fix for packaging bug rh#1554032 that breaks access to all USB scanners) and vulkan (upstream release with unspecified contents).

So there are at least 2 security updates in that set (and the Samba one looks very important to me: people were able to change other people's passwords!), and one important bugfix. The metadata had to be regenerated anyway, so why hold back that cldr-emoji-annotation update even if it is not critical?

@kkofler and @catanzaro together make a strong point against batched — batching on the server side is only effective when everybody has more or less the same package set to start with. gnome and kde desktop are a good example of two sets of packages that could have updates batched on the client side, but batching on the server side is often going to result in spurious batches that only contain a few "fringe" packages.

And today, one single update: firefox. A critical security update according to upstream (MFSA 2018-06: https://www.mozilla.org/en-US/security/advisories/mfsa2018-06/), so this was rightfully not batched. So the batching is clearly not working to reduce the frequency of metadata refreshes. It only spreads the updates much more unevenly (which makes it painful to go through the huge batched pushes and read the notes) and delays bug fixes for no good reason. And speaking of update notes, the notes for this Firefox update did not contain a link to that security advisory; I had to look it up on my own.

I would suggest doing the following:

1) have separate "batched updates", and "firehose updates" repos. This would make it possible for people who want firehose (kkofler) to have that, and at the same time we could enable batched updates repo for the general public

2) normally all updates go directly to stable and bodhi doesn't do any kind of batching: every update that goes to stable ends up in the "firehose updates" repo

3) once every week (or even once every two weeks), we take a time out, temporarily freeze all bodhi pushes, and run a set of tests on the "firehose updates" repo. it could be automated tests (openQA) and maybe someone tests them manually if they have time.

4) once tested, we do a special batched updates push that copies all updates from "firehose updates" to the "batched updates" repo and release to the general public

This should work roughly the same as 2-week Atomic composes work right now: every two weeks, the current set of rpms is tested and pushed out as an Atomic update.

At the same time, both sides of the fence should be content with this setup: firehose people get their firehose, and general public gets batched updates, while able to easily join the firehose if needed.

Security updates and other critical updates would of course skip this whole process and could go directly to both "batched updates" and "firehose updates" repos.
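The four-step proposal above could be modeled like this (class and method names are invented for illustration; this is not how Bodhi is actually implemented):

```python
class TwoRepoFlow:
    """Toy model of the proposed firehose/batched repo split."""

    def __init__(self):
        self.firehose = []   # everything that reaches stable, immediately
        self.batched = []    # what the general public sees

    def push_stable(self, update, urgent=False):
        self.firehose.append(update)      # step 2: no server-side delay
        if urgent:
            self.batched.append(update)   # security fixes skip the freeze

    def weekly_flush(self, tests_pass):
        # steps 3-4: freeze, test the whole firehose set, then copy it over
        if tests_pass:
            self.batched += [u for u in self.firehose if u not in self.batched]

flow = TwoRepoFlow()
flow.push_stable("youtube-dl-2018.03.1")
flow.push_stable("firefox-59.0.1", urgent=True)   # lands in both repos at once
flow.weekly_flush(tests_pass=True)
```

After the flush, both repos contain both updates; if the weekly tests had failed, the general public would simply have kept last week's batched set while the firehose kept flowing.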

@kkofler and @catanzaro together make a strong point against batched

Unlike @kkofler, I'm strongly in favor of batched updates, I just think it's not working well right now. It's normal to not get things exactly right the first time.

I think we have two reasonable options:

  • Give up and batch nothing
  • Go all-in: force all updates to be batched, including critical security updates, and flush weekly or every other week. Exceptions could be made by releng in the very, very rare case that we are aware of active exploitation of a vulnerability on Linux (which basically never happens).

The second option is compatible with @kalev's suggestion. I like kalev's proposal. In particular, slowing down the updates pushes to every other week should allow updates to be QA tested together as a set, rather than individually as they are right now. This has the potential to significantly improve update quality by avoiding regressions. I kinda think updates-testing could serve the purpose of the firehose repo, but I know Kevin insists that it's not good enough because the updates there have by definition not yet been tested, so having a separate repo might be a good compromise.

We should clearly define the goal as reducing total updates pushes. Reducing the average size of updates pushes is not useful. When we have any update that needs to skip batched, all updates should be immediately flushed from batched, to ensure we're not forcing users to reboot just to install one or two updates.

Finally, we should note that no other major distributions have this problem. None. This problem is unique to Fedora, because Fedora is the only major distribution that cares about pushing timely and significant software updates out to users. Delaying updates by a week or two will not compromise that. This is actually a good problem to have.

@kkofler and @catanzaro together make a strong point against batched — batching on the server side is only effective when everybody has more or less the same package set to start with. gnome and kde desktop are a good example of two sets of packages that could have updates batched on the client side, but batching on the server side is often going to result in spurious batches that only contain a few "fringe" packages.

I think this problem would occur much less often if we had fewer packages skipping batched.

When we have any update that needs to skip batched, all updates should be immediately flushed from batched, to ensure we're not forcing users to reboot just to install one or two updates.

I disagree with this part for two reasons:

a) there's a good chance that the update will miss any particular user. We have 15000 packages in Fedora and only 1000 are installed by default (I made up those numbers, but I think the order of magnitude is correct), which means there's a 1 in 15 chance that pushing out a single update to the repo hits one particular installation. Let's say a super critical qt-webkit security update is released; there's no reason to flush all of GNOME packages because of that and force all of Workstation users to update

b) we can't apply QA if a random security update always triggers flushing the batched set. For QA it needs to be predictable, so that QA can set time aside, let's say every second Tuesday to test things before pushing them out.
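Point a) is just arithmetic; with the same made-up but order-of-magnitude numbers (and the crude assumption that each update targets a uniformly random package), it works out like this:

```python
def hit_probability(installed, total, batch_size):
    """Chance that at least one update in a push touches a given installation,
    assuming each update hits a uniformly random package (a crude model)."""
    miss_one = 1 - installed / total
    return 1 - miss_one ** batch_size

# 1000 of 15000 packages installed, as in the comment above:
single = hit_probability(1000, 15000, 1)        # ~0.067, i.e. the "1 in 15"
big_batch = hit_probability(1000, 15000, 150)   # a 150-update batch almost always hits
```

This also illustrates the counterpoint: a big weekly batch is nearly certain to touch every installation, so the metadata refresh and update prompt happen for everyone regardless.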

a) there's a good chance that the update will miss any particular user. We have 15000 packages in Fedora and only 1000 are installed by default (I made up those numbers, but I think the order of magnitude is correct), which means there's a 1 in 15 chance that pushing out a single update to the repo hits one particular installation. Let's say a super critical qt-webkit security update is released; there's no reason to flush all of GNOME packages because of that and force all of Workstation users to update

  • The metadata refresh is needed for all installations, even if the update does not actually affect some users.
  • The UI issue can and should be addressed on the client side, according to the individual user's preferences. This is a client policy that should not be forced by the server.

The only valid reason to do batching on the server side rather than the client side is to avoid the metadata download, and that is not working at all, and I am convinced (also looking at the evidence collected so far) it cannot possibly work without sacrificing security.

In today's FESCo meeting, we agreed to continue the discussion on the mailing list and ticket this week. We would like to identify concrete goals and concrete plans to get us there.

Kalev, that's a good point... although the metadata refresh will still be required, which is unfortunate, the reboot can often be skipped. Perhaps I was wrong to suggest flushing all updates whenever one skips batched.

The only valid reason to do batching on the server side rather than the client side is to avoid the metadata download

It's clearly needed to allow QA of a well-defined set of updates.

@catanzaro:

Kalev, that's a good point... although the metadata refresh will still be required, which is unfortunate, the reboot can often be skipped.

Rebooting for each and every update is something only gnome-software does, and that already does its own client-side batching (updating only once a week) anyway. So this argument is also invalid.

On 03/16/2018 02:13 PM, Kevin Kofler wrote:

The only valid reason to do batching on the server side rather than the client side is to avoid the metadata download, and that is not working at all, and I am convinced (also looking at the evidence collected so far) it cannot possibly work without sacrificing security.

I don't think this claim has been established. I think you have made
your opinion clear and I wouldn't say that your opinion is invalid, but
the opinions of others are also not invalid.

I still find the higher ratio of drpm files to be compelling. Security
is not sacrificed because we still have a mechanism to flush urgent
updates (which can also include non-security bug fixes if they are truly
urgent).

As for the suggestion of adding another repo - I'm personally +1 to that
idea. I just don't know if we have the resources to do it on
infrastructure or mirrors. I believe someone also made a suggestion that
it might only be desired by a vocal minority and not by a large number
of our users, which I also think might be likely, though I also think
that would be hard to know for sure without data. I would support an
effort to make a "firehose repo" happen if we have the resources to do so.

@adrian @kevin @ausil @puiterwijk do you think it would be viable to
add a "firehose updates" repository, which would contain updates that
have reached karma requirements but haven't reached the updates
repository yet? I'd think it would not typically contain nearly as many
packages as updates or fedora, because once the batch goes out updates
should progress from it to updates. So it might not really be that much
data, if I'm thinking about it correctly?

Is there anyone who would still not be happy if we had a firehose repo,
or would that be a happy compromise if we could pull it off?

I'm not opposed to the "firehose" repo idea — I'm just unsure that it would be meaningfully different from updates-testing.

@bowlofeggs:

But the higher ratio of DRPM files is at best a side effect (by limiting the throughput due to obsoletion conflicts, which is not what batching is really about, for which people are already discussing solutions, and which is probably a corner case anyway), if it is even measurable at all. So far, the only evidence that the effect exists at all is your empirical observation on a single batch.

I see several weak or invalid arguments for batching and not a single compelling one.

@mattdm In my mind, the firehose repo would be a sort of staging area where we stage the updates that are going to be pushed to stable at the next weekly (or biweekly) stable push. Without it, we can't meaningfully QA a weekly batched set.

That's an important distinction from updates-testing because we need to separate the updates that are going to be stable from the rest of the updates-testing updates that haven't reached their karma thresholds yet.

Here's an example. We have nss version bump in updates-testing. We have a firefox version bump in updates-testing, depending on new nss, in a separate update. Firefox gets queued to stable, nss does not.

At this point, if we test all of updates-testing, our tests pass -- both nss and firefox are there. But if we go ahead with the push that only moves firefox to the stable repo, things break. That's why we need a separate staging area, a.k.a. the firehose repo.
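The nss/firefox scenario is essentially a repo-closure check run against the wrong package set. A minimal sketch (the package names come from the example above; the data model is invented for illustration):

```python
def closure_ok(queued, stable, deps):
    """True if every queued update's dependencies are satisfiable from the
    stable repo plus the rest of the queued set (the would-be firehose repo)."""
    available = set(stable) | set(queued)
    return all(d in available for pkg in queued for d in deps.get(pkg, []))

deps = {"firefox-59": ["nss-3.36"]}
stable = {"nss-3.35"}

# Testing against all of updates-testing: both builds present, so it passes...
assert closure_ok(["firefox-59", "nss-3.36"], stable, deps)
# ...but pushing firefox alone would break, which a staging repo would catch:
assert not closure_ok(["firefox-59"], stable, deps)
```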

@catanzaro

Go all-in: force all updates to be batched, including critical security updates, and flush weekly or every other week. Exceptions could be made by releng in the very, very rare case that we are aware of active exploitation of a vulnerability on Linux (which basically never happens).

How do you come to the conclusion that software vulnerabilities are basically never exploited on Linux? Get yourself a VM from one of the cloud providers and put up a service on the web (or other popular application) ports. You'll quickly get swamped with requests trying to exploit software bugs and user configuration errors. Most of them seem to probe for really ancient bugs but that's probably just lazy attackers. If this ultra-low-hanging fruit supply would dry up, they're likely to go after machines with more recent vulnerabilities.

2 updates today: kernel, python3. Batching does not work to prevent metadata updates at all.

And another small set (so again new metadata) today: ImageMagick, firefox, freerdp, perl-Time-Moment, again with urgent security fixes in at least ImageMagick and firefox.

Yet another metadata update, with one package (libfbclient2).

On 03/20/2018 04:35 PM, Kevin Kofler wrote:

Yet another metadata update, with one package (libfbclient2).

The forced batching experiment ended 1.5 weeks ago, so we are back to
packagers pushing stable willy-nilly as before, so it is not surprising
to see these metadata updates. Also, today is Tuesday so the big batch
went out.

The week we did the experiment there were few metadata updates, so the
data we have does show that batching is useful.

The updates causing the metadata refreshes are not "willy-nilly". They are important security updates. (Well, except maybe the firebird/libfbclient2 one, that one was 1 month old and could probably have waited the few hours until the batch.)

The one week of the "experiment", there were indeed no metadata refreshes, because even critical security updates did not go out that week! See https://pagure.io/fesco/issue/1820#comment-497833 . That "experiment" was a total fiasco!

Batching does not prevent metadata refreshes if you care at all about security.

That again is hyperbole. Security professionals rated the updates; no critical updates were held that week.

Those are the same "security professionals" that mark a large proportion of the security issues that get fixed in Fedora as WONTFIX for RHEL. If that becomes the standard for a security update being critical, that will degrade security support in Fedora significantly.

This week's batch includes this libvorbis update:
https://bodhi.fedoraproject.org/updates/FEDORA-2018-061bafe369

This update includes the fix for CVE-2018-5146, which was rated "critical" by both Red Hat: https://access.redhat.com/security/cve/cve-2018-5146 and Mozilla: https://www.mozilla.org/en-US/security/advisories/mfsa2018-08/

Yet, it sat in batched for around a day (unfortunately, I cannot tell you for how long exactly because there seems to be no way to get Bodhi to display precise times anymore) and did not go out with that Monday update push (which had only firebird).

So batching is delaying critical security updates.

On 03/21/2018 07:24 PM, Kevin Kofler wrote:

So batching is delaying critical security updates.

The packager should have set this update to be urgent or pushed it to
stable manually. The tool did not fail here.

The tool allowed the packager to do the wrong thing, and even made it easier than doing the right thing. If we stop doing batching, then it can never delay a security update even in the event of human error.

From test list:

There have been some suggestions that it might be useful or interesting to QA to test the set of batched updates as a set, but we thought it would be good to formally ask QA to comment on whether there is value there or not.

I'm sorry, I haven't read this whole ticket. However, with my QA hat on, I believe batching can be useful for testing, both automated and manual. For automated, it doesn't need to be in days/weeks intervals though, hours intervals would be enough. So, the long batches as currently implemented would be mainly useful for manual testing. The batch could get frozen for a few days, QA could spend some time with it and make sure nothing important got horribly broken, and then it would get pushed. It could replace or extend the current update policy as implemented in Bodhi. I'm not sure if there are substantial benefits over the current state, though. Perhaps @adamwill might have an opinion here.

Not really a strong one, no. I kinda agree with @kkofler to the extent that the process as it's been implemented so far seems a bit screwy (on the one hand, it's kinda too easy for people to just override the system and push straight to stable, destroying any value the batching has; on the other hand, it's also too easy for people not to push straight to stable in the cases where they really should do that).

Kevin's exactly right that a tool which makes it too easy to do the wrong thing is failing in a sense, though it technically implements the intended design (no blame implied). We could do this better: why not prompt anyone submitting a 'security' update to batched with a question about whether any of the vulns fixed is 'critical'? Hell, why not look at CVEs listed in Bugzilla bugs fixed by the update and just check if one of them is critical? There do seem to be paths to make this better.

My instinctual response to the question of whether this is helpful for QA is sorta 'mehhhh, maybe a bit?', but it's entirely that, I have no data or even reasoned argument, really.

Testing batched updates as a batch is definitely something we could make openQA do, if appropriate fedmsgs are sent out when a batch is prepared. The question though, I guess, is whether we'd have any process in place for the test results to be considered before the batch was actually pushed out. If not, it'd be fairly useless testing.
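The check suggested above could look roughly like this (the severity lookup is stubbed out; in reality it would query Bugzilla for the CVE bugs attached to the update, and the bug number 1550636 is made up, while 1554032 is the sane-backends bug mentioned earlier in the thread):

```python
def should_skip_batched(update, severity_of):
    """Prompt-or-bypass rule: a security update carrying any bug whose CVE is
    rated critical should go straight to stable instead of batched."""
    if update["type"] != "security":
        return False
    return any(severity_of(bug) == "critical" for bug in update["bugs"])

# Stub standing in for a real Bugzilla severity lookup:
severities = {1550636: "critical", 1554032: "medium"}
lookup = lambda bug: severities.get(bug, "unknown")

libvorbis = {"type": "security", "bugs": [1550636]}   # the CVE-2018-5146 case
sane = {"type": "bugfix", "bugs": [1554032]}
```

With that in place, Bodhi could either refuse to batch `libvorbis`-style updates outright, or at least prompt the packager before accepting the batched state.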

The question though, I guess, is whether we'd have any process in place for the test results to be considered before the batch was actually pushed out. If not, it'd be fairly useless testing.

I'd argue that it's a good first step. Otherwise, we have a chicken and egg argument ("No point in making a process to consider pulling something from a batch when they're not even tested!")

This was discussed in the last meeting (2018-03-23):
https://meetbot-raw.fedoraproject.org/teams/fesco/fesco.2018-03-23-15.00.log.html#l-399
ACTION: zbyszek to generate a wiki page with arguments and considerations, ask for feedback (zbyszek, 16:45:26)
AGREED: Ask bodhi developers to try to "suspend" or "bypass" batching in the short term, continue discussion (+5, 1, -0) (tyll, 16:47:09)

Dropping the meeting keyword until new information comes in.

Metadata Update from @sgallagh:
- Issue untagged with: meeting

5 years ago

@zbyszek Have you created the wiki page as per two comments above?

I'd like to discuss the status of this one on Friday.

Metadata Update from @bowlofeggs:
- Issue tagged with: meeting

5 years ago

Friday's meeting will be at 15:00UTC in #fedora-meeting on irc.freenode.net.

I'd like to discuss the status of this one on Friday's meeting at 15:00UTC in #fedora-meeting on irc.freenode.net.

AGREED: Leave things the way they are now, and wait for bowlofeggs or zbyszek to write up the wiki :-) (+7, 0, -0)

Metadata Update from @bowlofeggs:
- Issue untagged with: meeting

5 years ago

Metadata Update from @sgallagh:
- Issue tagged with: Stalled

5 years ago

Proposal: Drop Bodhi's batching feature.

Metadata Update from @bowlofeggs:
- Assignee reset
- Issue untagged with: stalled

5 years ago

Yeah. I think we should remove batching, and put the effort into making the metadata more efficient. I haven't followed the details, but @jdieter's zchunk change seems to be coming along nicely.

Just curious, what changed? I've just skimmed through the ticket and this proposal surprises me.

I don't think batched updates do any good; rather, they are annoying. But I haven't given it much consideration yet.

Just curious, what changed? I've just skimmed through the ticket and this proposal surprises me.
I don't think batched updates do any good; rather, they are annoying. But I haven't given it much consideration yet.

Batched updates are currently "disabled" [1]. There's a long history here, but we decided a long time ago that they couldn't be properly enabled unless some other conditions were met, which it now looks like will never happen. So the proposal is to drop this state in favor of other solutions.

I'm +1 here at this point. (I stand by my assertion that batches needed to be either mandatory or nonexistent; having them be opt-in was meaningless.)

[1] Well, they're not actually disabled, but we push them to stable daily, so they're effectively disabled.

Thanks for the explanation.

+1

On Tue, 2019-01-08 at 16:04 +0000, Miro Hrončok wrote:

Just curious, what changed?

We were unable to come to a consensus to keep the feature, so as the
Bodhi maintainer I'd now like to simplify the code and remove it. Since
it is hobbled right now, it seems to confuse people and it also makes
the Bodhi code more complex. I'm in the middle of working on a major
backwards incompatible Bodhi release, so now is a good time to remove
things.

+1 to drop the feature.

+1 simplified code is good

I see 3 pluses for Drop Bodhi's batching feature. Adding to the meeting agenda for Monday 2019-01-21.

Metadata Update from @churchyard:
- Issue tagged with: meeting

5 years ago

+1 - client side batched application of updates is better than server side batching of pushing to stable, except for the metadata-download issue - which hopefully we can address otherwise.

Part of the long-term goal was for the batching to result in QA-ed "updates packs" -- of course that never happened -- but that can't be done with only client-side batching.

I still don't see why it would be easier to QA a batch of unrelated updates rather than the updates individually. And the larger batched pushes would also make it harder to identify which update in the push actually caused the regression. If I have 10-20 updates in a daily push, of which maybe 1-3 look like they could be related to the regression, it is much easier to identify the culprit than if I have ~100 updates in a (7 times larger) weekly push, of which ~10-20 look like they could be related to the regression.

Better testing of inter-update regressions would be great. But considering
that an update could drop into the weekly batch at any time up to the last
minute, and some updates bypass testing, I'm not sure it (even if enabled)
would have actually improved the QA situation much. I could imagine some
non-bypassable 6-hour delay of the daily compose for automated tests, but
that's different from what's being dropped here, which at the moment is just
a source of packager confusion.


There are 5 explicit pluses, and I tend to guess that @bowlofeggs is in favor of his own proposal.

No minuses, proposal is 18 days old. Consider the proposal APPROVED (+5, 0, 0).

Metadata Update from @churchyard:
- Issue untagged with: meeting
- Issue tagged with: pending announcement

5 years ago

Metadata Update from @churchyard:
- Issue untagged with: pending announcement
- Issue close_status updated to: Accepted
- Issue status updated to: Closed (was: Open)

5 years ago

This upstream issue tracks the removal of batching from Bodhi: https://github.com/fedora-infra/bodhi/pull/2981

I think that this is the issue you actually wanted to link to:
https://github.com/fedora-infra/bodhi/issues/2977
