#9066 MBS is unreasonably slow and never completes module builds
Closed: Fixed 3 years ago by pingou. Opened 3 years ago by mbooth.

I filed a ticket here: https://pagure.io/fm-orchestrator/issue/1640

But I figured it's probably related to the datacentre move and might be worth reporting here instead.

It used to take only a couple of hours to build my tycho module, even under heavy load.

My last builds were going for nearly 24 hours before failing:

https://release-engineering.github.io/mbs-ui/module/9106
https://release-engineering.github.io/mbs-ui/module/9105

My current builds are taking hours between "ranks" -- the component RPM builds finish quickly in koji and then it takes forever before MBS recognises them and starts the next rank of component RPM builds:

https://release-engineering.github.io/mbs-ui/module/9119
https://release-engineering.github.io/mbs-ui/module/9120

It's as though MBS is not getting messages about tasks completing in koji and waits for an hour-long timeout to elapse before checking and realising it can continue with the next part.

Any idea what is going on here?


Metadata Update from @mohanboddu:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: groomed, medium-gain, medium-trouble

3 years ago

So, this is really a duplicate of: https://pagure.io/fedora-infrastructure/issue/8922

You can see from looking at the tag history: ( koji list-history --tag=module-tycho-1.6-3220200623140302-9686cf87 )

Tue Jun 23 09:36:22 2020 jetty-9.4.30-1.v20200611.module_f32+9119+9ade36f5 tagged into module-tycho-1.6-3220200623140302-9686cf87 by mbs/mbs.fedoraproject.org [still active]
Tue Jun 23 11:36:33 2020 felix-gogo-runtime-1.1.0-5.module_f32+9119+9ade36f5 tagged into module-tycho-1.6-3220200623140302-9686cf87 by mbs/mbs.fedoraproject.org [still active]
...

So, here it finished the initial builds that didn't depend on any others. Then, it waited for the repo to update (for kojira to notice that the buildroot needs updating and regenerate it), that happened and it was able to build 'felix-gogo-runtime' (which I guess depends on one or more of the just built packages). That took about 2 hours.

So, let's track this over at that other ticket and fixing it for normal buildroots should hopefully fix it for mbs ones too. :)

Metadata Update from @kevin:
- Issue close_status updated to: Duplicate
- Issue status updated to: Closed (was: Open)

3 years ago

Alright, thanks for the info @kevin :-)

@kevin in the other ticket people say they are triggering buildroot regeneration manually -- is there anything like that I can do for MBS? My MBS builds have been in progress for more than 24 hours now, with the final RPM completed nearly 8 hours ago....

Hop over to #fedora-releng and we can see about manually getting things moving for you... the delay definitely shouldn't be 24 hours... it's more like 2 hours at most, so I suspect something went wrong here?

ok, turns out this is not due to buildroot regen, something else is broken.

The slowness might be related, but then jobs never finish:

[Build #9118] 389-directory-server-testing-820200622113104-9edba152 is in "build" state.
Koji tag: module-389-directory-server-testing-820200622113104-9edba152
Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9118
Components: [100%]: 0 in building, 2 done, 0 failed
No building task.

The logs on backend show:

Jun 25 15:46:48 mbs-backend01.iad2.fedoraproject.org fedmsg-hub[11113]: [2020-06-25 15:46:48][MBS.scheduler.producer    INFO]     * <ModuleBuild 389-directory-server, id=9118, stream=testing, version=820200622113104, scratch=False, state 'build', batch 2, state_reason None>
Jun 25 15:46:50 mbs-backend01.iad2.fedoraproject.org fedmsg-hub[11113]: [2020-06-25 15:46:50][MBS.builder.KojiModuleBuilder    INFO] <KojiModuleBuilder module: 389-directory-server, tag: module-389-directory-server-testing-820200622113104-9edba152> connecting buildroot.
Jun 25 15:46:50 mbs-backend01.iad2.fedoraproject.org fedmsg-hub[11113]: [2020-06-25 15:46:50][MBS.builder.KojiModuleBuilder    INFO] <KojiModuleBuilder module: 389-directory-server, tag: module-389-directory-server-testing-820200622113104-9edba152> buildroot sucessfully connected.
Jun 25 15:46:50 mbs-backend01.iad2.fedoraproject.org fedmsg-hub[11113]: [2020-06-25 15:46:50][MBS.scheduler.producer    INFO]   Processing the paused module build <ModuleBuild 389-directory-server, id=9118, stream=testing, version=820200622113104, scratch=False, state 'build', batch 2, state_reason None>

I have no idea what 'paused' means there, but it's not completed in several days.

I don't know mbs at all, so we are going to need some help here.

@mikem @ralph @sgallagh

Do any of you know anyone who could help us debug things here?

Metadata Update from @kevin:
- Issue status updated to: Open (was: Closed)

3 years ago

I wonder if @jkaluza and @mprahl are still around. They were extremely helpful for my problems with MBS in the past

I do see some modules completing, but not others and I have no idea what the difference is. ;(

I'll try and raise the bat-signal level if I can here...

Another ticket was found with rpm2flatpack in 9134.

I remember jkaluza complained that the messages MBS subscribes to (about a build finishing) sometimes get lost. In that case MBS resorts to polling, and the polling interval is long so as not to hammer Koji; as a result, progressing a module build becomes slow. Message loss is especially common when restarting MBS. There is/was also another issue with a memory and file descriptor leak in MBS, leading to system overload and failures in the messaging client, which makes the system even more unresponsive.

The build [1] I have "running" has time_modified "2020-07-03T14:47:10Z" and it's still in the "build" state. I doubt the poll has such a large delay, but I do not know MBS at all, so anything is possible.

[1] https://mbs.fedoraproject.org/module-build-service/2/module-builds/9137

The period is not that long. It's probably the resource-exhaustion issue. Note that all your RPM builds finished successfully and MBS knows about it (state 1), but MBS does not advance to the next iteration, in your case switching the build into the done state. All my modular builds are in that state now too.

Infra people should restart the server.

I have restarted the server. It makes no difference.

The fedmsg-hub resource issue was fixed.

I remember that a stuck MBS build must be canceled and submitted again from a new module commit (to prevent it from resuming the build).

Another idea is that it's a problem on the Koji side. I can see 40 concurrent newRepo tasks running there. A third of them are waiting on a builder in the queue. Could it be that MBS is just waiting for the builds to appear in the modular buildroots, and that's why it's so slow?

Another tip: I twice experienced https://koji.fedoraproject.org/koji/buildinfo?buildID=1542153:

GenericError: hash changed for external rpm: libuuid-2.35.2-2.fc33.x86_64@f33-build (18114b5a272a18016f74b3a0712061 -> 18114b5a272a1801dbb74b3a07c4a061)

I've never seen that error. util-linux/libuuid has not been rebuilt since May.

I remember that a stuck MBS build must be canceled and submitted again from a new module commit (to prevent it from resuming the build).

yeah, I recall something about that too... but not sure.

Another idea is that it's a problem on the Koji side. I can see 40 concurrent newRepo tasks running there. A third of them are waiting on a builder in the queue. Could it be that MBS is just waiting for the builds to appear in the modular buildroots, and that's why it's so slow?

There are so many newRepo tasks because we now have sidetags... but the sidetag newRepos are really quick... 2-3 min usually, so I bet if you looked a few minutes later all those would be done.
Repos do seem to get regenerated pretty timely... I do see mbs waiting for them and then going on...

Another tip: I twice experienced https://koji.fedoraproject.org/koji/buildinfo?buildID=1542153:
GenericError: hash changed for external rpm: libuuid-2.35.2-2.fc33.x86_64@f33-build (18114b5a272a18016f74b3a0712061 -> 18114b5a272a1801dbb74b3a07c4a061)

Weird. That's a new one to me too...

I've never seen that error. util-linux/libuuid has not been rebuilt since May.

Yeah. Seems like a koji bug there for sure, but not sure if it's related to this or not...

A lame question, to make it clear to me: am I supposed to cancel my build that has been running for 12 days and re-run it, hoping it'll work this time? I can do it, I've no problem with it; I just don't know whether you'd like to keep that particular build for any debugging, log grabbing and such.

Well, right now it's unclear to me what to do to debug it, so I would say yes, try and cancel and resubmit a new one.

Tagging @merlinm to see if he has any help to offer here.

Sorry, I can't be of much help here. But I'd also suspect dropped messages, or database updates that never made it to storage.

Okay, so I did so. Just restarting reused the same build ID, which did not help (the fedpkg flatpak-build failed with an error: https://koji.fedoraproject.org/koji/taskinfo?taskID=47334650 - a very odd error, from my point of view). Then I made an empty commit into the flatpaks/evolution git repo and started a new module build for it ( https://mbs.fedoraproject.org/module-build-service/2/module-builds/9218 ). It's currently "building", after more than 2 hours of visually doing nothing. I thought it was waiting for the only build shown there ( https://koji.fedoraproject.org/koji/taskinfo?taskID=47335734 ), but that finished within 8 minutes of starting, more than two and a half hours ago, so I'd have expected the module build to advance to the next stage hours ago; for some reason it did not.

That being said, is it doing anything now? Is it waiting for something I don't know about, or is it stuck on something, or...? Hmm, can this be related to the infrastructure changes (the data center move)? It's still ongoing, right?

That being said, is it doing anything now? Is it waiting for something I don't know about, or is it stuck on something, or...? Hmm, can this be related to the infrastructure changes (the data center move)? It's still ongoing, right?

The odd error is likely due to the proxy outage we just had... lots of things were down/in an odd state.

Its logs are less than useful, so I actually don't know what it's waiting for...

It's related to the datacenter move in that it worked before the move, but we have long since redeployed it and it's fully up according to our ansible playbooks/monitoring. So, yes, it's likely due to a deployment or env change, but I have no idea what that could be at this point.

This is the error I've got now:

GenericError: rpm's header can't be extracted: 5 (rpm error: error reading package header)

in https://koji.fedoraproject.org/koji/taskinfo?taskID=47346285 of https://mbs.fedoraproject.org/module-build-service/2/module-builds/9218 . I'll re-run the build (which, I guess, will reuse the MBS ID).

For the record, the module build finally finished, though the flatpak-build failed. I opened a new bug report for it:
https://pagure.io/fedora-infrastructure/issue/9150

I remember that a stuck MBS build must be canceled and submitted again from a new module commit (to prevent it from resuming the build).

yeah, I recall something about that too... but not sure.

This doesn't seem to help. I've tried cancelling and resubmitting a stuck task 10+ times here (with various wait times in between, from a minute to 12 hours), but nothing really changes. The tasks end up at 100% done and in the "build" state, without advancing to "done".

I'm also unable to finish my perl-bootstrap:5.30 build. I tried resubmitting the build many times last week, but no luck. The builds also become stuck just before transitioning to "done" for more than half a day. A new build today is awfully slow. There are huge delays between an RPM build finishing and the component being recognized as state "1", and between one component build finishing and the next component build being submitted. Example:

$ fedpkg module-build-watch $(seq 9250 9254)
[Build #9250] perl-bootstrap-5.30-3020200722080415-a5b0195c is in "failed" state.
  Koji tag: module-perl-bootstrap-5.30-3020200722080415-a5b0195c
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9250
  Components: 0 done, 123 failed
  Reason: Failed to build artifact module-build-macros in Koji
[Build #9251] perl-bootstrap-5.30-3220200722080415-43bbeeef is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3220200722080415-43bbeeef
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9251
  Components: [2%]: 0 in building, 3 done, 0 failed
    No building task.
[Build #9252] perl-bootstrap-5.30-3120200722080415-f636be4b is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3120200722080415-f636be4b
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9252
  Components: [2%]: 0 in building, 3 done, 0 failed
    No building task.
[Build #9253] perl-bootstrap-5.30-3320200722080415-601d93de is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3320200722080415-601d93de
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9253
  Components: [2%]: 0 in building, 3 done, 0 failed
    No building task.
[Build #9254] perl-bootstrap-5.30-820200722080415-9edba152 is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-820200722080415-9edba152
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9254
  Components: [1%]: 1 in building, 2 done, 0 failed
    Building:
      - perl-Fedora-VSP
        https://koji.fedoraproject.org/koji/taskinfo?taskID=47626827

A wild guess: Could it be that firewall rules need updating after the data center move to avoid blocking notification messages? MBS seems to be able to talk to koji but perhaps something is blocking fedmsg/fedora-messaging (I don't know which it uses)?

Another idea: Is MBS configuration updated for new fedmsg/fedora-messaging IPs/hostnames?

Firewall rules are set up.. however there seems to be a problem with the ports being listened on on the mbs server. The configs say to use 3000-3003 but only ports 3000/3001 are being used. I am not sure if we have lost messages because of that. [This change does not look to be related to the move.. ]
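(For anyone wanting to reproduce this check without reading the fedmsg configs: a quick sketch that probes which ports in the expected range accept a TCP connection. The 3000-3003 range is from the comment above; running it against localhost on the MBS backend itself is an assumption.)

```python
import socket

# Ports the fedmsg config is expected to bind (per the comment above).
EXPECTED_PORTS = range(3000, 3004)  # 3000-3003 inclusive

def is_listening(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in EXPECTED_PORTS:
        state = "listening" if is_listening("localhost", port) else "NOT listening"
        print(f"port {port}: {state}")
```

On the box in question this would have shown 3002/3003 as not listening, matching the observation above.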

I tried cancelling one of the module builds I'm running (it hasn't finished in a few days, being stuck at 100%) and today I'm reproducibly getting error 500 when doing that:

$ fedpkg module-build-cancel 9243
Cancelling module build #9243...
Could not execute module_build_cancel: The cancellation of module build #9243 failed with:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>

For the third time I have reached the state where all components are built but the module is still building. This time with rebuild_strategy=all, to rule out component reuse as a possible cause:

[Build #9255] perl-bootstrap-5.30-3020200723132827-a5b0195c is in "failed" state.
  Koji tag: module-perl-bootstrap-5.30-3020200723132827-a5b0195c
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9255
  Components: 0 done, 123 failed
  Reason: Failed to build artifact module-build-macros in Koji
[Build #9256] perl-bootstrap-5.30-3220200723132827-43bbeeef is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3220200723132827-43bbeeef
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9256
  Components: [100%]: 0 in building, 123 done, 0 failed
    No building task.
[Build #9257] perl-bootstrap-5.30-3120200723132827-f636be4b is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3120200723132827-f636be4b
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9257
  Components: [100%]: 0 in building, 123 done, 0 failed
    No building task.
[Build #9258] perl-bootstrap-5.30-3320200723132827-601d93de is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3320200723132827-601d93de
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9258
  Components: [100%]: 0 in building, 123 done, 0 failed
    No building task.
[Build #9259] perl-bootstrap-5.30-820200723132827-9edba152 is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-820200723132827-9edba152
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9259
  Components: [100%]: 0 in building, 123 done, 0 failed
    No building task.

I will keep it "building" over the weekend. Then I will officially declare that MBS is broken without proper maintenance, because I have been unable to finish a single module build in two weeks.

I tried cancelling one of the module builds I'm running (it hasn't finished in a few days, being stuck at 100%) and today I'm reproducibly getting error 500 when doing that:
$ fedpkg module-build-cancel 9243

MBS thinks that there was no such build: https://mbs.engineering.redhat.com/module-build-service/2/module-builds/9243. That's of course nonsense, because I'm running 9255 now.

I tried cancelling one of the module builds I'm running (it hasn't finished in a few days, being stuck at 100%) and today I'm reproducibly getting error 500 when doing that:
$ fedpkg module-build-cancel 9243

MBS thinks that there was no such build: https://mbs.engineering.redhat.com/module-build-service/2/module-builds/9243. That's of course nonsense, because I'm running 9255 now.

mbs.engineering.redhat.com is the internal service; https://mbs.fedoraproject.org/module-build-service/2/module-builds/9243 is aware of the build

OK, I have changed some firewall rules to see if that was causing problems with fedmsg communication between mbs-backend and busgateway. I do not know how messages are used or sent in MBS, so this is a shot in the dark.. (pdc and mbs were able to talk with each other before).

Could someone test to see if this fixes it. After this, I do not have any other guesses to go on here.

@smooge, I can't test it, I just get error 500 from MBS when submitting builds. Something must have broken earlier today/yesterday, unrelated to the firewall rules change. Can you restart the whole thing, please?

Everything I know of that can be restarted for MBS has been restarted.. can you see if it works better now?

Thanks. No, that didn't help, I'm still getting error 500:

$ fedpkg module-build -w
Submitting the module build...
Could not execute module_build: The build failed with an unexpected error: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>

OK this new one is on me
[Fri Jul 24 18:02:35.084303 2020] [:error] [pid 11128] /usr/lib/python2.7/site-packages/fedmsg/core.py:159: UserWarning: fedmsg is not configured to send any zmq messages for name 'mbs.mbs-frontend01'
[Fri Jul 24 18:02:35.084316 2020] [:error] [pid 11128] "for name %r" % config.get("name", None))
[Fri Jul 24 18:02:36.088890 2020] [:error] [pid 11128] ERROR:module_build_service:Exception on /module-build-service/2/module-builds/ [POST]

OK, I have updated the fedmsg configs and restarted httpd on mbs-frontend. I do not know what to change on the backend, but I restarted fedmsg-hub, so I think it is updated also. If you can try one more time, let me know.

Great, thanks!

OK, the builds are working again (I can kick off new ones and cancel existing ones without getting error 500), but so far the one that I cancelled and restarted half an hour ago is still at 100% and hasn't completed. I also started a new build that's chugging along normally so far (33%).

I'll keep an eye on them and report back tomorrow; so far I'd say the stuck-at-100% issue is not fixed, but I haven't waited very long here.

My perl-bootstrap builds have been stuck at 100% built for two days. I will resubmit them, but my hopes are low.

Resubmitting perl-bootstrap with a resume produced an interesting bug:

Submitting the module build...
[Build #9257] perl-bootstrap-5.30-3120200723132827-f636be4b is in "wait" state.
  Koji tag: module-perl-bootstrap-5.30-3120200723132827-f636be4b
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9257
  Components: 123
[...]
[Build #9257] perl-bootstrap-5.30-3120200723132827-f636be4b is in "failed" state.
  Components: 123 done, 0 failed
  Reason: 'NoneType' object has no attribute 'utctimetuple'
[...]

The build was originally 100% built:

[Build #9257] perl-bootstrap-5.30-3120200723132827-f636be4b is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3120200723132827-f636be4b
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9257
  Components: [100%]: 0 in building, 123 done, 0 failed
    No building task.

I only cancelled it and resubmitted.

I will cancel them and do a full rebuild without reuse again.

A new perl-bootstrap build ended like the previous ones: 100% built, but the modules are still in the "build" state:

[Build #9288] perl-bootstrap-5.30-3220200727104727-43bbeeef is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3220200727104727-43bbeeef
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9288
  Components: [100%]: 0 in building, 123 done, 0 failed
    No building task.
[Build #9289] perl-bootstrap-5.30-3120200727104727-f636be4b is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3120200727104727-f636be4b
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9289
  Components: [100%]: 0 in building, 123 done, 0 failed
    No building task.
[Build #9290] perl-bootstrap-5.30-3020200727104727-a5b0195c is in "failed" state.
  Koji tag: module-perl-bootstrap-5.30-3020200727104727-a5b0195c
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9290
  Components: 0 done, 123 failed
  Reason: Failed to build artifact module-build-macros in Koji
[Build #9291] perl-bootstrap-5.30-820200727104727-9edba152 is in "failed" state.
  Koji tag: module-perl-bootstrap-5.30-820200727104727-9edba152
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9291
  Components: 1 done, 122 failed
  Reason: Component(s) perl failed to build.
[Build #9292] perl-bootstrap-5.30-3320200727104727-601d93de is in "build" state.
  Koji tag: module-perl-bootstrap-5.30-3320200727104727-601d93de
  Link: https://mbs.fedoraproject.org/module-build-service/2/module-builds/9292
  Components: [100%]: 0 in building, 123 done, 0 failed
    No building task.

I declare that MBS is doomed.

If you look at:

https://mbs.fedoraproject.org/module-build-service/1/module-builds/9243
https://koji.fedoraproject.org/koji/buildinfo?buildID=1546636

the import into Koji worked, but the module is still in the "build" state. The import into Koji is supposed to be the very last thing before updating the state to "done".

So something like:
- the koji import succeeded but encoding the result into XML failed
- the database transaction in MBS timed out before Koji responded

The MBS logs would tell and need to be examined.
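(As an aside, the state visible in those API responses can be summarized without server access. A minimal sketch; the field names `id`, `name`, `stream`, `state_name`, and `state_reason` are what the v2 API responses linked in this thread contain, but treat them as an assumption if your API version differs, and the sample values below are made up for illustration:)

```python
def summarize(build):
    """Return a one-line summary of an MBS module-build JSON payload."""
    return "#{id} {name}-{stream}: {state_name} ({reason})".format(
        id=build["id"],
        name=build["name"],
        stream=build["stream"],
        state_name=build["state_name"],
        reason=build.get("state_reason") or "no reason given",
    )

# Sample payload shaped like a stuck build (hypothetical values):
sample = {
    "id": 9243,
    "name": "perl-bootstrap",
    "stream": "5.30",
    "state": 2,            # numeric code corresponding to the "build" state
    "state_name": "build",
    "state_reason": None,
}
print(summarize(sample))
# In practice you would fetch the payload first, e.g.:
#   curl https://mbs.fedoraproject.org/module-build-service/2/module-builds/9243
```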

I have seen MBS get into this sort of confused state several times and have not yet been able to debug it. I've looked at logs before and not found much to go on, but I can have a look at the Fedora logs anyway if someone can point me at them.

My running theory is that mbs is missing out on some messages somehow and failing to catch up.

When we've dealt with this internally, we've needed to restart the service and often resubmit some of the stuck builds.

@mikem let's meet up on IRC on Thursday afternoon in #fedora-admin and debug this. Drop a calendar invite on my calendar and we can hopefully make some progress.

I've checked the logs available to me. It is very slow because it does not receive messages from Koji. It then falls back to a polling strategy, where it checks the state of the module build and all its component builds in Koji every 10 minutes. This is much slower than just receiving a message and handling it right after the event happens in Koji.

The polling code is also most likely not covering 100% of the messages and cases. It was meant only to handle the occasional missing message or a temporary infrastructure outage; it is not meant as a production way to run MBS.
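(The behaviour described above -- handle bus messages when they arrive, fall back to a slow periodic poll when they don't -- can be sketched roughly like this. This is a simplified illustration, not MBS's actual code; only the 10-minute interval comes from the comment above, and all the callback names are hypothetical:)

```python
import time

POLL_INTERVAL = 10 * 60  # seconds; the slow fallback period mentioned above

def run_loop(get_message, poll_koji, handle_event, stop, now=time.monotonic):
    """Handle bus messages as they arrive; if none have arrived for a
    full poll interval, ask Koji for the current state directly."""
    last_poll = now()
    while not stop():
        msg = get_message(timeout=1.0)    # None when no message arrived in time
        if msg is not None:
            handle_event(msg)             # fast path: react to the event itself
        elif now() - last_poll >= POLL_INTERVAL:
            for event in poll_koji():     # slow path: recover missed events
                handle_event(event)
            last_poll = now()
```

With a broken bus connection, `get_message` always returns None, so every state change waits for the next poll tick -- which matches the multi-hour delays between "ranks" reported at the top of this ticket.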

@kevin, I can help debugging it, but I would need sudo on mbs-backend01.iad2 to connect to the database and run some queries to see what the state of those modules is in the MBS db, and also to check journalctl. Without that, we only have the /var/tmp/build-*.log files, which do not contain all the relevant data in this case.

Note that at the same time it receives messages from the MBS frontend successfully, so it is probably connected to the bus.

@smooge, @kevin, I think this is the root problem:

[jkaluza@mbs-backend01 fedmsg.d][PROD-IAD2]$ nc -w 10 busgateway01.iad2.fedoraproject.org 3999
Ncat: Connection timed out.

Yeah.. no one told me that 3999 was a port that needed to be allowed through the firewall. I was supposed to allow 3000-3010 through. I will put that in a ticket to get it opened up.

[Edited to add: does anyone know what kind of cheese pairs well with that whine? geez stephen ]

@smooge, can you tell me once you have configured the firewall, so I can check it?

@smooge has requested the firewall update, but it is not controlled by fedora-infra, so it may take a while.

jkaluza, the firewall change has been made. Sorry for the delay, and really sorry if this was the whole problem.

@smooge, thanks. I've just checked that the port is open and fedmsg-tail sees the right messages. I will try real module builds tomorrow.

I also see the Koji messages in the MBS logs now. The MBS polling fallback is no longer used and real messages are being handled. This should fix this ticket. I also see the latest module builds in the "ready" state.

@ppisar, could you please test building perl-bootstrap module once more and verify that it works as expected?

My builds are now working as expected. Thanks!

I spawned builds 9305..9309. The F31 build (9307) succeeded and is done now. The other builds fail for various reasons in the Koji builders (F33 and EL8 time out on s390x, F32 has a broken annobin, F30 is missing repositories).

So it seems that this MBS issue with builds stuck at 100% is resolved.

Awesome, thanks for the follow up @kalev and @ppisar

Thanks for the help debugging the issue @jkaluza !!

I'm going to close this as Fixed then. Let's re-open or open a new one if needed.

Metadata Update from @pingou:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

3 years ago
