The server under domain d2uk5hbyrobdzx.cloudfront.net currently returns a lot of 502 errors.
I originally reported it on GitHub: https://github.com/fedora-silverblue/issue-tracker/issues/631#issuecomment-2688549258
What I tried to do was to roll back to a previous commit on Silverblue.
Seeing the same when trying to update or rebase. Same server as you, d2uk5hbyrobdzx.cloudfront.net; tested from Germany. This might be a faulty load balancer, so it could be region specific.
Sorry you are hitting this.
We had a number of issues with the origin server(s) that this pulls from yesterday.
To clarify, you are seeing this now today?
Yes, that endpoint is an Amazon CloudFront distribution, so the server will vary according to where you are (there are a number of them out on the 'edge').
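If it is region specific, the CloudFront response headers show which edge location answered, so a quick check along these lines might help pin that down (a sketch, assuming the headers are present in the response):
curl -sI https://d2uk5hbyrobdzx.cloudfront.net/ | grep -iE 'x-cache|x-amz-cf-pop'
# x-amz-cf-pop names the edge location that answered; x-cache says whether it was a hit, miss, or error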
I am running into this same issue while trying to update Kinoite 41
I'm coming from here: https://pagure.io/fedora-infrastructure/issue/11973. I currently see a lot of 500 errors in the Fedora CoreOS Zincati update process. Also from Germany (Hetzner).
Same issue here in Norway on Silverblue
error: While pulling fedora/40/x86_64/silverblue: While fetching https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/NK/zRroGmpJGyBol0ksLZshGjGRCKuFQmJzolT948r3k.index: Server returned HTTP 502
Same here in the US on Silverblue:
While pulling fedora/41/x86_64/silverblue: While fetching https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/8w/UFyT8ra1CS8MOuaTU8dVVs7hBTi2e6dqXpUST916k.index: Server returned HTTP 502
Same here in NL when running rpm-ostree update:
error: While pulling fedora/41/x86_64/sericea: While fetching https://d2uk5hbyrobdzx.cloudfront.net/objects/db/962e0e2b66ad44e4ab4f009e4e316f594e40022281653046e181294872e214.filez: Server returned HTTP 502
Tried changing the URL in /etc/ostree/remotes.d/fedora.conf to [this one](https://d2uk5hbyrobdzx.cloudfront.net), obtained from the mirror list; it gives 502 as well. It's picked up only when I comment out the contenturl=mirrorlist=... line.
Are there any other mirrors available? I have a feeling it could be a workaround.
PS. pls forgive obvious and trivial mistakes, total newbie here, just finished installing fedora sway atomic.
I had this issue yesterday as well.
Same here in Lithuania.
error: While pulling fedora/41/x86_64/kinoite: While fetching https://d2uk5hbyrobdzx.cloudfront.net/objects/16/3be1d7f590285b58ed9dbb45793156d8c71fee729fc502d6ad0e2488c49416.dirtree: Server returned HTTP 502
Managed to update yesterday after a few tries.
In my case the error is 504 (Russia):
error: While pulling fedora/40/x86_64/silverblue: While fetching https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/NK/zRroGmpJGyBol0ksLZshGjGRCKuFQmJzolT948r3k.index: Server returned HTTP 504
Is there any workaround yet?
I tried several different locations via a VPN, but in my case it didn't help.
I've just issued an invalidation on cloudfront (basically telling it to refresh everything).
Hopefully this will help out/fix the issue. If folks could give it a little while to refresh, then check again and let us know if it's working then that would be great.
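(For anyone curious, an invalidation like this is typically issued with the AWS CLI, roughly as below; a sketch only, and the distribution ID here is a placeholder, not Fedora's real one:)
aws cloudfront create-invalidation --distribution-id EXAMPLEDISTID --paths "/*"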
I'm still getting:
error: While pulling fedora/stable/x86_64/iot: While fetching https://d2ju0wfl996cmc.cloudfront.net/objects/ff/eb7d3877cd438197a4171e69a747f20ced76a841758adea239f407feddf5d9.dirtree: Server returned HTTP 502
So, huh. I am not sure what's going on there, as that file doesn't exist on our source... so of course it's not in the cache either. ;(
I'm seeing similar failures, and have for several days. It's interesting to note that the URL which fails seems to change over time: in some attempts, rpm-ostree gets deeper into the update process before failing out.
Most of the time, my failures are an rpm-ostree transaction failure with a 502 on https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/h5/8hWxkT1VqT+xvTyByyacZUAQ+I8PhN1wkAY1KRrNI.index, which I think is early in the sync process. Other times it gets past that and fails on a .dirtree object, similar to other reports above.
Additionally, fetching the failing URLs with curl shows that Cloudfront is serving a more detailed error in the response body. Stripping out the HTML noise, it says:
CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection.
We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
Generated by cloudfront (CloudFront) Request ID: A9fsw31_6mZRaN9UfqsQA590vIDxvx8fCVYVYMdzrH0vckqrX8JY4A==
So, maybe something wrong at the origin server(s) that's preventing cache fill?...
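For anyone who wants to reproduce that, a sketch of the kind of curl invocation described above (any of the failing URLs can be substituted):
curl -si 'https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/h5/8hWxkT1VqT+xvTyByyacZUAQ+I8PhN1wkAY1KRrNI.index' | head -n 20
# -i includes the status line and headers such as x-cache ahead of the HTML error body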
Having this same issue from Southern California when attempting to update Kinoite.
Been having this same issue for a few days now in Berlin, Germany.
Are all of the failures in the same tree? i.e., objects or delta-indexes? Or is it all over the place, seemingly at random?
So, all the failures I see are actually 404s on the backend, i.e., those files don't exist there.
So, I am not sure what's going on here. It doesn't seem fully like the cache, but perhaps I am missing something. ;(
@siosm might you have any idea what's going on here?
@jlebon @dustymabe Is coreos seeing this too?
@pwhalen is iot seeing this too?
I experience the same issues with IoT as with Silverblue.
Same issue here upgrading from a fresh install. Looking into it, the file requested is in /delta-indexes/jz/ which is completely empty on the cloudfront link when opened from a browser.
I was having the same problems yesterday updating Silverblue 41 and it was all over the place and at random. It resolved today around 17:00 CEST for me in The Netherlands (Amazon West-Europe).
Failures are seemingly all over the place here. The most common failure is on delta-indexes fetches, but occasionally rpm-ostree works for longer and successfully fetches a few dozen URLs before failing in the same way on a fetch in the objects namespace.
@kevin It's odd that cloudfront's error reports that it's unable to connect to the backend, if you're seeing 404s. Is the outbound request rate from CF similar to the inbound request rate on the origin? I'm wondering if, say, the origin is overloaded and dropping some connections, which would show up as a big delta between the request rate out from CF and the rate in at the origin server.
What's the object lifetime for ostree updates on the origin? Do they get garbage collected, and at what rate? I'm wondering if CF is caching a top-level URL too aggressively, and maybe keeps serving it to clients even though the origin has moved on to a new tree and garbage collected the old one? That might explain the mysterious 404s, and also the intermittent behavior: if you get lucky you hit some CF caches that happened to capture part/all of a URL tree while it was valid, and then they continue to serve that.
(disclaimer: I know nothing about how fedora infrastructure works, I just troubleshoot this kind of serving system as my day job, so I'm pattern-matching on possible causes that explain the observed symptoms)
The 404 I mention is on the local side. The file doesn't exist there at all. So, it tells cloudfront '404' and cloudfront tells the client '502'.
It's not a matter of load, it just simply doesn't have the file, so it can't serve it. :(
Why it's telling clients to fetch that file I don't know; we will need to get folks that know ostree to look. :(
Also having this issue when trying to update from fresh install here in sweden.
Interestingly, it started working for me again yesterday evening.
From Poland: having the same issue for 3 days now on a fresh Kinoite 41 install.
The only workaround that worked for me was to rebase to a ublue image (you'd have to install their ublue-os-signing package first -- see here: https://discussion.fedoraproject.org/t/atomic-desktop-update-error-server-returned-http-502/146178/25).
Not ideal, but at least works for my scenario.
I think it's a problem with the edge because when I used a VPN to change location to South America, it worked fine
I had the same problem for several days and was going to post the steps that worked for me, but somehow the problem got solved while I was previewing this comment.
Sadly, this did not work for me. I'm in Northern California.
Same in Oregon :(
Same, Germany.
HTTP error from remote fedora for <https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/8Z/Sf6OyPd3wEkGJW62HAvFYcGnxMRxerdk03uaZkM3w.index>: While fetching https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/8Z/Sf6OyPd3wEkGJW62HAvFYcGnxMRxerdk03uaZkM3w.index: Server returned HTTP 502
I can confirm that this is also happening in Czechia:
error: While pulling fedora/41/x86_64/kinoite: While fetching https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/hX/074AAUg00U9XS0UpcduOj_YOdyy3UcHwts1Ki9fmc.index: Server returned HTTP 502
Metadata Update from @zlopez: - Issue priority set to: Waiting on Assignee (was: Needs Review) - Issue tagged with: Needs investigation, high-gain
@arrfab helped me with the investigation today and we found out that the files in question are missing in the ostree repo even on kojipkgs.
For example https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/8Z/Sf6OyPd3wEkGJW62HAvFYcGnxMRxerdk03uaZkM3w.index is returning 502 and https://kojipkgs.fedoraproject.org/ostree/repo/delta-indexes/8Z/Sf6OyPd3wEkGJW62HAvFYcGnxMRxerdk03uaZkM3w.index is returning 404.
I reached out to #atomic-desktops:fedoraproject.org on Matrix, hopefully I will get some answers there as well. But this looks like a partial push of some update that is missing required files.
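A quick way to reproduce the comparison above (a sketch; the status codes noted in the comments are what the two reportedly returned at the time):
curl -sI 'https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/8Z/Sf6OyPd3wEkGJW62HAvFYcGnxMRxerdk03uaZkM3w.index' | head -n 1   # reportedly 502
curl -sI 'https://kojipkgs.fedoraproject.org/ostree/repo/delta-indexes/8Z/Sf6OyPd3wEkGJW62HAvFYcGnxMRxerdk03uaZkM3w.index' | head -n 1   # reportedly 404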
For me the current situation is that I get 502 on Fedora Silverblue 40, but everything seems to work on Silverblue 41.
Isn't there a way to flush the cache on the CloudFront side? Using kojipkgs directly seems to be working fine, so I guess its content is fine.
I guess maybe CloudFront cached something that references files that don't exist anymore, and that would explain the behavior you observed. If you can flush the CloudFront cache, that 'something' would be updated and reference the files that actually exist on kojipkgs.
I'm also having this error - WA (US).
I commented out the mirrorlist line in /etc/ostree/remotes.d/fedora.conf and get the error:
error: While pulling fedora/41/x86_64/kinoite: While fetching https://ostree.fedoraproject.org/objects/e7/23dd54b5155b9d59a2e0edecbe67b9db02aa434b57cfe254db7160350ce740.filez: Server returned HTTP 404
I don't think it's a cache issue. As kevin has said, the file being requested doesn't exist on the source ostree.fedoraproject.org server itself. Either there's an error in ostree requesting an incorrect file, or the file is missing on Fedora's servers.
Yeah, as I noted in https://pagure.io/fedora-infrastructure/issue/12427#comment-958748
I also CC'ed a bunch of folks on this ticket in https://pagure.io/fedora-infrastructure/issue/12427#comment-958728
Isn't there a way to flush the cache on the CloudFront side?
I did so in https://pagure.io/fedora-infrastructure/issue/12427#comment-958448
All indications I can see are that this is not a caching issue. The content doesn't exist, so we can't serve it. However, you are seeing the exact same file you couldn't get from the cache available directly? That seems different... what's the URL?
I did so in https://pagure.io/fedora-infrastructure/issue/12427#comment-958448
Thanks for pointing that out.
I was under the impression (or assumption) that kojipkgs is the origin for that CloudFront URL, which could explain the behavior, considering the possible cache issue. If flushing the cache didn't resolve it, I guess that might not be true. Could you clarify?
What I did was replace the remote definition in /etc/ostree/remotes.d/fedora.conf from:
[remote "fedora"] url=https://ostree.fedoraproject.org gpg-verify=true gpgkeypath=/etc/pki/rpm-gpg/ contenturl=mirrorlist=https://ostree.fedoraproject.org/mirrorlist
to:
[remote "fedora"] url=https://kojipkgs.fedoraproject.org/ostree/repo/ gpg-verify=true gpgkeypath=/etc/pki/rpm-gpg/ #contenturl=mirrorlist=https://ostree.fedoraproject.org/mirrorlist
And it works.
So, I'm assuming it tries to use url first, and if that fails, it gets another URL from the mirrorlist, which at the moment only has one URL: https://d2uk5hbyrobdzx.cloudfront.net/
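(For reference, the mirrorlist itself can be inspected directly; a sketch:)
curl -s https://ostree.fedoraproject.org/mirrorlist
# at the time of this thread it reportedly contained only the single CloudFront URL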
Now to your question:
It is not exactly as you asked; it's not that the file can't be fetched from the cache but could be fetched directly.
What I think might be happening is that the client gets some kind of index file that references the delta-index files we have been seeing so many errors for. This index file is cached and references files that are not cached and don't exist on the origin anymore.
I am not familiar with the ostree repository layout, so I cannot say for sure what this cached file would be.
But with access to the access logs on both the cache and the origin, you can probably follow the pattern from the same client and figure that out. Then you can compare that file from the cache with the same file on the origin.
Would it be possible to confirm whether kojipkgs is the origin for the CloudFront URL?
I have not seen this; however, others have using IoT - https://pagure.io/fedora-infrastructure/issue/12427#comment-958466
Tried to reproduce, but upgrades/rebasing worked ok.
Yes, switching to the kojipkgs URL did the trick for me with Fedora 40 Silverblue today. Otherwise I was hitting this since the 28th of February:
$ rpm-ostree upgrade
Scanning metadata: 1... done
error: While pulling fedora/40/x86_64/silverblue: Server returned HTTP 502
I was also having problems upgrading Fedora 41 Silverblue on the 28th, but it started working today.
Pretty please. I am begging you all. Don't do this (pointing directly at kojipkgs) except perhaps very temporarily. If everyone starts pointing directly there, that could swamp those servers and cause problems for everyone.
I understand doing it for debugging or working around, but please don't go telling everyone to do it. ;)
What happens if you now switch back to the default/cloudfront? Does it keep working?
It is not exactly as you asked; it's not that the file can't be fetched from the cache but could be fetched directly.
Then this is another case. All the above ones I have looked at did not exist on the backend at all.
But with access to the access logs on both the cache and the origin, you can probably follow the pattern from the same client and figure that out.
When I have done so, the file simply doesn't exist on our end.
cloudfront shows such things as '502' when it gets a 404 from the backend.
Try for example:
https://d2uk5hbyrobdzx.cloudfront.net/foobarbazdoesntexist
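A sketch of how to see that behavior for yourself:
curl -sI https://d2uk5hbyrobdzx.cloudfront.net/foobarbazdoesntexist | head -n 1
# per the comment above, the origin answers 404 but CloudFront reports 502 to the client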
They are (kojipkgs is the origin for that CloudFront URL).
I am not going to do this workaround. Hopefully the situation will be cleared up in the very near future.
Hi @kevin ,
Yes, true. The idea is to use it just temporarily for debugging and to help fix the underlying issue. I should've highlighted that when I mentioned it. Sorry about that.
I reverted the workaround, and the Kinoite 41 system that wasn't working before now works. At least from my perspective the issue now seems solved, on both systems: one with Silverblue 41 and another with Kinoite 41.
Thank you!
Was there any other change?
Okay, understood. :)
I guess I can't test right now because I just updated. I can test again in a few days to see where things are at.
I used the koji packages URL and was able to rpm-ostree upgrade. I reverted to the default remote for fedora and was able to check-in like normal.
On this system, I hadn't upgraded since 41.20250121.0. I don't know if using the koji URL allowed my system to fix something or if the issue was resolved coincidentally or if it was something else.
Still receiving the same error message (screenshot: Screenshot_20250303_111254.png).
Thank you. This workaround works for me. I understand the impact on the servers and have switched back to the CloudFront URL. Before I made the change to /etc/ostree/remotes.d/fedora.conf, rpm-ostree update on Silverblue and Kinoite gave:
error: While pulling fedora/41/x86_64/silverblue: While fetching https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/DC/0O6lDAFVM33LALWc2WOU+yfAr9e7PcbiAWtiSRHGM.index: Server returned HTTP 502
error: While pulling fedora/41/x86_64/kinoite: While fetching https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/hX/074AAUg00U9XS0UpcduOj_YOdyy3UcHwts1Ki9fmc.index
https://kojipkgs.fedoraproject.org/ostree/repo/delta-indexes/DC/0O6lDAFVM33LALWc2WOU+yfAr9e7PcbiAWtiSRHGM.index and https://kojipkgs.fedoraproject.org/ostree/repo/delta-indexes/hX/074AAUg00U9XS0UpcduOj_YOdyy3UcHwts1Ki9fmc.index also return a 404 error.
After updating and switching back to the default/Cloudfront, I can install packages but cannot update again (if I remove the deployment with rpm-ostree cleanup -p && rpm-ostree update, I get the same 502 error).
One thing I noticed later is that it seems rpm-ostree always uses the mirror URLs.
So, I gave it a try using just ostree.fedoraproject.org and commenting out the mirror:
[remote "fedora"] url=https://ostree.fedoraproject.org gpg-verify=true gpgkeypath=/etc/pki/rpm-gpg/ #contenturl=mirrorlist=https://ostree.fedoraproject.org/mirrorlist
It failed with a 404 on https://ostree.fedoraproject.org/objects/62/2f9e7d50ea45112c89ff3aafe4fc47918792993d55f2917d28842b7ef6fa8d.filez
And that file exists on both cloudfront and kojipkgs:
https://d2uk5hbyrobdzx.cloudfront.net/objects/62/2f9e7d50ea45112c89ff3aafe4fc47918792993d55f2917d28842b7ef6fa8d.filez
https://kojipkgs.fedoraproject.org/ostree/repo/objects/62/2f9e7d50ea45112c89ff3aafe4fc47918792993d55f2917d28842b7ef6fa8d.filez
Maybe there is some inconsistency between https://ostree.fedoraproject.org and kojipkgs that is causing the behavior mentioned before.
The client reaches ostree.fedoraproject.org (as it is the default url), gets the list of files to download, and some of those requests then go to CloudFront (as it is a URL from the mirror list). Since the two might not be perfectly in sync, it might end up trying to download files that don't exist yet (or anymore) on CloudFront.
I didn't try the workaround with kojipkgs and can confirm that the issue is still there on Fedora 41 Kinoite.
error: While pulling fedora/41/x86_64/kinoite: While fetching https://d2uk5hbyrobdzx.cloudfront.net/objects/39/0ca0fdcec9a5e1c20864db44dbe7eb0ce6fba938b346c68a658945e766f88a.dirtree: Server returned HTTP 502
I still have that issue as well (Central Europe). For now the workaround for me is to switch locations to somewhere in South America via VPN, then updating works properly.
Are there any updates on the issue being resolved?
Sadly not much progress.
I've invalidated the iot cloudfront distributions... I realize I missed them the other day. If folks seeing this on the IoT side could retry, that would be great.
I also did another invalidation on the main ostree one. Perhaps it will help.
ostree.fedoraproject.org doesn't have the objects directly, so that's expected to be a 404 if you are going directly to it.
I just did a fresh install of Sway Atomic yesterday and received the same HTTP 502 error message on that device also.
Possibly related: https://pagure.io/releng/issue/11439
@sohrob76 when you say you received the same image do you mean this? If so, could you provide the URL that resulted in that? I've been running through some of the URLs to see what happens and so far I haven't seen a 502 response like that yet...
I'm seeing some of the URLs work and some returning 404s. The following return an error from CloudFront (x-cache: Error from cloudfront ... 404, not 502 :confused: ):
https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/NK/zRroGmpJGyBol0ksLZshGjGRCKuFQmJzolT948r3k.index
https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/8w/UFyT8ra1CS8MOuaTU8dVVs7hBTi2e6dqXpUST916k.index
https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/h5/8hWxkT1VqT+xvTyByyacZUAQ+I8PhN1wkAY1KRrNI.index
The following are also noted above as failing but are returning content for me (x-cache: Miss from cloudfront ... content returned):
https://d2uk5hbyrobdzx.cloudfront.net/objects/db/962e0e2b66ad44e4ab4f009e4e316f594e40022281653046e181294872e214.filez
https://d2uk5hbyrobdzx.cloudfront.net/objects/16/3be1d7f590285b58ed9dbb45793156d8c71fee729fc502d6ad0e2488c49416.dirtree
https://d2ju0wfl996cmc.cloudfront.net/objects/ff/eb7d3877cd438197a4171e69a747f20ced76a841758adea239f407feddf5d9.dirtree
I'm using Budgie Atomic and have not hit this yet ... but it's obviously an issue. If location matters, I'm on the US east coast, and what I see is that the delta indexes 404 while the objects resolve and return content.
@siosm @jlebon @dustymabe Any guidance is appreciated :pray:
I have something running IoT and it's still hit or miss. I see that some updates did go through automatically today, when before nothing would for days. However, running rpm-ostree update still gives me the same error, unfortunately.
I'm in Southern California. I could update properly on my personal desktop running Silverblue when connecting to a VPN in Japan.
@sohrob76 when you say you received the same image do you mean this? If so, could you provide the URL that resulted in that?
My apologies, I meant to say the same HTTP 502 error message, not image. I have five separate devices at home, three running Kinoite and two running Sway Atomic, and all of them are getting the 502 error message when attempting to update. It's been that way since at least Wednesday or Thursday of last week.
Having problems with Fedora CoreOS too, a big headache considering I run ephemeral VMs and it breaks during my bootstrap with rpm-ostree:
error: While pulling fedora/x86_64/coreos/stable: While fetching https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/2I/Hyw1vdTpCWCgJrehYb7YKmYNpx9ITJfwZVd27ZMHo.index: Server returned HTTP 502
I thought the whole point of having a mirrorlist is not to have a single point of failure.
I saw a few people having 502s when updating Fedora Flatpaks as well. Not sure if this is the same issue.
For me it is ephemeral. So I do
while ! rpm-ostree upgrade; do sleep 1; done
which will succeed after a while.
Not that great, but it works all right in the meantime.
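A bounded variant of the same retry idea, in case an endless loop is undesirable (a sketch; the retry count and sleep are arbitrary):
for i in $(seq 1 30); do rpm-ostree upgrade && break; sleep 10; done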
On the Fedora CoreOS side, we are also encountering probably the very same roadblock with some of the tests hitting 502:
[ 9.364508] zincati[807]: [ERROR zincati::update_agent::actor] failed to stage deployment: rpm-ostree deploy failed:
[ 9.366365] zincati[807]: error: While pulling 8310b7331bb16cbe16335b389b6cfc346efc2f81f5a831265425e4e6c543bc79: Server returned HTTP 502
I'll take a look at this in an hour.
Copr/Konflux are affected, and people report problems with GitHub Actions.
@praiskup I'm not sure if they are the same issue. The issue COPR/Konflux/GitHub Actions (that was me) are seeing is pulling containers from registry.fedoraproject.org; this issue is about pulling ostree commits from some(where|thing) else?
@dustymabe let me know if you want to look at this together.
The comments here seem to mention delta-indexes/*.
If I look at https://d2uk5hbyrobdzx.cloudfront.net/delta-indexes/, it seems that there is no content there other than the directories...
I wonder if the static deltas and summary file need to be regenerated. https://ostreedev.github.io/ostree/repo/#the-summary-file
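For reference, regenerating a repo's summary is normally something along the lines of the command below, run on the machine that owns the repo (a sketch only; the repo path is hypothetical, and Fedora releng presumably has its own tooling and signing setup):
ostree summary --update --repo=/path/to/ostree/repo
# signed repos typically also need e.g. --gpg-sign=<KEYID>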
I'm not sure either :shrug: hence I opened a Red Hat IT ticket INC3431287 (private), at least for the registry.fedoraproject.org pull problems (which probably is an alias to quay.io).
I just had my first successful updates in about a week! Looks like the team may have fixed the issue. I'm on the west coast of the US in SoCal.
We got together to try to investigate this. We think we have solved at least part of the issue. Recently @kevin added some IP blocks for IPs that were hitting kojipkgs heavily and it seems we included the cloudfront endpoint in there, so cloudfront was getting blocked from seeding content from the source of truth. We have removed those blocks and we are seeing initial success.
However we're not completely sure this is the cause of the problems going back a week. We think it's possible the OSTree summary file on the proxies could be getting out of date with the one on the OSTree repo (they get synced a few times an hour).
The next time people report issues let's have someone from Infra check that all of the proxies have the correct content. The below shows just one proxy:
sh-5.2$ curl -L --silent https://ostree.fedoraproject.org/summary | md5sum
751d2957dcab8458cb0cdeb141574912  -
sh-5.2$ md5sum /mnt/koji/ostree/repo/summary
751d2957dcab8458cb0cdeb141574912  /mnt/koji/ostree/repo/summary
You can run the following command on batcave to check them all:
ansible proxies -m shell -a "md5sum /srv/web/ostree/summary"
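From outside the infrastructure, a similar spot check can be done by comparing the summary served publicly against the one on kojipkgs (a sketch; the kojipkgs path is taken from earlier comments in this thread):
curl -sL https://ostree.fedoraproject.org/summary | md5sum
curl -sL https://kojipkgs.fedoraproject.org/ostree/repo/summary | md5sum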
I just got the Kinoite 41 update for 3/5. Thanks, everybody!
Nice, updates work again. Thank you so much!
I will leave it to the Fedora team to close the ticket.
Able to rpm-ostree upgrade on my Kinoite 41 box at work this morning. Was not able to do this yesterday.
Copr builds again, thanks.
Metadata Update from @zlopez: - Issue untagged with: Needs investigation - Issue tagged with: medium-trouble
Closing this as fixed since nobody has reported the issue anymore.
Metadata Update from @zlopez: - Issue close_status updated to: Fixed - Issue status updated to: Closed (was: Open)
Working fine here again. Thanks to all involved <3
I tested rpm-ostree upgrade with both Fedora 40 and 41 Silverblue. It worked as expected.
Thanks, everybody!