When I try to download the image, the link redirects me to https://download.fedoraproject.org/pub/fedora/linux/releases/32/Workstation/x86_64/iso/Fedora-Workstation-Live-x86_64-32-1.6.iso
The link is not accessible: "Service Unavailable".
I was able to download the images using a mirror from
https://admin.fedoraproject.org/mirrormanager/mirrors/Fedora/32/x86_64
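In case it helps anyone else hitting the 503: this is roughly what I ran. The mirror hostname below is a placeholder (pick any mirror from the list above), and the CHECKSUM filename follows the usual Fedora naming convention.

```shell
# Placeholder mirror; substitute one from the MirrorManager list above.
MIRROR=https://mirror.example.org/fedora/linux/releases/32/Workstation/x86_64/iso
ISO=Fedora-Workstation-Live-x86_64-32-1.6.iso
SUM=Fedora-Workstation-32-1.6-x86_64-CHECKSUM

# Fetch the live image and the accompanying CHECKSUM file (-f: fail on HTTP
# errors such as the 503 above, so a broken mirror is detected immediately).
if curl -fLO "$MIRROR/$ISO" && curl -fLO "$MIRROR/$SUM"; then
    # Verify the image before using it; --ignore-missing skips checksum
    # entries for files that were not downloaded.
    sha256sum -c --ignore-missing "$SUM"
else
    echo "download failed; try another mirror from the list" >&2
fi
```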
There seem to be some infra issues:
https://koji.fedoraproject.org/koji/taskinfo?taskID=47346106
Yes, Pagure also seems to be down: https://src.fedoraproject.org
Also, the CentOS Jenkins instance that runs Fedora CI pipelines is unable to send FedoraMessaging messages:
Sending message for job 'master'.
FATAL: Unhandled exception in perform: FATAL: java.lang.NullPointerException
	at com.redhat.jenkins.plugins.ci.messaging.RabbitMQMessagingWorker.sendMessage(RabbitMQMessagingWorker.java:278)
	at com.redhat.utils.MessageUtils.sendMessage(MessageUtils.java:133)
	at com.redhat.jenkins.plugins.ci.CIMessageNotifier.doMessageNotifier(CIMessageNotifier.java:145)
	at com.redhat.jenkins.plugins.ci.pipeline.CIMessageSenderStep$Execution$1.run(CIMessageSenderStep.java:197)
	at jenkins.security.ImpersonatingScheduledExecutorService$1.run(ImpersonatingScheduledExecutorService.java:58)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
exception in finally
[Pipeline] }
[Pipeline] // timeout
[Pipeline] echo
FAIL: Could not send message to FedoraMessaging on topic org.centos.prod.ci.koji-build.test.queued
This could be related as well.
https://bodhi.fedoraproject.org/updates/FEDORA-2020-8fee894682
I am getting "Failed to talk to Greenwave.".
Yep, can't push:
$ fedpkg -v push
Creating repo object from /home/ahughes/projects/fedora/java-1.8.0-openjdk
Running: git -c credential.helper=/usr/bin/fedpkg gitcred -c credential.useHttpPath=true push
Enter passphrase for key '/home/ahughes/.ssh/id_rsa':
Error during lookup request: status: 503
fatal: Could not read from remote repository.
A couple of our proxies are having a bad day, and the impact is pretty wide.
We've just told them to behave, which should solve your issues until they start to be annoying again. We'll try to find the underlying issue.
Great, just managed to push. Hope it stays up! Thanks.
Thanks, the services seem to be up again.
Still having issues with koji builds: https://koji.fedoraproject.org/koji/taskinfo?taskID=47349259
47349412 buildArch (libmatroska-1.6.0-1.fc33.src.rpm, armv7hl): open (buildhw-a64-11.iad2.fedoraproject.org) -> FAILED:
BuildrootError: could not init mock buildroot, mock exited with status 30; see root.log for more information
Something wrong with armv7hl repos?
DEBUG util.py:621: Problem 1: conflicting requests
DEBUG util.py:621:  - package gawk-5.0.1-7.fc32.armv7hl does not have a compatible architecture
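For what it's worth, that dnf error usually means the buildroot's target arch isn't one the host can execute. A quick sanity check on a builder would be something like the sketch below (the target arch is taken from the failed task above; the arch list is illustrative, not exhaustive):

```shell
# Sketch: compare the builder's machine arch against the task's target arch
# (armv7hl here, from the failed task above). An aarch64 host can only run
# armv7hl buildroots if 32-bit execution support is available.
TARGET_ARCH=armv7hl
HOST_ARCH=$(uname -m)

case "$HOST_ARCH" in
  armv7l|armv8l|aarch64)
    echo "host arch $HOST_ARCH may be able to execute $TARGET_ARCH" ;;
  *)
    echo "host arch $HOST_ARCH cannot execute $TARGET_ARCH; mock init will fail" ;;
esac
```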
Everything should be back up and processing normally again.
No, that builder was not supposed to be in the default channel; it was only supposed to do image builds.
I have removed it; resubmit and it should be fine.
The cause was that varnish was getting OOM-killed on two of our proxies. We are investigating the root cause, but in the meantime we have doubled the memory on those VMs to avoid the OOM.
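For anyone curious how to spot this: OOM kills show up in the kernel log. A sketch of the check (assumes systemd-journald on the proxy; the quoted log line is an example of what the kernel emits, not taken from these hosts):

```shell
# Look for OOM-killer activity in the kernel ring buffer over the last day.
# The kernel logs lines like "Out of memory: Killed process 1234 (varnishd)".
journalctl -k --since "-1 day" 2>/dev/null \
  | grep -iE 'out of memory|oom-killer' \
  || echo "no OOM events found (or journalctl unavailable)"

# After resizing the VM, confirm the new memory total.
free -h || true
```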
Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)