#12023 release-monitoring.org crawlers seem to not be running since ~June 26
Opened 17 days ago by decathorpe. Modified 4 days ago

It appears that the crawlers for the different sources on release-monitoring.org have not been checking for new versions since about June 26. Checking projects where I know there have been new upstream releases shows that release-monitoring.org doesn't know about them, so this is not a problem with filing bugs in Bugzilla. I don't know whether this affects all crawlers, but at least the crates.io crawler has been dead for about a week.

Forcing a refresh manually for those projects makes release-monitoring.org pick up the new releases and the-new-hotness file bugs for them, so from what I can tell everything is working except the crawlers.
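
For reference, one way to confirm what release-monitoring.org currently knows about a project (before and after forcing such a refresh) is to query its APIv2. This is just a minimal sketch, assuming the `GET /api/v2/projects/` endpoint with `name`/`ecosystem` filters as described in the Anitya docs; the project name below is a placeholder and the response field names are from memory, not verified against this deployment.

```python
# Minimal sketch (not an official tool): ask Anitya's APIv2 which versions
# release-monitoring.org currently knows for a project, so it can be compared
# against the actual upstream release.  Project name is a placeholder.
import requests

ANITYA = "https://release-monitoring.org"

def known_versions(name: str, ecosystem: str = "crates.io") -> None:
    resp = requests.get(
        f"{ANITYA}/api/v2/projects/",
        params={"name": name, "ecosystem": ecosystem},
        timeout=30,
    )
    resp.raise_for_status()
    # "items" / "version" are the field names as documented for APIv2 (assumption).
    for project in resp.json().get("items", []):
        print(project.get("name"), "latest known version:", project.get("version"))

if __name__ == "__main__":
    known_versions("serde")  # placeholder project name
```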


Metadata Update from @zlopez:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: Needs investigation, medium-gain, ops

17 days ago

Metadata Update from @zlopez:
- Issue assigned to zlopez

17 days ago

I checked the release-monitoring.org deployment and there were no new log entries since June 26th. The OpenShift namespace probably needs more resources. I redeployed the job; let's see if that helps.
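
(For the record, the check boils down to asking OpenShift whether the crawler pod has produced any log output recently. A rough sketch of that check via the `oc` CLI follows; the namespace and deployment names are placeholders, not the actual names used in this deployment.)

```python
# Rough sketch of the "has the crawler logged anything lately?" check,
# wrapping the OpenShift CLI.  NAMESPACE and DEPLOYMENT are hypothetical
# placeholders; adjust them to the real release-monitoring deployment.
import subprocess

NAMESPACE = "release-monitoring-production"   # placeholder
DEPLOYMENT = "deployment/check-service"       # placeholder

def recent_log_lines(since: str = "24h") -> list[str]:
    """Return log lines emitted in the given window (empty list = silent crawler)."""
    result = subprocess.run(
        ["oc", "logs", DEPLOYMENT, "-n", NAMESPACE, "--since", since],
        capture_output=True,
        text=True,
        check=True,
    )
    return [line for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    lines = recent_log_lines()
    if lines:
        print(f"crawler logged {len(lines)} lines in the last 24h")
    else:
        print("no log output in the last 24h -- crawler is probably stuck")
```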

I have now received notifications for almost 100 new bugs being filed, for projects ranging across [a-z], so I assume that means it's working again :)

The restart of the job did the trick, so I'm closing this one as fixed :-)

Metadata Update from @zlopez:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

15 days ago

Metadata Update from @zlopez:
- Issue untagged with: Needs investigation
- Issue tagged with: low-trouble

15 days ago

It looks like this is happening again. release-monitoring.org has no knowledge of any releases that happened in the last ~24 hours.

Metadata Update from @decathorpe:
- Issue status updated to: Open (was: Closed)

6 days ago

I'm trying to understand what is happening here: the pod is still running in OpenShift, but the last log message is from 2024-07-11 02:25:27, and there isn't any error that looks like it could have caused this.

Let me restart the pod and enable debug output; maybe we will have more info the next time it happens. The crawler could just be stuck on some project that is incorrectly set up.
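
(A sketch of that restart-and-watch step via the `oc` CLI; again the namespace and deployment names are placeholders, and enabling the debug output itself happens in the Anitya configuration, which is not shown here.)

```python
# Sketch of the restart step: trigger a rolling restart of the crawler
# deployment, wait for the new pod, then follow its logs to confirm that
# projects are being checked again.  Names are hypothetical placeholders.
import subprocess

NAMESPACE = "release-monitoring-production"   # placeholder
DEPLOYMENT = "deployment/check-service"       # placeholder

def restart_and_follow() -> None:
    # Trigger a rolling restart of the crawler deployment.
    subprocess.run(["oc", "rollout", "restart", DEPLOYMENT, "-n", NAMESPACE], check=True)
    # Wait until the restarted deployment reports ready.
    subprocess.run(["oc", "rollout", "status", DEPLOYMENT, "-n", NAMESPACE], check=True)
    # Stream the new pod's logs to watch the crawlers resume checking projects.
    subprocess.run(["oc", "logs", "-f", DEPLOYMENT, "-n", NAMESPACE], check=True)

if __name__ == "__main__":
    restart_and_follow()
```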

Thank you, restarting seems to have done the trick for now.
