This is stemming from fedora-infrastructure#6363.
The problem is that lots of queries to greenwave turn into many more queries to resultsdb and waiverdb. These can time out and cause greenwave to give 500 errors.
Proposed solution (part 1):
Introduce dogpile.cache to cache responses from resultsdb and waiverdb.
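As a rough illustration of what dogpile.cache would buy us, here is a stdlib-only sketch of the same memoize-with-expiry pattern (all names here are hypothetical, not greenwave's actual code; the real library exposes this via `region.cache_on_arguments()`):

```python
import functools
import time

def cached(expiration_time, cache):
    """Toy stand-in for dogpile.cache's cache_on_arguments decorator."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = (fn.__name__,) + args
            entry = cache.get(key)
            now = time.monotonic()
            if entry is not None and now - entry[0] < expiration_time:
                return entry[1]      # cache hit: skip the expensive query
            value = fn(*args)        # cache miss: do the real work
            cache[key] = (now, value)
            return value
        return wrapper
    return decorator

cache = {}
calls = []

@cached(expiration_time=300, cache=cache)
def retrieve_results(item, type_):
    # Hypothetical stand-in for greenwave's HTTP query to resultsdb.
    calls.append((item, type_))
    return {'item': item, 'type': type_, 'results': []}

retrieve_results('nodejs-ansi-black-0.1.1-1.fc28', 'koji_build')
retrieve_results('nodejs-ansi-black-0.1.1-1.fc28', 'koji_build')
print(len(calls))  # 1: the second call was served from the cache
```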
This raises a new problem: greenwave's decision at any point in time may be out of date if its cache is out of date.
Proposed solution (part 2):
@pingou has asked that we prioritize this above any other greenwave/waiverdb issues for the moment.
We should do this as two pull requests imho. First - just do the resultsdb part. Do the waiverdb part in a second PR and release so we can break the problem down.
I'm not quite clear about the last bullet point of solution 2. Please see below.
Any time it sees a message bus message about a new result in resultsdb, it should construct the cache key used to store results about a given item, and then ask the cache to delete the value associated with that cache key.
What is the point of asking the cache to delete the value associated with that cache key? If the cache key does not exist, there should be nothing to delete; and as I understand it, we would just ask the cache to update the value with the new result for that key. Btw, where are we going to cache the results? I guess it might make sense to put the cached results into the database, so that Greenwave will be able to query the cached results when answering a question?
Any subsequent questions posed to greenwave would force a cache refresh and return up-to-date decisions.
How would the cache get refreshed? I'm totally lost here; I guess it would be easier to explain this whole idea with a real-life example.
OK - for a start, see #84. I introduced a no-op cache there.
The big missing piece is a fedmsg consumer that would invalidate particular cache keys when resultsdb gets new results. It would look something like this:
    def consume(self, msg):
        if is_resultsdb_message(msg):
            task = msg['msg']['task']
            del task['name']
            # here, task is {"item": "nodejs-ansi-black-0.1.1-1.fc28", "type": "koji_build"}
            key = greenwave.cache.cache_key_generator(
                greenwave.resources.retrieve_results, task)
            current_app.cache.delete(key)
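To make the invalidate-then-refresh flow concrete, here is a self-contained toy version (dict-backed cache and hypothetical helper names standing in for greenwave.cache / greenwave.resources): deleting the key when a resultsdb message arrives means the next decision query misses the cache and re-fetches fresh results.

```python
cache = {}
fetch_count = 0

def cache_key(item, type_):
    # Hypothetical stand-in for greenwave.cache.cache_key_generator.
    return ('retrieve_results', type_, item)

def retrieve_results(item, type_):
    """Cache-aside lookup: return the cached value, else query resultsdb (simulated)."""
    global fetch_count
    key = cache_key(item, type_)
    if key in cache:
        return cache[key]
    fetch_count += 1  # simulated expensive resultsdb query
    cache[key] = {'item': item, 'type': type_, 'results': []}
    return cache[key]

def consume(msg):
    """Toy fedmsg consumer: invalidate the cache key for the task in the message."""
    task = dict(msg['msg']['task'])
    task.pop('name', None)
    cache.pop(cache_key(task['item'], task['type']), None)

# 1. A decision query warms the cache; a repeat query is a cache hit.
retrieve_results('nodejs-ansi-black-0.1.1-1.fc28', 'koji_build')
retrieve_results('nodejs-ansi-black-0.1.1-1.fc28', 'koji_build')
# 2. A new result arrives on the bus: the consumer deletes the cached value.
consume({'msg': {'task': {'name': 'dist.rpmdeplint',
                          'item': 'nodejs-ansi-black-0.1.1-1.fc28',
                          'type': 'koji_build'}}})
# 3. The next decision query misses the cache and re-fetches fresh results.
retrieve_results('nodejs-ansi-black-0.1.1-1.fc28', 'koji_build')
print(fetch_count)  # 2 fetches: the initial one and the post-invalidation refresh
```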
I don't want to submit a commit/patch with that yet until #51 is fixed up and merged, since presumably it is going to introduce a consumer too. I don't want to duplicate work there.
Do you see how it would work?
Thanks @ralph. I'm on board. I will try to fix up #51
OK, I'll get cracking on the invalidator shortly!
Invalidator posted in #86.
Metadata Update from @mjia: - Issue assigned to ralph
Metadata Update from @mjia: - Issue set to the milestone: 0.2
I see this is assigned to the milestone 0.2, and I see a 0.3 got released earlier today. Does that mean there is something we can poke at? :)
Nearly... We have a build done. @mjia was trying to deploy it on stage, but something went wrong in OpenShift and we can't see what. I think we need @codeblock to grant us access to the project in OpenShift to debug further.
@pingou, Greenwave 0.3 is now in stg and you can try this out there.
https://greenwave-web-greenwave.app.os.stg.fedoraproject.org/api/v1.0/version
Let us know how it goes, thanks.
FYI - I ran a modified version of the script from fedora-infrastructure#6363 against greenwave stg and everything seemed fine (although we still need to hook up a few things before moving anything to prod).
Metadata Update from @ralph: - Issue status updated to: Closed (was: Open)