#547 make `rpkg verrel` work for containers
Closed: Fixed 2 years ago by onosek. Opened 3 years ago by ktdreyer.

It would be great if rpkg verrel could work for containers as it does for RPMs. Here is the bash code we're using in the meantime to parse the NVR components out of the Dockerfile:

local _component=$(grep -E 'com.redhat.component="' Dockerfile | sed -e 's|^.*com.redhat.component="\(.*\)".*$|\1|')
local _version=$(grep -E 'version="' Dockerfile | sed -e 's|^.*version="\(.*\)".*$|\1|')
local _release=$(grep -E 'release="' Dockerfile | sed -e 's|^.*release="\(.*\)".*$|\1|')

https://pypi.org/project/dockerfile-parse/ would be even safer (and it's what OSBS uses for this).
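
For illustration, here is a minimal sketch of that approach with dockerfile-parse (a sketch of the idea, not rpkg's eventual implementation; the label names match the grep snippet above):

# Minimal sketch using dockerfile-parse instead of grep/sed.
# Run from the dist-git checkout; DockerfileParser() reads ./Dockerfile.
from dockerfile_parse import DockerfileParser

labels = DockerfileParser().labels        # merged dict of all LABEL instructions
component = labels.get('com.redhat.component')
version = labels.get('version')
release = labels.get('release')           # may be absent; see discussion below
print('%s-%s-%s' % (component, version, release))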


@ktdreyer
Hey, I'm the developer getting started on this issue. Could you please take me through an example workflow of how this feature would work, from the clone onward?

Sure. Here's a user story:

As a container developer,
I want to determine the NVR of my Dockerfile with Bash,
so I can write further code to look up the build in Koji and act on it.

Here's an example workflow in a Jenkins job:

fedpkg clone container/nginx
cd nginx
NVR=nginx-container-$(fedpkg verrel)

# Do more things with this NVR, for example, look it up with "buildinfo"
koji buildinfo "$NVR"

# Or query the state of the build programmatically:
state=$(koji --noauth call --json-output getBuild "$NVR" | jq .state)
if [ "$state" = 1 ]; then
    # In Koji's BUILD_STATES, 1 means COMPLETE
    echo "The koji build already exists and is complete"
    return 0
fi

# etc.

"Do more things with the NVR" means: rebuild it if it failed (with retries), submit the build to another test system, print the build in a report, etc. These are the things that do currently with the NVRs for RH's OpenShift Data Storage product.

(The reason we care so much about inspecting the state of the build is that rpkg container-build can fail at the end with a non-zero exit code even if it sends a buildContainer RPC successfully. We've seen cases where Koji's watch_tasks() fails due to intermittent DNS problems, so it looks like the build command "failed" but it actually succeeded.)
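
For reference, the check we do amounts to something like this (a sketch with the koji Python module; build_succeeded is a hypothetical helper name, not anything rpkg ships):

# Hypothetical helper: trust the build state recorded in Koji rather
# than the exit code of "rpkg container-build".
import koji

def build_succeeded(hub_url, nvr):
    session = koji.ClientSession(hub_url)
    build = session.getBuild(nvr)         # None if the NVR does not exist
    return build is not None and build['state'] == koji.BUILD_STATES['COMPLETE']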

Okay, thank you so much. I really appreciate the detail of the user story you supplied; I understand your need better now.
I think that this feature makes a lot of sense. I'll link the corresponding PR here later once I've done a bit more research.
It makes sense that 'x-pkg verrel' ought to work for all kinds of containers.

I've reviewed some of the containers available at src.fedoraproject.org and found that many do not have release numbers. What would be a workable fallback here, in terms of output that would be correctly formatted for your use case?

Yeah, in those cases OSBS uses Koji's getNextRelease RPC. For example, grafana's Dockerfile has no "release" label, but the last build was grafana-7-3, so the next release will be "4":

koji --noauth call getNextRelease --python --json-output '{"name": "grafana", "version": "7"}'
"4"

@ktdreyer So if one were to call 'x-pkg verrel' in the container's repo, would the user expect 'grafana-7-3' or 'grafana-7-4'?

Or to put it more specifically: if we were to get the next release from the Koji API, would the preferred value be the 'current' release (nextRelease - 1) or 'nextRelease'?

@ktdreyer To add to the above point, what would be your preferred NVR if you were to perform:

fedpkg clone container/nginx
cd nginx
fedpkg verrel

Currently, based on the Koji API that is available, we get a next release number of:
nginx-1.12-3120191022231909.f636be4b

Thanks for raising this point.

With RPMs, verrel will already raise an error if there is no "Release" field.

For simplicity, we could just raise an error for containers that have no release label.

I think it will be complex to test in rpkg and confusing to users if we depend on the state of the build in Koji here.
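
To make that concrete, the behavior I'm picturing is roughly this (a sketch; the function name and error message are hypothetical, not the final rpkg code):

# Rough sketch of the agreed behavior: derive the NVR purely from the
# Dockerfile labels, and fail loudly when a label is missing.
from dockerfile_parse import DockerfileParser

def container_verrel():
    labels = DockerfileParser().labels
    missing = [k for k in ('com.redhat.component', 'version', 'release')
               if k not in labels]
    if missing:
        raise RuntimeError('Dockerfile is missing label(s): ' + ', '.join(missing))
    return '%s-%s-%s' % (labels['com.redhat.component'],
                         labels['version'], labels['release'])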

Okay, that sounds very reasonable. We'll change it to reflect your suggestion. Thanks for the expertise!

Metadata Update from @onosek:
- Issue set to the milestone: 1.63

2 years ago


Metadata
Related Pull Requests
  • #567 Merged 2 years ago