#151 Need owner to define basic container smoke testing requirements
Closed. Opened 6 years ago by acarter.

The release tooling team will begin shipping layered images officially in F25. As part of that, we'll be creating an automated release workflow that includes testing via Taskotron. In order to scope the Taskotron requirements, we need to know what the core requirements are for the basic smoke testing that will be shared across containers. We need an owner to define those tests initially. I'd also like to know whether that person would continue to own them going forward or if ownership would then fall to the group or another individual. This should be done soon so that we can scope the Taskotron work.

Note: Kushal has pointed out that the owner should be sure to discuss this with the internal Container Certification team.

What do smoke test requirements look like? Can you link some examples?

Draft document of baseline tests here. Looking for feedback and more tests.


Additional areas to test:

  • Building images.
  • Running containers from built images.
  • Testing some of the layered Fedora-Dockerfiles-based images.

Specifically, it'd be good to test systemd inside of containers.

So, there's testing docker, and there's testing containers. Per yesterday's discussion, we need a standard framework for testing container images, which is going to be hard, given the variety. This will require each container builder to enable these tests. Here's what I'm thinking:

  1. Pull works (we can pull the container down).
  2. Run works (with optional parameters supplied by the image owner).
  3. Some operational test, defined by the image owner, works (for example, on an nginx webserver container you can get a response on port 80).

For sanity, we'd require that (3) be defined in bash or python.
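As a concrete sketch of what an image owner's step (3) test might look like in Python, the check below polls a TCP port until the service answers; the host, port, and timeout values are illustrative assumptions, not a defined interface:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    """Poll until a TCP service answers on host:port, or raise on timeout.

    Illustrative smoke check for step (3): for an nginx container with
    port 80 published, this passes once the server accepts connections.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    raise TimeoutError(f"no service on {host}:{port} after {timeout}s")
```

A real operational test would likely follow the connect with an HTTP GET and assert on the status line, but a bare TCP connect is often enough for a smoke test.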

HOWEVER, the problem we're going to run into is that a container image isn't necessarily a standalone thing. For example, say I build an image for clustered PostgreSQL: without an etcd container available, it just errors out. Or take the Kube 1.2 demo, where the ghost container is set up to require an nginx container to run.

Now, it would be nice to reach inside the container and say it should have a test mode which runs standalone, but (a) that would require the image author to make changes to upstream software, and (b) what's the point in smoke-testing a standalone mode which is radically different from the container's normal operation? So I think we're going to need to look at smoke-testing combinations of containers ... this seems like a good role for Atomic App, really.
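One way a harness could drive such combination tests is to have image owners declare which containers theirs requires (the PostgreSQL image requiring etcd, the ghost image requiring nginx) and start them in dependency order. A minimal sketch of that ordering step, with hypothetical names and data shapes:

```python
def start_order(deps):
    """Topologically sort containers so dependencies start first.

    deps maps container name -> list of containers it requires,
    e.g. {"postgres": ["etcd"], "etcd": []}. Raises on cycles.
    Hypothetical sketch -- no harness defines this interface today.
    """
    order, done, in_progress = [], set(), set()

    def visit(name):
        if name in done:
            return
        if name in in_progress:
            raise ValueError(f"dependency cycle involving {name}")
        in_progress.add(name)
        for dep in deps.get(name, []):
            visit(dep)
        in_progress.discard(name)
        done.add(name)
        order.append(name)

    for name in sorted(deps):
        visit(name)
    return order
```

The harness would then pull and run each container in this order before the operational test of the image under test.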

I understand that testing docker is different from testing containers.

However, as docker evolves, things might change WRT required/expected options. The images I'd like to see tested as part of testing docker do not necessarily need to be useful end-user applications -- I'd leave that test process to the container authors.

But some core functionality and interaction of the host, docker, and the containerized processes should always be testable with each new docker version.

I can't speak to what other folks are planning to do in terms of testing around docker and layered images but the functionality planned from a Taskotron POV for the F25 timeframe is:

  • ability to trigger on layered image build completion
  • dist-git repos to store tests to be run automatically on individual layered images
  • ability to retrieve a layered image under test
  • ability to run shell commands (stepping stone to whatever tests are written for the layered image)
  • utilities to get data into correct formats for centralized reporting
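To illustrate the last bullet, a reporting utility might normalize each test run into a flat record before submission. The field names below are hypothetical, not the actual Taskotron reporting schema:

```python
def format_result(item, checkname, outcome, note=""):
    """Normalize one layered-image test run into a flat dict for reporting.

    Hypothetical field names -- the real format is defined by the
    centralized reporting system, not by this sketch.
    """
    allowed = {"PASSED", "FAILED", "ABORTED"}
    if outcome not in allowed:
        raise ValueError(f"outcome must be one of {sorted(allowed)}")
    return {
        "item": item,          # e.g. the layered image build under test
        "checkname": checkname,
        "outcome": outcome,
        "note": note,
    }
```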

There are no plans for a "docker image testing framework" right now because one doesn't seem to be wanted or needed.

The only "universal" check that we're considering at the moment is running [[http://developers.redhat.com/blog/2016/05/02/introducing-atomic-scan-container-vulnerability-detection/|Atomic Scan]] on every image but there are no concrete plans for if/when that would be complete.

We're also set up to run [[http://docker-autotest.readthedocs.io/en/latest/index.html|docker-autotest]] but there hasn't been discussion around when to actually run that. At the moment, our dev instance is set to run the suite on every completed build of docker in koji.

My understanding of the situation extends to the following (please correct me if I'm wrong on any of these points):

  • there are not really any good universal checks to run on all layered images, outside the potential Atomic Scan check listed above
  • there are no good, existing, practical frameworks for testing docker images
  • one of the few requirements that I've gotten for this is to not restrict test writing to a single language or framework
  • image creators will be responsible for the tests which run on new images
  • there are no concrete requirements for how/if new layered images should be tested at this time

I don't think this will answer all of the questions here, but it should provide some useful related information.

  1. Atomic Scan for every image.
  2. Verifying the content of every image (right now everything should come from our RPMs).
  3. Scanning for executables in the image (there should not be any random binaries).
  4. OpenSCAP-based checks (maybe Atomic Scan can take care of that).

These are the few things I have in my mind. The list of functionality mentioned by tflink in the above comment also sounds good to me.
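For check (3) above, a first pass could simply walk an image's extracted filesystem and flag executables found outside the directories where we expect them. The allowlist below is an illustrative assumption; a real check would instead verify RPM ownership of each file (e.g. via `rpm -qf`):

```python
import os
import stat

# Illustrative allowlist of directories where executables are expected;
# a real check would verify RPM ownership of each file instead.
EXPECTED_DIRS = ("usr/bin", "usr/sbin", "usr/libexec", "usr/lib", "usr/lib64")

def unexpected_executables(rootfs):
    """Walk an extracted image rootfs and report executables outside
    the expected directories -- candidates for 'random binaries'."""
    suspects = []
    for dirpath, _dirs, files in os.walk(rootfs):
        rel = os.path.relpath(dirpath, rootfs)
        if rel.startswith(EXPECTED_DIRS):
            continue
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue
            if stat.S_ISREG(mode) and mode & 0o111:
                suspects.append(os.path.relpath(path, rootfs))
    return sorted(suspects)
```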

One more extra thing we may want to look at is https://github.com/coreos/clair

The QA team is happy with the current state, so we are closing this ticket. We can reopen it in the future if required.
