#4 fourth test PR
Closed 5 years ago by mikeb. Opened 5 years ago by mikeb.

rebuild
Mike Bonnet • 5 years ago  
rebuild
Mike Bonnet • 5 years ago  
rebuild
Mike Bonnet • 5 years ago  
rebuild
Mike Bonnet • 5 years ago  
rebuild
Mike Bonnet • 5 years ago  
rebuild
Mike Bonnet • 5 years ago  
debug
Mike Bonnet • 5 years ago  
rebuild
Mike Bonnet • 5 years ago  
debug
Mike Bonnet • 5 years ago  
rebuild
Mike Bonnet • 5 years ago  
rebuild
Mike Bonnet • 5 years ago  
rebuild
Mike Bonnet • 5 years ago  
reformat
Mike Bonnet • 5 years ago  
reformat
Mike Bonnet • 5 years ago  
debug
Mike Bonnet • 5 years ago  
quotes
Mike Bonnet • 5 years ago  
quotes
Mike Bonnet • 5 years ago  
undebug
Mike Bonnet • 5 years ago  
debug
Mike Bonnet • 5 years ago  
quotes
Mike Bonnet • 5 years ago  
refactor
Mike Bonnet • 5 years ago  
quotes
Mike Bonnet • 5 years ago  
ugh
Mike Bonnet • 5 years ago  
debug
Mike Bonnet • 5 years ago  
quotes
Mike Bonnet • 5 years ago  
tweaks
Mike Bonnet • 5 years ago  
typo
Mike Bonnet • 5 years ago  
typo
Mike Bonnet • 5 years ago  
typo
Mike Bonnet • 5 years ago  
use 8443
Mike Bonnet • 5 years ago  
tweaks
Mike Bonnet • 5 years ago  
debug
Mike Bonnet • 5 years ago  
tmp
Mike Bonnet • 5 years ago  
tmp
Mike Bonnet • 5 years ago  
file modified
+1 -4
@@ -7,8 +7,7 @@ 

  $ docker build openshift/backend \

      --tag mbs-backend:latest \

      --build-arg mbs_rpm=<MBS_RPM> \

-     --build-arg mbs_messaging_umb_rpm=<MBS_MESSAGING_UMB_RPM> \

-     --build-arg umb_ca_crt=<UMB_CA_CRT>

+     --build-arg mbs_messaging_umb_rpm=<MBS_MESSAGING_UMB_RPM>

  ```

  

  where:
@@ -20,8 +19,6 @@ 

    Plugin](https://github.com/release-engineering/mbs-messaging-umb) RPM. If not

    provided, only `fedmsg` and `in_memory` will be available for messaging in the

    image.

- * UMB_CA_CRT is a path or URL to the CA certificate of the message bus to be

-   used by MBS.

  

  ## Build the container image for MBS frontend

  

file modified
+8 -12
@@ -1,28 +1,24 @@ 

- FROM fedora:28

+ FROM fedora:29

  LABEL \

      name="Backend for the Module Build Service (MBS)" \

      vendor="The Factory 2.0 Team" \

      license="MIT" \

      description="The MBS coordinates module builds. This image is to serve as the MBS backend." \

-     usage="https://pagure.io/fm-orchestrator" \

-     build-date=""

+     usage="https://pagure.io/fm-orchestrator"

  

  # The caller can choose to provide an already built module-build-service RPM.

  ARG mbs_rpm=module-build-service

  ARG mbs_messaging_umb_rpm

- ARG umb_ca_crt

  

  RUN dnf -y install \

-             python2-pungi \

-             python2-psycopg2 \

-             https://dl.fedoraproject.org/pub/epel/7Server/x86_64/Packages/s/stomppy-3.1.6-3.el7.noarch.rpm \

+             --setopt=deltarpm=0 \

+             --setopt=install_weak_deps=false \

+             --setopt=tsflags=nodocs \

+             python3-psycopg2 \

+             python3-docopt \

              $mbs_rpm \

              $mbs_messaging_umb_rpm \

      && dnf -y clean all

  

- ADD $umb_ca_crt /etc/pki/ca-trust/source/anchors/umb_serverca.crt

- # Do this as a workaround instead of `update-ca-trust`

- RUN cat /etc/pki/ca-trust/source/anchors/umb_serverca.crt >> /etc/pki/tls/certs/ca-bundle.crt

- 

  VOLUME ["/etc/module-build-service", "/etc/fedmsg.d", "/etc/mbs-certs"]

- ENTRYPOINT fedmsg-hub

+ ENTRYPOINT fedmsg-hub-3

@@ -5,17 +5,18 @@ 

      vendor="The Factory 2.0 Team" \

      license="MIT" \

      description="The MBS coordinates module builds. This image is to serve as the MBS frontend." \

-     usage="https://pagure.io/fm-orchestrator" \

-     build-date=""

+     usage="https://pagure.io/fm-orchestrator"

  

  RUN dnf -y install \

-             httpd \

-             mod_wsgi \

+             --setopt=deltarpm=0 \

+             --setopt=install_weak_deps=false \

+             --setopt=tsflags=nodocs \

+             python3-mod_wsgi \

      && dnf -y clean all

  

  EXPOSE 8080/tcp 8443/tcp

  VOLUME ["/etc/module-build-service", "/etc/fedmsg.d", "/etc/mbs-certs", "/etc/httpd/conf.d"]

- ENTRYPOINT ["mod_wsgi-express", "start-server", "/usr/share/mbs/mbs.wsgi"]

+ ENTRYPOINT ["mod_wsgi-express-3", "start-server", "/usr/share/mbs/mbs.wsgi"]

  CMD [\

      "--user", "fedmsg", "--group", "fedmsg", \

      "--port", "8080", "--threads", "1", \

@@ -0,0 +1,88 @@ 

+ Run MBS-Koji integration tests

+ ==============================

+ ### Configure Secrets for Push Containers

+ This section is optional; skip it if you do not need to publish containers.

+ 

+ To publish the container built by the pipeline, you need to set up a secret for your target container registries.

+ 

+ - Go to your registry dashboard and create a robot account.

+ - Back up your docker-config-json file (`$HOME/.docker/config.json`) if present.

+ - Run `docker login` with the robot account you just created to produce a new docker-config-json file (see the example after this list).

+ - Create a new [OpenShift secret for registries][] named `factory2-pipeline-registry-credentials` from your docker-config-json file:

+ ```bash

+   oc create secret generic factory2-pipeline-registry-credentials \

+     --from-file=.dockerconfigjson="$HOME/.docker/config.json" \

+     --type=kubernetes.io/dockerconfigjson

+ ```
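
+ 

+ For example, a login to quay.io with a robot account might look like the following (the registry and account name here are only placeholders, not values defined in this repository):

+ ```bash

+ docker login -u "myorg+myrobot" quay.io

+ ```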

+ 

+ #### Build Jenkins slave container image

+ Before running the pipeline, you need to build a container image for Jenkins slave pods.

+ This step should be repeated every time you change

+ the Dockerfile for the Jenkins slave pods.

+ ```bash

+  oc start-build mbs-integration-jenkins-slave

+ ```
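
+ 

+ If the `mbs-integration-jenkins-slave` BuildConfig does not exist yet, it can be created from the Jenkins slave template added in this PR. A minimal sketch, assuming the template path and NAME below (adjust them to match your checkout and the build name used above):

+ ```bash

+ oc process --local -f pipelines/templates/mbs-koji-int-jenkins-slave-template.yaml \

+    -p NAME=mbs-integration-jenkins-slave \

+   | oc apply -f -

+ ```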

+ 

+ ### Integration Test Pipeline

+ Typically, integration tests should be rerun

+ - when a new container image of MBS or Koji is built

+ - when verifying that an image is mature enough for promotion

+ 

+ Splitting the functional test stage into its own pipeline makes it possible to run integration

+ tests individually; it can also run as a step of a larger pipeline.

+ 

+ #### Installation

+ To install this OpenShift pipeline:

+ ```bash

+ oc process --local -f pipelines/templates/mbs-koji-integration-test-template.yaml \

+    -p NAME=mbs-koji-integration-test \

+   | oc apply -f -

+ ```

+ 

+ Additional installations with default parameters for dev, stage, and prod environments:

+ ```bash

+ # for dev

+ oc process --local -f pipelines/templates/mbs-koji-integration-test-template.yaml \

+    -p NAME=mbs-koji-dev-integration-test \

+    -p MBS_BACKEND_IMAGE="quay.io/factory2/mbs-backend:latest" \

+    -p MBS_FRONTEND_IMAGE="quay.io/factory2/mbs-frontend:latest" \

+    -p KOJI_IMAGE="quay.io/factory2/koji:latest" \

+   | oc apply -f -

+ # for stage

+ oc process --local -f pipelines/templates/mbs-koji-integration-test-template.yaml \

+    -p NAME=mbs-koji-stage-integration-test \

+    -p MBS_BACKEND_IMAGE="quay.io/factory2/mbs-backend:stage" \

+    -p MBS_FRONTEND_IMAGE="quay.io/factory2/mbs-frontend:stage" \

+    -p KOJI_IMAGE="quay.io/factory2/koji:stage" \

+   | oc apply -f -

+ # for prod

+ oc process --local -f pipelines/templates/mbs-koji-integration-test-template.yaml \

+    -p NAME=mbs-koji-prod-integration-test \

+    -p MBS_BACKEND_IMAGE="quay.io/factory2/mbs-backend:prod" \

+    -p MBS_FRONTEND_IMAGE="quay.io/factory2/mbs-frontend:prod" \

+    -p KOJI_IMAGE="quay.io/factory2/koji:prod" \

+   | oc apply -f -

+ ```

+ 

+ #### Usage

+ To trigger a pipeline build for each environment, run:

+ ```bash

+ # for dev

+ oc start-build mbs-koji-dev-integration-test

+ # for stage

+ oc start-build mbs-koji-stage-integration-test

+ # for prod

+ oc start-build mbs-koji-prod-integration-test

+ ```

+ 

+ To trigger a custom integration test, start a new pipeline build with the image references you want to test against and the Git repository and commit ID/branch name

+ where the functional test suite is located:

+ ```bash

+ oc start-build mbs-koji-integration-test \

+   -e MBS_BACKEND_IMAGE="quay.io/factory2/mbs-backend:test" \

+   -e MBS_FRONTEND_IMAGE="quay.io/factory2/mbs-frontend:test" \

+   -e KOJI_IMAGE="quay.io/factory2/koji:test" \

+   -e MBS_GIT_REPO=https://pagure.io/forks/<username>/fm-orchestrator.git \

+   -e MBS_GIT_REF=my-branch # master branch is default

+ ```

+ 

@@ -0,0 +1,45 @@ 

+ FROM fedora:29 AS builder

+ LABEL \

+     name="Backend for the Module Build Service (MBS)" \

+     vendor="The Factory 2.0 Team" \

+     maintainer="The Factory 2.0 Team <pnt-factory2-devel@redhat.com>" \

+     license="MIT" \

+     description="The MBS coordinates module builds. This image is to serve as the MBS backend." \

+     usage="https://pagure.io/fm-orchestrator"

+ 

+ ARG EXTRA_RPMS=""

+ ARG DNF_CMD="dnf -y --setopt=deltarpm=0 --setopt=install_weak_deps=false --setopt=tsflags=nodocs"

+ 

+ COPY . /src

+ WORKDIR /src

+ 

+ RUN ${DNF_CMD} install \

+       'dnf-command(builddep)' rpm-build rpmdevtools rpmlint \

+       python3-tox python3-pytest python3-mock python3-flake8 bandit && \

+     ${DNF_CMD} builddep *.spec && \

+     ${DNF_CMD} clean all

+ RUN rpmdev-setuptree && \

+     python3 setup.py sdist && \

+     rpmbuild --define "_sourcedir $PWD/dist" -ba *.spec && \

+     mv $HOME/rpmbuild/RPMS /srv

+ RUN flake8 && \

+     bandit -r -ll -s B102,B303,B411,B602 module_build_service && \

+     tox -v -e py3

+ 

+ 

+ FROM fedora:29

+ 

+ COPY --from=builder /srv/RPMS /srv/RPMS

+ COPY repos/ /etc/yum.repos.d/

+ 

+ RUN ${DNF_CMD} install \

+       python3-psycopg2 \

+       python3-docopt \

+       python3-service-identity \

+       /srv/*/*/*.rpm \

+       $EXTRA_RPMS && \

+     ${DNF_CMD} clean all && \

+     rm -rf /srv/RPMS

+ 

+ VOLUME ["/etc/module-build-service", "/etc/fedmsg.d", "/etc/mbs-certs"]

+ ENTRYPOINT fedmsg-hub-3

@@ -0,0 +1,78 @@ 

+ # Template to produce a new BuildConfig and ImageStream for MBS backend image builds.

+ 

+ ---

+ apiVersion: v1

+ kind: Template

+ metadata:

+   name: mbs-backend-build-template

+ labels:

+   template: mbs-backend-build-template

+ parameters:

+ - name: NAME

+   displayName: Short unique identifier for the templated instances.

+   required: true

+   value: mbs-backend

+ - name: MBS_GIT_REPO

+   displayName: MBS Git repo URL

+   description: Default MBS Git repo URL against which to run dev tests

+   required: true

+   value: https://pagure.io/fm-orchestrator.git

+ - name: MBS_GIT_REF

+   displayName: MBS Git repo ref

+   description: Default MBS Git repo ref against which to run dev tests

+   required: true

+   value: master

+ - name: MBS_BACKEND_IMAGESTREAM_NAME

+   displayName: ImageStream name of the resulting image

+   required: true

+   value: mbs-backend

+ - name: MBS_BACKEND_IMAGESTREAM_NAMESPACE

+   displayName: Namespace of ImageStream for MBS images

+   required: false

+ - name: MBS_IMAGE_TAG

+   displayName: Tag of resulting image

+   required: true

+   value: latest

+ - name: EXTRA_RPMS

+   displayName: Names of extra rpms to install

+   required: false

+   value: ""

+ objects:

+ - apiVersion: v1

+   kind: ImageStream

+   metadata:

+     name: "${MBS_BACKEND_IMAGESTREAM_NAME}"

+     labels:

+       app: "${NAME}"

+ - kind: "BuildConfig"

+   apiVersion: "v1"

+   metadata:

+     name: "${NAME}"

+     labels:

+       app: "${NAME}"

+   spec:

+     runPolicy: "Parallel"

+     completionDeadlineSeconds: 1800

+     strategy:

+       dockerStrategy:

+         forcePull: true

+         dockerfilePath: openshift/integration/koji/containers/backend/Dockerfile

+         buildArgs:

+         - name: EXTRA_RPMS

+           value: "${EXTRA_RPMS}"

+     resources:

+       requests:

+         memory: "768Mi"

+         cpu: "300m"

+       limits:

+        memory: "1Gi"

+        cpu: "500m"

+     source:

+       git:

+         uri: "${MBS_GIT_REPO}"

+         ref: "${MBS_GIT_REF}"

+     output:

+       to:

+         kind: "ImageStreamTag"

+         name: "${MBS_BACKEND_IMAGESTREAM_NAME}:${MBS_IMAGE_TAG}"

+         namespace: "${MBS_BACKEND_IMAGESTREAM_NAMESPACE}"

@@ -0,0 +1,82 @@ 

+ # Template to produce a new BuildConfig and ImageStream for MBS frontend image builds.

+ 

+ ---

+ apiVersion: v1

+ kind: Template

+ metadata:

+   name: mbs-frontend-build-template

+ labels:

+   template: mbs-frontend-build-template

+ parameters:

+ - name: NAME

+   displayName: Short unique identifier for the templated instances.

+   required: true

+   value: mbs-frontend

+ - name: MBS_GIT_REPO

+   displayName: MBS Git repo URL

+   description: Default MBS Git repo URL against which to run dev tests

+   required: true

+   value: https://pagure.io/fm-orchestrator.git

+ - name: MBS_GIT_REF

+   displayName: MBS Git repo ref

+   description: Default MBS Git repo ref against which to run dev tests

+   required: true

+   value: master

+ - name: MBS_FRONTEND_IMAGESTREAM_NAME

+   displayName: ImageStream name of the resulting image

+   required: true

+   value: mbs-frontend

+ - name: MBS_FRONTEND_IMAGESTREAM_NAMESPACE

+   displayName: Namespace of ImageStream for MBS images

+   required: false

+ - name: MBS_IMAGE_TAG

+   displayName: Tag of resulting image

+   required: true

+   value: latest

+ - name: MBS_BACKEND_IMAGESTREAM_NAME

+   displayName: ImageStream name of the MBS backend image

+   required: true

+   value: mbs-backend

+ - name: MBS_BACKEND_IMAGESTREAM_NAMESPACE

+   displayName: Namespace of ImageStream for MBS backend image

+   required: false

+ objects:

+ - apiVersion: v1

+   kind: ImageStream

+   metadata:

+     name: "${MBS_FRONTEND_IMAGESTREAM_NAME}"

+     labels:

+       app: "${NAME}"

+ - kind: "BuildConfig"

+   apiVersion: "v1"

+   metadata:

+     name: "${NAME}"

+     labels:

+       app: "${NAME}"

+   spec:

+     runPolicy: "Parallel"

+     completionDeadlineSeconds: 1800

+     strategy:

+       dockerStrategy:

+         forcePull: true

+         dockerfilePath: openshift/frontend/Dockerfile

+         from:

+           kind: ImageStreamTag

+           name: "${MBS_BACKEND_IMAGESTREAM_NAME}:${MBS_IMAGE_TAG}"

+           namespace: "${MBS_BACKEND_IMAGESTREAM_NAMESPACE}"

+     resources:

+       requests:

+         memory: "768Mi"

+         cpu: "300m"

+       limits:

+        memory: "1Gi"

+        cpu: "500m"

+     source:

+       git:

+         uri: "${MBS_GIT_REPO}"

+         ref: "${MBS_GIT_REF}"

+     output:

+       to:

+         kind: "ImageStreamTag"

+         name: "${MBS_FRONTEND_IMAGESTREAM_NAME}:${MBS_IMAGE_TAG}"

+         namespace: "${MBS_FRONTEND_IMAGESTREAM_NAMESPACE}"

@@ -0,0 +1,74 @@ 

+ # Based on the rad-jenkins image, which is in turn based on:

+ # https://github.com/jenkinsci/docker-jnlp-slave/blob/master/Dockerfile

+ # https://github.com/jenkinsci/docker-slave/blob/master/Dockerfile

+ 

+ FROM fedora:29

+ LABEL name="mbs-koji-int-jenkins-slave" \

+       description="Jenkins slave for MBS-Koji integration tests" \

+       vendor="MBS Developers" \

+       license="GPLv2+" \

+       distribution-scope="private"

+ 

+ ARG USER=jenkins

+ ARG UID=10000

+ ARG HOME_DIR=/var/lib/jenkins

+ ARG SLAVE_VERSION=3.29

+ ARG TINI_VERSION=0.18.0

+ ARG DNF_CMD="dnf -y --setopt=deltarpm=0 --setopt=install_weak_deps=false --setopt=tsflags=nodocs"

+ ARG CA_URLS=""

+ 

+ # Provide a default HOME location and set some default git user details

+ # Set LANG to UTF-8 to support it in stdout/stderr

+ ENV HOME=${HOME_DIR} \

+     GIT_COMMITTER_NAME=mbs-koji-int \

+     GIT_COMMITTER_EMAIL=mbs-koji-int@fedoraproject.org \

+     LANG=en_US.UTF-8

+ 

+ USER root

+ 

+ RUN ${DNF_CMD} install -y \

+     java-1.8.0-openjdk nss_wrapper gettext git \

+     tar gzip skopeo wget make \

+     origin-clients \

+     # Jenkins pipeline 'sh' steps seem to require ps

+     procps-ng \

+     # Tools to interface with our test instances

+     koji && \

+     ${DNF_CMD} clean all

+ 

+ # CA Certs

+ WORKDIR /etc/pki/ca-trust/source/anchors/

+ RUN for ca_url in ${CA_URLS}; do curl -skO ${ca_url}; done && \

+     update-ca-trust

+ 

+ # Setup the user for non-arbitrary UIDs with OpenShift

+ # https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines

+ RUN useradd -d ${HOME_DIR} -u ${UID} -g 0 -m -s /bin/bash ${USER} && \

+     chmod -R g+rwx ${HOME_DIR}

+ 

+ # Make /etc/passwd writable for root group

+ # so we can add dynamic user to the system in entrypoint script

+ RUN chmod g+rw /etc/passwd

+ 

+ # Retrieve jenkins slave client

+ RUN curl --create-dirs -sSLo /usr/share/jenkins/slave.jar \

+     https://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/${SLAVE_VERSION}/remoting-${SLAVE_VERSION}.jar && \

+     chmod 755 /usr/share/jenkins && \

+     chmod 644 /usr/share/jenkins/slave.jar

+ 

+ # Entry point script to run jenkins slave client

+ COPY jenkins-slave /usr/local/bin/jenkins-slave

+ RUN chmod 755 /usr/local/bin/jenkins-slave

+ 

+ # install tini, a tiny but valid init for containers

+ # install wait-for-it.sh, to allow containers to wait for other services to come up

+ RUN curl -L -o /usr/local/bin/tini "https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini" \

+     && chmod +rx /usr/local/bin/tini \

+     && curl -L -o /usr/local/bin/wait-for-it "https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh" \

+     && chmod +rx /usr/local/bin/tini /usr/local/bin/wait-for-it \

+     && ${DNF_CMD} clean all

+ 

+ # For OpenShift we MUST use the UID of the user and not the name.

+ USER ${UID}

+ WORKDIR ${HOME_DIR}

+ ENTRYPOINT ["/usr/local/bin/tini", "--", "jenkins-slave"]

@@ -0,0 +1,116 @@ 

+ #!/usr/bin/env bash

+ 

+ # The MIT License

+ #

+ #  Copyright (c) 2015, CloudBees, Inc.

+ #

+ #  Permission is hereby granted, free of charge, to any person obtaining a copy

+ #  of this software and associated documentation files (the "Software"), to deal

+ #  in the Software without restriction, including without limitation the rights

+ #  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell

+ #  copies of the Software, and to permit persons to whom the Software is

+ #  furnished to do so, subject to the following conditions:

+ #

+ #  The above copyright notice and this permission notice shall be included in

+ #  all copies or substantial portions of the Software.

+ #

+ #  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR

+ #  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,

+ #  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE

+ #  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER

+ #  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,

+ #  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN

+ #  THE SOFTWARE.

+ 

+ # Usage jenkins-slave.sh [options] -url http://jenkins [SECRET] [AGENT_NAME]

+ # Optional environment variables :

+ # * JENKINS_TUNNEL : HOST:PORT for a tunnel to route TCP traffic to jenkins host, when jenkins can't be directly accessed over network

+ # * JENKINS_URL : alternate jenkins URL

+ # * JENKINS_SECRET : agent secret, if not set as an argument

+ # * JENKINS_AGENT_NAME : agent name, if not set as an argument

+ # * JENKINS_JAR_CACHE : directory for cached jar files

+ #

+ # Credentials are also supported for authentication to jenkins. If desired,

+ # create the directory /etc/jenkins/credentials with "username" and "password"

+ # files within.
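
+ #

+ # Example invocation (illustrative only; the URL, secret, and agent name are placeholders):

+ #   JENKINS_URL=https://jenkins.example.com JENKINS_SECRET=<secret> \

+ #       JENKINS_AGENT_NAME=my-agent jenkins-slave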

+ #

+ # This script was originally adapted from:

+ # https://github.com/jenkinsci/docker-jnlp-slave/blob/master/jenkins-slave

+ 

+ # Dynamically create a passwd file for non-arbitrary UIDs.

+ # Taken from: https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines

+ # Adjusted using: https://github.com/openshift/jenkins/commit/20a511b8ccf71a8ebd80519440403e530ccb6337

+ export USER_ID=$(id -u)

+ export GROUP_ID=$(id -g)

+ 

+ # Skip for root user

+ if [ x"$USER_ID" != x"0" ]; then

+     export NSS_WRAPPER_PASSWD=/tmp/passwd

+     export NSS_WRAPPER_GROUP=/etc/group

+     export LD_PRELOAD=/usr/lib64/libnss_wrapper.so

+ 

+     cp /etc/passwd $NSS_WRAPPER_PASSWD

+     echo "jenkins:x:${USER_ID}:${GROUP_ID}:jenkins:${HOME}:/bin/bash" >> $NSS_WRAPPER_PASSWD

+ fi

+ 

+ if [ $# -eq 1 ]; then

+ 

+     # if `docker run` only has one argument, we assume the user is running an alternate command like `bash` to inspect the image

+     exec "$@"

+ 

+ else

+ 

+     # if -tunnel is not provided try env vars

+     if [[ "$@" != *"-tunnel "* ]]; then

+         if [ ! -z "$JENKINS_TUNNEL" ]; then

+             TUNNEL="-tunnel $JENKINS_TUNNEL"

+         fi

+     fi

+ 

+     if [ -n "$JENKINS_URL" ]; then

+         URL="-url $JENKINS_URL"

+     fi

+ 

+     if [ -n "$JENKINS_NAME" ]; then

+         JENKINS_AGENT_NAME="$JENKINS_NAME"

+     fi

+ 

+     if [ -n "$JENKINS_JAR_CACHE" ]; then

+         JAR_CACHE="-jar-cache $JENKINS_JAR_CACHE"

+     fi

+ 

+     if [ -d "/etc/jenkins/credentials" ]; then

+         USERNAME="$(cat /etc/jenkins/credentials/username)"

+         PASSWORD="$(cat /etc/jenkins/credentials/password)"

+         CREDENTIALS="-credentials ${USERNAME}:${PASSWORD}"

+     fi

+ 

+     if [ -z "$JNLP_PROTOCOL_OPTS" ]; then

+         echo "Warning: JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior"

+         JNLP_PROTOCOL_OPTS="-Dorg.jenkinsci.remoting.engine.JnlpProtocol3.disabled=true"

+     fi

+ 

+     # If both required options are defined, do not pass the parameters

+     OPT_JENKINS_SECRET=""

+     if [ -n "$JENKINS_SECRET" ]; then

+         if [[ "$@" != *"${JENKINS_SECRET}"* ]]; then

+             OPT_JENKINS_SECRET="${JENKINS_SECRET}"

+         else

+             echo "Warning: SECRET is defined twice in command-line arguments and the environment variable"

+         fi

+     fi

+ 

+     OPT_JENKINS_AGENT_NAME=""

+     if [ -n "$JENKINS_AGENT_NAME" ]; then

+         if [[ "$@" != *"${JENKINS_AGENT_NAME}"* ]]; then

+             OPT_JENKINS_AGENT_NAME="${JENKINS_AGENT_NAME}"

+         else

+             echo "Warning: AGENT_NAME is defined twice in command-line arguments and the environment variable"

+         fi

+     fi

+ 

+     # TODO: Handle the case when the command-line arguments and environment variables contain different values.

+     # It is fine if it blows up for now, since it should lead to an error anyway.

+ 

+     exec java $JAVA_OPTS $JNLP_PROTOCOL_OPTS -cp /usr/share/jenkins/slave.jar hudson.remoting.jnlp.Main -headless $CREDENTIALS $JAR_CACHE $TUNNEL $URL $OPT_JENKINS_SECRET $OPT_JENKINS_AGENT_NAME "$@"

+ fi

@@ -0,0 +1,48 @@ 

+ OC:=oc

+ OCFLAGS:=

+ JOBS_DIR:=jobs

+ TEMPLATES_DIR:=templates

+ JOB_PARAM_FILES:=$(wildcard $(JOBS_DIR)/*.env)

+ JOBS:=$(patsubst $(JOBS_DIR)/%.env,%,$(JOB_PARAM_FILES))

+ 

+ OC_CMD=$(OC) $(OCFLAGS)

+ 

+ help:

+ 	@echo TARGETS

+ 	@echo -e "\tinstall\t\tInstall or update pipelines to OpenShift"

+ 	@echo -e "\tuninstall\tDelete installed pipelines from OpenShift"

+ 	@echo

+ 	@echo VARIABLES

+ 	@echo -e "\tJOBS\t\tSpace-separated list of pipeline jobs to install"

+ 	@echo -e "\tJOBS_DIR\tDirectory in which to look for pipeline job definitions"

+ 	@echo -e "\tTEMPLATES_DIR\tDirectory in which to look for pipeline job templates"

+ 	@echo -e "\tOC\t\tUse this oc command"

+ 	@echo -e "\tOCFLAGS\t\tOptions to append to the oc command arguments"

+ install:

+ 	@for job in $(JOBS); do \

+ 		echo "[PIPELINE] Updating pipeline job \"$${job}\"..." ; \

+ 	  template_file=$$(cat ./$(JOBS_DIR)/$${job}.tmpl); \

+ 		$(OC_CMD) process --local -f ./$(TEMPLATES_DIR)/$${template_file} \

+ 			--param-file ./$(JOBS_DIR)/$${job}.env | $(OC_CMD) apply -f -; \

+ 		echo "[PIPELINE] Pipeline job \"$${job}\" updated" ; \

+ 	done

+ uninstall:

+ 	@for job in $(JOBS); do \

+ 	  template_file=$$(cat ./$(JOBS_DIR)/$${job}.tmpl); \

+ 		template_name=$${template_file%.y?ml}; \

+ 		template_name=$${template_name%-template}; \

+ 		echo "[PIPELINE] Deleting pipeline job \"$${job}\"..." ; \

+ 		$(OC_CMD) delete all -l template="$$template_name" -l app="$$job" ;\

+ 		echo "[PIPELINE] Pipeline job \"$${job}\" deleted" ; \

+ 	done

+ create-jenkins-is:

+ 	$(OC_CMD) import-image jenkins:2 --confirm --scheduled=true \

+ 		--from=registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11

+ install-jenkins: create-jenkins-is

+ 	$(OC_CMD) new-app --template=jenkins-persistent \

+ 		-p MEMORY_LIMIT=2Gi \

+ 		-p VOLUME_CAPACITY=10Gi \

+ 		-p NAMESPACE=$(shell $(OC_CMD) project -q) \

+ 		-e INSTALL_PLUGINS=script-security:1.46,permissive-script-security:0.3,timestamper:1.9,http_request:1.8.22,ownership:0.12.1,antisamy-markup-formatter:1.5 \

+ 		-e JENKINS_JAVA_OVERRIDES="-Dpermissive-script-security.enabled=no_security"

+ .PHONY: help install uninstall create-jenkins-is install-jenkins

@@ -0,0 +1,5 @@ 

+ EXTRA_REPOS=https://copr.fedorainfracloud.org/coprs/mikeb/mbs-messaging-umb/repo/fedora-29/mikeb-mbs-messaging-umb-fedora-29.repo

+ EXTRA_RPMS=mbs-messaging-umb

+ # Remove before submitting PR

+ MBS_GIT_REPO=https://pagure.io/forks/mikeb/fm-orchestrator.git

+ MBS_GIT_REF=test-master

@@ -0,0 +1,1 @@ 

+ mbs-koji-int-dev-template.yaml

@@ -0,0 +1,3 @@ 

+ # Remove before submitting PR

+ MBS_GIT_REPO=https://pagure.io/forks/mikeb/fm-orchestrator.git

+ MBS_GIT_REF=test-master

@@ -0,0 +1,1 @@ 

+ mbs-koji-int-jenkins-slave-template.yaml

@@ -0,0 +1,3 @@ 

+ # Remove before submitting PR

+ MBS_GIT_REPO=https://pagure.io/forks/mikeb/fm-orchestrator.git

+ MBS_GIT_REF=test-master

@@ -0,0 +1,1 @@ 

+ mbs-koji-int-test-template.yaml

@@ -0,0 +1,7 @@ 

+ NAME=mbs-polling-master

+ MAIL_ENABLED=true

+ # Remove before submitting PR

+ PAGURE_REPO_NAME=mikeb/fm-orchestrator

+ PAGURE_REPO_IS_FORK=true

+ PAGURE_POLLED_BRANCH=test-master

+ MAIL_ENABLED=test-master

@@ -0,0 +1,1 @@ 

+ mbs-polling-pagure.yaml

@@ -0,0 +1,6 @@ 

+ NAME=mbs-polling-prs

+ PAGURE_POLLING_FOR_PR=true

+ # Remove before submitting PR

+ PAGURE_REPO_NAME=mikeb/fm-orchestrator

+ PAGURE_REPO_IS_FORK=true

+ MAIL_ENABLED=true

@@ -0,0 +1,1 @@ 

+ mbs-polling-pagure.yaml

@@ -0,0 +1,205 @@ 

+ # Template to produce a new MBS-Koji CI pipeline in OpenShift.

+ #

+ # Dev pipeline is a part of the MBS Pipeline, covers the following steps:

+ #

+ # - Run Flake8 and Bandit checks

+ # - Run unit tests

+ # - Build SRPM

+ # - Build RPM

+ # - Invoke Rpmlint

+ # - Build container

+ # - Run integration tests against the latest Koji images

+ # - Push container

+ #

+ # Required Jenkins Plugins:

+ # - OpenShift Sync plugin

+ # - OpenShift Client plugin

+ # - Kubernetes plugin

+ # - SSH Agent plugin

+ # - Timestamper plugin

+ # - HTTP Request plugin

+ #
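
+ #

+ # A minimal sketch of installing this pipeline by hand (the template path and NAME here are

+ # assumptions following the pattern used elsewhere in this PR; the Makefile added under

+ # pipelines/ automates the same step via `make install`):

+ #   oc process --local -f openshift/integration/koji/pipelines/templates/mbs-koji-int-dev-template.yaml \

+ #     -p NAME=mbs-koji-int-dev | oc apply -f -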

+ ---

+ apiVersion: v1

+ kind: Template

+ metadata:

+   name: mbs-koji-int-dev

+ labels:

+   template: mbs-koji-int-dev

+ parameters:

+ - name: NAME

+   displayName: Short unique identifier for the templated instances

+   description: This field is used to deploy multiple pipelines to one OpenShift project from this template.

+   required: true

+   value: mbs-koji-int-dev

+ - name: MBS_GIT_REPO

+   displayName: MBS Git repo URL

+   description: Default MBS Git repo URL in which to find the integration tests to run

+   required: true

+   value: https://pagure.io/fm-orchestrator.git

+ - name: MBS_GIT_REF

+   displayName: MBS Git repo ref

+   description: Default MBS Git repo ref in which to find the integration tests to run

+   required: true

+   value: master

+ - name: MBS_MAIN_BRANCH

+   displayName: Name of the main branch.

+   description: If MBS_MAIN_BRANCH equals MBS_GIT_REF, publishing steps will be automatically performed.

+   value: master

+   required: true

+ - name: JENKINS_AGENT_IMAGE

+   displayName: Container image for Jenkins slave pods

+   required: true

+   value: quay.io/factory2/mbs-koji-int-jenkins-slave:latest

+ - name: CONTAINER_REGISTRY_CREDENTIALS

+   displayName: Secret name of container registries used for pulling and pushing images

+   value: factory2-pipeline-registry-credentials

+   required: false

+ - name: JENKINS_AGENT_CLOUD_NAME

+   displayName: Name of OpenShift cloud in Jenkins master configuration

+   required: true

+   value: openshift

+ - name: MBS_BACKEND_DEV_IMAGE_DESTINATIONS

+   displayName: Comma-separated list of container repository/tag to which the built MBS backend dev image will be pushed

+   description: OpenShift registries must be prefixed with 'atomic:'

+   required: false

+   value: quay.io/factory2/mbs-backend:latest

+ - name: MBS_FRONTEND_DEV_IMAGE_DESTINATIONS

+   displayName: Comma-separated list of container repository/tag to which the built MBS frontend dev image will be pushed

+   description: OpenShift registries must be prefixed with 'atomic:'

+   required: false

+   value: quay.io/factory2/mbs-frontend:latest

+ - name: MBS_DEV_IMAGE_TAG

+   displayName: Tag name of the resulting container image for development environment

+   value: "latest"

+   required: true

+ - name: MBS_BACKEND_IMAGESTREAM_NAME

+   displayName: Name of ImageStream for MBS backend images

+   required: true

+   value: mbs-backend

+ - name: MBS_BACKEND_IMAGESTREAM_NAMESPACE

+   displayName: Namespace of ImageStream for MBS backend images

+   required: false

+ - name: MBS_FRONTEND_IMAGESTREAM_NAME

+   displayName: Name of ImageStream for MBS frontend images

+   required: true

+   value: mbs-frontend

+ - name: MBS_FRONTEND_IMAGESTREAM_NAMESPACE

+   displayName: Namespace of ImageStream for MBS frontend images

+   required: false

+ - name: MBS_INTEGRATION_TEST_BUILD_CONFIG_NAME

+   displayName: Name of BuildConfig for running integration tests

+   required: true

+   value: mbs-koji-int-test

+ - name: FORCE_PUBLISH_IMAGE

+   displayName: Whether to push the resulting image regardless of the Git branch

+   value: "false"

+   required: true

+ - name: TAG_INTO_IMAGESTREAM

+   displayName: Whether to tag the pushed image as dev

+   value: "true"

+   required: true

+ - name: MBS_SPEC_FILE

+   displayName: URL to the rpm specfile for the module-build-service

+   required: true

+   value: https://src.fedoraproject.org/rpms/module-build-service/raw/master/f/module-build-service.spec

+ - name: EXTRA_REPOS

+   displayName: Space-separated list of URLs to .repo files to install in the images

+   required: false

+   value: ""

+ - name: EXTRA_RPMS

+   displayName: Space-separated list of rpm names to install in the images

+   required: false

+   value: ""

+ - name: TESTCASES

+   displayName: >-

+     Space-separated list of testcases to run as part of the pipeline. An empty string (the default)

+     causes all available testcases to run. The value "skip" causes no testcases to be run.

+   required: false

+   value: ""

+ - name: CLEANUP

+   displayName: Cleanup objects after the pipeline is complete

+   required: true

+   value: "true"

+ objects:

+ - kind: ServiceAccount

+   apiVersion: v1

+   metadata:

+     name: "${NAME}-jenkins-slave"

+     labels:

+       app: "${NAME}"

+ - kind: RoleBinding

+   apiVersion: v1

+   metadata:

+     name: "${NAME}-jenkins-slave_edit"

+     labels:

+       app: "${NAME}"

+   subjects:

+   - kind: ServiceAccount

+     name: "${NAME}-jenkins-slave"

+   roleRef:

+     name: edit

+ - kind: BuildConfig

+   apiVersion: v1

+   metadata:

+     name: "${NAME}"

+     labels:

+       app: "${NAME}"

+   spec:

+     runPolicy: Serial

+     completionDeadlineSeconds: 3600

+     source:

+       git:

+         uri: "${MBS_GIT_REPO}"

+         ref: "${MBS_GIT_REF}"

+     strategy:

+       type: JenkinsPipeline

+       jenkinsPipelineStrategy:

+         env:

+         - name: MBS_GIT_REPO

+           value: "${MBS_GIT_REPO}"

+         - name: MBS_GIT_REF

+           value: "${MBS_GIT_REF}"

+         - name: CONTAINER_REGISTRY_CREDENTIALS

+           value: "${CONTAINER_REGISTRY_CREDENTIALS}"

+         - name: JENKINS_AGENT_IMAGE

+           value: "${JENKINS_AGENT_IMAGE}"

+         - name: JENKINS_AGENT_CLOUD_NAME

+           value: "${JENKINS_AGENT_CLOUD_NAME}"

+         - name: JENKINS_AGENT_SERVICE_ACCOUNT

+           value: "${NAME}-jenkins-slave"

+         - name: MBS_BACKEND_DEV_IMAGE_DESTINATIONS

+           value: "${MBS_BACKEND_DEV_IMAGE_DESTINATIONS}"

+         - name: MBS_FRONTEND_DEV_IMAGE_DESTINATIONS

+           value: "${MBS_FRONTEND_DEV_IMAGE_DESTINATIONS}"

+         - name: FORCE_PUBLISH_IMAGE

+           value: "${FORCE_PUBLISH_IMAGE}"

+         - name: TAG_INTO_IMAGESTREAM

+           value: "${TAG_INTO_IMAGESTREAM}"

+         - name: MBS_DEV_IMAGE_TAG

+           value: "${MBS_DEV_IMAGE_TAG}"

+         - name: MBS_BACKEND_IMAGESTREAM_NAME

+           value: "${MBS_BACKEND_IMAGESTREAM_NAME}"

+         - name: MBS_BACKEND_IMAGESTREAM_NAMESPACE

+           value: "${MBS_BACKEND_IMAGESTREAM_NAMESPACE}"

+         - name: MBS_FRONTEND_IMAGESTREAM_NAME

+           value: "${MBS_FRONTEND_IMAGESTREAM_NAME}"

+         - name: MBS_FRONTEND_IMAGESTREAM_NAMESPACE

+           value: "${MBS_FRONTEND_IMAGESTREAM_NAMESPACE}"

+         - name: MBS_MAIN_BRANCH

+           value: "${MBS_MAIN_BRANCH}"

+         - name: MBS_INTEGRATION_TEST_BUILD_CONFIG_NAME

+           value: "${MBS_INTEGRATION_TEST_BUILD_CONFIG_NAME}"

+         - name: MBS_SPEC_FILE

+           value: "${MBS_SPEC_FILE}"

+         - name: EXTRA_REPOS

+           value: "${EXTRA_REPOS}"

+         - name: EXTRA_RPMS

+           value: "${EXTRA_RPMS}"

+         - name: TESTCASES

+           value: "${TESTCASES}"

+         - name: CLEANUP

+           value: "${CLEANUP}"

+         - name: BUILD_DISPLAY_RENAME_TO

+           value: ""

+         jenkinsfilePath: openshift/integration/koji/pipelines/templates/mbs-koji-int-dev.Jenkinsfile

@@ -0,0 +1,340 @@ 

+ library identifier: "c3i@master",

+         retriever: modernSCM([$class: "GitSCMSource",

+                               remote: "https://pagure.io/c3i-library.git"])

+ 

+ pipeline {

+   agent {

+     kubernetes {

+       cloud params.JENKINS_AGENT_CLOUD_NAME

+       label "jenkins-slave-${UUID.randomUUID().toString()}"

+       serviceAccount params.JENKINS_AGENT_SERVICE_ACCOUNT

+       defaultContainer "jnlp"

+       yaml """

+       apiVersion: v1

+       kind: Pod

+       metadata:

+         labels:

+           app: "jenkins-${env.JOB_BASE_NAME}"

+           factory2-pipeline-kind: mbs-koji-int-dev-pipeline

+           factory2-pipeline-build-number: "${env.BUILD_NUMBER}"

+       spec:

+         containers:

+         - name: jnlp

+           image: "${params.JENKINS_AGENT_IMAGE}"

+           imagePullPolicy: Always

+           tty: true

+           env:

+           - name: REGISTRY_CREDENTIALS

+             valueFrom:

+               secretKeyRef:

+                 name: "${params.CONTAINER_REGISTRY_CREDENTIALS}"

+                 key: ".dockerconfigjson"

+           resources:

+             requests:

+               memory: 384Mi

+               cpu: 200m

+             limits:

+               memory: 768Mi

+               cpu: 300m

+       """

+     }

+   }

+   options {

+     timestamps()

+     timeout(time: 120, unit: "MINUTES")

+     buildDiscarder(logRotator(numToKeepStr: "10"))

+   }

+   environment {

+     // Jenkins BUILD_TAG could be too long (> 63 characters) for OpenShift to consume

+     TEST_ID = "${params.TEST_ID ?: 'jenkins-' + currentBuild.id}"

+     ENVIRONMENT_LABEL = "test-${env.TEST_ID}"

+     PIPELINE_NAMESPACE = readFile("/run/secrets/kubernetes.io/serviceaccount/namespace").trim()

+   }

+   stages {

+     stage("Prepare") {

+       steps {

+         script {

+           def scmVars = checkout([$class: "GitSCM",

+             branches: [[name: params.MBS_GIT_REF]],

+             userRemoteConfigs: [[url: params.MBS_GIT_REPO, refspec: "+refs/heads/*:refs/remotes/origin/* +refs/pull/*/head:refs/remotes/origin/pull/*/head"]],

+           ])

+           env.GIT_BRANCH = scmVars.GIT_BRANCH

+           env.GIT_COMMIT = scmVars.GIT_COMMIT

+           env.GIT_COMMIT_SHORT = env.GIT_COMMIT.substring(0, 7)

+           env.MBS_VERSION = sh(script: """grep -m 1 -P -o "(?<=version=')[^']+" setup.py""", returnStdout: true).trim()

+           env.BUILD_SUFFIX = ".jenkins${currentBuild.id}.git${env.GIT_COMMIT_SHORT}"

+           env.MBS_TEMP_TAG = "${env.MBS_VERSION}${env.BUILD_SUFFIX}"

+           echo "Building branch=${env.GIT_BRANCH}, commit=${env.GIT_COMMIT}"

+           if (params.BUILD_DISPLAY_RENAME_TO) {

+             currentBuild.displayName = params.BUILD_DISPLAY_RENAME_TO

+           } else {

+             currentBuild.displayName = "Building branch=${env.GIT_BRANCH}, commit=${env.GIT_COMMIT_SHORT}"

+           }

+           def resp = httpRequest params.MBS_SPEC_FILE

+           env.SPEC_FILE_NAME = params.MBS_SPEC_FILE.split("/").last()

+           writeFile file: env.SPEC_FILE_NAME, text: resp.content

+           sh """

+             sed -i \

+                 -e 's/Version:.*/Version:        ${env.MBS_VERSION}/' \

+                 -e 's/%{?dist}/${env.BUILD_SUFFIX}%{?dist}/' \

+                 ${env.SPEC_FILE_NAME}

+ 

+           """

+           sh "mkdir repos"

+           for (repo in params.EXTRA_REPOS.split()) {

+             resp = httpRequest repo

+             writeFile file: "repos/${repo.split("/").last()}", text: resp.content

+           }

+         }

+         sh """

+           sed -i \

+               -e '/enum34/d' \

+               -e '/funcsigs/d' \

+               -e '/futures/d' \

+               -e '/koji/d' \

+               requirements.txt

+           sed -i \

+               -e 's/py.test/py.test-3/g' \

+               -e '/basepython/d' \

+               -e '/sitepackages/a setenv = PYTHONPATH={toxinidir}' \

+               tox.ini

+         """

+       }

+     }

+     stage("Build backend image") {

+       environment {

+         BACKEND_BUILDCONFIG_ID = "mbs-backend-build-${currentBuild.id}"

+         FRONTEND_BUILDCONFIG_ID = "mbs-frontend-build-${currentBuild.id}"

+       }

+       steps {

+         script {

+           openshift.withCluster() {

+             // OpenShift BuildConfig doesn't support specifying a tag name at build time.

+             // We have to create a new BuildConfig for each image build.

+             echo "Creating a BuildConfig for mbs-backend build..."

+             def template = readYaml file: "openshift/integration/koji/containers/backend/mbs-backend-build-template.yaml"

+             def processed = openshift.process(template,

+               "-p", "NAME=${env.BACKEND_BUILDCONFIG_ID}",

+               "-p", "MBS_GIT_REPO=${params.MBS_GIT_REPO}",

+               "-p", "MBS_GIT_REF=${params.MBS_GIT_REF}",

+               "-p", "MBS_BACKEND_IMAGESTREAM_NAME=${params.MBS_BACKEND_IMAGESTREAM_NAME}",

+               "-p", "MBS_BACKEND_IMAGESTREAM_NAMESPACE=${env.PIPELINE_NAMESPACE}",

+               "-p", "MBS_IMAGE_TAG=${env.MBS_TEMP_TAG}",

+               "-p", "EXTRA_RPMS=${params.EXTRA_RPMS}"

+             )

+             def buildname = c3i.buildAndWait(processed, "--from-dir=.")

+             def build = openshift.selector(buildname)

+             def ocpBuild = build.object()

+             env.BACKEND_IMAGE_REF = ocpBuild.status.outputDockerImageReference

+             env.BACKEND_IMAGE_DIGEST = ocpBuild.status.output.to.imageDigest

+             env.BACKEND_IMAGE_TAG = env.MBS_TEMP_TAG

+             echo "Built image ${env.BACKEND_IMAGE_REF}, digest: ${env.BACKEND_IMAGE_DIGEST}"

+           }

+         }

+       }

+       post {

+         cleanup {

+           script {

+             openshift.withCluster() {

+               echo "Tearing down..."

+               openshift.selector("bc", [

+                 "app": env.BACKEND_BUILDCONFIG_ID,

+                 "template": "mbs-backend-build-template",

+                 ]).delete()

+             }

+           }

+         }

+       }

+     }

+     stage("Build frontend image") {

+       environment {

+         FRONTEND_BUILDCONFIG_ID = "mbs-frontend-build-${currentBuild.id}"

+       }

+       steps {

+         script {

+           openshift.withCluster() {

+             // OpenShift BuildConfig doesn't support specifying a tag name at build time.

+             // We have to create a new BuildConfig for each container build.

+             // Create the BuildConfig from a separate template.

+             echo "Creating a BuildConfig for mbs-frontend build..."

+             def template = readYaml file: "openshift/integration/koji/containers/frontend/mbs-frontend-build-template.yaml"

+             def processed = openshift.process(template,

+               "-p", "NAME=${env.FRONTEND_BUILDCONFIG_ID}",

+               "-p", "MBS_GIT_REPO=${params.MBS_GIT_REPO}",

+               "-p", "MBS_GIT_REF=${params.MBS_GIT_REF}",

+               "-p", "MBS_FRONTEND_IMAGESTREAM_NAME=${params.MBS_FRONTEND_IMAGESTREAM_NAME}",

+               "-p", "MBS_FRONTEND_IMAGESTREAM_NAMESPACE=${env.PIPELINE_NAMESPACE}",

+               "-p", "MBS_IMAGE_TAG=${env.MBS_TEMP_TAG}",

+               "-p", "MBS_BACKEND_IMAGESTREAM_NAME=${params.MBS_BACKEND_IMAGESTREAM_NAME}",

+               "-p", "MBS_BACKEND_IMAGESTREAM_NAMESPACE=${env.PIPELINE_NAMESPACE}"

+             )

+             def buildname = c3i.buildAndWait(processed)

+             def build = openshift.selector(buildname)

+             def ocpBuild = build.object()

+             env.FRONTEND_IMAGE_REF = ocpBuild.status.outputDockerImageReference

+             env.FRONTEND_IMAGE_DIGEST = ocpBuild.status.output.to.imageDigest

+             env.FRONTEND_IMAGE_TAG = env.MBS_TEMP_TAG

+             echo "Built image ${env.FRONTEND_IMAGE_REF}, digest: ${env.FRONTEND_IMAGE_DIGEST}"

+           }

+         }

+       }

+       post {

+         cleanup {

+           script {

+             openshift.withCluster() {

+               echo "Tearing down..."

+               openshift.selector("bc", [

+                 "app": env.FRONTEND_BUILDCONFIG_ID,

+                 "template": "mbs-frontend-build-template",

+                 ]).delete()

+             }

+           }

+         }

+       }

+     }

+     stage("Run integration tests") {

+       steps {

+         script {

+           openshift.withCluster() {

+             def testcases

+             if (params.TESTCASES) {

+               if (params.TESTCASES == "skip") {

+                 testcases = []

+                 echo "Skipping integration tests"

+               } else {

+                 testcases = params.TESTCASES.split()

+                 echo "Using specified list of testcases: ${testcases}"

+               }

+             } else {

+               testcases = findFiles(glob: "openshift/integration/koji/pipelines/tests/*.groovy").collect {

+                 it.name.minus(".groovy")

+               }

+               echo "Using all available testcases: ${testcases}"

+             }

+             def testbc = "bc/${params.MBS_INTEGRATION_TEST_BUILD_CONFIG_NAME}"

+             testcases.each { testcase ->

+               echo """Starting integration test "${testcase}" for the built images..."""

+               def build = c3i.buildAndWait(testbc,

+                   "-e", "MBS_BACKEND_IMAGE=${env.BACKEND_IMAGE_REF}",

+                   "-e", "MBS_FRONTEND_IMAGE=${env.FRONTEND_IMAGE_REF}",

+                   "-e", "MBS_GIT_REPO=${params.MBS_GIT_REPO}",

+                   "-e", "MBS_GIT_REF=${params.MBS_GIT_REF}",

+                   "-e", "TESTCASE=${testcase}",

+                   "-e", "CLEANUP=${params.CLEANUP}",

+                   "-e", "BUILD_DISPLAY_RENAME_TO='${currentBuild.displayName}'"

+               )

+               echo """Integration test "${testcase}" PASSED"""

+             }

+             echo "All integration tests PASSED"

+           }

+         }

+       }

+     }

+     stage("Tag into ImageStreams") {

+       when {

+         expression {

+           return "${params.MBS_DEV_IMAGE_TAG}" && params.TAG_INTO_IMAGESTREAM == "true" &&

+             (params.FORCE_PUBLISH_IMAGE == "true" || params.MBS_GIT_REF == params.MBS_MAIN_BRANCH)

+         }

+       }

+       steps {

+         script {

+           openshift.withCluster() {

+             openshift.withProject(params.MBS_BACKEND_IMAGESTREAM_NAMESPACE) {

+               def sourceRef = "${params.MBS_BACKEND_IMAGESTREAM_NAME}:${env.BACKEND_IMAGE_TAG}"

+               def destRef = "${params.MBS_BACKEND_IMAGESTREAM_NAME}:${params.MBS_DEV_IMAGE_TAG}"

+               echo "Tagging ${sourceRef} as ${destRef}..."

+               openshift.tag(sourceRef, destRef)

+             }

+             openshift.withProject(params.MBS_FRONTEND_IMAGESTREAM_NAMESPACE) {

+               def sourceRef = "${params.MBS_FRONTEND_IMAGESTREAM_NAME}:${env.FRONTEND_IMAGE_TAG}"

+               def destRef = "${params.MBS_FRONTEND_IMAGESTREAM_NAME}:${params.MBS_DEV_IMAGE_TAG}"

+               echo "Tagging ${sourceRef} as ${destRef}..."

+               openshift.tag(sourceRef, destRef)

+             }

+           }

+         }

+       }

+     }

+     stage("Push images") {

+       when {

+         expression {

+           return params.FORCE_PUBLISH_IMAGE == "true" || params.MBS_GIT_REF == params.MBS_MAIN_BRANCH

+         }

+       }

+       steps {

+         script {

+           if (env.REGISTRY_CREDENTIALS) {

+              dir ("${env.HOME}/.docker") {

+                   writeFile file:"config.json", text: env.REGISTRY_CREDENTIALS

+              }

+           }

+           def registryToken = readFile(file: "/var/run/secrets/kubernetes.io/serviceaccount/token")

+           def copyDown = { name, src ->

+             src = "docker://${src}"

+             echo "Pulling ${name} from ${src}..."

+             withEnv(["SOURCE_IMAGE_REF=${src}", "TOKEN=${registryToken}"]) {

+               sh """

+                 set -e +x # hide the token from Jenkins console

+                 mkdir -p _images

+                 skopeo copy \

+                   --src-cert-dir=/var/run/secrets/kubernetes.io/serviceaccount/ \

+                   --src-creds=serviceaccount:"$TOKEN" \

+                   "$SOURCE_IMAGE_REF" dir:_images/${name}

+               """

+             }

+           }

+           def pullJobs = [

+             "Pulling backend"  : { copyDown("backend", env.BACKEND_IMAGE_REF) },

+             "Pulling frontend" : { copyDown("frontend", env.FRONTEND_IMAGE_REF) }

+           ]

+           parallel pullJobs

+           def copyUp = { name, dest ->

+             if (!dest.startsWith("atomic:") && !dest.startsWith("docker://")) {

+               dest = "docker://${dest}"

+             }

+             echo "Pushing ${name} to ${dest}..."

+             withEnv(["DEST_IMAGE_REF=${dest}"]) {

+               sh """

+                 skopeo copy dir:_images/${name} "$DEST_IMAGE_REF"

+               """

+             }

+           }

+           def backendDests = params.MBS_BACKEND_DEV_IMAGE_DESTINATIONS ?

+                              params.MBS_BACKEND_DEV_IMAGE_DESTINATIONS.split(",") : []

+           def frontendDests = params.MBS_FRONTEND_DEV_IMAGE_DESTINATIONS ?

+                               params.MBS_FRONTEND_DEV_IMAGE_DESTINATIONS.split(",") : []

+           def pushJobs = backendDests.collectEntries {

+             [ "Pushing backend to ${it}"  : { copyUp("backend", it) } ]

+           }

+           pushJobs << frontendDests.collectEntries {

+             [ "Pushing frontend to ${it}" : { copyUp("frontend", it) } ]

+           }

+           parallel pushJobs

+         }

+       }

+     }

+   }

+   post {

+     cleanup {

+       script {

+         if (params.CLEANUP == "true") {

+           openshift.withCluster() {

+             if (env.BACKEND_IMAGE_TAG) {

+               echo "Removing tag ${env.BACKEND_IMAGE_TAG} from the ${params.MBS_BACKEND_IMAGESTREAM_NAME} ImageStream..."

+               openshift.withProject(params.MBS_BACKEND_IMAGESTREAM_NAMESPACE) {

+                 openshift.tag("${params.MBS_BACKEND_IMAGESTREAM_NAME}:${env.BACKEND_IMAGE_TAG}", "-d")

+               }

+             }

+             if (env.FRONTEND_IMAGE_TAG) {

+               echo "Removing tag ${env.FRONTEND_IMAGE_TAG} from the ${params.MBS_FRONTEND_IMAGESTREAM_NAME} ImageStream..."

+               openshift.withProject(params.MBS_FRONTEND_IMAGESTREAM_NAMESPACE) {

+                 openshift.tag("${params.MBS_FRONTEND_IMAGESTREAM_NAME}:${env.FRONTEND_IMAGE_TAG}", "-d")

+               }

+             }

+           }

+         }

+       }

+     }

+   }

+ }

@@ -0,0 +1,88 @@ 

+ # Template to build a new Jenkins slave running integration tests

+ ---

+ apiVersion: v1

+ kind: Template

+ metadata:

+   name: mbs-koji-int-jenkins-slave

+ labels:

+   template: mbs-koji-int-jenkins-slave

+ parameters:

+ - name: NAME

+   displayName: Short unique identifier for the templated instances

+   description: This field is used to deploy multiple pipelines to one OpenShift project from this template.

+   required: true

+   value: mbs-koji-int-jenkins-slave

+ - name: MBS_GIT_REPO

+   displayName: MBS Git repo URL

+   description: Default MBS Git repo URL in which to find the integration tests to run

+   required: true

+   value: https://pagure.io/fm-orchestrator.git

+ - name: MBS_GIT_REF

+   displayName: MBS Git repo ref

+   description: Default MBS Git repo ref in which to find the integration tests to run

+   required: true

+   value: master

+ - name: JENKINS_AGENT_IMAGE

+   displayName: Container image for Jenkins slave pods

+   required: true

+   value: quay.io/factory2/mbs-koji-int-jenkins-slave:latest

+ - name: CONTAINER_REGISTRY_CREDENTIALS

+   displayName: Secret name of container registries used for pulling and pushing images

+   value: factory2-pipeline-registry-credentials

+   required: false

+ - name: CA_URLS

+   displayName: Space-separated list of URLs to CA certificates to install in the image

+   required: false

+   value: ""

+ objects:

+ - kind: BuildConfig

+   apiVersion: v1

+   metadata:

+     name: "${NAME}"

+     labels:

+       app: "${NAME}"

+   spec:

+     runPolicy: Serial

+     completionDeadlineSeconds: 1800

+     strategy:

+       dockerStrategy:

+         forcePull: true

+         dockerfilePath: Dockerfile

+         buildArgs:

+         - name: CA_URLS

+           value: "${CA_URLS}"

+     resources:

+       requests:

+         memory: 384Mi

+         cpu: 200m

+       limits:

+        memory: 768Mi

+        cpu: 300m

+     source:

+       contextDir: openshift/integration/koji/containers/jenkins-slave

+       git:

+         uri: "${MBS_GIT_REPO}"

+         ref: "${MBS_GIT_REF}"

+     output:

+       to:

+         kind: DockerImage

+         name: "${JENKINS_AGENT_IMAGE}"

+       pushSecret:

+        name: "${CONTAINER_REGISTRY_CREDENTIALS}"

+ - kind: ServiceAccount

+   apiVersion: v1

+   metadata:

+     name: "${NAME}"

+     labels:

+       app: "${NAME}"

+ - kind: RoleBinding

+   apiVersion: v1

+   metadata:

+     name: "${NAME}-slave_edit"

+     labels:

+       app: "${NAME}"

+   subjects:

+   - kind: ServiceAccount

+     name: "${NAME}"

+   roleRef:

+     name: edit

@@ -0,0 +1,116 @@ 

+ # Template to produce a new OpenShift pipeline for running integration tests

+ ---

+ apiVersion: v1

+ kind: Template

+ metadata:

+   name: mbs-koji-int-test

+ labels:

+   template: mbs-koji-int-test

+ parameters:

+ - name: NAME

+   displayName: Short unique identifier for the templated instances

+   description: This field is used to deploy multiple pipelines to one OpenShift project from this template.

+   required: true

+   value: mbs-koji-int-test

+ - name: MBS_BACKEND_IMAGE

+   displayName: The MBS backend container image to be tested

+   description: This field must be in repo:tag or repo@sha256 format

+   value: quay.io/factory2/mbs-backend:latest

+ - name: MBS_FRONTEND_IMAGE

+   displayName: The MBS frontend container image to be tested

+   description: This field must be in repo:tag or repo@sha256 format

+   value: quay.io/factory2/mbs-frontend:latest

+ - name: KOJI_IMAGE

+   displayName: The Koji container image to be tested

+   description: This field must be in repo:tag or repo@sha256 format

+   value: quay.io/factory2/koji:latest

+ - name: MBS_GIT_REPO

+   displayName: MBS Git repo URL

+   description: Default MBS Git repo URL in which to find the integration tests to run

+   required: true

+   value: "https://pagure.io/fm-orchestrator.git"

+ - name: MBS_GIT_REF

+   displayName: MBS Git repo ref

+   description: Default MBS Git repo ref in which to find the integration tests to run

+   required: true

+   value: master

+ - name: JENKINS_AGENT_IMAGE

+   displayName: Container image for Jenkins slave pods

+   required: true

+   value: quay.io/factory2/mbs-koji-int-jenkins-slave:latest

+ - name: CONTAINER_REGISTRY_CREDENTIALS

+   displayName: Secret name of container registries used for pulling and pushing images

+   value: factory2-pipeline-registry-credentials

+   required: false

+ - name: JENKINS_AGENT_CLOUD_NAME

+   displayName: Name of OpenShift cloud in Jenkins master configuration

+   required: true

+   value: openshift

+ - name: TESTCASE

+   displayName: The name of the testcase to run

+   required: false

+   value: "dummy"

+ - name: CLEANUP

+   displayName: Cleanup objects after testing is complete

+   required: true

+   value: "true"

+ objects:

+ - kind: ServiceAccount

+   apiVersion: v1

+   metadata:

+     name: "${NAME}-jenkins-slave"

+     labels:

+       app: "${NAME}"

+ - kind: RoleBinding

+   apiVersion: v1

+   metadata:

+     name: "${NAME}-jenkins-slave_edit"

+     labels:

+       app: "${NAME}"

+   subjects:

+   - kind: ServiceAccount

+     name: "${NAME}-jenkins-slave"

+   roleRef:

+     name: edit

+ - kind: "BuildConfig"

+   apiVersion: "v1"

+   metadata:

+     name: "${NAME}"

+     labels:

+       app: "${NAME}"

+   spec:

+     runPolicy: Serial

+     completionDeadlineSeconds: 3600

+     source:

+       git:

+         uri: "${MBS_GIT_REPO}"

+         ref: "${MBS_GIT_REF}"

+     strategy:

+       type: JenkinsPipeline

+       jenkinsPipelineStrategy:

+         env:

+         - name: "MBS_GIT_REPO"

+           value: "${MBS_GIT_REPO}"

+         - name: "MBS_GIT_REF"

+           value: "${MBS_GIT_REF}"

+         - name: "MBS_BACKEND_IMAGE"

+           value: "${MBS_BACKEND_IMAGE}"

+         - name: "MBS_FRONTEND_IMAGE"

+           value: "${MBS_FRONTEND_IMAGE}"

+         - name: "KOJI_IMAGE"

+           value: "${KOJI_IMAGE}"

+         - name: "CONTAINER_REGISTRY_CREDENTIALS"

+           value: "${CONTAINER_REGISTRY_CREDENTIALS}"

+         - name: JENKINS_AGENT_IMAGE

+           value: "${JENKINS_AGENT_IMAGE}"

+         - name: JENKINS_AGENT_CLOUD_NAME

+           value: "${JENKINS_AGENT_CLOUD_NAME}"

+         - name: JENKINS_AGENT_SERVICE_ACCOUNT

+           value: "${NAME}-jenkins-slave"

+         - name: TESTCASE

+           value: "${TESTCASE}"

+         - name: CLEANUP

+           value: "${CLEANUP}"

+         - name: BUILD_DISPLAY_RENAME_TO

+           value: ""

+         jenkinsfilePath: openshift/integration/koji/pipelines/templates/mbs-koji-int-test.Jenkinsfile

@@ -0,0 +1,166 @@ 

+ library identifier: "c3i@master",

+         retriever: modernSCM([$class: "GitSCMSource",

+                               remote: "https://pagure.io/c3i-library.git"])

+ 

+ pipeline {

+   agent {

+     kubernetes {

+       cloud params.JENKINS_AGENT_CLOUD_NAME

+       label "jenkins-slave-${UUID.randomUUID().toString()}"

+       serviceAccount params.JENKINS_AGENT_SERVICE_ACCOUNT

+       defaultContainer "jnlp"

+       yaml """

+       apiVersion: v1

+       kind: Pod

+       metadata:

+         labels:

+           app: "jenkins-${env.JOB_BASE_NAME}"

+           factory2-pipeline-kind: mbs-koji-int-test-pipeline

+           factory2-pipeline-build-number: "${env.BUILD_NUMBER}"

+       spec:

+         containers:

+         - name: jnlp

+           image: "${params.JENKINS_AGENT_IMAGE}"

+           imagePullPolicy: Always

+           tty: true

+           env:

+           - name: REGISTRY_CREDENTIALS

+             valueFrom:

+               secretKeyRef:

+                 name: "${params.CONTAINER_REGISTRY_CREDENTIALS}"

+                 key: ".dockerconfigjson"

+           resources:

+             requests:

+               memory: 384Mi

+               cpu: 200m

+             limits:

+               memory: 768Mi

+               cpu: 300m

+       """

+     }

+   }

+   options {

+     timestamps()

+     timeout(time: 60, unit: "MINUTES")

+     buildDiscarder(logRotator(numToKeepStr: "10"))

+   }

+   environment {

+     // Jenkins BUILD_TAG could be too long (> 63 characters) for OpenShift to consume

+     TEST_ID = "${params.TEST_ID ?: 'jenkins-' + currentBuild.id}"

+     ENVIRONMENT_LABEL = "test-${env.TEST_ID}"

+   }

+   stages {

+     stage("Prepare") {

+       steps {

+         script {

+           if (params.BUILD_DISPLAY_RENAME_TO) {

+             currentBuild.displayName = "${params.BUILD_DISPLAY_RENAME_TO}, testcase=${params.TESTCASE}"

+           } else {

+             currentBuild.displayName = "testcase=${params.TESTCASE}"

+           }

+         }

+       }

+     }

+     stage("Call cleanup routine") {

+       steps {

+         script {

+           // Clean up any test environments created more than 1 hour ago, in case previous cleanups failed.

+           c3i.cleanup("umb", "koji", "mbs")

+         }

+       }

+     }

+     stage("Call UMB deployer") {

+       steps {

+         script {

+           def keystore = ca.get_keystore("umb-${TEST_ID}-broker", "mbskeys")

+           def truststore = ca.get_truststore("mbstrust")

+           umb.deploy(env.TEST_ID, keystore, "mbskeys", truststore, "mbstrust")

+         }

+       }

+     }

+     stage("Call Koji deployer") {

+       steps {

+         script {

+           koji.deploy(env.TEST_ID, ca.get_ca_cert(),

+                       ca.get_ssl_cert("koji-${TEST_ID}-hub"),

+                       "amqps://umb-${TEST_ID}-broker",

+                       ca.get_ssl_cert("koji-${TEST_ID}-msg"),

+                       "mbs-${TEST_ID}-koji-admin")

+         }

+       }

+     }

+     stage("Call MBS deployer") {

+       steps {

+         script {

+           // Required for accessing src.fedoraproject.org

+           def digicertca = """-----BEGIN CERTIFICATE-----

+ MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs

+ MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3

+ d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j

+ ZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL

+ MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3

+ LmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug

+ RVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm

+ +9S75S0tMqbf5YE/yc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW

+ PNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG++MXs2ziS4wblCJEM

+ xChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB

+ Ik5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx+mM0aBhakaHPQNAQTXKFx01p8VdteZOE3

+ hzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg

+ EsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQF

+ MAMBAf8wHQYDVR0OBBYEFLE+w2kD+L9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA

+ FLE+w2kD+L9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec

+ nzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe/EW1ntlMMUu4kehDLI6z

+ eM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF

+ hS9OMPagMRYjyOfiZRYzy78aG6A9+MpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2

+ Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe

+ vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep

+ +OkuE6N36B9K

+ -----END CERTIFICATE-----

+ """

+           def cabundle = ca.get_ca_cert().cert + digicertca

+           def msgcert = ca.get_ssl_cert("mbs-${TEST_ID}-msg")

+           def kojicert = ca.get_ssl_cert("mbs-${TEST_ID}-koji-admin")

+           mbs.deploy(env.TEST_ID, kojicert, ca.get_ca_cert(), msgcert, cabundle,

+                      "https://koji-${TEST_ID}-hub",

+                      "umb-${TEST_ID}-broker:61612",

+                      params.MBS_BACKEND_IMAGE, params.MBS_FRONTEND_IMAGE)

+         }

+       }

+     }

+     stage("Run tests") {

+       steps {

+         script {

+           if (params.TESTCASE == "dummy") {

+             error "You must specify a valid testcase to run"

+           }

+           def testcase = load "openshift/integration/koji/pipelines/tests/${params.TESTCASE}.groovy"

+           echo "Running testcase ${params.TESTCASE}..."

+           testcase.runTests()

+         }

+       }

+     }

+   }

+   post {

+     failure {

+       script {

+         openshift.withCluster() {

+           echo "Getting logs from all deployments..."

+           def sel = openshift.selector("dc", ["environment": env.ENVIRONMENT_LABEL])

+           sel.logs("--tail=100")

+         }

+       }

+     }

+     cleanup {

+       script {

+         if (params.CLEANUP == "true") {

+           openshift.withCluster() {

+             /* Tear down everything we just created */

+             echo "Tearing down test resources..."

+             openshift.selector("all,pvc,configmap,secret",

+                                ["environment": env.ENVIRONMENT_LABEL]).delete("--ignore-not-found=true")

+           }

+         }

+       }

+     }

+   }

+ }

@@ -0,0 +1,362 @@ 

+ # Template to produce a new OpenShift pipeline job for polling for Pagure branches or PRs

+ 

+ ---

+ apiVersion: v1

+ kind: Template

+ metadata:

+   name: mbs-polling-pagure

+ labels:

+   template: mbs-polling-pagure

+ parameters:

+ - name: NAME

+   displayName: Short unique identifier for the templated instances

+   description: This field is used to deploy multiple pipelines to one OpenShift project from this template.

+   required: true

+   value: mbs-polling-pagure

+ - name: PAGURE_REPO_NAME

+   displayName: Pagure project name

+   description: <username>/<namespace>/<repo>

+   required: true

+   value: fm-orchestrator

+ - name: PAGURE_REPO_IS_FORK

+   displayName: Is the Pagure repo a fork?

+   required: true

+   value: "false"

+ - name: PAGURE_POLLING_FOR_PR

+   displayName: Set to 'true' to poll for PRs, or 'false' to poll the master branch

+   required: true

+   value: "false"

+ - name: PAGURE_POLLING_SCHEDULE

+   displayName: Schedule of polling

+   description: using cron-style syntax

+   required: true

+   value: "H/5 * * * *"

+ - name: PAGURE_POLLED_BRANCH

+   displayName: Name of polled branch

+   required: true

+   value: "master"

+ - name: DEV_PIPELINE_BC_NAME

+   displayName: Name of BuildConfig for starting dev pipeline builds

+   required: true

+   value: mbs-koji-int-dev

+ - name: DEV_PIPELINE_BC_NAMESPACE

+   displayName: Namespace of BuildConfig for starting dev pipeline builds

+   required: false

+ - name: PAGURE_API_KEY_SECRET_NAME

+   displayName: Name of Secret for updating Pagure pull-requests status

+   value: "pagure-api-key"

+   required: false

+ - name: PIPELINE_UPDATE_JOBS_DIR

+   displayName: Location of pipeline job definitions for automatic updates

+   value: jobs

+   required: false

+ - name: MAIL_ENABLED

+   displayName: >-

+     Whether to send an email. "true" will always send mail, "false" will

+     never send mail. Any other value will be interpreted as a regex, and

+     mail will only be sent if the name of the branch being built matches

+     the regex. For example, "master" will only send mail if building the

+     master branch.

+   value: "false"

+   required: true

+ - name: JENKINS_AGENT_IMAGE

+   displayName: Container image for Jenkins slave pods

+   required: true

+   value: quay.io/factory2/mbs-koji-int-jenkins-slave:latest

+ - name: OPENSHIFT_CLOUD_NAME

+   displayName: Name of OpenShift cloud in Jenkins master configuration

+   required: true

+   value: openshift

+ objects:

+ - kind: ServiceAccount

+   apiVersion: v1

+   metadata:

+     name: "${NAME}-jenkins-slave"

+     labels:

+       app: "${NAME}"

+ - kind: RoleBinding

+   apiVersion: v1

+   metadata:

+     name: "${NAME}-jenkins-slave_edit"

+     labels:

+       app: "${NAME}"

+   subjects:

+   - kind: ServiceAccount

+     name: "${NAME}-jenkins-slave"

+   roleRef:

+     name: edit

+ - kind: "BuildConfig"

+   apiVersion: "v1"

+   metadata:

+     name: "${NAME}"

+     labels:

+       app: "${NAME}"

+   spec:

+     runPolicy: "Serial"

+     strategy:

+       type: JenkinsPipeline

+       jenkinsPipelineStrategy:

+         jenkinsfile: |-

+           // Don't use an external Jenkinsfile here, or Jenkins will also poll on that repo and branch.

+           // Also set changelog to false so it doesn't poll the library repo and trigger jobs when

+           // changes are found.

+           library identifier: "c3i@master",

+                   changelog: false,

+                   retriever: modernSCM([$class: "GitSCMSource",

+                                         remote: "https://pagure.io/c3i-library.git"])

+           pipeline {

+             agent {

+               kubernetes {

+                 cloud "${OPENSHIFT_CLOUD_NAME}"

+                 label "jenkins-slave-${UUID.randomUUID().toString()}"

+                 serviceAccount "${NAME}-jenkins-slave"

+                 defaultContainer "jnlp"

+                 yaml """

+                 apiVersion: v1

+                 kind: Pod

+                 metadata:

+                   labels:

+                     app: "jenkins-${env.JOB_BASE_NAME}"

+                     factory2-pipeline-kind: "mbs-polling-pagure-pipeline"

+                     factory2-pipeline-build-number: "${env.BUILD_NUMBER}"

+                 spec:

+                   containers:

+                   - name: jnlp

+                     image: "${JENKINS_AGENT_IMAGE}"

+                     imagePullPolicy: Always

+                     tty: true

+                     resources:

+                       requests:

+                         memory: 384Mi

+                         cpu: 200m

+                       limits:

+                         memory: 768Mi

+                         cpu: 300m

+                 """

+               }

+             }

+             options {

+               timestamps()

+             }

+             environment {

+               PIPELINE_NAMESPACE = readFile("/run/secrets/kubernetes.io/serviceaccount/namespace").trim()

+               PAGURE_URL = "https://pagure.io"

+               PAGURE_API = "${env.PAGURE_URL}/api/0"

+               PAGURE_API_KEY_SECRET_NAME = "${PAGURE_API_KEY_SECRET_NAME}"

+               PAGURE_REPO_NAME = "${PAGURE_REPO_NAME}"

+               PAGURE_REPO_IS_FORK = "${PAGURE_REPO_IS_FORK}"

+               PAGURE_POLLING_FOR_PR = "${PAGURE_POLLING_FOR_PR}"

+               PAGURE_REPO_HOME = "${env.PAGURE_URL}${env.PAGURE_REPO_IS_FORK == "true" ? "/fork" : ""}/${PAGURE_REPO_NAME}"

+               GIT_URL = "${env.PAGURE_URL}/${env.PAGURE_REPO_IS_FORK == "true" ? "forks/" : ""}${PAGURE_REPO_NAME}.git"

+             }

+             triggers { pollSCM("${PAGURE_POLLING_SCHEDULE}") }

+             stages {

+               stage("Prepare") {

+                 agent { label "master" }

+                 steps {

+                   script {

+                     // checking out the polled branch

+                     def polledBranch = env.PAGURE_POLLING_FOR_PR == "true" ? "origin/pull/*/head" : "origin/${PAGURE_POLLED_BRANCH}"

+                     def scmVars = checkout([$class: "GitSCM",

+                       branches: [[name: polledBranch]],

+                       userRemoteConfigs: [

+                         [

+                           name: "origin",

+                           url: env.GIT_URL,

+                           refspec: "+refs/heads/*:refs/remotes/origin/* +refs/pull/*/head:refs/remotes/origin/pull/*/head",

+                         ],

+                       ],

+                       extensions: [[$class: "CleanBeforeCheckout"]],

+                     ])

+                     env.GIT_COMMIT = scmVars.GIT_COMMIT

+                     env.GIT_COMMIT_SHORT = env.GIT_COMMIT.substring(0, 7)

+                     // setting build display name

+                     def prefix = "origin/"

+                     // origin/pull/1234/head -> pull/1234/head, origin/master -> master

+                     def branch = scmVars.GIT_BRANCH.startsWith(prefix) ? scmVars.GIT_BRANCH.substring(prefix.size())

+                       : scmVars.GIT_BRANCH

+                     env.GIT_BRANCH = branch

+                     echo "Building branch=${env.GIT_BRANCH}, commit=${env.GIT_COMMIT_SHORT}"

+                     if (env.PAGURE_POLLING_FOR_PR == "false" && branch == "${PAGURE_POLLED_BRANCH}") {

+                       currentBuild.displayName = "Building branch=${PAGURE_POLLED_BRANCH}, commit=${env.GIT_COMMIT_SHORT}"

+                     } else if (env.PAGURE_POLLING_FOR_PR == "true" && branch ==~ /^pull\/[0-9]+\/head$/) {

+                       env.PR_NO = branch.split("/")[1]

+                       env.PR_URL = "${env.PAGURE_REPO_HOME}/pull-request/${env.PR_NO}"

+                       // To allow HTML syntax in the build description, go to `Jenkins/Global Security/Markup Formatter` and select "Safe HTML".

+                       def pagureLink = """<a href="${env.PR_URL}">PR-${env.PR_NO}</a>"""

+                       try {

+                         if (env.PAGURE_API_KEY_SECRET_NAME) {

+                           def pagureClient = getPagureClient()

+                           def prInfo = pagureClient.getPR(env.PR_NO)

+                           env.PR_TITLE = prInfo.title

+                           pagureLink = """<a href="${env.PR_URL}">${env.PR_TITLE}</a>"""

+                           pagureClient.updatePRStatus(pr: env.PR_NO, username: "c3i-jenkins", uid: "ci-pre-merge",

+                                                       url: env.BUILD_URL, percent: null, comment: "Pending")

+                         }

+                       } catch (Exception e) {

+                         echo "Error using pagure API: ${e}"

+                         // ignoring this...

+                       }

+                       echo "Building PR #${env.PR_NO}: ${env.PR_URL}"

+                       currentBuild.displayName = "Building PR#${env.PR_NO}, commit=${env.GIT_COMMIT_SHORT}"

+                       currentBuild.description = pagureLink

+                     } else { // This shouldn't happen.

+                       error "Build is aborted due to unexpected polling trigger actions"

+                     }

+                   }

+                 }

+               }

+               stage("Update pipeline jobs") {

+                 when {

+                   expression {

+                     return "${PIPELINE_UPDATE_JOBS_DIR}" && env.PAGURE_POLLING_FOR_PR == "false" && env.GIT_BRANCH == "${PAGURE_POLLED_BRANCH}"

+                   }

+                 }

+                 steps {

+                   checkout([$class: "GitSCM",

+                     branches: [[name: env.GIT_BRANCH]],

+                     userRemoteConfigs: [

+                       [

+                         name: "origin",

+                         url: env.GIT_URL,

+                         refspec: "+refs/heads/*:refs/remotes/origin/* +refs/pull/*/head:refs/remotes/origin/pull/*/head",

+                       ],

+                     ],

+                     extensions: [[$class: "CleanBeforeCheckout"]],

+                   ])

+                   script {

+                     dir("openshift/integration/koji/pipelines") {

+                       sh """

+                       make install JOBS_DIR="${PIPELINE_UPDATE_JOBS_DIR}"

+                       """

+                     }

+                     // We need to do this here because we have a checkout we can access.

+                     // The checkout above is on the master, but we can't access it with an sh()

+                     // step because they always run on the Kubernetes agent.

+                     // See https://issues.jenkins-ci.org/browse/JENKINS-51700

+                     def author = sh(script: "git --no-pager show -s --format='%ae %an' ${env.GIT_COMMIT}",

+                                     returnStdout: true).trim().split(" ", 2)

+                     env.GIT_EMAIL = author[0]

+                     env.GIT_NAME = author[1]

+                     echo "Commit author: ${env.GIT_NAME} <${env.GIT_EMAIL}>"

+                   }

+                 }

+               }

+               stage("Run Dev Build") {

+                 steps {

+                   script {

+                     openshift.withCluster() {

+                       openshift.withProject("${DEV_PIPELINE_BC_NAMESPACE}") {

+                         echo "Starting a dev pipeline build..."

+                         def isMaster = env.PAGURE_POLLING_FOR_PR != "true"

+                         def build = c3i.buildAndWait("bc/${DEV_PIPELINE_BC_NAME}",

+                           "-e", "MBS_GIT_REPO=${env.GIT_URL}",

+                           "-e", "MBS_GIT_REF=${isMaster ? env.GIT_COMMIT : env.GIT_BRANCH}",

+                           "-e", "FORCE_PUBLISH_IMAGE=${isMaster}",

+                           "-e", "MBS_MAIN_BRANCH=${PAGURE_POLLED_BRANCH}",

+                           "-e", "BUILD_DISPLAY_RENAME_TO='${currentBuild.displayName}'"

+                         )

+                       }

+                     }

+                   }

+                 }

+               }

+             }

+             post {

+               success {

+                 script {

+                   // update Pagure PR flag

+                   if (env.PR_NO && env.PAGURE_API_KEY_SECRET_NAME) {

+                     try {

+                       def pagureClient = getPagureClient()

+                       pagureClient.updatePRStatus(pr: env.PR_NO, username: "c3i-jenkins", uid: "ci-pre-merge",

+                                                   url: env.BUILD_URL, percent: 100, comment: "Integration testing passed")

+                       echo "Updated PR #${env.PR_NO} status to PASS."

+                     } catch (e) {

+                       echo "Error updating PR #${env.PR_NO} status to PASS: ${e}"

+                     }

+                   }

+                   // sending email

+                   if ("${MAIL_ENABLED}" == "true" ||

+                       "${MAIL_ENABLED}" != "false" && env.GIT_BRANCH =~ "${MAIL_ENABLED}"){

+                     try {

+                       sendBuildStatusEmail(true)

+                     } catch (e) {

+                       echo "Error sending email: ${e}"

+                     }

+                   }

+                 }

+               }

+               failure {

+                 script {

+                   // update Pagure PR flag and make a comment

+                   if (env.PR_NO && env.PAGURE_API_KEY_SECRET_NAME) {

+                     try {

+                       def pagureClient = getPagureClient()

+                       pagureClient.updatePRStatus(pr: env.PR_NO, username: "c3i-jenkins", uid: "ci-pre-merge",

+                                                   url: env.BUILD_URL, percent: 0, comment: "Integration testing failed")

+                       echo "Updated PR #${env.PR_NO} status to FAILURE"

+                       pagureClient.commentOnPR(pr: env.PR_NO, comment: """

+                         Integration testing of commit ${env.GIT_COMMIT_SHORT} FAILED

+                         ${env.BUILD_URL}

+                         Rebase or make new commits to rebuild.

+                       """.stripIndent())

+                       echo "Commented on PR #${env.PR_NO}"

+                     } catch (e) {

+                       echo "Error updating PR #${env.PR_NO} status to FAILURE: ${e}"

+                     }

+                   }

+                   // sending email

+                   if ("${MAIL_ENABLED}" == "true" ||

+                       "${MAIL_ENABLED}" != "false" && env.GIT_BRANCH =~ "${MAIL_ENABLED}"){

+                     try {

+                       sendBuildStatusEmail(false)

+                     } catch (e) {

+                       echo "Error sending email: ${e}"

+                     }

+                   }

+                 }

+               }

+             }

+           }

+           def getPagureClient() {

+             withCredentials([string(credentialsId: "${env.PIPELINE_NAMESPACE}-${env.PAGURE_API_KEY_SECRET_NAME}",

+                                     variable: "PAGURE_TOKEN")]) {

+               return pagure.client(apiUrl: env.PAGURE_API,

+                                    repo: env.PAGURE_REPO_NAME,

+                                    isFork: env.PAGURE_REPO_IS_FORK == "true",

+                                    token: env.PAGURE_TOKEN)

+             }

+           }

+           def sendBuildStatusEmail(boolean success) {

+             def recipient

+             if (binding.hasVariable("ownership") &&

+                 ownership.job.ownershipEnabled &&

+                 ownership.job.primaryOwnerEmail) {

+               recipient = ownership.job.primaryOwnerEmail

+               echo "Sending email to job owner: ${recipient}"

+             } else if (env.PR_NO) {

+               def prinfo = getPagureClient().getPR(env.PR_NO)

+               recipient = "${prinfo.user.name}@fedoraproject.org"

+               echo "Sending email to PR submitter: ${recipient}"

+             } else if (env.GIT_EMAIL) {

+               recipient = env.GIT_EMAIL

+               echo "Sending email to commit author: ${recipient}"

+             } else {

+               echo "Could not determine recipient, not sending email"

+               return false

+             }

+             def status = success ? "passed" : "failed"

+             def subject = "Job ${env.JOB_NAME.split("/").last()} #${env.BUILD_NUMBER} ${status}"

+             def body = "Commit: ${env.PAGURE_REPO_HOME}/c/${env.GIT_COMMIT}\nBuild URL: ${env.DEV_BUILD_URL ?: env.BUILD_URL}"

+             if (env.PR_TITLE) {

+               subject = "PR #${env.PR_NO} (${env.PR_TITLE}), ${subject}"

+             } else if (env.PR_NO) {

+               subject = "PR #${env.PR_NO}, ${subject}"

+             }

+             if (env.PR_URL) {

+               body += "\nPull Request: ${env.PR_URL}"

+             }

+             subject = "${env.PAGURE_REPO_NAME} ${subject}"

+             mail from: "c3i-jenkins@fedoraproject.org", to: recipient, subject: subject, body: body

+           }

@@ -0,0 +1,58 @@ 

+ // Build an empty module and verify that the CGImport works correctly

+ 

+ def runTests() {

+   def clientcert = ca.get_ssl_cert("mbs-${TEST_ID}-koji-admin")

+   koji.setConfig("https://koji-${TEST_ID}-hub/kojihub", "https://koji-${TEST_ID}-hub/kojifiles",

+                  clientcert.cert, clientcert.key, ca.get_ca_cert().cert)

+   koji.addTag("module-f28")

+   koji.addTag("module-f28-build", "--parent=module-f28", "--arches=x86_64")

+   koji.runCmd("grant-cg-access", "mbs-${TEST_ID}-koji-admin", "module-build-service", "--new")

+   koji.callMethodLogin("addBType", "module")

+   def buildparams = """

+         {"scmurl": "https://src.fedoraproject.org/forks/mikeb/modules/testmodule.git?#8b3fb16160f899ce10905faf570f110d52b91154",

+          "branch": "empty-f28",

+          "owner":  "mbs-${TEST_ID}-koji-admin"}

+       """

+   def resp = httpRequest(

+         httpMode: "POST",

+         url: "http://mbs-${TEST_ID}-frontend/module-build-service/1/module-builds/",

+         acceptType: "APPLICATION_JSON",

+         contentType: "APPLICATION_JSON",

+         requestBody: buildparams,

+       )

+   if (resp.status != 201) {

+     echo "Response code was ${resp.status}, output was ${resp.content}"

+     error "POST response code was ${resp.status}, not 201"

+   }

+   def buildinfo = readJSON(text: resp.content)

+   timeout(10) {

+     waitUntil {

+       resp = httpRequest "http://mbs-${TEST_ID}-frontend/module-build-service/1/module-builds/${buildinfo.id}"

+       if (resp.status != 200) {

+         echo "Response code was ${resp.status}, output was ${resp.content}"

+         return false

+       }

+       def modinfo = readJSON(text: resp.content)

+       if (modinfo.state_name != "ready") {

+         echo "Module ${modinfo.id} (${modinfo.name}) is in the ${modinfo.state_name} state, not ready"

+         return false

+       }

+       def builds = koji.listBuilds()

+       echo "Builds: ${builds}"

+       def build = builds.find { it.name == "testmodule" }

+       if (!build) {

+         echo "Could not find a build of testmodule"

+         return false

+       }

+       def develbuild = builds.find { it.name == "testmodule-devel" }

+       if (!develbuild) {

+         echo "Could not find a build of testmodule-devel"

+         return false

+       }

+       echo "All checks passed"

+       return true

+     }

+   }

+ }

+ 

+ return this;

@@ -0,0 +1,146 @@ 

+ // Submit a build to MBS and verify that it initializes Koji correctly

+ 

+ def runTests() {

+   def clientcert = ca.get_ssl_cert("mbs-${TEST_ID}-koji-admin")

+   koji.setConfig("https://koji-${TEST_ID}-hub/kojihub", "https://koji-${TEST_ID}-hub/kojifiles",

+                  clientcert.cert, clientcert.key, ca.get_ca_cert().cert)

+   koji.addTag("module-f28")

+   koji.addTag("module-f28-build", "--parent=module-f28", "--arches=x86_64")

+   def buildparams = """

+         {"scmurl": "https://src.fedoraproject.org/modules/testmodule.git?#9c589780e1dd1698dc64dfa28d30014ad18cad32",

+          "branch": "f28",

+          "owner":  "mbs-${TEST_ID}-koji-admin"}

+       """

+   def resp = httpRequest(

+         httpMode: "POST",

+         url: "http://mbs-${TEST_ID}-frontend/module-build-service/1/module-builds/",

+         acceptType: "APPLICATION_JSON",

+         contentType: "APPLICATION_JSON",

+         requestBody: buildparams,

+       )

+   if (resp.status != 201) {

+     echo "Response code was ${resp.status}, output was ${resp.content}"

+     error "POST response code was ${resp.status}, not 201"

+   }

+   def buildinfo = readJSON(text: resp.content)

+   // Check that MBS has configured Koji correctly

+   timeout(10) {

+     waitUntil {

+       resp = httpRequest "http://mbs-${TEST_ID}-frontend/module-build-service/1/module-builds/${buildinfo.id}"

+       if (resp.status != 200) {

+         echo "Response code was ${resp.status}, output was ${resp.content}"

+         return false

+       }

+       def modinfo = readJSON(text: resp.content)

+       if (modinfo.state_name != "build") {

+         echo "Module ${modinfo.id} (${modinfo.name}) is in the ${modinfo.state_name} state, not build"

+         return false

+       }

+       def targets = koji.listTargets()

+       def target = targets.find { it.name =~ "^module-testmodule-" }

+       if (!target) {

+         echo "Could not find module target"

+         return false

+       }

+       echo "Target: ${target}"

+       def taginfo = koji.tagInfo(target.build_tag_name)

+       echo "Build tag: ${taginfo}"

+       if (taginfo.arches != "x86_64") {

+         echo "${target.build_tag_name} does not have arches set to x86_64"

+         return false

+       }

+       if (taginfo.perm != "admin") {

+         echo "${target.build_tag_name} does not have perm set to admin"

+         return false

+       }

+       if (taginfo.extra.get("mock.package_manager", "") != "dnf") {

+         echo "${target.build_tag_name} is not configured to use dnf"

+         return false

+       }

+       if (!taginfo.extra.get("repo_include_all", false)) {

+         echo "${target.build_tag_name} is not configured with repo_include_all"

+         return false

+       }

+       def ancestors = koji.listTagInheritance(target.build_tag_name)

+       echo "Ancestors of ${target.build_tag_name}: ${ancestors}"

+       if (!ancestors.contains("module-f28-build")) {

+         echo "module-f28-build not in inheritance of ${target.build_tag_name}"

+         return false

+       }

+       def groups = koji.listGroups(target.build_tag_name)

+       echo "Groups of ${target.build_tag_name}: ${groups}"

+       def srpm_build = groups.find { it.name == "srpm-build" }

+       if (!srpm_build) {

+         echo "${target.build_tag_name} does not have a srpm-build group"

+         return false

+       }

+       def srpm_packages = srpm_build.packagelist.findAll { it.package in ["bash", "rpm-build", "module-build-macros"] }

+       if (srpm_packages.size() != 3) {

+         echo "${target.build_tag_name} does not have required packages in the srpm-build group"

+         return false

+       }

+       def build = groups.find { it.name == "build" }

+       if (!build) {

+         echo "${target.build_tag_name} does not have a build group"

+         return false

+       }

+       def build_packages = build.packagelist.findAll { it.package in ["bash", "rpm-build", "module-build-macros"] }

+       if (build_packages.size() != 3) {

+         echo "${target.build_tag_name} does not have required packages in the build group"

+         return false

+       }

+       def tasks = koji.listTasks()

+       echo "Tasks: ${tasks}"

+       def build_task = tasks.find { it.method == "build" }

+       if (!build_task) {

+          echo "No build task has been created"

+          return false

+       }

+       if (build_task.request.size() < 3) {

+         echo "The build task does not have the correct format"

+         return false

+       }

+       if (!build_task.request[0].contains("module-build-macros")) {

+         echo "The build task is not building module-build-macros"

+         return false

+       }

+       if (!build_task.request[0].endsWith(".src.rpm")) {

+         echo "The build task is not building from a srpm"

+         return false

+       }

+       if (build_task.request[1] != target.name) {

+         echo "The build task is not using the correct target"

+         return false

+       }

+       if (!build_task.request[2].get("skip_tag", false)) {

+         echo "The build task is not using skip_tag"

+         return false

+       }

+       if (build_task.request[2].get("mbs_artifact_name", "") != "module-build-macros") {

+         echo "The build task does not have the mbs_artifact_name option set correctly"

+         return false

+       }

+       if (build_task.request[2].get("mbs_module_target", "") != target.dest_tag_name) {

+         echo "The build task does not have the mbs_module_target option set correctly"

+         return false

+       }

+       def newrepo_task = tasks.find { it.method == "newRepo" }

+       if (!newrepo_task) {

+         echo "No newRepo task has been created"

+         return false

+       }

+       if (newrepo_task.request.size() < 1) {

+         echo "The newRepo task does not have the correct format"

+         return false

+       }

+       if (newrepo_task.request[0] != target.build_tag_name) {

+         echo "The newRepo task is not associated with the correct tag"

+         return false

+       }

+       echo "All checks passed"

+       return true

+     }

+   }

+ }

+ 

+ return this;

@@ -290,6 +290,19 @@ 

  

        from module_build_service import app as application

  - apiVersion: v1

+   # Only creating this as a Secret because it supports base64-encoded data.

+   # Convert to a ConfigMap and use binaryData once we're running on OpenShift 3.10+.

+   kind: Secret

+   metadata:

+     name: mbs-cacerts

+     labels:

+       app: mbs

+       service: frontend

+       environment: "test-${TEST_ID}"

+   data:

+     ca-bundle.crt: |-

+       ${CA_CERTS}

+ - apiVersion: v1

    kind: Secret

    metadata:

      name: "mbs-frontend-certificates"
@@ -392,6 +405,9 @@ 

            - name: koji-certificates

              mountPath: /etc/koji-certs

              readOnly: true

+           - name: cacerts-vol

+             mountPath: /etc/pki/tls/certs

+             readOnly: true

            resources:

              limits:

                memory: 400Mi
@@ -415,6 +431,10 @@ 

          - name: koji-certificates

            secret:

              secretName: mbs-koji-secrets

+         - name: cacerts-vol

+           secret:

+             secretName: mbs-cacerts

+             defaultMode: 0444

        triggers:

        - type: ConfigChange

  # backend
@@ -753,6 +773,9 @@ 

            - name: koji-certificates

              mountPath: /etc/koji-certs

              readOnly: true

+           - name: cacerts-vol

+             mountPath: /etc/pki/tls/certs

+             readOnly: true

            resources:

              limits:

                memory: 400Mi
@@ -770,6 +793,10 @@ 

          - name: koji-certificates

            secret:

              secretName: mbs-koji-secrets

+         - name: cacerts-vol

+           secret:

+             secretName: mbs-cacerts

+             defaultMode: 0444

        triggers:

        - type: ConfigChange

  # postgresql
@@ -904,3 +931,7 @@ 

    description: Top level URL of the Koji instance to use. Without a '/' at the end.

    default: https://mbs-brew-hub.usersys.redhat.com

    required: true

+ - name: CA_CERTS

+   displayName: CA certificates

+   description: Bundle of CA certificates that should be trusted

+   required: true

This is another test PR.

1 new commit added

  • mail for PRs (for now)
5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

1 new commit added

  • use upstream master of the c3i library
5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

rebased onto 6e62cf6

5 years ago

Pull-Request has been closed by mikeb

5 years ago
Changes Summary (30 files)
+0 -0
file changed
docker/test-py3.sh
+1 -4
file changed
openshift/README.md
+8 -12
file changed
openshift/backend/Dockerfile
+6 -5
file changed
openshift/frontend/Dockerfile
+88
file added
openshift/integration/koji/README.md
+45
file added
openshift/integration/koji/containers/backend/Dockerfile
+78
file added
openshift/integration/koji/containers/backend/mbs-backend-build-template.yaml
+82
file added
openshift/integration/koji/containers/frontend/mbs-frontend-build-template.yaml
+74
file added
openshift/integration/koji/containers/jenkins-slave/Dockerfile
+116
file added
openshift/integration/koji/containers/jenkins-slave/jenkins-slave
+48
file added
openshift/integration/koji/pipelines/Makefile
+5
file added
openshift/integration/koji/pipelines/jobs/mbs-koji-int-dev.env
+1
file added
openshift/integration/koji/pipelines/jobs/mbs-koji-int-dev.tmpl
+3
file added
openshift/integration/koji/pipelines/jobs/mbs-koji-int-jenkins-slave.env
+1
file added
openshift/integration/koji/pipelines/jobs/mbs-koji-int-jenkins-slave.tmpl
+3
file added
openshift/integration/koji/pipelines/jobs/mbs-koji-int-test.env
+1
file added
openshift/integration/koji/pipelines/jobs/mbs-koji-int-test.tmpl
+7
file added
openshift/integration/koji/pipelines/jobs/mbs-polling-master.env
+1
file added
openshift/integration/koji/pipelines/jobs/mbs-polling-master.tmpl
+6
file added
openshift/integration/koji/pipelines/jobs/mbs-polling-prs.env
+1
file added
openshift/integration/koji/pipelines/jobs/mbs-polling-prs.tmpl
+205
file added
openshift/integration/koji/pipelines/templates/mbs-koji-int-dev-template.yaml
+340
file added
openshift/integration/koji/pipelines/templates/mbs-koji-int-dev.Jenkinsfile
+88
file added
openshift/integration/koji/pipelines/templates/mbs-koji-int-jenkins-slave-template.yaml
+116
file added
openshift/integration/koji/pipelines/templates/mbs-koji-int-test-template.yaml
+166
file added
openshift/integration/koji/pipelines/templates/mbs-koji-int-test.Jenkinsfile
+362
file added
openshift/integration/koji/pipelines/templates/mbs-polling-pagure.yaml
+58
file added
openshift/integration/koji/pipelines/tests/module-build-cgimport.groovy
+146
file added
openshift/integration/koji/pipelines/tests/module-build-init.groovy
+31 -0
file changed
openshift/mbs-test-template.yaml