#1264 websites: new main website deployment on stg
Merged a year ago by kevin. Opened a year ago by darknao.
darknao/ansible websites_stg into fedora-infra/ansible main

@@ -283,18 +283,21 @@
      website: fedoraproject.org
      path: /index.html
      target: https://getfedora.org/
+     when: env != "staging"
 
    - role: httpd/redirect
      shortname: get-fedora-old
      website: fedoraproject.org
      path: /get-fedora
      target: https://getfedora.org/
+     when: env != "staging"
 
    - role: httpd/redirect
      shortname: sponsors
      website: fedoraproject.org
      path: /sponsors
      target: https://getfedora.org/sponsors
+     when: env != "staging"
 
    - role: httpd/redirect
      shortname: code-of-conduct
@@ -319,18 +322,21 @@
      website: fedoraproject.org
      path: /verify
      target: https://getfedora.org/verify
+     when: env != "staging"
 
    - role: httpd/redirect
      shortname: keys
      website: fedoraproject.org
      path: /keys
      target: https://getfedora.org/keys
+     when: env != "staging"
 
    - role: httpd/redirect
      shortname: release-banner
      website: fedoraproject.org
      path: /static/js/release-counter-ext.js
      target: https://getfedora.org/static/js/release-counter-ext.js
+     when: env != "staging"
 
  #
  # When there is no prerelease we redirect the prerelease urls

@@ -85,3 +85,11 @@
      app: websites
      template: fedora-websites-cron.yml
      objectname: fedora-websites-cron.yml
+ 
+   # New websites 3.0
+   # STAGING ONLY
+   - role: openshift/object
+     app: websites
+     file: obc.yml
+     objectname: obc.yml
+     when: env == "staging"

@@ -0,0 +1,1 @@
+ 25 * * * * root /usr/local/bin/lock-wrapper fedoraprojectsync /usr/local/sbin/fedoraproject-sync >& /dev/null

@@ -1,8 +1,51 @@
+ - name: Install needed packages
+   package:
+     state: present
+     name:
+     - s3cmd
+   when: env == "staging"
+   tags:
+   - fedora-web
+   - fedora-web/main
+ 
  - name: Copy in the sync-fedora-web cronjob
    copy: src=cron-sync-fedora-web dest=/etc/cron.d/sync-fedora-web
    tags:
    - fedora-web
    - fedora-web/main
+   when: env != "staging"
+ 
+ - name: Load s3 credentials
+   ansible.builtin.include_vars:
+     file: "{{ private }}/files/websites/s3_fedoraproject_{{ env_short }}.yml"
+   ignore_errors: true
+   when: env == "staging"
+   tags:
+   - fedora-web
+   - fedora-web/main
+ 
+ - name: Copy in the sync-fedora-web-stg cronjob
+   copy:
+     src: cron-sync-fedora-web-stg
+     dest: /etc/cron.d/sync-fedora-web
+   tags:
+   - fedora-web
+   - fedora-web/main
+   when:
+   - env == "staging"
+   - fedoraproject_s3_bucket_name is defined
+ 
+ - name: Create fedoraproject-sync script
+   template:
+     src: fedoraproject-sync
+     dest: /usr/local/sbin/fedoraproject-sync
+     mode: 700
+   tags:
+   - fedora-web
+   - fedora-web/main
+   when:
+   - env == "staging"
+   - s3_access_key is defined
 
  - name: Make directory for the config files for {{website}} we are about to copy
    file: path=/etc/httpd/conf.d/{{website}} state=directory owner=root group=root mode=0755

@@ -0,0 +1,9 @@
+ #!/bin/bash
+ AWS_SECRET_ACCESS_KEY={{ fedoraproject_s3_access_key }}
+ AWS_ACCESS_KEY_ID={{ fedoraproject_s3_access_key_id }}
+ BUCKET_NAME={{ fedoraproject_s3_bucket_name }}
+ S3_GW=s3-openshift-storage.apps.ocp{{ env_suffix }}.fedoraproject.org
+ 
+ s3cmd sync --host ${S3_GW}:443 --host-bucket ${BUCKET_NAME}.${S3_GW} s3://${BUCKET_NAME}/ --access_key=${AWS_ACCESS_KEY_ID} --secret_key=${AWS_SECRET_ACCESS_KEY} /srv/web/fedoraproject.org/ --delete-removed
+ 
+ 

@@ -0,0 +1,7 @@
+ apiVersion: objectbucket.io/v1alpha1
+ kind: ObjectBucketClaim
+ metadata:
+   name: fedoraproject-s3
+ spec:
+   generateBucketName: fedoraproject
+   storageClassName: openshift-storage.noobaa.io

@@ -5,7 +5,27 @@
    labels:
      environment: "websites"
  spec:
+   failedBuildsHistoryLimit: 2
+   successfulBuildsHistoryLimit: 1
    source:
+ {% if env == 'staging' %}
+     git:
+       uri: "https://gitlab.com/fedora/websites-apps/fedora-websites/fedora-websites-3.0.git"
+       ref: main
+     dockerfile: |-
+       FROM docker.io/library/node:18 as build
+       RUN apt-get update && apt-get install -y translate-toolkit && rm -rf /var/lib/apt/lists/*
+       ADD . /websites
+       WORKDIR /websites
+       RUN npm install
+       RUN ./i18n_gen_yml.sh
+       RUN npm run generate
+ 
+       FROM quay.io/fedora/fedora:37
+       RUN dnf -y install s3cmd && dnf clean all
+       COPY --from=build /websites/.output/public /output
+ 
+ {% else %}
      dockerfile: |-
        FROM fedora:34
        RUN dnf -y install \
@@ -31,9 +51,18 @@
        python3-zanata-client && \
          dnf clean all
      CMD bash /etc/websites/build.sh
+ {% endif %}
  strategy:
    type: Docker
  output:
    to:
      kind: ImageStreamTag
      name: builder:latest
+   triggers:
+ {% if websites_github_secret is defined %}
+   - type: GitHub
+     github:
+       secret: "{{ websites_github_secret }}"
+ {% endif %}
+   - type: ConfigChange
+   - type: ImageChange

@@ -0,0 +1,41 @@
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: fedoraproject-push
+   annotations:
+     image.openshift.io/triggers: >-
+       [{"from":{"kind":"ImageStreamTag","name":"builder:latest","namespace":"websites"},"fieldPath":"spec.template.spec.containers[?(@.name==\"s3push\")].image","pause":"false"}]
+ spec:
+   replicas: 1
+   selector:
+     matchLabels:
+       app: fedoraproject-push
+   strategy:
+     type: Recreate
+   template:
+     metadata:
+       labels:
+         app: fedoraproject-push
+     spec:
+       containers:
+       - image: image-registry.openshift-image-registry.svc:5000/websites/builder:latest
+         command: ["/bin/bash", "-c"]
+         args:
+         - |
+           s3cmd sync /output/ s3://${BUCKET_NAME}/ --host ${BUCKET_HOST}:$BUCKET_PORT --host-bucket=${BUCKET_NAME}.${BUCKET_HOST}:$BUCKET_PORT --no-check-certificate --delete-removed
+           sleep infinity
+         imagePullPolicy: Always
+         envFrom:
+         - configMapRef:
+             name: fedoraproject-s3
+         - secretRef:
+             name: fedoraproject-s3
+         name: s3push
+         resources:
+           limits:
+             cpu: 20m
+             memory: 100Mi
+           requests:
+             cpu: '0'
+             memory: 10Mi
+       terminationGracePeriodSeconds: 2

Fedora website 3.0 deployment in staging.
This deploys the new main fedoraproject.org website that will replace getfedora.org (and others at a later time) for F38.

This playbook:
- Removes the fedoraproject.org redirects to getfedora.org.
- Provisions s3 storage on OCP.
- Retrieves the s3 credentials from OCP using the community.okd.k8s module (assuming the correct kubeconfig is at the default location on os_control) and saves them in /srv/private for later use (see the sketch after this list).
- Installs s3cmd on the proxies (used to sync the s3 storage).
- Deploys the sync script, including the s3 credentials.
- Deploys the buildconfig / deployment on OCP to build the website and push it to s3.
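
For reference, the credential-retrieval step boils down to reading the fedoraproject-s3 secret and configmap from the websites namespace; done by hand, that is roughly:

 oc -n websites extract secret/fedoraproject-s3 --to=-   # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
 oc -n websites extract cm/fedoraproject-s3 --to=-       # BUCKET_NAME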

Note that I've done something unusual (at least to me): I used a buildconfig + deployment to build & deploy the websites, instead of a cronjob as was used for the previous version.

The buildconfig builds an image with the website in it, then triggers a redeploy of the application pod, which basically just pushes the static website content to s3 and sits there indefinitely (until the next build).
The reason behind this logic is that we can now trigger a build+deploy remotely through the integrated webhook (from GitLab, for instance) and build the website only when needed (instead of periodic builds).
Additionally, it leaves room to add fedora-messaging to that deploy pod later.
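
For illustration only (the BuildConfig name sits outside the hunks above, so it is a placeholder here), a build can also be kicked off by hand, and the GitHub-type trigger exposes the usual OpenShift webhook URL that GitLab can POST to using the websites_github_secret value:

 oc -n websites start-build <buildconfig-name> --follow
 # webhook URL shape: https://<cluster-api>/apis/build.openshift.io/v1/namespaces/websites/buildconfigs/<buildconfig-name>/webhooks/<websites_github_secret>/github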

Additional requirements:
- Set websites_github_secret to some random string (uuidgen output, for example) in /srv/private/ansible/vars.yml.
- A few files need to be manually removed on the proxies (example commands after this list):

 /etc/httpd/conf.d/fedoraproject.org/get-fedora-old-redirect.conf
 /etc/httpd/conf.d/fedoraproject.org/main-fedoraproject-redirect.conf
 /etc/httpd/conf.d/fedoraproject.org/sponsors-redirect.conf
 /etc/httpd/conf.d/fedoraproject.org/verify-redirect.conf

Or you could just drop the whole /etc/httpd/conf.d/fedoraproject.org directory and the playbook will recreate everything.
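
As a rough example of that manual prep (one-off commands; paths taken from the lists above):

 echo "websites_github_secret: $(uuidgen)" >> /srv/private/ansible/vars.yml
 rm -f /etc/httpd/conf.d/fedoraproject.org/{get-fedora-old,main-fedoraproject,sponsors,verify}-redirect.conf
 # ...or simply drop the whole directory:
 rm -rf /etc/httpd/conf.d/fedoraproject.org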

Extra notes:
- /srv/web/fedoraproject.org will be completely replaced. I believe this should not be an issue, as it only seems to contain the old, unused main site.
- While I tried to test most of this change in my own environment, it's very unlikely it will work on the first try. You can ping me on IRC to troubleshoot if needed.
- I feel like I wrote way too much text in this PR :p
- We plan to give a demo of the new website to the Council at the beginning of January. It would be very nice if we could get this deployed before the end-of-year recharge period so we have time to adjust if the deployment doesn't work as expected.

Build failed. More information on how to proceed and troubleshoot errors available at https://fedoraproject.org/wiki/Zuul-based-ci

rebased onto 21b06e99ef9d381727edd27129615ec8dbea5a53

a year ago

Build failed. More information on how to proceed and troubleshoot errors available at https://fedoraproject.org/wiki/Zuul-based-ci

rebased onto 39e0d1e0ed7118c919202ecd7f7f3e8bf4963dea

a year ago

Build succeeded.

Just a few small comments:

  • {{ private }} is a git repo, so I don't think we want to copy files onto it. If we are just storing them to push out to the proxies, a few possible other ideas:
      ◦ Just store them in a secure tmp file for each run.
      ◦ If they don't change often, we could just pull them once and check them into private?
      ◦ Or... do we need to use credentials at all? Could we just make the s3 endpoint open, but r/o of course... that might mean people start pulling from it, but I doubt too many people would care about that.

  • I think it's just the right amount of text. Thanks.

ODF does not allow the creation of publicly accessible s3 buckets (without credentials).
But the credentials are quite stable, and should never change unless the bucket is recreated.
So I think we can just pull them once, and store them in /srv/private.

Not sure how you want to handle this though.
Are you going to do that manually (I can provide you the required variables & values to put there and just remove the community.okd.k8s part from the playbook that pulls secrets), or do I need to add something to the playbook?

> ODF does not allow the creation of publicly accessible s3 buckets (without credentials).

Too bad. ;(

> But the credentials are quite stable, and should never change unless the bucket is recreated.
> So I think we can just pull them once, and store them in /srv/private.

Sounds fine.

> Not sure how you want to handle this though.
> Are you going to do that manually (I can provide you the required variables & values to put there and just remove the community.okd.k8s part from the playbook that pulls secrets), or do I need to add something to the playbook?

Just tell me how to get them and how they are named and I can check them into private vars.

There are 3 values to fetch:
- AWS_SECRET_ACCESS_KEY
- AWS_ACCESS_KEY_ID
- BUCKET_NAME

The first two are stored in a secret and the third one in a configmap. They are both named fedoraproject-s3.

oc -n websites extract secrets/fedoraproject-s3 --to=-
oc -n websites extract cm/fedoraproject-s3 --to=-

The expected private vars file name is /srv/private/ansible/files/websites/s3_fedoraproject_stg.yml with the following variables in it:

fedoraproject_s3_access_key: <access_key>
fedoraproject_s3_access_key_id: <access_key_id>
fedoraproject_s3_bucket_name: <bucket_name>
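
If it helps, one way to assemble that file straight from the cluster (plain oc/jsonpath usage, not something in this PR; the secret values need the base64 decode, the configmap value does not):

 {
   echo "fedoraproject_s3_access_key: $(oc -n websites get secret fedoraproject-s3 -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)"
   echo "fedoraproject_s3_access_key_id: $(oc -n websites get secret fedoraproject-s3 -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)"
   echo "fedoraproject_s3_bucket_name: $(oc -n websites get cm fedoraproject-s3 -o jsonpath='{.data.BUCKET_NAME}')"
 } > /srv/private/ansible/files/websites/s3_fedoraproject_stg.yml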

rebased onto 3de11bb54bf6afa38c96decbf77e438e6c1a885d

a year ago

Build succeeded.

OK, so I need to merge this, run the OpenShift part to get those variables, then run the rest, right?
Or what order do we need to do this in?

Exactly.
OpenShift part first. Once the Object Bucket Claim is "Bound" (it usually takes 2 minutes), the secret/configmap should be there with the s3 info.
Then, once everything is set (s3 creds and the "additional requirements" part), you can run the proxy one.
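
Sketched out (the playbook invocations are placeholders for whatever the usual batcave wrappers are, not literal commands from this PR):

 sudo rbac-playbook openshift-apps/websites.yml       # hypothetical: OpenShift side, creates the OBC / buildconfig / deployment
 oc -n websites get obc fedoraproject-s3              # wait until it reports Bound (~2 minutes)
 # then populate /srv/private (s3 creds + websites_github_secret) and run the proxy side:
 sudo rbac-playbook groups/proxies.yml -t fedora-web  # hypothetical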

rebased onto 5b5edd1

a year ago

Pull-Request has been merged by kevin

a year ago

Build succeeded.