#7 Added the infra SOPs ported to asciidoc.
Merged 2 years ago by pbokoc. Opened 2 years ago by asaleh.
asaleh/infra-docs-fpo master  into  master

Remove old sysadmin guide
Adam Saleh • 2 years ago  
file modified
-2
@@ -16,5 +16,3 @@ 

  - modules/ROOT/nav.adoc

  - modules/developer_guide/nav.adoc

  - modules/sysadmin_guide/nav.adoc

- - modules/old_sysadmin_guide/nav.adoc

- - modules/communishift/nav.adoc

@@ -7,11 +7,9 @@ 

  

  Services handling identity and providing personal space to our contributors.

  

- FAS https://fas.fedoraproject.org[fas.fp.o]::

- The __F__edora __A__ccount __S__ystem, our directory and identity management

- tool, provides community members with a single account to login on Fedora

- services. https://admin.fedoraproject.org/accounts/user/new[Creating an

- account] is one of the first things to do if you plan to work on Fedora.

+ Accounts https://accounts.fedoraproject.org/[accounts.fp.o]::

+ Our directory and identity management tool provides community members with a single account to log in to Fedora

+ services. Registering an account there is one of the first things to do if you plan to work on Fedora.

  

  Fedora People https://fedorapeople.org/[fedorapeople.org]::

  Personal web space provided to community members to share files, git

@@ -1,1 +0,0 @@ 

- * xref:index.adoc[Communishift documentation]

@@ -1,10 +0,0 @@ 

- :experimental:

- = Communishift documentation

- 

- link:https://console-openshift-console.apps.os.fedorainfracloud.org/[Communishift] is the name for the OpenShift community cluster run by the Fedora project.

- It's intended to be a place where community members can test/deploy/run things that are of benefit to the community at a lower SLE (Service Level Expectation) than services directly run and supported by infrastructure, additionally doing so in a self service manner.

- It's also an incubator for applications that may someday be more fully supported once they prove their worth.

- Finally, it's a place for Infrastructure folks to learn and test and discover OpenShift in a less constrained setting than our production clusters.

- 

- This documentation focuses on implementation details of Fedora's OpenShift instance, not on OpenShift usage in general.

- These instructions are already covered by link:https://docs.openshift.com/container-platform/4.1/welcome/index.html[upstream documentation].

@@ -1,1 +0,0 @@ 

- * link:https://fedora-infra-docs.readthedocs.io/en/latest/sysadmin-guide/sops/old/index.html[Old System Administrator Guide]

file modified
+118 -1
@@ -1,1 +1,118 @@ 

- * link:https://fedora-infra-docs.readthedocs.io/en/latest/sysadmin-guide/index.html[System Administrator Guide]

+ * xref:orientation.adoc[Orientation for Sysadmin Guide]

+ * xref:index.adoc[Sysadmin Guide]

+ ** xref:2-factor.adoc[Two factor auth]

+ ** xref:accountdeletion.adoc[Account Deletion SOP]

+ ** xref:anitya.adoc[Anitya Infrastructure SOP]

+ ** xref:ansible.adoc[ansible - SOP]

+ ** xref:apps-fp-o.adoc[apps-fp-o - SOP]

+ ** xref:archive-old-fedora.adoc[How to Archive Old Fedora Releases - SOP]

+ ** xref:arm.adoc[Fedora ARM Infrastructure - SOP]

+ ** xref:aws-access.adoc[Amazon Web Services Access - SOP]

+ ** xref:bastion-hosts-info.adoc[Fedora Bastion Hosts - SOP]

+ ** xref:blockerbugs.adoc[Blockerbugs Infrastructure - SOP]

+ ** xref:bodhi.adoc[Bodhi Infrastructure - SOP]

+ ** xref:bugzilla.adoc[Bugzilla Sync Infrastructure - SOP]

+ ** xref:bugzilla2fedmsg.adoc[bugzilla2fedmsg - SOP]

+ ** xref:collectd.adoc[Collectd - SOP]

+ ** xref:compose-tracker.adoc[Compose Tracker - SOP]

+ ** xref:contenthosting.adoc[Content Hosting Infrastructure - SOP]

+ ** xref:copr.adoc[Copr - SOP]

+ ** xref:database.adoc[Database Infrastructure - SOP]

+ ** xref:datanommer.adoc[datanommer - SOP]

+ ** xref:debuginfod.adoc[Fedora Debuginfod Service - SOP]

+ ** xref:departing-admin.adoc[Departing admin - SOP]

+ ** xref:dns.adoc[DNS repository for fedoraproject - SOP]

+ ** xref:docs.fedoraproject.org.adoc[Docs - SOP]

+ ** xref:fas-notes.adoc[Fedora Account System - SOP]

+ ** xref:fas-openid.adoc[FAS-OpenID - SOP]

+ ** xref:fedmsg-certs.adoc[fedmsg (Fedora Messaging) Certs, Keys, and CA - SOP]

+ ** xref:fedmsg-gateway.adoc[fedmsg-gateway - SOP]

+ ** xref:fedmsg-introduction.adoc[fedmsg introduction and basics - SOP]

+ ** xref:fedmsg-irc.adoc[fedmsg-irc - SOP]

+ ** xref:fedmsg-new-message-type.adoc[Adding a new fedmsg message type - SOP]

+ ** xref:fedmsg-relay.adoc[fedmsg-relay - SOP]

+ ** xref:fedmsg-websocket.adoc[WebSocket - SOP]

+ ** xref:fedocal.adoc[Fedocal - SOP]

+ ** xref:fedora-releases.adoc[Fedora Release Infrastructure - SOP]

+ ** xref:fedorawebsites.adoc[Websites Release - SOP]

+ ** xref:fmn.adoc[FedMsg Notifications (FMN) - SOP]

+ ** xref:gather-easyfix.adoc[Fedora gather easyfix - SOP]

+ ** xref:gdpr_delete.adoc[GDPR Delete - SOP]

+ ** xref:gdpr_sar.adoc[GDPR SAR - SOP]

+ ** xref:geoip-city-wsgi.adoc[geoip-city-wsgi - SOP]

+ ** xref:github2fedmsg.adoc[github2fedmsg - SOP]

+ ** xref:github.adoc[Using github for Infra Projects - SOP]

+ ** xref:greenwave.adoc[Greenwave - SOP]

+ ** xref:guestdisk.adoc[Guest Disk Resize - SOP]

+ ** xref:guestedit.adoc[Guest Editing - SOP]

+ ** xref:haproxy.adoc[Haproxy Infrastructure - SOP]

+ ** xref:hotfix.adoc[HOTFIXES - SOP]

+ ** xref:hotness.adoc[The New Hotness - SOP]

+ ** xref:infra-git-repo.adoc[Infrastructure Git Repos - SOP]

+ ** xref:infra-hostrename.adoc[Infrastructure Host Rename - SOP]

+ ** xref:infra-raidmismatch.adoc[Infrastructure Raid Mismatch Count - SOP]

+ ** xref:infra-repo.adoc[Infrastructure Yum Repo - SOP]

+ ** xref:infra-retiremachine.adoc[Infrastructure retire machine - SOP]

+ ** xref:ipsilon.adoc[Ipsilon Infrastructure - SOP]

+ ** xref:iscsi.adoc[iSCSI - SOP]

+ ** xref:jenkins-fedmsg.adoc[Jenkins Fedmsg - SOP]

+ ** xref:kerneltest-harness.adoc[Kerneltest-harness - SOP]

+ ** xref:kickstarts.adoc[Kickstart Infrastructure - SOP]

+ ** xref:koji.adoc[Koji Infrastructure - SOP]

+ ** xref:koji-archive.adoc[Koji Archive - SOP]

+ ** xref:koji-builder-setup.adoc[Setup Koji Builder - SOP]

+ ** xref:koschei.adoc[Koschei - SOP]

+ ** xref:layered-image-buildsys.adoc[Layered Image Build System - SOP]

+ ** xref:mailman.adoc[Mailman Infrastructure - SOP]

+ ** xref:making-ssl-certificates.adoc[SSL Certificate Creation - SOP]

+ ** xref:massupgrade.adoc[Mass Upgrade Infrastructure - SOP]

+ ** xref:mastermirror.adoc[Master Mirror Infrastructure - SOP]

+ ** xref:mbs.adoc[Module Build Service Infra - SOP]

+ ** xref:memcached.adoc[Memcached Infrastructure - SOP]

+ ** xref:message-tagging-service.adoc[Message Tagging Service - SOP]

+ ** xref:mirrorhiding.adoc[Mirror Hiding Infrastructure - SOP]

+ ** xref:mirrormanager.adoc[MirrorManager Infrastructure - SOP]

+ ** xref:mirrormanager-S3-EC2-netblocks.adoc[AWS Mirrors - SOP]

+ ** xref:mote.adoc[mote - SOP]

+ ** xref:nagios.adoc[Fedora Infrastructure Nagios - SOP]

+ ** xref:netapp.adoc[Netapp Infrastructure - SOP]

+ ** xref:new-hosts.adoc[DNS Host Addition - SOP]

+ ** xref:nonhumanaccounts.adoc[Non-human Accounts Infrastructure - SOP]

+ ** xref:nuancier.adoc[Nuancier - SOP]

+ ** xref:odcs.adoc[On Demand Compose Service - SOP]

+ ** xref:openqa.adoc[OpenQA Infrastructure - SOP]

+ ** xref:openshift.adoc[OpenShift - SOP]

+ ** xref:openvpn.adoc[OpenVPN - SOP]

+ ** xref:outage.adoc[Outage Infrastructure - SOP]

+ ** xref:packagereview.adoc[Package Review - SOP]

+ ** xref:pagure.adoc[Pagure Infrastructure - SOP]

+ ** xref:pdc.adoc[PDC - SOP]

+ ** xref:pesign-upgrade.adoc[Pesign upgrades/reboots - SOP]

+ ** xref:planetsubgroup.adoc[Planet Subgroup Infrastructure - SOP]

+ ** xref:publictest-dev-stg-production.adoc[Fedora Infrastructure Machine Classes - SOP]

+ ** xref:rabbitmq.adoc[RabbitMQ - SOP]

+ ** xref:rdiff-backup.adoc[rdiff-backup - SOP]

+ ** xref:registry.adoc[Container registry - SOP]

+ ** xref:requestforresources.adoc[Request for resources - SOP]

+ ** xref:resultsdb.adoc[ResultsDB - SOP]

+ ** xref:retrace.adoc[Retrace - SOP]

+ ** xref:scmadmin.adoc[SCM Admin - SOP]

+ ** xref:selinux.adoc[SELinux Infrastructure - SOP]

+ ** xref:sigul-upgrade.adoc[Sigul servers upgrades/reboots - SOP]

+ ** xref:simple_koji_ci.adoc[simple_koji_ci - SOP]

+ ** xref:sshaccess.adoc[SSH Access Infrastructure - SOP]

+ ** xref:sshknownhosts.adoc[SSH known hosts Infrastructure - SOP]

+ ** xref:staging.adoc[Staging - SOP]

+ ** xref:status-fedora.adoc[Fedora Status Service - SOP]

+ ** xref:syslog.adoc[Log Infrastructure - SOP]

+ ** xref:tag2distrepo.adoc[Tag2DistRepo Infrastructure - SOP]

+ ** xref:torrentrelease.adoc[Torrent Releases Infrastructure - SOP]

+ ** xref:unbound.adoc[Fedora Infra Unbound Notes - SOP]

+ ** xref:virt-image.adoc[Fedora Infrastructure Kpartx Notes - SOP]

+ ** xref:virtio.adoc[Virtio Notes - SOP]

+ ** xref:virt-notes.adoc[Fedora Infrastructure Libvirt Notes - SOP]

+ ** xref:voting.adoc[Voting Infrastructure - SOP]

+ ** xref:waiverdb.adoc[WaiverDB - SOP]

+ ** xref:wcidff.adoc[What Can I Do For Fedora - SOP]

+ ** xref:wiki.adoc[Wiki Infrastructure - SOP]

+ ** xref:zodbot.adoc[Zodbot Infrastructure - SOP]

@@ -0,0 +1,98 @@ 

+ = Two factor auth

+ 

+ Fedora Infrastructure has implemented a form of two factor auth for

+ people who have sudo access on Fedora machines. In the future we may

+ expand this to include more than sudo, but sudo was deemed to be the
+ 
+ high-value, low-hanging fruit.

+ 

+ == Using two factor

+ 

+ http://fedoraproject.org/wiki/Infrastructure_Two_Factor_Auth

+ 

+ To enroll a Yubikey, use the fedora-burn-yubikey script like normal. To

+ enroll using FreeOTP or Google Authenticator, go to

+ https://admin.fedoraproject.org/totpcgiprovision/

+ 

+ === What's enough authentication?

+ 

+ FAS Password+FreeOTP or FAS Password+Yubikey. Note: don't actually enter
+ 
+ a '+', simply enter your FAS password and then press your yubikey or enter your
+ 
+ FreeOTP code.

+ 

+ == Administrating and troubleshooting two factor

+ 

+ Two factor auth is implemented by a modified copy of the

+ https://github.com/mricon/totp-cgi project doing the authentication and

+ pam_url submitting the authentication tokens.

+ 

+ totp-cgi runs on the fas servers (currently fas01.stg and

+ fas01/fas02/fas03 in production), listening on port 8443 for pam_url

+ requests.

+ 

+ FreeOTP, Google authenticator and yubikeys are supported as tokens to

+ use with your password.

+ 

+ === FreeOTP, Google authenticator:

+ 

+ The FreeOTP application is preferred; however, Google Authenticator works as
+ 
+ well. (Note that Google Authenticator is not open source.)

+ 

+ This is handled via totpcgi. There's a command line tool to manage

+ users, totpprov. See 'man totpprov' for more info. Admins can use this

+ tool to revoke lost tokens (google authenticator only) with 'totpprov

+ delete-user username'.

+ 

+ To enroll using FreeOTP or Google Authenticator for production machines,

+ go to https://admin.fedoraproject.org/totpcgiprovision/

+ 

+ To enroll using FreeOTP or Google Authenticator for staging machines, go

+ to https://admin.stg.fedoraproject.org/totpcgiprovision/

+ 

+ You'll be prompted to log in with your FAS username and password.

+ 

+ Note that staging and production differ.

+ 

+ === YubiKeys:

+ 

+ Yubikeys are enrolled and managed in FAS. Users can self-enroll using

+ the fedora-burn-yubikey utility included in the fedora-packager package.

+ 

+ === What do I do if I lose my token?

+ 

+ Send an email to admin@fedoraproject.org that is encrypted/signed with

+ your gpg key from FAS, or that otherwise identifies that you are you.

+ 

+ === How to remove a token (so the user can re-enroll)?

+ 

+ First we MUST verify that the user is who they say they are, using any

+ of the following:

+ 

+ * Personal contact where the person can be verified by member of

+ sysadmin-main.

+ * Correct answers to security questions.

+ * Email request to admin@fedoraproject.org that is gpg encrypted by the

+ key listed for the user in fas.

+ 

+ Then:

+ 

+ . For google authenticator,

+ +

+ ____

+ .. ssh into batcave01 as root

+ .. ssh into os-master01.iad2.fedoraproject.org

+ .. $ oc project fas

+ .. $ oc get pods

+ .. $ oc rsh <pod> (Pick one of totpcgi pods from the above list)

+ .. $ totpprov delete-user <username>

+ ____

+ . For yubikey: login to one of the fas machines and run:

+ /usr/local/bin/yubikey-remove.py username

+ 

+ The user can then go to

+ https://admin.fedoraproject.org/totpcgiprovision/ and reprovision a new

+ device.

+ 

+ If the user emails admin@fedoraproject.org with the signed request, make

+ sure to reply to all indicating that a reset was performed. This is so

+ that other admins don't step in and reset it again after it's been reset

+ once.

@@ -0,0 +1,294 @@ 

+ = Account Deletion SOP

+ 

+ For the most part we do not delete accounts. In the case that a deletion

+ is paramount, it will need to be coordinated with appropriate entities.

+ 

+ Disabling accounts is another story but is limited to those with the

+ appropriate privileges. Reasons for accounts to be disabled can be one

+ of the following:

+ 

+ ____

+ * The person has placed SPAM on the wiki or other sites.
+ 
+ * The account has been compromised by a third party.

+ * A person wishes to leave the Fedora Project and wants the account

+ disabled.

+ ____

+ 

+ == Contents

+ 

+ * <<_disabling>>

+ ** <<_disable_accounts>>

+ ** <<_disable_groups>>

+ * <<_user_requested_disables>>

+ * <<_renames>>

+ ** <<_rename_accounts>>

+ ** <<_rename_groups>>

+ * <<_deletion>>

+ ** <<_delete_accounts>>

+ ** <<_delete_groups>>

+ 

+ === Disabling

+ 

+ Disabling accounts is the easiest to accomplish as it just blocks people

+ from using their account. It does not remove the account name and

+ associated UID so we don't have to worry about future, unintentional

+ collisions.

+ 

+ == Disable Accounts

+ 

+ To begin with, accounts should not be disabled until there is a ticket

+ in the Infrastructure ticketing system. After that, the contents of
+ 
+ the ticket need to be verified (to make sure people aren't playing
+ 
+ pranks or someone is in a crappy mood). This needs to be logged in the
+ 
+ ticket (who looked, what they saw, etc.). Then the account can be
+ 
+ disabled:

+ 

+ ....

+ ssh db02

+ sudo -u postgres psql fas2

+ 

+ fas2=# begin;

+ fas2=# select * from people where username = 'FOOO';

+ ....

+ 

+ Here you need to verify that the account looks right and that there is only
+ 
+ one match. If there are multiple matches or other issues, you need to
+ 
+ contact one of the main sysadmin-db's on how to proceed:

+ 

+ ....

+ fas2=# update people set status = 'admin_disabled' where username = 'FOOO';

+ fas2=# commit;

+ fas2=# \q

+ ....

+ 

+ == Disable Groups

+ 

+ There is no explicit way to disable groups in FAS2. Instead, we close

+ the group for adding new members and optionally remove existing members

+ from it. This can be done from the web UI if you are an administrator of

+ the group or you are in the accounts group. First, go to the group info

+ page. Then click the (edit) link next to Group Details. Make sure that

+ the Invite Only box is checked. This will prevent other users from

+ requesting the group on their own.

+ 

+ If you want to remove the existing users, View the Group info, then

+ click on the View Member List link. Click on All under the Results

+ heading. Then go through and click on Remove for each member.

+ 

+ Doing this in the database instead can be quicker if you have a lot of

+ people to remove. Once again, this requires someone in sysadmin-db to do

+ the work:

+ 

+ ....

+ ssh db02

+ sudo -u postgres psql fas2

+ 

+ fas2=# begin;

+ fas2=# update groups set invite_only = true where name = 'FOOO';

+ fas2=# commit;

+ fas2=# begin;

+ fas2=# select p.name, g.name, r.role_status from people as p, person_roles as r, groups as g

+ where p.id = r.person_id and g.id = r.group_id

+ and g.name = 'FOOO';

+ fas2=# -- Make sure that the list of users in the groups looks correct

+ fas2=# delete from person_roles where person_roles.group_id = (select id from groups where groups.name = 'FOOO');

+ fas2=# -- number of rows in both of the above should match

+ fas2=# commit;

+ fas2=# \q

+ ....

+ 

+ === User Requested Disables

+ 

+ According to our Privacy Policy, a user may request that their personal
+ 
+ information be removed from FAS if they want to disable their account. We can do
+ 
+ this, but it requires some extra work beyond simply setting the account
+ 
+ status to disabled.

+ 

+ == Record User's CLA information

+ 

+ If the user has signed the CLA/FPCA, then they may have contributed

+ something to Fedora that we'll need to contact them about at a later

+ date. For that, we need to keep at least the following information:

+ 

+ * Fedora username

+ * human name

+ * email address

+ 

+ All of this information should be on the CLA email that is sent out when

+ a user signs up. We need to verify with spot (Tom Callaway) that he has

+ that record. If not, we need to get it to him. Something like:

+ 

+ ....

+ select id, username, human_name, email, telephone, facsimile, postal_address from people where username = 'USERNAME';

+ ....

+ 

+ and send it to spot to keep.

+ 

+ == Remove the personal information

+ 

+ The following sequence of db commands should do it:

+ 

+ ....

+ fas2=# begin;

+ fas2=# select * from people where username = 'USERNAME';

+ ....

+ 

+ Here you need to verify that the account looks right and that there is only
+ 
+ one match. If there are multiple matches or other issues, you need to
+ 
+ contact one of the main sysadmin-db's on how to proceed:

+ 

+ ....

+ fas2=# update people set human_name = '', gpg_keyid = null, ssh_key = null, unverified_email = null, comments = null, postal_address = null, telephone = null, facsimile = null, affiliation = null, ircnick = null, status = 'inactive', locale = 'C', timezone = null, latitude = null, longitude = null, country_code = null, email = 'disabled1@fedoraproject.org'  where username = 'USERNAME';

+ ....

+ 

+ Make sure only one record was updated:

+ 

+ ....

+ fas2=# select * from people where username = 'USERNAME';

+ ....

+ 

+ Make sure the correct record was updated:

+ 

+ ....

+ fas2=# commit;

+ ....

+ 

+ [NOTE]

+ .Note

+ ====

+ The email address is both not null and unique in the database. Due to

+ this, you need to set it to a new string for every user who requests

+ deletion like this.

+ ====
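+ 
+ A quick way to see which placeholder addresses are already taken (a sketch to
+ 
+ run in the same psql session) is:
+ 
+ ....
+ 
+ fas2=# select email from people where email like 'disabled%@fedoraproject.org' order by email;
+ 
+ ....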

+ 

+ === Renames

+ 

+ In general, renames do not require as much work as deletions but they

+ still require coordination. This is because renames do not change the

+ UID/GID but some of our applications save information based on

+ username/groupname rather than UID/GID.

+ 

+ == Rename Accounts

+ 

+ [WARNING]

+ .Warning

+ ====

+ Needs more eyes. This list may not be complete.

+ ====

+ 

+ * Check the databases for koji, pkgdb, and bodhi for occurrences of

+ the old username and update them to the new username.

+ * Check fedorapeople.org for home directories and yum repositories under

+ the old username that would need to be renamed

+ * Check (or ask the user to check and update) mailing list subscriptions

+ on fedorahosted.org and lists.fedoraproject.org under the old

+ username@fedoraproject.org email alias

+ * Check whether the user has a username@fedoraproject.org bugzilla

+ account in python-fedora and update that. Also ask the user to update

+ that in bugzilla.

+ * If the user is in a sysadmin-* group, check for home directories on

+ bastion and other infrastructure boxes that are owned by them and need

+ to be renamed (Could also just tell the user to backup any files there

+ themselves b/c they're getting a new home directory).

+ * grep through ansible for occurrences of the username (see the example after this list)

+ * Check for entries in trac on fedorahosted.org for the username as an

+ "Assigned to" or "CC" entry.

+ * Add other places to check here
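+ 
+ A minimal sketch of the ansible grep mentioned above (run on batcave01; the
+ 
+ username is a placeholder):
+ 
+ ....
+ 
+ # search the public ansible working copy for the old username
+ 
+ grep -rn 'oldusername' /srv/web/infra/ansible/
+ 
+ ....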

+ 

+ == Rename Groups

+ 

+ [WARNING]

+ .Warning

+ ====

+ Needs more eyes. This list may not be complete.

+ ====

+ * grep through ansible for occurrences of the group name.

+ * Check for group-members,group-admins,group-sponsors@fedoraproject.org

+ email alias presence in any fedorahosted.org or lists.fedoraproject.org

+ mailing list

+ * Check for entries in trac on fedorahosted.org for the username as an

+ "Assigned to" or "CC" entry.

+ * Add other places to check here

+ 

+ === Deletion

+ 

+ Deletion is the toughest one to audit because it requires that we look

+ through our systems looking for the UID and GID in addition to looking

+ for the username and password. The UID and GID are used on things like

+ filesystem permissions so we have to look there as well. Not catching

+ these places may lead to security issues should the UID/GID ever be

+ reused.

+ 

+ [NOTE]

+ .Note

+ ====

+ Recommended to rename instead. When it is not strictly necessary to purge all
+ 
+ traces of an account, it's highly recommended to rename the user or group
+ 
+ to something like DELETED_oldusername instead of deleting it. This avoids
+ 
+ the problems and additional checking that we have to do below.

+ ====

+ == Delete Accounts

+ 

+ [WARNING]

+ .Warning

+ ====

+ Needs more eyes. This list may be incomplete; more people need to look
+ 
+ at this and find places that may need to be updated.

+ ====

+ * Check everything for the #Rename Accounts case.

+ * Figure out what boxes a user may have had access to in the past. This

+ means you need to look at all the groups a user may ever have been

+ approved for (even if they are not approved for those groups now). For

+ instance, any git*, svn*, bzr*, hg* groups would have granted access to

+ hosted03 and hosted04. packager would have granted access to

+ pkgs.fedoraproject.org. Pretty much any group grants access to

+ fedorapeople.org.

+ * For those boxes, run a find over the files there to see if the UID

+ owns any files on the system:

+ +

+ ....

+ # find / -uid 100068 -print

+ ....

+ +

+ Any files owned by that uid must be reassigned to another user or
+ 
+ removed.

+ 

+ [WARNING]

+ .Warning

+ ====

+ What to do about backups? Backups pose a special problem as they may

+ contain the uid that's being removed. We still need to decide how to handle this.

+ ====

+ * Add other places to check here

+ 

+ == Delete Groups

+ 

+ [WARNING]

+ .Warning

+ ====

+ Needs more eyes. This list may be incomplete; more people need to look
+ 
+ at this and find places that may need to be updated.

+ ====

+ * Check everything for the #Rename Groups case.

+ * Figure out what boxes may have had files owned by that group. This

+ means that you'd need to look at the users in that group, what boxes

+ they have shell accounts on, and then look at those boxes. Groups used

+ for hosted would also need to add hosted03 and hosted04 to that list and

+ the box that serves the hosted mailing lists.

+ * For those boxes, run a find over the files there to see if the GID

+ owns any files on the system:

+ +

+ ....

+ # find / -gid 100068 -print

+ ....

+ +

+ Any files owned by that GID must be reassigned to another group or

+ removed.

+ 

+ [WARNING]

+ .Warning

+ ====

+ What to do about backups? Backups pose a special problem as they may

+ contain the gid that's being removed. We still need to decide how to handle this.

+ ====

+ * Add other places to check here

@@ -0,0 +1,210 @@ 

+ = Anitya Infrastructure SOP

+ 

+ Anitya is used by Fedora to track upstream project releases and map

+ them to downstream distribution packages, including (but not limited to)

+ Fedora.

+ 

+ Anitya staging instance: https://stg.release-monitoring.org

+ 

+ Anitya production instance: https://release-monitoring.org

+ 

+ Anitya project page: https://github.com/fedora-infra/anitya

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, #fedora-apps

+ Persons::

+   zlopez

+ Location::

+   iad2.fedoraproject.org

+ Servers::

+   Production

+   +

+   * os-master01.iad2.fedoraproject.org

+   +

+   Staging

+   +

+   * os-master01.stg.iad2.fedoraproject.org

+ Purpose::

+   Map upstream releases to Fedora packages.

+ 

+ == Hosts

+ 

+ The current deployment is made up of the release-monitoring OpenShift

+ namespace.

+ 

+ === release-monitoring

+ 

+ This OpenShift namespace runs the following pods:

+ 

+ * The apache/mod_wsgi application for release-monitoring.org

+ * A libraries.io SSE client

+ * A service checking for new releases

+ 

+ This OpenShift project relies on:

+ 

+ * A postgres db server running in OpenShift

+ * Lots of external third-party services. The anitya webapp can scrape

+ pypi, rubygems.org, sourceforge and many others on command.

+ * Lots of external third-party services. The check service makes all

+ kinds of requests out to the Internet that can fail in various ways.

+ * Fedora messaging RabbitMQ hub for publishing messages

+ 

+ Things that rely on this host:

+ 

+ * `hotness-sop` is a fedora messaging consumer running in Fedora Infra

+ in OpenShift. It listens for Anitya messages from here and performs

+ actions on koji and bugzilla.

+ 

+ == Releasing

+ 

+ The release process is described in

+ https://anitya.readthedocs.io/en/latest/contributing.html#release-guide[Anitya

+ documentation].

+ 

+ === Deploying

+ 

+ Staging deployment of Anitya is deployed in OpenShift on

+ os-master01.stg.iad2.fedoraproject.org.

+ 

+ To deploy the staging instance of Anitya, push changes to the staging
+ 
+ branch on https://github.com/fedora-infra/anitya[Anitya GitHub]. A GitHub
+ 
+ webhook will then automatically deploy the new version of Anitya on
+ 
+ staging.

+ 

+ Production deployment of Anitya is deployed in OpenShift on

+ os-master01.iad2.fedoraproject.org.

+ 

+ To deploy the production instance of Anitya, push changes to the
+ 
+ production branch on https://github.com/fedora-infra/anitya[Anitya
+ 
+ GitHub]. A GitHub webhook will then automatically deploy the new version of
+ 
+ Anitya on production.

+ 

+ ==== Configuration

+ 

+ To deploy the new configuration, you need

+ https://fedora-infra-docs.readthedocs.io/en/latest/sysadmin-guide/sops/sshaccess.html[ssh

+ access] to batcave01.iad2.fedoraproject.org and

+ https://fedora-infra-docs.readthedocs.io/en/latest/sysadmin-guide/sops/ansible.html[permissions

+ to run the Ansible playbook].

+ 

+ All the following commands should be run from batcave01.

+ 

+ First, ensure there are no configuration changes required for the new

+ update. If there are, update the Ansible anitya role(s) and optionally

+ run the playbook:

+ 

+ ....

+ $ sudo rbac-playbook openshift-apps/release-monitoring.yml

+ ....

+ 

+ The configuration changes can be limited to staging only by using:

+ 

+ ....

+ $ sudo rbac-playbook openshift-apps/release-monitoring.yml -l staging

+ ....

+ 

+ This is recommended for testing new configuration changes.

+ 

+ ==== Upgrading

+ 

+ ===== Staging

+ 

+ To deploy a new version of Anitya to staging, push changes to the staging
+ 
+ branch on https://github.com/fedora-infra/anitya[Anitya GitHub]. A GitHub
+ 
+ webhook will then automatically deploy the new version of Anitya on
+ 
+ staging.

+ 

+ ===== Production

+ 

+ To deploy a new version of Anitya to production, push changes to the production
+ 
+ branch on https://github.com/fedora-infra/anitya[Anitya
+ 
+ GitHub]. A GitHub webhook will then automatically deploy the new version of
+ 
+ Anitya on production.

+ 

+ Congratulations! The new version should now be deployed.

+ 

+ == Administrating release-monitoring.org

+ 

+ The Anitya web application offers some functionality to administer itself.

+ 

+ User admin status is tracked in the Anitya database. Admin users can grant
+ 
+ or revoke admin privileges for users in the

+ https://release-monitoring.org/users[users tab].

+ 

+ Admin users have additional functionality available in the web interface. In
+ 
+ particular, admins can view flagged projects, remove projects, remove
+ 
+ package mappings, etc.

+ 

+ For more information see

+ https://anitya.readthedocs.io/en/stable/admin-user-guide.html[Admin user

+ guide] in Anitya documentation.

+ 

+ === Flags

+ 

+ Anitya lets users flag projects for administrator attention. This is

+ accessible to administrators in the

+ https://release-monitoring.org/flags[flags tab].

+ 

+ == Monitoring

+ 

+ To monitor the activity of Anitya you can connect to Fedora infra

+ OpenShift and look at the state of pods.

+ 

+ For staging, look at the _release-monitoring_ namespace in the

+ https://os.stg.fedoraproject.org/console/project/release-monitoring/overview[staging

+ OpenShift instance].

+ 

+ For production, look at the _release-monitoring_ namespace in the

+ https://os.fedoraproject.org/console/project/release-monitoring/overview[production

+ OpenShift instance].
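+ 
+ If you prefer the command line, a minimal sketch (assuming you are already
+ 
+ logged in on the corresponding os-master01 host) is:
+ 
+ ....
+ 
+ $ oc project release-monitoring
+ 
+ $ oc get pods
+ 
+ $ oc logs <pod>   # inspect one of the pods listed above
+ 
+ ....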

+ 

+ == Troubleshooting

+ 

+ This section contains various issues encountered during deployment or

+ configuration changes and possible solutions.

+ 

+ === Fedmsg messages aren't sent

+ 

+ *Issue:* Fedmsg messages aren't sent.

+ 

+ *Solution:* Set USER environment variable in pod.

+ 

+ *Explanation:* Fedmsg uses the USER environment variable as the username inside
+ 
+ messages. Without USER set, it just crashes and doesn't send anything.
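+ 
+ One way to set the variable is with the oc CLI; a sketch (the deploymentconfig
+ 
+ name and the value are placeholders, check `oc get dc` for the real ones):
+ 
+ ....
+ 
+ $ oc -n release-monitoring set env dc/release-monitoring-web USER=anitya
+ 
+ ....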

+ 

+ === Cronjob is crashing

+ 

+ *Issue:* Cronjob pod is crashing on start, even after configuration

+ change that should fix the behavior.

+ 

+ *Solution:* Restart the cronjob. This could be done by OPS.

+ 

+ *Explanation:* Every time the cronjob is executed after a crash, it tries
+ 
+ to reuse the pod with the bad configuration instead of
+ 
+ creating a new one with the new configuration.

+ 

+ === Database migration is taking too long

+ 

+ *Issue:* Database migration is taking a few hours to complete.

+ 

+ *Solution:* Stop every pod and cronjob before migration.

+ 

+ *Explanation:* When creating a new index or doing some other complex
+ 
+ operation on the database, the migration script needs exclusive access to

+ the database.

+ 

+ === Old version is deployed instead of the new one

+ 

+ *Issue:* The pod is deployed with an old version of Anitya, but it says

+ that it was triggered by correct commit.

+ 

+ *Solution:* Set _dockerStrategy_ in buildconfig.yml to

+ noCache.

+ 

+ *Explanation:* OpenShift caches the layers of docker
+ 
+ containers by default, so if there is no change in the Dockerfile it will just use the
+ 
+ cached version and not run the commands again.

@@ -0,0 +1,249 @@ 

+ = Ansible infrastructure SOP/Information

+ 

+ == Background

+ 

+ Fedora infrastructure used to use func and puppet for system change

+ management. We are now using ansible for all system change management and

+ ad-hoc tasks.

+ 

+ == Overview

+ 

+ Ansible runs from batcave01 or backup01. These hosts run an ssh-agent

+ that has unlocked the ansible root ssh private key. (This is unlocked

+ manually by a human with the passphrase each reboot, the passphrase

+ itself is not stored anywhere on the machines). Using 'sudo -i',

+ sysadmin-main members can use this agent to access any machines with the

+ ansible root ssh public key setup, either with 'ansible' for one-off

+ commands or 'ansible-playbook' to run playbooks.
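+ 
+ For example, a one-off ad-hoc command might look like this (a sketch; the
+ 
+ host group name is only an example):
+ 
+ ....
+ 
+ # run from batcave01, after sudo -i picks up the unlocked ssh-agent
+ 
+ sudo -i ansible proxies -m ping
+ 
+ ....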

+ 

+ Playbooks are idempotent (or should be), meaning you should be able to

+ re-run the same playbook over and over and it should get to a state

+ where 0 items are changing.

+ 

+ Additionally (see below) there is an rbac wrapper that allows members of

+ some other groups to run playbooks against specific hosts.

+ 

+ === GIT repositories

+ 

+ There are 2 git repositories associated with Ansible:

+ 

+ * The Fedora Infrastructure Ansible repository and replicas.

+ 

+ [CAUTION]

+ ====

+ This is a public repository. Never commit private data to this repo.

+ ====

+ 

+ image:ansible-repositories.png[image]

+ 

+ This repository exists as several copies or replicas:

+ 

+ ** The "upstream" repository on Pagure.

+ 

+ https://pagure.io/fedora-infra/ansible

+ 

+ This repository is the public facing place where people can contribute

+ (e.g. pull requests) as well as the authoritative source. Members of the

+ `sysadmin` FAS group or the `fedora-infra` Pagure group have commit

+ access to this repository.

+ 

+ To contribute changes, fork the repository on Pagure and submit a Pull

+ Request. Someone from the aforementioned groups can then review and

+ merge them.

+ 

+ It is recommended that you configure git to use `pull --rebase` by

+ default by running `git config --bool pull.rebase true` in your ansible

+ clone directory. This configuration prevents unneeded merges which can

+ occur if someone else pushes changes to the remote repository while you

+ are working on your own local changes.

+ 

+ ** Two bare mirrors on _batcave01_, `/srv/git/ansible.git`

+ and `/srv/git/mirrors/ansible.git`

+ 

+ [CAUTION]

+ ====

+ These are public repositories. Never commit private data to these

+ repositories. Don't commit or push to these repos directly, unless

+ Pagure is unavailable.

+ ====

+ 

+ The `mirror_pagure_ansible` service on _batcave01_ receives

+ bus messages about changes in the repository on Pagure, fetches these

+ into `/srv/git/mirrors/ansible.git` and pushes from there to

+ `/srv/git/ansible.git`. When this happens, various actions are triggered

+ via git hooks:

+ 

+ *** The working copy at `/srv/web/infra/ansible` is updated.

+ 

+ *** A mail about the changes is sent to _sysadmin-members_.

+ 

+ *** The changes are announced on the message bus, which in turn triggers

+ announcements on IRC.

+ 

+ You can check out the repo locally on _batcave01_ with:

+ 

+ ....

+ git clone /srv/git/ansible.git

+ ....

+ 

+ If the Ansible repository on Pagure is unavailable, members of the

+ _sysadmin_ group may commit directly, provided this

+ procedure is followed:

+ [arabic]

+ . The synchronization service is stopped and disabled:

+ 

+ ....

+ sudo systemctl disable --now mirror_pagure_ansible.service

+ ....

+ . Changes are applied to the repository on _batcave01_.

+ . After Pagure is available again, the changes are pushed to the

+ repository there.

+ . The synchronization service is enabled and started:

+ 

+ ....

+ sudo systemctl enable --now mirror_pagure_ansible.service

+ ....

+ ** `/srv/web/infra/ansible` on _batcave01_, the working copy

+ from which playbooks are run.

+ 

+ [CAUTION]

+ ====

+ This is a public repository. Never commit private data to this repo.

+ Don't commit or push to this repo directly, unless Pagure is

+ unavailable.

+ ====

+ +

+ You can access it also via a cgit web interface at:

+ https://pagure.io/fedora-infra/ansible/

+ 

+ * `/srv/git/ansible-private` on _batcave01_.

+ 

+ [CAUTION]

+ ====

+ This is a private repository for passwords and other sensitive data. It

+ is not available in cgit, nor should it be cloned or copied remotely.

+ ====

+ 

+ This repository is only accessible to members of 'sysadmin-main'.

+ 

+ === Cron job/scheduled runs

+ 

+ A daily cron job runs run_ansible-playbook_cron.py, which
+ 
+ walks through the playbooks and runs them with the _--check --diff_
+ 
+ parameters to perform a dry-run.

+ 

+ This way we make sure all the playbooks are idempotent and there are no

+ unexpected changes on servers (or playbooks).
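+ 
+ The same kind of dry-run can be done by hand against a single playbook; a
+ 
+ sketch (the playbook path is illustrative):
+ 
+ ....
+ 
+ sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/groups/proxies.yml --check --diff
+ 
+ ....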

+ 

+ === Logging

+ 

+ We have in place a callback plugin that stores history for any

+ ansible-playbook runs and then sends a report each day to

+ sysadmin-logs-members with any CHANGED or FAILED actions. Additionally,

+ there's a fedmsg plugin that reports start and end of ansible playbook

+ runs to the fedmsg bus. Ansible also logs to syslog verbose reporting of

+ when and what commands and playbooks were run.

+ 

+ === role based access control for playbooks

+ 

+ There's a wrapper script on _batcave01_ called 'rbac-playbook' that allows

+ non-sysadmin-main members to run specific playbooks against specific

+ groups of hosts. This is part of the ansible_utils package. The upstream

+ for ansible_utils is: https://bitbucket.org/tflink/ansible_utils

+ 

+ To add a new group:

+ 

+ [arabic]

+ . add the playbook name and sysadmin group to the rbac-playbook

+ (ansible-private repo)

+ . add that sysadmin group to sudoers on batcave01 (also in

+ ansible-private repo)

+ 

+ To use the wrapper:

+ 

+ ....

+ sudo rbac-playbook playbook.yml

+ ....

+ 

+ == Directory setup

+ 

+ === Inventory

+ 

+ The inventory directory tells ansible all the hosts that are managed by

+ it and the groups they are in. All files in this dir are concatenated

+ together, so you can split out groups/hosts into separate files for

+ readability. They are in ini file format.

+ 

+ Additionally under the inventory directory are host_vars and group_vars

+ subdirectories. These are files named for the host or group and

+ containing variables to set for that host or group. You should strive to

+ set variables in the highest level possible, and precedence is in:

+ global, group, host order.
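+ 
+ A minimal sketch of the ini layout (host and group names are placeholders):
+ 
+ ....
+ 
+ [proxies]
+ 
+ proxy01.iad2.fedoraproject.org
+ 
+ proxy02.iad2.fedoraproject.org
+ 
+ [staging]
+ 
+ proxy01.stg.iad2.fedoraproject.org
+ 
+ ....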

+ 

+ === Vars

+ 

+ This directory contains global variables as well as OS specific

+ variables. Note that in order to use the OS specific ones you must have

+ 'gather_facts' as 'True' or ansible will not have the facts it needs to

+ determine the OS.

+ 

+ === Roles

+ 

+ Roles are a collection of tasks/files/templates that can be used on any

+ host or group of hosts that all share that role. In other words, roles

+ should be used except in cases where configuration only applies to a

+ single host. Roles can be reused between hosts and groups and are more

+ portable/flexible than tasks or specific plays.

+ 

+ === Scripts

+ 

+ In the ansible git repo under scripts are a number of utility scripts for

+ sysadmins.

+ 

+ === Playbooks

+ 

+ In the ansible git repo there's a directory for playbooks. The top level

+ contains utility playbooks for sysadmins. These playbooks perform

+ one-off functions or gather information. Under this directory are hosts

+ and groups playbooks. These playbooks are for specific hosts and groups

+ of hosts, from provision to fully configured. You should only use a host

+ playbook in cases where there will never be more than one of that thing.

+ 

+ === Tasks

+ 

+ This directory contains one-off tasks that are used in playbooks. Some

+ of these should be migrated to roles (we had this setup before roles

+ existed in ansible). Those that are truly only used on one host/group

+ could stay as isolated tasks.

+ 

+ === Syntax

+ 

+ Ansible now warns about deprecated syntax. Please fix any cases you see
+ 
+ related to deprecation warnings.

+ 

+ Templates use the jinja2 syntax.

+ 

+ == Libvirt virtuals

+ 

+ * TODO: add steps to make new libvirt virtuals in staging and production

+ * TODO: merge in new-hosts.txt

+ 

+ == Cloud Instances

+ 

+ * TODO: add how to make new cloud instances

+ * TODO: merge in from ansible README file.

+ 

+ == rdiff-backups

+ 

+ see:

+ https://fedora-infra-docs.readthedocs.io/en/latest/sysadmin-guide/sops/rdiff-backup.html

+ 

+ == Additional Reading/Resources

+ 

+ Upstream docs:::

+   https://docs.ansible.com/

+ Example repo with all kinds of examples:::

+   * https://github.com/ansible/ansible-examples

+   * https://gist.github.com/marktheunissen/2979474

+ Jinja2 docs:::

+   http://jinja.pocoo.org/docs/

@@ -0,0 +1,31 @@ 

+ = apps-fp-o SOP

+ 

+ Updating and maintaining the landing page at

+ https://apps.fedoraproject.org/

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-apps, #fedora-admin

+ Servers:::

+   proxy0*

+ Purpose:::

+   Have a nice landing page for all our webapps.

+ 

+ == Description

+ 

+ We have a number of webapps, many of which our users don't know about.

+ This page was created so there was a central place where users could

+ stumble through them and learn.

+ 

+ The page is generated by an ansible role in ansible/roles/apps-fp-o/. It

+ makes use of an RPM package, the source code for which is at

+ https://github.com/fedora-infra/apps.fp.o

+ 

+ You can update the page by updating the apps.yaml file in that ansible

+ role.

+ 

+ When ansible is run next, the two ansible handlers should see your

+ changes and regenerate the static html and json data for the page.
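+ 
+ The schema of apps.yaml is defined by the apps.fp.o package, so check the
+ 
+ existing file for the real layout; a purely hypothetical entry could look
+ 
+ something like:
+ 
+ ....
+ 
+ # hypothetical fragment -- follow the structure of the existing apps.yaml
+ 
+ - name: Bodhi
+ 
+   url: https://bodhi.fedoraproject.org/
+ 
+   description: The Fedora updates system.
+ 
+ ....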

@@ -0,0 +1,104 @@ 

+ = How to Archive Old Fedora Releases

+ 

+ The Fedora download servers contain terabytes of data, and to allow for

+ mirrors to not have to take all of that data, infrastructure regularly

+ moves data for end-of-life releases (from `/pub/fedora/linux`) to the
+ 
+ archives section (`/pub/archive/fedora/linux`).

+ 

+ == Steps Involved

+ 

+ [arabic]

+ . Log into batcave01.iad2.fedoraproject.org and ssh to bodhi-backend01

+ +

+ [source]

+ ----

+ $ sudo -i ssh root@bodhi-backend01.iad2.fedoraproject.org

+ # su - ftpsync

+ ----

+ 

+ . Then change into the releases directory.

+ +

+ [source]

+ ----

+ $ cd /pub/fedora/linux/releases

+ ----

+ 

+ . Check to see that the target directory doesn't already exist.

+ +

+ [source]

+ ----

+ $ ls /pub/archive/fedora/linux/releases/

+ ----

+ 

+ . If the target directory does not already exist, do a recursive link

+ copy of the tree you want to the target

+ +

+ [source]

+ ----

+ $ cp -lvpnr 21 /pub/archive/fedora/linux/releases/21

+ ----

+ 

+ . If the target directory already exists, then we need to do a recursive

+ rsync to update any changes in the trees since the previous copy.

+ +

+ [source]

+ ----

+ $ rsync -avAXSHP --delete ./21/ /pub/archive/fedora/linux/releases/21/

+ ----

+ 

+ . We now do the updates and updates/testing in similar ways.

+ +

+ [source]

+ ----

+ $ cd ../updates/

+ $ cp -lpnr 21 /pub/archive/fedora/linux/updates/21

+ $ cd testing

+ $ cp -lpnr 21 /pub/archive/fedora/linux/updates/testing/21

+ ----

+ +

+ Alternative if this is a later refresh of an older copy.

+ +

+ [source]

+ ----

+ $ cd ../updates/

+ $ rsync -avAXSHP 21/ /pub/archive/fedora/linux/updates/21/

+ $ cd testing

+ $ rsync -avAXSHP 21/ /pub/archive/fedora/linux/updates/testing/21/

+ ----

+ 

+ . Do the same with fedora-secondary.

+ 

+ . Announce to the mirror list this has been done and that in 2 weeks you

+ will move the old trees to archives.

+ 

+ . In two weeks, log into mm-backend01 and run the archive script

+ +

+ [source]

+ ----

+ $ sudo -u mirrormanager mm2_move-to-archive --originalCategory="Fedora Linux" --archiveCategory="Fedora Archive" --directoryRe='/21/Everything'

+ ----

+ 

+ . If there are problems, the postgres DB may have issues and so you need

+ to get a DBA to update the backend to fix items.

+ 

+ . Wait an hour or so then you can remove the files from the main tree.

+ +

+ [source]

+ ----

+ $ ssh bodhi-backend01

+ $ cd /pub/fedora/linux

+ $ cd releases/21

+ $ ls # make sure you have stuff here

+ $ rm -rf *

+ $ ln ../20/README .

+ $ cd ../../updates/21

+ $ ls #make sure you have stuff here

+ $ rm -rf *

+ $ ln ../20/README .

+ $ cd ../testing/21

+ $ ls # make sure you have stuff here

+ $ rm -rf *

+ $ ln ../20/README .

+ ----

+ 

+ This should complete the archiving.

@@ -0,0 +1,206 @@ 

+ = Fedora ARM Infrastructure

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main, sysadmin-releng

+ Location::

+   Phoenix

+ Servers::

+   arm01, arm02, arm03, arm04

+ Purpose::

+   Information on working with the arm SOCs

+ 

+ == Description

+ 

+ We have 4 arm chassis in phx2, each containing 24 SOCs (System On Chip).

+ 

+ Each chassis has 2 physical network connections going out from it. The

+ first one is used for the management interface on each SOC. The second

+ one is used for eth0 for each SOC.

+ 

+ Current allocations (2016-03-11):

+ 

+ arm01::

+   primary builders attached to koji.fedoraproject.org

+ arm02::

+   primary arch builders attached to koji.fedoraproject.org

+ arm03::

+   In cloud network, public qa/packager and copr instances

+ arm04::

+   primary arch builders attached to koji.fedoraproject.org

+ 

+ == Hardware Configuration

+ 

+ Each SOC has:

+ 

+ * eth0 and eth1 (unused) and a management interface.

+ * 4 cores

+ * 4GB ram

+ * a 300GB disk

+ 

+ SOCs are addressed by:

+ 

+ ....

+ arm{chassisnumber}-builder{number}.arm.fedoraproject.org

+ ....

+ 

+ Where chassisnumber is 01 to 04 and number is 00-23

+ 

+ == PXE installs

+ 

+ Kickstarts for the machines are in the kickstarts repo.

+ 

+ PXE config is on noc01. (or cloud-noc01.cloud.fedoraproject.org for

+ arm03)

+ 

+ The kickstart installs the latest Fedora and sets them up with a base

+ package set.

+ 

+ == IPMI tool Management

+ 

+ The SOCs are managed via their mgmt interfaces using a custom ipmitool

+ as well as a custom python script called 'cxmanage'. The ipmitool

+ changes have been submitted upstream and cxmanage is under review in

+ Fedora.

+ 

+ The ipmitool is currently installed on noc01 and it has the ability to talk

+ to them on their management interface. noc01 also serves dhcp and is a

+ pxeboot server for the SOCs.

+ 

+ However you will need to add it to your path:

+ 

+ ....

+ export PATH=$PATH:/opt/calxeda/bin/

+ ....

+ 

+ Some common commands:

+ 

+ To set the SOC to boot the next time only with pxe:

+ 

+ ....

+ ipmitool -U admin -P thepassword -H arm03-builder11-mgmt.arm.fedoraproject.org chassis bootdev pxe

+ ....

+ 

+ To set the SOC power off:

+ 

+ ....

+ ipmitool -U admin -P thepassword -H arm03-builder11-mgmt.arm.fedoraproject.org power off

+ ....

+ 

+ To set the SOC power on:

+ 

+ ....

+ ipmitool -U admin -P thepassword -H arm03-builder11-mgmt.arm.fedoraproject.org power on

+ ....

+ 

+ To get a serial over lan console from the SOC:

+ 

+ ....

+ ipmitool -U admin -P thepassword -H arm03-builder11-mgmt.arm.fedoraproject.org -I lanplus sol activate

+ ....

+ 

+ == DISK mapping

+ 

+ Each SOC has a disk. They are, however, mapped to the internal 00-23 numbering in a
+ 
+ non-direct manner:

+ 

+ [cols="1,1,1,1"]

+ |===

+ |HDD Bay |EnergyCard |SOC (Port 1) |SOC Num

+ |0 |0 | 3 | 03

+ |1 |0 | 0 | 00

+ |2 |0 | 1 | 01

+ |3 |0 | 2 | 02

+ |4 |1 | 3 | 07

+ |5 |1 | 0 | 04

+ |6 |1 | 1 | 05

+ |7 |1 | 2 | 06

+ |8 |2 | 3 | 11

+ |9 |2 | 0 | 08

+ |10 |2 | 1 | 09

+ |11 |2 | 2 | 10

+ |12 |3 | 3 | 15

+ |13 |3 | 0 | 12

+ |14 |3 | 1 | 13

+ |15 |3 | 2 | 14

+ |16 |4 | 3 | 19

+ |17 |4 | 0 | 16

+ |18 |4 | 1 | 17

+ |19 |4 | 2 | 18

+ |20 |5 | 3 | 23

+ |21 |5 | 0 | 20

+ |22 |5 | 1 | 21

+ |23 |5 | 2 | 22

+ |===

+ 

+ Looking at the system from the front, the bay numbering starts from left

+ to right.

+ 

+ == cxmanage

+ 

+ The cxmanage tool can be used to update firmware or gather diag info.

+ 

+ Until cxmanage is packaged, you can use it from a python virtualenv:

+ 

+ ....

+ virtualenv --system-site-packages cxmanage

+ cd cxmanage

+ source bin/activate

+ pip install --extra-index-url=http://sources.calxeda.com/python/packages/ cxmanage

+ <use cxmanage>

+ deactivate

+ ....

+ 

+ Some cxmanage commands

+ 

+ ....

+ cxmanage sensor arm03-builder00-mgmt.arm.fedoraproject.org 

+ Getting sensor readings...

+ 1 successes  |  0 errors  |  0 nodes left  |  .  

+ 

+ MP Temp 0

+ arm03-builder00-mgmt.arm.fedoraproject.org: 34.00 degrees C

+ Minimum         : 34.00 degrees C

+ Maximum         : 34.00 degrees C

+ Average         : 34.00 degrees C

+ ... (and about 20 more sensors)...

+ ....

+ 

+ ....

+ cxmanage info arm03-builder00-mgmt.arm.fedoraproject.org 

+ Getting info...

+ 1 successes  |  0 errors  |  0 nodes left  |  .  

+ 

+ [ Info from arm03-builder00-mgmt.arm.fedoraproject.org ]

+ Hardware version   : EnergyCard X04

+ Firmware version   : ECX-1000-v2.1.5

+ ECME version       : v0.10.2

+ CDB version        : v0.10.2

+ Stage2boot version : v1.1.3

+ Bootlog version    : v0.10.2

+ A9boot version     : v2012.10.16-3-g66a3bf3

+ Uboot version      : v2013.01-rc1_cx_2013.01.17

+ Ubootenv version   : v2013.01-rc1_cx_2013.01.17

+ DTB version        : v3.7-4114-g34da2e2

+ ....

+ 

+ firmware update:

+ 

+ ....

+ cxmanage --internal-tftp 10.5.126.41:6969 --all-nodes fwupdate package ECX-1000_update-v2.1.5.tar.gz arm03-builder00-mgmt.arm.fedoraproject.org

+ ....

+ 

+ (Note that this runs against the 00 management interface for the chassis
+ 
+ and updates all the nodes, and that we must run a tftp server on port
+ 
+ 6969 for firewall handling.)

+ 

+ == Links

+ 

+ http://sources.calxeda.com/python/packages/cxmanage/

+ 

+ == Contacts

+ 

+ help.desk@boston.co.uk is the contact to send repair requests to.

@@ -0,0 +1,152 @@ 

+ = Amazon Web Services Access

+ 

+ AWS includes a highly granular set of access policies, which can be

+ combined into roles and groups. Ipsilon is used to translate between IAM

+ policy groupings and groups in the Fedora Account System (FAS). Tags and

+ namespaces are used to keep role resources separate.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Persons::

+   nirik, pfrields

+ Location::

+   ?

+ Servers::

+   N/A

+ Purpose::

+   Provide AWS resource access to contributors via FAS group membership.

+ 

+ == Accessing the AWS Console

+ 

+ To access the AWS Console via Ipsilon authentication, use

+ https://id.fedoraproject.org/saml2/SSO/Redirect?SPIdentifier=urn:amazon:webservices&RelayState=https://console.aws.amazon.com[this

+ SAML link].

+ 

+ You must be in the

+ https://accounts.fedoraproject.org/group/aws-iam/[aws-iam FAS

+ group] (or another group with access) to perform this action.

+ 

+ === Adding a role to AWS IAM

+ 

+ Sign into AWS via the URL above, and visit

+ https://console.aws.amazon.com/iam/home[Identity and Access Management

+ (IAM)] in the Security, Identity and Compliance tools.

+ 

+ Choose Roles to view current roles. Confirm there is not already a role

+ matching the one you need. If not, create a new role as follows:

+ 

+ [arabic]

+ . Select _Create role_.

+ . Select _SAML 2.0 federation_.

+ . Choose the SAML provider _id.fedoraproject.org_, which should already

+ be populated as a choice from previous use.

+ . Select the attribute _SAML:aud_. For value, enter

+ _https://signin.aws.amazon.com/saml_. Do not add a condition. Proceed to

+ the next step.

+ . Assign the appropriate policies from the pre-existing IAM policies.

+ It's unlikely you'll have to create your own, which is outside the scope

+ of this SOP. Then proceed to the next step.

+ . Set the role name and description. It is recommended you use the

+ _same_ role name as the FAS group for clarity. Fill in a longer

+ description to clarify the purpose of the role. Then choose _Create

+ role_.

+ 

+ Note or copy the Role ARN (Amazon Resource Name) for the new role.

+ You'll need this in the mapping below.

+ 

+ === Adding a group to FAS

+ 

+ When finished, login to FAS and create a group to correspond to the new

+ role. Use the prefix _aws-_ to denote new AWS roles in FAS. This makes

+ them easier to locate in a search.

+ 

+ It may be appropriate to set group ownership for _aws-_ groups to an

+ Infrastructure team principal, and then add others as users or sponsors.

+ This is especially worth considering for groups that have modify (full)

+ access to an AWS resource.

+ 

+ === Adding an IAM role mapping in Ipsilon

+ 

+ Add the new role mapping for FAS group to Role ARN in the ansible git

+ repo, under _roles/ipsilon/files/infofas.py_. Current mappings look like

+ this:

+ 

+ ....

+ aws_groups = {

+     'aws-master': 'arn:aws:iam::125523088429:role/aws-master',

+     'aws-iam': 'arn:aws:iam::125523088429:role/aws-iam',

+     'aws-billing': 'arn:aws:iam::125523088429:role/aws-billing',

+     'aws-atomic': 'arn:aws:iam::125523088429:role/aws-atomic',

+     'aws-s3-readonly': 'arn:aws:iam::125523088429:role/aws-s3-readonly'

+ }

+ ....

+ 

+ Add your mapping to the dictionary as shown. Start a new build/rollout

+ of the ipsilon project in openshift to make the changes live.
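+ 
+ A sketch of kicking off that rollout with the oc CLI (the project and build
+ 
+ config names are assumptions, check the ipsilon project in OpenShift for the
+ 
+ real ones):
+ 
+ ....
+ 
+ $ oc -n ipsilon start-build ipsilon --follow
+ 
+ ....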

+ 

+ === User accounts

+ 

+ If you only need to use the web interface to aws, a role (and associated

+ policy) should be all you need. However, if you need CLI access, you

+ will need a user and a token. Users should be named the same as the role

+ they are associated with.

+ 

+ === Role and User policies

+ 

+ Each Role (and user if there is a user needed for the role) should have

+ the same policy attached to it. Policies are named

+ 'fedora-$rolename-$service' ie, 'fedora-infra-ec2'. A copy of polices is

+ available in the ansible repo under files/aws/iam/policies. These are in

+ json form.

+ 

+ Policies are set up such that roles/users can do most things with a

+ resource if it's untagged. If it's tagged it MUST be tagged with their

+ group: FedoraGroup / $groupname. If it's tagged with another group name,

+ they cannot do anything with or to that resource. (Aside from seeing it

+ exists).

+ 

+ If there's a permission you need, please file a ticket and it will be

+ evaluated.

+ 

+ Users MUST keep tokens private and secure. YOU are responsible for all

+ use of tokens issued to you from Fedora Infrastructure. Report any

+ compromised or possibly public tokens as soon as you are aware.

+ 

+ Users MUST tag resources with their FedoraGroup tag within one day, or

+ the resource may be removed.

+ 

+ === ec2

+ 

+ Users/roles with ec2 permissions should always tag their instances with

+ their FedoraGroup as soon as possible. Untagged resources can be

+ terminated at any time.
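+ 
+ A minimal tagging sketch with the aws CLI (the instance ID and group value
+ 
+ are placeholders):
+ 
+ ....
+ 
+ aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=FedoraGroup,Value=infra
+ 
+ ....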

+ 

+ === s3

+ 

+ Users/roles with s3 permissions will be given specific bucket(s) that

+ they can manage/use. Care should be taken to make sure nothing in them

+ is public that should not be.

+ 

+ === cloudfront

+ 

+ Please file a ticket if you need cloudfront and infrastructure will do

+ any needed setup if approved.

+ 

+ == Regions

+ 

+ Users/groups are encouraged to use regions 'near' them or wherever makes

+ the most sense. If you are trying to create ec2 instances you will need

+ infrastructure to create a vpc in the region with network, etc. File a

+ ticket for such requests.

+ 

+ == Other Notes

+ 

+ AWS resource access that is not read-only should be treated with care.

+ In some cases, Amazon or other entities may absorb AWS costs, so changes

+ in usage can cause issues if not controlled or monitored. If you have

+ doubts about access, consult the Fedora Project Leader or Fedora

+ Engineering Manager.

@@ -0,0 +1,43 @@ 

+ = Fedora Bastion Hosts

+ 

+ == Contact Information

+ 

+ Owner::

+   sysadmin-main

+ Contact::

+   admin@fedoraproject.org

+ Location::

+   iad2

+ Servers::

+   bastion01, bastion02

+ Purpose::

+   background and description of bastion hosts and their unique issues.

+ 

+ == Description

+ 

+ There are 2 primary bastion hosts in the _iad2_ datacenter. One will be

+ active at any given time and the second will be a hot spare, ready to

+ take over. Switching between bastion hosts is currently a manual process

+ that requires changes in ansible.

+ 

+ All of the bastion hosts have an external IP that is mapped into them.

+ The reverse dns for these IPs is controlled by RHIT, so any changes must

+ be carefully coordinated.

+ 

+ The active bastion host performs the following functions:

+ 

+ * Outgoing smtp from fedora servers. This includes email aliases,

+ mailing list posts, build and commit notices, etc.

+ 

+ * Incoming smtp from servers in _iad2_ or on the fedora vpn. Incoming mail

+ directly from the outside is NOT accepted or forwarded.

+ 

+ * ssh access to all _iad2/vpn_ connected servers.

+ 

+ * openvpn hub. This is the hub that all vpn clients connect to and talk

+ to each other through. Taking down or stopping this service will be a major

+ outage of services as all proxy and app servers use the vpn to talk to

+ each other.

+ 

+ When rebuilding these machines, care must be taken to match up the dns

+ names externally, and to preserve the ssh host keys.

@@ -0,0 +1,156 @@ 

+ = Blockerbugs Infrastructure SOP

+ 

+ https://pagure.io/fedora-qa/blockerbugs[Blockerbugs] is an app developed

+ by Fedora QA to aid in tracking items related to release blocking and

+ freeze exception bugs in branched Fedora releases.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_file_locations>>

+ * <<_building_for_infra>>

+ * <<_upgrading>>

+ ** <<_upgrade_preparation_all_upgrades>>

+ ** <<_minor_upgrades_no_database_changes>>

+ ** <<_major_upgrades_with_database_changes>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora QA Devel

+ Contact::

+   #fedora-qa

+ Location::

+   iad2

+ Servers::

+   blockerbugs01.iad2, blockerbugs02.iad2, blockerbugs01.stg.iad2

+ Purpose::

+   Hosting the https://pagure.io/fedora-qa/blockerbugs[blocker bug

+   tracking application] for QA

+ 

+ == File Locations

+ 

+ `/etc/blockerbugs/settings.py` - configuration for the app

+ 

+ === Node Roles

+ 

+ blockerbugs01.stg.iad2::

+   the staging instance, it is not load balanced

+ blockerbugs01.iad2::

+   one of the load balanced production nodes, it is responsible for

+   running bugzilla/bodhi/koji sync

+ blockerbugs02.iad2::

+   the other load balanced production node. It does not do any sync

+   operations

+ 

+ == Building for Infra

+ 

+ === Do not use mock

+ 

+ For whatever reason, the `epel7-infra` koji tag rejects SRPMs with the

+ `el7.centos` dist tag. Make sure that you build SRPMs with:

+ 

+ ....

+ rpmbuild -bs --define='dist .el7' blockerbugs.spec

+ ....

+ 

+ Also note that this expects the release tarball to be in

+ `~/rpmbuild/SOURCES/`.

+ 

+ === Building with Koji

+ 

+ You'll need to ask someone who has rights to build into `epel7-infra`

+ tag to make the build for you:

+ 

+ ....

+ koji build epel7-infra blockerbugs-0.4.4.11-1.el7.src.rpm

+ ....

+ 

+ [NOTE]

+ ====

+ The fun bit of this is that `python-flask` is only available on `x86_64`

+ builders. If your build is routed to one of the non-x86_64, it will

+ fail. The only solution available to us is to keep submitting the build

+ until it's routed to one of the x86_64 builders and doesn't fail.

+ ====

+ 

+ Once the build is complete, it should be automatically tagged into

+ `epel7-infra-stg` (after a ~15 min delay), so that you can test it on

+ blockerbugs staging instance. Once you've verified it's working well,

+ ask someone with infra rights to move it to `epel7-infra` tag so that

+ you can update it in production.

+ 

+ == Upgrading

+ 

+ Blockerbugs is currently configured through ansible and all

+ configuration changes need to be done through ansible.

+ 

+ === Upgrade Preparation (all upgrades)

+ 

+ Blockerbugs is not packaged in epel, so the new build needs to exist in

+ the infrastructure stg repo for deployment to stg or the infrastructure

+ repo for deployments to production.

+ 

+ See the blockerbugs documentation for instructions on building a

+ blockerbugs RPM.

+ 

+ === Minor Upgrades (no database changes)

+ 

+ Run the following on *both* `blockerbugs01.iad2` and

+ `blockerbugs02.iad2` if updating in production.

+ 

+ [arabic]

+ . Update ansible with config changes, push changes to the ansible repo:

+ +

+ ....

+ roles/blockerbugs/templates/blockerbugs-settings.py.j2

+ ....

+ . Clear yum cache and update the blockerbugs RPM:

+ +

+ ....

+ yum clean expire-cache && yum update blockerbugs

+ ....

+ . Restart httpd to reload the application:

+ +

+ ....

+ service httpd restart

+ ....

+ 

+ === Major Upgrades (with database changes)

+ 

+ Run the following on *both* `blockerbugs01.iad2` and

+ `blockerbugs02.iad2` if updating in production.

+ 

+ [arabic]

+ . Update ansible with config changes, push changes to the ansible repo:

+ +

+ ....

+ roles/blockerbugs/templates/blockerbugs-settings.py.j2

+ ....

+ . Stop httpd on *all* relevant instances (if load balanced):

+ +

+ ....

+ service httpd stop

+ ....

+ . Clear yum cache and update the blockerbugs RPM on all relevant

+ instances:

+ +

+ ....

+ yum clean expire-cache && yum update blockerbugs

+ ....

+ . Upgrade the database schema:

+ +

+ ....

+ blockerbugs upgrade_db

+ ....

+ . Check the upgrade by running a manual sync to make sure that nothing

+ unexpected went wrong:

+ +

+ ....

+ blockerbugs sync

+ ....

+ . Start httpd back up:

+ +

+ ....

+ service httpd start

+ ....

@@ -0,0 +1,431 @@ 

+ = Bodhi Infrastructure SOP

+ 

+ Bodhi is used by Fedora developers to submit potential package updates

+ for releases and to manage buildroot overrides. From here, bodhi handles

+ all of the dirty work, from sending around emails, dealing with Koji, to

+ composing the repositories.

+ 

+ Bodhi production instance: https://bodhi.fedoraproject.org

+ 

+ Bodhi project page: https://github.com/fedora-infra/bodhi

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_adding_a_new_pending_release>>

+ * <<_pre_beta_bodhi_config>>

+ * <<_post_beta_bodhi_config>>

+ * <<_0_day_release_actions>>

+ * <<_configuring_all_bodhi_nodes>>

+ * <<_pushing_updates>>

+ * <<_monitoring_the_bodhi_composer_output>>

+ * <<_resuming_a_failed_push>>

+ * <<_performing_a_bodhi_upgrade>>

+ * <<_syncing_the_production_database_to_staging>>

+ * <<_release_eol>>

+ * <<_adding_notices_to_the_front_page_or_new_update_form>>

+ * <<_using_the_bodhi_shell_to_modify_updates_by_hand>>

+ * <<_using_the_bodhi_shell_to_fix_uniqueness_problems_with_e_mail_addresses>>

+ * <<_troubleshooting_and_resolution>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Persons::

+   bowlofeggs

+ Location::

+   iad2

+ Servers::

+   * bodhi-backend01.iad2.fedoraproject.org (composer)

+   * os.fedoraproject.org (web front end and backend task workers for

+   non-compose tasks)

+   * bodhi-backend01.stg.iad2.fedoraproject.org (staging composer)

+   * os.stg.fedoraproject.org (staging web front end and backend task

+   workers for non-compose tasks)

+ Purpose::

+   Push package updates, and handle new submissions.

+ 

+ == Adding a new pending release

+ 

+ Adding and modifying releases is done using the

+ _bodhi-manage-releases_ tool.

+ 

+ You can add a new pending release by running this command:

+ 

+ ....

+ bodhi-manage-releases create --name F23 --long-name "Fedora 23" --id-prefix FEDORA --version 23 --branch f23 --dist-tag f23 --stable-tag f23-updates --testing-tag f23-updates-testing --candidate-tag f23-updates-candidate --pending-stable-tag f23-updates-pending --pending-testing-tag f23-updates-testing-pending --override-tag f23-override --state pending

+ ....

+ 

+ == Pre-Beta Bodhi config

+ 

+ Enable pre_beta policy in bodhi config in ansible.::::

+   ansible/roles/bodhi2/base/templates/production.ini.j2

+ 

+ Uncomment or add the following lines:

+ 

+ ....

+ #f29.status = pre_beta

+ #f29.pre_beta.mandatory_days_in_testing = 3

+ #f29.pre_beta.critpath.min_karma = 1

+ #f29.pre_beta.critpath.stable_after_days_without_negative_karma = 14

+ ....

+ 

+ == Post-Beta Bodhi config

+ 

+ Enable post_beta policy in bodhi config in ansible.::::

+   ansible/roles/bodhi2/base/templates/production.ini.j2

+ 

+ Comment or remove the following lines corresponding to pre_beta policy:

+ 

+ ....

+ #f29.status = pre_beta

+ #f29.pre_beta.mandatory_days_in_testing = 3

+ #f29.pre_beta.critpath.min_karma = 1

+ #f29.pre_beta.critpath.stable_after_days_without_negative_karma = 14

+ ....

+ 

+ Uncomment or add the following lines for the post_beta policy:

+ 

+ ....

+ #f29.status = post_beta

+ #f29.post_beta.mandatory_days_in_testing = 7

+ #f29.post_beta.critpath.min_karma = 2

+ #f29.post_beta.critpath.stable_after_days_without_negative_karma = 14

+ ....

+ 

+ == 0-day Release Actions

+ 

+ * update atomic config

+ * run the ansible playbook

+ 

+ Going from pending to a proper release in bodhi requires a few steps:

+ 

+ Change state from pending to current:

+ 

+ ....

+ bodhi-manage-releases edit --name F23 --state current

+ ....

+ 

+ You may also need to disable any pre-beta or post-beta policy defined in

+ the bodhi config in ansible:

+ 

+ ....

+ ansible/roles/bodhi2/base/templates/production.ini.j2

+ ....

+ 

+ Comment out or remove the lines related to the pre-beta and post-beta policy:

+ 

+ ....

+ #f29.status = post_beta

+ #f29.post_beta.mandatory_days_in_testing = 7

+ #f29.post_beta.critpath.min_karma = 2

+ #f29.post_beta.critpath.stable_after_days_without_negative_karma = 14

+ #f29.status = pre_beta

+ #f29.pre_beta.mandatory_days_in_testing = 3

+ #f29.pre_beta.critpath.min_karma = 1

+ #f29.pre_beta.critpath.stable_after_days_without_negative_karma = 14

+ ....

+ 

+ == Configuring all bodhi nodes

+ 

+ Run this command from the _ansible_ checkout to configure

+ all of bodhi in production:

+ 

+ ....

+ # This will configure the backends

+ $ sudo rbac-playbook playbooks/groups/bodhi2.yml

+ # This will configure the frontend

+ $ sudo rbac-playbook openshift-apps/bodhi.yml

+ ....

+ 

+ == Pushing updates

+ 

+ SSH into the `bodhi-backend01` machine and run:

+ 

+ ....

+ $ sudo -u apache bodhi-push

+ ....

+ 

+ You can restrict the updates by release and/or request:

+ 

+ ....

+ $ sudo -u apache bodhi-push --releases f23,f22 --request stable

+ ....

+ 

+ You can also push specific builds:

+ 

+ ....

+ $ sudo -u apache bodhi-push --builds openssl-1.0.1k-14.fc22,openssl-1.0.1k-14.fc23

+ ....

+ 

+ This will display a list of updates that are ready to be pushed.

+ 

+ == Monitoring the bodhi composer output

+ 

+ You can monitor the bodhi composer via the `bodhi` CLI tool, or via the

+ systemd journal on `bodhi-backend01`:

+ 

+ ....

+ # From the comfort of your own laptop.

+ $ bodhi composes list

+ # From bodhi-backend01

+ $ journalctl -f -u fedmsg-hub

+ ....

+ 

+ == Resuming a failed push

+ 

+ If a push fails for some reason, you can easily resume it on

+ `bodhi-backend01` by running:

+ 

+ ....

+ $ sudo -u apache bodhi-push --resume

+ ....

+ 

+ == Performing a bodhi upgrade

+ 

+ === Build Bodhi

+ 

+ Bodhi is deployed from the infrastructure Koji repositories. At the time

+ of this writing, it is deployed from the `f29-infra` and `f29-infra-stg`

+ (for staging) repositories. Bodhi is built for these repositories from

+ the `master` branch of the

+ https://src.fedoraproject.org/rpms/bodhi[bodhi dist-git repository].

+ 

+ As an example, to build a Bodhi beta for the `f29-infra-stg` repository,

+ you can use this command:

+ 

+ ....

+ $ rpmbuild --define "dist .fc29.infra" -bs bodhi.spec

+ Wrote: /home/bowlofeggs/rpmbuild/SRPMS/bodhi-3.13.0-0.0.beta.e0ca5bc.fc29.infra.src.rpm

+ $ koji build f29-infra /home/bowlofeggs/rpmbuild/SRPMS/bodhi-3.13.0-0.0.beta.e0ca5bc.fc29.infra.src.rpm

+ ....

+ 

+ When building a Bodhi release that is intended for production, we should

+ build from the production dist-git repo instead of uploading an SRPM:

+ 

+ ....

+ $ koji build f29-infra git+https://src.fedoraproject.org/rpms/bodhi.git#d64f40408876ec85663ec52888c4e44d92614b37

+ ....

+ 

+ All builds against the `f29-infra` build target will go into the

+ `f29-infra-stg` repository. If you wish to promote a build from staging

+ to production, you can do something like this command:

+ 

+ ....

+ $ koji move-build f29-infra-stg f29-infra bodhi-3.13.0-1.fc29.infra

+ ....

+ 

+ === Staging

+ 

+ The upgrade playbook will apply configuration changes after running the

+ alembic upgrade. Sometimes you may need changes applied to the Bodhi

+ systems in order to get the upgrade playbook to succeed. If you are in

+ this situation, you can apply those changes by running the bodhi-backend

+ playbook:

+ 

+ ....

+ sudo rbac-playbook -l staging groups/bodhi-backend.yml

+ ....

+ 

+ In the 

+ https://pagure.io/fedora-infra/ansible/blob/main/f/inventory/group_vars/os_masters_stg[os_masters inventory],

+ edit the `bodhi_version` setting to the version you wish to deploy to

+ staging. For example, to deploy `bodhi-3.13.0-1.fc29.infra` to staging,

+ I would set that variable like this:

+ 

+ ....

+ bodhi_version: "bodhi-3.13.0-1.fc29.infra"

+ ....

+ 

+ Run these commands:

+ 

+ ....

+ # Synchronize the database from production to staging

+ $ sudo rbac-playbook manual/staging-sync/bodhi.yml -l staging

+ # Upgrade the Bodhi backend on staging

+ $ sudo rbac-playbook manual/upgrade/bodhi.yml -l staging

+ # Upgrade the Bodhi frontend on staging

+ $ sudo rbac-playbook openshift-apps/bodhi.yml -l staging

+ ....

+ 

+ === Production

+ 

+ The upgrade playbook will apply configuration changes after running the

+ alembic upgrade. Sometimes you may need changes applied to the Bodhi

+ systems in order to get the upgrade playbook to succeed. If you are in

+ this situation, you can apply those changes by running the bodhi-backend

+ playbook:

+ 

+ ....

+ sudo rbac-playbook groups/bodhi-backend.yml -l bodhi-backend

+ ....

+ 

+ In the 

+ https://pagure.io/fedora-infra/ansible/blob/main/f/inventory/group_vars/os_masters[os_masters inventory],

+ edit the `bodhi_version` setting to the version you wish to deploy to

+ production. For example, to deploy `bodhi-3.13.0-1.fc29.infra` to

+ production, I would set that variable like this:

+ 

+ ....

+ bodhi_version: "bodhi-3.13.0-1.fc29.infra"

+ ....

+ 

+ To update the bodhi RPMs in production:

+ 

+ ....

+ # Update the backend VMs (this will also run the migrations, if any)

+ $ sudo rbac-playbook manual/upgrade/bodhi.yml -l bodhi-backend

+ # Update the frontend

+ $ sudo rbac-playbook openshift-apps/bodhi.yml

+ ....

+ 

+ == Syncing the production database to staging

+ 

+ This can be useful for testing issues with production data in staging:

+ 

+ ....

+ $ sudo rbac-playbook manual/staging-sync/bodhi.yml -l staging

+ ....

+ 

+ == Release EOL

+ 

+ ....

+ bodhi-manage-releases edit --name F21 --state archived

+ ....

+ 

+ == Adding notices to the front page or new update form

+ 

+ You can easily add notification messages to the front page of bodhi

+ using the _frontpage_notice_ option in

+ _ansible/roles/bodhi2/base/templates/production.ini.j2_. If

+ you want to flash a message on the New Update Form, you can use the

+ _newupdate_notice_ variable instead. This can be useful for

+ announcing things like service outages, etc.

+ 
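
+ As a hedged sketch, the corresponding lines in _production.ini.j2_ might

+ look like this (the option names come from above; the message text is

+ only an illustration):

+ 

+ ....

+ frontpage_notice = Bodhi will be unavailable on Tuesday for a database migration.

+ newupdate_notice = Koji is currently under maintenance; updates may take longer to test.

+ ....

+ 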

+ == Using the Bodhi Shell to modify updates by hand

+ 

+ The "bodhi shell" is a Python shell with the SQLAlchemy session and

+ transaction manager initialized. It can be run from any

+ production/staging backend instance and allows you to modify any models

+ by hand.

+ 

+ ....

+ sudo pshell /etc/bodhi/production.ini

+ 

+ # Execute a script that sets up the `db` and provides a `delete_update` function.

+ # This will eventually be shipped in the bodhi package, but can also be found here.

+ # https://raw.githubusercontent.com/fedora-infra/bodhi/develop/tools/shelldb.py

+ >>> execfile('shelldb.py')

+ ....

+ 

+ At this point you have access to a _db_ SQLAlchemy Session

+ instance, a _t_ _transaction module_, and

+ _m_ for the _bodhi.models_.

+ 

+ ....

+ # Fetch an update, and tweak it as necessary.

+ >>> up = m.Update.get(u'FEDORA-2016-4d226a5f7e', db)

+ 

+ # Commit the transaction

+ >>> t.commit()

+ ....

+ 

+ Here is an example of merging two updates together and deleting the

+ original.

+ 

+ ....

+ >>> up = m.Update.get(u'FEDORA-2016-4d226a5f7e', db)

+ >>> up.builds

+ [<Build {'epoch': 0, 'nvr': u'resteasy-3.0.17-2.fc24'}>, <Build {'epoch': 0, 'nvr': u'pki-core-10.3.5-1.fc24'}>]

+ >>> b = up.builds[0]

+ >>> up2 = m.Update.get(u'FEDORA-2016-5f63a874ca', db)

+ >>> up2.builds

+ [<Build {'epoch': 0, 'nvr': u'resteasy-3.0.17-3.fc24'}>]

+ >>> up.builds.remove(b)

+ >>> up.builds.append(up2.builds[0])

+ >>> delete_update(up2)

+ >>> t.commit()

+ ....

+ 

+ == Using the Bodhi shell to fix uniqueness problems with e-mail addresses

+ 

+ Bodhi currently enforces uniqueness on user e-mail addresses. There is

+ https://github.com/fedora-infra/bodhi/issues/2387[an issue] filed to

+ drop this upstream, but for the time being the constraint is enforced.

+ This can be a problem for users who have more than one FAS account if

+ they make one account use an e-mail address that was previously used by

+ another account, if that other account has not logged into Bodhi since

+ it was changed to use a different address. One way the user can fix this

+ themselves is to log in to Bodhi with the old account so that Bodhi

+ learns about its new address. However, an admin can also fix this by

+ hand by using the Bodhi shell.

+ 

+ For example, suppose a user has created `user_1` and `user_2`. Suppose

+ that `user_1` used to use `email_a@example.com` but has been changed to

+ use `email_b@example.com` in FAS, and `user_2` is now configured to use

+ `email_a@example.com` in FAS. If `user_2` attempts to log in to Bodhi,

+ it will cause a uniqueness violation since Bodhi does not know that

+ `user_1` has changed to `email_b@example.com`. The user can simply log

+ in as `user_1` to fix this, which will cause Bodhi to update its e-mail

+ address to `email_b@example.com`. Or an admin can fix it with a shell on

+ one of the Bodhi backend servers like this:

+ 

+ ....

+ [bowlofeggs@bodhi-backend02 ~][PROD]$ sudo -u apache pshell /etc/bodhi/production.ini

+ 2018-05-29 20:21:36,366 INFO  [bodhi][MainThread] Using python-bugzilla

+ 2018-05-29 20:21:36,367 DEBUG [bodhi][MainThread] Using Koji Buildsystem

+ 2018-05-29 20:21:42,559 INFO  [bodhi.server][MainThread] Bodhi ready and at your service!

+ Python 2.7.14 (default, Mar 14 2018, 13:36:31) 

+ [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)] on linux2

+ Type "help" for more information.

+ 

+ Environment:

+   app          The WSGI application.

+   registry     Active Pyramid registry.

+   request      Active request object.

+   root         Root of the default resource tree.

+   root_factory Default root factory used to create `root`.

+ 

+ Custom Variables:

+   m            bodhi.server.models

+ 

+ >>> u = m.User.query.filter_by(name=u'user_1').one()

+ >>> u.email = u'email_b@example.com'

+ >>> m.Session().commit()

+ ....

+ 

+ == Troubleshooting and Resolution

+ 

+ === Atomic OSTree compose failure

+ 

+ If the Atomic OSTree compose fails with some sort of

+ _Device or Resource busy_ error, then run _mount_ to see if there

+ are any stray _tmpfs_ mounts still active:

+ 

+ ....

+ tmpfs on /var/lib/mock/fedora-22-updates-testing-x86_64/root/var/tmp/rpm-ostree.bylgUq type tmpfs (rw,relatime,seclabel,mode=755)

+ ....

+ 

+ You can then

+ _umount /var/lib/mock/fedora-22-updates-testing-x86_64/root/var/tmp/rpm-ostree.bylgUq_

+ and resume the push again.

+ 

+ === nfs repodata cache IOError

+ 

+ Sometimes you may hit an IOError during the updateinfo.xml generation

+ process from createrepo_c:

+ 

+ ....

+ IOError: Cannot open /mnt/koji/mash/updates/epel7-160228.1356/../epel7.repocache/repodata/repomd.xml: File /mnt/koji/mash/updates/epel7-160228.1356/../epel7.repocache/repodata/repomd.xml doesn't exists or not a regular file

+ ....

+ 

+ This issue will be resolved with NFSv4, but in the mean time it can be

+ worked around by removing the _.repocache_ directory and

+ resuming the push:

+ 

+ ....

+ rm -fr /mnt/koji/mash/updates/epel7.repocache

+ ....

@@ -0,0 +1,116 @@ 

+ = Bugzilla Sync Infrastructure SOP

+ 

+ We do not run bugzilla.redhat.com. If bugzilla itself is down we need to

+ get in touch with Red Hat IT or one of the bugzilla hackers (for

+ instance, Dave Lawrence (dkl)) in order to fix it.

+ 

+ Infrastructure has some scripts that perform administrative functions on

+ bugzilla.redhat.com. These scripts sync information from FAS and the

+ Package Database into bugzilla.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_description>>

+ * <<_troubleshooting_and_resolution>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Persons::

+   abadger1999

+ Location::

+   Phoenix, Denver (Tummy), Red Hat Infrastructure

+ Servers::

+   (fas1, app5) => Need to migrate these to bapp1, bugzilla.redhat.com

+ Purpose::

+   Sync Fedora information to bugzilla.redhat.com

+ 

+ == Description

+ 

+ At present there are two scripts that sync information from Fedora into

+ bugzilla.

+ 

+ === export-bugzilla.py

+ 

+ `export-bugzilla.py` is the first script. It is responsible for syncing

+ Fedora Accounts into bugzilla. It adds Fedora packages and bug triagers

+ into a bugzilla group that gives the users extra permissions within

+ bugzilla. This script is run off of a cron job on FAS1. The source code

+ resides in the FAS git repo in `fas/scripts/export-bugzilla.*` however

+ the code we run on the servers presently lives in ansible:

+ 

+ ....

+ roles/fas_server/files/export-bugzilla

+ ....

+ 

+ === pkgdb-sync-bugzilla

+ 

+ The other script is pkgdb-sync-bugzilla. It is responsible for syncing

+ the package owners and cclists to bugzilla from the pkgdb. The script

+ runs off a cron job on app5. The source code is in the packagedb bzr

+ repo is

+ `packagedb/fedora-packagedb-stable/server-scripts/pkgdb-sync-bugzilla.*`.

+ Just like FAS, a separate copy is presently installed from ansible to

+ `/usr/local/bin/pkgdb-sync-bugzilla` but that should change ASAP as the

+ present fedora-packagedb package installs

+ `/usr/bin/pkgdb-sync-bugzilla`.

+ 

+ == Troubleshooting and Resolution

+ 

+ === Errors while syncing bugzilla with the PackageDB

+ 

+ One frequent problem is that people will sign up to watch a package in

+ the packagedb but their email address in FAS isn't a bugzilla email

+ address. When this happens the scripts that try to sync the packagedb

+ information to bugzilla encounter an error and send an email like this:

+ 

+ ....

+ Subject: Errors while syncing bugzilla with the PackageDB

+ 

+ The following errors were encountered while updating bugzilla with information

+ from the Package Database.  Please have the problems taken care of:

+ 

+ ({'product': u'Fedora', 'component': u'aircrack-ng', 'initialowner': u'baz@zardoz.org',

+ 'initialcclist': [u'foo@bar.org', u'baz@zardoz.org']}, 504, 'The name foo@bar.org is not a

+ valid username.  \n    Either you misspelled it, or the person has not\n    registered for a

+ Red Hat Bugzilla account.')

+ ....

+ 

+ When this happens we attempt to contact the person with the problematic

+ mail address and get them to change it. Here's a boilerplate message:

+ 

+ ....

+ To: foo@bar.org

+ Subject: Fedora Account System Email vs Bugzilla Email

+ 

+ Hello,

+ 

+ You are signed up to receive bug reports against the aircrack-ng package

+ in Fedora.  Unfortunately, the email address we have for you in the

+ Fedora Account System is not a valid bugzilla email address.  That means

+ that bugzilla won't send you mail and we're getting errors in the script

+ that syncs the cclist into bugzilla.

+ 

+ There's a few ways to resolve this:

+ 

+ 1) Create a new bugzilla account with the email foo@bar.org as

+ an account at https://bugzilla.redhat.com.

+ 

+ 2) Change an existing account on https://bugzilla.redhat.com to use the

+ foo@bar.org email address.

+ 

+ 3) Change your email address in https://admin.fedoraproject.org/accounts

+ to use an email address that matches with an existing bugzilla email

+ address.

+ 

+ Please let me know what you want to do!

+ 

+ Thank you,

+ ....

+ 

+ If the user does not reply someone in the cvsadmin group needs to go

+ into the pkgdb and remove the user from the cclist for the package.

@@ -0,0 +1,73 @@ 

+ = bugzilla2fedmsg SOP

+ 

+ Receive events from bugzilla over the RH "unified messagebus" and

+ rebroadcast them over our own fedmsg bus.

+ 

+ Source code: https://github.com/fedora-infra/bugzilla2fedmsg

+ 

+ == Contact Information

+ 

+ Owner::

+   Messaging SIG, Fedora Infrastructure Team

+ Contact::

+   #fedora-apps, #fedora-fedmsg, #fedora-admin, #fedora-noc

+ Servers::

+   bugzilla2fedmsg01

+ Purpose::

+   Rebroadcast bugzilla events on our bus.

+ 

+ == Description

+ 

+ bugzilla2fedmsg is a small service running as the 'moksha-hub' process

+ which receives events from bugzilla via the RH "unified messagebus" and

+ rebroadcasts them to our fedmsg bus.

+ 

+ [NOTE]

+ ====

+ Unlike _all_ of our other fedmsg services, this one runs as the

+ 'moksha-hub' process and not as the 'fedmsg-hub'.

+ ====

+ 

+ The bugzilla2fedmsg package provides a plugin to the moksha-hub that

+ connects out over the STOMP protocol to a 'fabric' of JBOSS activemq

+ FUSE brokers living in the Red Hat DMZ. We authenticate with a cert/key

+ pair that is kept in _/etc/pki/fedmsg/_. Those brokers should push

+ bugzilla events over STOMP to our moksha-hub daemon. When a message

+ arrives, we query bugzilla about the change to get some 'more

+ interesting' data to stuff in our payload, then we sign the message

+ using a fedmsg cert and fire it off to the rest of our bus.

+ 

+ This service has no database, no memcached usage. It depends on those

+ STOMP brokers and being able to query bugzilla.rh.com.

+ 

+ == Relevant Files

+ 

+ All managed by ansible, of course:

+ 

+ ....

+ STOMP config: /etc/moksha/production.ini

+ fedmsg config: /etc/fedmsg.d/

+ certs: /etc/pki/fedmsg

+ code: /usr/lib/python2.7/site-packages/bugzilla2fedmsg.py

+ ....

+ 

+ == Useful Commands

+ 

+ To look at logs, run:

+ 

+ ....

+ $ journalctl -u moksha-hub -f

+ ....

+ 

+ To restart the service, run:

+ 

+ ....

+ $ systemctl restart moksha-hub

+ ....

+ 

+ == Internal Contacts

+ 

+ If we need to contact someone from the RH internal "unified messagebus"

+ team, search for "unified messagebus" in mojo. It is operated as a joint

+ project between RHIT and PnT Devops. See also the `#devops-message` IRC

+ channel, internally.

@@ -0,0 +1,68 @@ 

+ = Collectd SOP

+ 

+ Collectd ( https://collectd.org/ ) is a client/server setup that gathers

+ system information from clients and allows the server to display that

+ information over various time periods.

+ 

+ Our server instance runs on log01.phx2.fedoraproject.org and most other

+ servers run clients that connect to the server and provide it with data.

+ 

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_configuration>>

+ * <<_web_interface>>

+ * <<_restarting>>

+ * <<_removing_old_hosts>>

+ * <<_bug_reporting>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Location::

+   https://admin.fedoraproject.org/collectd/

+ Servers::

+   log01 and all/most other servers as clients

+ Purpose::

+   provide load and system information on servers.

+ 

+ == Configuration

+ 

+ The collectd roles configure collectd on the various machines:

+ 

+ ....

+ collectd/base - This is the base client role for most servers.

+ collectd/server - This is the server for use on log01.

+ collectd/other - There's various other subroles for different types of clients.

+ ....

+ 

+ == Web interface

+ 

+ The server web interface is available at:

+ 

+ https://admin.fedoraproject.org/collectd/

+ 

+ == Restarting

+ 

+ collectd runs as a normal systemd or sysvinit service, so you can:

+ systemctl restart collectd or service collectd restart to restart it.

+ 
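
+ For example:

+ 

+ ....

+ # on systemd hosts

+ $ sudo systemctl restart collectd

+ # on sysvinit hosts

+ $ sudo service collectd restart

+ ....

+ 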

+ == Removing old hosts

+ 

+ Collectd keeps information around until it's deleted, so you may

+ occasionally need to remove data from a host or hosts that are no longer

+ used. To do this:

+ 

+ [arabic]

+ . Login to log01

+ . `cd /var/lib/collectd/rrd`

+ . `sudo rm -rf oldhostname`

+ 

+ == Bug reporting

+ 

+ Collectd is in Fedora/EPEL and we use their packages, so report bugs to

+ bugzilla.redhat.com.

@@ -0,0 +1,26 @@ 

+ = Compose Tracker SOP

+ 

+ Compose Tracker tracks pungi composes and, for any compose that does not

+ reach the FINISHED state, creates a ticket in a pagure repo with a tail of

+ the debug log and the koji tasks associated with it.

+ 

+ Compose Tracker: https://pagure.io/releng/compose-tracker

+ 

+ Failed Composes Repo: https://pagure.io/releng/failed-composes

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Release Engineering Team

+ Contact::

+   #fedora-releng

+ Persons::

+   dustymabe mohanboddu

+ Purpose::

+   Track failed composes

+ 

+ == More Information

+ 

+ For information about the tool and deployment on Fedora Infra Openshift

+ please look at the documentation in

+ https://pagure.io/releng/compose-tracker/blob/master/f/README.md

@@ -0,0 +1,126 @@ 

+ = Content Hosting Infrastructure SOP

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main, fedora-infrastructure-list

+ Location::

+   Phoenix

+ Servers::

+   secondary1, netapp[1-3], torrent1

+ Purpose::

+   Policy regarding hosting, removal and pruning of content.

+ Scope::

+   download.fedora.redhat.com, alt.fedoraproject.org,

+   archives.fedoraproject.org, secondary.fedoraproject.org,

+   torrent.fedoraproject.org

+ 

+ == Description

+ 

+ Fedora hosts both Fedora content and some non-Fedora content. Our

+ resources are finite and as such we have to have some policy around when

+ to remove old content. This SOP describes the test to remove content.

+ The spirit of this SOP is to allow more people to host content and give

+ it a try, prove that it's useful. If it's not popular or useful, it will

+ get removed. Also out of date or expired content will be removed.

+ 

+ === What hosting options are available

+ 

+ Aside from the hosting at https://pagure.io/ we have a series of mirrors

+ we're allowing people to use. They are located at:

+ 

+ * http://archive.fedoraproject.org/pub/archive/ - For archives of

+ historical Fedora releases

+ * http://secondary.fedoraproject.org/pub/fedora-secondary/ - For

+ secondary architectures

+ * http://alt.fedoraproject.org/pub/alt/ - For misc content / catchall

+ * http://torrent.fedoraproject.org/ - For torrent hosting

+ * http://spins.fedoraproject.org/ - For official Fedora Spins hosting,

+ mirrored somewhat

+ * http://download.fedoraproject.org/pub/ - For official Fedora Releases,

+ mirrored widely

+ 

+ === Who can host? What can be hosted?

+ 

+ Any official Fedora content can be hosted and made available for mirroring.

+ Official content is determined by the Council by virtue of allowing

+ people to use the Fedora trademark. People representing these teams will

+ be allowed to host.

+ 

+ === Non Official Hosting

+ 

+ People wanting to host unofficial bits may request approval for hosting.

+ Create a ticket at https://pagure.io/fedora-infrastructure/ explaining

+ what you want hosted and why Fedora should host it. Requests will be

+ reviewed by the Fedora Infrastructure team.

+ 

+ Requests for non-official hosting that may conflict with existing Fedora

+ policies will be escalated to the Council for approval.

+ 

+ === Licensing

+ 

+ Anything hosted with Fedora must come with a Free software license that

+ is approved by Fedora. See http://fedoraproject.org/wiki/Licensing for

+ more.

+ 

+ == Requesting Space

+ 

+ * Make sure you have a Fedora account at

+ https://admin.fedoraproject.org/accounts/

+ * Ensure you have signed the Fedora Project Contributor Agreement (FPCA)

+ * Submit a hosting request at https://pagure.io/fedora-infrastructure/

+ ** Include who you are, and any group you are working with (e.g. a SIG)

+ ** Include Space requirements

+ ** Include an estimate of the number of downloads expected (if you can).

+ ** Include the nature of the bits you want to host.

+ * Apply for the hosted-content group at

+ https://admin.fedoraproject.org/accounts/group/view/hosted-content

+ 

+ == Using Space

+ 

+ A dedicated namespace in the mirror will be assigned to you. It will be

+ your responsibility to upload content, remove old content, stay within

+ your quota, etc. If you have any questions or concerns about this please

+ let us know. Generally you will use rsync. For example:

+ 

+ ....

+ rsync -av --progress ./my.iso secondary01.fedoraproject.org:/srv/pub/alt/mySpace/

+ ....

+ 

+ [IMPORTANT]

+ ====

+ None of our mirrored content is backed up. Ensure that you keep backups

+ of your content.

+ ====

+ 

+ == Content Pruning / Purging / Removal

+ 

+ The following guidelines / tests will be used to determine whether or

+ not to remove content from the mirror.

+ 

+ === Expired / Old Content

+ 

+ If content meets any of the following criteria it may be removed:

+ 

+ * Content that has reached the end of life (is no longer receiving

+ updates).

+ * Pre-release content that has been superseded.

+ * EOL releases that have been moved to archives.

+ * N-2 or greater releases. If more than 3 versions of a piece of content

+ are on the mirror, the oldest may be removed.

+ 

+ === Limited Use Content

+ 

+ If content meets any of the following criteria it may be removed:

+ 

+ * Content with exceedingly limited seeders or downloaders, with little

+ prospect of increasing those numbers and which is older than 1 year.

+ * Content such as videos or audio which are several years old.

+ 

+ === Catch All Removal

+ 

+ Fedora reserves the right to remove any content for any reason at any

+ time. We'll do our best to host things but sometimes we'll need space or

+ just need to remove stuff for legal or policy reasons.

@@ -0,0 +1,427 @@ 

+ = Copr

+ 

+ Copr is a build system for third-party packages.

+ 

+ Frontend:::

+   * http://copr.fedorainfracloud.org/

+ Backend:::

+   * http://copr-be.cloud.fedoraproject.org/

+ Package signer:::

+   * copr-keygen.cloud.fedoraproject.org

+ Dist-git:::

+   * copr-dist-git.fedorainfracloud.org

+ Devel instances (NO NEED TO CARE ABOUT THEM, JUST THOSE ABOVE):::

+   * http://copr-fe-dev.cloud.fedoraproject.org/

+   * http://copr-be-dev.cloud.fedoraproject.org/

+   * copr-keygen-dev.cloud.fedoraproject.org

+   * copr-dist-git-dev.fedorainfracloud.org

+ 

+ == Contact Information

+ 

+ Owner::

+   msuchy (mirek)

+ Contact::

+   #fedora-admin, #fedora-buildsys

+ Location::

+   Fedora Cloud

+ Purpose::

+   Build system

+ 

+ == This document

+ 

+ This document provides condensed information allowing you to keep Copr

+ alive and working. For more sophisticated business processes, please see

+ https://docs.pagure.org/copr.copr/maintenance_documentation.html

+ 

+ == TROUBLESHOOTING

+ 

+ Almost every problem with Copr is due to a problem with spawning builder

+ VMs, or with processing the action queue on the backend.

+ 

+ === VM spawning/termination problems

+ 

+ Try to restart copr-backend service:

+ 

+ ....

+ $ ssh root@copr-be.cloud.fedoraproject.org

+ $ systemctl restart copr-backend

+ ....

+ 

+ If this doesn't solve the problem, try to follow logs for some clues:

+ 

+ ....

+ $ tail -f /var/log/copr-backend/{vmm,spawner,terminator}.log

+ ....

+ 

+ As a last resort, you can terminate all builders and let copr-backend

+ throw away all information about them. This action will obviously

+ interrupt all running builds and reschedule them:

+ 

+ ....

+ $ ssh root@copr-be.cloud.fedoraproject.org

+ $ systemctl stop copr-backend

+ $ cleanup_vm_nova.py

+ $ redis-cli

+ > FLUSHALL

+ $ systemctl start copr-backend

+ ....

+ 

+ Sometimes OpenStack cannot handle spawning too many VMs at the same

+ time. It is safer then to edit the following on _copr-be.cloud.fedoraproject.org_:

+ 

+ ....

+ vi /etc/copr/copr-be.conf

+ ....

+ 

+ and change:

+ 

+ ....

+ group0_max_workers=12

+ ....

+ 

+ to "6". Start copr-backend service and some time later increase it to

+ original value. Copr automaticaly detect change in script and increase

+ number of workers.

+ 

+ The set of aarch64 VMs isn't maintained by OpenStack, but by Copr's

+ backend itself. Steps to diagnose:

+ 

+ ....

+ $ ssh root@copr-be.cloud.fedoraproject.org

+ [root@copr-be ~][PROD]# systemctl status resalloc

+ ● resalloc.service - Resource allocator server

+ ...

+ 

+ [root@copr-be ~][PROD]# less /var/log/resallocserver/main.log

+ 

+ [root@copr-be ~][PROD]# su - resalloc

+ 

+ [resalloc@copr-be ~][PROD]$ resalloc-maint resource-list

+ 13569 - aarch64_01_prod_00013569_20190613_151319 pool=aarch64_01_prod tags=aarch64 status=UP

+ 13597 - aarch64_01_prod_00013597_20190614_083418 pool=aarch64_01_prod tags=aarch64 status=UP

+ 13594 - aarch64_02_prod_00013594_20190614_082303 pool=aarch64_02_prod tags=aarch64 status=STARTING

+ ...

+ 

+ [resalloc@copr-be ~][PROD]$ resalloc-maint ticket-list

+ 879 - state=OPEN tags=aarch64 resource=aarch64_01_prod_00013569_20190613_151319

+ 918 - state=OPEN tags=aarch64 resource=aarch64_01_prod_00013608_20190614_135536

+ 904 - state=OPEN tags=aarch64 resource=aarch64_02_prod_00013594_20190614_082303

+ 919 - state=OPEN tags=aarch64

+ ...

+ ....

+ 

+ Be careful when there's some resource in `STARTING` state. If that's so,

+ check

+ `/usr/bin/tail -F -n +0 /var/log/resallocserver/hooks/013594_alloc`.

+ Copr takes tickets from the resalloc server; if the resources fail to

+ spawn, the tickets are not assigned an appropriately tagged resource for

+ a long time.

+ 

+ If that happens (it shouldn't) and there's some inconsistency between

+ resalloc's database and the actual status on aarch64 hypervisors

+ (`ssh copr@virthost-aarch64-os0{1,2}.fedorainfracloud.org`) - use

+ `virsh` there to introspect their statuses - use

+ `resalloc-maint resource-delete`, `resalloc ticket-close` or `psql`

+ commands to fix-up the resalloc's DB.

+ 

+ === Backend Troubleshooting

+ 

+ Information about status of Copr backend services:

+ 

+ ....

+ systemctl status copr-backend*.service

+ ....

+ 

+ Utilization of workers:

+ 

+ ....

+ ps axf

+ ....

+ 

+ Worker processes change their $0 (process title) to show which task they

+ are working on and on which builder.

+ 

+ To list which VM builders are tracked by copr-vmm service:

+ 

+ ....

+ /usr/bin/copr_get_vm_info.py

+ ....

+ 

+ === Appstream builder troubleshooting

+ 

+ Appstream builder is painfully slow when running on a repository with a

+ huge amount of packages. See

+ https://github.com/hughsie/appstream-glib/issues/301 . You might need to

+ disable it for some projects:

+ 

+ ....

+ $ ssh root@copr-be.cloud.fedoraproject.org

+ $ cd /var/lib/copr/public_html/results/<owner>/<project>/

+ $ touch .disable-appstream

+ # You should probably also delete existing appstream data because

+ # they might be obsolete

+ $ rm -rf ./appdata

+ ....

+ 

+ === Backend action queue issues

+ 

+ First check the _number of not-yet-processed actions_. If that

+ number isn't equal to zero, and is not decreasing relatively fast (say, a

+ single action takes longer than 30s) -- there might be some problem.

+ Logs for the action dispatcher can be found in:

+ 

+ ....

+ /var/log/copr-backend/action_dispatcher.log

+ ....

+ 

+ Check that there is no stuck process under the `Action dispatch` parent

+ process in `pstree -a copr` output.

+ 

+ == Deploy information

+ 

+ Using playbooks and rbac:

+ 

+ ....

+ $ sudo rbac-playbook groups/copr-backend.yml

+ $ sudo rbac-playbook groups/copr-frontend-cloud.yml

+ $ sudo rbac-playbook groups/copr-keygen.yml

+ $ sudo rbac-playbook groups/copr-dist-git.yml

+ ....

+ 

+ The

+ https://pagure.io/copr/copr/blob/main/f/copr-setup.txt[copr-setup.txt]

+ manual is severely outdated, but there is

+ no up-to-date alternative. We should extract useful information from it

+ and put it here in the SOP or into

+ https://docs.pagure.org/copr.copr/maintenance_documentation.html and

+ then throw the _copr-setup.txt_ away.

+ 

+ The copr-backend service should be running on the backend (it spawns

+ several processes). The backend spawns VMs in the Fedora Cloud. You cannot

+ log in to those machines directly. You have to:

+ 

+ ....

+ $ ssh root@copr-be.cloud.fedoraproject.org

+ $ su - copr

+ $ copr_get_vm_info.py

+ # find IP address of the VM that you want

+ $ ssh root@172.16.3.3

+ ....

+ 

+ Instances can be easily terminated in

+ https://fedorainfracloud.org/dashboard

+ 

+ === Order of start up

+ 

+ When reprovisioning, start the copr-keygen and copr-dist-git machines

+ first (in any order). Then you can start copr-be. You can start it sooner,

+ but make sure that the copr-* services are stopped.

+ 

+ The copr-fe machine is completely independent and can be started at any

+ time. If the backend is stopped, it will just queue jobs.

+ 

+ == Logs

+ 

+ === Backend

+ 

+ * /var/log/copr-backend/action_dispatcher.log

+ * /var/log/copr-backend/actions.log

+ * /var/log/copr-backend/backend.log

+ * /var/log/copr-backend/build_dispatcher.log

+ * /var/log/copr-backend/logger.log

+ * /var/log/copr-backend/spawner.log

+ * /var/log/copr-backend/terminator.log

+ * /var/log/copr-backend/vmm.log

+ * /var/log/copr-backend/worker.log

+ 

+ And several logs for non-essential features such as

+ copr_prune_results.log, hitcounter.log, cleanup_vms.log, that you

+ shouldn't be worried with.

+ 

+ === Frontend

+ 

+ * /var/log/copr-frontend/frontend.log

+ * /var/log/httpd/access_log

+ * /var/log/httpd/error_log

+ 

+ === Keygen

+ 

+ * /var/log/copr-keygen/main.log

+ 

+ === Dist-git

+ 

+ * /var/log/copr-dist-git/main.log

+ * /var/log/httpd/access_log

+ * /var/log/httpd/error_log

+ 

+ == Services

+ 

+ === Backend

+ 

+ * copr-backend

+ ** copr-backend-action

+ ** copr-backend-build

+ ** copr-backend-log

+ ** copr-backend-vmm

+ * redis

+ * lighttpd

+ 

+ All the _copr-backend-*.service_ are configured to be a part

+ of the _copr-backend.service_ so e.g. in case of restarting

+ all of them, just restart the _copr-backend.service_.

+ 

+ === Frontend

+ 

+ * httpd

+ * postgresql

+ 

+ === Keygen

+ 

+ * signd

+ 

+ === Dist-git

+ 

+ * httpd

+ * copr-dist-git

+ 

+ == PPC64LE Builders

+ 

+ Builders for PPC64 are located at rh-power2.fit.vutbr.cz and anyone with

+ access to the buildsys ssh key can log in there as::

+   msuchy@rh-power2.fit.vutbr.cz

+ 

+ There are commands:

+ ....

+ $ ls bin/

+ destroy-all.sh         get-one-vm.sh

+ reinit-vm26.sh         reinit-vm27.sh         reinit-vm28.sh         reinit-vm29.sh

+ virsh-destroy-vm26.sh  virsh-destroy-vm27.sh  virsh-destroy-vm28.sh  virsh-destroy-vm29.sh

+ virsh-start-vm26.sh    virsh-start-vm27.sh    virsh-start-vm28.sh    virsh-start-vm29.sh

+ ....

+ 

+ `destroy-all.sh` destroys all VMs and reinits them

+ 

+ `reinit-vmXX.sh` copies the VM image from the template

+ 

+ `virsh-destroy-vmXX.sh` destroys the VM

+ 

+ `virsh-start-vmXX.sh` starts the VM

+ 

+ `get-one-vm.sh` starts one VM and returns its IP - this is used in Copr playbooks.

+ 

+ In case of a big queue of PPC64 tasks, simply call `bin/destroy-all.sh`;

+ it will destroy any stuck VMs and the copr backend will spawn new ones.

+ 

+ == Ports opened for public

+ 

+ Frontend:

+ 

+ [width="86%",cols="13%,17%,16%,54%",options="header",]

+ |===

+ |Port |Protocol |Service |Reason

+ |22 |TCP |ssh |Remote control

+ |80 |TCP |http |Serving Copr frontend website

+ |443 |TCP |https |^^

+ |===

+ 

+ Backend:

+ 

+ [width="86%",cols="13%,17%,16%,54%",options="header",]

+ |===

+ |Port |Protocol |Service |Reason

+ |22 |TCP |ssh |Remote control

+ |80 |TCP |http |Serving build results and repos

+ |443 |TCP |https |^^

+ |===

+ 

+ Distgit:

+ 

+ [width="86%",cols="13%,17%,16%,54%",options="header",]

+ |===

+ |Port |Protocol |Service |Reason

+ |22 |TCP |ssh |Remote control

+ |80 |TCP |http |Serving cgit interface

+ |443 |TCP |https |^^

+ |===

+ 

+ Keygen:

+ 

+ [width="86%",cols="13%,17%,16%,54%",options="header",]

+ |===

+ |Port |Protocol |Service |Reason

+ |22 |TCP |ssh |Remote control

+ |===

+ 

+ == Resources justification

+ 

+ Copr currently uses the following resources.

+ 

+ === Frontend

+ 

+ * RAM: 2G (out of 4G) and some swap

+ * CPU: 2 cores (3400mhz) with load 0.92, 0.68, 0.65

+ 

+ Most of the memory is eaten by PostgreSQL, followed by Apache. The CPU

+ usage is also mainly used for those two services but in the reversed

+ order.

+ 

+ I don't think we can settle for any instance that provides less than

+ 2G RAM; ideally, we need 3G+. A 2-core CPU is good

+ enough.

+ 

+ * Disk space: 17G for system and 8G for _pgsqldb_ directory

+ 

+ If needed, we are able to clean up the database directory of old dumps

+ and backups and get down to around 4G disk space.

+ 

+ === Backend

+ 

+ * RAM: 5G (out of 16G)

+ * CPU: 8 cores (3400MHz) with load 4.09, 4.55, 4.24

+ 

+ Backend takes care of spinning-up builders and running ansible playbooks

+ on them, running _createrepo_c_ (on big repositories) and so

+ on. Copr utilizes two queues, one for builds, which are delegated to

+ OpenStack builders, and one for actions. Actions, however, are processed

+ directly by the backend, so it can spike our load up. We would ideally

+ like to have the same computing power that we have now. Maybe we can go

+ lower than 16G RAM, possibly down to 12G RAM.

+ 

+ * Disk space: 30G for the system, 5.6T (out of 6.8T) for build results

+ 

+ Currently, we have 1.3T of backup data, that is going to be deleted

+ soon, but nevertheless, we cannot go any lower on storage. Disk space is

+ a long-term issue for us and we need to do a lot of compromises and

+ settling down just to survive our daily increase (which is around 10G of

+ new data). Many features are blocked by not having enough storage. We

+ cannot go any lower and also we cannot go much longer with the current

+ storage.

+ 

+ === Distgit

+ 

+ * RAM: ~270M (out of 4G), but climbs to ~1G when busy

+ * CPU: 2 cores (3400MHz) with load 1.35, 1.00, 0.53

+ 

+ Personally, I wouldn't downgrade the machine too much. Possibly we can

+ live with 3G ram, but I wouldn't go any lower.

+ 

+ * Disk space: 7G for system, 1.3T dist-git data

+ 

+ We currently employ a lot of aggressive cleaning strategies on our

+ distgit data, so we can't go any lower than what we have.

+ 

+ === Keygen

+ 

+ * RAM: ~150M (out of 2G)

+ * CPU: 1 core (3400MHz) with load 0.10, 0.31, 0.25

+ 

+ We are basically running just _signd_ and

+ _httpd_ here, both with minimal resource requirements. The

+ memory usage is topped by _systemd-journald_.

+ 

+ * Disk space: 7G for system and ~500M (out of ~700M) for GPG keys

+ 

+ We are slowly pushing the GPG keys storage to its limit, so in the case

+ of migrating copr-keygen somewhere, we would like to scale it up to at

+ least 1G.

@@ -0,0 +1,219 @@ 

+ = Database Infrastructure SOP

+ 

+ Our database servers provide database storage for many of our apps.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_description>>

+ * <<_creating_a_new_postgresql_database>>

+ * <<_troubleshooting_and_resolution>>

+ ** <<_connection_issues>>

+ ** <<_some_useful_queries>>

+ *** <<_what_queries_are_running>>

+ *** <<_seeing_how_dirty_a_table_is>>

+ *** <<_xid_wraparound>>

+ ** <<_restart_procedure>>

+ *** <<_koji>>

+ *** <<_bodhi>>

+ * <<_turbogears_and_mysql>>

+ * <<_restoring_from_backups_or_specific_dbs>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main, sysadmin-dba group

+ Location::

+   iad2

+ Servers::

+   db01, db03, db-fas01, db-datanommer02, db-koji01, db-s390-koji01,

+   db-arm-koji01, db-ppc-koji01, db-qa01, db-qastg01

+ Purpose::

+   Provides database connection to many of our apps.

+ 

+ == Description

+ 

+ _db01_, _db03_ and _db-fas01_ are our primary servers. _db01_ and _db-fas01_ run

+ _PostgreSQL_. _db03_ contains _mariadb_. _db-koji01_, _db-s390-koji01_,

+ _db-arm-koji01_ and _db-ppc-koji01_ contain the secondary _koji_ databases. _db-qa01_ and

+ _db-qastg01_ contain _resultsdb_. _db-datanommer02_ stores all bus

+ messages in a _postgresql_ database.

+ 

+ == Creating a New Postgresql Database

+ 

+ Creating a new database on our postgresql server isn't hard but there's

+ several steps that should be taken to make the database server as secure

+ as possible.

+ 

+ We want to separate the database permissions so that we don't have the

+ user/password combination that can do anything it likes to the database

+ on every host (the webapp user can usually do a lot of things even

+ without those extra permissions but every little bit helps).

+ 

+ Say we have an app called "raffle". We'd have three users:

+ 

+ * raffleadmin: able to make any changes they want to this particular

+ database. It should not be used day to day, but only for things like

+ updating the database schema when an update occurs. We could very likely

+ disable this account in the db whenever we are not using it.

+ 

+ * raffleapp: the database user that the web application uses. This will

+ likely need to be able to insert and select from all tables. It will

+ probably need to update most tables as well. There may be some tables

+ that it does _not_ need delete on. It should almost certainly not need

+ schema modifying permissions. (With postgres, it likely also needs

+ permission to insert/select on sequences as well).

+ 

+ * rafflereadonly: Only able to read data from tables, not able to modify

+ anything. Sadly, we aren't using this often but it can be useful for

+ scripts that need to talk directly to the database without modifying it.

+ 

+ ....

+ db2 $ sudo -u postgres createuser -P -E NEWDBadmin

+ Password: <randomly generated password>

+ db2 $ sudo -u postgres createuser -P -E NEWDBapp

+ Password: <randomly generated password>

+ db2 $ sudo -u postgres createuser -P -E NEWDBreadonly

+ Password: <randomly generated password>

+ db2 $ sudo -u postgres createdb -E utf8 NEWDB -O NEWDBadmin

+ db2 $ sudo -u postgres psql NEWDB

+ NEWDB=# revoke all on database NEWDB from public;

+ NEWDB=# revoke all on schema public from public;

+ NEWDB=# grant all on schema public to NEWDBadmin;

+ NEWDB=# [grant permissions to NEWDBapp as appropriate for your app]

+ NEWDB=# [grant permissions to NEWDBreadonly as appropriate for a user that

+        is only trusted enough to read information]

+ NEWDB=# grant connect on database NEWDB to nagiosuser;

+ ....

+ 

+ If your application needs the NEWDBapp user and password to connect

+ to the database, you probably want to add these to ansible as well. Put

+ the password in the private repo on batcave01. Then use a template file

+ to incorporate it into the config file. See fas.pp for an example.

+ 
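
+ A minimal sketch for the "raffle" example above (the variable name,

+ template path, and connection URL are hypothetical; the real values live

+ in the private repo and the application's role):

+ 

+ ....

+ # roles/raffle/templates/raffle.cfg.j2 (hypothetical template)

+ # raffleapp_db_password is defined in the private repo on batcave01

+ sqlalchemy.url = postgresql://raffleapp:{{ raffleapp_db_password }}@db01/raffle

+ ....

+ 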

+ == Troubleshooting and Resolution

+ 

+ === Connection issues

+ 

+ There are no known outstanding issues with the database itself. Remember

+ that every time either database is restarted, services will have to be

+ restarted (see <<_restart_procedure>>).

+ 

+ === Some useful queries

+ 

+ ==== What queries are running

+ 

+ This can help you find out what queries are currently running on the

+ server:

+ 

+ ....

+ select datname, pid, query_start, backend_start, query from

+ pg_stat_activity where state<>'idle' order by query_start;

+ ....

+ 

+ This can help you find how many connections to the db server are for

+ each individual database:

+ 

+ ....

+ select datname, count(datname) from pg_stat_activity group by datname

+ order by count desc;

+ ....

+ 

+ ==== Seeing how "dirty" a table is

+ 

+ We've added a function from postgres's contrib directory to tell how

+ dirty a table is. By dirty we mean, how many tuples are active, how many

+ have been marked as having old data (and therefore "dead") and how much

+ free space is allocated to the table but not used:

+ 

+ ....

+ \c fas2

+ \x

+ select * from pgstattuple('visit_identity');

+ table_len          | 425984

+ tuple_count        | 580

+ tuple_len          | 46977

+ tuple_percent      | 11.03

+ dead_tuple_count   | 68

+ dead_tuple_len     | 5508

+ dead_tuple_percent | 1.29

+ free_space         | 352420

+ free_percent       | 82.73

+ \x

+ ....

+ 

+ Vacuum should clear out dead_tuples. Only a vacuum full, which will lock

+ the table and therefore should be avoided, will clear out free space.

+ 

+ ==== XID Wraparound

+ 

+ Find out how close we are to having to perform a vacuum of a database

+ (as opposed to individual tables of the db). We should schedule a vacuum

+ when about 50% of the transaction ids have been used (approximately

+ 530,000,000 xids):

+ 

+ ....

+ select datname, age(datfrozenxid), pow(2, 31) - age(datfrozenxid) as xids_remaining

+ from pg_database order by xids_remaining;

+ ....

+ 

+ == Restart Procedure

+ 

+ If the database server needs to be restarted it should come back on its

+ own. Otherwise each service on it can be restarted:

+ 

+ ....

+ service mysqld restart

+ service postgresql restart

+ ....

+ 

+ === Koji

+ 

+ Any time postgresql is restarted, koji needs to be restarted. Please also

+ see <<koji.adoc#_restarting_koji,Restarting Koji>>.

+ 

+ === Bodhi

+ 

+ Any time postgresql is restarted, Bodhi will need to be restarted. No SOP

+ currently exists for this.

+ 

+ == TurboGears and MySQL

+ 

+ [NOTE]

+ ====

+ About TurboGears and MySQL

+ 

+ There's a known bug in TurboGears that causes MySQL clients not to

+ automatically reconnect when lost. Typically a restart of the TurboGears

+ application will correct this issue.

+ ====

+ 

+ == Restoring from backups or specific dbs.

+ 

+ Our backups store the latest copy in `/backups/` on each db server. These

+ backups are created automatically by the db-backup script run from cron.

+ Look in `/usr/local/bin` for the backup script.

+ 

+ To restore partially or completely you need to:

+ 

+ [arabic]

+ . setup postgres on a system

+ +

+ . start postgres/run initdb

+ +

+ if this new system running postgres has already run ansible then it

+ will have wrong config files in `/var/lib/pgsql/data` - clear them out

+ before you start postgres so initdb can work.

+ +

+ . grab the backups you need from `/backups` - also grab _global.sql_

+ +

+ edit up _global.sql_ to only create/alter the dbs you care about

+ +

+ . as postgres run: `psql -U postgres -f global.sql`

+ . when this completes you can restore each db (as the postgres user) with

+ +

+ `createdb $dbname` followed by `pg_restore -d $dbname dbname_backup_file.db`

+ (see the consolidated example after this list)

+ +

+ . restart postgres and check your data.
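
+ 

+ Putting the restore steps together for a single database (a sketch;

+ _fas2_ and the dump file name are placeholders):

+ 

+ ....

+ $ sudo -u postgres psql -U postgres -f global.sql

+ $ sudo -u postgres createdb fas2

+ $ sudo -u postgres pg_restore -d fas2 fas2_backup_file.db

+ ....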

@@ -0,0 +1,121 @@ 

+ = datanommer SOP

+ 

+ Consume fedmsg bus activity and stuff it in a postgresql db.

+ 

+ == Contact Information

+ 

+ Owner::

+   Messaging SIG, Fedora Infrastructure Team

+ Contact::

+   #fedora-apps, #fedora-fedmsg, #fedora-admin, #fedora-noc

+ Servers::

+   busgateway01

+ Purpose::

+   Save fedmsg bus activity

+ 

+ == Description

+ 

+ datanommer is a set of three modules:

+ 

+ python-datanommer-models::

+   Schema definition and API for storing new items and querying existing

+   items

+ python-datanommer-consumer::

+   A plugin for the fedmsg-hub that actively listens to the bus and

+   stores events.

+ datanommer-commands::

+   A set of CLI tools for querying the DB.

+ 

+ datanommer will one day serve as a backend for future web services like

+ datagrepper and dataviewer.

+ 

+ Source: https://github.com/fedora-infra/datanommer/

+ 

+ Plan: https://fedoraproject.org/wiki/User:Ianweller/statistics_plus_plus

+ 

+ == CLI tools

+ 

+ Dump the db into a file as json:

+ 

+ ....

+ $ datanommer-dump > datanommer-dump.json

+ ....

+ 

+ When was the last bodhi message?:

+ 

+ ....

+ $ # It was 678 seconds ago

+ $ datanommer-latest --category bodhi --timesince

+ [678]

+ ....

+ 

+ When was the last bodhi message in more readable terms?:

+ 

+ ....

+ $ # It was 12 minutes and 43 seconds ago

+ $ datanommer-latest --category bodhi --timesince --human

+ [0:12:43.087949]

+ ....

+ 

+ What was that last bodhi message?:

+ 

+ ....

+ $ datanommer-latest --category bodhi

+ [{"bodhi": {

+   "topic": "org.fedoraproject.stg.bodhi.update.comment", 

+   "msg": {

+     "comment": {

+       "group": null, 

+       "author": "ralph", 

+       "text": "Testing for latest datanommer.", 

+       "karma": 0, 

+       "anonymous": false, 

+       "timestamp": 1360349639.0, 

+       "update_title": "xmonad-0.10-10.fc17"

+     }, 

+     "agent": "ralph"

+   }, 

+ }}]

+ ....

+ 

+ Show me stats on datanommer messages by topic:

+ 

+ ....

+ $ datanommer-stats --topic

+ org.fedoraproject.stg.fas.group.member.remove has 10 entries

+ org.fedoraproject.stg.logger.log has 76 entries

+ org.fedoraproject.stg.bodhi.update.comment has 5 entries

+ org.fedoraproject.stg.busmon.colorized-messages has 10 entries

+ org.fedoraproject.stg.fas.user.update has 10 entries

+ org.fedoraproject.stg.wiki.article.edit has 106 entries

+ org.fedoraproject.stg.fas.user.create has 3 entries

+ org.fedoraproject.stg.bodhitest.testing has 4 entries

+ org.fedoraproject.stg.fedoratagger.tag.create has 9 entries

+ org.fedoraproject.stg.fedoratagger.user.rank.update has 5 entries

+ org.fedoraproject.stg.wiki.upload.complete has 1 entries

+ org.fedoraproject.stg.fas.group.member.sponsor has 6 entries

+ org.fedoraproject.stg.fedoratagger.tag.update has 1 entries

+ org.fedoraproject.stg.fas.group.member.apply has 17 entries

+ org.fedoraproject.stg.__main__.testing has 1 entries

+ ....

+ 

+ == Upgrading the DB Schema

+ 

+ datanommer uses "python-alembic" to manage its schema. When developers

+ want to add new columns or features, these should/must be tracked in

+ alembic and shipped with the RPM.

+ 

+ In order to run upgrades on our stg/prod dbs:

+ 

+ [arabic]

+ . ssh to busgateway01\{.stg}

+ . `cd /usr/share/datanommer.models/`

+ . Run:

+ +

+ ....

+ $ alembic upgrade +1

+ ....

+ 

+ ____

+ Run that command over and over again until the db is fully upgraded.

+ ____
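
+ 

+ A hedged example session (the revision ids these commands print will vary;

+ the commands themselves are standard alembic):

+ 

+ ....

+ $ cd /usr/share/datanommer.models/

+ $ alembic current      # show the revision the db is at right now

+ $ alembic history      # list the known revisions

+ $ alembic upgrade +1   # repeat until 'current' reports the newest revision

+ ....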

@@ -0,0 +1,133 @@ 

+ = Fedora Debuginfod Service - SOP

+ 

+ Debuginfod is the software that lies behind the service at

+ https://debuginfod.fedoraproject.org/ and

+ https://debuginfod.stg.fedoraproject.org/ . These services run on 1 VM

+ each in the stg and prod infrastructure at IAD2.

+ 

+ == Contact Information

+ 

+ Owner:::

+   RH perftools team + Fedora Infrastructure Team

+ Contact:::

+   @fche in #fedora-noc

+ Servers:::

+   VMs

+ Purpose:::

+   Serve elf/dwarf/source-code debuginfo for supported releases to

+   debugger-like tools in Fedora.

+ Repository:::

+   https://sourceware.org/elfutils/Debuginfod.html

+   https://fedoraproject.org/wiki/Debuginfod

+ 

+ == How it works

+ 

+ One virtual machine in prod NFS-mounts the koji build system's RPM

+ repository, read-only. The production VM has a virtual twin in the

+ staging environment. They each run elfutils debuginfod to index

+ designated RPMs into a large local sqlite database. They answer HTTP

+ queries received from users on the Internet via reverse-proxies at the

+ https://debuginfod.fedoraproject.org/ URL. The reverse proxies apply

+ gzip compression on the data and provide redirection of the root `/`

+ location only into the fedora wiki.

+ 

+ Normally, it is autonomous and needs no maintenance. It should come back

+ nicely after many kinds of outage. The software is based on elfutils in

+ Fedora, but may occasionally track a custom COPR build with backported

+ patches from future elfutils versions.

+ 

+ == Configuration

+ 

+ The daemon uses systemd and `/etc/sysconfig/debuginfod` to set basic

+ parameters. These have been tuned from the distro defaults via

+ experimental hand-editing or ansible. Key parameters are:

+ 

+ [arabic]

+ . The -I/-X include/exclude regexes. These tell debuginfod what fedora

+ versions to include RPMs for. If index disk space starts to run low, one

+ can eliminate some older fedoras from the index to free up space (after

+ the next groom cycle).

+ . The --fdcache related parameters. These tell debuginfod how much data

+ to cache from RPMs. (Some debuginfo files - kernel, llvm, gtkweb, ... -

+ are huge and worth retaining instead of repeatedly extracting them.) This is

+ a straight disk space vs. time tradeoff.

+ . The -t (scan interval) parameter. Scanning lets an index get bigger,

+ as new RPMs in koji are examined and their contents indexed. Each pass

+ takes a bunch of hours to traverse the entire koji NFS directory

+ structure to fstat() everything for newness or change. A smaller scan

+ interval lets debuginfod react quicker to koji builds coming into

+ existence, but increases load on the NFS server. More -n (scan threads)

+ may help the indexing process go faster, if the networking fabric & NFS

+ server are underloaded.

+ . The -g (groom interval) parameter. Grooming lets an index get smaller,

+ as files removed from koji will be forgotten about. It can be run very

+ intermittently - weekly or less - since it takes many hours and cannot

+ run concurrently with scanning.

+ 

+ A quick:

+ 

+ ....

+ systemctl restart debuginfod

+ ....

+ 

+ activates the new settings.
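
+ 

+ For orientation only, a hedged sketch of how those flags fit together on a

+ debuginfod command line - the regexes, sizes and intervals below are made-up

+ illustrations, not our production values (those are ansible-managed in

+ `/etc/sysconfig/debuginfod`):

+ 

+ ....

+ # include newer fedora releases, exclude older ones (hypothetical regexes),

+ # cache up to 4GB of extracted files, scan every 4h, groom weekly, 4 threads

+ debuginfod -R -n 4 -t 14400 -g 604800 --fdcache-mbs=4096 \

+   -I '\.fc3[89]\.' -X '\.fc3[0-7]\.' \

+   /mnt/fedora_koji_prod/koji/packages

+ ....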

+ 

+ In case of some drastic failure like database corruption or signs of

+ penetration/abuse, one can shut down the server with systemd, and/or

+ stop traffic at the incoming proxy configuration level. The index sqlite

+ database under `/var/cache/debuginfod` may be deleted, if necessary, but

+ keep in mind that it takes days to reindex the relevant parts of koji.

+ Alternately, with the services stopped, the 150GB+ sqlite database files

+ may be freely copied between the staging and production servers, if that

+ helps during disaster recovery.
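
+ 

+ A hedged sketch of such a copy (the sqlite file glob and the staging hostname

+ are assumptions - check both ends, and stop debuginfod on both first):

+ 

+ ....

+ systemctl stop debuginfod

+ # staging hostname below is an assumption; substitute the real one

+ rsync -avP /var/cache/debuginfod/*.sqlite* \

+     debuginfod01.stg.iad2.fedoraproject.org:/var/cache/debuginfod/

+ systemctl start debuginfod

+ ....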

+ 

+ == Monitoring

+ 

+ === Prometheus

+ 

+ The debuginfod daemons answer the standard /metrics URL endpoint to

+ serve a variety of operational metrics in prometheus. Important metrics

+ include:

+ 

+ [arabic]

+ . filesys_free_ratio - free space on the filesystems. (These are also

+ monitored via fedora-infra nagios.) If the free space on the database or

+ tmp partition falls low, further indexing or even service may be

+ impacted. Add more disk space if possible, or start eliding older fedora

+ versions from the database via the -I/-X daemon options.

+ . thread_busy - number of busy threads. During indexing, 1-6 threads may

+ be busy for minutes or even days, intermittently. User requests show up

+ as "buildid" (real request) or "buildid-after-you" (deferred duplicate

+ request) labels. If there are more than a handful of "buildid" ones,

+ there may be an overload/abuse underway, in which case it's time to

+ identify the excessive traffic via the logs and get a temporary iptables

+ block going. Or perhaps there is an outage or slowdown of the koji NFS

+ storage system, in which case there's not much to do.

+ . error_count. These should be zero or near zero all the time.

+ 

+ === Logs

+ 

+ The debuginfod daemons produce voluminous logs into the local systemd

+ journal, whence the traffic moves to the usual fedora-infra log01

+ server, `/var/log/hosts/debuginfod*/YYYY/MM/DD/messages.log`. The lines

+ related to HTTP GET identify the main webapi traffic, with originating

+ IP addresses in the XFF: field, and response size and elapsed service

+ time in the last columns. These can be useful in tracking down possible

+ abuse:

+ 

+ ....

+ Jun 28 22:36:43 debuginfod01 debuginfod[381551]: [Mon 28 Jun 2021 10:36:43 PM GMT] (381551/2413727): 10.3.163.75:43776 UA:elfutils/0.185,Linux/x86_64,fedora/35 XFF:*elided* GET /buildid/90910c1963bbcf700c0c0c06ee3bf4c5cc831d3a/debuginfo 200 335440 0+0ms

+ ....
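
+ 

+ A hedged one-liner for spotting heavy requesters (the sed expression assumes

+ the `XFF:` field is laid out exactly as in the sample line above):

+ 

+ ....

+ grep 'GET /buildid/' /var/log/hosts/debuginfod01/YYYY/MM/DD/messages.log \

+     | sed -n 's/.*XFF:\([^ ]*\).*/\1/p' | sort | uniq -c | sort -rn | head

+ ....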

+ 

+ The lines related to prometheus /metrics are usually no big deal.

+ 

+ The log also includes info about errors and indexing progress.

+ Interesting may be the lines like:

+ 

+ ....

+ Jun 28 22:36:43 debuginfod01 debuginfod[381551]: [Mon 28 Jun 2021 10:36:43 PM GMT] (381551/2413727): serving fdcache archive /mnt/fedora_koji_prod/koji/packages/valgrind/3.17.0/3.fc35/x86_64/valgrind-3.17.0-3.fc35.x86_64.rpm file /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so

+ ....

+ 

+ which identify the file names derived from requests (which RPMs the

+ buildids map to). These can provide some indirect distro telemetry: what

+ packages and binaries are being debugged and for which architectures?

@@ -0,0 +1,62 @@ 

+ = Departing admin SOP

+ 

+ From time to time admins depart the project, this SOP checks any access

+ they may no longer need.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main

+ Location::

+   Everywhere

+ Servers::

+   all

+ 

+ == Description

+ 

+ From time to time people with admin access to various parts of the

+ project may leave the project or no longer wish to contribute. This SOP

+ attempts to list the process for removing access they no longer need.

+ 

+ [arabic, start=0]

+ . First, make sure that this SOP is needed. Verify the person has left

+ the project and what areas they might wish to still contribute to.

+ . Gather info: fas username, email address, knowledge of passwords.

+ . Check the following areas with the following commands:

+ +

+ ____

+ email address in ansible::

+   * Check: `git grep email@address`

+   * Remove: `git commit`

+ koji admin::

+   * Check: `koji list-permissions --user=username`

+   * Remove: `koji revoke-permission permissionname username`

+ wiki pages::

+   * Check: look for https://fedoraproject.org/wiki/User:Username

+   * Remove: delete page, or modify with info they are no longer

+   contributing.

+ packages::

+   * Check: Download

+   https://admin.fedoraproject.org/pkgdb/lists/bugzilla?tg_format=plain

+   and grep

+   * Remove: remove from cc, orphan packages or reassign.

+ fas account::

+   * Check: check username in fas

+   * Remove: set user inactive

+ +

+ [NOTE]

+ ====

+ If there are scripts or files needed, save homedir of user.

+ ====

+ +

+ passwords::

+   * Check: if departing admin knew sensitive passwords.

+   * Remove: Change passwords.

+ +

+ [NOTE]

+ ====

+ root pw, management interfaces, etc

+ ====

+ ____
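
+ 

+ A hedged helper for the command-line parts of the checklist above (the

+ username/email arguments and the ansible checkout path are placeholders):

+ 

+ ....

+ #!/bin/bash

+ # usage: ./departing-admin-check.sh USERNAME EMAIL

+ user=$1; email=$2

+ # references to their email address in ansible

+ (cd ~/ansible && git grep -n "$email")

+ # koji permissions they still hold

+ koji list-permissions --user="$user"

+ echo "Also check https://fedoraproject.org/wiki/User:$user and their FAS account by hand."

+ ....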

@@ -0,0 +1,332 @@ 

+ = DNS repository for fedoraproject

+ 

+ We've set this up so we can easily (and quickly) edit and deploy dns

+ changes with a record of who changed what and why. This system also lets

+ us edit out proxies from rotation for our many and varied websites

+ quickly and with a minimum of opportunity for error. Finally, it checks

+ to make sure that all of the zone changes will actually work before they

+ are allowed.

+ 

+ == DNS Infrastructure SOP

+ 

+ We have 5 DNS servers:

+ 

+ ns02.fedoraproject.org::

+   hosted at ibiblio (ipv6 enabled)

+ ns05.fedoraproject.org::

+   hosted at internetx (ipv6 enabled)

+ ns13.rdu2.fedoraproject.org::

+   in rdu2, internal to rdu2.

+ ns01.iad2.fedoraproject.org::

+   in iad2, internal to iad2.

+ ns02.iad2.fedoraproject.org::

+   in iad2, internal to iad2.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_troubleshooting_resolution_and_maintenance>>

+ ** <<_check_out_the_dns_repository>>

+ ** <<_adding_a_new_host>>

+ ** <<_editing_the_domains>>

+ ** <<_dns_update>>

+ ** <<_adding_a_new_zone>>

+ * <<_geodns>>

+ ** <<_adding_and_removing_countries>>

+ ** <<_ip_country_mapping>>

+ * <<_resolv_conf>>

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-admin, sysadmin-main, sysadmin-dns

+ Location:::

+   ServerBeach and ibiblio and internetx and phx2.

+ Servers:::

+   ns02, ns05, ns13.rdu2, ns01.iad2, ns02.iad2

+ Purpose:::

+   Provides DNS to our users

+ 

+ == Troubleshooting, Resolution and Maintenance

+ 

+ === Check out the DNS repository

+ 

+ You can get the dns repository from `/srv/git/dns` on `batcave01`:

+ 

+ ....

+ $ git clone /srv/git/dns

+ ....

+ 

+ === Adding a new Host

+ 

+ Adding a new host requires adding it to DNS and to ansible; see

+ new-hosts.rst for the details.

+ 

+ === Editing the domain(s)

+ 

+ We have three domains which need to be changed on demand for

+ proxy rotation/removal:

+ 

+ * fedoraproject.org.

+ * getfedora.org.

+ * cloud.fedoraproject.org.

+ 

+ The other domains are edited only when we add/subtract a host or move it

+ to a new ip. Not much else.

+ 

+ If you need to edit a domain that is NOT in the above list:

+ 

+ * change to the 'master' subdir, edit the domain as usual (remember to

+ update the serial), save it.

+ 

+ If you need to edit one of the domains in the above list: (replace

+ fedoraproject.org with the domain from above):

+ 

+ * if you need to add/change a host in fedoraproject.org that is not '@'

+ or 'wildcard' then:

+ ** edit fedoraproject.org.template

+ ** make your changes

+ ** do not edit the serial or anything surrounded by \{\{ }} unless you

+ REALLY know what you are doing.

+ * if you need to only add/remove a proxy during an outage or due to a

+ networking issue then run:

+ ** `./zone-template fedoraproject.org.cfg disable ip [ip] [ip]`::

+   to disable the ip of the proxy you want removed.

+ ** `./zone-template fedoraproject.org.cfg enable ip [ip] [ip]`::

+   reverses the disable

+ ** `./zone-template fedoraproject.org.cfg reset`::

+   will reset to all ips enabled.

+ * if you want to add an all new proxy as '@' or 'wildcard' for

+ fedoraproject.org:

+ ** edit fedoraproject.org.cfg

+ ** add the ip to the correct section of the ipv4 or ipv6 in the config.

+ ** save the file

+ ** check the file for validity by running:

+ `python fedoraproject.org.cfg` looking for errors or tracebacks.

+ 

+ When complete run:

+ 

+ ....

+ git add .

+ git commit -a -m 'description of your change here'

+ ....

+ 

+ It is important to commit this before running the do-domains script as

+ it makes it easier to track the changes.

+ 

+ In all cases then run:

+ 

+ * `./do-domains`

+ * if that completes successfully then run:

+ +

+ ....

+ git add .

+ git commit -a -m 'description of your change here'

+ git push

+ ....

+ * nameservers update from dns via cron every 10 minutes.

+ 

+ The above git process can be achieved with the bash function below, where

+ the commit message is passed as an argument when running it:

+ 

+ ....

+ dnscommit()

+ {

+   local args=$1

+   cd ~/dns;

+   git commit -a -m "${args}"

+   git pull --rebase && ./do-domains && git add built && git commit -a -m "Signed DNS" && git push

+ }

+ ....

+ 

+ If you need an update to be live more quickly, push your changes as above

+ and then run this on all of the nameservers (as root):

+ 

+ ....

+ /usr/local/bin/update-dns

+ ....

+ 

+ To run this via ansible from batcave do:

+ 

+ ....

+ $ sudo rbac-playbook update_dns.yml

+ ....

+ 

+ This will pull from the git tree, update all of the zones and reload the

+ name server.

+ 

+ === DNS update

+ 

+ DNS config files are ansible managed on batcave01.

+ 

+ From your local machine run:

+ 

+ ....

+ git clone ssh://git@pagure.io/fedora-infra/ansible.git

+ cd ansible/roles/dns/files/

+ ...make changes needed...

+ git commit -m "What you did"

+ git push

+ ....

+ 

+ It should update within a half hour. You can test the new configs with

+ dig:

+ 

+ ....

+ dig @ns01.fedoraproject.org fedoraproject.org

+ ....

+ 

+ === Adding a new zone

+ 

+ First name the zone and generate a new set of keys for it. Run this on

+ ns01. Note it could take SEVERAL minutes to run:

+ 

+ ....

+ /usr/sbin/dnssec-keygen -a RSASHA1 -b 1024 -n ZONE c.fedoraproject.org

+ /usr/sbin/dnssec-keygen -a RSASHA1 -b 2048 -n ZONE -f KSK c.fedoraproject.org

+ ....

+ 

+ Then copy the created .key and .private files to the private git repo

+ (You need to be sysadmin-main to do this). The directory is

+ `private/private/dnssec`.

+ 

+ * add the zone in zones.conf in `ansible/roles/dns/files/zones.conf`

+ * save and commit - but do not push

+ * Add zone file to the master subdir in this repo

+ * git add and commit the file

+ * check the zone by running check-domains

+ * if you intend to have this be a dnssec signed zone then you must

+ ** create a new key:

+ +

+ ....

+ /usr/sbin/dnssec-keygen -a RSASHA1 -b 1024 -n ZONE $domain.org

+ /usr/sbin/dnssec-keygen -a RSASHA1 -b 2048 -n ZONE -f KSK $domain.org

+ ....

+ 

+ ** put the files this generates into /srv/privatekeys/dnssec on batcave01

+ ** edit the do-domains file in this dir and add your domain to the

+ signed_domains entry at the top

+ ** edit the zone you just created and add the contents of the .key

+ files to the bottom of the zone

+ 

+ If this is a subdomain of fedoraproject.org:

+ 

+ * run dnssec-dsfromkey on each of the .key files generated (see the example after this list)

+ * paste that output into the bottom of fedoraproject.org.template

+ * commit everything to the dns tree

+ * push your changes

+ * push your changes to the ansible repo

+ * test
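
+ 

+ A hedged example of the dnssec-dsfromkey step (the key file name and the

+ digests in the output are made up; yours will differ):

+ 

+ ....

+ $ dnssec-dsfromkey Kc.fedoraproject.org.+005+12345.key

+ c.fedoraproject.org. IN DS 12345 5 1 <sha1-digest>

+ c.fedoraproject.org. IN DS 12345 5 2 <sha256-digest>

+ ....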

+ 

+ If you add a new child zone, such as c.fedoraproject.org or

+ vpn.fedoraproject.org you will also need to add the contents of

+ dsset-childzone.fedoraproject.org (for example), to the main

+ fedoraproject.org zonefile, so that DNSSEC has a valid trust path to

+ that zone.

+ 

+ You also must set the NS delegation entries near the top of

+ fedoraproject.org zone file; these are necessary to keep dnssec-signzone

+ from whining with this error msg:

+ 

+ ....

+ dnssec-signzone: fatal: 'xxxxx.example.com': found DS RRset without NS RRset

+ ....

+ 

+ Look for the "vpn IN NS" records at the top of fedoraproject.org and

+ copy them for the new child zone.

+ 

+ == GeoDNS

+ 

+ As part of our Content Distribution Network we use geodns for certain

+ zones. At the moment just `fedoraproject.org` and `*.fedoraproject.org`

+ zones. We've got proxy servers all over the US and in Europe. We are now

+ sending users to proxy servers that are near them. The current list of

+ available 'zone areas' is:

+ 

+ * DEFAULT

+ * EU

+ * NA

+ 

+ DEFAULT contains all the zones. So someone who does not seem to be in or

+ near the EU, or NA would get directed to any random set. (South Africa

+ for example doesn't get directed to any particular server).

+ 

+ [IMPORTANT]

+ ====

+ Don't forget to increase the serial number in the fedoraproject.org zone

+ file. Even if you're making a change to one of the geodns IPs. There is

+ only one serial number for all setups and that serial number is in the

+ fedoraproject.org zone.

+ ====

+ [NOTE]

+ ====

+ Non-geodns fedoraproject.org IPs: if you're adding a server that is just

+ in one location and isn't going to be geodns balanced, just add that

+ host to the fedoraproject.org zone.

+ ====

+ === Adding and removing countries

+ 

+ Our setup actually requires us to specify which countries go to which

+ servers. To do this, simply edit the named.conf file in ansible. Below

+ is an example of what counts as "NA" (North America):

+ 

+ ....

+ view "NA" {

+        match-clients { US; CA; MX; };

+        recursion no;

+        zone "fedoraproject.org" {

+                type master;

+                file "master/NA/fedoraproject.org.signed";

+        };

+        include "etc/zones.conf";

+ };

+ ....

+ 

+ === IP Country Mapping

+ 

+ The IP -> Location mapping is done via a config file that exists on the

+ dns servers themselves (it's not ansible controlled). The file, located

+ at `/var/named/chroot/etc/GeoIP.acl` is generated by the `GeoIP.sh`

+ script (that script is in ansible).

+ 

+ [WARNING]

+ ====

+ This is known to be a less efficient means of doing geodns than the

+ patched version from kernel.org. We're using this version at the moment

+ because it's in Fedora and works. The level of DNS traffic we see is

+ generally low enough that the inefficiencies aren't really noticeable. For

+ example, average load on the servers before this geodns was .2; now it's

+ around .4.

+ ====

+ == resolv.conf

+ 

+ In order to make the network more transparent to the admins, we do a lot

+ of search-based relative names. Below is a list of what a resolv.conf

+ should look like.

+ 

+ [IMPORTANT]

+ ====

+ Any machine that is not on our vpn or has not yet joined the vpn should

+ *NOT* have the vpn.fedoraproject.org search until after it has

+ been added to the vpn (if it ever does)

+ ====

+ iad2::

+ ....

+ search iad2.fedoraproject.org vpn.fedoraproject.org fedoraproject.org

+ ....

+ iad2 in the QA network:::

+ ....

+ search qa.fedoraproject.org vpn.fedoraproject.org iad2.fedoraproject.org fedoraproject.org

+ ....

+ Non-iad2::

+ ....

+ search vpn.fedoraproject.org fedoraproject.org

+ ....

+ 

+ The idea here is that we can, when need be, set up local domains to

+ contact instead of having to go over the VPN directly, but still have

+ sane configs. For example, if we tell the proxy server to hit "app1" and

+ that box is in _iad2_, it will go directly to app1; if it's not, it will go

+ over the vpn to app1.

@@ -0,0 +1,66 @@ 

+ = Docs SOP

+ 

+ ____

+ Fedora Documentation - Documentation for installing and using Fedora

+ ____

+ 

+ == Contact Information

+ 

+ Owner:::

+   docs, Fedora Infrastructure Team

+ Contact:::

+   #fedora-docs

+ Servers:::

+   proxy*

+ Purpose:::

+   Provide documentation for users and contributors.

+ 

+ == Description

+ 

+ The Fedora Documentation Project was created to provide documentation

+ for fedora users and contributors. It's like "The Bible" for using

+ Fedora and other software used by the Fedora Project. It uses Publican,

+ a free and open-source publishing tool. Publican generates html pages

+ from content in DocBook XML format. The source files are in a git repo

+ and publican builds html files from these source files whenever changes

+ are made. As these are static pages, they are available on all the proxy

+ servers which serve our requests for docs.fedoraproject.org.

+ 

+ == Updates process

+ 

+ The fedora docs writers update and build their docs and then push the

+ completed output into a git repo. This git repo is then pulled by each

+ of the Fedora proxies and served as static content.

+ 

+ Note that docs is talking about setting up a new process; this SOP needs

+ updating when that happens.

+ 

+ == Reporting bugs

+ 

+ Bugs can be reported at the Fedora Documentation's Bugzilla. Here's the

+ link:

+ 

+ https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora%20Documentation

+ 

+ Errors or problems in the wiki can be fixed by anyone with a FAS

+ account.

+ 

+ == Contributing to the Fedora Documentation Project

+ 

+ If you find the existing documentation insufficient or outdated or any

+ particular page is not available in your language feel free to improve

+ the documentation by contributing to Fedora Documentation Project. You

+ can find more details here:

+ 

+ https://fedoraproject.org/wiki/Join_the_Docs_Project

+ 

+ Translation of documentation is taken care of by the Fedora Localization

+ Project aka L10N. More details can be found at:

+ 

+ https://fedoraproject.org/wiki/L10N

+ 

+ == Publican wiki

+ 

+ More details about Publican can be found at the publican wiki here:

+ 

+ https://sourceware.org/publican/en-US/index.html

@@ -0,0 +1,154 @@ 

+ = Fedora Account System

+ 

+ Notes about FAS and how to do things in it:

+ 

+ * where are certs for fas accounts for koji, etc? on fas01

+ `/var/lib/fedora-ca` - makefile targets allow you to do things with them.

+ 

+ Look in `index.txt` for certs. Ones marked with an 'R' in the left-most

+ column are 'REVOKED'.

+ 

+ to revoke a cert:

+ 

+ ....

+ cd /var/lib/fedora-ca

+ ....

+ 

+ find the cert number in `index.txt` - the number is the 3rd column in the

+ file - you can match it to the user by searching for their username. You

+ want the highest-numbered cert for their account.

+ 

+ once you have the number you would run (as root or fas):

+ 

+ ....

+ make revoke cert=newcerts/$that_number.pem

+ ....
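
+ 

+ A hedged one-liner for locating that number (it trusts the column layout

+ described above, i.e. the serial in the 3rd column - double-check against

+ the file before revoking anything):

+ 

+ ....

+ cd /var/lib/fedora-ca

+ awk '/USERNAME/ {print $3}' index.txt | sort | tail -1

+ ....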

+ 

+ == How to gather information about a user

+ 

+ You'll want to have direct access to query the database for this. The

+ common way is to have someone in _sysadmin-db_ ssh to the postgres db

+ hosting FAS (currently _db01_). Then access it via ident auth on the box:

+ 

+ ....

+ sudo -u postgres psql fas2

+ ....

+ 

+ There are several tables that will have information about a user. Some

+ of it is redundant but it's good to check all the sources; there

+ shouldn't be inconsistencies:

+ 

+ ....

+ select * from people where username = 'USERNAME';

+ ....

+ 

+ Of interest here are:

+ 

+ id::

+   for later queries

+ password_changed::

+   tells when the password was last changed

+ last_seen::

+   last login to fas (including through jsonfas from other TG1/2 apps.

+   Maybe wiki and insight as well. Not fedorahosted trac, shell login,

+   etc)

+ status_change::

+   last time that the user's status was updated via the website. Usually

+   triggered when the user was marked inactive for a mass password change

+   and then they reset their password.

+ 

+ Next table is the log table:

+ 

+ ....

+ select * from log where author_id = ID_FROM_PREV_QUERY or description ~ '.*USERNAME.*';

+ ....

+ 

+ The FAS writes certain events to the log table. This will get those

+ events. We use both the author_id field (who made the change) and the

+ username in a description regex search because a few changes are made to

+ users by admins. Fields of interest are pretty self explanatory here:

+ 

+ changetime::

+   when the log was made

+ description::

+   description of the event that's being logged

+ 

+ [NOTE]

+ ====

+ FAS does not log every event that happens to a user. Only "important"

+ ones. FAS also cannot record direct changes to the database here (for

+ instance, when we mark accounts inactive administratively via the db).

+ ====

+ 

+ Lastly, there's the groups and person_roles table. When a user joins

+ a group, the person_roles table is updated to reflect the user's status

+ in the group, when they applied, and when they were approved:

+ 

+ ....

+ select groups.name, person_roles.* from person_roles, groups where person_id = ID_FROM_INITIAL_QUERY and groups.id = person_roles.group_id;

+ ....

+ 

+ This will give you the following fields to pay attention to:

+ 

+ name::

+   Name of the group

+ role_status::

+   If this is unapproved, it just means the user applied for it. If it is

+   approved, it means they are actually in the group.

+ creation::

+   When the user applied to the group

+ approval::

+   When the user was approved to be in the group

+ role_type::

+   What role the person has or wants to have in the group

+ sponsor_id::

+   If you suspect something is suspicious with one of the roles, you may

+   want to ask the sponsor if they remember sponsoring this person

+ 

+ == Account Deletion and renaming

+ 

+ [NOTE]

+ ====

+ See also <<accountdeletion.adoc#>> for information on how to disable, rename,

+ and remove accounts.

+ ====

+ 

+ == Pseudo Users

+ 

+ [NOTE]

+ ====

+ See also <<nonhumanaccounts.adoc#>> for information on creating pseudo user

+ accounts for use in pkgdb/bugzilla

+ ====

+ 

+ == fas staging

+ 

+ We have a staging fas db setup on `db-fas01.stg.iad2.fedoraproject.org` -

+ it's accessed by `fas01.stg.iad2.fedoraproject.org`

+ 

+ This system is not autopopulated by production fas - it must be done

+ manually. To do this you must:

+ 

+ * dump the fas2 db on `db-fas01.iad2.fedoraproject.org`:

+ +

+ ....

+ sudo -u postgres pg_dump -C fas2 > fas2.dump

+ scp fas2.dump db-fas01.stg.iad2.fedoraproject.org:/tmp

+ ....

+ * then on `fas01.stg.iad2.fedoraproject.org`:

+ +

+ ....

+ /etc/init.d/httpd stop

+ ....

+ * then on `db02.stg.iad2.fedoraproject.org`:

+ +

+ ....

+ echo "drop database fas2\;" | sudo -u postgres psql ; cat fas2.dump | sudo -u postgres psql

+ ....

+ * then on `fas01.stg.iad2.fedoraproject.org`:

+ +

+ ....

+ /etc/init.d/httpd start

+ ....

+ 

+ that should do it.

@@ -0,0 +1,42 @@ 

+ = FAS-OpenID

+ 

+ FAS-OpenID is the OpenID server of Fedora infrastructure.

+ 

+ Live instance is at https://id.fedoraproject.org/ Staging instance is at

+ https://id.stg.fedoraproject.org/

+ 

+ == Contact Information

+ 

+ Owner::

+   Patrick Uiterwijk (puiterwijk)

+ Contact::

+   #fedora-admin, #fedora-apps, #fedora-noc

+ Location::

+   openid0\{1,2}.iad2.fedoraproject.org openid01.stg.fedoraproject.org

+ Purpose::

+   Authentication & Authorization

+ 

+ == Trusted roots

+ 

+ FAS-OpenID has a set of "trusted roots", which contains websites which

+ are always trusted, and thus FAS-OpenID will not show the Approve/Reject

+ form to the user when they login to any such site.

+ 

+ As a policy, we will only add websites to this list which Fedora

+ Infrastructure controls. If anyone ever asks to add a website to this

+ list, just answer with this default message:

+ 

+ ....

+ We only add websites we (Fedora Infrastructure) maintain to this list.

+ 

+ This feature was put in because it wouldn't make sense to ask for permission

+ to send data to the same set of servers that it already came from.

+ 

+ Also, if we were to add external websites, we would need to judge their

+ privacy policy etc.

+ 

+ Also, people might start complaining that we added site X but not their site,

+ maybe causing us "political" issues later down the road.

+ 

+ As a result, we do NOT add external websites.

+ ....

@@ -0,0 +1,178 @@ 

+ = fedmsg (Fedora Messaging) Certs, Keys, and CA - SOP

+ 

+ X509 certs, private RSA keys, Certificate Authority, and Certificate

+ Revocation List.

+ 

+ == Contact Information

+ 

+ Owner::

+   Messaging SIG, Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, #fedora-apps, #fedora-noc

+ Servers::

+   * app0[1-7]

+   * packages0[1-2]

+   * fas0[1-3]

+   * pkgs01

+   * busgateway01,

+   * value0\{1,3}

+   * releng0\{1,4}

+   * relepel03

+ Purpose::

+   Certify fedmsg messages come from authentic sources.

+ 

+ == Description

+ 

+ fedmsg sends JSON-encoded messages from many services to a zeromq

+ messaging bus. We're not concerned with encrypting the messages, only

+ with signing them so an attacker cannot spoof.

+ 

+ Every instance of each service on each host has its own cert and private

+ key, signed by the CA. By convention, we name the certs

+ `<service>-<fqdn>.\{crt,key}`. For instance, bodhi has the following certs:

+ 

+ * bodhi-app01.iad2.fedoraproject.org

+ * bodhi-app02.iad2.fedoraproject.org

+ * bodhi-app03.iad2.fedoraproject.org

+ * bodhi-app01.stg.iad2.fedoraproject.org

+ * bodhi-app02.stg.iad2.fedoraproject.org

+ * more

+ 

+ Scripts to generate new keys, sign them, and revoke them live in the

+ ansible repo in `ansible/roles/fedmsg/files/cert-tools/`. The keys and

+ certs themselves (including ca.crt and the CRL) live in the private repo

+ in `private/fedmsg-certs/keys/`

+ 

+ fedmsg is locally configured to find the key it needs by looking in

+ `/etc/fedmsg.d/ssl.py` which is kept in ansible in

+ `ansible/roles/fedmsg/templates/fedmsg.d/ssl.py.erb`.

+ 

+ Each service-host has its own key. This means:

+ 

+ * A key is not shared across multiple instances of a service on

+ different machines. i.e., bodhi on app01 and bodhi on app02 should have

+ different key/cert pairs.

+ * A key is not shared across multiple services on a host. i.e.,

+ mediawiki on app01 and bodhi on app01 should have different key/cert

+ pairs.

+ 

+ The attempt here is to minimize the number of potential attack vectors.

+ Each private key should be readable only by the service that needs it.

+ bodhi runs under mod_wsgi in apache and should run as its own unique

+ bodhi user (not as apache). The permissions for

+ its _iad2.fedoraproject.org_ private_key, when deployed by ansible, should

+ be read-only for that local bodhi user.

+ 

+ For more information on how fedmsg uses these certs see

+ http://fedmsg.readthedocs.org/en/latest/crypto.html

+ 

+ == Configuring the Scripts

+ 

+ Usage of the main scripts is described in more detail below. They are

+ located in `ansible/roles/fedmsg/files/cert-tools`.

+ 

+ Before you use them, you'll need to point them at the right directory to

+ modify. By default, this is `~/private/fedmsg-certs/keys/`. You can

+ change that by editing `ansible/roles/fedmsg/files/cert-tools/vars` in

+ the event that you have the private repo checked out to an alternate

+ location.

+ 

+ There are other configuration values defined in that script. Most will

+ not need to be changed.

+ 

+ == Wiping and Rebuilding Everything

+ 

+ There is a script in `ansible/roles/fedmsg/files/cert-tools/` named

+ `rebuild-all-fedmsg-certs`. You can run it with no arguments to wipe out

+ the old and generate a new CA root certificate, a signing cert and key,

+ and all key/cert pairs for all service-hosts.

+ 

+ [NOTE]

+ ====

+ Warning -- Obviously, this will wipe everything. Do you want that?

+ ====

+ == Adding a new key for a new service-host

+ 

+ First, checkout the ansible private repo as that's where the keys are

+ going to be stored. The scripts will assume this is checked out to

+ `~/private`.

+ 

+ In `ansible/roles/fedmsg/files/cert-tools` run:

+ 

+ ....

+ $ source ./vars

+ $ ./build-and-sign-key <service>-<fqdn>

+ ....

+ 

+ For instance, if we bring up a new app host,

+ _app10.iad2.fedoraproject.org_, we'll need to generate a new cert/key pair

+ for each fedmsg-enabled service that will be running on it, so you'd

+ run:

+ 

+ ....

+ $ source ./vars

+ $ ./build-and-sign-key shell-app10.iad2.fedoraproject.org

+ $ ./build-and-sign-key bodhi-app10.iad2.fedoraproject.org

+ $ ./build-and-sign-key mediawiki-app10.iad2.fedoraproject.org

+ ....

+ 

+ Just creating the keys isn't quite enough; there are four more things

+ you'll need to do.

+ 

+ The private keys are created in your checkout of the private repo under

+ `~/private/private/fedmsg-certs/keys` . There will be four files for each

+ cert you created: `<hexdigits>.pem` (ex: 5B.pem) and

+ `<service>-<fqdn>.\{crt,csr,key}`. Git add, commit, and push all of those.

+ 

+ Second, you need to edit

+ `ansible/roles/fedmsg/files/cert-tools/rebuild-all-fedmsg-certs` and add

+ the argument of the commands you just ran, so that next time certs need

+ to be blown away and recreated, the new service-hosts will be included.

+ For the examples above, you would need to add to the list:

+ 

+ ....

+ shell-app10.iad2.fedoraproject.org

+ bodhi-app10.iad2.fedoraproject.org

+ mediawiki-app10.iad2.fedoraproject.org

+ ....

+ 

+ Third, you need to ensure that the keys are distributed to the host with the

+ proper permissions. Only the bodhi user should be able to access bodhi's

+ private key. This can be accomplished by using the `fedmsg::certificate`

+ in ansible. It should distribute your new keys to the correct hosts and

+ correctly permission them.

+ 

+ Lastly, if you haven't already updated the global fedmsg config, you'll

+ need to. You need to add your new service-node to `fedmsg.d/endpoints.py`

+ and to `fedmsg.d/ssl.py`. Those can be found in

+ `ansible/roles/fedmsg/templates/fedmsg.d`. See

+ http://fedmsg.readthedocs.org/en/latest/config.html for more information

+ on the layout and meaning of those files.

+ 

+ == Revoking a key

+ 

+ In `ansible/roles/fedmsg/files/cert-tools` run:

+ 

+ ....

+ $ source ./vars

+ $ ./revoke-full <service>-<fqdn>

+ ....

+ 

+ This will alter `private/fedmsg-certs/keys/crl.pem` which should be

+ picked up and served publicly, and then consumed by all fedmsg consumers

+ globally.

+ 

+ `crl.pem` is publicly available at

+ http://fedoraproject.org/fedmsg/crl.pem

+ 

+ [NOTE]

+ ====

+ Even though crl.pem lives in the private repo, we're just keeping it

+ there for convenience. It really _should_ be served publicly, so don't

+ panic. :)

+ ====

+ [NOTE]

+ ====

+ At the time of this writing, the CRL is not actually used. I need one

+ publicly available first so we can test it out.

+ ====

@@ -0,0 +1,104 @@ 

+ = fedmsg-gateway SOP

+ 

+ Outgoing raw ZeroMQ message stream.

+ 

+ [NOTE]

+ ====

+ See also: <<fedmsg-websocket.adoc#>>

+ ====

+ 

+ == Contact Information

+ 

+ Owner:::

+   Messaging SIG, Fedora Infrastructure Team

+ Contact:::

+   #fedora-apps, #fedora-admin, #fedora-noc

+ Servers:::

+   busgateway01, proxy0*

+ Purpose:::

+   Expose raw ZeroMQ messages outside the FI environment.

+ 

+ == Description

+ 

+ Users outside of Fedora Infrastructure can listen to the production

+ message bus by connecting to specific addresses. This is required for

+ local users to run their own hubs and message processors ("Consumers").

+ 

+ The specific public endpoints are:

+ 

+ production::

+   tcp://hub.fedoraproject.org:9940

+ staging::

+   tcp://stg.fedoraproject.org:9940

+ 

+ _fedmsg-gateway_, the daemon running on _busgateway01_, is listening to the

+ FI production fedmsg bus and will relay every message that it receives

+ out to a special ZMQ pub endpoint bound to port 9940. haproxy mediates

+ connections to the _fedmsg-gateway_ daemon.

+ 

+ == Connection Flow

+ 

+ Clients connecting through haproxy on `proxy0*:9940` are redirected to

+ `busgateway0*:9940`. This can be found in the `haproxy.cfg` entry for

+ `listen fedmsg-raw-zmq 0.0.0.0:9940`.

+ 

+ This is different than the apache reverse proxy pass setup we have for

+ the _app0*_ and _packages0*_ machines. _That_ flow looks something like

+ this:

+ 

+ ....

+ Client -> apache(proxy01) -> haproxy(proxy01) -> apache(app01)

+ ....

+ 

+ The flow for the raw zmq stream provided by _fedmsg-gateway_ looks

+ something like this:

+ 

+ ....

+ Client -> haproxy(proxy01) -> fedmsg-gateway(busgateway01)

+ ....

+ 

+ _haproxy_ is listening on a public port.

+ 

+ At the time of this writing, _haproxy_ does not actually load balance

+ zeromq session requests across multiple _busgateway0*_ machines, but there

+ is nothing stopping us from adding them. New hosts can be added in

+ ansible and pressed from _busgateway01_'s template. Add them to the

+ fedmsg-raw-zmq listen in _haproxy_'s config and it should Just Work.

+ 

+ == Increasing the Maximum Number of Concurrent Connections

+ 

+ HTTP requests are typically very short (a few seconds at most). This

+ means that the number of concurrent tcp connections we require for most

+ of our services is quite low (1024 is overkill). ZeroMQ tcp connections,

+ on the other hand, are expected to live for quite a long time.

+ Consequently we needed to scale up the number of possible concurrent tcp

+ connections.

+ 

+ All of this is in ansible and should be handled for us automatically if

+ we bring up new nodes.

+ 

+ * The pam_limits user limit for the fedmsg user was increased from 1024

+ to 160000 on _busgateway01_.

+ * The pam_limits user limit for the haproxy user was increased from 1024

+ to 160000 on the _proxy0*_ machines.

+ * The zeromq High Water Mark (HWM) was increased to 160000 on

+ _busgateway01_.

+ * The maximum number of connections allowed was increased in

+ `haproxy.cfg`.

+ 

+ == Nagios

+ 

+ New nagios checks were added for this that check to see if the number of

+ concurrent connections through haproxy is approaching the maximum number

+ allowed.

+ 

+ You can check these numbers by hand by inspecting the _haproxy_ web

+ interface: https://admin.fedoraproject.org/haproxy/proxy1#fedmsg-raw-zmq

+ 

+ Look at the "Sessions" section. "Cur" is the current number of sessions

+ versus "Max", the maximum number seen at the same time and "Limit", the

+ maximum number of concurrent connections allowed.
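
+ 

+ A hedged spot-check from the command line on a proxy host (assumes the

+ iproute2 `ss` tool is available there):

+ 

+ ....

+ # count established client connections to the public raw-zmq port

+ ss -tn state established '( sport = :9940 )' | tail -n +2 | wc -l

+ ....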

+ 

+ == RHIT

+ 

+ We had RHIT open up port 9940 special to _proxy01.iad2_ for this.

@@ -0,0 +1,57 @@ 

+ = fedmsg introduction and basics, SOP

+ 

+ General information about fedmsg

+ 

+ == Contact Information

+ 

+ Owner::

+   Messaging SIG, Fedora Infrastructure Team

+ Contact::

+   #fedora-apps, #fedora-admin, #fedora-noc

+ Servers::

+   Almost all of them.

+ Purpose::

+   Introduce sysadmins to fedmsg tools and config

+ 

+ == Description

+ 

+ _fedmsg_ is a system that links together most of our webapps and services

+ into a message mesh or net (often called a "bus"). It is built on top of

+ the zeromq messaging library.

+ 

+ _fedmsg_ has its own developer documentation that is a good place to check

+ if this or other SOPs don't provide enough information -

+ http://fedmsg.rtfd.org

+ 

+ == Tools

+ 

+ Generally, _fedmsg-tail_ and _fedmsg-logger_ are the two most commonly used

+ tools for debugging and testing. To see if bus-connectivity exists

+ between two machines, log onto each of them and run the following on the

+ first:

+ 

+ ....

+ $ echo testing from $(hostname) | fedmsg-logger

+ ....

+ 

+ And run the following on the second:

+ 

+ ....

+ $ fedmsg-tail --really-pretty

+ ....

+ 

+ == Configuration

+ 

+ _fedmsg_ configuration lives in `/etc/fedmsg.d/`

+ 

+ `/etc/fedmsg.d/endpoints.py` keeps the list of every possible fedmsg

+ endpoint. It acts as a global index that defines the bus.

+ 

+ See https://fedmsg.readthedocs.org/en/stable/configuration/ for a full glossary of

+ configuration values.

+ 

+ == Logs

+ 

+ _fedmsg_ daemons keep their logs in `/var/log/fedmsg`. _fedmsg_ message hooks

+ in existing apps (like bodhi) will log any errors to the logs of the app

+ they've been added to (like `/var/log/httpd/error_log`).

@@ -0,0 +1,29 @@ 

+ = fedmsg-irc SOP

+ 

+ ____

+ Echo fedmsg bus activity to IRC.

+ ____

+ 

+ == Contact Information

+ 

+ Owner::

+   Messaging SIG, Fedora Infrastructure Team

+ Contact::

+   #fedora-apps, #fedora-fedmsg, #fedora-admin, #fedora-noc

+ Servers::

+   value03

+ Purpose::

+   Echo fedmsg bus activity to IRC

+ 

+ == Description

+ 

+ _fedmsg-irc_ is a daemon running on _value03_ and _value01.stg_. It is

+ listening to the fedmsg bus and echoing that activity to the

+ _#fedora-fedmsg_ channel in IRC.

+ 

+ It can be configured to ignore certain messages, join certain rooms, and

+ take on a different nick by editing the values in `/etc/fedmsg.d/irc.py`

+ and restarting it with `sudo service fedmsg-irc restart`

+ 

+ See https://fedmsg.readthedocs.org/en/latest/configuration/#irc for more

+ information on configuration.

@@ -0,0 +1,73 @@ 

+ = Adding a new fedmsg message type

+ 

+ == Instrumenting the program

+ 

+ First, figure out how you're going to publish the message. Is it from a

+ shell script or from a long running process?

+ 

+ If it's from a shell script, you just need to add a

+ _fedmsg-logger_ statement to the script. Remember to set the

+ _--modname_ and _--topic_ for your new message's

+ fully-qualified topic.
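
+ 

+ For example, a hedged invocation (the modname, topic and JSON payload here

+ are made-up values for illustration):

+ 

+ ....

+ echo '{"thing": "example"}' | fedmsg-logger \

+     --modname myservice --topic thing.created --json-input

+ ....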

+ 

+ If it's from a python process, you just need to add a

+ `fedmsg.publish(..)` call. The same concerns about modname and topic

+ apply here.

+ 

+ If this is a short-lived python process, you'll want to add

+ _active=True_ to the call to `fedmsg.publish(..)`. This will

+ make the _fedmsg_ lib "actively" reach out to our _fedmsg-relay_ running on

+ _busgateway01_.

+ 

+ If it is a long-running python process (like a WSGI thread), then you

+ don't need to pass any extra arguments. You don't want it to reach out

+ to the _fedmsg-relay_ if possible. Your process will require that some

+ "endpoints" are created for it in `/etc/fedmsg.d/`. More on that below.

+ 

+ == Supporting infrastructure

+ 

+ You need to make sure that the machine this is running on has a cert and

+ key that can be read by the program to sign its message. If you don't

+ have a cert already, then you need to create it in the private repo. Ask

+ a sysadmin-main member.

+ 

+ Then you need to declare those certs in the _fedmsg_certs

+ data structure stored typically in our ansible `group_vars/` for this

+ service. Declare both the name of the cert, what group and user it

+ should be owned by, and in the `can_send:` section, declare the list of

+ topics that this cert should be allowed to publish.

+ 

+ If this is a long-running python process that is _not_ passing

+ _active=True_ to the call to

+ `fedmsg.publish(..)`, then you have to also declare

+ endpoints for it. You do that by specifying the `fedmsg_wsgi_procs` and

+ `fedmsg_wsgi_vars` in the `group_vars` for your service. The iptables

+ rules and _fedmsg_ endpoints should be automatically created for you on

+ the next playbook run.

+ 

+ == Supporting code

+ 

+ At this point, you can push the change out to production and be

+ publishing messages "okay". Everything should be fine.

+ 

+ However, your message will show up blank in _datagrepper_, in IRC, and in

+ _FMN_, and everywhere else we try to render it. You _must_ then follow up

+ and write a new _Processor_ for it in the _fedmsg_meta_

+ library we maintain:

+ https://github.com/fedora-infra/fedmsg_meta_fedora_infrastructure

+ 

+ You also _must_ write a test case for it there. The docs listing all

+ topics we publish at http://fedora-fedmsg.rtfd.org/ is automatically

+ generated from the test suite. Please don't forget this.

+ 

+ Lastly, you should cut a release of _fedmsg_meta_ and deploy it using the

+ `playbooks/manual/upgrade/fedmsg.yml` playbook, which should

+ update all the relevant hosts.

+ 

+ == Corner cases

+ 

+ If the process publishing the new message lives _outside_ our main

+ network, you have to jump through more hoops. Look at _abrt_, _koschei_, and

+ _copr_ for examples of how to configure this (you need a special firewall

+ rule, and they need to be configured to talk to our "inbound gateway"

+ running on the proxies).

@@ -0,0 +1,56 @@ 

+ = fedmsg-relay SOP

+ 

+ Bridge ephemeral scripts into the fedmsg bus.

+ 

+ == Contact Information

+ 

+ Owner::

+   Messaging SIG, Fedora Infrastructure Team

+ Contact::

+   #fedora-apps, #fedora-admin, #fedora-noc

+ Servers::

+   app01

+ Purpose::

+   Bridge ephemeral bash and python scripts into the fedmsg bus.

+ 

+ == Description

+ 

+ _fedmsg-relay_ is running on _app01_, which is a bad choice. We should look

+ to move it to a more isolated place in the future. _busgateway01_ would be

+ a better choice.

+ 

+ "Ephemeral" scripts like `pkgdb2branch.py`, the post-receive git hook on

+ _pkgs01_, and anywhere _fedmsg-logger_ is used all depend on _fedmsg-relay_.

+ Instead of emitting messages "directly" to the rest of the bus, they use

+ fedmsg-relay as an intermediary.

+ 

+ Check that _fedmsg-relay_ is running by looking for it in the process

+ list. You can restart it in the standard way with

+ `sudo service fedmsg-relay restart`. Check for its logs in

+ `/var/log/fedmsg/fedmsg-relay.log`

+ 

+ Ephemeral scripts know where the _fedmsg-relay_ is by looking for the

+ relay_inbound and relay_outbound values in the global fedmsg config.

+ 

+ == But What is it Doing? And Why?

+ 

+ The _fedmsg_ bus is designed to be "passive" in its normal operation. A

+ _mod_wsgi_ process under _httpd_ sets up its _fedmsg_ publisher socket to

+ passively emit messages on a certain port. When some other service wants

+ to receive these messages, it is up to that service to know where

+ _mod_wsgi_ is emitting and to actively connect there. In this way,

+ emitting is passive and listening is active.

+ 

+ We get a problem when we have a one-off or "ephemeral" script that is

+ not a long-running process -- a script like _pkgdb2branch_ which is run

+ when a user runs it and which ends shortly after. Listeners who want

+ these scripts' messages will find that they are usually not available

+ when they try to connect.

+ 

+ To solve this problem, we introduced the "_fedmsg-relay_" daemon which is

+ a kind of "passive"-to-"passive" adaptor. It binds to an outbound port

+ on one end where it will publish messages (like normal) but it also

+ binds to an another port where it listens passively for inbound

+ messages. Ephemeral scripts then actively connect to the passive inbound

+ port of the _fedmsg-relay_ to have their payloads echoed on the

+ bus-proper.

@@ -0,0 +1,70 @@ 

+ = websocket SOP

+ 

+ websocket communication with Fedora apps.

+ 

+ See-also: <<fedmsg-gateway.adoc#>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Messaging SIG, Fedora Infrastructure Team

+ Contact::

+   #fedora-apps, #fedora-admin, #fedora-noc

+ Servers::

+   busgateway01, proxy0*, app0*

+ Purpose::

+   Expose a websocket server for FI apps to use

+ 

+ == Description

+ 

+ _WebSocket_ is a protocol (an extension of HTTP/1.1) by which client web

+ browsers can establish full-duplex socket communications with a server

+ --the "real-time web".

+ 

+ In our case, webapps served from _app0*_ and _packages0*_ will include

+ javascript code instructing client browsers to establish a second

+ connection to our _WebSocket_ server. They point browsers to the following

+ addresses:

+ 

+ production::

+   wss://hub.fedoraproject.org:9939

+ staging::

+   wss://stg.fedoraproject.org:9939

+ 

+ The websocket server itself is a _fedmsg-hub_ daemon running on

+ _busgateway01_. It is configured to enable its websocket server component

+ in the presence of certain configuration values.

+ 

+ _haproxy_ mediates connections to the _fedmsg-hub_ _websocket_ server daemon.

+ An _stunnel_ daemon provides SSL support.

+ 

+ == Connection Flow

+ 

+ The connection flow is much the same as in the <<fedmsg-gateway.adoc#>>,

+ but is somewhat more complicated.

+ 

+ "Normal" HTTP requests to our app servers traverse the following chain:

+ 

+ ....

+ Client -> apache(proxy01) -> haproxy(proxy01) -> apache(app01)

+ ....

+ 

+ The flow for a websocket requests looks something like this:

+ 

+ ....

+ Client -> stunnel(proxy01) -> haproxy(proxy01) -> fedmsg-hub(busgateway01)

+ ....

+ 

+ stunnel is listening on a public port, negotiates the SSL connection,

+ and redirects the connection to haproxy, which in turn hands it off to the

+ _fedmsg-hub_ websocket server listening on _busgateway01_.

+ 

+ At the time of this writing, _haproxy_ does not actually load balance

+ zeromq session requests across multiple _busgateway0*_ machines, but there

+ is nothing stopping us from adding them. New hosts can be added in

+ ansible and pressed from _busgateway01_'s template. Add them to the

+ _fedmsg-websockets_ listen in _haproxy_'s config and it should Just Work.

+ 

+ == RHIT

+ 

+ We had RHIT open up port 9939 special to _proxy01.iad2_ for this.

@@ -0,0 +1,35 @@ 

+ = Fedocal SOP

+ 

+ _Fedocal_ is a web-based group calendar application that is made available

+ to the various groups within the Fedora project.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_documentation_links>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Location::

+   https://apps.fedoraproject.org/calendar

+ 

+ Servers

+ 

+ Purpose::

+   To provide links to the documentation for _fedocal_, as it exists

+   elsewhere on the internet and it was decided that a link document

+   would be a better use of resources than to rewrite the book.

+ 

+ == Documentation Links

+ 

+ For information on the latest and greatest in fedocal please review:

+ 

+ https://fedocal.readthedocs.org/en/latest/

+ 

+ For documentation on the usage of fedocal please consult:

+ 

+ https://fedocal.readthedocs.org/en/latest/usage.html

@@ -0,0 +1,364 @@ 

+ = Fedora Release Infrastructure SOP

+ 

+ This SOP contains all of the steps required by the Fedora Infrastructure

+ team in order to get a release out. Much of this work overlaps with the

+ Release Engineering team (and at present share many of the same

+ members). Some work may get done by releng, some may get done by

+ Infrastructure, as long as it gets done, it doesn't matter.

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team, Fedora Release Engineering Team

+ Contact:::

+   #fedora-admin, #fedora-releng, sysadmin-main, sysadmin-releng

+ Location:::

+   N/A

+ Servers:::

+   All

+ Purpose:::

+   Releasing a new version of Fedora

+ 

+ == Preparations

+ 

+ Before a release ships, the following items need to be completed.

+ 

+ [arabic]

+ . New website from the websites team (typically hosted at

+ https://getfedora.org/_/)

+ 

+ . Verify mirror space (for all test releases as well)

+ 

+ . Verify with rel-eng permissions on content are right on the mirrors.

+ Don't leak.

+ 

+ . Communication with Red Hat IS (Give at least 2 months notice, then

+ reminders as the time comes near) (final release only)

+ 

+ . Infrastructure change freeze

+ 

+ . Modify _Template:FedoraVersion_ to reference new version. (Final release

+ only)

+ 

+ . Move old releases to archive (post final release only)

+ 

+ . Switch release from development/N to normal releases/N/ tree in mirror

+ manager (post final release only)

+ 

+ == Change Freeze

+ 

+ The rules are simple:

+ 

+ * Hosts with the ansible variable "freezes" set to "True" are frozen.

+ 

+ * You may make changes as normal on hosts that are not frozen. (For

+ example, staging is never frozen)

+ 

+ * Changes to frozen hosts requires a freeze break request sent to the

+ fedora infrastructure list, containing a description of the problem or

+ issue, actions to be taken and (if possible) patches to ansible that

+ will be applied. These freeze breaks must then get two approvals from

+ sysadmin-main or sysadmin-releng group members before being applied.

+ 

+ * Changes to recover from outages are acceptable to frozen hosts if

+ needed.

+ 

+ Change freezes will be sent to the fedora-infrastructure-list and begin

+ 3 weeks before each release and the final release. The freeze will end

+ one day after the release. Note, if the release slips during a change

+ freeze, the freeze just extends until the day after a release ships.

+ 

+ You can get a list of frozen/non-frozen hosts by:

+ 

+ ....

+ git clone https://pagure.io/fedora-infra/ansible.git

+ scripts/freezelist -i inventory

+ ....

+ 

+ == Notes about release day

+ 

+ Release day is always an interesting and unique event. After the final

+ sprint from test to the final release a lot of the developers will be

+ looking forward to a bit of time away, as well as some sleep. Once

+ Release Engineering has built the final tree, and synced it to the

+ mirrors it is our job to make sure everything else (except the bit flip)

+ gets done as painlessly and easily as possible.

+ 

+ [NOTE]

+ ====

+ All communication is typically done in #fedora-admin. Typically these

+ channels are laid back and staying on topic isn't strictly enforced. On

+ release day this is not true. We encourage people to come, stay in the

+ room and be quiet unless they have a specific task or question related

+ to release day. It's nothing personal, but release day can get out of

+ hand quick.

+ ====

+ 

+ During normal load, our websites function as normal. This is

+ especially true since we've moved the wiki to mod_fcgi. On release day

+ our load spikes a great deal. During the Fedora 6 launch many services

+ were offline for hours. Some (like the docs) were off for days. A large

+ part of this outage was due to the wiki not being able to handle the

+ load, part was a lack of planning by the Infrastructure team, and part

+ is still a mystery. There are questions as to whether or not all of the

+ traffic was legit or a DDoS.

+ 

+ The Fedora 7 release went much better. Some services were offline for

+ minutes at a time but very little of it was out longer than that. The

+ wiki crashed, as it always does. We had made sure to make the

+ fedoraproject.org landing page static though. This helped a great deal

+ though we did see load on the proxy boxes as spiky.

+ 

+ Recent releases have been quite smooth due to a number of changes: we

+ have a good deal more bandwidth on master mirrors, more cpus and memory,

+ as well as prerelease versions are much easier to come by for those

+ interested before release day.

+ 

+ == Day Prior to Release Day

+ 

+ === Step 1 (Torrent)

+ 

+ Setup the torrent. All files can be synced with the torrent box but just

+ not published to the world. Verify with sha1sum. Follow the instructions

+ on the torrentrelease.txt sop up to and including step 4.
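
+ 

+ A hedged spot-check (the file names are placeholders; compare the output

+ against the checksums published by releng):

+ 

+ ....

+ sha1sum Fedora-*.iso

+ ....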

+ 

+ === Step 2 (Website)

+ 

+ Verify the website design / content has been finalized with the websites

+ team. Update the Fedora version number wiki template if this is a final

+ release. It will need to be changed in

+ https://fedoraproject.org/wiki/Template:CurrentFedoraVersion

+ 

+ Additionally, there are redirects in the ansible

+ playbooks/include/proxies-redirects.yml file for Cloud Images. These

+ should be pushed as soon as the content is available. See:

+ https://pagure.io/fedora-infrastructure/issue/3866 for example

+ 

+ === Step 3 (Mirrors)

+ 

+ Verify enough mirrors are setup and have Fedora ready for release. If

+ for some reason something is broken it needs to be fixed. Many of the

+ mirrors are running a check-in script. This lets us know who has Fedora

+ without having to scan everyone. Hide the Alpha, Beta, and Preview

+ releases from the publiclist page.

+ 

+ You can check this by looking at:

+ 

+ ....

+ wget "http://mirrors.fedoraproject.org/mirrorlist?path=pub/fedora/linux/releases/test/28-Beta&country=global"

+ 

+ (replace 28 and Beta with the version and release.)

+ ....

+ 

+ == Release day

+ 

+ === Step 1 (Prep and wait)

+ 

+ Verify the mirrors are ready and that the torrent has valid copies of

+ its files (use sha1sum)

+ 

+ Do not move on to step two until the Release Engineering team has given

+ the ok for the release. It is the releng team's decision as to whether

+ or not we release and they may pull the plug at any moment.

+ 

+ === Step 2 (Torrent)

+ 

+ Once given the ok to release, the Infrastructure team should publish the

+ torrent and encourage people to seed. Complete the steps in the torrent
+ release SOP (<<torrentrelease.adoc#>>) after step 4.

+ 

+ === Step 3 (Bit flip)

+ 

+ The mirrors sit and wait for a single permissions bit to be altered so

+ that they show up to their services. The bit flip (done by the releng

+ team) will replicate out to the mirrors. Verify that the mirrors have

+ received the change by seeing if it is actually available, just use a

+ spot check. Once that is complete move on.
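+ 
+ For example, a quick spot check against the master mirror and one public
+ mirror (the release number and mirror hostname below are illustrative
+ only):
+ 
+ ....
+ $ curl -sI https://dl.fedoraproject.org/pub/fedora/linux/releases/28/ | head -n 1
+ $ curl -sI http://mirror.example.org/pub/fedora/linux/releases/28/ | head -n 1
+ ....
+ 
+ A `200 OK` rather than a `403` or `404` means the release directory is
+ visible on that host.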

+ 

+ === Step 4 (Website)

+ 

+ Once all of the distribution pieces are verified (mirrors and torrent),

+ all that is left is to publish the website. At present this is done by

+ making sure the master branch of fedora-web is pulled by the

+ `syncStatic.sh` script in ansible. It will normally sync within an hour,
+ but on release day people don't like to wait that long, so run the
+ following on _sundries01_:

+ 

+ ....
+ sudo -u apache /usr/local/bin/lock-wrapper syncStatic 'sh -x /usr/local/bin/syncStatic'
+ ....

+ 

+ Once that completes, on _batcave01_:

+ 

+ ....

+ sudo -i ansible proxy\* "/usr/bin/rsync --delete -a --no-owner --no-group bapp02::getfedora.org/ /srv/web/getfedora.org/"

+ ....

+ 

+ Verify http://getfedora.org/ is working.

+ 

+ === Step 5 (Docs)

+ 

+ Just as with the website, the docs site needs to be published. To do so,
+ run the following:

+ 

+ ....

+ /root/bin/docs-sync

+ ....

+ 

+ === Step 6 (Monitor)

+ 

+ Once the website is live, keep an eye on various news sites for the

+ release announcement. Closely watch the load on all of the boxes, proxy,

+ application and otherwise. If something is getting overloaded, see

+ suggestions on this page in the "Juggling Resources" section.

+ 

+ === Step 7 (Badges) (final release only)

+ 

+ We have some badge rules that are dependent on which release of Fedora

+ we're on. As you have time, please perform the following on your local

+ box:

+ 

+ ....

+ $ git clone ssh://git@pagure.io/fedora-badges.git

+ $ cd fedora-badges

+ ....

+ 

+ Edit `rules/tester-it-still-works.yml` and update the release tag to

+ match the now old but stable release. For instance, if we just released

+ fc21, then the tag in that badge rule should be fc20.
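+ 
+ One way to make that edit (assuming the tag was previously fc19 and is
+ the only release string in the file; review the diff before committing):
+ 
+ ....
+ $ sed -i 's/\bfc19\b/fc20/g' rules/tester-it-still-works.yml
+ $ git diff
+ ....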

+ 

+ Edit `rules/tester-you-can-pry-it-from-my-cold-dead-hands.yml` and

+ update the release tag to match the release that is about to reach EOL.

+ For instance, if we just released f28, then the tag in that badge rule

+ should be f26. Commit the changes:

+ 

+ ....

+ $ git commit -a -m 'Updated tester badge rule for f28 release.'

+ $ git push origin master

+ ....

+ 

+ Then, on batcave, perform the following:

+ 

+ ....

+ $ sudo -i ansible-playbook $(pwd)/playbooks/manual/push-badges.yml

+ ....

+ 

+ === Step 8 (Done)

+ 

+ Just chill, keep an eye on everything and make changes as needed. If you

+ can't keep a service up, try to redirect randomly to some of the

+ mirrors.

+ 

+ == Priorities

+ 

+ Priorities during release day (in order):

+ 

+ [arabic]

+ . Website

+ +

+ Anything related to a user landing at fedoraproject.org, and clicking

+ through to a mirror or torrent to download something must be kept up.

+ This is distribution, and without it we can potentially lose many

+ users.

+ +

+ . Linked addresses

+ +

+ We do not have direct control over what Hacker News, Phoronix, or
+ anyone else links to. If they link to something on the wiki (or any
+ other site we control) and it is going down, a rewrite should be put in
+ place to direct them to http://fedoraproject.org/get-fedora (see the
+ sketch after this list).

+ +

+ . Torrent

+ +

+ The torrent server has never had problems during a release. Make sure

+ it is up.

+ +

+ . Release Notes

+ +

+ Typically grouped with the docs site, the release notes are often

+ linked to (this is fine, no need to redirect) but keep an eye on the

+ logs and ensure that the release notes can actually be found where we've
+ said they are. In previous releases we sometimes had to make them
+ available in more than one spot.

+ +

+ . docs.fedoraproject.org

+ +

+ People will want to see what's new in Fedora and get further

+ documentation about it. Much of this is in the release notes.

+ +

+ . wiki

+ +

+ Because it is so resource heavy, and because it is so developer
+ oriented, we have no choice but to give the wiki a lower priority.

+ +

+ . Everything else.
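+ 
+ As mentioned in the "Linked addresses" priority above, redirecting an
+ overloaded page can be done with an Apache rewrite on the proxies. A
+ minimal sketch (the wiki page name is hypothetical; the real rule
+ belongs in the proxy configuration in ansible):
+ 
+ ....
+ RewriteRule ^/wiki/F99_release_announcement$ https://fedoraproject.org/get-fedora [R=302,L]
+ ....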

+ 

+ == Juggling Resources

+ 

+ In our environment we're running different things on many different

+ servers. Using Xen we can easily give machines more or less RAM and
+ processors. We can take down builders and bring up application servers.

+ The trick is to be smart and make sure you understand what is causing

+ the problem. These are some tips to keep in mind:

+ 

+ * IPTables-based bandwidth and connection limiting (successful in the
+ past; see the sketch after this list)

+ * Altering the weight on the proxy balancers

+ * Create static pages out of otherwise dynamic content

+ * Redirect pages to a mirror

+ * Add a server / remove un-needed servers
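+ 
+ As an illustration of the first tip, a per-source connection limit can
+ be applied on a proxy with the iptables connlimit module. This is only a
+ sketch; the port and threshold would need tuning for the actual
+ incident:
+ 
+ ....
+ # Reject clients holding more than 20 concurrent connections to HTTPS
+ iptables -A INPUT -p tcp --syn --dport 443 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
+ ....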

+ 

+ == CHECKLISTS:

+ 

+ === Beta:

+ 

+ * Announce infrastructure freeze 3 weeks before Beta

+ * Change /topic in #fedora-admin

+ * mail the infrastructure list a reminder.

+ * File all tickets

+ * new website

+ * check mirror permissions, mirrormanager, check mirror sizes, release

+ day ticket.

+ 

+ After release is a "go":

+ 

+ * Make sure torrents are set up and ready to go.

+ * fedora-web needs a branch for fN-beta. In it:

+ ** Beta used on get-prerelease

+ ** get-prerelease doesn't direct to release

+ ** verify is updated with Beta info

+ ** releases.txt gets a branched entry for pre-upgrade

+ * bfo gets updated to have a Beta entry.

+ 

+ After release:

+ 

+ * Update /topic in #fedora-admin

+ * post to infrastructure list that freeze is over.

+ 

+ === Final:

+ 

+ * Announce infrastructure freeze 2 weeks before Final

+ * Change /topic in #fedora-admin

+ * mail the infrastructure list a reminder.

+ * File all tickets

+ * new website, check mirror permissions, mirrormanager, check mirror
+ sizes, release day ticket.

+ 

+ After release is a "go":

+ 

+ * Make sure torrents are set up and ready to go.

+ * fedora-web needs a branch for fN. In it:

+ ** get-prerelease does direct to release

+ ** verify is updated with Final info

+ * bfo gets updated to have a Final entry.

+ * update wiki version numbers and names.

+ 

+ After release:

+ 

+ * Update /topic in #fedora-admin

+ * post to infrastructure list that freeze is over.

+ * Move MirrorManager repository tags from the `development/$version/`
+ Directory objects to the `releases/$version/` Directory objects. This is
+ done using the `move-devel-to-release --version=$version` command on
+ _bapp02_. These days this is usually done a week or two after release.

@@ -0,0 +1,304 @@ 

+ = Websites Release SOP

+ 

+ == Contents

+ 

+ * <<_preparing_the_website_for_a_new_release_cycle>>

+ ** <<_obsolete_gpg_key>>

+ ** <<_update_gpg_key>>

+ * <<_update_website>>

+ ** <<_for_alpha>>

+ ** <<_for_beta>>

+ ** <<_for_ga>>

+ * <<_fire_in_the_hole>>

+ * <<_tips>>

+ ** <<_merging_branches>>

+ 

+ 

+ == Preparing the website for a new release cycle

+ 

+ === Obsolete GPG key

+ 

+ One month after a Fedora release, release number 'FXX-2' reaches EOL
+ (End of Life); e.g. one month after the F21 release, F19 will be EOL. At
+ this point we should drop the GPG key from the list in verify/ and move
+ the keys to the obsolete keys page in `keys/obsolete.html`.

+ 

+ === Update GPG key

+ 

+ After another couple of weeks and as the next release approaches, watch

+ the _fedora-release_ package for a new key to be added. Use the

+ `update-gpg-keys` script in the https://pagure.io/fedora-web/websites/[fedora-web git repository]

+ to add it to `static/`. Manually add it to `/keys` and `/verify` in all websites where we

+ use these keys:

+ 

+ * https://arm.fedoraproject.org/

+ * https://getfedora.org/

+ * https://labs.fedoraproject.org/

+ * https://spins.fedoraproject.org/

+ 

+ ==== Steps

+ 

+ [loweralpha]

+ . Get a copy of the new key(s) from the fedora-release repo; you will
+ find FXX-primary and FXX-secondary keys. Save them in `./tools` to make
+ the update easier.

+ +

+ https://pagure.io/fedora-repos

+ . Start by editing `./tools/update-gpg-keys` and adding the key-ids of any

+ obsolete keys to the obsolete_keys list.

+ . Then run that script to add the new key(s) to the `fedora.gpg` block:

+ +

+ ....

+ fedora-web git:(master) cd tools/

+ tools git:(master) ./update-gpg-keys RPM-GPG-KEY-fedora-23-primary

+ tools git:(master) ./update-gpg-keys RPM-GPG-KEY-fedora-23-secondary

+ ....

+ +

+ This will add the key(s) to the keyblock in static/fedora.gpg and create

+ a text file for the key in `static/$KEYID.txt` as well. Verify that these

+ files have been created properly and contain all the keys that they

+ should.

+ +

+ * Handy checks: `gpg static/fedora.gpg` or `gpg static/$KEYID.txt`

+ +

+ * Adding _--with-fingerprint_ option will add the fingerprint to the

+ output

+ +

+ The output of `fedora.gpg` should contain only the actual keys, not the

+ obsolete keys. The single text files should contain the correct

+ information for the uploaded key.

+ +

+ . Next, add new key(s) to the list in `data/verify.html` and move the new
+ key information to the keys page in `data/content/keys/index.html`. A

+ script to aid in generating the HTML code for new keys is in

+ `./tools/make-gpg-key-html`. It will print HTML to stdout for each

+ RPM-GPG-KEY-* file given as arguments. This is suitable for copy/paste

+ (or directly importing if your editor supports this). Check the copied

+ HTML code and select whether the key info is for a primary or secondary key

+ (output says 'Primary or Secondary').

+ +

+ `tools git:(master) ./make-gpg-key-html RPM-GPG-KEY-fedora-23-primary`

+ +

+ Build the website with 'make en test' and carefully verify that the data

+ is correct. Please double check all keys in

+ http://localhost:5000/en/keys and http://localhost:5000/en/verify.

+ +

+ [NOTE]

+ ====

+ The tool will give you an outdated output, adapt it to the new

+ websites and bootstrap layout!

+ ====

+ 

+ == Update website

+ 

+ 

+ === For Alpha

+ 

+ ____

+ [loweralpha]

+ . Create the fXX-alpha branch from master in fedora-web

+ +

+ `git:(master) git push origin master:refs/heads/f22-alpha`

+ +

+ and check out the new branch:
+ +
+ `fedora-web git:(master) git checkout -t -b f22-alpha origin/f22-alpha`

+ . Update the global variables: change curr_state to Alpha for all arches.

+ . Add the Alpha banner: upload the FXX-Alpha banner to

+ `static/images/banners/f22alpha.png` which should appear in every

+ `$\{PRODUCT}/download/index.html` page. Make sure the banner is shown in

+ all sidebars, also in labs, spins, and arm.

+ . Check all Download links and paths in

+ `$\{PRODUCT}/prerelease/index.html`. You can find all paths in _bapp01_

+ (`sudo su - mirrormanager` first) or you can look at the download page

+ http://dl.fedoraproject.org/pub/alt/stage

+ . Add CHECKSUM files to `static/checksums` and verify that the paths are

+ correct. The files should be in _sundries01_ and you can query them with:

+ +

+ `$ find /pub/fedora/linux/releases/test/17-Alpha/ -type f -name CHECKSUM -exec cp '\{}' . \;`

+ +

+ Remember to add the right checksums to the right websites (same path).

+ . Add EC2 AMI IDs for Alpha. All IDs are now in the `globalvar.py` file.
+ We get all data from there, even the redirect path to track the AMI IDs.

+ We now also have a script which is useful to get all the AMI IDs

+ uploaded with fedimg. Execute it to get the latest uploads, but don't

+ run the script too early, as new builds are added constantly.

+ +

+ `fedora-web git:(fXX-alpha) python ~/fedora-web/tools/get_ami.py`

+ . Add CHECKSUM files also to https://spins.fedoraproject.org in

+ `static/checksums`. Verify the paths are correct in

+ `data/content/verify.html`. (see point e) to query them on _sundries01_).

+ Same for https://labs.fedoraproject.org/ and https://arm.fedoraproject.org.

+ . Verify all paths and links on https://spins.fedoraproject.org,

+ https://labs.fedoraproject.org/ and https://arm.fedoraproject.org.

+ . Update Alpha Image sizes and pre_cloud_composedate in

+ `./build.d/globalvar.py`. Verify they are right in Cloud images and Docker

+ image.

+ . Update the new POT files and push them to Zanata (ask a maintainer to

+ do so) every time you change text strings.

+ . Add this build to https://stg.fedoraproject.org (ansible `syncStatic.sh.stg`) to

+ test the pages online.

+ . Release Date:

+ * Merge the fXX-alpha branch to master and correct conflicts manually

+ * Remove the redirect of prerelease pages in ansible, edit:

+ ** `ansible/playbooks/include/proxies-redirects.yml`

+ * ask a sysadmin-main to run playbook

+ * When ready and about 90 minutes before Release Time push to master

+ ** Tag the commit as new release and push it too:

+ +

+ ....
+ $ git tag -a FXX-Alpha -m 'Releasing Fedora XX Alpha'
+ $ git push --tags
+ ....

+ * If needed follow "Fire in the hole" below.

+ ____

+ 

+ 

+ === For Beta

+ 

+ ____

+ [loweralpha]

+ . Create the fXX-beta branch from master

+ +

+ `fedora-web git:(master) git push origin master:refs/heads/f22-beta`

+ +

+ and check out the new branch:

+ +

+ `fedora-web git:(master) git checkout -t -b f22-beta origin/f22-beta`

+ . Update the global variables: change curr_state to Beta for all arches.

+ . Add the Beta banner: upload the FXX-Beta banner to

+ `static/images/banners/f22beta.png` which should appear in every

+ `$\{PRODUCT}/download/index.html` page. Make sure the banner is shown in

+ all sidebars, also in https://spins.fedoraproject.org,

+ https://labs.fedoraproject.org/ and https://arm.fedoraproject.org.

+ . Check all Download links and paths in

+ `$\{PRODUCT}/prerelease/index.html`. You can find all paths in _bapp01_
+ (`sudo su - mirrormanager` first) or you can look at the download page

+ https://dl.fedoraproject.org/pub/alt/stage

+ . Add CHECKSUM files to static/checksums and verify that the paths are

+ correct. The files should be in sundries and you can query them with:

+ +

+ `$ find /pub/fedora/linux/releases/test/17-Beta/ -type f -name CHECKSUM -exec cp '\{}' . \;`

+ +

+ Remember to add the right checksums to the right websites (same path).

+ . Add EC2 AMI IDs for Beta. All IDs are now in the `globalvar.py` file. We
+ get all data from there, even the redirect path to track the AMI IDs. We

+ now also have a script which is useful to get all the AMI IDs uploaded

+ with fedimg. Execute it to get the latest uploads, but don't run the

+ script too early, as new builds are added constantly.

+ +

+ `fedora-web git:(fXX-beta) python ~/fedora-web/tools/get_ami.py`

+ . Add CHECKSUM files also to http://spins.fedoraproject.org in

+ `static/checksums`. Verify the paths are correct in

+ `data/content/verify.html`. (see point e) to query them on _sundries01_).

+ Same for https://labs.fedoraproject.org/ and https://arm.fedoraproject.org.

+ . Remove `static/checksums/Fedora-XX-Alpha*` in all websites.

+ . Verify all paths and links on https://spins.fedoraproject.org,
+ https://labs.fedoraproject.org/ and https://arm.fedoraproject.org.

+ . Update Beta Image sizes and pre_cloud_composedate in

+ `./build.d/globalvar.py`. Verify they are right in Cloud images and Docker

+ image.

+ . Update the new POT files and push them to Zanata (ask a maintainer to

+ do so) every time you change text strings.

+ . Add this build to https://stg.fedoraproject.org (ansible `syncStatic.sh.stg`) to

+ test the pages online.

+ . Release Date:

+ * Merge the fXX-beta branch to master and correct conflicts manually

+ * When ready and about 90 minutes before Release Time push to master

+ * Tag the commit as new release and push it too:

+ +

+ ....
+ $ git tag -a FXX-Beta -m 'Releasing Fedora XX Beta'
+ $ git push --tags
+ ....

+ * If needed follow "Fire in the hole" below.

+ ____

+ 

+ === For GA

+ 

+ ____

+ [loweralpha]

+ . Create the fXX branch from master

+ +

+ `fedora-web git:(master) git push origin master:refs/heads/f22`

+ +

+ and check out the new branch:

+ +

+ `fedora-web git:(master) git checkout -t -b f22 origin/f22`

+ . Update the global variables: change curr_state for all arches.

+ . Check all Download links and paths in `$\{PRODUCT}/download/index.html`

+ You can find all paths in _bapp01_ (`sudo su - mirrormanager` first) or you
+ can look at the download page http://dl.fedoraproject.org/pub/alt/stage

+ . Add CHECKSUM files to `static/checksums` and verify that the paths are

+ correct. The files should be in _sundries01_ and you can query them with:

+ +

+ `$ find /pub/fedora/linux/releases/17/ -type f -name CHECKSUM -exec cp '\{}' . \;`

+ +

+ Remember to add the right checksums to the right websites (same path).

+ . At some point freeze translations. Add an empty PO_FREEZE file to

+ every website's directory you want to freeze.

+ . Add EC2 AMI IDs for GA. All IDs are now in the `globalvar.py` file. We
+ get all data from there, even the redirect path to track the AMI IDs. We

+ now also have a script which is useful to get all the AMI IDs uploaded

+ with fedimg. Execute it to get the latest uploads, but don't run the

+ script too early, as new builds are added constantly.

+ +

+ `fedora-web git:(fXX) python ~/fedora-web/tools/get_ami.py`

+ . Add CHECKSUM files also to http://spins.fedoraproject.org in

+ `static/checksums`. Verify the paths are correct in

+ `data/content/verify.html`. (see point e) to query them on _sundries01_).

+ Same for https://labs.fedoraproject.org/ and https://arm.fedoraproject.org.

+ . Remove `static/checksums/Fedora-XX-Beta*` in all websites.

+ . Verify all paths and links on https://spins.fedoraproject.org,

+ https://labs.fedoraproject.org/ and https://arm.fedoraproject.org.

+ . Update GA Image sizes and cloud_composedate in `./build.d/globalvar.py`.

+ Verify they are right in Cloud images and Docker image.

+ . Update `static/js/checksum.js` and check if the paths and checksum still

+ match.

+ . Update the new POT files and push them to Zanata (ask a maintainer to

+ do so) every time you change text strings.

+ . Add this build to https://stg.fedoraproject.org (ansible `syncStatic.sh.stg`) to

+ test the pages online.

+ . Release Date:

+ * Merge the fXX branch to master and correct conflicts manually

+ * Add the redirect of prerelease pages in ansible, edit:

+ ** `ansible/playbooks/include/proxies-redirects.yml`

+ ** ask a sysadmin-main to run playbook

+ * Unfreeze translations by deleting the PO_FREEZE files

+ * When ready and about 90 minutes before Release Time push to master

+ * Update the short links for the Cloud Images for 'Fedora XX', 'Fedora

+ XX-1' and 'Latest'

+ * Tag the commit as new release and push it too:

+ +

+ ....

+ $ git tag -a FXX -m 'Releasing Fedora XX'
+ $ git push --tags

+ ....

+ +

+ * If needed follow "Fire in the hole" below.

+ ____

+ 

+ == Fire in the hole

+ 

+ We now use ansible for everything, and normally use a regular build to

+ make the websites live. If something is not happening as expected, you

+ should get in contact with a sysadmin-main to run the ansible playbook

+ again.

+ 

+ All our stuff, such as the `SyncStatic.sh` and `SyncTranslation.sh`
+ scripts, is now also in ansible!

+ 

+ Staging server _app02_ and production server _bapp01_ do not exist anymore;
+ our staging websites are now on _sundries01.stg_ and the production on

+ _sundries01_. Change your scripts accordingly and as sysadmin-web you

+ should have access to those servers as before.

+ 

+ == Tips

+ 

+ === Merging branches

+ 

+ Suggested by Ricky

+ 

+ This can be useful if you're _sure_ all new changes
+ on the devel branch should go into the master branch. Conflicts will be
+ resolved by accepting only the changes from the devel branch. If
+ you're not 100% sure, do a normal merge and fix conflicts manually!

+ 

+ ....

+ $ git merge f22-beta

+ $ git checkout --theirs [list of conflicting po files]
+ $ git add [list of conflicting po files]
+ $ git commit

+ ....

@@ -0,0 +1,198 @@ 

+ = FedMsg Notifications (FMN) SOP

+ 

+ Route individualized notifications to Fedora contributors over email and
+ IRC.

+ 

+ == Contact Information

+ 

+ === Owner

+ 

+ * Messaging SIG

+ * Fedora Infrastructure Team

+ 

+ === Contact

+ 

+ ____

+ * #fedora-apps for FMN development

+ * #fedora-fedmsg for an IRC feed of all fedmsgs

+ * #fedora-admin for problems with the deployment of FMN

+ * #fedora-noc for outage/crisis alerts

+ ____

+ 

+ === Servers

+ 

+ Production servers:

+ 

+ ____

+ * notifs-backend01.iad2.fedoraproject.org (RHEL 7)

+ * notifs-web01.iad2.fedoraproject.org (RHEL 7)

+ * notifs-web02.iad2.fedoraproject.org (RHEL 7)

+ ____

+ 

+ Staging servers:

+ 

+ ____

+ * notifs-backend01.stg.iad2.fedoraproject.org (RHEL 7)

+ * notifs-web01.stg.iad2.fedoraproject.org (RHEL 7)

+ * notifs-web02.stg.iad2.fedoraproject.org (RHEL 7)

+ ____

+ 

+ === Purpose

+ 

+ Route notifications to users

+ 

+ == Description

+ 

+ fmn is a pair of systems intended to route fedmsg notifications to

+ Fedora contributors and users.

+ 

+ There is a web interface running on notifs-web01 and notifs-web02 that

+ allows users to log in and configure their preferences to select this or

+ that type of message.

+ 

+ There is a backend running on notifs-backend01 where most of the work is

+ done.

+ 

+ The backend process is a 'fedmsg-hub' daemon, controlled by systemd.

+ 

+ == Hosts

+ 

+ === notifs-backend

+ 

+ This host runs:

+ 

+ * `fedmsg-hub.service`

+ * One or more `fmn-worker@.service`. Currently notifs-backend01 runs

+ `fmn-worker@\{1-4}.service`

+ * `fmn-backend@1.service`

+ * `fmn-digests@1.service`

+ * `rabbitmq-server.service`, an AMQP broker used to communicate between

+ the services.

+ * `redis.service`, used for caching.

+ 

+ This host relies on a PostgreSQL database running on

+ db01.phx2.fedoraproject.org.

+ 

+ === notifs-web

+ 

+ This host runs:

+ 

+ * A Python WSGI application via Apache httpd that serves the

+ https://apps.fedoraproject.org/notifications[FMN web user interface].

+ 

+ This host relies on a PostgreSQL database running on

+ db01.iad2.fedoraproject.org.

+ 

+ == Deployment

+ 

+ Once upstream releases a new version of

+ https://github.com/fedora-infra/fmn[fmn],

+ https://github.com/fedora-infra/fmn.web[fmn-web], or

+ https://github.com/fedora-infra/fmn.sse[fmn-sse] by creating a Git tag, a
+ new version can be built and deployed into Fedora infrastructure.

+ 

+ === Building

+ 

+ FMN is packaged in Fedora and EPEL as

+ https://src.fedoraproject.org/rpms/python-fmn/[python-fmn]

+ (the backend),

+ https://src.fedoraproject.org/rpms/python-fmn-web/[python-fmn-web]

+ (the frontend), and the optional

+ https://src.fedoraproject.org/rpms/python-fmn-sse/[python-fmn-sse].

+ 

+ Since all the hosts run RHEL 7, you need to build all these packages for

+ EPEL 7.
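+ 
+ A typical build flow, assuming you have commit access and the `epel7`
+ branch already exists in dist-git, looks roughly like:
+ 
+ ....
+ $ fedpkg clone python-fmn && cd python-fmn
+ $ fedpkg switch-branch epel7
+ $ fedpkg build
+ ....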

+ 

+ === Configuration

+ 

+ If there are any configuration updates required by the new version of

+ FMN, update the `notifs` Ansible roles on

+ batcave01.iad2.fedoraproject.org. Remember to use:

+ 

+ ....

+ {% if env == 'staging' %}

+     <new config here>

+ {% else %}

+     <retain old config>

+ {% endif %}

+ ....

+ 

+ This keeps production on the old configuration while the update is
+ rolled out to staging. You can apply configuration updates to staging by
+ running:

+ 

+ ....

+ $ sudo rbac-playbook -l staging groups/notifs-backend.yml

+ $ sudo rbac-playbook -l staging groups/notifs-web.yml

+ ....

+ 

+ Simply drop the `-l staging` to update the production configuration.

+ 

+ === Upgrading

+ 

+ To upgrade the

+ https://src.fedoraproject.org/rpms/python-fmn/[python-fmn],

+ https://src.fedoraproject.org/rpms/python-fmn-web/[python-fmn-web],

+ and

+ https://src.fedoraproject.org/rpms/python-fmn-sse/[python-fmn-sse]

+ packages, apply configuration changes, and restart the services, you

+ should use the manual upgrade playbook:

+ 

+ ....

+ $ sudo rbac-playbook -l staging manual/upgrade/fmn.yml

+ ....

+ 

+ Again, drop the `-l staging` flag to upgrade production.

+ 

+ Be aware that the FMN services take a significant amount of time to

+ start up as they pre-heat their caches before starting work.

+ 

+ == Service Administration

+ 

+ Disable an account (on notifs-backend01):

+ 

+ ....

+ $ sudo -u fedmsg /usr/local/bin/fmn-disable-account USERNAME

+ ....

+ 

+ Restart:

+ 

+ ....

+ $ sudo systemctl restart fedmsg-hub

+ ....

+ 

+ Watch logs:

+ 

+ ....

+ $ sudo journalctl -u fedmsg-hub -f

+ ....

+ 

+ Configuration:

+ 

+ ....

+ $ ls /etc/fedmsg.d/

+ $ sudo fedmsg-config | less

+ ....

+ 

+ Upgrade (from batcave):

+ 

+ ....

+ $ sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/manual/upgrade/fmn.yml

+ ....

+ 

+ == Mailing Lists

+ 

+ We use FMN as a way to forward certain kinds of messages to mailing

+ lists so people can read them the good old fashioned way that they like

+ to. To accomplish this, we create 'bot' FAS accounts with their own FMN

+ profiles and we set their email addresses to the lists in question.

+ 

+ If you need to change the way some set of messages are forwarded, you

+ can do it from the FMN web interface (if you are an FMN admin as defined

+ in the config file in roles/notifs/frontend/). You can navigate to

+ https://apps.fedoraproject.org/notifications/USERNAME.id.fedoraproject.org

+ to do this.

+ 

+ If the account exists as a FAS user already (for instance, the

+ `virtmaint` user) but it does not yet exist in FMN, you can add it to

+ the FMN database by logging in to notifs-backend01 and running

+ `fmn-create-user --email DESTINATION@EMAIL.COM --create-defaults FAS_USERNAME`.

@@ -0,0 +1,33 @@ 

+ = Fedora gather easyfix SOP

+ 

+ Fedora-gather-easyfix, as the name says, gathers tickets marked as
+ easyfix from multiple sources (currently pagure, github and
+ fedorahosted), providing a single place for newcomers to find small
+ tasks to work on.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Location::

+   http://fedoraproject.org/easyfix/

+ Servers::

+   sundries01, sundries02, sundries01.stg

+ Purpose::

+   Gather easyfix tickets from multiple sources.

+ 

+ Upstream sources are hosted on github at:

+ https://github.com/fedora-infra/fedora-gather-easyfix/

+ 

+ The files are then mirrored to our ansible repo, under the

+ https://pagure.io/fedora-infra/ansible/blob/main/f/roles/easyfix/gather[easyfix/gather] role.

+ 

+ The project is a simple script, `gather_easyfix.py`, that gathers
+ information from the projects listed on the
+ https://fedoraproject.org/wiki/Easyfix[Fedora wiki] and outputs a
+ single html file. This html file is then enhanced via the css and
+ javascript files present in the sources.

+ 

+ The generated html file together with the css and js files are then

+ synced to the proxies for public consumption :)

@@ -0,0 +1,110 @@ 

+ = GDPR Delete SOP

+ 

+ This SOP covers how Fedora Infrastructure handles General Data

+ Protection Regulation (GDPR) Delete Requests. It contains information

+ about how system administrators will use tooling to respond to Delete

+ requests, as well as how application developers can integrate their

+ applications with that tooling.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Persons::

+   nirik

+ Servers::

+   batcave01.iad2.fedoraproject.org and various application servers, which

+   will run scripts to delete data.

+ Purpose::

+   Respond to Delete requests.

+ 

+ == Responding to a Deletion Request

+ 

+ This section covers how a system administrator will use our

+ `gdpr-delete.yml` playbook to respond to a Delete request.

+ 

+ When processing a Delete request, perform the following steps:

+ 

+ [arabic, start=0]

+ . Verify that the requester is who they say they are. If the request

+ came in via email, ask them to file an issue at
+ https://pagure.io/fedora-pdr/new_issue and use the following in your
+ email reply to them:

+ +

+ `In order to verify your identity, please file a new issue at https://pagure.io/fedora-pdr/new_issue using the appropriate issue type. Please note this form requires you to sign in to your account to verify your identity.`

+ +

+ If the request has come via Red Hat internal channels as an explicit

+ request to delete, mark the ticket with the tag `rh`. This tag will help

+ delineate requests for any future reporting needs.

+ +

+ If they do not have a FAS account, indicate to them that there is no

+ data to be deleted. Use this response:

+ +

+ `Your request for deletion has been reviewed. Since there is no related account in the Fedora Account System, the Fedora infrastructure does not store data relevant for this deletion request. Note that some public content related to Fedora you may have previously submitted without an account, such as to public mailing lists, is not deleted since accurate maintenance of this data serves Fedora's legitimate business interests, the public interest, and the interest of the open source community.`

+ . Identify the user's FAS account name. The delete playbook will use this

+ FAS account to delete the required data. Update the `fedora-pdr` issue

+ saying the request has been received. There is a 'quick response' in the

+ pagure issue tracker to note this.

+ . Login to FAS and clear the `Telephone number` entry, set Country to

+ `Other`, clear `Latitude` and `Longitude` and `IRC Nick` and

+ `GPG Key ID` and set `Time Zone` to UTC and `Locale` to `en` and set the

+ user status to `disabled`. If the user is not in cla_done plus one

+ group, you are done. Update the ticket and close it. This step will be

+ folded into the following one once we implement it.

+ . If the user is in cla_done + one group, they may have additional data:

+ Run the gdpr delete playbook on `batcave01`. You will need to define one

+ Ansible variable for the playbook. `gdpr_delete_fas_user` will be the FAS
+ username of the user.

+ +

+ ....

+ $ sudo ansible-playbook playbooks/manual/gdpr/delete.yml -e gdpr_delete_fas_user=bowlofeggs

+ ....

+ +

+ After the script completes, update the ticket that the request is

+ completed and close it. There is a 'quick response' in the pagure issue

+ tracker to note this.

+ 

+ == Integrating an application with our delete playbook

+ 

+ This section covers how an infrastructure application can be configured

+ to integrate with our `delete.yml` playbook. To integrate, you must

+ create a script and Ansible variables so that your application is

+ compatible with this playbook.

+ 

+ === Script

+ 

+ You need to create a script and have your project's Ansible role install

+ that script somewhere (most likely on a host from your project - for

+ example fedocal's is going on `fedocal01`.) It's not a bad idea to put

+ your script into your upstream project. This script should accept one

+ environment variable as input: `GDPR_DELETE_USERNAME`. This will be a

+ FAS username.
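+ 
+ A minimal sketch of such a script, assuming a hypothetical application
+ that keeps its user data in a PostgreSQL database (the database, table
+ and column names are made up for illustration):
+ 
+ ....
+ #!/bin/bash
+ # Hypothetical delete script; the real one is application-specific.
+ set -euo pipefail
+ 
+ # Fail early if the playbook did not pass the username.
+ : "${GDPR_DELETE_USERNAME:?GDPR_DELETE_USERNAME must be set}"
+ 
+ # Remove the user's personal data from the application's database.
+ psql myapp -c "DELETE FROM user_profiles WHERE username = '${GDPR_DELETE_USERNAME}'"
+ ....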

+ 

+ Some scripts may need secrets embedded in them - if you must do this be

+ careful to install the script with `0700` permissions, ensuring that

+ only `gdpr_delete_script_user` (defined below) can run them. Bodhi

+ worked around this concern by having the script run as `apache` so it

+ could read Bodhi's server config file to get the secrets, so it does not

+ have secrets in its script.

+ 

+ === Variables

+ 

+ In addition to writing a script, you need to define some Ansible

+ variables for the host that will run your script:

+ 

+ [cols=",,",options="header",]

+ |===

+ |Variable |Description |Example

+ |`gdpr_delete_script` | The full path to the script. | `/usr/bin/fedocal-delete`

+ |`gdpr_delete_script_user` | The user the script should be run as | `apache`

+ |===

+ 

+ You also need to add the host that the script should run on to the

+ `[gdpr_delete]` group in https://pagure.io/fedora-infra/ansible/blob/main/f/inventory/inventory[inventory/inventory]:

+ 

+ ....

+ [gdpr_delete]

+ fedocal01.iad2.fedoraproject.org

+ ....

@@ -0,0 +1,153 @@ 

+ = GDPR SAR SOP

+ 

+ This SOP covers how Fedora Infrastructure handles General Data

+ Protection Regulation (GDPR) Subject Access Requests (SAR). It contains

+ information about how system administrators will use tooling to respond

+ to SARs, as well as how application developers can integrate their

+ applications with that tooling.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Persons::

+   bowlofeggs

+ Servers::

+   batcave01.iad2.fedoraproject.org and various application servers, which

+   will run scripts to collect SAR data.

+ Purpose::

+   Respond to SARs.

+ 

+ == Responding to a SAR

+ 

+ This section covers how a system administrator will use our `sar.yml`

+ playbook to respond to a SAR.

+ 

+ When processing a SAR, perform the following steps:

+ 

+ [arabic]

+ . Verify that the requester is who they say they are. If the request

+ came in via email and the user has a FAS account, ask them to file an
+ issue at https://pagure.io/fedora-pdr/new_issue and use the following in
+ your email reply to them:

+ +

+ `In order to verify your identity, please file a new issue at

+ https://pagure.io/fedora-pdr/new_issue using the appropriate issue type.

+ Please note this form requires you to sign in to your account to verify your identity.`

+ +

+ If the request has come via Red Hat internal channels as an explicit

+ request to delete, mark the ticket with the tag `rh`. This tag will help

+ delineate requests for any future reporting needs.

+ . Identify an e-mail address for the requester, and if applicable, their

+ FAS account name. The SAR playbook will use both of these since some

+ applications have data associated with FAS accounts and others have data

+ associated with e-mail addresses. Update the fedora-pdr issue saying the

+ request has been received. There is a 'quick response' in the pagure

+ issue tracker to note this.

+ . Run the SAR playbook on `batcave01`. You will need to define three

+ Ansible variables for the playbook. `sar_fas_user` will be the FAS

+ username, if applicable; this may be omitted if the requester does not

+ have a FAS account. `sar_email` will be the e-mail address associated

+ with the user. `sar_tar_output_path` will be the path you want the

+ playbook to write the resulting tarball to, and should have a `.tar.gz`

+ extension. For example, if `bowlofeggs` submitted a SAR and his e-mail

+ address is `bowlof@eggs.biz`, you might run the playbook like this:

+ +

+ ....

+ $ sudo ansible-playbook playbooks/manual/gdpr/sar.yml -e sar_fas_user=bowlofeggs \

+     -e sar_email=bowlof@eggs.biz -e sar_tar_output_path=/home/bowlofeggs/bowlofeggs.tar.gz

+ ....

+ . Generate a random sha512 with something like:

+ `openssl rand 512 | sha512sum` and then move the output file to

+ _/srv/web/infra/pdr/the-sha512.tar.gz_

+ . Update the ticket to fixed / processed on pdr requests to have a link

+ to https://infrastructure.fedoraproject.org/infra/pdr/the-sha512.tar.gz

+ and tell them it will be available for one week.

+ 

+ == Integrating an application with our SAR playbook

+ 

+ This section covers how an infrastructure application can be configured

+ to integrate with our `sar.yml` playbook. To integrate, you must create

+ a script and Ansible variables so that your application is compatible

+ with this playbook.

+ 

+ === Script

+ 

+ You need to create a script and have your project's Ansible role install

+ that script somewhere (most likely on a host from your project - for

+ example Bodhi's is going on `bodhi-backend02`.) It's not a bad idea to

+ put your script into your upstream project - there are plans for

+ upstream Bodhi to ship `bodhi-sar`, for example. This script should

+ accept two environment variables as input: `SAR_USERNAME` and

+ `SAR_EMAIL`. Not all applications will use both, so do what makes sense

+ for your application. The first will be a FAS username and the second

+ will be an e-mail address. Your script should gather the required

+ information related to those identifiers and print it in a machine

+ readable format to stdout. Bodhi, for example, prints information to

+ stdout in `JSON`.
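+ 
+ A minimal sketch of the expected shape, with entirely hypothetical data
+ gathering (a real script must pull the user's data out of the
+ application itself):
+ 
+ ....
+ #!/bin/bash
+ # Hypothetical SAR script; prints a machine-readable JSON blob to stdout.
+ set -euo pipefail
+ 
+ # Either variable may be empty depending on the application.
+ printf '{"username": "%s", "email": "%s", "records": []}\n' \
+     "${SAR_USERNAME:-}" "${SAR_EMAIL:-}"
+ ....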

+ 

+ Some scripts may need secrets embedded in them - if you must do this be

+ careful to install the script with `0700` permissions, ensuring that

+ only `sar_script_user` (defined below) can run them. Bodhi worked around

+ this concern by having the script run as `apache` so it could read

+ Bodhi's server config file to get the secrets, so it does not have

+ secrets in its script.

+ 

+ === Variables

+ 

+ In addition to writing a script, you need to define some Ansible

+ variables for the host that will run your script:

+ 

+ [cols=",,",options="header",]

+ |===

+ |Variable |Description |Example

+ |`sar_script` |The full path to the script. |`/usr/bin/bodhi-sar`

+ 

+ |`sar_script_user` |The user the script should be run as |`apache`

+ 

+ |`sar_output_file` |The name of the file to write into the output

+ tarball |`bodhi.json`

+ |===

+ 

+ You also need to add the host that the script should run on to the

+ `[sar]` group in `inventory/inventory`:

+ 

+ ....

+ [sar]

+ bodhi-backend02.iad2.fedoraproject.org

+ ....

+ 

+ === Variables for OpenShift apps

+ 

+ When you need to add an OpenShift app to the SAR playbook, you need to
+ add the following variables to the existing `sar_openshift` dictionary:

+ 

+ [cols=",,",options="header",]

+ |===

+ |Variable |Description |Example

+ |`sar_script` |The full path to the script. |`/usr/local/bin/sar.py`

+ 

+ |`sar_output_file` |The name of the file to write into the output

+ tarball |`anitya.json`

+ 

+ |`openshift_namespace` |The namespace in which the application is

+ running |`release-monitoring`

+ 

+ |`openshift_pod` |The pod name in which the script will be run

+ |`release-monitoring-web`

+ |===

+ 

+ The `sar_openshift` dictionary is located in

+ `inventory/group_vars/os_masters`:

+ 

+ ....

+ sar_openshift:

+     # Name of the app

+     release-monitoring:

+       sar_script: /usr/local/bin/sar.py

+       sar_output_file: anitya.json

+       openshift_namespace: release-monitoring

+       openshift_pod: release-monitoring-web

+ ....

@@ -0,0 +1,57 @@ 

+ = geoip-city-wsgi SOP

+ 

+ A simple web service that returns geoip information as a JSON-formatted
+ dictionary in utf-8. In particular, it's used by anaconda to get the
+ most probable territory code, based on the public IP of the caller.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_basic_function>>

+ * <<_ansible_roles>>

+ * <<_documentation_links>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-apps, #fedora-admin, #fedora-noc

+ Location::

+   https://geoip.fedoraproject.org

+ Servers::

+   sundries*, sundries*-stg

+ Purpose::

+   A simple web service that returns geoip information as a JSON-formatted
+   dictionary in utf-8. In particular, it's used by https://fedoraproject.org/wiki/Anaconda[anaconda]

+   to get the most probable territory code, based on the public IP of the caller.

+ 

+ == Basic Function

+ 

+ * Users go to https://geoip.fedoraproject.org/city

+ * The website is exposed via

+ `/etc/httpd/conf.d/geoip-city-wsgi-proxy.conf`.

+ * It returns a string of geoip information as a JSON-formatted dict in
+ utf8.
+ * It also currently accepts one override: `?ip=xxx.xxx.xxx.xxx`, e.g.
+ https://geoip.fedoraproject.org/city?ip=18.0.0.1, which then uses the
+ passed IP address instead of the determined IP address of the client
+ (see the example after this list).
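+ 
+ A quick check from the command line (the exact keys in the returned JSON
+ are easiest to see by simply inspecting the output):
+ 
+ ....
+ $ curl -s "https://geoip.fedoraproject.org/city"
+ $ curl -s "https://geoip.fedoraproject.org/city?ip=18.0.0.1"
+ ....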

+ 

+ == Ansible Roles

+ 

+ The geoip-city-wsgi role:

+ https://pagure.io/fedora-infra/ansible/blob/main/f/roles/geoip-city-wsgi

+ 

+ is present in sundries playbook:

+ https://pagure.io/fedora-infra/ansible/blob/main/f/playbooks/groups/sundries.yml

+ 

+ the proxy tasks are present in:

+ https://pagure.io/fedora-infra/ansible/blob/main/f/playbooks/include/proxies-reverseproxy.yml

+ 

+ == Documentation Links

+ 

+ App: https://geoip.fedoraproject.org

+ 

+ Source: https://github.com/fedora-infra/geoip-city-wsgi

+ 

+ Bugs: https://github.com/fedora-infra/geoip-city-wsgi/issues

@@ -0,0 +1,70 @@ 

+ = Using github for Infra Projects

+ 

+ We're presently using github to host git repositories and issue tracking

+ for some infrastructure projects. Anything we need to know should be

+ recorded here.

+ 

+ == Setting up a new repo

+ 

+ Create projects inside of the fedora-infra group:

+ 

+ https://github.com/fedora-infra

+ 

+ That will allow us to more easily track what projects we have.

+ 

+ *TODO*: How do we create a new project and import it?

+ 

+ 

+ * After creating a new repo, click on the Settings tab to set up some

+ fancy things.

+ +

+ If using git-flow for your project:

+ +

+ ** Set the default branch from _master_ to _develop_. Having the default

+ branch be _develop_ is nice: new contributors will automatically start

+ committing there if they're not paying attention to what branch they're

+ on. You almost never want to commit directly to the master branch.

+ +

+ If there does not exist a _develop_ branch, you should create one by

+ branching off of _master_:

+ +

+ ....

+ $ git clone GIT_URL

+ $ git checkout -b develop

+ $ git push --all

+ ....

+ ** Set up an IRC hook for notifications. From the `Settings` tab click

+ on `Webhooks & Services`. Under the `Add Service` dropdown, find `IRC`

+ and click it. You might need to enter your password. In the form, you

+ probably want the following values:

+ *** Server, irc.freenode.net

+ *** Port, 6697

+ *** Room, #fedora-apps

+ *** Nick, <nothing>

+ *** Branch Regexes, <nothing>

+ *** Password, <nothing>

+ *** Ssl, <on>

+ *** Message Without Join, <on>

+ *** No Colors, <off>

+ *** Long Url, <off>

+ *** Notice, <on>

+ *** Active, <on>

+ 

+ 

+ == Add an EasyFix label

+ 

+ The EasyFix label is used to mark bugs that are potentially fixable by

+ new contributors getting used to our source code or relatively new to

+ python programming. GitHub doesn't provide this label automatically so

+ we have to add it. You can add the label from the issues page of the

+ repository or use this curl command to add it:

+ 

+ ....

+ curl -k -u '$GITHUB_USERNAME:$GITHUB_PASSWORD' https://api.github.com/repos/fedora-infra/python-fedora/labels -H "Content-Type: application/json" -d '{"name":"EasyFix","color":"3b6eb4"}'

+ ....

+ 

+ Please try to use the same color for consistency between Fedora

+ Infrastructure Projects. You can then add the github repo to the list

+ that _easyfix.fedoraproject.org_ scans for easyfix tickets here:

+ 

+ https://fedoraproject.org/wiki/Easyfix

@@ -0,0 +1,51 @@ 

+ = github2fedmsg SOP

+ 

+ Bridge github events onto our fedmsg bus.

+ 

+ App: https://apps.fedoraproject.org/github2fedmsg/

+ 

+ Source: https://github.com/fedora-infra/github2fedmsg/

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-apps, #fedora-admin, #fedora-noc

+ Servers::

+   github2fedmsg01

+ Purpose::

+   Bridge github events onto our fedmsg bus.

+ 

+ == Description

+ 

+ github2fedmsg is a small Python Pyramid app that bridges github events

+ onto our fedmsg bus by way of github's "webhooks" feature. It is what

+ allows us to have IRC notifications of github activity via fedmsg. It

+ has two phases of operation:

+ 

+ * Infrequently, a user will log in to github2fedmsg via Fedora OpenID.

+ They then push a button to also log in to github.com. They are then

+ logged in to github2fedmsg with _both_ their FAS account and their

+ github account.

+ +

+ They are then presented with a list of their github repositories. They

+ can toggle each one: "on" or "off". When they turn a repo on, our webapp

+ makes a request to github.com to install a "webhook" for that repo with

+ a callback URL to our app.

+ * When events happen to that repo on github.com, github looks up our

+ callback URL and makes an http POST request to us, informing us of the

+ event. Our github2fedmsg app receives that, validates it, and then

+ republishes the content to our fedmsg bus.

+ 

+ == What could go wrong?

+ 

+ * Restarting the app or rebooting the host shouldn't cause a problem. It

+ should come right back up.

+ * Our database could die. We have a db with a list of all the repos we

+ have turned on and off. We would want to restore that from backup.

+ * If github gets compromised, they might have to revoke all of their

+ application credentials. In that case, our app would fail to work. There

+ are _lots_ of private secrets set in our private repo that allow our app

+ to talk to github.com. There are inline comments there with instructions

+ about how to generate new keys and secrets.

@@ -0,0 +1,111 @@ 

+ = Greenwave SOP

+ 

+ == Contact Information

+ 

+ Owner::

+   Factory2 Team, Fedora QA Team, Infrastructure Team

+ Contact::

+   #fedora-qa, #fedora-admin

+ Persons::

+   gnaponie (giulia), mprahl, lucarval, ralph (threebean)

+ Public addresses::

+   * https://greenwave-web-greenwave.app.os.fedoraproject.org/api/v1.0/version

+   * https://greenwave-web-greenwave.app.os.fedoraproject.org/api/v1.0/policies

+   * https://greenwave-web-greenwave.app.os.fedoraproject.org/api/v1.0/decision

+ Servers::

+   * In OpenShift.

+ Purpose::

+   Provide gating decisions.

+ 

+ == Description

+ 

+ * See

+ http://fedoraproject.org/wiki/Infrastructure/Factory2/Focus/Greenwave[the

+ focus document] for background.

+ * See https://gating-greenwave.readthedocs.io/en/latest/[the upstream docs] for more

+ detailed info.

+ 

+ Greenwave's job is:

+ 

+ * answering yes/no questions (or making decisions)

+ * about artifacts (RPM packages, source tarballs, …)

+ * at certain gating points in our pipeline

+ * based on test results

+ * according to some policy

+ 

+ In particular, we'll be using Greenwave to provide yes/no gating

+ decisions to _Bodhi_ about rpms in each update. Greenwave will do this

+ by consulting resultsdb and waiverdb for individual test results and

+ then combining those results into an aggregate decision.

+ 

+ The _policies_ for how those results should be combined or ignored, are

+ defined in ansible in

+ https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps/greenwave/templates/configmap.yml[roles/openshift-apps/greenwave/templates/configmap.yml].

+ We expect to grow these over time to cover new use cases (rawhide compose gating, etc.).

+ 

+ == Observing Greenwave Behavior

+ 

+ Login to `os-master01.iad2.fedoraproject.org` as `root` (or,

+ authenticate remotely with openshift using

+ `oc login https://os.fedoraproject.org`), and run:

+ 

+ ....

+ $ oc project greenwave

+ $ oc status -v

+ $ oc logs -f dc/greenwave-web

+ ....

+ 

+ == Database

+ 

+ Greenwave currently has no database (and we'd like to keep it that way).

+ It relies on `resultsdb` and `waiverdb` for information.

+ 

+ == Upgrading

+ 

+ You can roll out configuration changes by changing the files in

+ https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps/greenwave[roles/openshift-apps/greenwave/] and running the

+ https://pagure.io/fedora-infra/ansible/blob/main/f/playbooks/openshift-apps/greenwave.yml[playbooks/openshift-apps/greenwave.yml] playbook.

+ 

+ To understand how the software is deployed, take a look at these two

+ files:

+ 

+ * https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps/greenwave/templates/imagestream.yml[roles/openshift-apps/greenwave/templates/imagestream.yml]

+ * https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps/greenwave/templates/buildconfig.yml[roles/openshift-apps/greenwave/templates/buildconfig.yml]

+ 

+ See that we build a fedora-infra specific image on top of an app image

+ published by upstream. The `latest` tag is automatically deployed to

+ staging. This should represent the latest commit to the `master` branch

+ of the upstream git repo that passed its unit and functional tests.

+ 

+ The `prod-fedora` tag is manually controlled. To upgrade prod to match

+ what is in stage, move the `prod-fedora` tag to point to the same image

+ as the `latest` tag. Our buildconfig is configured to poll that tag, so

+ a new os.fp.o build and deployment should be automatically created.
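+ 
+ Moving the tag can be done with `oc tag`. The imagestream name below is
+ an assumption; check the real name with `oc get is -n greenwave` first:
+ 
+ ....
+ $ oc tag greenwave/greenwave-web:latest greenwave/greenwave-web:prod-fedora
+ ....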

+ 

+ You can watch the build and deployment with `oc` commands.

+ 

+ You can poll this URL to see what version is live at the moment:

+ 

+ https://greenwave-web-greenwave.app.os.fedoraproject.org/api/v1.0/version

+ 

+ == Troubleshooting

+ 

+ In case of problems with greenwave messaging, check the logs of the

+ container dc/greenwave-fedmsg-consumers to see if there is something
+ wrong:

+ 

+ ....

+ $ oc logs -f dc/greenwave-fedmsg-consumers

+ ....

+ 

+ It is also possible to check if greenwave is actually publishing

+ messages by looking at

+ https://apps.fedoraproject.org/datagrepper/raw?category=greenwave&delta=127800&rows_per_page=1[this

+ link] and checking the time of the last message.

+ 

+ In case of problems with greenwave webapp, check the logs of the

+ container dc/greenwave-web:

+ 

+ ....

+ $ oc logs -f dc/greenwave-web

+ ....

@@ -0,0 +1,138 @@ 

+ = Guest Disk Resize SOP

+ 

+ Resize disks in our kvm guests

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_how_to_do_it>>

+ ** <<_kvmlibvirt_guests>>

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-admin, sysadmin-main

+ Location:::

+   PHX, Tummy, ibiblio, Telia, OSUOSL

+ Servers:::

+   All xen servers, kvm/libvirt servers.

+ Purpose:::

+   Resize guest disks

+ 

+ == How to do it

+ 

+ === KVM/libvirt Guests

+ 

+ [arabic]

+ . SSH to the kvm server and resize the guest's logical volume. If you

+   want to be extra careful, make a snapshot of the LV first:

+ +

+ ....

+ lvcreate -n [guest name]-snap -L 10G -s /dev/VolGroup00/[guest name] 

+ ....

+ +

+ Optional, but always good to be careful

+ 

+ . Shutdown the guest:

+ +

+ ....

+ sudo virsh shutdown [guest name]

+ ....

+ 

+ . Disable the guest's lv:

+ +

+ ....

+ lvchange -an /dev/VolGroup00/[guest name]

+ ....

+ 

+ . Resize the lv:

+ +

+ ....

+ lvresize -L [NEW TOTAL SIZE]G /dev/VolGroup00/[guest name]

+ 

+ or

+ 

+ lvresize -L +XG /dev/VolGroup00/[guest name]

+ (to add X GB to the disk)

+ ....

+ . Enable the lv:

+ +

+ ....

+ lvchange -ay /dev/VolGroup00/[guest name]

+ ....

+ 

+ . Bring the guest back up:

+ +

+ ....

+ sudo virsh start [guest name]

+ ....

+ . Log in to the guest:

+ +

+ ....

+ sudo virsh console [guest name]

+ You may wish to boot single user mode to avoid services coming up and going down again

+ ....

+ 

+ . On the guest, run:

+ +

+ ....

+ fdisk /dev/vda

+ ....

+ 

+ . Delete the LVM partition on the guest you want to add space to and

+ recreate it with the maximum size. Make sure to set its type to LV (8e):

+ +

+ ....

+ p to list partitions

+ d to delete selected partition

+ n to create new partition (default values should be ok)

+ t to change partition type (set to 8e)

+ w to write changes 

+ ....

+ 

+ . Run partprobe:

+ +

+ ....

+ partprobe

+ ....

+ 

+ . Check the size of the partition:

+ +

+ ....

+ fdisk -l /dev/vdaN

+ ....

+ +

+ If this still reflects the old size, then reboot the guest and verify

+ that its size changed correctly when it comes up again.

+ 

+ . Login to the guest again, and run:

+ +

+ ....

+ pvresize /dev/vdaN

+ ....

+ 

+ . A vgs should now show the new size. Use lvresize to resize the root

+ lv:

+ +

+ ....

+ lvresize -L [new root partition size]G /dev/GuestVolGroup00/root

+ 

+ (pvs will tell you how much space is available)

+ ....

+ 

+ . Finally, resize the root partition:

+ +

+ ....

+ resize2fs /dev/GuestVolGroup00/root

+ (If the root fs is ext4)

+ 

+ or

+ 

+ xfs_growfs /dev/GuestVolGroup00/root

+ (if the root fs is xfs)

+ ....

+ +

+ Verify that everything worked out, and delete the snapshot you made if
+ you made one, as sketched below.
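+ 
+ Removing the snapshot created in the first step would look like this
+ (substitute the actual volume group and guest name):
+ 
+ ....
+ lvremove /dev/VolGroup00/[guest name]-snap
+ ....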

@@ -0,0 +1,72 @@ 

+ = Guest Editing SOP

+ 

+ Various virsh commands

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_how_to_do_it>>

+ ** <<_add_cpu>>

+ ** <<_resize_memory>>

+ 

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-admin, sysadmin-main

+ Location:::

+   PHX, Tummy, ibiblio, Telia, OSUOSL

+ Servers:::

+   All xen servers, kvm/libvirt servers.

+ Purpose:::

+   Adjust guest CPU and memory allocations

+ 

+ == How to do it

+ 

+ === Add cpu

+ 

+ [arabic]

+ . SSH to the virthost server

+ . Calculate the number of CPUs the system needs

+ . `sudo virsh setvcpus  <guest> <num_of_cpus> --config` - ie:

+ +

+ ....

+ sudo virsh setvcpus bapp01 16 --config

+ ....

+ . Shutdown the virtual system

+ . Start the virtual system

+ +

+ [NOTE]

+ ====

+ Note that using _virsh reboot_ is insufficient. You have to

+ actually stop the domain and start it with `virsh destroy <guest>` and

+ `virsh start <guest>` for the change to take effect.

+ ====

+ . Login and check that cpu count matches

+ . *Remember to update the group_vars in ansible* to match the new value

+ you set, if appropriate.

+ 

+ === Resize memory

+ 

+ [arabic]

+ . SSH to the virthost server

+ . Calculate the amount of memory the system needs in kb

+ . `sudo virsh setmem <guest> <num_in_kilobytes> --config` - ie:

+ +

+ ....

+ sudo virsh setmem bapp01 16777216 --config

+ ....

+ . Shutdown the virtual system

+ . Start the virtual system

+ +

+ [NOTE]

+ ====

+ Note that using _virsh reboot_ is insufficient. You have to

+ actually stop the domain and start it with `virsh destroy <guest>` and

+ `virsh start <guest>` for the change to take effect.

+ ====

+ . Login and check that memory matches

+ . *Remember to update the group_vars in ansible* to match the new value

+ you set, if appropriate.

@@ -0,0 +1,139 @@ 

+ = Haproxy Infrastructure SOP

+ 

+ haproxy is an application that does load balancing at the tcp layer or

+ at the http layer. It can do generic tcp balancing but it does

+ specialize in http balancing. Our proxy servers are still running apache

+ and that is what our users connect to. But instead of using

+ mod_proxy_balancer and ProxyPass balancer://, we do a ProxyPass to

+ http://localhost:10001/ or http://localhost:10002/. haproxy must

+ be told to listen to an individual port for each farm. All haproxy farms

+ are listed in /etc/haproxy/haproxy.cfg.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_how_it_works>>

+ * <<_configuration_example>>

+ * <<_stats>>

+ * <<_advanced_usage>>

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-admin, sysadmin-main, sysadmin-web group

+ Location:::

+   Phoenix, Tummy, Telia

+ Servers:::

+   proxy1, proxy2, proxy3, proxy4, proxy5

+ Purpose:::

+   Provides load balancing from the proxy layer to our application layer.

+ 

+ == How it works

+ 

+ haproxy is a load balancer. If you're familiar, this section won't be

+ that interesting. haproxy in its normal usage acts just like a web

+ server. It listens on a port for requests. Unlike most webservers though

+ it then sends that request to one of our back end application servers

+ and sends the response back. This is referred to as reverse proxying. We

+ typically configure haproxy to send a check to a specific url and look for
+ the response code. If no check url is set, it just does basic checks against
+ /. In most of our configurations we're using round robin balancing, i.e.
+ request 1 goes to app1, request 2 goes to app2, request 3 goes to app3,
+ request 4 goes to app1, and the whole process repeats.

+ 

+ [WARNING]

+ ====

+ These checks do add load to the app servers, as well as additional
+ connections. Be smart about which url you're checking as it gets checked

+ often. Also be sure to verify the application servers can handle your

+ new settings, monitor them closely for the hour or two after you make

+ changes.

+ ====

+ 

+ == Configuration example

+ 

+ The below example is how our fedoraproject wiki could be configured.

+ Each application should have its own farm. Even though it may have an

+ identical configuration to another farm, this allows easy addition and

+ subtraction of specific nodes when we need them:

+ 

+ ....

+ listen  fpo-wiki 0.0.0.0:10001

+ balance roundrobin

+ server  app1 app1.fedora.iad2.redhat.com:80 check inter 2s rise 2 fall 5

+ server  app2 app2.fedora.iad2.redhat.com:80 check inter 2s rise 2 fall 5

+ server  app4 app4.fedora.iad2.redhat.com:80 backup check inter 2s rise 2 fall 5

+ option  httpchk GET /wiki/Infrastructure

+ ....

+ 

+ * The first line "listen ...." Says to create a farm called _fpo-wiki_.

+ Listening on all IP's on port 10001. _fpo-wiki_ can be arbitrary but make

+ it something obvious. Aside from that the important bit is :10001.

+ Always make sure that when creating a new farm, its listening on a

+ unique port. In Fedora's case we're starting at 10001, and moving up by

+ one. Just check the config file for the lowest open port above 10001.

+ * The next line _balance roundrobin_ says to use round robin balancing.

+ * The server lines each add a new node to the balancer farm. In this

+ case the wiki is being served from app1, app2 and app4. If the wiki is

+ available at http://app1.fedora.iad2.redhat.com/wiki/ Then this

+ config would be used in conjunction with "RewriteRule ^/wiki/(.*)

+ http://localhost:10001/wiki/$1 [P,L]".

+ * _server_ means we're adding a new node to the farm

+ * _app1_ is the worker name; it is analogous to fpo-wiki but should

+ match the short hostname of the node to make it easy to follow.

+ * _app1.fedora.iad2.redhat.com:80_ is the hostname and port to be

+ contacted.

+ * _check_ means to check via bottom line "option httpchk GET

+ /wiki/Infrastructure" which will use /wiki/Infrastructure to verify the

+ wiki is working. If that URL fails, that entire node will be taken out

+ of the farm mix.

+ * _inter 2s_ means to check every 2 seconds. 2s is the same as 2000 in

+ this case.

+ * _rise 2_ means to not put this node back in the mix until it has had

+ two successful connections in a row. haproxy will continue to check

+ every 2 seconds whether a node is up or down.

+ * _fall 5_ means to take a node out of the farm after 5 failures.

+ * _backup_ You'll notice that app4 has a _backup_ option. We don't

+ actually use this for the wiki but do for other farms. It basically

+ means to continue checking and treat this node like any other node but

+ don't send it any production traffic unless the other two nodes are

+ down.

+ 

+ All of these options can be tweaked so keep that in mind when changing

+ or building a new farm. There are other configuration options in this

+ file that are global. Please see the haproxy documentation for more

+ info:

+ 

+ ....

+ /usr/share/doc/haproxy-1.3.14.6/haproxy-en.txt

+ ....

+ 

+ == Stats

+ 

+ In order to view the stats for a farm please see the stats page. Each

+ proxy server has its own stats page since each one is running its own

+ haproxy server. To view the stats point your browser to

+ https://admin.fedoraproject.org/haproxy/shorthostname/ so proxy1 is at

+ https://admin.fedoraproject.org/haproxy/proxy1/ The trailing / is

+ important.

+ 

+ * https://admin.fedoraproject.org/haproxy/proxy1/

+ * https://admin.fedoraproject.org/haproxy/proxy2/

+ * https://admin.fedoraproject.org/haproxy/proxy3/

+ * https://admin.fedoraproject.org/haproxy/proxy4/

+ * https://admin.fedoraproject.org/haproxy/proxy5/
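
+ 

+ If you need the raw numbers rather than the HTML view, the standard

+ haproxy stats endpoint can export CSV by appending `;csv` to the URL.

+ A minimal sketch, assuming the export is passed through unchanged by the

+ proxy:

+ 

+ ....

+ curl -s 'https://admin.fedoraproject.org/haproxy/proxy1/;csv' | head

+ ....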

+ 

+ == Advanced Usage

+ 

+ haproxy has some more advanced usage that we've not needed to worry

+ about yet but is worth mentioning. For example, one could send users to

+ just one app server based on session id. If user A happened to hit app1

+ first and user B happened to hit app4 first. All subsequent requests for

+ user A would go to app1 and user B would go to app4. This is handy for

+ applications that cannot normally be balanced because of shared storage

+ needs or other locking issues. This won't solve all problems though, and

+ it can have negative effects: for example, when app1 goes down, user A

+ would either lose their session or be unable to work until app1 comes back

+ up. Please do thorough testing before looking into this option.
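
+ 

+ For reference, a cookie-based persistence farm might look something like

+ the sketch below. This is illustrative only; the farm name, port and

+ servers are made up, and we do not currently run such a configuration:

+ 

+ ....

+ listen  someapp 0.0.0.0:10010

+ balance roundrobin

+ cookie  SERVERID insert indirect nocache

+ server  app1 app1.fedora.iad2.redhat.com:80 cookie app1 check inter 2s rise 2 fall 5

+ server  app2 app2.fedora.iad2.redhat.com:80 cookie app2 check inter 2s rise 2 fall 5

+ ....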

@@ -0,0 +1,51 @@ 

+ = HOTFIXES SOP

+ 

+ From time to time we have to quickly patch a problem or issue in

+ applications in our infrastructure. This process allows us to do that

+ and track what changed and be ready to remove it when the issue is fixed

+ upstream.

+ 

+ == Ansible based items:

+ 

+ For ansible, hotfixes should be placed after the task that installs the

+ package to be changed or modified, either in roles or tasks.

+ 

+ Hotfix tasks should be called "HOTFIX description". They should link in

+ comments to any upstream bug or ticket, and they should carry the tag

+ _hotfix_.

+ 

+ The process is:

+ 

+ * Create a diff of any files changed in the fix.

+ * Check in the \_original_ files to the role/task.

+ * Now check in your diffs of those same files.

+ * ansible will replace the files on the affected machines completely

+ with the fixed versions.

+ * If you need to back it out, you can revert the diff step, wait, and

+ then remove the first checkin.

+ 

+ Example:

+ 

+ ....

+ <task that installs the httpd package>

+ 

+ #

+ # install hash randomization hotfix

+ # See bug https://bugzilla.redhat.com/show_bug.cgi?id=812398

+ #

+ - name: hotfix - copy over new httpd init script

+   copy: src="{{ files }}/hotfix/httpd/httpd.init" dest=/etc/init.d/httpd

+         owner=root group=root mode=0755

+   notify:

+   - restart apache

+   tags:

+   - config

+   - hotfix

+   - apache

+ ....
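
+ 

+ To back such a hotfix out later, one possible sequence from a checkout of

+ the ansible repository is sketched below; the file path and commit are

+ illustrative only:

+ 

+ ....

+ git log --oneline -- files/hotfix/httpd/httpd.init

+ git revert <commit-that-added-the-diff>

+ # once upstream ships the fix, also remove the original-file checkin and the task

+ ....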

+ 

+ == Upstream changes

+ 

+ Also, if at all possible a bug should be filed with the upstream

+ application to get the fix in the next version. Hotfixes are something

+ we should strive to only carry a short time.

@@ -0,0 +1,136 @@ 

+ = The New Hotness

+ 

+ https://github.com/fedora-infra/the-new-hotness/[the-new-hotness] is a

+ https://fedora-messaging.readthedocs.io/en/stable/[fedora messaging

+ consumer] that subscribes to

+ https://release-monitoring.org/[release-monitoring.org] fedora messaging

+ notifications to determine when a package in Fedora should be updated.

+ For more details on the-new-hotness, consult the

+ http://the-new-hotness.readthedocs.io/[project documentation].

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin #fedora-apps

+ Persons::

+   zlopez

+ Location::

+   os.fedoraproject.org

+ Purpose::

+   File issues when upstream projects release new versions of a package

+ 

+ == Hosts

+ 

+ The current deployment is made up of the-new-hotness OpenShift

+ namespace.

+ 

+ === the-new-hotness

+ 

+ This OpenShift namespace runs the following pods:

+ 

+ * A fedora messaging consumer

+ 

+ This OpenShift project relies on:

+ 

+ * xref:anitya.adoc[Anitya] as message publisher

+ * Fedora messaging RabbitMQ hub for consuming messages

+ * Koji for scratch builds

+ * Bugzilla for issue reporting

+ 

+ == Releasing

+ 

+ The release process is described in

+ https://the-new-hotness.readthedocs.io/en/stable/dev-guide.html#release-guide[the-new-hotness

+ documentation].

+ 

+ === Deploying

+ 

+ Staging deployment of the-new-hotness is deployed in staging OpenShift.

+ 

+ To deploy the staging instance of the-new-hotness you need to push

+ changes to the staging branch on

+ https://github.com/fedora-infra/the-new-hotness[the-new-hotness GitHub].

+ GitHub webhook will then automatically deploy a new version of

+ the-new-hotness on staging.

+ 

+ Production deployment of the-new-hotness is deployed in production OpenShift.

+ 

+ To deploy the production instance of the-new-hotness you need to push

+ changes to the production branch on

+ https://github.com/fedora-infra/the-new-hotness[the-new-hotness GitHub].

+ GitHub webhook will then automatically deploy a new version of

+ the-new-hotness on production.

+ 

+ ==== Configuration

+ 

+ To deploy the new configuration, you need

+ https://fedora-infra-docs.readthedocs.io/en/latest/sysadmin-guide/sops/sshaccess.html[ssh

+ access] to _batcave01.iad2.fedoraproject.org_ and

+ https://fedora-infra-docs.readthedocs.io/en/latest/sysadmin-guide/sops/ansible.html[permissions

+ to run the Ansible playbook].

+ 

+ All the following commands should be run from batcave01.

+ 

+ First, ensure there are no configuration changes required for the new

+ update. If there are, update the Ansible anitya role(s) and optionally

+ run the playbook:

+ 

+ ....

+ $ sudo rbac-playbook openshift-apps/the-new-hotness.yml

+ ....

+ 

+ The configuration changes could be limited to staging only using:

+ 

+ ....

+ $ sudo rbac-playbook openshift-apps/the-new-hotness.yml -l staging

+ ....

+ 

+ This is recommended for testing new configuration changes.

+ 

+ ==== Upgrading

+ 

+ ===== Staging

+ 

+ To deploy a new version of the-new-hotness you need to push changes to

+ the staging branch on

+ https://github.com/fedora-infra/the-new-hotness[the-new-hotness GitHub].

+ GitHub webhook will then automatically deploy a new version of

+ the-new-hotness on staging.

+ 

+ ===== Production

+ 

+ To deploy a new version of the-new-hotness you need to push changes to

+ the production branch on

+ https://github.com/fedora-infra/the-new-hotness[the-new-hotness GitHub].

+ GitHub webhook will then automatically deploy a new version of

+ the-new-hotness on production.

+ 

+ Congratulations! The new version should now be deployed.

+ 

+ == Monitoring Activity

+ 

+ It can be nice to check up on the-new-hotness to make sure it's behaving

+ correctly. You can see all the Bugzilla activity using the

+ https://bugzilla.redhat.com/page.cgi?id=user_activity.html[user activity

+ query] (staging uses

+ https://bugzilla.stage.redhat.com/page.cgi?id=user_activity.html[bugzilla.stage.redhat.com])

+ and querying for the `upstream-release-monitoring@fedoraproject.org`

+ user.

+ 

+ You can also view all the Koji tasks dispatched by the-new-hotness. For

+ example, you can see the

+ https://koji.fedoraproject.org/koji/tasks?state=failed&owner=hotness[failed

+ tasks] it has created.

+ 

+ To monitor the pods of the-new-hotness you can connect to Fedora infra

+ OpenShift and look at the state of pods.

+ 

+ For staging look at the _the-new-hotness_ namespace in

+ https://os.stg.fedoraproject.org/console/project/release-monitoring/overview[staging

+ OpenShift instance].

+ 

+ For production look at the _the-new-hotness_ namespace in

+ https://os.fedoraproject.org/console/project/release-monitoring/overview[production

+ OpenShift instance].
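
+ 

+ From the command line, assuming you are already logged in to the relevant

+ cluster with `oc` and using the namespace named above, something like this

+ shows the pod state and logs:

+ 

+ ....

+ oc -n the-new-hotness get pods

+ oc -n the-new-hotness logs <pod-name>

+ ....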

@@ -0,0 +1,72 @@ 

+ = System Administrator Guide

+ 

+ Welcome to The Fedora Infrastructure system administration guide.

+ 

+ == Getting Started

+ 

+ If you haven't already, you should complete the general

+ https://fedora-infra-docs.readthedocs.io/en/latest/index.html#getting-started[Getting Started] guide.

+ Once you've completed that, you're ready to get

+ involved in the

+ https://accounts.fedoraproject.org/group/fi-apprentice/[Fedora

+ Infrastructure Apprentice] group.

+ 

+ === Fedora Infrastructure Apprentice

+ 

+ The

+ https://accounts.fedoraproject.org/group/fi-apprentice/[Fedora

+ Infrastructure Apprentice] group in the Fedora Account System grants

+ read-only access to many Fedora infrastructure machines. This group is

+ used for new folks to look around at the infrastructure setup, check

+ machines and processes and see where they might like to contribute

+ moving forward. This also allows apprentices to examine and gather info

+ on problems, then propose solutions.

+ 

+ [NOTE]

+ ====

+ This group will be pruned often of inactive folks who miss the monthly

+ email check-in on the

+ https://lists.fedoraproject.org/admin/lists/infrastructure.lists.fedoraproject.org/[infrastructure

+ mailing list]. There's nothing personal in this and you're welcome to

+ re-join later when you have more time; we just want to make sure the

+ group only has active members.

+ ====

+ 

+ Members of the https://accounts.fedoraproject.org/group/fi-apprentice/[Fedora

+ Infrastructure Apprentice] group have ssh/shell access to many machines,

+ but no sudo rights or ability to commit to the

+ https://pagure.io/fedora-infra/ansible/[Ansible repository] (but they do

+ have read-only access). Apprentices can, however, contribute to the

+ infrastructure documentation by making a pull request to the

+ https://pagure.io/infra-docs/[infra-docs] repository. Access is via the

+ bastion.fedoraproject.org machine and from there to each machine. See

+ the xref:ssh.adoc[ssh-sop] for instructions on how to set up SSH. You can see a list

+ of hosts that allow apprentice access by using:

+ 

+ ....

+ $  ./scripts/hosts_with_var_set -i inventory/ -o ipa_client_shell_groups=fi-apprentice

+ ....

+ 

+ from a checkout of the https://pagure.io/fedora-infra/ansible/[Ansible

+ repository]. The Ansible repository is hosted on pagure.io at

+ `https://pagure.io/fedora-infra/ansible.git`.

+ 

+ === Selecting a Ticket

+ 

+ Start by checking out the

+ https://pagure.io/fedora-infrastructure/issues?status=Open&tags=easyfix[easyfix

+ tickets]. Tickets marked with this tag are a good place for apprentices

+ to learn how things are set up, and also contribute a fix.

+ 

+ Since apprentices do not have commit access to the

+ https://pagure.io/fedora-infra/ansible/[Ansible repository], you should

+ make your change, produce a patch with `git diff`, and attach it to the

+ infrastructure ticket you are working on. It will then be reviewed.
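
+ 

+ A minimal example of that workflow (the ticket number and files are

+ placeholders):

+ 

+ ....

+ git clone https://pagure.io/fedora-infra/ansible.git

+ cd ansible

+ # edit the relevant role/playbook files, then:

+ git diff > ticket-XXXX.patch

+ ....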

+ 

+ == Standard Operating Procedures

+ 

+ Below is a table of contents containing all the standard operating

+ procedures for Fedora Infrastructure applications. For information on

+ how to write a new standard operating procedure, consult the guide on

+ https://fedora-infra-docs.readthedocs.io/en/latest/dev-guide/sops.html#develop-sops[Developing

+ Standard Operating Procedures].

@@ -0,0 +1,55 @@ 

+ = Infrastructure Git Repos

+ 

+ Setting up an infrastructure git repo - and the push mechanisms for the

+ magicks

+ 

+ We have a number of git repos (in /git on batcave) that manage files for

+ ansible, our docs, our common host info database and our kickstarts. This

+ is a doc on how to set up a new one of these, if it is needed.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main

+ Location::

+   Phoenix

+ Servers::

+   batcave01.iad2.fedoraproject.org, batcave-comm01.qa.fedoraproject.org

+ 

+ == Steps

+ 

+ Create the bare repo:

+ 

+ ....

+ mkdir $git_dir

+ setfacl -m d:g:$yourgroup:rwx -m d:g:$othergroup:rwx  \

+  -m g:$yourgroup:rwx -m g:$othergroup:rwx $git_dir

+ 

+ cd $git_dir

+ git init --bare

+ ....

+ 

+ edit up config - add these lines to the bottom:

+ 

+ ....

+ [hooks]

+ # (normally sysadmin-members@fedoraproject.org)

+ mailinglist = emailaddress@yourdomain.org

+ emailprefix =

+ maildomain = fedoraproject.org

+ reposource = /path/to/this/dir

+ repodest = /path/to/where/you/want/the/files/dumped

+ ....

+ 

+ edit up description - make it something useful.

+ 

+ Then set up the hooks:

+ 

+ ....

+ cd hooks

+ rm -f *.sample

+ # copy the hooks from /git/infra-docs/hooks/ on batcave01 into this directory

+ ....

+ 

+ modify sudoers so that users in whatever groups can commit to this repo

+ can run `/usr/local/bin/syncgittree.sh` without entering a password.
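
+ 

+ For example, a sudoers entry might look like the following sketch; the

+ group name is illustrative, use the group(s) that actually own the repo:

+ 

+ ....

+ %sysadmin-web ALL=(root) NOPASSWD: /usr/local/bin/syncgittree.sh

+ ....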

@@ -0,0 +1,113 @@ 

+ = Infrastructure Host Rename SOP

+ 

+ This page is intended to guide you through the process of renaming a

+ virtual node.

+ 

+ == Contents

+ 

+ * <<_introduction>>

+ * <<_finding_out_where_the_host_is>>

+ * <<_preparation>>

+ * <<_renaming_the_logical_volume>>

+ * <<_doing_the_actual_rename>>

+ * <<_vpn_stuff>>

+ 

+ == Introduction

+ 

+ Throughout this SOP, we will refer to the old hostname as $oldhostname

+ and the new hostname as $newhostname. We will refer to the Dom0 host

+ that the vm resides on as `$vmhost`.

+ 

+ If this process is being followed so that a temporary-named host can

+ replace a production host, please be sure to follow the

+ xref:infra-retiremachine.adoc[Infrastructure retire machine SOP] to properly

+ decommission the old host before continuing.

+ 

+ == Finding out where the host is

+ 

+ In order to rename the host, you must have access to the _Dom0_ (host) on

+ which the virtual server resides. To find out which host that is, log in

+ to _batcave01_, and run:

+ 

+ ....

+ grep $oldhostname /var/log/virthost-lists.out

+ ....

+ 

+ The first column of the output will be the Dom0 of the virtual node.

+ 

+ == Preparation

+ 

+ SSH to `$oldhostname`. If the new name is replacing a production box,

+ change the IP Address that it binds to, in

+ `/etc/sysconfig/network-scripts/ifcfg-eth0`.

+ 

+ Also change the hostname in `/etc/sysconfig/network`.

+ 

+ At this point, you can `sudo poweroff` `$oldhostname`.

+ 

+ Open an ssh session to `$vmhost`, and make sure that the node is listed as

+ `shut off`. If it is not, you can force it off with:

+ 

+ ....

+ virsh destroy $oldhostname

+ ....

+ 

+ == Renaming the Logical Volume

+ 

+ Find out the name of the logical volume (on `$vmhost`):

+ 

+ ....

+ virsh dumpxml $oldhostname | grep 'source dev'

+ ....

+ 

+ This will give you a line that looks like

+ `<source dev='/dev/VolGroup00/$oldhostname'/>` which tells you that

+ `/dev/VolGroup00/$oldhostname` is the path to the logical volume.

+ 

+ Run `/usr/sbin/lvrename` with two arguments: the path that you found

+ above, and the same path with `$newhostname` at the end instead of `$oldhostname`.

+ 

+ For example::

+   `/usr/sbin/lvrename /dev/VolGroup00/noc03-tmp /dev/VolGroup00/noc01`

+ 

+ == Doing the actual rename

+ 

+ Now that the logical volume has been renamed, we can rename the host in

+ libvirt.

+ 

+ Dump the configuration of `$oldhostname` into an xml file, by running:

+ 

+ ....

+ virsh dumpxml $oldhostname > $newhostname.xml

+ ....

+ 

+ Open up `$newhostname.xml`, and change all instances of `$oldhostname` to

+ `$newhostname`.

+ 

+ Save the file and run:

+ 

+ ....

+ virsh define $newhostname.xml

+ ....

+ 

+ If there are no errors above, you can undefine `$oldhostname`:

+ 

+ ....

+ virsh undefine $oldhostname

+ ....

+ 

+ Power on $newhostname, with:

+ 

+ ....

+ virsh start $newhostname

+ ....

+ 

+ And remember to set it to autostart:

+ 

+ ....

+ virsh autostart $newhostname

+ ....

+ 

+ == VPN Stuff

+ 

+ TODO

@@ -0,0 +1,68 @@ 

+ = Infrastructure Raid Mismatch Count SOP

+ 

+ What to do when a raid device has a mismatch count

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_description>>

+ * <<_correction>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main

+ Location::

+   All

+ Servers::

+   Physical hosts

+ Purpose::

+   Check and correct software RAID mismatch counts on physical hosts.

+ 

+ == Description

+ 

+ In some situations a raid device may indicate there is a count mismatch

+ as listed in:

+ 

+ ....

+ /sys/block/mdX/md/mismatch_cnt

+ ....

+ 

+ Anything other than 0 is considered not good. Though if the number is

+ low it's probably nothing to worry about. To correct this situation try

+ the directions below.
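
+ 

+ To see the current count on every array at once, something like this can

+ be used first (assuming the standard mdraid sysfs layout):

+ 

+ ....

+ grep -H . /sys/block/md*/md/mismatch_cnt

+ ....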

+ 

+ == Correction

+ 

+ More than anything these steps are to A) Verify there is no problem and

+ B) make the error go away. If step 1 and step 2 don't correct the

+ problems, *PROCEED WITH CAUTION*. The steps below, however, should be

+ relatively safe.

+ 

+ Issue a repair (replace mdX with the questionable raid device):

+ 

+ ....

+ echo repair > /sys/block/mdX/md/sync_action

+ ....

+ 

+ Depending on the size of the array and disk speed this can take a while.

+ Watch the progress with:

+ 

+ ....

+ cat /proc/mdstat

+ ....

+ 

+ Issue a check. It's this check that will reset the mismatch count if

+ there are no problems. Again replace mdX with your actual raid device:

+ 

+ ....

+ echo check > /sys/block/mdX/md/sync_action

+ ....

+ 

+ Just as before, you can watch the progress with:

+ 

+ ....

+ cat /proc/mdstat

+ ....

@@ -0,0 +1,112 @@ 

+ = Infrastructure Yum Repo SOP

+ 

+ In some cases RPMs in Fedora need to be rebuilt for the Infrastructure

+ team to suit our needs. This repo is provided to the public (except for

+ the RHEL RPMs). Rebuilds go into this repo, which is stored on the

+ netapp and shared via the proxy servers after being built on koji.

+ 

+ For basic instructions, read the standard documentation on Fedora wiki:

+ 

+ - https://fedoraproject.org/wiki/Using_the_Koji_build_system

+ 

+ This document will only outline the differences between the "normal"

+ repos and the infra repos.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_building_an_rpm>>

+ * <<_tagging_an_existing_build>>

+ * <<_promoting_a_staging_build>>

+ * <<_koji_package_list>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Location::

+   https://kojipkgs.fedoraproject.org/repos-dist/

+ Servers::

+   koji batcave01 / Proxy Servers

+ Purpose::

+   Provides infrastructure repo for custom Fedora Infrastructure rebuilds

+ 

+ == Building an RPM

+ 

+ Building an RPM for Infrastructure is significantly easier than building

+ an RPM for Fedora. Basically get your SRPM ready, then submit it to koji

+ for building to the `$repo-infra` target. (e.g. `epel7-infra`).

+ 

+ Example:

+ 

+ ....

+ rpmbuild --define "dist .el7.infra" -bs test.spec

+ koji build epel7-infra test-1.0-1.el7.infra.src.rpm

+ ....

+ 

+ [NOTE]

+ ====

+ Remember to build it for every dist / arch you need to deploy it on.

+ ====

+ 

+ After it has been built, you will see it's tagged as

+ `$repo-infra-candidate`, this means that it is a candidate for being

+ signed. The automatic signing system will pick it up and sign the

+ package for you without any further intervention. You can track when

+ this is done by checking the build info: when it is moved from

+ `$repo-infra-candidate` to `$repo-infra-stg`, it has been signed. You can

+ check this on the web interface (look under "Tags"), or via:

+ 

+ ....

+ koji buildinfo test-1.0-1.el7.infra

+ ....

+ 

+ After the build has been tagged into the `$repo-infra-stg` tag,

+ `tag2distrepo` will automatically create a distrepo task, which will

+ update the repository so that the package is available on staging hosts.

+ After this time, you can `yum clean all` and then install the packages via

+ `yum install` or `yum update`.
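
+ 

+ For instance, on a staging host (using the `test` package from the

+ earlier example; real package names will differ):

+ 

+ ....

+ sudo yum clean all

+ sudo yum install test

+ ....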

+ 

+ == Tagging an existing build

+ 

+ If you already have a real build and want to use it in the

+ infrastructure before it has landed in stable, you can tag it into the

+ respective `infra-candidate` tag. For example, if you have an _epel7_ build

+ of _test2-1.0-1.el7.infra_, run:

+ 

+ ....

+ koji tag epel7-infra-candidate test2-1.0-1.el7.infra

+ ....

+ 

+ And then the same autosigning and repogen from the previous section

+ applies.

+ 

+ == Promoting a staging build

+ 

+ After getting autosigned, builds will land in the respective infra-stg

+ tag, for example `epel7-infra-stg`. These tags go into repos that are

+ enabled on staging machines, but not on production. If you decide, after

+ testing, that the build is good enough for production, you can promote

+ it by running:

+ 

+ ....

+ koji move epel7-infra-stg epel7-infra test2-1.0-1.el7.infra

+ ....

+ 

+ == Koji package list

+ 

+ If you try to build a package into the infra tags, and koji says

+ something like: `BuildError: package test not in list for tag

+ epel7-infra-candidate`. That means that the package has not been added to

+ the list for building in that particular tag. Either add the package to

+ the respective Fedora/EPEL branches (this is the preferred method, since

+ we should always aim to get everything packaged for Fedora/EPEL), or add

+ the package to the listing for the respective tag.

+ 

+ To add package to infra tag, run:

+ 

+ ....

+ koji add-pkg $tag $package --owner=$user

+ ....

@@ -0,0 +1,34 @@ 

+ = Infrastructure retire machine SOP

+ 

+ == Introduction

+ 

+ When a machine (be it a virtual instance or real physical hardware) is

+ decommissioned, a set of steps must be followed to ensure that the

+ machine is properly removed from the set of machines we manage and

+ doesn't cause problems down the road.

+ 

+ == Retire process

+ 

+ [arabic]

+ . Ensure that the machine is no longer used for anything. Use git-grep, stop services, etc.

+ 

+ . Remove the machine from ansible. Make sure you not only remove the main

+ machine name, but also any aliases it might have (or move them to an

+ active server if they are active services). Make sure to search for the

+ IP address(es) of the machine as well. Ensure dns is updated to remove

+ the machine.

+ 

+ . Remove the machine from any labels in hardware devices like consoles

+ or the like.

+ 

+ . Revoke the ansible cert for the machine.

+ 

+ . Move the machine xml definition to ensure it does NOT start on boot.

+ You can move it to `name-retired-YYYY-MM-DD`.

+ 

+ . Ensure any backend storage the machine was using is freed or renamed

+ to `name-retired-YYYY-MM-DD`, as sketched below.
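
+ 

+ A possible sequence for the last two steps on a kvm virthost, as a sketch

+ only; adjust the volume group and device paths to the host in question:

+ 

+ ....

+ virsh dumpxml $guest > /root/$guest-retired-$(date +%F).xml

+ virsh undefine $guest

+ lvrename /dev/VolGroup00/$guest /dev/VolGroup00/$guest-retired-$(date +%F)

+ ....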

+ 

+ == TODO

+ 

+ fill in commands

@@ -0,0 +1,215 @@ 

+ = Ipsilon Infrastructure SOP

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_description>>

+ * <<_known_issues>>

+ * <<_restarting>>

+ * <<_configuration>>

+ * <<_common_actions>>

+ ** <<_openid_connect_scope_registration>>

+ ** <<_generate_an_openid_connect_token>>

+ ** <<_create_openid_connect_secrets_for_apps>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Primary upstream contact::

+   Patrick Uiterwijk - FAS: puiterwijk

+ Backup upstream contact::

+   Simo Sorce - FAS: simo (irc: simo) Howard Johnson - FAS: merlinthp

+   (irc: MerlinTHP) Rob Crittenden - FAS: rcritten (irc: rcrit)

+ Servers::

+ * ipsilon01.iad2.fedoraproject.org

+ * ipsilon02.iad2.fedoraproject.org

+ * ipsilon01.stg.iad2.fedoraproject.org

+ Purpose::

+   Ipsilon is our central authentication service that is used to

+   authenticate users against FAS. It is separate from FAS.

+ 

+ == Description

+ 

+ Ipsilon is our central authentication agent that is used to authenticate

+ users against FAS. It is separate from FAS. The only service that is not

+ using this currently is the wiki. It is a web service that is presented

+ via httpd and is load balanced by our standard haproxy setup.

+ 

+ == Known issues

+ 

+ No known issues at this time. There is not currently a logout option for

+ ipsilon, but it is not considered an issue. If group memberships are

+ updated in ipsilon the user will need to wait a few minutes for them to

+ replicate to all the systems.

+ 

+ == Restarting

+ 

+ To restart the application you simply need to ssh to the servers for the

+ problematic region and issue a `service httpd restart`. This should

+ rarely be required.

+ 

+ == Configuration

+ 

+ Configuration is handled by the ipsilon.yaml playbook in Ansible. This

+ can also be used to reconfigure the application, if that becomes necessary.

+ 

+ == Common actions

+ 

+ This section describes some common configuration actions.

+ 

+ === OpenID Connect Scope Registration

+ 

+ As documented on

+ https://fedoraproject.org/wiki/Infrastructure/Authentication,

+ application developers can request their own scopes. When a request for

+ this comes in, look in _ansible/roles/ipsilon/files/oidc_scopes/_ and copy

+ an example module. Copy this to a new file, so we have a file per scope

+ set. Fill in the information:

+ 

+ * _name_ is an Ipsilon-internal name. This should not include any spaces

+ * _display_name_ is the name of the category of scopes that is displayed

+ to the user

+ * _scopes_ is a dictionary with the full scope identifier (with namespace)

+ as keys. The values are dicts with the following keys:

+ ** _display_name_: The complete display name for this scope. This is what

+ the user gets shown to accept/reject

+ ** _claims_: A list of additional "claims" (pieces of user information) an

+ application will get when the user consents to this scope. For most scopes, this will be the empty list.

+ 

+ In _ansible/roles/ipsilon/tasks/main.yml_, add the name of the new file

+ (without .py) to the _with_items_ of "Copy OpenID Connect scope

+ registrations"). To enable, open

+ _ansible/roles/ipsilon/templates/configuration.conf_, and look for the

+ lines starting with "openidc enabled extensions". Add the name of the

+ plugin (in the "name" field of the file) to the environment this

+ scopeset has been requested for. Run the ansible _ipsilon.yml_ playbook.

+ 

+ === Generate an OpenID Connect token

+ 

+ There is a handy script in the Ansible project under

+ `scripts/generate-oidc-token` that can help you generate an OIDC token.

+ It has a self-explanatory `--help` argument, and it will print out some

+ SQL that you can run against Ipsilon's database, as well as the token

+ that you seek.

+ 

+ The `SERVICE_NAME` (the required positional argument) is the name of the

+ application that wants to use the token to perform actions against

+ another service.

+ 

+ To generate the scopes, you can visit our

+ https://fedoraproject.org/wiki/Infrastructure/Authentication[authentication] docs and

+ find the service you want the token to be used for. Each service has a

+ base namespace (a URL) and one or more scopes for that namespace. To

+ form a scope for this script, you concatenate the namespace of the

+ service with the scope you want to grant the service. You can provide

+ the script the `-s` flag multiple times if you want to grant more than one

+ scope to the same token.

+ 

+ As an example, to give Bodhi access to create waivers in WaiverDB, you

+ can see that the base namespace is

+ `https://waiverdb.fedoraproject.org/oidc/` and that there is a

+ `create-waiver` scope. You can run this to generate Ipsilon SQL and a

+ token with that scope:

+ 

+ ....

+ [bowlofeggs@batcave01 ansible][PROD]$ ./scripts/generate-oidc-token bodhi -e 365 -s https://waiverdb.fedoraproject.org/oidc/create-waiver

+ 

+ Run this SQL against Ipsilon's database:

+ 

+ --------START CUTTING HERE--------

+ BEGIN;

+ insert into token values ('2a5f2dff-4e93-4a8d-8482-e62f40dce046','username','bodhi@service');

+ insert into token values ('2a5f2dff-4e93-4a8d-8482-e62f40dce046','security_check','-ptBqVLId-kUJquqkVyhvR0DbDULIiKp1eqbXqG_dfVK9qACU6WwRBN3-7TRfoOn');

+ insert into token values ('2a5f2dff-4e93-4a8d-8482-e62f40dce046','client_id','bodhi');

+ insert into token values ('2a5f2dff-4e93-4a8d-8482-e62f40dce046','expires_at','1557259744');

+ insert into token values ('2a5f2dff-4e93-4a8d-8482-e62f40dce046','type','Bearer');

+ insert into token values ('2a5f2dff-4e93-4a8d-8482-e62f40dce046','issued_at','1525723744');

+ insert into token values ('2a5f2dff-4e93-4a8d-8482-e62f40dce046','scope','["openid", "https://someapp.fedoraproject.org/"]');

+ COMMIT;

+ -------- END CUTTING HERE --------

+ 

+ 

+ Token: 2a5f2dff-4e93-4a8d-8482-e62f40dce046_-ptBqVLId-kUJquqkVyhvR0DbDULIiKp1eqbXqG_dfVK9qACU6WwRBN3-7TRfoOn

+ ....

+ 

+ Once you have the SQL, you can run it against Ipsilon's database, and

+ you can provide the token to the application through some secure means

+ (such as putting into Ansible's secrets and telling the requestor the

+ Ansible variable they can use to access it.)

+ 

+ === Create OpenID Connect secrets for apps

+ 

+ Application wanting to use OpenID Connect need to register against our

+ OpenID Connect server (Ipsilon). Since we do not allow self-registration

+ (except on _iddev.fedorainfracloud.org_) for obvious reasons, the secrets

+ need to be created and configured per application and environment

+ (production vs staging).

+ 

+ To do so: - Go to the private ansible repository. - Edit the file:

+ `files/ipsilon/openidc.{{env}}.static` - At the bottom of this file, add

+ the information concerning the application you are adding. This will

+ look something like:

+ 

+ ____

+ ....

+ fedocal client_name="fedocal"

+ fedocal client_secret="<long random string>"

+ fedocal redirect_uris=["https://calendar.stg.fedoraproject.org/oidc_callback"]

+ fedocal client_uri="https://calendar.stg.fedoraproject.org/"

+ fedocal ipsilon_internal={"type":"static","client_id":"fedocal","trusted":true}

+ fedocal contacts=["admin@fedoraproject.org"]

+ fedocal client_id=null

+ fedocal policy_uri="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"

+ fedocal grant_types="authorization_code"

+ fedocal response_types="code"

+ fedocal application_type="web"

+ fedocal subject_type="pairwise"

+ fedocal logo_uri=null

+ fedocal tos_uri=null

+ fedocal jwks_uri=null

+ fedocal jwks=null

+ fedocal sector_identifier_uri=null

+ fedocal request_uris=[]

+ fedocal require_auth_time=null

+ fedocal token_endpoint_auth_method="client_secret_post"

+ fedocal id_token_signed_response_alg="RS256"

+ fedocal request_object_signing_alg="none"

+ fedocal initiate_login_uri=null

+ fedocal default_max_age=null

+ fedocal default_acr_values=null

+ fedocal client_secret_expires_at=0

+ ....

+ ____

+ 

+ In most situations, only the first 5 lines (up to `ipsilon_internal`)

+ will change. If the application is not using flask-oidc or is not

+ maintained by the Fedora Infrastructure the first 11 lines (up to

+ `application_type`) may change. The remaining lines require a deeper

+ understanding of OpenID Connect and Ipsilon.

+ 

+ [NOTE]

+ ====

+ `client_id` in `ipsilon_internal` must match the beginning of the line,

+ and the `client_id` field must either match the beginning of the line or

+ be `null` as in the example here.

+ ====

+ [NOTE]

+ ====

+ In our OpenID connect server, `OIDC.user_getfield('nickname')` will return

+ the FAS username, which we know from FAS is unique. However, not all

+ OpenID Connect servers enforce this constraint, so the application code

+ may rely on the `sub` which is the only key that is sure to be unique.

+ If the application relies on `sub` and wants `sub` to return the FAS

+ username, then the configuration should be adjusted with:

+ `subject_type="public"`.

+ ====

+ After adjusting this file, you will need to make the `client_secret`

+ available to the application via ansible; to do this, simply add it to

+ `vars.yml` as we do for the other private variables and provide the

+ variable name to the person who requested it.

+ 

+ Finally, commit and push the changes to both files and run the

+ `ipsilon.yml` playbook.

@@ -0,0 +1,132 @@ 

+ = iSCSI

+ 

+ iscsi allows one to share and mount block devices using the scsi

+ protocol over a network. Fedora currently connects to a netapp that has

+ an iscsi export.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_typical_uses>>

+ * <<_iscsi_basics>>

+ ** <<_terms>>

+ ** <<_iscsis_basic_login_logout_procedure_is>>

+ * <<_logging_in>>

+ * <<_logging_out>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main

+ Servers::

+   xen[1-15]

+ Purpose::

+   Provides iscsi connectivity to our netapp.

+ 

+ == Typical Uses

+ 

+ The best uses for Fedora are for servers that are not part of a farm or

+ live replicated. For example, we wouldn't put app1 on the iscsi share

+ because we don't gain anything from it. Shutting down app1 to move it

+ isn't an issue because app1 is part of our application server farm.

+ 

+ _noc1_, however, is not replicated. It's a standalone box that, at best,

+ would have a non-live failover. By placing this host on an iscsi share,

+ we can make it more highly available as it allows us to move that box

+ around our virtualization infrastructure without rebooting it or even

+ taking it down.

+ 

+ == iscsi Basics

+ 

+ === Terms

+ 

+ * initiator means client

+ * target means server

+ * swab means mop

+ * deck means floor

+ 

+ === iscsi's basic login / logout procedure is

+ 

+ [arabic]

+ . Notify your client that a new target is available (similar to editing

+   /etc/fstab for a new nfs mount)

+ . Login to the iscsi target (similar to running "mount /my/nfs")

+ . Logout from the iscsi target (similar to running "umount /my/nfs")

+ . Delete the target from the client (similar to removing the nfs mount

+   from /etc/fstab)

+ 

+ ==== Logging in

+ 

+ Most mounts are covered by ansible so this should be automatic. In the

+ event that something goes wrong though, the best way to fix this is:

+ 

+ * Notify the client of the target:

+ +

+ ....

+ iscsiadm --mode node --targetname iqn.1992-08.com.netapp:sn.118047036 --portal 10.5.88.21:3260 -o new

+ ....

+ * Log in to the new target:

+ +

+ ....

+ iscsiadm --mode node --targetname iqn.1992-08.com.netapp:sn.118047036 --portal 10.5.88.21:3260 --login

+ ....

+ * Scan and activate lvm:

+ +

+ ....

+ pvscan

+ vgscan

+ vgchange -ay xenGuests

+ ....

+ 

+ Once this is done, one should be able to run "lvs" to see the logical

+ volumes.

+ 

+ ==== Logging out

+ 

+ Logging out isn't normally needed; for example, rebooting a machine

+ automatically logs the initiator out. Should a problem arise, though,

+ here are the steps:

+ 

+ * Disable the logical volume:

+ +

+ ....

+ vgchange -an xenGuests

+ ....

+ * log out:

+ +

+ ....

+ iscsiadm --mode node --targetname iqn.1992-08.com.netapp:sn.118047036 --portal 10.5.88.21:3260 --logout

+ ....

+ 

+ [NOTE]

+ ====

+ `Cannot deactivate volume group`

+ 

+ If the vgchange command fails with an error about not being able to

+ deactivate the volume group, this means that one of the logical volumes

+ is still in use. By running "lvs" you can get a list of volume groups.

+ Look in the Attr column. There are 6 attrs listed. The 5th column

+ usually has a _-_ or an _a_. _a_ means its active, - means it is not. To

+ the right of that (the last column) you will see an _-_ or an _o_. If

+ you see an _o_ that means that logical volume is still mounted and in use.

+ ====

+ 

+ [IMPORTANT]

+ ====

+ 

+ Note about creating new logical volumes

+ 

+ At present we do not have logical volume locking on the xen servers.

+ This is dangerous and being worked on. Basically when you create a new

+ volume on a host, you need to run:

+ 

+ ....

+ pvscan

+ vgscan

+ lvscan

+ ....

+ 

+ on the other virtualization servers.

+ ====

@@ -0,0 +1,40 @@ 

+ = Jenkins Fedmsg SOP

+ 

+ Send information about Jenkins builds to fedmsg.

+ 

+ == Contact Information

+ 

+ Owner::

+   Ricky Elrod, Fedora Infrastructure Team

+ Contact::

+   #fedora-apps

+ 

+ == Reinstalling when it disappears

+ 

+ For an as-of-yet unknown reason, the plugin sometimes seems to

+ disappear, though it still shows as "installed" on Jenkins.

+ 

+ To re-install it, grab _fedmsg.hpi_ from

+ _/srv/web/infra/bigfiles/jenkins_. Go to the Jenkins web

+ interface and log in. Click _Manage Jenkins_ ->

+ _Manage Plugins_ -> _Advanced_. Upload the

+ plugin and on the page that comes up, check the box to have Jenkins

+ restart when running jobs are finished.

+ 

+ == Configuration Values

+ 

+ These are written here in case the Jenkins configuration ever gets lost.

+ This is how to configure the jenkins-fedmsg-emit plugin.

+ 

+ Assume the plugin is already installed.

+ 

+ Go to _Configure Jenkins_ -> _System Configuration_

+ 

+ Towards the bottom, look for _Fedmsg Emitter_

+ 

+ Values::

+   * Signing: Checked

+   * Fedmsg Endpoint: tcp://209.132.181.16:9941

+   * Environment Shortname: prod

+   * Certificate File: /etc/pki/fedmsg/jenkins-jenkins.fedorainfracloud.org.crt

+   * Keystore File: /etc/pki/fedmsg/jenkins-jenkins.fedorainfracloud.org.key

@@ -0,0 +1,61 @@ 

+ = Kerneltest-harness SOP

+ 

+ The kerneltest-harness is the web application used to gather and present

+ statistics about kernel test results.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_add_a_new_fedora_release>>

+ * <<_upload_new_test_results>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Location::

+   https://apps.fedoraproject.org/kerneltest/

+ Servers::

+   kerneltest01, kerneltest01.stg

+ Purpose::

+   Provide a system to gather and present kernel tests results

+ 

+ == Add a new Fedora release

+ 

+ * Login

+ * On the front page, in the menu on the left side, if there is a

+ _Fedora Rawhide_ release, click on _(edit)_.

+ * Bump the _Release number_ on _Fedora Rawhide_

+ to avoid conflicts with the new release you're creating

+ * Back on the index page, click on _New release_

+ * Complete the form:

+ ** Release number

+ +

+ This would be the integer version of the Fedora release, for example

+ 24 for Fedora 24.

+ ** Support

+ +

+ The current status of the Fedora release

+ +

+ *** Rawhide for Fedora Rawhide

+ *** Test for branched release

+ *** Release for released Fedora

+ *** Retired for retired release of Fedora

+ 

+ == Upload new test results

+ 

+ The kernel tests are available on the

+ https://pagure.io/kernel-tests[kernel-tests] git

+ repository.

+ 

+ Once run with `runtests.sh`, you can upload the resulting file either

+ using `fedora_submit.py` or the UI (a command-line sketch follows the list below).

+ 

+ If you choose the UI the steps are simply:

+ 

+ * Login

+ * Click on _Upload_ in the main menu on the top

+ * Select the result file generated by running the tests

+ * Submit
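
+ 

+ For the command-line path mentioned above, a rough sketch (check the

+ repository README for the exact options):

+ 

+ ....

+ git clone https://pagure.io/kernel-tests.git

+ cd kernel-tests

+ ./runtests.sh

+ # then upload the generated result file with fedora_submit.py or via the UI

+ ....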

@@ -0,0 +1,170 @@ 

+ = Kickstart Infrastructure SOP

+ 

+ Kickstart scripts provide our install infrastructure. We have a plethora

+ of different kickstarts to best match the system you are trying to

+ install.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main

+ Location::

+   Everywhere we have machines.

+ Servers::

+   batcave01 (stores kickstarts and install media)

+ Purpose::

+   Provides our install infrastructure

+ 

+ == Introduction

+ 

+ Our kickstart infrastructure lives on _batcave01_. All install media and

+ kickstart scripts are located on _batcave01_. Because the RHEL binaries

+ are not public we have these bits blocked. You can add needed IPs to

+ (from _batcave01_):

+ 

+ ....

+ ansible/roles/batcave/files/allows

+ ....

+ 

+ == Physical Machine (kvm virthost)

+ 

+ [NOTE]

+ ====

+ PXE Booting: If PXE booting just follow the prompt after doing the pxe boot (most

+ hosts will pxeboot via console hitting f12).

+ ====

+ 

+ === Prep

+ 

+ This only works on an already booted box; many boxes at our colocations

+ may have to be rebuilt by the people in those locations first. Also make

+ sure the IP you are about to boot to install from is allowed to our IP

+ restricted _infrastructure.fedoraproject.org_ as noted above (in

+ Introduction).

+ 

+ Download the vmlinuz and initrd images.

+ 

+ For a rhel6 install:

+ 

+ ....

+ wget https://infrastructure.fedoraproject.org/repo/rhel/RHEL6-x86_64/images/pxeboot/vmlinuz \

+     -O /boot/vmlinuz-install

+ wget https://infrastructure.fedoraproject.org/repo/rhel/RHEL6-x86_64/images/pxeboot/initrd.img \

+     -O /boot/initrd-install.img

+ 

+ grubby --add-kernel=/boot/vmlinuz-install \

+        --args="ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-6-nohd \

+        repo=https://infrastructure.fedoraproject.org/repo/rhel/RHEL6-x86_64/ \

+        ksdevice=link ip=$IP gateway=$GATEWAY netmask=$NETMASK dns=$DNS" \

+        --title="install el6" --initrd=/boot/initrd-install.img

+ ....

+ 

+ For a rhel7 install:

+ 

+ ....

+ wget https://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/images/pxeboot/vmlinuz -O /boot/vmlinuz-install

+ wget https://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/images/pxeboot/initrd.img -O /boot/initrd-install.img

+ ....

+ 

+ For iad2 hosts:

+ 

+ ....

+ grubby --add-kernel=/boot/vmlinuz-install \

+        --args="ks=http://10.5.126.23/repo/rhel/ks/hardware-rhel-7-nohd \

+        repo=http://10.5.126.23/repo/rhel/RHEL7-x86_64/ \

+        net.ifnames=0 biosdevname=0 bridge=br0:eth0 ksdevice=br0 \

+        ip={{ br0_ip }}::{{ gw }}:{{ nm }}:{{ hostname }}:br0:none" \

+        --title="install el7" --initrd=/boot/initrd-install.img

+ ....

+ 

+ (You will need to setup the br1 device if any after install)

+ 

+ For non iad2 hosts:

+ 

+ ....

+ grubby --add-kernel=/boot/vmlinuz-install \

+        --args="ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-ext \

+        repo=https://infrastructure.fedoraproject.org/repo/rhel/RHEL7-x86_64/ \

+        net.ifnames=0 biosdevname=0 bridge=br0:eth0 ksdevice=br0 \

+        ip={{ br0_ip }}::{{ gw }}:{{ nm }}:{{ hostname }}:br0:none" \

+        --title="install el7" --initrd=/boot/initrd-install.img

+ ....

+ 

+ Fill in the br0 ip, gateway, etc

+ 

+ The default here is to use the hardware-rhel-7-nohd config which

+ requires you to connect via VNC to the box and configure its drives. If

+ this is a new machine or you are fine with blowing everything away, you

+ can instead use

+ https://infrastructure.fedoraproject.org/rhel/ks/hardware-rhel-6-minimal

+ as your kickstart

+ 

+ If you know the number of hard drives the system has there are other

+ kickstarts which can be used.

+ 

+ 2 disk system::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-02disk

+ or external::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-02disk-ext

+ 4 disk system::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-04disk

+ or external::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-04disk-ext

+ 6 disk system::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-06disk

+ or external::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-06disk-ext

+ 8 disk system::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-08disk

+ or external::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-08disk-ext

+ 10 disk system::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-10disk

+ or external::

+   ks=https://infrastructure.fedoraproject.org/repo/rhel/ks/hardware-rhel-7-10disk-ext

+ 

+ Double and triple check your configuration settings (On RHEL-6

+ `cat /boot/grub/menu.lst` and on RHEL-7 `cat /boot/grub2/grub.cfg`),

+ especially your IP information. In places like ServerBeach not all hosts

+ have the same netmask or gateway. Once everything checks out and you are

+ ready, run the commands to set it up to boot the installer on the next boot.

+ 

+ RHEL-6:

+ 

+ ....

+ echo "savedefault --default=0 --once" | grub --batch

+ shutdown -r now

+ ....

+ 

+ RHEL-7:

+ 

+ ....

+ grub2-reboot 0

+ shutdown -r now

+ ....

+ 

+ === Installation

+ 

+ Once the box logs you out, start pinging the IP address. It will

+ disappear and come back. Once you can ping it again, try to open up a

+ VNC session. It can take a couple of minutes after the box is back up

+ for it to actually allow vnc sessions. The VNC password is in the

+ kickstart script on _batcave01_:

+ 

+ ....

+ grep vnc /mnt/fedora/app/fi-repo/rhel/ks/hardware-rhel-7-nohd

+ 

+ vncviewer $IP:1

+ ....

+ 

+ If using the standard kickstart script, one can watch as the install

+ completes itself; there should be no need to do anything. If using the

+ hardware-rhel-6-nohd script, one will need to configure the drives. The

+ password is in the kickstart file in the kickstart repo.

+ 

+ === Post Install

+ 

+ Run ansible on the box asap to set root passwords and other security

+ features. Don't leave a newly installed box sitting around.
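
+ 

+ For example, from batcave01 (the playbook path here is hypothetical; use

+ whichever group playbook covers the new host):

+ 

+ ....

+ sudo rbac-playbook groups/<hostgroup>.yml -l <new_host_fqdn>

+ ....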

@@ -0,0 +1,44 @@ 

+ = Koji Archive SOP

+ 

+ This SOP documents how to archive Fedora EOL'd builds from the DEFAULT

+ volume to archived volume.

+ 

+ Before archiving the builds, identify if any of the EOL'd release builds

+ are still being used in the current releases. For example, to test if

+ f28 builds are still being used in f32, use:

+ 

+ ....

+ $ koji list-tagged f32 | grep fc28

+ ....

+ 

+ Tag all these builds with koji's _do-not-archive-yet_ tag, so that they won't

+ be archived. To do that, first add the packages to the

+ _do-not-archive-yet_ tag:

+ ....

+ $ koji add-pkg do-not-archive-yet --owner <username> pkg1 pkg2 ...

+ ....

+ 

+ Then tag the builds with the _do-not-archive-yet_ tag:

+ 

+ ....

+ $ koji tag-build do-not-archive-yet build1 build2 ...

+ ....

+ 

+ Then update the archive policy, which is available in the releng repo

+ (https://pagure.io/releng/blob/main/f/koji-archive-policy)

+ 

+ Run the following from _compose-x86-01.iad2.fedoraproject.org_

+ 

+ ....

+ $ cd

+ $ wget https://pagure.io/releng/raw/master/f/koji-archive-policy

+ $ git clone https://pagure.io/koji-tools/

+ $ cd koji-tools

+ $ ./koji-change-volumes -p compose_koji -v ~/archive-policy

+ ....

+ 

+ In any case, if you need to move a build back to the DEFAULT volume:

+ ....

+ $ koji add-pkg do-not-archive-yet --owner <username> pkg1

+ $ koji tag-build do-not-archive-yet build1

+ $ koji set-build-volume DEFAULT <n-v-r>

+ ....

@@ -0,0 +1,119 @@ 

+ = Setup Koji Builder SOP

+ 

+ == Contents

+ 

+ * <<_builder_setup>>

+ ** <<_network_overview>>

+ ** <<_setup_overview>>

+ ** <<_resettinginstalling_an_old_koji_builder>>

+ 

+ == Builder Setup

+ 

+ Setting up a new koji builder involves a goodly number of steps:

+ 

+ === Network Overview

+ 

+ [arabic]

+ . First get an instance spun up following the kickstart sop.

+ . Define a hostname for it on the .125 network and a $hostname-nfs name

+ for it on the .127 network.

+ . make sure the instance has 2 network connections:

+ * eth0 should be on the .125 network

+ * eth1 should be on the .127 network

+ +

+ [NOTE]

+ =====

+ For VM _eth0_ should be on _br0_, _eth1_ on _br1_ on the vmhost.

+ =====

+ 

+ === Setup Overview

+ 

+ * install the system as normal:

+ +

+ ....

+ virt-install -n $builder_fqdn -r $memsize \

+ -f $path_to_lvm  --vcpus=$numprocs \

+ -l http://10.5.126.23/repo/rhel/RHEL6-x86_64/ \

+ -x "ksdevice=eth0 ks=http://10.5.126.23/repo/rhel/ks/kvm-rhel-6 \

+  ip=$ip netmask=$netmask gateway=$gw dns=$dns \

+  console=tty0 console=ttyS0" \

+  --network=bridge=br0 --network=bridge=br1 \

+  --vnc --noautoconsole

+ ....

+ * run `python /root/tmp/setup-nfs-network.py`; this should print out the

+ -nfs hostname that you made above

+ * change root pw

+ * disable selinux on the machine in /etc/sysconfig/selinux

+ * reboot

+ * setup ssl cert into private/builders - use fqdn of host as DN

+ ** login to _fas01_ as root

+ ** `cd /var/lib/fedora-ca`

+ ** `./kojicerthelper.py normal --outdir=/tmp/ --name=$fqdn_of_the_new_builder --cadir=. --caname=Fedora`

+ ** info for the cert should be like this:

+ +

+ ....

+ Country Name (2 letter code) [US]:

+ State or Province Name (full name) [North Carolina]:

+ Locality Name (eg, city) [Raleigh]:

+ Organization Name (eg, company) [Fedora Project]:

+ Organizational Unit Name (eg, section) []:Fedora Builders

+ Common Name (eg, your name or your servers hostname) []:$fqdn_of_new_builder

+ Email Address []:buildsys@fedoraproject.org

+ ....

+ ** scp the file in `/tmp/$\{fqdn}_key_and_cert.pem` over to _batcave01_

+ ** put the file in the private repo under `private/builders/$\{fqdn}.pem`

+ ** `git add` + `git commit`

+ ** `git push`

+ * run `./sync-hosts` in infra-hosts repo; `git commit; git push`

+ * as a koji admin run:

+ +

+ ....

+ koji add-host $fqdn i386 x86_64

+ 

+ (note: those are yum basearchs on the end - season to taste)

+ ....

+ 

+ === Resetting/installing an old koji builder

+ 

+ * disable the builder in koji (ask a koji admin)

+ * halt the old system (`halt -p`)

+ * undefine the vm instance on the buildvmhost:

+ +

+ ....

+ virsh undefine $builder_fqdn

+ ....

+ * reinstall it - from the buildvmhost run:

+ +

+ ....

+ virt-install -n $builder_fqdn -r $memsize \

+ -f $path_to_lvm  --vcpus=$numprocs \

+  -l http://10.5.126.23/repo/rhel/RHEL6-x86_64/ \

+  -x "ksdevice=eth0 ks=http://10.5.126.23/repo/rhel/ks/kvm-rhel-6 \

+  ip=$ip netmask=$netmask gateway=$gw dns=$dns \

+  console=tty0 console=ttyS0" \

+  --network=bridge=br0 --network=bridge=br1 \

+  --vnc --noautoconsole

+ ....

+ * watch install via vnc:

+ +

+ ....

+ vncviewer -via bastion.fedoraproject.org $builder_fqdn:1

+ ....

+ * when the install finishes:

+ ** start the instance on the buildvmhost:

+ +

+ ....

+ virsh start $builder_fqdn

+ ....

+ ** set it to autostart on the buildvmhost:

+ +

+ ....

+ virsh autostart $builder_fqdn

+ ....

+ * when the guest comes up

+ ** login via ssh using the temp root password

+ ** python `/root/tmp/setup-nfs-network.py`

+ ** change root password

+ ** disable selinux in `/etc/sysconfig/selinux`

+ ** `reboot`

+ ** ask a koji admin to re-enable the host

@@ -0,0 +1,206 @@ 

+ = Koji Infrastructure SOP

+ 

+ [NOTE]

+ ====

+ We are transitioning from two buildsystems, koji for Fedora and plague

+ for EPEL, to just using koji. This page documents both.

+ ====

+ 

+ Koji and plague are our buildsystems. They share some of the same

+ machines to do their work.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_description>>

+ * <<_add_packages_into_buildroot>>

+ * <<_troubleshooting_and_resolution>>

+ ** <<_restarting_koji>>

+ ** <<_kojid_wont_start_or_some_builders_wont_connect>>

+ ** <<_oom_out_of_memory_issues>>

+ *** <<_increase_memory>>

+ *** <<_decrease_weight>>

+ ** <<_disk_space_issues>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-build group

+ Persons::

+   mbonnet, dgilmore, f13, notting, mmcgrath, SmootherFrOgZ

+ Servers::

+   * koji.fedoraproject.org

+   * buildsys.fedoraproject.org

+   * xenbuilder[1-4]

+   * hammer1, ppc[1-4]

+ Purpose::

+   Build packages for Fedora.

+ 

+ == Description

+ 

+ Users submit builds to _koji.fedoraproject.org_ or

+ _buildsys.fedoraproject.org_. From there it gets passed on to the

+ builders.

+ 

+ [IMPORTANT]

+ ====

+ At present plague and koji are unaware of each other. A result of this

+ may be an overloaded builder. An easy fix for this is not clear at this

+ time.

+ ====

+ 

+ == Add packages into Buildroot

+ 

+ Some contributors may need to build packages against freshly built

+ packages which are not in the buildroot yet. Koji has override tags,

+ inherited by the build tag, to include such packages in the buildroot.

+ A build can be added to the override tag by:

+ 

+ ....

+ koji tag-pkg dist-$release-override <package_nvr>

+ ....

+ 

+ == Troubleshooting and Resolution

+ 

+ === Restarting Koji

+ 

+ If for some reason koji needs to be restarted, make sure to restart the

+ koji master first, then the builders. If the koji master has been down

+ for a short enough time the builders do not need to be restarted:

+ 

+ ....

+ service httpd restart

+ service kojira restart

+ service kojid restart

+ ....

+ 

+ [IMPORTANT]

+ ====

+ If postgres becomes interrupted in some way, koji will need to be

+ restarted. As long as the koji master daemon gets restarted the builders

+ should reconnect automatically. If the db server has been restarted and

+ the builders don't seem to be building, restart their daemons as well.

+ ====

+ 

+ === kojid won't start or some builders won't connect

+ 

+ In the event that some items are able to connect to koji while some are

+ not, please make sure that the database is not filled up on connections.

+ This is common if koji crashes and the db connections aren't properly

+ cleared. Upon restart many of the connections are full so koji cannot

+ reconnect. Clearing old connections is easy: guess about how long the new

+ koji has been up, pick a number of minutes larger than that, and kill

+ queries older than that. From _db3_ as _postgres_ run:

+ 

+ ....

+ echo "select procpid from pg_stat_activity where usename='koji' and now() - query_start \

+ >= '00:40:00' order by query_start;" | psql koji | grep "^  " | xargs kill

+ ....

+ 

+ === OOM (Out of Memory) Issues

+ 

+ Out of memory issues occur from time to time on the build machines.

+ There are a couple of options for correction. The first fix is to just

+ restart the machine and hope it was a one time thing. If the problem

+ continues please choose from one of the following options.

+ 

+ ==== Increase Memory

+ 

+ The xen machines can have memory increased on their corresponding xen

+ hosts. At present this is the table:

+ 

+ [width="34%",cols="44%,56%",options="header"]

+ |===

+ |Xen host |Builder

+ |xen3 |xenbuilder1

+ |xen4 |xenbuilder2

+ |disabled |xenbuilder3

+ |xen8 |xenbuilder4

+ |===

+ 

+ Edit `/etc/xen/xenbuilder[1-4]` and add more memory.

+ 
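+ As a sketch, the relevant setting in the guest's config file is the

+ `memory` line, which gives the guest's RAM in MB (the file name and value

+ below are only illustrative):

+ 

+ ....

+ # e.g. in /etc/xen/xenbuilder1

+ memory = 4096

+ ....

+ 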

+ ==== Decrease weight

+ 

+ Each builder has a weight as to how much work can be given to it.

+ Presently the only way to alter weight is actually changing the database

+ on _db3_:

+ 

+ ....

+ $ sudo su - postgres

+ -bash-2.05b$ psql koji

+ koji=# select * from host limit 1;

+ id | user_id |          name          |  arches   | task_load | capacity | ready | enabled

+ ---+---------+------------------------+-----------+-----------+----------+-------+---------

+ 6  |     130 | ppc3.fedora.redhat.com | ppc ppc64 |       1.5 |        4 | t     | t

+ (1 row)

+ koji=# update host set capacity=2 where name='ppc3.fedora.redhat.com';

+ ....

+ 

+ Simply update capacity to a lower number.

+ 

+ === Disk Space Issues

+ 

+ The builders use a lot of temporary storage. Failed builds also get left

+ on the builders, most should get cleaned but plague does not. The

+ easiest thing to do is remove some older cache dirs.

+ 

+ Step one is to turn off both koji and plague:

+ 

+ ....

+ /etc/init.d/plague-builder stop

+ /etc/init.d/kojid stop

+ ....

+ 

+ Next check to see what file system is full:

+ 

+ ....

+ df -h

+ ....

+ 

+ [IMPORTANT]

+ ====

+ If any one of the following directories is full, send an outage

+ notification as outlined in the Infrastructure/OutageTemplate to the

+ fedora-infrastructure-list and fedora-devel-list, then contact Mike

+ McGrath:

+ 

+ * /mnt/koji

+ * /mnt/ntap-fedora1/scratch

+ * /pub/epel

+ * /pub/fedora

+ ====

+ 

+ Typically just / will be full. The next thing to do is determine if

+ we have any extremely large builds left on the builder. Typical

+ locations include /var/lib/mock and /mnt/build (/mnt/build actually is

+ on the local filesystem):

+ 

+ ....

+ du -sh /var/lib/mock/* /mnt/build/*

+ ....

+ 

+ `/var/lib/mock/dist-f8-build-10443-1503`::

+   classic koji build

+ `/var/lib/mock/fedora-6-ppc-core-57cd31505683ef1afa533197e91608c5a2c52864`::

+   classic plague build

+ 

+ If nothing jumps out immediately, just start deleting files older than

+ one week. Once enough space has been freed start koji and plague back

+ up:

+ 

+ ....

+ /etc/init.d/plague-builder start

+ /etc/init.d/kojid start

+ ....

+ 

+ === Unmounting

+ 

+ [WARNING]

+ ====

+ Before deleting chroots, make sure that any filesystems mounted inside

+ them (for example `/proc`, `/sys`, or bind mounts) have been unmounted;

+ otherwise removing the chroot can also remove or damage content from the

+ host filesystem.

+ ====
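+ 

+ A minimal check before removing a chroot, using the `/var/lib/mock` paths

+ from the examples above (adjust to the directory you are about to delete):

+ 

+ ....

+ # see whether anything is still mounted beneath the chroot

+ mount | grep /var/lib/mock/dist-f8-build-10443-1503

+ 

+ # unmount any leftovers reported above, for example:

+ umount /var/lib/mock/dist-f8-build-10443-1503/root/proc

+ umount /var/lib/mock/dist-f8-build-10443-1503/root/sys

+ ....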

@@ -0,0 +1,210 @@ 

+ = Koschei SOP

+ 

+ Koschei is a continuous integration system for RPM packages. Koschei

+ runs package scratch builds after dependency change or after time elapse

+ and reports package buildability status to interested parties.

+ 

+ Production instance::

+   https://apps.fedoraproject.org/koschei

+ Staging instance::

+   https://apps.stg.fedoraproject.org/koschei

+ 

+ == Contact Information

+ 

+ Owner::

+   mizdebsk, msimacek

+ Contact::

+   #fedora-admin

+ Location::

+   Fedora Cloud

+ Purpose::

+   continuous integration system

+ 

+ == Deployment

+ 

+ Koschei deployment is managed by two Ansible playbooks:

+ 

+ ....

+ sudo rbac-playbook groups/koschei-backend.yml

+ sudo rbac-playbook groups/koschei-web.yml

+ ....

+ 

+ == Description

+ 

+ Koschei is deployed on two separate machines - `koschei-backend` and

+ `koschei-web`

+ 

+ Frontend (`koschei-web`) is a Flask WSGi application running with httpd.

+ It displays information to users and allows editing package groups and

+ changing priorities.

+ 

+ Backend (`koschei-backend`) consists of multiple services:

+ 

+ * `koschei-watcher` - listens to fedmsg events for complete builds and

+ changes build states in the database

+ * `koschei-repo-resolver` - resolves package dependencies in given repo

+ using hawkey and compares them with previous iteration to get a

+ dependency diff. It resolves all packages in the newest repo available

+ in Koji. The output is a base for scheduling new builds

+ * `koschei-build-resolver` - resolves complete builds in the repo in

+ which they were done in Koji. Produces the dependency differences

+ visible in the frontend

+ * `koschei-scheduler` - schedules new builds based on multiple criteria:

+ ** dependency priority - dependency changes since last build valued by

+ their distance in the dependency graph

+ ** manual and static priorities - set manually in the frontend. Manual

+ priority is reset after each build, static priority persists

+ ** time priority - time elapsed since the last build

+ * `koschei-polling` - polls the same types of events as koschei-watcher

+ without reliance on fedmsg. Additionally, it takes care of package list

+ synchronization and other regularly executed tasks.

+ 

+ == Configuration

+ 

+ Koschei configuration is in `/etc/koschei/config-backend.cfg` and

+ `/etc/koschei/config-frontend.cfg`, and is merged with the default

+ configuration in `/usr/share/koschei/config.cfg` (the files in `/etc`

+ override the defaults in `/usr`). Note the merge is recursive. The

+ configuration contains all configurable items for all Koschei services

+ and the frontend. Configuration changes that aren't temporary

+ should be made through the Ansible playbook. Configuration changes have no

+ effect on already running services -- they need to be restarted, which

+ happens automatically when using the playbook.

+ 

+ == Disk usage

+ 

+ Koschei doesn't keep anything on disk that couldn't be recreated easily:

+ all important data is stored in the PostgreSQL database, configuration is

+ managed by Ansible, code is installed by RPM, and so on.

+ 

+ To speed up operation and reduce load on external servers, Koschei

+ caches some data obtained from services it integrates with. Most

+ notably, YUM repositories downloaded from Koji are kept in

+ `/var/cache/koschei/repodata`. Each repository takes about 100 MB of

+ disk space. The maximum number of repositories kept at a time is

+ controlled by the `cache_l2_capacity` parameter in `config-backend.cfg`

+ (`config-backend.cfg.j2` in Ansible). If the repodata cache starts to

+ consume too much disk space, that value can be decreased; after a

+ restart, `koschei-*-resolver` will remove the least recently used cache

+ entries to respect the configured cache capacity.

+ 

+ == Database

+ 

+ Koschei needs to connect to a PostgreSQL database; other database

+ systems are not supported. The database connection is specified in the

+ configuration under the `database_config` key, which can contain the

+ following keys: `username, password, host, port, database`.

+ 
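+ A minimal sketch of such a stanza (the exact config file syntax may differ

+ between Koschei versions; all values below are placeholders):

+ 

+ ....

+ config = {

+     'database_config': {

+         'username': 'koschei',

+         'password': 'CHANGEME',

+         'host': 'db.example.org',

+         'port': 5432,

+         'database': 'koschei',

+     },

+ }

+ ....

+ 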

+ After an update of koschei, the database needs to be migrated to the new

+ schema. This happens automatically when using the upgrade playbook.

+ Alternatively, it can be executed manually using:

+ 

+ ....

+ koschei-admin alembic upgrade head

+ ....

+ 

+ The backend services need to be stopped during the migration.

+ 

+ == Managing koschei services

+ 

+ Koschei services are systemd units managed through `systemctl`. They can

+ be started and stopped independently in any order. The frontend is run

+ using httpd.

+ 
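+ For example, using the unit names from the service list above (a sketch;

+ run with appropriate privileges):

+ 

+ ....

+ # restart a single backend service

+ sudo systemctl restart koschei-scheduler

+ 

+ # the frontend is served by httpd

+ sudo systemctl restart httpd

+ ....

+ 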

+ == Suspending koschei operation

+ 

+ To stop builds from being scheduled, stopping the

+ `koschei-scheduler` service is enough. For planned Koji outages, it's

+ recommended to stop `koschei-scheduler`. It is not strictly necessary, as

+ koschei can recover from Koji and network errors automatically, but when

+ Koji builders are stopped, scheduled builds may fail unexpectedly and

+ those failures would be reported to users. Other services can be left

+ running, as they automatically recover from Koji and network errors.

+ 

+ == Limiting Koji usage

+ 

+ Koschei is by default limited to 30 concurrently running builds. This

+ limit can be changed in the configuration under `koji_config.max_builds`

+ key. There is also Koji load monitoring, which prevents builds from being

+ scheduled when the Koji load is higher than a certain threshold. That should

+ prevent scheduling builds during mass rebuilds, so it's not necessary to

+ stop scheduling during those.

+ 

+ == Fedmsg notifications

+ 

+ Koschei optionally supports sending fedmsg notifications for package

+ state changes. The fedmsg dispatch can be turned on and off in the

+ configuration (key `fedmsg-publisher.enabled`). Koschei doesn't supply

+ configuration for fedmsg; it lets the library load its own (in

+ `/etc/fedmsg.d/`).

+ 

+ == Setting admin announcement

+ 

+ Koschei can display an announcement in the web UI. This is mostly useful

+ to inform users about outages or other problems.

+ 

+ To set announcement, run as koschei user:

+ 

+ ....

+ koschei-admin set-notice "Koschei operation is currently suspended due to scheduled Koji outage"

+ ....

+ 

+ or:

+ 

+ ....

+ koschei-admin set-notice "Submitting scratch builds by Koschei is currently disabled due to Fedora 23 mass rebuild"

+ ....

+ 

+ To clear announcement, run as koschei user:

+ 

+ ....

+ koschei-admin clear-notice

+ ....

+ 

+ == Adding package groups

+ 

+ Packages can be added to one or more groups.

+ 

+ To add new group named `mynewgroup`, run as `koschei` user:

+ 

+ ....

+ koschei-admin add-group mynewgroup

+ ....

+ 

+ To add new group named `mynewgroup` and populate it with some packages,

+ run as `koschei` user:

+ 

+ ....

+ koschei-admin add-group mynewgroup pkg1 pkg2 pkg3

+ ....

+ 

+ == Set package static priority

+ 

+ Some packages are more or less important and can have higher or lower

+ priority. Any user can change manual priority, which is reset after

+ package is rebuilt. Admins can additionally set static priority, which

+ is not affected by package rebuilds.

+ 

+ To set static priority of package `foo` to value `100`, run as `koschei`

+ user:

+ 

+ ....

+ koschei-admin --collection f27 set-priority --static foo 100

+ ....

+ 

+ == Branching a new Fedora release

+ 

+ After branching occurs and Koji build targets have been created, Koschei

+ should be updated to reflect the new state. There is a special admin

+ command for this purpose, which takes care of copying the configuration

+ and also last builds from the history.

+ 

+ To branch the collection from Fedora 27 to Fedora 28, use the following:

+ 

+ ....

+ koschei-admin branch-collection f27 f28 -d 'Fedora 27' -t f28 --bugzilla-version 27

+ ....

+ 

+ Then you can optionally verify that the collection configuration is

+ correct by visiting https://apps.fedoraproject.org/koschei/collections

+ and examining the configuration of the newly branched collection.

@@ -0,0 +1,280 @@ 

+ = Layered Image Build System

+ 

+ The

+ https://docs.pagure.org/releng/layered_image_build_service.html[Fedora

+ Layered Image Build System], often referred to as

+ https://github.com/projectatomic/osbs-client[OSBS] (OpenShift Build

+ Service) as that is the upstream project that this is based on, is used

+ to build Layered Container Images in the Fedora Infrastructure via Koji.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_overview>>

+ * <<_setup>>

+ * <<_outage>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Clement Verna (cverna)

+ Contact::

+   #fedora-admin, #fedora-releng, #fedora-noc, sysadmin-main,

+   sysadmin-releng

+ Location::

+   osbs-control01, osbs-master01, osbs-node01, osbs-node02

+   registry.fedoraproject.org, candidate-registry.fedoraproject.org

+   +

+   osbs-control01.stg, osbs-master01.stg, osbs-node01.stg,

+   osbs-node02.stg registry.stg.fedoraproject.org,

+   candidate-registry.stg.fedoraproject.org

+   +

+   x86_64 koji buildvms

+ Purpose::

+   Layered Container Image Builds

+ 

+ == Overview

+ 

+ The build system is set up such that Fedora Layered Image maintainers

+ will submit a build to Koji via the `fedpkg container-build` command from a

+ `container` namespace within

+ https://src.fedoraproject.org/projects/container/*[DistGit]. This will

+ trigger the build to be scheduled in

+ https://www.openshift.org/[OpenShift] via

+ https://github.com/projectatomic/osbs-client[osbs-client] tooling,

+ which creates a custom

+ https://docs.okd.io/latest/cicd/builds/understanding-image-builds.html[OpenShift Build]

+ which will use the pre-made buildroot container image that we have

+ created. The https://github.com/projectatomic/atomic-reactor[Atomic

+ Reactor] (`atomic-reactor`) utility will run within the buildroot and

+ prep the build container where the actual build action will execute; it

+ also handles uploading the

+ https://docs.pagure.org/koji/content_generators/[Content Generator]

+ metadata back to https://fedoraproject.org/wiki/Koji[Koji] and uploading

+ the built image to the candidate docker registry. This runs on a

+ host with iptables rules restricting access to the docker bridge, which

+ further limits the buildroot's access to the outside

+ world and verifies that all sources of information come from Fedora.

+ 

+ Completed layered image builds are hosted in a candidate docker registry

+ which is then used to pull the image and perform tests.

+ 

+ == Setup

+ 

+ The Layered Image Build System setup is currently as follows (more

+ detailed view available in the

+ https://docs.pagure.org/releng/layered_image_build_service.html[RelEng

+ Architecture Document]):

+ 

+ ....

+ === Layered Image Build System Overview ===

+ 

+      +--------------+                           +-----------+

+      |              |                           |           |

+      |   koji hub   +----+                      |  batcave  |

+      |              |    |                      |           |

+      +--------------+    |                      +----+------+

+                          |                           |

+                          V                           |

+              +----------------+                      V

+              |                |           +----------------+

+              |  koji builder  |           |                +-----------+

+              |                |           | osbs-control01 +--------+  |

+              +-+--------------+           |                +-----+  |  |

+                |                          +----------------+     |  |  |

+                |                                                 |  |  |

+                |                                                 |  |  |

+                |                                                 |  |  |

+                V                                                 |  |  |

+     +----------------+                                           |  |  |

+     |                |                                           |  |  |

+     | osbs-master01  +------------------------------+           [ansible]

+     |                +-------+                      |            |  |  |

+     +----------------+       |                      |            |  |  |

+          ^                   |                      |            |  |  |

+          |                   |                      |            |  |  |

+          |                   V                      V            |  |  |

+          |        +-----------------+       +----------------+   |  |  |

+          |        |                 |       |                |   |  |  |

+          |        |  osbs-node01    |       |  osbs-node02   |   |  |  |

+          |        |                 |       |                |   |  |  |

+          |        +-----------------+       +----------------+   |  |  |

+          |               ^                           ^           |  |  |

+          |               |                           |           |  |  |

+          |               |                           +-----------+  |  |

+          |               |                                          |  |

+          |               +------------------------------------------+  |

+          |                                                             |

+          +-------------------------------------------------------------+

+ ....

+ 

+ === Deployment

+ 

+ From batcave you can run the following

+ 

+ ....

+ $ sudo rbac-playbook groups/osbs/deploy-cluster.yml

+ ....

+ 

+ This is going to deploy the OpenShift cluster used by OSBS. Currently

+ the playbook deploys 2 clusters (x86_64 and aarch64). Ansible tags can

+ be used to deploy only one of these if needed for example

+ _osbs-x86-deploy-openshift_.

+ 

+ If the openshift-ansible playbook fails it can be easier to run it

+ directly from osbs-control01 and use the verbose mode.

+ 

+ ....

+ $ ssh osbs-control01.iad2.fedoraproject.org

+ $ sudo -i

+ # cd /root/openshift-ansible

+ # ansible-playbook -i cluster-inventory playbooks/prerequisites.yml

+ # ansible-playbook -i cluster-inventory playbooks/deploy_cluster.yml

+ ....

+ 

+ Once these playbooks have completed successfully, you can configure OSBS

+ on the cluster. To do that, use the following playbook:

+ 

+ ....

+ $ sudo rbac-playbook groups/osbs/configure-osbs.yml

+ ....

+ 

+ When this is done, we need to get the new koji service token and update

+ its value in the private repository:

+ 

+ ....

+ $ ssh osbs-master01.iad2.fedoraproject.org

+ $ sudo -i

+ # oc -n osbs-fedora sa get-token koji

+ dsjflksfkgjgkjfdl ....

+ ....

+ 

+ The token needs to be saved in the private ansible repo in

+ `files/osbs/production/x86-64-osbs-koji`. Once this is done

+ you can run the builder playbook to update that token.

+ 

+ ....

+ $ sudo rbac-playbook groups/buildvm.yml -t osbs

+ ....

+ 

+ === Operation

+ 

+ Koji Hub will schedule the `containerBuild` on a koji builder via the

+ `koji-containerbuild-hub` plugin, the builder will then submit the build

+ in OpenShift via the `koji-containerbuild-builder` plugin which uses the

+ `osbs-client` python API that wraps the OpenShift API along with a custom

+ OpenShift Build JSON payload.

+ 

+ The Build is then scheduled in OpenShift and its logs are captured by

+ the koji plugins. Inside the buildroot, atomic-reactor will upload the

+ built container image as well as provide the metadata to koji's content

+ generator.

+ 

+ == Outage

+ 

+ If Koji is down, then builds can't be scheduled, but repairing Koji is

+ outside the scope of this document.

+ 

+ If either the _candidate-registry.fedoraproject.org_ or

+ _registry.fedoraproject.org_ container registry is unavailable, builds

+ will be affected, but repairing those is also outside the scope of this

+ document.

+ 

+ === OSBS Failures

+ 

+ OpenShift Build System itself can have various types of failures that

+ are known about and the recovery procedures are listed below.

+ 

+ ==== Ran out of disk space

+ 

+ Docker uses a lot of disk space, and while the osbs-nodes have been

+ allocated what is considered ample disk space for builds (which are

+ automatically cleaned up periodically), it is possible for this to run

+ out.

+ 

+ To resolve this, run the following commands:

+ 

+ ....

+ # These command will clean up old/dead docker containers from old OpenShift

+ # Pods

+ 

+ $ for i in $(sudo docker ps -a | awk '/Exited/ { print $1 }'); do sudo docker rm $i; done

+ 

+ $ for i in $(sudo docker images -q -f 'dangling=true'); do sudo docker rmi $i; done

+ 

+ 

+ # This command should only be run on osbs-master01 (it won't work on the

+ # nodes)

+ #

+ # This command will clean up old builds and related artifacts in OpenShift

+ # that are older than 30 days (We can get more aggressive about this if

+ # necessary, the main reason these still exist is in the event we need to

+ # debug something. All build info we care about is stored in Koji.)

+ 

+ $ oadm prune builds --orphans --keep-younger-than=720h0m0s --confirm

+ ....

+ 

+ ==== A node is broken, how to remove it from the cluster?

+ 

+ If a node is having an issue, the following command will effectively

+ remove it from the cluster temporarily.

+ 

+ In this example, we are removing osbs-node01 from scheduling:

+ 

+ ....

+ $ oadm manage-node osbs-node01.phx2.fedoraproject.org --schedulable=false

+ ....

+ 
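+ To return the node to service once it is healthy again, scheduling can be

+ re-enabled with the same command (a sketch using the same oadm syntax as

+ above):

+ 

+ ....

+ $ oadm manage-node osbs-node01.phx2.fedoraproject.org --schedulable=true

+ ....

+ 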

+ ==== Container Builds are unable to access resources on the network

+ 

+ Sometimes the Container Builds will fail and the logs will show that the

+ buildroot is unable to access networked resources (docker registry, dnf

+ repos, etc).

+ 

+ This is because of a bug in OpenShift v1.3.1 (current upstream release

+ at the time of this writing) where an OpenVSwitch flow is left behind

+ when a Pod is destroyed instead of the flow being deleted along with the

+ Pod.

+ 

+ Confirming the issue is unfortunately a multi-step process since it's not

+ a cluster-wide issue but one isolated to the node experiencing the problem.

+ 

+ First, in the koji createContainer task there is a log file called

+ openshift-incremental.log; in it you will find a key:value pair in some

+ JSON output similar to the following:

+ 

+ ....

+ 'openshift_build_selflink': u'/oapi/v1/namespaces/default/builds/cockpit-f24-6'

+ ....

+ 

+ The last field of the value, in this example `cockpit-f24-6`, is the

+ OpenShift build identifier. We need to ssh into `osbs-master01` and get

+ information about which node the build ran on.

+ 

+ ....

+ # On osbs-master01

+ #   Note: the output won't be pretty, but it gives you the info you need

+ 

+ $ sudo oc get build cockpit-f24-6 -o yaml | grep osbs-node

+ ....

+ 

+ Once you know what machine you need, ssh into it and run the following:

+ 

+ ....

+ $ sudo docker run --rm -ti buildroot /bin/bash

+ 

+ # now attempt to run a curl command

+ 

+ $ curl https://google.com

+ # This should get refused, but if this node is experiencing the networking

+ # issue then this command will hang and eventually time out

+ ....

+ 

+ How to fix:

+ 

+ Reboot the affected node. When the node

+ comes back up, OpenShift will rebuild the flow tables on OpenVSwitch and

+ things will be back to normal.

+ 

+ ....

+ systemctl reboot

+ ....

@@ -0,0 +1,112 @@ 

+ = Mailman Infrastructure SOP

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main, sysadmin-tools, sysadmin-hosted

+ Servers::

+   mailman01, mailman02, mailman01.stg

+ Purpose::

+   Provides mailing list services.

+ 

+ == Description

+ 

+ Mailing list services for Fedora projects are located on the

+ mailman01.iad2.fedoraproject.org server.

+ 

+ == Common Tasks

+ 

+ === Creating a new mailing list

+ 

+ * Log into mailman01

+ * `sudo -u mailman mailman3 create <listname>@lists.fedora(project|hosted).org --owner <username>@fedoraproject.org --notify`

+ +

+ [IMPORTANT]

+ ====

+ Please make sure to add a valid description to the newly created list.

+ (to avoid `[no description available]` on the listinfo index).

+ ====

+ 

+ == Removing content from archives

+ 

+ We don't.

+ 

+ It's not easy to remove content from the archives, and it's generally

+ pointless as well, because the archives are often mirrored by third parties

+ and the messages are already in the inboxes of everyone on the mailing

+ list at that time. Here's an example message to send to someone who requests

+ removal of archived content:

+ 

+ ....

+ Greetings,

+ 

+ We're sorry to say that we don't remove content from the mailing list archives.

+ Doing so is a non-trivial amount of work and usually doesn't achieve anything

+ because the content has already been disseminated to a wide audience that we do

+ not control.  The emails have gone out to all of the subscribers of the mailing

+ list at that time and also (for a great many of our lists) been copied by third

+ parties (for instance: http://markmail.org and http://gmane.org).

+ 

+ Sorry we cannot help further,

+ 

+  Mailing lists and their owners

+ ....

+ 

+ == Checking Membership

+ 

+ Do you need to check who owns a certain mailing list without

+ having to search around on the list's frontpages?

+ 

+ Mailman has a tool that lets us list members by role.

+ 

+ Get a full list of all the mailing lists hosted on the server:

+ 

+ ....

+ sudo -u mailman mailman3 lists

+ ....

+ 

+ Get the list of regular members for example@example.com:

+ 

+ ....

+ sudo -u mailman mailman3 members example@example.com

+ ....

+ 

+ Get the list of owners for example@example.com:

+ 

+ ....

+ sudo -u mailman mailman3 members -R owner example@example.com

+ ....

+ 

+ Get the list of moderators for example@example.com:

+ 

+ ....

+ sudo -u mailman mailman3 members -R moderator example@example.com

+ ....

+ 

+ == Troubleshooting and Resolution

+ 

+ === List Administration

+ 

+ Specific users are marked as 'site admins' in the database.

+ 

+ Please file an issue if you feel you need to have this access.

+ 

+ === Restart Procedure

+ 

+ If the server needs to be restarted, mailman should come back on its

+ own. Otherwise, each service on it can be restarted:

+ 

+ ....

+ sudo service mailman3 restart

+ sudo service postfix restart

+ ....

+ 

+ == How to delete a mailing list

+ 

+ Delete a list, but keep the archives:

+ 

+ ....

+ sudo -u mailman mailman3 remove <listname>

+ ....

@@ -0,0 +1,53 @@ 

+ = SSL Certificate Creation SOP

+ 

+ Every now and then you will need to create an SSL certificate for a

+ Fedora Service.

+ 

+ == Creating a CSR for a new server

+ 

+ Know your hostname, e.g. _lists.fedoraproject.org_:

+ 

+ ....

+ export ssl_name=<fqdn of host> 

+ ....

+ 

+ Create the key and CSR. An 8192-bit key does not work with various boxes,

+ so we currently use 4096.

+ 

+ ....

+ openssl genrsa -out ${ssl_name}.pem 4096

+ openssl req -new -key ${ssl_name}.pem -out ${ssl_name}.csr

+ 

+ Country Name (2 letter code) [XX]:US

+ State or Province Name (full name) []:NM

+ Locality Name (eg, city) [Default City]:Raleigh

+ Organization Name (eg, company) [Default Company Ltd]:Red Hat

+ Organizational Unit Name (eg, section) []:Fedora Project

+ Common Name (eg, your name or your server's hostname)

+ []:lists.fedorahosted.org

+ Email Address []:admin@fedoraproject.org

+ 

+ Please enter the following 'extra' attributes

+ to be sent with your certificate request

+ A challenge password []:

+ An optional company name []:

+ ....

+ 

+ Send the CSR to the signing authority and wait for a cert. Place all

+ three files (key, CSR, and cert) into the private directory so that you

+ can make certs in the future.

+ 

+ == Creating a temporary self-signed certificate

+ 

+ Repeat the steps above but add in the following:

+ 

+ ....

+ openssl x509 -req -days 30 -in ${ssl_name}.csr -signkey ${ssl_name}.pem -out ${ssl_name}.cert

+  Signature ok

+  subject=/C=US/ST=NM/L=Raleigh/O=Red Hat/OU=Fedora

+  Project/CN=lists.fedorahosted.org/emailAddress=admin@fedoraproject.org

+  Getting Private key

+ ....

+ 

+ We only want a self-signed certificate to be good for a short time so 30

+ days sounds good.

@@ -0,0 +1,412 @@ 

+ = Mass Upgrade Infrastructure SOP

+ 

+ Every once in a while, we need to apply mass upgrades to our servers for

+ various security and other upgrades.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_preparation>>

+ * <<_staging>>

+ * <<_special_considerations>>

+ ** <<_disable_builders>>

+ ** <<_post_reboot_action>>

+ ** <<_schedule_autoqa01_reboot>>

+ ** <<_bastion01_and_bastion02_and_openvpn_server>>

+ ** <<_special_yum_directives>>

+ * <<_update_leader>>

+ * <<_group_a_reboots>>

+ * <<_group_b_reboots>>

+ * <<_group_c_reboots>>

+ * <<_doing_the_upgrade>>

+ * <<_doing_the_reboot>>

+ * <<_aftermath>>

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-admin, sysadmin-main, infrastructure@lists.fedoraproject.org,

+   #fedora-noc

+ Location:::

+   All over the world.

+ Servers:::

+   all

+ Purpose:::

+   Apply kernel/other upgrades to all of our servers

+ 

+ == Preparation

+ 

+ [arabic]

+ . Determine which host group you are going to be doing updates/reboots

+ on.

+ +

+ Group "A"::

+   servers that end users will see or note being down and anything that

+   depends on them.

+ Group "B"::

+   servers that contributors will see or note being down and anything

+   that depends on them.

+ Group "C"::

+   servers that infrastructure will notice are down, or are redundant

+   enough to reboot some with others taking the load.

+ . Appoint an 'Update Leader' for the updates.

+ . Follow the xref:outage.adoc[Outage Infrastructure SOP] and send advance notification

+ to the appropriate lists. Try to schedule the update at a time when many

+ admins are around to help/watch for problems and when impact for the

+ group affected is less. Do NOT do multiple groups on the same day if

+ possible.

+ . Plan an order for rebooting the machines considering two factors:

+ +

+ ____

+ * Location of systems on the kvm or xen hosts. [You will normally reboot

+ all systems on a host together]

+ * Impact of systems going down on other services, operations and users.

+ Thus since the database servers and nfs servers are the backbone of many

+ other systems, they and systems that are on the same xen boxes would be

+ rebooted before other boxes.

+ ____

+ . To aid in organizing a mass upgrade/reboot with many people helping,

+ it may help to create a checklist of machines in a gobby document.

+ . Schedule downtime in nagios.

+ . Make doubly sure that various app owners are aware of the reboots.

+ 

+ == Staging

+ 

+ ____

+ Any updates that can be tested in staging or a pre-production

+ environment should be tested there first. Including new kernels, updates

+ to core database applications / libraries. Web applications, libraries,

+ etc.

+ ____

+ 

+ == Special Considerations

+ 

+ While this may not be a complete list, here are some special things that

+ must be taken into account before rebooting certain systems:

+ 

+ === Disable builders

+ 

+ Before the following machines are rebooted, all koji builders should be

+ disabled and all running jobs allowed to complete:

+ 

+ ____

+ * db04

+ * nfs01

+ * kojipkgs02

+ ____

+ 

+ Builders can be removed from koji, updated and re-added. Use:

+ 

+ ....

+ koji disable-host NAME

+ 

+   and

+ 

+ koji enable-host NAME

+ ....

+ 

+ [NOTE]

+ ====

+ you must be a koji admin

+ ====

+ 

+ Additionally, rel-eng and builder boxes may need a special version

+ of rpm. Make sure to check with rel-eng on any rpm upgrades for them.

+ 

+ === Post reboot action

+ 

+ The following machines require post-boot actions (mostly entering

+ passphrases). Make sure admins that have the passphrases are on hand for

+ the reboot:

+ 

+ ____

+ * backup-2 (LUKS passphrase on boot)

+ * sign-vault01 (NSS passphrase for sigul service)

+ * sign-bridge01 (NSS passphrase for sigul bridge service)

+ * serverbeach* (requires fixing firewall rules):

+ ____

+ 

+ Each serverbeach host needs 3 or 4 iptables rules added anytime it's

+ rebooted or libvirt is upgraded:

+ 

+ ....

+ iptables -I FORWARD -o virbr0 -j ACCEPT 

+ iptables -I FORWARD -i virbr0 -j ACCEPT 

+ iptables -t nat -I POSTROUTING -s 192.168.122.3/32 -j SNAT --to-source 66.135.62.187

+ ....

+ 

+ [NOTE]

+ ====

+ The source is the internal guest ips, the to-source is the external ips

+ that map to that guest ip. If there are multiple guests, each one needs

+ the above SNAT rule inserted.

+ ====

+ 

+ === Schedule autoqa01 reboot

+ 

+ There is currently an autoqa01.c host on cnode01. Check with QA folks

+ before rebooting this guest/host.

+ 

+ === Bastion01 and Bastion02 and openvpn server

+ 

+ We need one of the bastion machines to be up to provide openvpn for all

+ machines. Before rebooting bastion02, modify:

+ `manifests/nodes/bastion0*.iad2.fedoraproject.org.pp` files to start

+ openvpn server on bastion01, wait for all clients to re-connect, reboot

+ bastion02 and then revert back to it as openvpn hub.

+ 

+ === Special yum directives

+ 

+ Sometimes we will wish to exclude or otherwise modify the yum.conf on a

+ machine. For this purpose, all machines have an include, making them

+ read

+ http://infrastructure.fedoraproject.org/infra/hosts/FQHN/yum.conf.include

+ (TODO Fix link)

+ from the infrastructure repo. If you need to make such changes, add them

+ to the infrastructure repo before doing updates.

+ 
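+ For example, a hypothetical `yum.conf.include` entry that holds back

+ packages on a host (the `exclude` directive is standard yum.conf syntax;

+ the package patterns are only illustrative):

+ 

+ ....

+ # keep this host on its current rpm and kernel versions during mass updates

+ exclude=rpm* kernel*

+ ....

+ 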

+ == Update Leader

+ 

+ Each update should have a Leader appointed. This person will be in

+ charge of doing any read-write operations, and delegating to others to

+ do tasks. If you aren't specifically asked by the Leader to reboot or

+ change something, please don't. The Leader will assign out machine

+ groups to reboot, or ask specific people to look at machines that didn't

+ come back up from reboot or aren't working right after reboot. It's

+ important to avoid multiple people operating on a single machine in a

+ read-write manner and interfering with changes.

+ 

+ == Group A reboots

+ 

+ Group A machines are end user critical ones. Outages here should be

+ planned at least a week in advance and announced to the announce list.

+ 

+ List of machines currently in A group (note: this is going to be

+ automated)

+ 

+ These hosts are grouped based on the virt host they reside on:

+ 

+ * torrent02.fedoraproject.org

+ * ibiblio02.fedoraproject.org

+ * people03.fedoraproject.org

+ * ibiblio03.fedoraproject.org

+ * collab01.fedoraproject.org

+ * serverbeach09.fedoraproject.org

+ * db05.iad2.fedoraproject.org

+ * virthost03.iad2.fedoraproject.org

+ * db01.iad2.fedoraproject.org

+ * virthost04.iad2.fedoraproject.org

+ * db-fas01.iad2.fedoraproject.org

+ * proxy01.iad2.fedoraproject.org

+ * virthost05.iad2.fedoraproject.org

+ * ask01.iad2.fedoraproject.org

+ * virthost06.iad2.fedoraproject.org

+ 

+ These are the rest:

+ 

+ * bapp02.iad2.fedoraproject.org

+ * bastion02.iad2.fedoraproject.org

+ * app05.fedoraproject.org

+ * backup02.fedoraproject.org

+ * bastion01.iad2.fedoraproject.org

+ * fas01.iad2.fedoraproject.org

+ * fas02.iad2.fedoraproject.org

+ * log02.iad2.fedoraproject.org

+ * memcached03.iad2.fedoraproject.org

+ * noc01.iad2.fedoraproject.org

+ * ns02.fedoraproject.org

+ * ns04.iad2.fedoraproject.org

+ * proxy04.fedoraproject.org

+ * smtp-mm03.fedoraproject.org

+ * batcave02.iad2.fedoraproject.org

+ * mm3test.fedoraproject.org

+ * packages02.iad2.fedoraproject.org

+ 

+ === Group B reboots

+ 

+ This Group contains machines that contributors use. Announcements of

+ outages here should be at least a week in advance and sent to the

+ devel-announce list.

+ 

+ These hosts are grouped based on the virt host they reside on:

+ 

+ * db04.iad2.fedoraproject.org

+ * bvirthost01.iad2.fedoraproject.org

+ * nfs01.iad2.fedoraproject.org

+ * bvirthost02.iad2.fedoraproject.org

+ * pkgs01.iad2.fedoraproject.org

+ * bvirthost03.iad2.fedoraproject.org

+ * kojipkgs02.iad2.fedoraproject.org

+ * bvirthost04.iad2.fedoraproject.org

+ 

+ These are the rest:

+ 

+ * koji04.iad2.fedoraproject.org

+ * releng03.iad2.fedoraproject.org

+ * releng04.iad2.fedoraproject.org

+ 

+ === Group C reboots

+ 

+ Group C are machines that infrastructure uses, or can be rebooted in

+ such a way as to continue to provide services to others via multiple

+ machines. Outages here should be announced on the infrastructure list.

+ 

+ Group C hosts that have proxy servers on them:

+ 

+ * proxy02.fedoraproject.org

+ * ns05.fedoraproject.org

+ * hosted-lists01.fedoraproject.org

+ * internetx01.fedoraproject.org

+ * app01.dev.fedoraproject.org

+ * darkserver01.dev.fedoraproject.org

+ * fakefas01.fedoraproject.org

+ * proxy06.fedoraproject.org

+ * osuosl01.fedoraproject.org

+ * proxy07.fedoraproject.org

+ * bodhost01.fedoraproject.org

+ * proxy03.fedoraproject.org

+ * smtp-mm02.fedoraproject.org

+ * tummy01.fedoraproject.org

+ * app06.fedoraproject.org

+ * noc02.fedoraproject.org

+ * proxy05.fedoraproject.org

+ * smtp-mm01.fedoraproject.org

+ * telia01.fedoraproject.org

+ * app08.fedoraproject.org

+ * proxy08.fedoraproject.org

+ * coloamer01.fedoraproject.org

+ 

+ Other Group C hosts:

+ 

+ * ask01.stg.iad2.fedoraproject.org

+ * app02.stg.iad2.fedoraproject.org

+ * proxy01.stg.iad2.fedoraproject.org

+ * releng01.stg.iad2.fedoraproject.org

+ * value01.stg.iad2.fedoraproject.org

+ * virthost13.iad2.fedoraproject.org

+ * db-fas01.stg.iad2.fedoraproject.org

+ * pkgs01.stg.iad2.fedoraproject.org

+ * packages01.stg.iad2.fedoraproject.org

+ * virthost11.iad2.fedoraproject.org

+ * app01.stg.iad2.fedoraproject.org

+ * koji01.stg.iad2.fedoraproject.org

+ * db02.stg.iad2.fedoraproject.org

+ * fas01.stg.iad2.fedoraproject.org

+ * virthost10.iad2.fedoraproject.org

+ * autoqa01.qa.fedoraproject.org

+ * autoqa-stg01.qa.fedoraproject.org

+ * bastion-comm01.qa.fedoraproject.org

+ * batcave-comm01.qa.fedoraproject.org

+ * virthost-comm01.qa.fedoraproject.org

+ * compose-x86-01.iad2.fedoraproject.org

+ * compose-x86-02.iad2.fedoraproject.org

+ * download01.iad2.fedoraproject.org

+ * download02.iad2.fedoraproject.org

+ * download03.iad2.fedoraproject.org

+ * download04.iad2.fedoraproject.org

+ * download05.iad2.fedoraproject.org

+ * download-rdu01.vpn.fedoraproject.org

+ * download-rdu02.vpn.fedoraproject.org

+ * download-rdu03.vpn.fedoraproject.org

+ * fas03.iad2.fedoraproject.org

+ * secondary01.iad2.fedoraproject.org

+ * memcached04.iad2.fedoraproject.org

+ * virthost01.iad2.fedoraproject.org

+ * app02.iad2.fedoraproject.org

+ * value03.iad2.fedoraproject.org

+ * virthost07.iad2.fedoraproject.org

+ * app03.iad2.fedoraproject.org

+ * value04.iad2.fedoraproject.org

+ * ns03.iad2.fedoraproject.org

+ * darkserver01.iad2.fedoraproject.org

+ * virthost08.iad2.fedoraproject.org

+ * app04.iad2.fedoraproject.org

+ * packages02.iad2.fedoraproject.org

+ * virthost09.iad2.fedoraproject.org

+ * hosted03.fedoraproject.org

+ * serverbeach06.fedoraproject.org

+ * hosted04.fedoraproject.org

+ * serverbeach07.fedoraproject.org

+ * collab02.fedoraproject.org

+ * serverbeach08.fedoraproject.org

+ * dhcp01.iad2.fedoraproject.org

+ * relepel01.iad2.fedoraproject.org

+ * sign-bridge02.iad2.fedoraproject.org

+ * koji03.iad2.fedoraproject.org

+ * bvirthost05.iad2.fedoraproject.org

+ * (disable each builder in turn, update and reenable).

+ * ppc11.iad2.fedoraproject.org

+ * ppc12.iad2.fedoraproject.org

+ * backup03

+ 

+ == Doing the upgrade

+ 

+ If possible, system upgrades should be done in advance of the reboot

+ (with relevant testing of new packages on staging). To do the upgrades,

+ make sure that the Infrastructure RHEL repo is updated as necessary to

+ pull in the new packages (xref:infra-repo.adoc[Infrastructure Yum Repo SOP]).

+ 

+ On batcave01, as root run:

+ 

+ ....

+ func-yum [--host=hostname] update

+ ....

+ 

+ [NOTE]

+ ====

+ `--host` can be specified multiple times and takes wildcards.

+ ====

+ 

+ Ping people as necessary if you are unsure about any packages.

+ 

+ Additionally, you can see which machines still need to be rebooted with:

+ 

+ ....

+ sudo func-command --timeout=10 --oneline /usr/local/bin/needs-reboot.py | grep yes

+ ....

+ 

+ You can also see which machines would need a reboot if updates were all

+ applied with:

+ 

+ ....

+ sudo func-command --timeout=10 --oneline /usr/local/bin/needs-reboot.py after-updates | grep yes

+ ....

+ 

+ == Doing the reboot

+ 

+ In the order determined above, reboots will usually be grouped by the

+ virtualization hosts that the servers are on. You can see the guests per

+ virt host on batcave01 in `/var/log/virthost-lists.out`.

+ 

+ To reboot sets of boxes based on which virthost they are on, we've written

+ a special script that facilitates it:

+ 

+ ....

+ func-vhost-reboot virthost-fqdn

+ ....

+ 

+ ex:

+ 

+ ....

+ sudo func-vhost-reboot virthost13.iad2.fedoraproject.org

+ ....

+ 

+ == Aftermath

+ 

+ [arabic]

+ . Make sure that everything's running fine

+ . Reenable nagios notification as needed

+ . Make sure to perform any manual post-boot setup (such as entering

+   passphrases for encrypted volumes)

+ . Close outage ticket.

+ 

+ === Non virthost reboots

+ 

+ If you need to reboot specific hosts and make sure they recover -

+ consider using:

+ 

+ ....

+ sudo func-host-reboot hostname hostname1 hostname2 ...

+ ....

+ 

+ If you want to reboot the hosts one at a time, waiting for each to come

+ back before rebooting the next, pass a `-o` to `func-host-reboot`.

@@ -0,0 +1,74 @@ 

+ = Master Mirror Infrastructure SOP

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_iad_master_mirror_setup>>

+ * <<_rdu_i2_master_mirror_setup>>

+ * <<_raising_issues>>

+ 

+ == Contact Information

+ 

+ Owner:::

+   Red Hat IS

+ Contact:::

+   #fedora-admin, Red Hat ticket

+ Servers:::

+   server[1-5].download.iad2.redhat.com

+ Purpose:::

+   Provides the master mirrors for Fedora distribution

+ 

+ == IAD Master Mirror Setup

+ 

+ The master mirrors are accessible as:

+ 

+ ....

+ download1.fedora.redhat.com -> CNAME to download3.fedora.redhat.com

+ download2.fedora.redhat.com -> currently no DNS entry

+ download3.fedora.redhat.com -> 209.132.176.20

+ download4.fedora.redhat.com -> 209.132.176.220

+ download5.fedora.redhat.com -> 209.132.176.221

+ ....

+ 

+ from the outside. download.fedora.redhat.com is a round robin to the

+ above IPs.

+ 

+ The external IPs correspond to internal load balancer IPs that balance

+ between server[1-5]:

+ 

+ ....

+ 209.132.176.20  -> 10.9.24.20

+ 209.132.176.220 -> 10.9.24.220

+ 209.132.176.221 -> 10.9.24.221

+ ....

+ 

+ The load balancers then balance between the below Fedora IPs on the

+ rsync servers:

+ 

+ ....

+ 10.8.24.21 (fedora1.download.iad2.redhat.com) - server1.download.iad2.redhat.com

+ 10.8.24.22 (fedora2.download.iad2.redhat.com) - server2.download.iad2.redhat.com

+ 10.8.24.23 (fedora3.download.iad2.redhat.com) - server3.download.iad2.redhat.com

+ 10.8.24.24 (fedora4.download.iad2.redhat.com) - server4.download.iad2.redhat.com

+ 10.8.24.25 (fedora5.download.iad2.redhat.com) - server5.download.iad2.redhat.com

+ ....

+ 

+ == RDU I2 Master Mirror Setup

+ 

+ [NOTE]

+ ====

+ This section is awaiting confirmation from RH - information here may not

+ be 100% accurate yet.

+ ====

+ 

+ download-i2.fedora.redhat.com (rhm-i2.redhat.com) is a round robin

+ between:

+ 

+ ....

+ 204.85.14.3 - 10.11.45.3

+ 204.85.14.5 - 10.11.45.5

+ ....

+ 

+ == Raising Issues

+ 

+ Issues with any of this setup should be raised in a helpdesk ticket.

@@ -0,0 +1,204 @@ 

+ = Module Build Service Infra SOP

+ 

+ The MBS is a build orchestrator on top of Koji for "modules".

+ 

+ https://fedoraproject.org/wiki/Changes/ModuleBuildService

+ 

+ == Contact Information

+ 

+ Owner::

+   Release Engineering Team, Infrastructure Team

+ Contact::

+   #fedora-modularity, #fedora-admin, #fedora-releng

+ Persons::

+   jkaluza, fivaldi, breilly, mikem

+ Public addresses::

+   * mbs.fedoraproject.org

+ Servers::

+   * mbs-frontend0[1-2].iad2.fedoraproject.org

+   * mbs-backend01.iad2.fedoraproject.org

+ Purpose::

+   Build modules for Fedora.

+ 

+ == Description

+ 

+ Users submit builds to _mbs.fedoraproject.org_ referencing their modulemd

+ file in https://src.fedoraproject.org/[dist-git]. (In the future,

+ users will not submit their own module

+ builds. The _freshmaker_ daemon (running in infrastructure)

+ will watch for `.spec` file changes and `modulemd.yaml` file changes -- it

+ will submit the relevant module builds to the MBS on behalf of users.)

+ 

+ The request to build a module is received by the MBS flask app running

+ on the `mbs-frontend` nodes.

+ 

+ Cursory validation of the submitted modulemd is performed on the

+ frontend: are the named packages valid? Are their branches valid? The

+ MBS keeps a copy of the modulemd and appends additional data describing

+ which branches pointed to which hashes at the time of submission.

+ 

+ A fedmsg from the frontend triggers the backend to start building the

+ module. First, tags and build/srpm-build groups are created. Then, a

+ module-build-macros package is synthesized and submitted as an srpm

+ build. When it is complete and available in the buildroot, the rest of

+ the rpm builds are submitted.

+ 

+ These are grouped and limited in two ways:

+ 

+ * First, there is a global `NUM_CONCURRENT_BUILDS` config option that

+ controls how many koji builds the MBS is allowed to have open at any

+ time. It serves as a throttle.

+ * Second, a given module may specify that its components should have a

+ certain "build order". If there are 50 components, it may say that the

+ first 25 of them are in one buildorder batch, and the second 25 are in

+ another buildorder batch. The first batch will be submitted and, when

+ complete, tagged back into the buildroot. Only after they are available

+ will the second batch of 25 begin.

+ 

+ When the last component is complete, the MBS backend marks the build as

+ "done", and then marks it again as "ready". (There is currently no

+ meaning to the "ready" state beyond "done". We reserved that state for

+ future CI interactions.)

+ 

+ == Observing MBS Behavior

+ 

+ === The mbs-build command

+ 

+ The https://pagure.io/fm-orchestrator[fm-orchestrator repo] and the

+ _module-build-service_ package provide an

+ _mbs-build_ command with a few subcommands. For general

+ help:

+ 

+ ....

+ $ mbs-build --help

+ ....

+ 

+ To generate a report of all currently active module builds:

+ 

+ ....

+ $ mbs-build overview

+   ID  State    Submitted             Components    Owner    Module

+ ----  -------  --------------------  ------------  -------  -----------------------------------

+  570  build    2017-06-01T17:18:11Z  35/134        psabata  shared-userspace-f26-20170601141014

+  569  build    2017-06-01T14:18:04Z  14/15         mkocka   mariadb-f26-20170601141728

+ ....

+ 

+ To generate a report of an individual module build, given its ID:

+ 

+ ....

+ $ mbs-build info 569

+ NVR                                             State     Koji Task

+ ----------------------------------------------  --------  ------------------------------------------------------------

+ libaio-0.3.110-7.module_414736cc                COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19803741

+                                                 BUILDING  https://koji.fedoraproject.org/koji/taskinfo?taskID=19804081

+ libedit-3.1-17.20160618cvs.module_414736cc      COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19803745

+ compat-openssl10-1.0.2j-6.module_414736cc       COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19803746

+ policycoreutils-2.6-5.module_414736cc           COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19803513

+ selinux-policy-3.13.1-255.module_414736cc       COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19803748

+ systemtap-3.1-5.module_414736cc                 COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19803742

+ libcgroup-0.41-11.module_ea91dfb0               COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19685834

+ net-tools-2.0-0.42.20160912git.module_414736cc  COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19804010

+ time-1.7-52.module_414736cc                     COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19803747

+ desktop-file-utils-0.23-3.module_ea91dfb0       COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19685835

+ libselinux-2.6-6.module_ea91dfb0                COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19685833

+ module-build-macros-0.1-1.module_414736cc       COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19803333

+ checkpolicy-2.6-1.module_414736cc               COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19803514

+ dbus-glib-0.108-2.module_ea91dfb0               COMPLETE  https://koji.fedoraproject.org/koji/taskinfo?taskID=19685836

+ ....

+ 

+ To actively watch a module build in flight, given its ID:

+ 

+ ....

+ $ mbs-build watch 570

+ Still building:

+    libXrender https://koji.fedoraproject.org/koji/taskinfo?taskID=19804885

+    libXdamage https://koji.fedoraproject.org/koji/taskinfo?taskID=19805153

+ Failed:

+    libXxf86vm https://koji.fedoraproject.org/koji/taskinfo?taskID=19804903

+ 

+ Summary:

+    2 components in the BUILDING state

+    34 components in the COMPLETE state

+    1 components in the FAILED state

+    97 components in the undefined state

+ psabata's build #570 of shared-userspace-f26 is in the "build" state

+ ....

+ 

+ === The releng repo

+ 

+ There are more tools located in the `scripts/mbs/` directory

+ of the releng repo: https://pagure.io/releng/blob/main/f/scripts/mbs

+ 

+ == Cancelling a module build

+ 

+ Users can cancel their own module builds with:

+ 

+ ....

+ $ mbs-build cancel $BUILD_ID

+ ....

+ 

+ MBS admins can also cancel builds of any user.

+ 

+ [NOTE]

+ ====

+ MBS admins are defined as members of the groups listed in the

+ `ADMIN_GROUPS` configuration option in

+ `roles/mbs/common/templates/config.py`.

+ ====

+ 

+ == Logs

+ 

+ The frontend logs are on mbs-frontend0[1-2] in

+ `/var/log/httpd/error_log`.

+ 

+ The backend logs are on mbs-backend01. Look in the journal for the

+ `fedmsg-hub` service.

+ 
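+ For example, to follow the backend logs (a sketch using the fedmsg-hub unit

+ mentioned above):

+ 

+ ....

+ # on mbs-backend01

+ sudo journalctl -u fedmsg-hub -f

+ ....

+ 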

+ == Upgrading

+ 

+ The package in question is `module-build-service`. Please

+ use the `playbooks/manual/upgrade/mbs.yml` playbook.

+ 
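+ A sketch of that invocation from batcave, following the rbac-playbook

+ convention used elsewhere in this guide (the exact invocation may differ):

+ 

+ ....

+ sudo rbac-playbook manual/upgrade/mbs.yml

+ ....

+ 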

+ == Managing Bootstrap Modules

+ 

+ In general, modules use other modules to define their buildroots, but

+ what defines the buildroot of the very first module? For this, we use

+ "bootstrap" modules which are manually selected. For some history on

+ this, see these tickets:

+ 

+ * https://pagure.io/releng/issue/6791

+ * https://pagure.io/fedora-infrastructure/issue/6097

+ 

+ The tag for a bootstrap module needs to be manually created and

+ populated by Release Engineering. Builds for that tag are curated and

+ selected from other Fedora tags, with care to ensure that only as many

+ builds are added as needed.

+ 

+ The existence of the tag is not enough for the bootstrap module to be

+ useable by MBS. MBS discovers the bootstrap module as a possible

+ dependency for other yet-to-be-built modules by querying PDC. During

+ normal operation, these entries in PDC are automatically created by

+ `pdc-updater` on _pdc-backend02_, but for the bootstrap tag they need to be

+ manually created and linked to the new bootstrap tag.

+ 

+ To be usable, you'll need a token with rights to speak to staging/prod

+ PDC. See the PDC SOP for information on client configuration in

+ `/etc/pdc.d/` and on where to find those tokens.

+ 

+ == Things that could go wrong

+ 

+ === Overloading koji

+ 

+ If koji is overloaded, it should be acceptable to _stop_ the fedmsg-hub

+ daemon on _mbs-backend01_ at any time.

+ 

+ [NOTE]

+ ====

+ As builds finish in koji, they will be _missed_ by the backend, but

+ when it restarts it should find them in datagrepper. If that fails as

+ well, the mbs backend has a poller which should start up ~5 minutes

+ after startup that checks koji for anything it may have missed, at which

+ point it will resume functioning.

+ ====

+ 

+ If koji continues to be overloaded after startup, try decreasing the

+ `NUM_CONCURRENT_BUILDS` option in the config file in

+ `roles/mbs/common/templates/`.

@@ -0,0 +1,69 @@ 

+ = Memcached Infrastructure SOP

+ 

+ Our memcached setup is currently only used for wiki sessions. With

+ mediawiki, sessions stored in files over NFS or in the DB are very slow.

+ Memcached is a non-blocking solution for our session storage.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_checking_status>>

+ * <<_flushing_memcached>>

+ * <<_restarting_memcached>>

+ * <<_configuring_memcached>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main, sysadmin-web groups

+ Servers::

+   memcached03, memcached04

+ Purpose::

+   Provide caching for Fedora web applications.

+ 

+ == Checking Status

+ 

+ Our memcached instances are currently firewalled to only allow access

+ from wiki application servers. To check the status of an instance, use:

+ 

+ ....

+ echo stats | nc memcached0{3,4} 11211

+ ....

+ 

+ from an allowed host.

+ 

+ == Flushing Memcached

+ 

+ Sometimes, wrong contents get cached, and the cache should be flushed.

+ To do this, use:

+ 

+ ....

+ echo flush_all | nc memcached0{3,4} 11211

+ ....

+ 

+ from an allowed host.

+ 

+ == Restarting Memcached

+ 

+ Note that restarting a memcached instance will drop all sessions stored

+ on that instance. As mediawiki uses hashing to distribute sessions

+ across multiple instances, restarting one out of two instances will

+ result in about half of the total sessions being dropped.

+ 

+ To restart memcached:

+ 

+ ....

+ sudo /etc/init.d/memcached restart

+ ....

+ 

+ == Configuring Memcached

+ 

+ Memcached is currently set up as a

+ https://pagure.io/fedora-infra/ansible/blob/main/f/roles/memcached[role in the ansible git repo].

+ The main two tunables are the MAXCONN (the maximum number of concurrent

+ connections) and CACHESIZE (the amount of memory to use for storage). These

+ variables can be set through $memcached_maxconn and $memcached_cachesize

+ in ansible. Additionally, other options (as described in the memcached

+ manpage) can be set via `$memcached_options`.
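+ 

+ A hypothetical vars snippet (variable names as above; the values are only

+ illustrative and should be sized to the host):

+ 

+ ....

+ memcached_maxconn: 1024

+ memcached_cachesize: 1024

+ memcached_options: "-I 2m"

+ ....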

@@ -0,0 +1,83 @@ 

+ = Message Tagging Service SOP

+ 

+ == Contact Information

+ 

+ Owner::

+   Factory2 Team, Fedora QA Team, Infrastructure Team

+ Contact::

+   #fedora-qa, #fedora-admin

+ Persons::

+   cqi, lucarval, vmaljulin

+ Servers::

+   * In OpenShift.

+ Purpose::

+   Tag module build

+ 

+ == Description

+ 

+ Message Tagging Service, aka MTS, is an event-driven microservice that

+ tags a module build when triggered by a specific MBS event.

+ 

+ MTS listens on the message bus for the MBS event

+ `mbs.build.state.change`. Once a message is received, the module build

+ represented by that message is checked against a set of predefined

+ rules. Each rule definition has a destination tag defined. If a rule

+ matches the build, the destination tag is applied to that build.

+ Only module builds in the ready state are handled by MTS for now.

+ 

+ == Observing Behavior

+ 

+ Login to `os-master01.phx2.fedoraproject.org` as `root` (or,

+ authenticate remotely with openshift using

+ `oc login https://os.fedoraproject.org`), and run:

+ 

+ ....

+ oc project mts

+ oc status -v

+ oc logs -f dc/mts

+ ....

+ 

+ == Database

+ 

+ MTS does not use database.

+ 

+ == Configuration

+ 

+ Please do remember to increase `MTS_CONFIG_VERSION` so that Openshift

+ creates a new pod after running the playbook.

+ 

+ == Deployment

+ 

+ You can roll out configuration changes by changing the files in

+ `roles/openshift-apps/message-tagging-service/` and running the

+ `playbooks/openshift-apps/message-tagging-service.yml` playbook.

+ 
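+ A sketch of running that playbook from batcave (the rbac-playbook wrapper

+ is used for other playbooks in this guide; the exact invocation may

+ differ):

+ 

+ ....

+ sudo rbac-playbook openshift-apps/message-tagging-service.yml

+ ....

+ 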

+ === Stage

+ 

+ The MTS docker image is built automatically and pushed to upstream quay.io.

+ By default, the `latest` tag is applied to a fresh image. Apply the `stg`

+ tag to the image, then run the playbook

+ `playbooks/openshift-apps/message-tagging-service.yml` with the `staging`

+ environment.

+ 

+ === Prod

+ 

+ If everything works well, apply the `prod` tag to the docker image in

+ quay.io, then run the playbook with the `prod` environment.

+ 

+ == Update Rules

+ 

+ https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps/message-tagging-service/files/mts-rules.yml[Rules

+ file] is managed along side the playbook role in same repository.

+ 

+ For detailed information of rules format, please refer to

+ https://pagure.io/modularity/blob/master/f/drafts/module-tagging-service/format.md[documentation]

+ under Modularity.

+ 

+ == Troubleshooting

+ 

+ In case of problems with MTS, check the logs:

+ 

+ ....

+ oc logs -f dc/mts

+ ....

@@ -0,0 +1,34 @@ 

+ = Mirror Hiding Infrastructure SOP

+ 

+ At times, such as release day, there may be a conflict between Red Hat

+ trying to release content for RHEL, and Fedora trying to release Fedora.

+ One way to limit the pain to Red Hat on release day is to hide

+ download.fedora.redhat.com from the publiclist and mirrorlist

+ redirector, which will keep most people from downloading the content

+ from Red Hat directly.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main, sysadmin-web group

+ Servers::

+   app3, app4

+ Purpose::

+   Hide Public Mirrors from the publiclist / mirrorlist redirector

+ 

+ == Description

+ 

+ To hide a public mirror so it doesn't appear on the publiclist or the

+ mirrorlist, simply go into the MirrorManager administrative web user

+ interface, at https://admin.fedoraproject.org/mirrormanager. Fedora

+ sysadmins can see all Sites and Hosts. For each Site and Host, there is

+ a checkbox marked "private", which, if set, will hide that Site (and all

+ its Hosts), or just that single Host, such that it won't appear on the

+ public lists.

+ 

+ To make a private-marked mirror public, simply clear the "private"

+ checkbox again.

+ 

+ This change takes effect at the top of each hour.

@@ -0,0 +1,20 @@ 

+ = AWS Mirrors

+ 

+ Fedora Infrastructure mirrors EPEL content (`/pub/epel`) into Amazon

+ Simple Storage Service (S3) in multiple regions, to make it fast for EC2

+ CentOS/RHEL users to get EPEL content from an effectively local mirror.

+ 

+ For this to work, we have private mirror entries in MirrorManager, one

+ for each region, which include the EC2 netblocks for that region.

+ 

+ Amazon updates their list of network blocks roughly monthly, as they

+ consume additional address space. Therefore, we need to make the

+ corresponding changes to MirrorManager's entries.

+ 

+ Amazon publishes their list of network blocks on their forum site, with

+ the subject "Announcement: Amazon EC2 Public IP Ranges". As of November

+ 2014, this was https://forums.aws.amazon.com/ann.jspa?annID=1701

+ 

+ As of November 19, 2014, Amazon publishes it as a JSON file we can

+ download.

+ http://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
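
+ 

+ As a sketch, assuming the JSON file keeps its documented layout (a

+ `prefixes` list with `ip_prefix`, `region` and `service` keys) and is

+ still served from ip-ranges.amazonaws.com, the EC2 ranges for a single

+ region can be extracted with curl and jq:

+ 

+ ....

+ curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \

+   | jq -r '.prefixes[] | select(.service == "EC2" and .region == "us-east-1") | .ip_prefix'

+ ....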

@@ -0,0 +1,171 @@ 

+ = MirrorManager Infrastructure SOP

+ 

+ MirrorManager manages mirrors for the Fedora distribution.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main, sysadmin-web

+ Servers::

+   mm-frontend01, mm-frontend02, mm-frontend-checkin01, mm-backend01,

+   mm-crawler01, mm-crawler02

+ Mirrorlist Servers::

+   Docker container on the proxy servers

+ Purpose::

+   Manage mirrors for Fedora distribution

+ 

+ == Description

+ 

+ MirrorManager handles our mirroring system. It keeps track of lists of

+ valid mirrors and handles handing out metalink URLs to end users to

+ download packages from.

+ 

+ The backend server (_mm-backend01_) scans the master mirror (NFS mounted

+ at `/srv`) using the _mm2_update-master-directory-list_ script (_umdl_)

+ for changes. Changed directories are detected by comparing the ctime to

+ the value in the database.

+ 

+ The two crawlers (_mm-crawler01_ and _mm-crawler02_) compare the content

+ on the mirrors with the results from _umdl_ using rsync, HTTP, or HTTPS.

+ The crawler process on _mm-crawler01_ starts at 0:00 and 12:00 and at

+ 2:00 and 14:00 on _mm-crawler02_.

+ 

+ If the content on the mirrors is the same as on the master those mirrors

+ are included in the dynamic metalink/mirrorlist.

+ 

+ Every hour the backend server generates a python pickle which contains

+ the information about the state of each mirror. This pickle file is used

+ by the mirrorlist containers on the proxy servers to dynamically

+ generate the metalink/mirrorlist for each client individually.

+ 

+ The frontend servers (_mm-frontend01_ and _mm-frontend02_) offer an

+ interface to manipulate the mirrors. Each mirror-admin can only change

+ the details of the associated mirror. Members of the FAS group

+ _sysadmin-web_ can see and change all existing mirrors.

+ 

+ The mirrorlist provided by the frontend servers has no actively consumed

+ content and is therefore heavily cached (12h). It is only used to give

+ an overview of existing mirrors.

+ 

+ Additionally the frontend servers provide:::

+   * an overview of the mirror list usage

+   https://admin.fedoraproject.org/mirrormanager/statistics

+   * a propagation overview

+   https://admin.fedoraproject.org/mirrormanager/propgation

+   * a mirror map https://admin.fedoraproject.org/mirrormanager/maps

+ 

+ The _mm-frontend-checkin01_ server is only used for _report_mirror_

+ check-ins. This is used by mirrors to report their status independent of

+ the crawlers.

+ 

+ == Release Preparation

+ 

+ MirrorManager should automatically detect the new release version, and

+ will create a new `Version()` object in the database. This is visible on

+ the Version page in the web UI, and on

+ https://admin.fedoraproject.org/mirrormanager/.

+ 

+ If the versioning scheme changes, it's possible this will fail. If so,

+ contact the Mirror Wrangler.

+ 

+ == One Week After a Release

+ 

+ In the first week after the release, MirrorManager still uses the files

+ at `fedora/linux/development/<version>` and not at

+ `fedora/linux/releases/<version>`.

+ 

+ Once enough mirrors have picked up the files in the release directory, the

+ following script (on _mm-backend01_) can be used to change the paths in

+ MirrorManager:

+ 

+ ....

+ sudo -u mirrormanager mm2_move-devel-to-release --version=26 --category="Fedora Linux"

+ sudo -u mirrormanager mm2_move-devel-to-release --version=26 --category="Fedora Secondary Arches"

+ ....

+ 

+ == Move to Archive

+ 

+ Once the files of an EOL release have been copied to the archive

+ directory tree and enough mirrors have picked the files up at the

+ archive location, there is also a script to adapt those paths in

+ MirrorManager's database:

+ 

+ ....

+ sudo -u mirrormanager mm2_move-to-archive --originalCategory='Fedora EPEL' --directoryRe='/4/'

+ ....

+ 

+ == mirrorlist containers and mirrorlist servers

+ 

+ At :55 after each hour, mm-backend01 generates a pkl file with

+ all the current mirrormanager information in it and syncs it to proxies

+ and mirrorlist-servers. Each proxy accepts requests to

+ _mirrors.fedoraproject.org_ on apache, then uses haproxy to determine what

+ backend will reply. There are 2 containers defined on each proxy:

+ mirrorlist1 and mirrorlist2. haproxy will look for those first, then

+ fall back to any of the mirrorlist servers defined over the vpn.

+ 

+ At :15 after the hour, a script runs on all proxies:

+ `/usr/local/bin/restart-mirrorlist-containers`. This script starts the

+ mirrorlist2 container, makes sure it can process requests and, if

+ so, restarts the mirrorlist1 container with the new pkl data. If not,

+ mirrorlist1 keeps running with the old data. During this process at

+ least one server (with the mirrorlist servers as backup) is processing

+ requests, so users see no issues.

+ 

+ `mirrorlist-containers` log to `/var/log/mirrormanager/mirrorlist\{1|2}/` on

+ the host proxy server.

+ 

+ == Troubleshooting and Resolution

+ 

+ === Regenerating the Publiclist

+ 

+ On _mm-backend01_:

+ 

+ ....

+ sudo -u mirrormanager /usr/bin/mm2_update-mirrorlist-server

+ sudo -u mirrormanager /usr/local/bin/sync_pkl_to_mirrorlists.sh

+ ....

+ 

+ Those two commands generate a new mirrorlist pickle and transfer it to

+ the proxies. The mirrorlist containers on the proxies are restarted 15

+ minutes after each full hour.

+ 

+ The mirrorlist generation can take up to 20 minutes. If a faster

+ solution is required the mirrorlist pickle from the previous run is

+ available at:

+ 

+ ....

+ /var/lib/mirrormanager/old/mirrorlist_cache.pkl

+ ....

+ 

+ === Updating the mirrorlist containers

+ 

+ The container used for mirrorlists is the mirrormanager2-mirrorlist

+ container in Fedora dist git:

+ https://src.fedoraproject.org/cgit/container/mirrormanager2-mirrorlist.git/

+ The one being used is defined in an ansible variable in:

+ roles/mirrormanager/mirrorlist_proxy/defaults/main.yml

+ (TODO: This file no longer exists, find the new place where this is defined)

+ and in turn used in systemd unit files for mirrorlist1 and mirrorlist2. To update the

+ container used, update this variable, run the playbook and then restart

+ the mirrorlist1 and mirrorlist2 containers on each proxy. Note that this

+ may take a while the first time as the image has to be downloaded from

+ our registry.
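
+ 

+ As a rough sketch, assuming the proxies are covered by a

+ `groups/proxies.yml` playbook (check the actual playbook name before

+ running), the update looks like:

+ 

+ ....

+ # on batcave01

+ sudo rbac-playbook groups/proxies.yml

+ 

+ # then, on each proxy

+ sudo systemctl restart mirrorlist1 mirrorlist2

+ ....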

+ 

+ === Debugging problems with mirrorlist container startup

+ 

+ Sometimes on boot some hosts won't be properly serving mirrorlists. This

+ is due to a container startup issue. Run `docker ps -a` as root to see

+ the active containers. It will usually say something like 'exited(1)' or

+ the like. Record the container id and then run `docker rm --force

+ <containerid>`, then run `docker ps -a` and confirm nothing shows. Then

+ run `systemctl start mirrorlist1` and it should correctly start

+ mirrorlist1.

+ 

+ === General debugging for mirrorlist containers

+ 

+ Docker commands like `docker ps -a` show a fair bit of information.

+ Also, `systemctl status mirrorlist1/2` or the journal should have

+ information when a container is failing.

@@ -0,0 +1,103 @@ 

+ = mote SOP

+ 

+ mote is a MeetBot log wrangler, providing a user-friendly interface for

+ viewing logs produced by Fedora's IRC meetings.

+ 

+ Production instance: http://meetbot.fedoraproject.org/

+ 

+ Staging instance: http://meetbot.stg.fedoraproject.org

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_deployment>>

+ * <<_description>>

+ * <<_configuration>>

+ * <<_database>>

+ * <<_managing_mote>>

+ * <<_suspending_mote_operation>>

+ * <<_changing_motes_name_and_category_definitions>>

+ 

+ == Contact Information

+ 

+ Owner::

+   cydrobolt

+ Contact::

+   #fedora-admin

+ Location::

+   Fedora Infrastructure

+ Purpose::

+   IRC meeting coordination

+ 

+ == Deployment

+ 

+ If you have access to rbac-playbook:

+ 

+ ....

+ sudo rbac-playbook groups/value.yml

+ ....

+ 

+ == Forcing Reload

+ 

+ There is a playbook that can force mote to update its cache in case it

+ gets stuck somehow:

+ 

+ ....

+ sudo rbac-playbook manual/rebuild/mote.yml

+ ....

+ 

+ == Doing Upgrades

+ 

+ Put a new copy of the mote rpm in the infra repo and run:

+ 

+ ....

+ sudo rbac-playbook manual/upgrade/mote.yml

+ ....

+ 

+ == Description

+ 

+ mote is a Python webapp running on Flask with mod_wsgi. It can be used

+ to view past logs, browse meeting minutes, or glean other information

+ relevant to Fedora's IRC meetings. It employs a JSON file store cache,

+ in addition to a memcached store which is currently not in use with

+ Fedora infrastructure.

+ 

+ == Configuration

+ 

+ mote configuration is located in `/etc/mote/config.py`. The

+ configuration contains all configurable items for all mote services.

+ Alterations to configuration that aren't temporary should be done

+ through ansible playbooks. Configuration changes have no effect on

+ running services -- they need to be restarted, which can be done using

+ the playbook.

+ 

+ == Database

+ 

+ mote does not currently utilise any databases, although it uses a file

+ store in Fedora Infrastructure and has an optional memcached store which

+ is currently unused.

+ 

+ == Managing mote

+ 

+ mote is run using mod_wsgi and `httpd`; hence, you must manage the `httpd`

+ service to change mote's status.

+ 

+ == Suspending mote operation

+ 

+ mote can be stopped by stopping the `httpd` service:

+ 

+ ....

+ service httpd stop

+ ....

+ 

+ == Changing mote's name and category definitions

+ 

+ mote uses a set of JSON name and category definitions to provide

+ friendly names, aliases, and listings on its interface. These

+ definitions can be located in mote's GitHub repository, and need to be

+ pulled into ansible in order to be deployed.

+ 

+ These files are `name_mappings.json` and `category_mappings.json`. To

+ deploy an update to these definitions, place the updated name and

+ category mapping files in

+ https://pagure.io/fedora-infra/ansible/blob/main/f/roles/mote/templates[ansible/roles/mote/templates].

+ Run the playbook in order to deploy your changes.

@@ -0,0 +1,89 @@ 

+ = Fedora Infrastructure Nagios

+ 

+ == Contact Information

+ 

+ Owner::

+   sysadmin-main, sysadmin-noc

+ Contact::

+   #fedora-admin, #fedora-noc

+ Location::

+   Anywhere

+ Servers::

+   noc01, noc02, noc01.stg, batcave01

+ Purpose::

+   This SOP is to describe nagios configurations

+ 

+ == Configuration

+ 

+ The Fedora Project runs two nagios instances: nagios (noc01) at

+ https://admin.fedoraproject.org/nagios and nagios-external (noc02) at

+ https://nagios-external.fedoraproject.org/nagios. You must be in the

+ 'sysadmin' group to access them.

+ 

+ Apart from the two production instances, we are currently running a

+ staging instance for testing-purposes available through SSH at

+ noc01.stg.

+ 

+ nagios (noc01)::

+   The nagios configuration on noc01 should only monitor general host

+   statistics: ansible status, uptime, apache status (up/down), SSH, etc.

+   +

+   The configurations are found in nagios ansible roles:

+   * https://pagure.io/fedora-infra/ansible/blob/main/f/roles/nagios_client[ansible/roles/nagios_client]

+   * https://pagure.io/fedora-infra/ansible/blob/main/f/roles/nagios_server[ansible/roles/nagios_server]

+ nagios-external (noc02)::

+   The nagios configuration on noc02 is located outside of our main

+   datacenter and should monitor our user websites/applications

+   (fedoraproject.org, FAS, PackageDB, Bodhi/Updates).

+   +

+   The configurations are found in nagios ansible roles:

+   * https://pagure.io/fedora-infra/ansible/blob/main/f/roles/nagios_client[ansible/roles/nagios_client]

+   * https://pagure.io/fedora-infra/ansible/blob/main/f/roles/nagios_server[ansible/roles/nagios_server]

+ 

+ [NOTE]

+ ====

+ Production and staging instances through SSH: Please make sure you are

+ in the 'sysadmin' and 'sysadmin-noc' FAS groups before trying to access

+ these hosts.

+ 

+ See xref:sshaccess.adoc[SSH Access SOP]

+ ====

+ 

+ === NRPE

+ 

+ We are currently using NRPE to execute remote Nagios plugins on any host

+ of our network.

+ 

+ A great guide about it and its usage, along with some nice diagrams

+ of its structure, can be found at:

+ https://assets.nagios.com/downloads/nagioscore/docs/nrpe/NRPE.pdf
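
+ 

+ For a quick manual test from a nagios server, an NRPE check can be run

+ directly with the `check_nrpe` plugin (a sketch; the plugin path, target

+ host and check name depend on the host's configuration):

+ 

+ ....

+ /usr/lib64/nagios/plugins/check_nrpe -H some-client.iad2.fedoraproject.org -c check_disk

+ ....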

+ 

+ == Understanding the Messages

+ 

+ === General

+ 

+ Nagios notifications are generally easy to read, and follow this

+ consistent format:

+ 

+ ....

+ ** PROBLEM/ACKNOWLEDGEMENT/RECOVERY alert - hostname/Check is WARNING/CRITICAL/OK **

+ ** HOST DOWN/UP alert - hostname **

+ ....

+ 

+ Reading the message will provide extra information on what is wrong.

+ 

+ === Disk Space Warning/Critical

+ 

+ Disk space warnings normally include the following information:

+ 

+ ....

+ DISK WARNING/CRITICAL/OK - free space: mountpoint freespace(MB) (freespace(%) inode=freeinodes(%)):

+ ....

+ 

+ A message stating "(1% inode=99%)" means that the disk space is critical,

+ not the inode usage, and is a sign that more disk space is required.

+ 

+ === Further Reading

+ 

+ * xref:ansible.adoc[Ansible SOP]

+ * xref:outage.adoc[Outages SOP]

@@ -0,0 +1,131 @@ 

+ = Netapp Infrastructure SOP

+ 

+ Provides primary mirrors and additional storage in IAD2

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_description>>

+ * <<_public_mirrors>>

+ ** <<_snapshots>>

+ * <<_iad2_nfs_storage>>

+ ** <<_access>>

+ ** <<_snapshots>>

+ * <<_iscsi>>

+ ** <<_updating_lvm>>

+ ** <<_mounting_iscsi>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main, releng

+ Servers::

+   batcave01, virt servers, application servers, builders, releng boxes

+ Purpose::

+   Provides primary mirrors and additional storage in IAD2

+ 

+ == Description

+ 

+ At present we have three netapps in our infrastructure: one in TPA, one in

+ RDU and one in IAD2. For purposes of visualization it's easiest to think of

+ us as having four netapps: one in TPA, one in RDU and one in IAD2 for the

+ public mirrors, plus an additional one in IAD2 used for storage not related

+ to the public mirrors.

+ 

+ == Public Mirrors

+ 

+ The netapps are our primary public mirrors. The canonical location for

+ the mirrors is currently in IAD2. From there it gets synced to RDU and

+ TPA.

+ 

+ === Snapshots

+ 

+ Snapshots on the IAD2 netapp are taken hourly. Unfortunately the way it

+ is set up, only Red Hat employees can access this mirror (this is

+ scheduled to change when PHX becomes the canonical location but that

+ will take time to set up and deploy). The snapshots are available, for

+ example, on wallace in:

+ 

+ ....

+ /var/ftp/download.fedora.redhat.com/.snapshot/hourly.0

+ ....

+ 

+ == IAD2 NFS Storage

+ 

+ There is a great deal of storage in IAD2 over NFS from the netapp there.

+ This storage includes the public mirror. The majority of this storage is

+ koji; however, there are a few gigs' worth of storage that go to wiki

+ attachments and other storage needs we have in IAD2.

+ 

+ You can access all of the NFS shares at:

+ 

+ ....

+ batcave01:/mnt/fedora

+ ....

+ 

+ or:

+ 

+ ....

+ ntap-fedora-a.storage.iad2.redhat.com:/vol/fedora/

+ ....

+ 

+ === Access

+ 

+ The netapp is provided by RHIS and as a result they also control access.

+ Access is mostly controlled by IP and some machines have root squashed.

+ Worst case scenario: if batcave01 is not accessible, just bring another

+ box up under its IP address and use that for an emergency.

+ 

+ === Snapshots

+ 

+ There are hourly and nightly snapshots on the netapp. They are available

+ in:

+ 

+ ....

+ batcave01:/mnt/fedora/.snapshot

+ ....

+ 

+ == iscsi

+ 

+ We have iscsi deployed in a number of locations in our infrastructure

+ for xen machines. To get a list of what xen machines are deployed with

+ iscsi, just run lvs:

+ 

+ ....

+ lvs /dev/xenGuests

+ ....

+ 

+ Live migration is possible though not fully supported at this time.

+ Please shut a xen machine down and bring it up on another host. Memory

+ is the main issue here.

+ 

+ === Updating LVM

+ 

+ iscsi is mounted all over the place, and if one xen machine creates a

+ logical volume, the other xen machines will have to pick up those

+ changes. To do this, run:

+ 

+ ....

+ pvscan

+ vgscan

+ lvscan

+ vgchange -a y

+ ....

+ 

+ === Mounting ISCSI

+ 

+ On reboots sometimes the iscsi share is not remounted. This should be

+ automated in the future but for now run:

+ 

+ ....

+ iscsiadm -m discovery -tst -p ntap-fedora-b.storage.iad2.redhat.com:3260

+ sleep 1

+ iscsiadm -m node -T iqn.1992-08.com.netapp:sn.118047036 -p 10.5.88.21:3260 -l

+ sleep 1

+ pvscan

+ vgscan

+ lvscan

+ vgchange -a y

+ ....

@@ -0,0 +1,337 @@ 

+ = DNS Host Addition SOP

+ 

+ You should be able to follow these steps in order to create a new set of

+ hosts in infrastructure.

+ 

+ == Walkthrough

+ 

+ === Get a DNS repo checkout on batcave01

+ 

+ ....

+ git clone /srv/git/dns

+ cd dns

+ ....

+ 

+ An example always helps, so you can use git grep for something that has

+ been recently added to the data center/network that you want:

+ 

+ ....

+ git grep badges-web01

+   built/126.5.10.in-addr.arpa:69       IN        PTR      badges-web01.stg.iad2.fedoraproject.org.

+   [...lots of other stuff in built/ ignore these as they'll be generated later...]

+   master/126.5.10.in-addr.arpa:69       IN        PTR      badges-web01.stg.iad2.fedoraproject.org.

+   master/126.5.10.in-addr.arpa:101      IN        PTR      badges-web01.iad2.fedoraproject.org.

+   master/126.5.10.in-addr.arpa:102      IN        PTR      badges-web02.iad2.fedoraproject.org.

+   master/168.192.in-addr.arpa:109.1   IN      PTR     badges-web01.vpn.fedoraproject.org

+   master/168.192.in-addr.arpa:110.1   IN      PTR     badges-web02.vpn.fedoraproject.org

+   master/iad2.fedoraproject.org:badges-web01.stg   IN      A       10.5.126.69

+   master/iad2.fedoraproject.org:badges-web01        IN  A       10.5.126.101

+   master/iad2.fedoraproject.org:badges-web02        IN  A       10.5.126.102

+   master/vpn.fedoraproject.org:badges-web01    IN A         192.168.1.109

+   master/vpn.fedoraproject.org:badges-web02    IN A         192.168.1.110

+ ....

+ 

+ So those are the files we need to edit. In the above example, two of

+ those files are for the host on the IAD2 network. The other two are for

+ the host to be able to talk over the VPN. Although the VPN is not always

+ needed, the common case is that the host will need it. (If any clients

+ _need to connect to it via the proxy servers_ or it is not hosted in

+ IAD2, it will need a VPN connection.) A common exception here is the

+ staging environment: since we only have one proxy server in staging and

+ it is in IAD2, a VPN connection is not typically needed for staging

+ hosts.

+ 

+ Edit the zone file for the reverse lookup first (the *in-addr.arpa file)

+ and find IPs to use. The IPs will be listed with a domain name of

+ "unused." If you're configuring a web application server, you probably

+ want two hosts for stg and at least two for production. Two in

+ production means that we don't need downtime for reboots and updates.

+ Two in stg means that we'll be less likely to encounter problems related

+ to having multiple web application servers when we take a change tested

+ in stg into production:

+ 

+ ....

+ -105      IN        PTR      unused.

+ -106      IN        PTR      unused.

+ -107      IN        PTR      unused.

+ -108      IN        PTR      unused.

+ +105      IN        PTR      elections01.stg.iad2.fedoraproject.org.

+ +106      IN        PTR      elections02.stg.iad2.fedoraproject.org.

+ +107      IN        PTR      elections01.iad2.fedoraproject.org.

+ +108      IN        PTR      elections02.iad2.fedoraproject.org.

+ ....

+ 

+ Edit the forward domain (iad2.fedoraproject.org in our example) next:

+ 

+ ....

+ elections01.stg IN      A       10.5.126.105

+ elections02.stg IN      A       10.5.126.106

+ elections01     IN      A       10.5.126.107

+ elections02     IN      A       10.5.126.108

+ ....

+ 

+ Repeat these two steps if you need to make them available on the VPN.

+ Note: if your stg hosts are in IAD2, you don't need to configure VPN for

+ them as all our stg proxy servers are in IAD2.

+ 

+ Also remember to update the Serial at the top of all zone files.

+ 

+ Once the files are edited, you need to run a script to build the zones.

+ But first, commit the changes you just made to the "source":

+ 

+ ....

+ git add .

+ git commit -a -m 'Added staging and production elections hosts.'

+ ....

+ 

+ Once that is committed, you need to run a script to build the zones and

+ then push them to the dns servers.:

+ 

+ ....

+ ./do-domains # This builds the files

+ git add .

+ git commit -a -m 'done build'

+ git push

+ 

+ $ sudo -i ansible ns\* -a '/usr/local/bin/update-dns' # This tells the dns servers to load the new files

+ ....

+ 

+ == Make certs

+ 

+ [WARNING]

+ ====

+ If you already had a clone of private, make VERY sure to do a

+ git pull first! It's quite likely somebody else added a new host without

+ you noticing it, and you cannot merge the keys repos manually.

+ (seriously, don't: the index and serial files just wouldn't match up

+ with the certificate, and you would revoke the wrong certificate upon

+ revocation).

+ ====

+ 

+ When doing 2 factor auth for sudo, the hosts that we connect from need

+ to have valid SSL Certs. These are currently stored in the private repo:

+ 

+ ....

+ git clone /srv/git/ansible-private && chmod 0700 ansible-private

+ cd ansible-private/files/2fa-certs

+ . ./vars

+ ./build-and-sign-key $FQDN  # ex: elections01.stg.iad2.fedoraproject.org

+ ....

+ 

+ The `$FQDN` should be the iad2 domain name if it's in iad2, vpn if not in

+ iad2, and if it has no vpn and is not in iad2 we should add it to the

+ vpn:

+ 

+ ....

+ git add .

+ git commit -a

+ git push

+ ....

+ 

+ [NOTE]

+ ====

+ Make sure to re-run vars from the vpn repo. If you forget to do

+ that, you will just (try to) generate a second pair of 2fa certs, since

+ the ./vars script creates an environment variable pointing to the root key directory,

+ which is different.

+ ====

+ 

+ Servers that are on the VPN also need certs for that. These are also

+ stored in the private repo:

+ 

+ ....

+ cd ansible-private/files/vpn/openvpn

+ . ./vars

+ ./build-and-sign-key $FQDN  # ex: elections01.iad2.fedoraproject.org

+ ./build-and-sign-key $FQDN  # ex: elections02.iad2.fedoraproject.org

+ ....

+ 

+ The `$FQDN` should be the iad2 domain name if it's in iad2, and just

+ fedoraproject.org if it's not in IAD2 (note that there is never .vpn in

+ the FQDN in the openvpn keys). Now commit and push:

+ 

+ ....

+ git add .

+ git commit -a

+ git push

+ ....

+ 

+ == ansible

+ 

+ ....

+ git clone https://pagure.io/fedora-infra/ansible.git

+ cd ansible

+ ....

+ 

+ To see an example:

+ 

+ ....

+ git grep badges-web01 (example)

+ find . -name badges-web01\*

+ find . -name 'badges-web*'

+ ....

+ 

+ === inventory

+ 

+ The ansible inventory file lists all the hosts that ansible knows about

+ and also allows you to create sets of hosts that you can refer to via a

+ group name. For a typical web application server set of hosts we'd

+ create things like this:

+ 

+ ....

+ [elections]

+ elections01.iad2.fedoraproject.org

+ elections02.iad2.fedoraproject.org

+ 

+ [elections-stg]

+ elections01.stg.iad2.fedoraproject.org

+ elections02.stg.iad2.fedoraproject.org

+ 

+ [... find the staging group and add there: ...]

+ 

+ [staging]

+ db-fas01.stg.iad2.fedoraproject.org

+ elections01.stg.iad2.fedoraproject.org

+ elections02.stg.iad2.fedoraproject.org

+ ....

+ 

+ The hosts should use their fully qualified domain names here. The rules

+ are slightly different than for 2fa certs. If the host is in IAD2, use

+ the .iad2.fedoraproject.org domain name. If they aren't in IAD2, then

+ they usually just have .fedoraproject.org as their domain name. (If in

+ doubt about a not-in-IAD2 host, just ask).

+ 

+ === VPN config

+ 

+ If the machine is in VPN, create a file in ansible at

+ `roles/openvpn/server/files/ccd/$FQDN` with contents like:

+ 

+ ....

+ ifconfig-push 192.168.1.X 192.168.0.X

+ ....

+ 

+ Where X is the last octet of the DNS IP address assigned to the host, so

+ for example for _elections01.iad2.fedoraproject.org_ that would be:

+ 

+ ....

+ ifconfig-push 192.168.1.44 192.168.0.44

+ ....

+ 

+ == Work in progress

+ 

+ From here to the end of file is still being worked on

+ 

+ === host_vars and group_vars

+ 

+ ansible consults files in

+ https://pagure.io/fedora-infra/ansible/blob/main/f/inventory/group_vars[inventory/group_vars]

+ and https://pagure.io/fedora-infra/ansible/blob/main/f/inventory/host_vars[inventory/host_vars]

+ to set parameters that can be used in templates and playbooks. You may

+ need to edit these

+ 

+ It's usually easy to copy the `host_vars` and `group_vars` from an existing

+ host that's similar to the one you are working on and then modify a few

+ names to make it work. For instance, for a web application server:

+ 

+ ....

+ cd ~/ansible/inventory/group_vars

+ cp badges-web elections

+ ....

+ 

+ Change the following:

+ 

+ ....

+ - fas_client_groups: sysadmin-noc,sysadmin-badges

+ + fas_client_groups: sysadmin-noc,sysadmin-web

+ ....

+ 

+ (You can change disk size, mem_size, number of cpus, and ports too if

+ you need them).

+ 

+ Some things will definitely need to be defined differently for each host

+ in a group -- notably, ip_address. You should use the ip_address you

+ claimed in the dns repo:

+ 

+ ....

+ cd ~/ansible/inventory/host_vars

+ cp badges-web01.stg.iad2.fedoraproject.org elections01.stg.iad2.fedoraproject.org

+ <edit appropriately>

+ ....

+ 

+ The host will need a vmhost declaration. There is a script in

+ `ansible/scripts/vhost-info` that will report how much free memory and

+ how many free cpus each vmhost has. You can use that to inform your

+ decision. By convention, staging hosts go on virthost12.

+ 

+ Each vmhost has a different volume group. To figure out what volume

+ group that is, execute the following command on the virthost:

+ 

+ ....

+ vgdisplay

+ ....

+ 

+ You may want to run `lsblk` to check that the volume group you expect

+ is the one actually used for virtual guests.

+ 

+ [NOTE]

+ ====

+ 19:16:01 <nirik> 3. add ./inventory/host_vars/FQDN host_vars for the new host.

+ 

+ 19:16:56 <nirik> that will have in it ip addresses, dns

+ resolv.conf, ks url/repo, volume group to make the host lv in, etc etc.

+ 

+ 19:17:10 <nirik> 4. add any needed vars to inventory/group_vars/ for

+ the group

+ 

+ 19:17:33 <nirik> this has memory size, lvm size, cpus, etc |

+ 

+ 19:17:45 <nirik> 5. add tasks/virt_instance_create.yml task to top of

+ group/host playbook

+ 

+ 19:18:10 <nirik> 6. run the playbook and it will

+ go to the virthost you set, create the lv, guest, install it, wait for

+ it to come up, then continue configuring it.

+ ====

+ 

+ mailman.yml::

+   copy it from another file.

+ 

+ ....

+ ./ans-vhost-freemem --hosts=virthost\*

+ ....

+ 

+ group vars

+ 

+ * vmhost (of the host that will host the VM)

+ * kickstart info (url of the kickstart itself and the repo)

+ * datacenter (although most likely won't change)

+ 

+ The host playbook is rather basic

+ 

+ * Change the name

+ * Most things won't change much

+ 

+ ....

+ ansible-playbook /srv/web/infra/ansible/playbooks/groups/mailman.yml

+ ....

+ 

+ == Adding a new proxy or webserver

+ 

+ When adding a new web server, other files must currently be edited by hand

+ until templates replace them. These files cover getting httpd

+ logs from the server onto log01 so that log analysis can be done.

+ 

+ ....

+ roles/base/files/syncHttpLogs.sh

+ roles/epylog/files/merged/modules.d/rsyncd.conf

+ roles/hosts/files/staging-hosts

+ roles/mediawiki123/templates/LocalSettings.php.fp.j2

+ ....

+ 

+ There are also nagios files which will need to be edited but that should

+ be done following the nagios document.

+ 

+ == References

+ 

+ * The making a new instance section of:

+ http://meetbot.fedoraproject.org/meetbot/fedora-meeting-1/2013-07-17/infrastructure-ansible-meetup.2013-07-17-19.00.html

@@ -0,0 +1,25 @@ 

+ = Non-human Accounts Infrastructure SOP

+ 

+ We have many non-human accounts for various services, used by our web

+ applications and certain automated scripts.

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-admin

+ Persons:::

+   sysadmin-main

+ Purpose:::

+   Provide Non-human accounts to our various services

+ 

+ == Tokens

+ 

+ Wherever possible, OIDC (OpenID Connect) tokens or other tokens should

+ be used for the script. Whatever the token, it should have the minimum

+ privileges needed to do whatever the script or process needs to do and no more.

+ 

+ Depending on what service(s) it needs to interact with, this could be

+ different tokens. Consult with the Fedora Security Officer for exact

+ details.

@@ -0,0 +1,142 @@ 

+ = Nuancier SOP

+ 

+ Nuancier is the web application used by the design team and the

+ community to submit and vote on the supplemental wallpapers provided

+ with each version of Fedora.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_create_a_new_election>>

+ * <<_review_an_election>>

+ * <<_vote_on_an_election>>

+ * <<_view_all_the_candidates_of_an_election>>

+ * <<_view_the_results_of_an_election>>

+ * <<_miscellaneous>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Location::

+   https://apps.fedoraproject.org/nuancier

+ Servers::

+   nuancier01, nuancier02, nuancier01.stg, nuancier02.stg

+ Purpose::

+   Provide a system to submit and vote on supplemental wallpapers

+ 

+ == Create a new election

+ 

+ * Login

+ * Go to the _Admin_ panel via the menu at the top

+ * Click on _Create a new election_

+ * Complete the form:

+ +

+ Election name::

+   A short name used in all the pages; since we usually have one

+   election per release, it has been of the form _Fedora XX_

+ Name of the folder containing the pictures::

+   This just links the election with the folder where the images will be

+   uploaded on disk. Keep it simple and safe; something like

+   _fXX_ will do.

+ Year::

+   The year when the election will be happening; this just gives a

+   quick sorting option

+ Submission start date (in UTC)::

+   The date from which the people will be able to submit wallpapers for

+   the election. The submission starts on the exact day at midnight UTC.

+ Start date (in UTC)::

+   The date when the election starts (and thus the submissions end).

+   There is no buffer between when the submissions end and when the votes

+   start which means admins have to keep up with the submissions as they

+   are done.

+ End date (in UTC)::

+   The date when the election ends. There are no embargo on the results,

+   they are available right after the election ends.

+ URL to claim a badge for voting::

+   The URL at which someone can claim a badge. This URL is displayed on

+   the voting page as well as once people have voted, which means that

+   having the badge does not ensure people voted; at most it ensures

+   people visited nuancier during a voting phase.

+ Number of votes a user can make::

+   The number of wallpapers a user can choose/vote on. This was added as

+   there was a debate in the design team about whether having everyone vote

+   on all 16 wallpapers was a good idea or not.

+ Number of candidates a user can upload::

+   Restricts the number of wallpapers a user can submit for an election

+   to prevent people from uploading tens of wallpapers in one election.

+ 

+ == Review an election

+ 

+ Admins must do this regularly during a submission phase to keep

+ candidates from piling up.

+ 

+ * Login

+ * Go to the _Admin_ panel via the menu at the top

+ * Find the election of interest in the list and click on

+ _Review_

+ 

+ If the images are not showing, you can generate the thumbnails using the

+ button _(Re-)generate cache_.

+ 

+ On the review page, you will be able to filter the candidates by

+ _Approved_, _Pending_, _Rejected_ or

+ see them _All_ (default).

+ 

+ You can then check the images one by one, select their checkbox and then

+ either _Approve_ or _Deny_ all the ones you

+ selected.

+ 

+ [NOTE]

+ ====

+ Rejections must be motivated in the _Reason for rejection / Comments_

+ input field. This motivation is then sent by email to the user

+ explaining why a wallpaper they submitted was not accepted into the

+ election.

+ ====

+ 

+ == Vote on an election

+ 

+ Once an election is opened, a link announcing it will be available from

+ the front page, and in the page listing the elections

+ (_Elections_ tab in the menu) a green check-mark will appear

+ on the _Votes_ column while a red forbidden sign will appear

+ on the _Submissions_ column.

+ 

+ You can then click on the election name which will take you on the

+ voting page.

+ 

+ There, enlarge the images by clicking on them and make your choice by

+ clicking on the bottom right corner of the image.

+ 

+ In the column on the right, the total number of votes available will

+ appear. If you need to remove a wallpaper from your selection,

+ simply click on it in the right column.

+ 

+ As long as you have not picked the maximum number of candidates allowed,

+ you can cast your vote multiple times (but not on the same candidates of

+ course).

+ 

+ == View all the candidates of an election

+ 

+ All the candidates of an election are only accessible once the election

+ is over. If you wish to see all the images uploaded, simply go to the

+ _Elections_ tab and click on the election name.

+ 

+ == View the results of an election

+ 

+ The results of an election are accessible immediately after the end of

+ it. To see them, simply click the _Results_ tab in the menu.

+ 

+ There you can click on the name of the election to see the wallpapers

+ ordered by their number of votes or on _stats_ to view some

+ stats about the election (such as the number of participants, the number

+ of voters, votes or the evolution of the votes over time).

+ 

+ == Miscellaneous

+ 

+ Nuancier uses a gluster volume shared between the two hosts (in prod and

+ in stg) where the images are stored, making sure they are available to

+ both frontends. This may make things a little trickier sometimes; be

+ aware of it.

@@ -0,0 +1,139 @@ 

+ = On Demand Compose Service SOP

+ 

+ [NOTE]

+ ====

+ The ODCS is very new and changing rapidly. We'll try to keep this up to

+ date as best we can.

+ ====

+ 

+ The ODCS is a service generating temporary compose from Koji tag(s)

+ using Pungi.

+ 

+ == Contact Information

+ 

+ Owner::

+   Factory2 Team, Release Engineering Team, Infrastructure Team

+ Contact::

+   #fedora-modularity, #fedora-admin, #fedora-releng

+ Persons::

+   jkaluza, cqi, qwan, threebean

+ Public addresses::

+   * odcs.fedoraproject.org

+ Servers::

+   * odcs-frontend0[1-2].iad2.fedoraproject.org

+   * odcs-backend01.iad2.fedoraproject.org

+ Purpose::

+   Generate temporary compose from Koji tag(s) using Pungi.

+ 

+ == Description

+ 

+ ODCS clients submit requests for a compose to _odcs.fedoraproject.org_. The

+ requests are submitted using `python2-odcs-client` Python module or just

+ using plain JSON.

+ 

+ The request contains all the information needed to build a compose (an example request follows this list):

+ 

+ * *source type*: Type of compose source, for example "tag" or "module"

+ * *source*: Name of Koji tag or list of modules defined by

+ name-stream-version.

+ * *packages*: List of packages to include in a compose.

+ * *seconds to live*: Number of seconds after which the compose is removed

+ from the filesystem and is marked as "removed".

+ * *flags*: Various flags further defining the compose - for example the

+ "no_deps" flag saying that the *packages* dependencies

+ should not be included in a compose.
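
+ 

+ As an illustration, a compose request for a Koji tag could be submitted

+ with plain JSON roughly as follows. This is a hedged sketch: the exact

+ field names, the tag and the authentication mechanism are assumptions and

+ should be checked against the ODCS client documentation before use.

+ 

+ ....

+ curl -X POST https://odcs.fedoraproject.org/api/1/composes/ \

+   -H "Content-Type: application/json" \

+   -d '{"source": {"type": "tag", "source": "f30-build", "packages": ["httpd"]},

+        "seconds-to-live": 3600, "flags": ["no_deps"]}'

+ ....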

+ 

+ The request is received by the ODCS flask app running on odcs-frontend

+ nodes. The frontend does input validation of the request and then adds

+ the compose request to database with "wait" state and sends fedmsg

+ message about this event. The compose request gets its unique id which

+ can be used by a client to query its status using frontend REST API.

+ 

+ The odcs-backend node then handles the compose requests in "wait" state

+ and starts generating the compose using the Pungi tool. It does so by

+ generating all the configuration files for Pungi and executing "pungi"

+ executable. Backend also changes the compose request status to

+ "generating" and sends fedmsg message about this event.

+ 

+ The number of concurrent pungi processes can be set using the

+ _num_concurrent_pungi_ variable in ODCS configuration file.

+ 

+ The output directory for a compose is shared between frontend and

+ backend node. Once the compose is generated, the backend changes the

+ status of compose request to "done" and again sends fedmsg message about

+ this event.

+ 

+ The shared directory with a compose is available using httpd on the

+ frontend node and ODCS client can access the generated compose. By

+ default this is on https://odcs.fedoraproject.org/composes/ URL.

+ 

+ If the compose generation goes wrong, the backend changes the state of

+ the compose request to "failed" and again sends fedmsg message about

+ this event. The "failed" compose is still available for

+ *seconds to live* time in the shared directory for further

+ examination of pungi logs if needed.

+ 

+ After the *seconds to live* time, the backend node removes

+ the compose from filesystem and changes the state of compose request to

+ "removed".

+ 

+ If there are compose requests for the very same composes, ODCS will

+ reuse the older compose instead of generating a new one and point the new

+ compose to the older one.

+ 

+ The "removed" compose can be renewed by a client to generate the same

+ compose as in the past. The *seconds to live* attribute of a

+ compose can be extended by a client when needed.

+ 

+ == Observing ODCS Behavior

+ 

+ There is currently no command line tool to query ODCS, but ODCS provides

+ REST API which can be used to observe the ODCS behavior. This is

+ available on https://odcs.fedoraproject.org/api/1/composes.

+ 

+ The API can be filtered by the following keys entered as HTTP GET variables (see the example after the list):

+ 

+ * owner

+ * source_type

+ * source

+ * state
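
+ 

+ For example, to list the composes generated from a particular Koji tag

+ that have finished (a sketch; the tag name is illustrative and the

+ accepted state values should be checked against the API):

+ 

+ ....

+ curl "https://odcs.fedoraproject.org/api/1/composes/?source_type=tag&source=f30-build&state=done"

+ ....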

+ 

+ It is also possible to see all the current composes in the compose

+ output directory, which is available on the frontend on

+ https://odcs.fedoraproject.org/composes.

+ 

+ == Removing compose before its expiration time

+ 

+ Members of FAS group defined in the _admins_ section of ODCS

+ configuration can remove any compose by sending DELETE request to

+ following URL:

+ 

+ https://odcs.fedoraproject.org/api/1/composes/$compose_id

+ 

+ == Logs

+ 

+ The frontend logs are on odcs-frontend0[1-2] in

+ `/var/log/httpd/error_log` or `/var/log/httpd/ssl_error_log`.

+ 

+ The backend logs are on odcs-backend01. Look in the journal for the

+ _odcs-backend_ service.

+ 

+ == Upgrading

+ 

+ The package in question is _odcs-server_. Please use the

+ https://pagure.io/fedora-infra/ansible/blob/main/f/playbooks/manual/upgrade/odcs.yml[playbooks/manual/upgrade/odcs.yml]

+ playbook.

+ 

+ == Things that could go wrong

+ 

+ === Not enough space on shared volume

+ 

+ In case there are too many composes, member of FAS group defined in the

+ ODCS configuration file _admins_ section should:

+ 

+ * Remove the oldest composes to get some free space immediately. A list

+ of such composes can be found on

+ https://odcs.fedoraproject.org/composes/ by sorting by Last modified

+ fields.

+ * Decrease the *max_seconds_to_live* in ODCS configuration

+ file.

@@ -0,0 +1,370 @@ 

+ = OpenQA Infrastructure SOP

+ 

+ OpenQA is an automated test system used to run validation tests on

+ nightly and candidate Fedora composes, and also to run a subset of these

+ tests on critical path updates.

+ 

+ OpenQA production instance: https://openqa.fedoraproject.org

+ 

+ OpenQA staging instance: https://openqa.stg.fedoraproject.org

+ 

+ Wiki page on Fedora openQA deployment: https://fedoraproject.org/wiki/OpenQA

+ 

+ Upstream project page: http://open.qa/

+ 

+ Upstream repositories: https://github.com/os-autoinst

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora QA devel

+ Contact::

+   #fedora-qa, #fedora-admin, qa-devel mailing list

+ People::

+   Adam Williamson (adamwill / adamw), Petr Schindler (pschindl)

+ Machines::

+   See ansible inventory groups with 'openqa' in name

+ Purpose::

+   Run automated tests on VMs via screen recognition and VNC input

+ 

+ == Architecture

+ 

+ Each openQA instance consists of a server (these are virtual machines)

+ and one or more worker hosts (these are bare metal systems). The server

+ schedules tests ("jobs", in openQA parlance) and stores results and

+ associated data. The worker hosts run "jobs" and send the results back

+ to the server. The server also runs some fedmsg consumers to handle

+ automatic scheduling of jobs and reporting of results to external

+ systems (ResultsDB and Wikitcms).

+ 

+ == Server

+ 

+ The server runs a web UI for viewing scheduled, running and completed

+ tests and their data, with an admin interface where many aspects of the

+ system can be configured (though we do not use the web UI for several

+ aspects of configuration). There are several separate services that run

+ on each server, and communicate with each other mainly via dbus. Each

+ server requires its own PostgreSQL database. The web UI and websockets

+ server are made externally available via reverse proxying through an

+ Apache server.

+ 

+ It hosts an NFS share that contains the tests, the 'needles'

+ (screenshots with metadata as JSON files that are used for screen

+ matching), and test 'assets' like ISO files and disk images. The path is

+ `/var/lib/openqa/share/factory`.

+ 

+ In our deployment, the PostgreSQL database for each instance is hosted

+ by the QA database server. Also, some paths on the server are themselves

+ mounted as NFS shares from the infra storage server. This is so that

+ these are not lost if the server is re-deployed, and can easily be

+ backed up. These locations contain the data from each executed job. As

+ both the database and these key data files are not actually stored on

+ the server, the server can be redeployed from scratch without loss of

+ any data (at least, this is the intent).

+ 

+ Also in our deployment, an openQA plugin (which we wrote, but which is

+ part of the upstream codebase) is enabled which emits fedmsgs on various

+ events. This works by calling fedmsg-logger, so the appropriate fedmsg

+ configuration must be in place for this to emit events correctly.

+ 

+ The server systems run a fedmsg consumer for the purpose of

+ automatically scheduling jobs in response to the appearance of new

+ composes and critical path updates, and one for the purpose of reporting

+ the results of completed jobs to ResultsDB and Wikitcms. These use the

+ `fedmsg-hub` system.

+ 

+ == Worker hosts

+ 

+ The worker hosts run several individual worker 'instances' (via

+ systemd's 'instantiated service' mechanism), each of which registers

+ with the server and accepts jobs from it, uploading the results of the

+ job and some associated data to the server on completion. The worker

+ instances and server communicate both via a conventional web API

+ provided by the server and via websockets. When a worker runs a job, it

+ starts a qemu virtual machine (directly - libvirt is not used) and

+ interacts with it via VNC and the serial console, following a set of

+ steps dictating what it should do and what response it should expect in

+ terms of screen contents or serial console output. The server 'pushes'

+ jobs to the worker instances over a websocket connection.

+ 

+ Each worker host must mount the `/var/lib/openqa/share/factory` NFS

+ share provided by the server. If this share is not mounted, any jobs run

+ will fail immediately due to expected asset and test files not being

+ found.

+ 

+ Some worker hosts for each instance are denominated 'tap workers',

+ meaning they run some advanced jobs which use software-defined

+ networking (openvswitch) to interact with each other. All the

+ configuration for this should be handled by the ansible scripts, but

+ it's useful to be aware that there is complex software-defined

+ networking stuff going on on these hosts which could potentially be the

+ source of problems.

+ 

+ == Deployment and regular operation

+ 

+ Deployment and normal update of the openQA systems should run entirely

+ through Ansible. Just running the appropriate ansible plays for the

+ systems should complete the entire deployment / update process, though

+ it is best to check after running them that there are no failed services

+ on any of the systems (restart any that failed), and that the web UI is

+ properly accessible.

+ 

+ Regular operation of the openQA deployments is entirely automated. Jobs

+ should be scheduled and run automatically when new composes and critical

+ path updates appear, and results should be reported to ResultsDB and

+ Wikitcms (when appropriate). Dynamically generated assets should be

+ regenerated regularly, including across release boundaries (see the

+ section on createhdds below): no manual intervention should be required

+ when a new Fedora release appears. If any of this does not happen,

+ something is wrong, and manual inspection is needed.

+ 

+ Our usual practice is to upgrade the openQA systems to new Fedora

+ releases promptly as they appear, using `dnf system-upgrade`. This is

+ done manually. We usually upgrade the staging instance first and watch

+ for problems for a week or two before upgrading production.

+ 

+ == Rebooting / restarting

+ 

+ The optimal approach to rebooting an entire openQA deployment is as

+ follows:

+ 

+ [arabic]

+ . Wait until no jobs are running

+ . Stop all `openqa-*` services on the server, so no more will be queued

+ . Stop all `openqa-worker@` services on the worker hosts

+ . Reboot the server

+ . Check for failed services (`systemctl --failed`) and restart any that

+ failed

+ . Once the server is fully functional, reboot the worker hosts

+ . Check for failed services and restart any that failed, particularly

+ the NFS mount service

+ 

+ Rebooting the workers *after* the server is important due to the NFS

+ share.

+ 

+ If only the server needs restarting, the entire procedure above should

+ ideally be followed in any case, to ensure there are no issues with the

+ NFS mount breaking due to the server reboot, or the server and worker

+ getting confused about running jobs due to the websockets connections

+ being restarted.

+ 

+ If only a worker host needs restarting, there is no need to restart the

+ server too, but it is best to wait until no jobs are running on that

+ host, and stop all `openqa-worker@` services on the host before rebooting

+ it.

+ 

+ There are two ways to check if jobs are running and if so where. You can

+ go to the web UI for the server and click 'All Tests'. If any jobs are

+ running, you can open each one individually (click the link in the

+ 'Test' column) and look at the 'Assigned worker', which will tell you

+ which host the job is running on. Or, if you have admin access, you can

+ go to the admin menu (top right of the web UI, once you are logged in)

+ and click on 'Workers', which will show the status of all known workers

+ for that server, and select 'Working' in the state filter box. This will

+ show all workers currently working on a job.

+ 

+ Note that if something which would usually be tested (new compose, new

+ critpath update...) appears during the reboot window, it likely will

+ _not_ be scheduled for testing, as this is done by a fedmsg consumer

+ running on the server. You will need to schedule it for testing manually

+ in this case (see below).

+ 

+ == Scheduling jobs manually

+ 

+ While it is not normally necessary, you may sometimes need to run or

+ re-run jobs manually.

+ 

+ The simplest cases can be handled by an admin from the web UI: for a

+ logged-in admin, all scheduled and running tests can be cancelled (from

+ various views), and all completed tests can be restarted. 'Restarting' a

+ job actually effectively clones it and schedules the clone to be run: it

+ creates a new job with a new job ID, and the previous job still exists.

+ openQA attempts to handle complex cases of inter-dependent jobs

+ correctly when restarting, but doesn't always manage to do it right;

+ when it goes wrong, the best thing to do is usually to re-run all jobs

+ for that medium.

+ 

+ To run or re-run the full set of tests for a compose or update, you can

+ use the `fedora-openqa` CLI. To run or re-run tests for a compose, use:

+ 

+ ....

+ fedora-openqa compose -f (COMPOSE LOCATION)

+ ....

+ 

+ where `(COMPOSE LOCATION)` is the full URL of the `/compose`

+ subdirectory of the compose. This will only work for Pungi-produced

+ composes with the expected productmd-format metadata, and a couple of

+ other quite special cases.

+ 

+ The `-f` argument means 'force', and is necessary to re-run tests:

+ usually, the scheduler will refuse to re-schedule tests that have

+ already run, and `-f` overrides this.

+ 

+ To run or re-run tests for an update, use:

+ 

+ ....

+ fedora-openqa update -f (UPDATEID) (RELEASE)

+ ....

+ 

+ where `(UPDATEID)` is the update's ID - something like

+ `FEDORA-2018-blahblah` - and `(RELEASE)` is the release for which the

+ update is intended (27, 28, etc).

+ 

+ To run or re-run only the tests for a specific medium (usually a single

+ image file), you must use the lower-level web API client, with a more

+ complex syntax. The command looks something like this:

+ 

+ ....

+ /usr/share/openqa/script/client isos post \

+ ISO=Fedora-Server-dvd-x86_64-Rawhide-20180108.n.0.iso DISTRI=fedora VERSION=Rawhide \

+ FLAVOR=Server-dvd-iso ARCH=x86_64 BUILD=Fedora-Rawhide-20180108.n.0 CURRREL=27 PREVREL=26 \

+ RAWREL=28 IMAGETYPE=dvd SUBVARIANT=Server \

+ LOCATION=http://kojipkgs.fedoraproject.org/compose/rawhide/Fedora-Rawhide-20180108.n.0/compose

+ ....

+ 

+ The `ISO` value is the filename of the image to test (it may not

+ actually be an ISO), the `DISTRI` value is always 'fedora', the

+ `VERSION` value should be the release number or 'Rawhide', the `FLAVOR`

+ value depends on the image being tested (you can check the value from an

+ existing test for the same or a similar ISO), the `ARCH` value is the

+ arch of the image being tested, the `BUILD` value is the compose ID,

+ `CURRREL` should be the release number of the current Fedora release at

+ the time the test is run, `PREVREL` should be one lower than `CURRREL`,

+ `RAWREL` should be the release number associated with Rawhide at the

+ time the test is run, `IMAGETYPE` depends on the image being tested

+ (again, check a similar test for the correct value), `LOCATION` is the

+ URL to the /compose subdirectory of the compose location, and

+ `SUBVARIANT` again depends on the image being tested. Please ask for

+ help if this seems too daunting. To re-run the 'universal' tests on a

+ given image, set the `FLAVOR` value to 'universal', then set all other

+ values as appropriate to the chosen image. The 'universal' tests are

+ only likely to work at all correctly with DVD or netinst images.

+ 

+ openQA provides a special script for cloning an existing job but

+ optionally changing one or more variable values, which can be useful in

+ some situations. Using it looks like this:

+ 

+ ....

+ /usr/share/openqa/script/clone_job.pl --skip-download --from localhost 123 RAWREL=28

+ ....

+ 

+ to clone job 123 with the `RAWREL` variable set to '28', for instance.

+ For interdependent jobs, you may or may not want to use the

+ `--skip-deps` argument to avoid re-running the cloned job's parent

+ job(s), depending on circumstances.

+ 

+ == Manual updates

+ 

+ In general updates to any of the components of the deployments should be

+ handled via ansible: push the changes out in the appropriate way (git

+ repo update, package update, etc.) and then run the ansible plays.

+ However, sometimes we do want to update or test a change to something

+ manually for some reason. Here are some notes on those cases.

+ 

+ For updating openQA and/or os-autoinst packages: ideally, ensure no jobs

+ are running. Then, update all installed subpackages on the server. The

+ server services should be automatically restarted as part of the package

+ update. Then, update all installed subpackages on the worker hosts, and

+ restart all worker services. A 'for' loop can help with that, for

+ instance:

+ 

+ ....

+ for i in {1..10}; do systemctl restart openqa-worker@$i.service; done

+ ....

+ 

+ on a host with ten worker instances.

+ 

+ For updating the openQA tests:

+ 

+ ....

+ cd /var/lib/openqa/share/tests/fedora

+ git pull (or git checkout (branch) or whatever)

+ ./templates --clean

+ ./templates-updates --update

+ ....

+ 

+ The templates steps are only necessary if there are any changes to the

+ templates files.

+ 

+ For updating the scheduler code:

+ 

+ ....

+ cd /root/fedora_openqa

+ git pull (or whatever changes)

+ python setup.py install

+ systemctl restart fedmsg-hub

+ ....

+ 

+ Updating other components of the scheduling process follows the same

+ pattern: update the code or package, then remember to restart

+ fedmsg-hub, or the fedmsg consumers won't use the new code. It's

+ relatively common for the openQA instances to need fedfind updates

+ before those updates are pushed to stable: for example, when a new

+ compose type is invented and fedfind doesn't understand it, openQA can

+ end up trying to schedule tests for it, or the scheduler consumer can

+ crash. When this happens we have to fix and update fedfind on the

+ openQA instances ASAP.

+ 
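
+ A minimal sketch of such a manual fedfind update (the package name

+ below is an assumption; adjust to however fedfind is installed on the

+ instance):

+ 

+ ....

+ # package name is an assumption

+ dnf update python3-fedfind

+ systemctl restart fedmsg-hub

+ ....

+ 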

+ == Logging

+ 

+ Just about all useful logging information for all aspects of openQA and

+ the scheduling and report tools is logged to the journal, except that

+ the Apache server logs may be of interest in debugging issues related to

+ accessing the web UI or websockets server. To get more detailed logging

+ from openQA components, change the logging level in

+ `/etc/openqa/openqa.ini` from 'info' to 'debug' and restart the relevant

+ services. Any run of the Ansible plays will reset this back to 'info'.

+ 
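
+ For illustration, the relevant setting in `/etc/openqa/openqa.ini` looks

+ roughly like this excerpt (assumed; check the file on the host for the

+ exact section name):

+ 

+ ....

+ [logging]

+ level = debug

+ ....

+ 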

+ Occasionally the test execution logs may be useful in figuring out why

+ all tests are failing very early, or some specific tests are failing due

+ to an asset going missing, etc. Each job's execution logs can be

+ accessed through the web UI, on the _Logs & Assets_ tab of the job page;

+ the files are `autoinst-log.txt` and `worker-log.txt`.

+ 

+ == Dynamic asset generation (createhdds)

+ 

+ Some of the hard disk image file 'assets' used by the openQA tests are

+ created by a tool called `createhdds`, which is checked out of a git

+ repo to `/root/createhdds` on the servers and also on some guests. This

+ tool uses `virt-install` and the Python bindings for `libguestfs` to

+ create various hard disk images the tests need to run. It is usually run

+ in two different ways. The ansible plays run it in a mode where it will

+ only create expected images that are entirely missing: this is mainly

+ meant to facilitate initial deployment. The plays also install a file to

+ `/etc/cron.daily` causing it to be run daily in a mode where it will

+ also recreate images that are 'too old' (the age-out conditions for

+ images are part of the tool itself).

+ 
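
+ When investigating a failed or missing image, a safe starting point is

+ to see how the daily run invokes the tool; the cron file glob and script

+ name below are assumptions:

+ 

+ ....

+ # the cron file glob and script name are assumptions

+ cat /etc/cron.daily/*createhdds*

+ cd /root/createhdds && ./createhdds.py --help

+ ....

+ 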

+ This process isn't 100% reliable; `virt-install` can sometimes fail,

+ either just quasi-randomly or every time, in which case the cause of the

+ failure needs to be figured out and fixed so the affected image can be

+ (re-)built.

+ 

+ The i686 and x86_64 images for each instance are built on the server, as

+ its native arch is x86_64. The images for other arches are built on one

+ worker host for each arch (nominated by inclusion in an ansible

+ inventory group that exists for this purpose); those hosts have write

+ access to the NFS share for this purpose.

+ 

+ == Compose check reports (check-compose)

+ 

+ An additional ansible role runs on each openQA server, called

+ `check-compose`. This role installs a tool (also called `check-compose`)

+ and an associated fedmsg consumer. The consumer kicks in when all openQA

+ tests for any compose finish, and uses the `check-compose` tool to send

+ out an email report summarizing the results of the tests (well, the

+ production server sends out emails, the staging server just logs the

+ contents of the report). This role isn't really a part of openQA proper,

+ but is run on the openQA servers as it seems like as good a place as any

+ to do it. As with all other fedmsg consumers, if making manual changes

+ or updates to the components, remember to restart `fedmsg-hub` service

+ afterwards.

+ 

+ == Autocloud ResultsDB forwarder (autocloudreporter)

+ 

+ An ansible role called `autocloudreporter` also runs on the openQA

+ production server. This has nothing to do with openQA at all, but is run

+ there for convenience. This role deploys a fedmsg consumer that listens

+ for fedmsgs indicating that Autocloud (a separate automated test system

+ which tests cloud images) has completed a test run, then forwards those

+ results to ResultsDB.

@@ -0,0 +1,73 @@ 

+ = OpenShift SOP

+ 

+ OpenShift is used in Fedora Infrastructure to host a number of

+ applications. This SOP is applicable to the OpenShift cluster and not

+ the application running on it.

+ 

+ Production instance: https://os.fedoraproject.org/

+ 

+ Staging instance: https://os.stg.fedoraproject.org/

+ 

+ == Contact information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Persons::

+   .oncall

+ Servers::

+   * os-master01.iad2.fedoraproject.org

+   * os-master02.iad2.fedoraproject.org

+   * os-master03.iad2.fedoraproject.org

+   * os-node01.iad2.fedoraproject.org

+   * os-node02.iad2.fedoraproject.org

+   * os-node03.iad2.fedoraproject.org

+   * os-node04.iad2.fedoraproject.org

+   * os-node05.iad2.fedoraproject.org

+ Purpose::

+   Run Fedora Infrastructure applications

+ 

+ == Things that could go wrong

+ 

+ === Application build is stuck

+ 

+ If an application build seems stuck, it generally helps to restart the

+ docker service on the node used for the build.

+ 

+ First check which builds are currently running on the cluster:

+ 

+ ....

+ [os-master01] # oc get builds --all-namespaces | grep -i running

+ ....

+ 

+ If the builds seem stuck (i.e. running for more than 20 minutes), check

+ which nodes they are scheduled on.

+ 

+ Let's take a bodhi build for example:

+ 

+ ....

+ [os-master01] # oc -n bodhi get builds

+ 

+ [os-master01] # oc -n bodhi describe build bodhi-base-49 | grep os-node

+ ....

+ 

+ Once you have identified which node the build is running on, you can

+ restart the docker service on this node:

+ 

+ ....

+ [os-node02] # systemctl restart docker

+ ....

+ 

+ You can start a new build:

+ 

+ ....

+ [os-master01] # oc -n bodhi start-build bodhi-base

+ ....

+ 

+ Finally, you can check whether any more builds are stuck. If that's the

+ case, just repeat these steps:

+ 

+ ....

+ [os-master01] # oc get builds --all-namespaces

+ ....

@@ -0,0 +1,143 @@ 

+ = OpenVPN SOP

+ 

+ OpenVPN is our server->server VPN solution. It is deployed in a

+ routeless manner and uses ansible-managed keys for authentication. All

+ hosts should be given static IPs and a _hostname.vpn.fedoraproject.org_

+ DNS address.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main

+ Servers::

+   bastion (vpn.fedoraproject.org)

+ Purpose::

+   Provides vpn solution for our infrastructure.

+ 

+ == Add a new host

+ 

+ === Create/sign the keys

+ 

+ From batcave01 check out the private repo:

+ 

+ ....

+ # This is to ensure that the clone is not world-readable at any point.

+ RESTORE_UMASK=$(umask -p)

+ umask 0077

+ git clone /srv/git/ansible-private

+ $RESTORE_UMASK

+ cd ansible-private/vpn/openvpn

+ ....

+ 

+ Next prepare your environment and run the build-key script. This example

+ is for host "proxy4.fedora.iad2.redhat.com":

+ 

+ ....

+ . ./vars

+ ./build-key $FQDN # ./revoke-full $FQDN to revoke keys that are no longer used.

+ git add .

+ git commit -a

+ git push

+ ....

+ 

+ === Create Static IP

+ 

+ Giving static IPs out in OpenVPN is mostly painless. Take a look at

+ other examples; each host gets a file and 2 IPs:

+ 

+ ....

+ git clone https://pagure.io/fedora-infra/ansible.git

+ vi ansible/roles/openvpn/server/files/ccd/$FQDN

+ ....

+ 

+ The file format should look like this:

+ 

+ ....

+ ifconfig-push 192.168.1.314 192.168.0.314

+ ....

+ 

+ Basically, the first IP is the one that is contactable over the VPN and

+ should always take the format "192.168.1.x", and the point-to-point IP

+ (PtP IP) is the same IP on a different network: "192.168.0.x".

+ 

+ Commit and install:

+ 

+ ....

+ git add .

+ git commit -m "What have you done?"

+ git push

+ ....

+ 

+ And then push that out to bastion:

+ 

+ ....

+ sudo -i ansible-playbook $(pwd)/playbooks/groups/bastion.yml -t openvpn

+ ....

+ 

+ === Create DNS entry

+ 

+ After you have your static IP ready, just add the entry to DNS:

+ 

+ ....

+ git clone /srv/git/dns && cd dns

+ vi master/168.192.in-addr.arpa

+ # pick out an ip that's unused

+ vi master/vpn.fedoraproject.org

+ git commit -m "What have you done?"

+ ./do-domains

+ git commit -m "done build."

+ git push

+ ....

+ 

+ And push that out to the name servers with:

+ 

+ ....

+ sudo -i ansible ns\* -a "/usr/local/bin/update-dns"

+ ....

+ 

+ === Update resolv.conf on the client

+ 

+ To make sure traffic actually goes over the VPN, make sure the search

+ line in `/etc/resolv.conf` looks like:

+ 

+ ....

+ search vpn.fedoraproject.org fedoraproject.org

+ ....

+ 

+ for external hosts and:

+ 

+ ....

+ search iad2.fedoraproject.org vpn.fedoraproject.org fedoraproject.org

+ ....

+ 

+ for IAD2 hosts.

+ 
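
+ Once the client has reconnected with its static IP, a quick sanity check

+ from another VPN host is to ping the new DNS name (substitute the real

+ hostname):

+ 

+ ....

+ ping -c 3 <hostname>.vpn.fedoraproject.org

+ ....

+ 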

+ == Remove a host

+ 

+ ....

+ # This is to ensure that the clone is not world-readable at any point.

+ RESTORE_UMASK=$(umask -p)

+ umask 0077

+ git clone /srv/git/ansible-private

+ $RESTORE_UMASK

+ cd ansible-private/vpn/openvpn

+ ....

+ 

+ Next, prepare your environment and run the revoke-full script. This example

+ is for host "proxy4.fedora.iad2.redhat.com":

+ 

+ ....

+ . ./vars

+ ./revoke-full $FQDN

+ git add .

+ git commit -a

+ git push

+ ....

+ 

+ == TODO

+ 

+ Deploy an additional VPN server outside of IAD2. OpenVPN does support

+ failover automatically so if configured properly, when the primary VPN

+ server goes down all hosts should connect to the next host in the list.

@@ -0,0 +1,170 @@ 

+ = Orientation Infrastructure SOP

+ 

+ Basic orientation and introduction to the sysadmin group. Welcome

+ aboard!

+ 

+ == Contents

+ 

+ [arabic]

+ . <<_contact_information>>

+ . <<_description>>

+ . Welcome to the team

+ .. <<_time_commitment>>

+ .. <<_prove_yourself>>

+ . <<_doing_work>>

+ .. <<_ansible>>

+ . <<_our_setup>>

+ . <<_our_rules>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main

+ Purpose::

+   Provide basic orientation and introduction to the sysadmin group

+ 

+ == Description

+ 

+ Fedora's Infrastructure team is charged with keeping all the lights on,

+ improving pain points, expanding services, designing new services and

+ partnering with other teams to help with their needs. The team is highly

+ dynamic and primarily based in the US. This is only significant in that

+ most of us work during the day in US time. We do have team members all

+ over the globe though and generally have decent coverage. If you happen

+ to be one of those who is not in a traditional US time zone you are

+ encouraged to be around, especially in #fedora-admin during those times

+ when we have less coverage. Even if it is just to say "I can't help with

+ that, but $ADMIN can, and they should be here in about 3 hours".

+ 

+ The team itself is generally friendly and honest. Don't be afraid to

+ disagree with someone, even if you're new and they're an old timer. Just

+ make sure you ask yourself what is important to you and make sure to

+ provide data, we like that. We generally communicate on irc.libera.chat

+ in #fedora-admin. We have our weekly meetings on IRC and it's the

+ quickest way to get in touch with everyone. Secondary to that we use the

+ mailing list. After that it's our ticketing system and

+ talk.fedoraproject.org.

+ 

+ _Welcome to the team!_

+ 

+ == Time commitment

+ 

+ Oftentimes this is the biggest reason for turnover in our group. Some

+ groups like sysadmin-web and certainly sysadmin-main require a huge time

+ commitment. Don't be surprised if you see people working between 10-30

+ hours a week on various tasks, and that's the volunteers. Your time

+ commitment is something personal to each individual, and it's something

+ you should give serious thought to. In general it's almost

+ impossible to be a regular part of the team without at least 5-10 hours

+ a week dedicated to the Infrastructure team.

+ 

+ Also note, if you are going to be away, let us know. As a volunteer we

+ can't possibly ask you to always be around all the time. Even if you're

+ in the middle of a project and have to stop, let us know. Nothing is

+ worse than thinking someone is working on something or will be around

+ and they're just not. Really, we all understand. Got a test coming up?

+ Busier at work than normal? Going on a vacation? It doesn't matter, just

+ let us know when you're going to be gone and what you're working on so

+ it doesn't get forgotten.

+ 

+ Additionally, don't forget that it's worth discussing with your

+ employer giving time during work. They may be all for it.

+ 

+ == Prove Yourself

+ 

+ This is one of the most difficult aspects of getting involved with our

+ team. We can't just give access to everyone who asks for it and often

+ actually doing work without access is difficult. Some of the best things

+ you can do are:

+ 

+ * Keep bugging people for work. It shows you're committed.

+ * Go through bugs, look at stale bugs and close bugs that have been

+ fixed

+ * Try to duplicate bugs on your workstation and fix them there

+ 

+ Above all stick with it. Part of proving yourself is also to show the

+ time commitment it actually does take.

+ 

+ == Doing Work

+ 

+ Once you've been sponsored for a team, it's generally your job to find

+ what work needs to be done in the ticketing system. Be proactive about

+ this. The tickets can be found at:

+ 

+ https://pagure.io/fedora-infrastructure/issues

+ 

+ When you find a ticket that interests you contact your sponsor or the

+ ticket owner and offer help. While you're getting used to the way things

+ work, don't be put off by someone saying no or that you can't work on that.

+ It happens; sometimes it's a security thing, sometimes it's an "I'm halfway

+ through it and I'm not happy with where it is" thing. Just move on to

+ the next ticket and go from there.

+ 

+ Also, don't be surprised if some of the work involved includes testing on

+ your own workstation. Just set up a virtual environment and get to work!

+ There's a lot of work that can be done to prove yourself that involves

+ no access at all. Doing this kind of work is a surefire way to get into

+ more groups and get more involved. Don't be afraid to take on tasks

+ you don't already know how to do, but don't take on something you know

+ you won't be able to do. Ask for help when you need it, and keep in

+ contact with your sponsor.

+ 

+ == Ansible

+ 

+ The things we do get done in Ansible. It is important that you not make

+ changes directly on servers; this is for many reasons, but just always

+ make changes in Ansible. If you want to get more familiar with Ansible,

+ set it up yourself and give it a try. The docs are available at

+ https://docs.ansible.com/

+ 

+ == Our Setup

+ 

+ Most of our work is done via bastion.fedoraproject.org. That host has

+ access to our other hosts, many of which are all over the globe. We have

+ a VPN solution set up so that knowing where the servers physically are is

+ only important when troubleshooting things. When you first get granted

+ access to one of the sysadmin-* groups, the first place you should turn

+ is bastion.fedoraproject.org; from there, ssh to batcave01.

+ 
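
+ As an illustration, an `~/.ssh/config` stanza along these lines routes

+ batcave01 access through bastion (the username and exact host names are

+ assumptions; adjust for your FAS account):

+ 

+ ....

+ Host batcave01.iad2.fedoraproject.org

+     User yourfasusername

+     ProxyJump bastion.fedoraproject.org

+ ....

+ 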

+ We also have an architecture repo available in our git repo. To get a

+ copy of this repo just:

+ 

+ ....

+ dnf install git

+ git clone https://pagure.io/fedora-infrastructure.git

+ ....

+ 

+ This will allow you to look through (and help fix) some of our scripts

+ as well as have access to our architectural documentation. Become

+ familiar with those docs if you're curious. There's always room to do

+ better documentation so if you're interested just ping your sponsor and

+ ask about it.

+ 

+ == Our Rules

+ 

+ The Fedora Infrastructure Team does have some rules. First is the

+ security policy. Please ensure you are compliant with:

+ 

+ https://infrastructure.fedoraproject.org/csi/security-policy/en-US/html-single/

+ 

+ before logging in to any of our servers. Many of those items rely on the

+ honor system.

+ 

+ Additionally note that any of the software we deploy must be available

+ in Fedora. There are some rare exceptions to this (particularly as it

+ relates to applications specific to Fedora), but each exception is taken

+ on a case by case basis.

@@ -0,0 +1,249 @@ 

+ = Outage Infrastructure SOP

+ 

+ What to do when there's an outage or when you're planning to take an

+ outage.

+ 

+ == Contents

+ * <<_contact_information>>

+ * <<_users_no_access>>

+ ** <<_planned_outage>>

+ *** <<_contacts>>

+ ** <<_unplanned_outage>>

+ *** <<_check_first>>

+ *** <<_reporting_or_participating_in_an_outage>>

+ * <<_infrastructure_members_admin_access>>

+ ** <<_planned_outage>>

+ *** <<_planning>>

+ *** <<_preparations>>

+ *** <<_outage>>

+ *** <<_post_outage_cleanup>>

+ ** <<_unplanned_outage>>

+ *** <<_determine_severity>>

+ *** <<_first_steps>>

+ *** <<_fix_it>>

+ *** <<_escalate>>

+ *** <<_the_resolution>>

+ *** <<_the_aftermath>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main group

+ Location::

+   Anywhere

+ Servers::

+   Any

+ Purpose::

+   This SOP is generic for any outage

+ 

+ == Users (No Access)

+ 

+ [NOTE]

+ ====

+ Don't have shell access? Doesn't matter. Stop by and stay in

+ #fedora-admin if you have any expertise in what is going on, please

+ assist. Random users have helped the team out countless numbers of

+ times. Any time the team doesn't have to go to the docs to look up an

+ answer is a time they can be spending fixing what's busted.

+ ====

+ 

+ === Planned Outage

+ 

+ If a planned outage comes at a terrible time, just let someone know. The

+ Infrastructure Team does its best to keep outages out of the way but if

+ there's a mass rebuild going on that we don't know about and we schedule

+ a koji outage, let someone know.

+ 

+ ==== Contacts

+ 

+ Pretty much all coordination occurs in #fedora-admin on

+ https://libera.chat/[libera.chat]. Stop by there to watch more about what's going on.

+ Just stay on topic.

+ 

+ === Unplanned Outage

+ 

+ ==== Check first

+ 

+ Think something is busted? Please check with others to see if they are

+ also having issues. This could even include checking on another

+ computer. When reporting an outage remember that the admins will

+ typically drop everything they are doing to check what the problem is.

+ They won't be happy to find out your cert has expired or you're using

+ the wrong username. Additionally, check the status dashboard

+ (http://status.fedoraproject.org) to verify that there is no previously

+ reported outage that may be causing and/or related to your issue.

+ 

+ ==== Reporting or participating in an outage

+ 

+ If you think you've found an outage, get as much information as you can

+ about it at a glance. Copy any errors you get to https://paste.centos.org/.

+ Use the following guidelines:

+ 

+ Don't be general::

+   * BAD: "The wiki is acting slow"

+   * Good: "Whenever I try to save

+   https://fedoraproject.org/wiki/Infrastructure, I get a proxy error

+   after 60 seconds"

+ 

+ Don't report an outage that's already been reported::

+   * BAD: "/join #fedora-admin; Is the build system broken?"

+   * Good: "/join #fedora-admin; wait a minute or two; I noticed I can't

+   submit builds, here's the error I get:"

+ 

+ Don't suggest drastic or needless changes during an outage (send it to the list)::

+   * "Why don't you just use lighttpd?"

+   * "You could try limiting MaxRequestsPerChild in Apache"

+ Don't get off topic or be too chatty::

+   * "Transformers was awesome, but yeah, I think you guys know what to

+   do next"

+ 

+ Do research the technologies we're using and answer questions that may come up::

+   * BAD: "Can't you just fix it?"

+   * Good: "Hey guys, I think this is what you're looking for:

+   http://httpd.apache.org/docs/2.2/mod/mod_mime.html#addencoding"

+ 

+ Please try to contact OnCall first. This could be done by typing `.oncall`

+ in #fedora-admin channel.

+ 

+ == Infrastructure Members (Admin Access)

+ 

+ The Infrastructure Members section is specifically written for members

+ with access to the servers. This could be admin access to a box or even

+ a specific web application. Basically anyone with access to fix the

+ problem.

+ 

+ === Planned Outage

+ 

+ Any outage that is intentionally caused by a team member is a planned

+ outage. Even if it has to happen in the next 5 minutes.

+ 

+ ==== Planning

+ 

+ All major planned outages should occur with at least 1 week notice. This

+ is not always possible, use best judgment. Please use our standard

+ outage template at:

+ https://fedoraproject.org/wiki/Infrastructure/OutageTemplate. Make sure

+ to have another person review your template/announcement to check times

+ and services affected. Make sure to send the announcement to the lists

+ that are affected by the outage: announce, devel-announce, etc.

+ 

+ Always create a ticket in the ticketing system:

+ https://fedoraproject.org/wiki/Infrastructure/Tickets Send an email to

+ the fedora-infrastructure-list with more details if warranted.

+ 

+ Remember to follow an existing SOP as much as possible. If anything is

+ missing from the SOP please add it.

+ 

+ ==== Preparations

+ 

+ Remember to schedule an outage in nagios. This is important not just so

+ notifications don't get sent but also important for trending and

+ reporting. https://admin.fedoraproject.org/nagios/

+ 

+ ==== Outage

+ 

+ Prior to beginning an outage to any monitored service on

+ http://status.fedoraproject.org please push an update to reflect the

+ outage (see xref:status-fedora.adoc[status-fedora SOP]).

+ 

+ Report all information in #fedora-admin. Coordination is extremely

+ important, it's rare for our group to meet in person and IRC is our only

+ real-time communication device. If a web site is out please put up some

+ sort of outage page in its place.

+ 

+ ==== Post outage cleanup

+ 

+ Once the outage is over ensure that all services are up and running.

+ Ensure all nagios services are back to green. Notify everyone in

+ #fedora-admin to scan our services for issues. Once all services are

+ cleared update the status.fp.o dashboard. If the outage included a new

+ feature or major change for a group, please notify that group that the

+ change is ready. Make sure to close the ticket for the outage when it's

+ over.

+ 

+ Once the services are restored, an update to the status dashboard should

+ be pushed to show the services are restored.

+ 

+ [IMPORTANT]

+ ====

+ Additionally, update any SOPs that may have changed in the course of the outage

+ ====

+ 

+ === Unplanned Outage

+ 

+ Unplanned outages happen, stay cool. As a team member never be afraid to

+ do something because you think you'll get in trouble over it. Be smart,

+ don't be reckless, and never say "I shouldn't do this". If an unorthodox

+ method or drastic change will fix the problem, do it, document it, and

+ let the team know. Messes can always be cleaned up after the outage.

+ 

+ ==== Determine Severity

+ 

+ Some outages require immediate fixing, some don't. A page should never

+ go out because someone can't sign the CLA. Most of our admins are in US

+ time zones; use your best judgment.

+ 

+ Use the following as loose guidelines, just use your best judgment.

+ 

+ * BAD: "I can't see the Recent Changes on the wiki."

+ * Good: "The entire wiki is not viewable"

+ * BAD: I cannot sign the CLA

+ * Good: I can't change my password in the account system, I have admin

+ access and my laptop was just stolen

+ * BAD: I can't access awstats for fedoraproject.org

+ * Good: The mirrors list is down.

+ * BAD: I think someone misspelled some words on the webpage

+ * Good: The web page has been hacked and I think someone notified

+ slashdot.

+ 

+ ==== First Steps

+ 

+ After an outage has been verified, acknowledge the outage in nagios:

+ https://admin.fedoraproject.org/nagios/, update the related system on

+ the status dashboard (see the xref:status-fedora.adoc[status-fedora SOP])

+ and verify changes at http://status.fedoraproject.org, then head in to

+ #fedora-admin to figure out who is around and coordinate the next course

+ of action. Consult any relevant SOPs for corrective actions.

+ 

+ ==== Fix it

+ 

+ Fix it, Fix it, Fix it! Do whatever needs to be done to fix the problem,

+ just don't be stupid about it.

+ 

+ ==== Escalate

+ 

+ Can't fix it? Don't wait, Escalate! All of the team members have

+ expertise with some areas of our environment and weaknesses in other

+ areas. Never be afraid to tap another team member. Sometimes it's

+ required, sometimes it's not. At present our team is small enough that

+ a full escalation path wouldn't do much good. Consult the contact

+ information on each SOP for more information.

+ 

+ ==== The Resolution

+ 

+ Once the services are restored, an update to the status dashboard should

+ be pushed to show the services are restored.

+ 

+ ==== The Aftermath

+ 

+ With any outage there will be questions. Please try as hard as possible

+ to answer the following questions and send them to the

+ fedora-infrastructure-list.

+ 

+ [arabic]

+ . What happened?

+ . What was affected?

+ . How long was the outage?

+ . What was the root cause?

+ 

+ [IMPORTANT]

+ ====

+ Number 4 is especially important. If a kernel build keeps failing

+ because of issues with koji caused by a database failure caused by a

+ full filesystem on db1, don't say koji died because of a db failure. Any

+ time a root cause is discovered and not being monitored by nagios, add

+ it if possible. Most failures can be prevented or mitigated with proper

+ monitoring.

+ ====

@@ -0,0 +1,37 @@ 

+ = Package Review SOP

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_introduction>>

+ 

+ == Contact Information

+ 

+ Owner::

+   sysadmin-main

+ Contact::

+   #fedora-admin, #fedora-noc or admin@fedoraproject.org

+ Server(s)::

+   sundries01.iad2.fedoraproject.org

+ Purpose::

+   To explain the overall function of this page, where, and how it gets

+   its information.

+ 

+ == Introduction

+ 

+ The Cached Package Review Tracker is used to collect, organize and allow

+ searching through tickets. Organization includes the following

+ _categories_:

+ 

+ * Trivial

+ * New

+ * New EPEL

+ * Need Sponsor

+ * Hidden

+ * Under Review

+ 

+ Each ticket references a source RH Bugzilla bug entry, and the categories

+ of tickets stated above are generated based on multiple field values,

+ for easier viewing and report generation. The page also includes

+ searchable fields, allowing a search by package name, or by email address

+ for reviews, for packages, or for commented reviews.

@@ -0,0 +1,109 @@ 

+ = Pagure Infrastructure SOP

+ 

+ Pagure is a code hosting and management site.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_description>>

+ * <<_when_unresponsive>>

+ * <<_git_repo_locations>>

+ * <<_services_and_what_they_do>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-apps

+ Location::

+   OSUOSL

+ Servers::

+   pagure01, pagure-stg01

+ Purpose::

+   Source code and issue tracking

+ 

+ == Description

+ 

+ Pagure ( https://pagure.io/pagure ) is a source code management and

+ issue tracking application. It's written in Flask. It uses celery,

+ redis, PostgreSQL, and pygit2.

+ 

+ == When unresponsive

+ 

+ Sometimes pagure will stop responding, even though it's still running.

+ You can issue a 'systemctl reload httpd' and that will usually get it

+ running again.

+ 

+ == Git repo locations

+ 

+ * Main repos are in `/srv/git/repositories/<projectname>`

+ * issue/ticket repos are under

+ `/srv/git/repositories/tickets/<projectname>`

+ * Docs are under `/srv/git/repositories/docs/<projectname>`

+ * Releases (not a git repo) are under `/var/www/releases/`

+ 

+ == Services and what they do

+ 

+ * `pagure` service is the main flask application, it runs from httpd wsgi.

+ * `pagure_ci` service talks to jenkins or other CI for testing PR's

+ * `pagure_ev` service talks to websockets and updates issues and comments

+ live for users.

+ * `pagure_loadjson` service takes issues loads from pagure-importer and

+ processes them.

+ * `pagure_logcom` service handles logging.

+ * `pagure_milter` processes email actions.

+ * `pagure_webhook` service processes webhooks to notify about changes.

+ * `pagure_worker` service updates git repos with changes (a quick status check for these services is sketched below).

+ 
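
+ A quick way to check the state of these services (the unit names are

+ assumed to match the list above) is a small loop over systemctl:

+ 

+ ....

+ for svc in pagure_ci pagure_ev pagure_loadjson pagure_logcom pagure_milter pagure_webhook pagure_worker; do

+     echo -n "$svc: "; systemctl is-active "$svc"

+ done

+ ....

+ 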

+ == Useful commands

+ 

+ This section lists commands that can be useful to fix issues encountered

+ every once in a while.

+ 

+ * Recompile the gitolite configuration file

+ +

+ ....

+ # sudo -u git HOME=/srv/git/ gitolite compile && sudo -u git HOME=/srv/git/ gitolite trigger POST_COMPILE

+ ....

+ * Duplicated projects

+ 

+ We have observed that every so often two different workers create a

+ project in the database. This leads to pagure failing to give access to

+ the project as it finds multiple projects with the same namespace/name

+ where it expects these to be unique.

+ 

+ The following two SQL commands allow finding out which projects are in

+ this situation:

+ 

+ ....

+ select user_id, name, namespace, is_fork from projects where is_fork =

+ FALSE group by namespace, name, is_fork, user_id having count(user_id) >

+ 1;

+ 

+ select user_id, name, namespace, is_fork from projects where is_fork =

+ TRUE group by namespace, name, is_fork, user_id having count(user_id) >

+ 1;

+ ....

+ 

+ This will return you the _namespace_/_name_ as well as the _user_id_ of the

+ user who duplicated the projects in the database.

+ 

+ You can then do:

+ 

+ ....

+ select id, user_id, name, namespace, is_fork from projects where name =

+ '<the name of the project>' order by user_id;

+ ....

+ 

+ In that query you will see the project _id_, _user_id_, _name_ and _namespace_

+ of the project. You will see in this one of the projects is listed twice

+ with the same _user_id_ (the one returned in the previous query).

+ 

+ From there, you will have to delete the duplicates (potentially the one

+ with the highest project id).

+ 

+ If the project remains inaccessible, check the Apache logs; it could be

+ that the git repositories have not been created. In that case, the

+ simplest course of action is to delete all the duplicates and let the

+ users re-create the projects as they wish.

@@ -0,0 +1,153 @@ 

+ = PDC SOP

+ 

+ Store metadata about composes we produce and "component groups".

+ 

+ App: https://pdc.fedoraproject.org/

+ 

+ Source for frontend: https://github.com/product-definition-center/product-definition-center

+ 

+ Source for backend: https://github.com/fedora-infra/pdc-updater

+ 

+ == Contact Information

+ 

+ Owner::

+   Release Engineering, Fedora Infrastructure Team

+ Contact::

+   #fedora-apps, #fedora-releng, #fedora-admin, #fedora-noc

+ Servers::

+   pdc-web0\{1,2}, pdc-backend01

+ Purpose::

+   Store metadata about composes and "component groups"

+ 

+ == Description

+ 

+ The Product Definition Center (PDC) is a webapp and API designed for

+ storing and querying product metadata. We automatically populate our

+ instance with data from our existing releng tools/processes. It doesn't

+ do much on its own, but the goal is to enable us to develop more sane

+ tooling down the road for future releases.

+ 

+ The webapp is a django app running on pdc-web0\{1,2}. Unlike most of our

+ other apps, it does not use OpenID for authentication, but it instead

+ uses SAML2. It uses _mod_auth_mellon_ to achieve this (in

+ cooperation with ipsilon). The webapp allows new data to be POST'd to it

+ by admin users.

+ 

+ The backend is a _fedmsg-hub_ process running on

+ _pdc-backend01_. It listens for new composes over fedmsg and then POSTs

+ data about those composes to PDC. It also listens for changes to the

+ fedora atomic host git repo in pagure and updates "component groups" in

+ PDC to reflect what rpm components constitute fedora atomic host.

+ 

+ For long-winded history and explanation, see the original Change

+ document: https://fedoraproject.org/wiki/Changes/ProductDefinitionCenter

+ 

+ [NOTE]

+ ====

+ PDC is being replaced by fpdc (Fedora Product Definition Center)

+ ====

+ 

+ == Upgrading the Software

+ 

+ There is an upgrade playbook in `playbooks/manual/upgrade/pdc.yml` which

+ will upgrade both the frontend and the backend if new packages are

+ available. Database schema upgrades should be handled automatically with

+ a run of that playbook.

+ 
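
+ Following the same `rbac-playbook` convention used elsewhere in these

+ SOPs, a run of that playbook would look roughly like:

+ 

+ ....

+ sudo rbac-playbook manual/upgrade/pdc.yml

+ ....

+ 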

+ == Logs

+ 

+ Logs for the frontend are in `/var/log/httpd/error_log` on

+ pdc-web0\{1,2}.

+ 

+ Logs for the backend can be accessed with

+ `journalctl -u fedmsg-hub -f` on _pdc-backend01_.

+ 

+ == Restarting Services

+ 

+ The frontend runs under apache. So either `apachectl graceful`

+ or `systemctl restart httpd` should do it.

+ 

+ The backend runs as a _fedmsg-hub_, so

+ `systemctl restart fedmsg-hub` should restart it.

+ 

+ == Scripts

+ 

+ The _pdc-updater_ package (installed on _pdc-backend01_) provides three

+ scripts:

+ 

+ * `pdc-updater-audit`

+ * `pdc-updater-retry`

+ * `pdc-updater-initialize`

+ 

+ A possible failure scenario is that we will lose a fedmsg message and

+ the backend will not update the frontend with info about that compose.

+ To detect this, we provide the `pdc-updater-audit` command

+ (which gets run once daily by cron with emails sent to the releng-cron

+ list). It compares all of the entries in PDC with all of the entries in

+ kojipkgs and then raises an alert if there is a discrepancy.

+ 

+ Another possible failure scenario is that the fedmsg message is

+ published and received correctly, but there is some processing error

+ while handling it. The event occurred, but the import to the PDC db

+ failed. The `pdc-updater-audit` script should detect this

+ discrepancy, and then an admin will need to manually repair the problem

+ and retry the event with the `pdc-updater-retry` command.

+ 

+ If doomsday occurs and the whole thing is totally hosed, you can delete

+ the db and re-ingest all information available from releng with the

+ `pdc-updater-initialize` tool. (Creating the initial schema needs to

+ happen on pdc-web01 with the standard django settings.py commands.)

+ 

+ == Manually Updating Information

+ 

+ In general, you shouldn't have to do these things. `pdc-updater` will

+ automatically create new releases and update information, but if you

+ ever need to manipulate PDC data, you can do it with the _pdc-client_

+ tool. A copy is installed on _pdc-backend01_ and there are some

+ credentials there you'll need, so ssh there first.

+ 

+ Make sure that you are root so that you can read

+ `/etc/pdc.d/fedora.json`.

+ 

+ Try listing all of the releases:

+ 

+ ....

+ $ pdc -s fedora release list

+ ....

+ 

+ Deactivating an EOL release:

+ 

+ ....

+ $ pdc -s fedora release update fedora-21-updates --deactivate

+ ....

+ 

+ [NOTE]

+ ====

+ There are lots more attributes you can manipulate on a release (you can

+ change the type, and rename them, etc..) See `pdc --help`

+ and `pdc release --help` for more information.

+ ====

+ 

+ Listing all composes:

+ 

+ ....

+ $ pdc -s fedora compose list

+ ....

+ 

+ We're not sure yet how to flag a compose as the Gold compose, but when

+ we do, the answer should appear here:

+ https://github.com/product-definition-center/product-definition-center/issues/428

+ 

+ == Adding superusers

+ 

+ A small group of release engineers needs to be superusers to set EOL

+ dates and add/remove components. You can grant them permissions to do

+ this via some direct database calls. First find out their email address

+ listed in fas, then login to _db01.iad2.fedoraproject.org_:

+ 

+ ....

+ sudo -u postgres psql pdc

+ pdc=# update kerb_auth_user set is_superuser = 'true' where email = 'usersemailfromfas';

+ ....

+ 

+ The user will now have privs with their normal tokens.

@@ -0,0 +1,62 @@ 

+ = Pesign upgrades/reboots

+ 

+ Fedora has (currently) 2 special builders. These builders are used to

+ build a small set of packages that need to be signed for secure boot.

+ These packages include: _grub2_, _shim_, _kernel_, _pesign-test-app_

+ 

+ When rebooting or upgrading pesign on these machines, you have to follow

+ a special process to unlock the signing keys.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Release Engineering, Kernel/grub2/shim/pesign maintainers

+ Contact::

+   #fedora-admin, #fedora-kernel

+ Servers::

+   bkernel01, bkernel02

+ Purpose::

+   Upgrade or restart signing keys on kernel/grub2/shim builders

+ 

+ == Procedure

+ 

+ [arabic]

+ . Coordinate with pesign maintainers or _pesign-test-app_

+ committers as well as releng folks that have the pin to unlock the

+ signing key.

+ 

+ . Remove builder from koji:

+ +

+ ....

+ koji disable-host bkernel01.iad2.fedoraproject.org

+ ....

+ . Make sure all builds have completed.

+ . Stop existing processes:

+ +

+ ....

+ service pcscd stop

+ service pesign stop

+ ....

+ . Perform updates or reboots.

+ . Restart services (if you didn't reboot):

+ +

+ ....

+ service pcscd start

+ service pesign start

+ ....

+ . Unlock signing key:

+ +

+ ....

+ pesign-client -t "OpenSC Card (Fedora Signer)" -u

+ (enter pin when prompted)

+ ....

+ . Make sure no builds are in progress, then re-add the builder to koji

+ and remove the other builder:

+ +

+ ....

+ koji enable-host bkernel01.iad2.fedoraproject.org

+ koji disable-host bkernel02.iad2.fedoraproject.org

+ ....

+ . Have a committer send a build of pesign-test-app and make sure it's

+ signed correctly.

+ . If so, repeat process with second builder.

@@ -0,0 +1,56 @@ 

+ = Planet Subgroup Infrastructure SOP

+ 

+ Fedora's planet infrastructure produces planet configs out of users'

+ `~/.planet` files in their homedirs on fedorapeople.org. You can also

+ create subgroups of users into other planets. This document explains how

+ to setup new subgroups.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Servers::

+   batcave01/ planet.fedoraproject.org

+ Purpose::

+   provide easy setup of new planet groups on planet.fedoraproject.org

+ 

+ To set up a new planet subgroup, do the following:

+ 

+ == The Setup

+ 

+ [arabic]

+ . On batcave01:

+ +

+ ....

+ cp -a configs/system/planet/grouptmpl configs/system/planet/newgroupname

+ ....

+ . cd to the new directory

+ . Run:

+ +

+ ....

+ perl -pi -e "s/%%groupname/newgroupname/g" fpbuilder.conf base_config planet-group.cron templates/* 

+ ....

+ +

+ Replacing newgroupname with the groupname you want

+ 

+ . git add the whole dir

+ . Edit `manifests/services/planet.pp`

+ . Copy and paste everything from beginning to end of the design team

+ group, to use as a template.

+ . Modify what you copied replacing design with the new group name

+ . Save it

+ . Check everything in

+ . Run ansible on planet and check if it works

+ 

+ == Use

+ 

+ Tell the requester to then copy their current `.planet` file to

+ `.planet.newgroupname`. For example with the design team:

+ 

+ ....

+ cp ~/.planet ~/.planet.design

+ ....

+ 

+ This will then show up on the new feed http://planet.fedoraproject.org/design/

@@ -0,0 +1,82 @@ 

+ = Fedora Infrastructure Machine Classes

+ 

+ == Contact Information

+ 

+ Owner::

+   sysadmin-main, application developers

+ Contact::

+   sysadmin-main

+ Location::

+   Everywhere we have machines.

+ Servers::

+   publictest, dev, staging, production

+ Purpose::

+   Explain our use of various types of machines.

+ 

+ == Introduction

+ 

+ This document explains what the various types of machines are used for

+ in the life cycle of providing an application or resource.

+ 

+ == Public Test machines

+ 

+ publictest instances are used for early investigation into a resource or

+ application. At this stage the application might not be packaged yet,

+ and we want to see if it's worth packaging and starting it on the

+ process to be available in production. These machines are accessible to

+ anyone in the sysadmin-test group, and coordination of use of instances

+ is done on an ad-hoc basis. These machines are re-installed every cycle

+ cleanly, so all work must be saved before this occurs.

+ 

+ Authentication must not be against the production fas server. We have

+ _fakefas.fedoraproject.org_ setup for these systems instead.

+ 

+ [NOTE]

+ ====

+ We're planning on merging publictest into the development servers.

+ Environment-wise they'll be mostly the same (one service per machine, a

+ group to manage them, no proxy interaction, etc) Service by service

+ we'll assign timeframes to the machines before being rebuilt,

+ decommissioned if no progress, etc.

+ ====

+ == Development

+ 

+ These instances are for applications that are packaged and being

+ investigated for deployment. Typically packages and config files are

+ modified locally to get the application or resource working. No caching

+ or proxies are used. Access is to a specific sysadmin group for that

+ application or resource. These instances can be re-installed on request

+ to 'start over' getting configuration ready.

+ 

+ Some services hosted on dev systems are for testing new programs. These

+ will usually be associated with an RFR and have a limited lifetime

+ before the new service has to prove itself worthy of continued testing,

+ to be moved on to stg, or have the machine decommissioned. Other

+ services are for developing existing services. They are handy if the

+ setup of the service is tricky or lengthy and the person in charge wants

+ to maintain the .dev server so that newer contributors don't have to

+ perform that setup in order to work on the service.

+ 

+ Authentication must not be against the production fas server. We have

+ _fakefas.fedoraproject.org_ setup for these systems instead.

+ 

+ [NOTE]

+ ====

+ fakefas will be renamed fas01.dev at some point in the future

+ ====

+ 

+ == Staging

+ 

+ These instances are used to integrate the application or resource into

+ ansible as well as proxy and caching setups. These instances should use

+ ansible to deploy all parts of the application or resource possible.

+ Access to these instances is only to a sysadmin group for that

+ application, who may or may not have sudo access. Permissions on stg

+ mirror permissions on production (for instance, sysadmin-web would have

+ access to the app servers in stg the same as production).

+ 

+ == Production

+ 

+ These instances are used to serve the ready for deployment application

+ to the public. All changes are done via ansible and access is

+ restricted. Changes should be done here only after testing in staging.

@@ -0,0 +1,129 @@ 

+ = RabbitMQ SOP

+ 

+ https://www.rabbitmq.com/[RabbitMQ] is the message broker Fedora uses to allow applications

+ to send each other (or themselves) messages.

+ 

+ == Contact Information

+ 

+ === Owner

+ 

+ Fedora Infrastructure Team

+ 

+ === Contact

+ 

+ #fedora-admin

+ 

+ === Servers

+ 

+ * rabbitmq0[1-3].iad2.fedoraproject.org

+ * rabbitmq0[1-3].stg.iad2.fedoraproject.org

+ 

+ === Purpose

+ 

+ General purpose publish-subscribe message broker as well as

+ application-specific messaging.

+ 

+ == Description

+ 

+ RabbitMQ is a message broker written in Erlang that offers a number of

+ interfaces including AMQP 0.9.1, AMQP 1.0, STOMP, and MQTT. At this time

+ only AMQP 0.9.1 is made available to clients.

+ 

+ Fedora uses the RabbitMQ packages provided by the Red Hat Openstack

+ repository as it has a more up-to-date version.

+ 

+ === The Cluster

+ 

+ RabbitMQ supports https://www.rabbitmq.com/clustering.html[clustering]

+ a set of hosts into a single logical

+ message broker. The Fedora cluster is composed of 3 nodes,

+ rabbitmq01-03, in both staging and production. `groups/rabbitmq.yml` is

+ the playbook that deploys the cluster.

+ 

+ === Virtual Hosts

+ 

+ The cluster contains a number of virtual hosts. Each virtual host has

+ its own set of resources - exchanges, bindings, queues - and users are

+ given permissions by virtual host.

+ 

+ ==== /pubsub

+ 

+ The /pubsub virtual host is the generic publish-subscribe virtual host

+ used by most applications. Messages published via AMQP are sent to the

+ "amq.topic" exchange. Messages being bridged from fedmsg into AMQP are

+ sent via "zmq.topic".

+ 

+ ==== /public_pubsub

+ 

+ This virtual host has the "amq.topic" and "zmq.topic" exchanges from

+ /pubsub https://www.rabbitmq.com/federation.html[federated] to it,

+ and we allow anyone on the Internet to

+ connect to this virtual host. For the moment it is on the same broker

+ cluster, but if people abuse it, it can be moved to a separate cluster.

+ 

+ === Authentication

+ 

+ Clients authenticate to the broker using x509 certificates. The common

+ name of the certificate needs to match the username of a user in

+ RabbitMQ.

+ 
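
+ To see which usernames exist on the broker (and therefore which

+ certificate common names will be accepted), you can list the users on

+ any cluster node:

+ 

+ ....

+ rabbitmqctl list_users

+ ....

+ 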

+ == Troubleshooting

+ 

+ RabbitMQ offers a CLI, rabbitmqctl, which you can use on any node in the

+ cluster. It also offers a web interface for management and monitoring,

+ but that is not currently configured.

+ 

+ === Network Partition

+ 

+ In case of network partitions, the RabbitMQ cluster should handle it and

+ recover on its own. In case it doesn't when the network situation is

+ fixed, the partition can be diagnosed with `rabbitmqctl cluster_status`.

+ It should include the line `{partitions,[]},` (empty array).

+ 

+ If the array is not empty, the first nodes in the array can be

+ restarted one by one, but make sure you give them plenty of time to

+ sync messages after restart (this can be watched in the

+ `/var/log/rabbitmq/rabbit.log` file).

+ 

+ === Federation Status

+ 

+ Federation is the process of copying messages from the internal

+ `/pubsub` vhost to the external `/public_pubsub` vhost. During network

+ partitions, it has been seen that the Federation relaying process does

+ not come back up. The federation status can be checked with the command

+ `rabbitmqctl eval 'rabbit_federation_status:status().'` on `rabbitmq01`.

+ It should not return the empty array (`[]`) but something like:

+ 

+ ....

+ [[{exchange,<<"amq.topic">>},

+   {upstream_exchange,<<"amq.topic">>},

+   {type,exchange},

+   {vhost,<<"/public_pubsub">>},

+   {upstream,<<"pubsub-to-public_pubsub">>},

+   {id,<<"b40208be0a999cc93a78eb9e41531618f96d4cb2">>},

+   {status,running},

+   {local_connection,<<"<rabbit@rabbitmq01.iad2.fedoraproject.org.2.8709.481>">>},

+   {uri,<<"amqps://rabbitmq01.iad2.fedoraproject.org/%2Fpubsub">>},

+   {timestamp,{{2020,3,11},{16,45,18}}}],

+  [{exchange,<<"zmq.topic">>},

+   {upstream_exchange,<<"zmq.topic">>},

+   {type,exchange},

+   {vhost,<<"/public_pubsub">>},

+   {upstream,<<"pubsub-to-public_pubsub">>},

+   {id,<<"c1e7747425938349520c60dda5671b2758e210b8">>},

+   {status,running},

+   {local_connection,<<"<rabbit@rabbitmq01.iad2.fedoraproject.org.2.8718.481>">>},

+   {uri,<<"amqps://rabbitmq01.iad2.fedoraproject.org/%2Fpubsub">>},

+   {timestamp,{{2020,3,11},{16,45,17}}}]]

+ ....

+ 

+ If the empty array is returned, the following command will restart the

+ federation (again on `rabbitmq01`):

+ 

+ ....

+ rabbitmqctl clear_policy -p /public_pubsub pubsub-to-public_pubsub

+ rabbitmqctl set_policy -p /public_pubsub --apply-to exchanges pubsub-to-public_pubsub "^(amq|zmq)\.topic$" '{"federation-upstream":"pubsub-to-public_pubsub"}'

+ ....

+ 

+ After which the Federation link status can be checked with the same

+ command as before.

@@ -0,0 +1,96 @@ 

+ = rdiff-backup SOP

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Servers::

+   backup03 and others

+ Purpose::

+   backups of critical data

+ 

+ == Description

+ 

+ We are now running a rdiff-backup of all our critical data on a daily

+ basis. This allows us to keep incremental changes over time as well has

+ have a recent copy in case of disaster recovery.

+ 

+ The backups are run from _backup03_ every day at 22:10UTC as root. All

+ config is in ansible.

+ 

+ The cron job checks out the ansible repo from git, then runs

+ ansible-playbook with the rdiff-backup playbook. This playbook looks at

+ variables to decide which machines and partitions to backup.

+ 

+ * First, machines in the _backup_clients_ group in inventory are operated

+ on. If a host is not in that group it is not backed up via rdiff-backup.

+ * Next, any machines in the _backup_clients_ group will have their `/etc`

+ and `/home` directories backed up by the server running rdiff-backup and

+ using the rdiff-backup ssh key to access the client.

+ * Next, if any of the hosts in _backup_clients_ have a variable set for

+ _host_backup_targets_, those directories will also be backed up in the

+ same manner as above with the rdiff-backup ssh key.

+ 

+ For each backup an email will be sent to _sysadmin-backup-members_ with a

+ summary.

+ 

+ Backups are stored on a netapp volume, so in addition to the

+ incrementals that rdiff-backup provides there are netapp snapshots. This

+ netapp volume is mounted on `/fedora_backups` and is running dedup on the

+ netapp side.

+ 

+ == Rebooting backup03

+ 

+ When backup03 is rebooted, you must restart the ssh-agent and reload the

+ rdiff-backup ssh key into that agent so backups can take place.

+ 

+ ....

+ sudo -i

+ ssh-agent -s > sshagent

+ source sshagent

+ ssh-add .ssh/rdiff-backup-key

+ ....

+ 

+ == Adding a new host to backups

+ 

+ [arabic]

+ . add the host to the backup_clients inventory group in ansible.

+ . If you wish to backup more than `/etc` and `/home`, add a variable to

+   `inventory/group_vars/<app_name>` like: `host_backup_targets: ['/srv']`

+   See https://pagure.io/fedora-infra/ansible/blob/main/f/inventory/group_vars/pagure#_81

+   for example.

+ . On the client to be backed up, install rdiff-backup.

+ .  On the client to be backed up, install the rdiff-backup ssh public key

+ to `/root/.ssh/authorized_keys`. It should be restricted with a `from=` option:

+ +

+ ....

+ from="10.5.126.161,192.168.1.64" 

+ ....

+ +

+ and command can be restricted to:

+ +

+ ....

+ command="rdiff-backup --server --restrict-update-only"

+ ....

+ 

+ == Restoring from backups

+ 

+ rdiff backup keeps a copy of the most recent version of files on disk,

+ so if you wish to restore the last backup copy, simply rsync from

+ _backup03_. If you wish an older incremental, see the rdiff-backup man page

+ for how to specify the exact time.

+ 
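
+ As an illustration of restoring an older increment on backup03 itself

+ (the per-host layout under `/fedora_backups` and the exact paths are

+ assumptions):

+ 

+ ....

+ # restore a file as it existed three days ago into /tmp

+ rdiff-backup -r 3D /fedora_backups/<host>/etc/some.conf /tmp/some.conf.restored

+ ....

+ 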

+ == Retention

+ 

+ Backups are currently kept forever, but likely down the road we will

+ look at pruning them some to match available space.

+ 

+ == Public_key:

+ 

+ ....

+ ssh-dss

+ AAAAB3NzaC1kc3MAAACBAJr3xqn/hHIXeth+NuXPu9P91FG9jozF3Q1JaGmg6szo770rrmhiSsxso/Ibm2mObqQLCyfm/qSOQRynv6tL3tQVHA6EEx0PNacnBcOV7UowR5kd4AYv82K1vQhof3YTxOMmNIOrdy6deDqIf4sLz1TDHvEDwjrxtFf8ugyZWNbTAAAAFQCS5puRZF4gpNbaWxe6gLzm3rBeewAAAIBcEd6pRatE2Qc/dW0YwwudTEaOCUnHmtYs2PHKbOPds0+Woe1aWH38NiE+CmklcUpyRsGEf3O0l5vm3VrVlnfuHpgt/a/pbzxm0U6DGm2AebtqEmaCX3CIuYzKhG5wmXqJ/z+Hc5MDj2mn2TchHqsk1O8VZM+1Ml6zX3Hl4vvBsQAAAIALDt5NFv6GLuid8eik/nn8NORd9FJPDBJxgVqHNIm08RMC6aI++fqwkBhVPFKBra5utrMKQmnKs/sOWycLYTqqcSMPdWSkdWYjBCSJ/QNpyN4laCmPWLgb3I+2zORgR0EjeV2e/46geS0MWLmeEsFwztpSj4Tv4e18L8Dsp2uB2Q==

+ root@backup03-rdiff-backup

+ ....

@@ -0,0 +1,81 @@ 

+ = Container registry SOP

+ 

+ Fedora uses the https://github.com/docker/distribution[Docker

+ Distribution] container registry to host its container images.

+ 

+ Production instance: https://registry.fedoraproject.org

+ 

+ CDN instance: https://cdn.registry.fedoraproject.org

+ 

+ == Contact information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Persons::

+   bowlofeggs cverna puiterwijk

+ Servers::

+   * oci-candidate-registry01.iad2.fedoraproject.org

+   * oci-candidate-registry01.stg.iad2.fedoraproject.org

+   * oci-registry01.iad2.fedoraproject.org

+   * oci-registry01.stg.iad2.fedoraproject.org

+   * oci-registry02.iad2.fedoraproject.org

+ Purpose::

+   Serve Fedora's container images

+ 

+ == Configuring all nodes

+ 

+ Run this command from the _ansible_ checkout to configure

+ all nodes in production:

+ 

+ ....

+ $ sudo rbac-playbook groups/oci-registry.yml

+ ....

+ 

+ == Upgrades

+ 

+ Fedora infrastructure uses the registry packaged and distributed with

+ Fedora. Thus, there is no special upgrade procedure - a simple

+ `dnf update` will do.

+ 

+ == System architecture

+ 

+ The container registry is hosted in a fairly simple design. There are

+ two hosts that run Docker Distribution to serve the registry API, and

+ these hosts are behind a load balancer. These hosts will respond to all

+ requests except for requests for blobs. Requests for blobs will receive

+ a 302 redirect to https://cdn.registry.fedoraproject.org, which is a

+ caching proxy hosted by CDN 77. The primary goal of serving the registry

+ API ourselves is so that we can serve the container manifests over TLS

+ so that users can be assured they are receiving the correct image blobs

+ when they retrieve them. We do not rely on signatures since we do not

+ have a Notary instance.

+ 

+ The two registry instances are configured not to cache their data, and

+ use NFS to replicate their shared storage. This way, changes to one

+ registry should appear in the other quickly.

+ 

+ == Troubleshooting

+ 

+ === Logs

+ 

+ You can monitor the registry via the systemd journal:

+ 

+ ....

+ sudo journalctl -f -u docker-distribution

+ ....

+ 

+ === Running out of disk space

+ 

+ We have a nagios check that monitors the available disk space on

+ `/srv/registry`. An ansible playbook is available to reclaim

+ some disk space if needed:

+ 

+ ....

+ sudo rbac-playbook manual/oci-registry-prune.yml

+ ....

+ 

+ This will delete all the images that are older than 30 days on the

+ candidate registries (prod and stg) and then run the garbage collection

+ on the registry servers.

@@ -0,0 +1,184 @@ 

+ = Request for resources SOP

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_introduction>>

+ * <<_pre_sponsorship>>

+ * <<_planning>>

+ * <<_development_instance>>

+ * <<_staging_instance>>

+ * <<_production_deployment>>

+ * <<_maintenance>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Location::

+   fedoraproject.org/wiki

+ Servers::

+   dev, stg, production

+ Purpose::

+   Explains the technical part of Request for Resources

+ 

+ == Introduction

+ 

+ Once a RFR has a sponsor and has been generally agreed to move forward,

+ this SOP will describe the technical parts of moving a RFR through the

+ various steps it needs from idea to implementation. Note that for high

+ level and non technical requirements, please see the main RFR page.

+ 

+ A RFR will go through (at least) the following steps, but note that it

+ can be dropped, removed or reverted at any time in the process and that

+ MUST items MUST be provided before the next step is possible.

+ 

+ == Pre sponsorship

+ 

+ Until a RFR has a sysadmin-main person who is sponsoring and helping

+ with the request, no further technical action should take place with

+ this SOP. Please see the main RFR SOP to acquire a sponsor and do the

+ steps needed before implementation starts. If your resource requires

+ packages to be complete, please finish your packaging work before moving

+ forward with the RFR (accepted/approved packages in Fedora/EPEL). If

+ your RFR only has a single person working on it, please gather at least

+ another person before moving forward. Single points of failure are to be

+ avoided.

+ 

+ === Requirements for continuing:

+ 

+ * MUST have a RFR ticket.

+ * MUST have the ticket assigned and accepted by someone in

+ infrastructure sysadmin-main group.

+ 

+ == Planning

+ 

+ Once a sponsor is acquired and all needed packages have been packaged and

+ are available in EPEL, we move on to the planning phase. In this phase

+ discussion should take place about the application/resource on the

+ infrastructure list and IRC. Questions about how the resource could be

+ deployed should be considered:

+ 

+ * Should the resource be load balanced?

+ * Does the resource need caching?

+ * Can the resource live on its own instance to separate it from more

+ critical services?

+ * Who is involved in maintaining and deploying the instance?

+ 

+ === Requirements for continuing

+ 

+ * MUST discuss/note the app on the infrastructure mailing list and

+ answer feedback there.

+ * MUST determine who is involved in the deployment/maintaining the

+ resource.

+ 

+ == Development Instance

+ 

+ In this phase a development instance is set up for the resource. This

+ instance is a single virtual host running the needed OS. The RFR sponsor

+ will create this instance and also create a group 'sysadmin-resource'

+ for the resource, adding all responsible parties to the group. It's then

+ up to sysadmin-resource members to setup the resource and test it.

+ Questions asked in the planning phase should be investigated once the

+ instance is up. Load testing and other testing should be performed.

+ Issues like expiring old data, log files, acceptable content, packaging

+ issues, configuration, general bugs, security profile, and others should

+ be investigated. At the end of this step an email should be sent to the
+ infrastructure list explaining the testing done and inviting comment.

+ Also, the security officer should be informed that a new service will

+ need a review in the near future. In the request for the security audit,

+ please add the results of self-evaluation of the

+ https://fedora-infra-docs.readthedocs.io/en/latest/dev-guide/security_policy.html[Application

+ Security Policy]. Any deviations from the policy _must_ be noted in the

+ request for audit.

+ 

+ === Requirements for continuing

+ 

+ * MUST have RFR sponsor sign off that the resource is ready to move to

+ the next step.

+ * MUST have answered any outstanding questions on the infrastructure

+ list about the resource. Decisions about caching, load balancing and how

+ the resource would be best deployed should be determined.

+ * MUST add any needed SOPs for the service. Should there be an Update

+ SOP? A troubleshooting SOP? Any other tasks that might need to be done

+ to the instance when those who know it well are not available?

+ * MUST perform self-evaluation of the

+ https://fedora-infra-docs.readthedocs.io/en/latest/dev-guide/security_policy.html[Application

+ Security Policy].

+ * MUST tag in the security officer in the ticket so an audit can be

+ scheduled, including the results of the Security Policy evaluation.

+ 

+ == Staging Instance

+ 

+ The next step is to create a staging instance for the resource. In this

+ step the resource is fully added to Ansible/configuration management.

+ The resource is added to caching/load balancing/databases and tested in

+ this new environment. Once initial deployment is done and tested,
+ another email is sent to the infrastructure list to note that the
+ resource is available in staging.

+ 

+ The security officer should be informed as soon as the code is

+ reasonably stable, so that they can start the audit or delegate the

+ audit to someone.

+ 

+ === Requirements for continuing

+ 

+ * MUST have sign off of RFR sponsor that the resource is fully

+ configured in Ansible and ready to be deployed.

+ * MUST have a deployment schedule for going to production. This will

+ need to account for things like freezes and availability of

+ infrastructure folks.

+ * MUST have an approved audit by the security officer or appointed

+ delegate.

+ 

+ == Production deployment

+ 

+ Finally the staging changes are merged over to production and the

+ resource is deployed.

+ 

+ Monitoring of the resource is added and confirmed to be effective.

+ 

+ == Maintenance

+ 

+ The resource will then follow the normal rules for production: honoring

+ freezes, updating for issues or security bugs, adjusting for capacity,

+ etc.

+ 

+ == Ticket comment template

+ 

+ You can copy/paste this template into your RFR ticket. Keep the values

+ empty until you know answers - you can go back later and edit the ticket

+ to fill in information as it develops.

+ 

+ Phase I

+ 

+ * *Software*: <mynewservice>

+ * *Advantage for Fedora*: <It will give us unicorns>

+ * *Sponsor*: <someone>

+ 

+ Phase II

+ 

+ * *Email list thread*: <https://lists.fedoraproject.org/....>

+ * *Upstream source*: <https://github.com/...>

+ * *Development contacts*: <person1, person2>

+ * *Maintainership contacts*: <person2, person3>

+ * *Load balanceable*: <yes/no>

+ * *Caching*: <yes/no, which paths, ...>

+ 

+ Phase III

+ 

+ * *SOP link*: <https://docs.fedoraproject.org/infra/sysadmin_guide/.....>

+ * *Application Security Policy self-evaluation*: ....

+ * *Audit request*: <https://pagure.io/fedora-infrastructure/issue/....>

+ (can be same)

+ * *Audit timeline*: <04-11-2025 - 06-11-2025> (timeline to be provided

+ by the security officer upon audit request)

+ 

+ Phase IV

+ 

+ * *Ansible playbooks*: <ansible/playbooks/groups/myservice.yml>

+ * *Fully rebuilt from ansible*: <yes>

+ * *Production goal*: <08-11-2025>

+ * *Approved audit*: <https://pagure.io/fedora-infrastructure/issue/....>

@@ -0,0 +1,45 @@ 

+ = ResultsDB SOP

+ 

+ Store results from Fedora CI, OpenQA and other test systems.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora QA Devel, Fedora Infrastructure Team

+ Contact::

+   #fedora-qa, #fedora-admin, #fedora-noc

+ Servers::

+   resultsdb-dev01.qa, resultsdb-stg01.qa, resultsdb01.qa

+ Purpose::

+   store results from Fedora CI, OpenQA and other test systems

+ 

+ == Architecture

+ 

+ ResultsDB as a system is made up of two parts - a results storage API

+ and a simple HTML-based frontend for humans to view the results

+ accessible through that API (`resultsdb` and `resultsdb_frontend`).

+ 

+ == Deployment

+ 

+ The only part of resultsdb deployment that isn't currently in the

+ ansible playbooks is database initialization (disabled due to bug).

+ 

+ Once the resultsdb app has been installed, initialize the database by running:

+ 

+ ....

+ resultsdb init_db

+ ....

+ 

+ == Updating

+ 

+ Database schema changes are not currently supported with resultsdb and

+ the app can be updated like any other web application:

+ 

+ * Update app

+ * Restart httpd
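+ 
+ A minimal sketch of those two steps, assuming resultsdb is installed from
+ the RPM package and served through httpd:
+ 
+ ....
+ sudo dnf update resultsdb
+ sudo systemctl restart httpd
+ ....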

+ 

+ == Backup

+ 

+ All important information in ResultsDB is stored in its database -

+ backing up that database is sufficient for backup and restoring that

+ database from a snapshot is sufficient for restoring.
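+ 
+ For example, a simple dump of the database (a sketch; it assumes a
+ PostgreSQL database named `resultsdb` reachable from the host running the
+ command):
+ 
+ ....
+ sudo -u postgres pg_dump resultsdb | xz > resultsdb-$(date +%Y%m%d).sql.xz
+ ....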

@@ -0,0 +1,128 @@ 

+ = Retrace SOP

+ 

+ Retrace server - provides complete tracebacks for unhandled crashes and

+ show aggregated information for developers.

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora QA Devel, Fedora Infrastructure Team, ABRT team

+ Contact:::

+   #abrt, #fedora-admin, #fedora-noc

+ Servers:::

+   retrace*, faf*

+ Purpose:::

+   Provides complete tracebacks for unhandled crashes and show aggregated

+   information for developers.

+ 

+ == Description

+ 

+ The physical server (_retrace.fedoraproject.org_) runs two main services:

+ _retrace-server_ and _FAF_.

+ 

+ === Retrace-server

+ 

+ The upstream for retrace server lives at:

+ 

+ ____

+ https://github.com/abrt/retrace-server

+ ____

+ 

+ When a user has the ABRT client installed and a process crashes with an

+ unhandled exception (e.g., traceback or core dump), the user can send a

+ request to _retrace-server_. The server will install the same set of

+ packages plus debuginfo, and will return a traceback to the user that

+ includes function names instead of plain pointers. This information is

+ useful for debugging.

+ 

+ The upstream _retrace-server_ allows users to upload coredumps through a

+ web interface, but the Fedora instance disables this feature.

+ 

+ === FAF

+ 

+ When a user decides to report a crash, data is sent to _FAF_. ABRT can

+ also be configured to send microreports automatically, if desired.

+ 

+ _FAF_ can aggregate similar reports into one entity (called a Problem).

+ _FAF_ provides a nice web interface for developers, allowing them to see

+ crashes of their packages. It lives at:

+ 

+ ____

+ https://retrace.fedoraproject.org/faf/

+ ____

+ 

+ == Playbook

+ 

+ The playbook is split into several roles. There are two main roles

+ 

+ * abrt/faf

+ * abrt/retrace

+ 

+ These roles are copied from upstream. You should never update them
+ directly. A new version can be fetched from upstream using:

+ 

+ ....

+ cd ansible/abrt

+ rm -rf faf retrace

+ ansible-galaxy install -f -r requirements.yml --ignore-errors -p ./

+ ....

+ 

+ You should review the resulting differences, then commit and push.

+ 

+ Then there are some roles which are local for our instance:

+ 

+ ____

+ * abrt/faf-local - This is run _before_ abrt/faf.

+ * abrt/retrace-local - This is run _after_ abrt/retrace.

+ * abrt/retrace-local-pre - This is run _before_ abrt/retrace.

+ ____

+ 

+ == Services

+ 

+ _FAF_ and _retrace-server_ are web applications; only httpd is required.

+ 

+ == Cron

+ 

+ _FAF_ and _retrace-server_ each have cron tasks. They are _not_ installed

+ under `/etc/cron*` but are installed as user cron jobs for the 'faf' and

+ 'retrace' users.

+ 

+ You can list those crons using:

+ 

+ ....

+ sudo -u faf crontab -l

+ sudo -u retrace crontab -l

+ ....

+ 

+ All cron jobs should be managed by Ansible. If you delete a cron job
+ from Ansible, make sure it does not remain on the server (this is not
+ always handled automatically, even with state=absent).

+ 

+ == Directories

+ 

+ * `/srv/ssd` - fast disk, used for PostgreSQL storage

+ * `/srv` - big fat disk, used for storing packages. Mainly:

+ ** `/srv/faf/lob`

+ ** `/srv/retrace`

+ * `/srv/faf/db-backup/` - Daily database backups. No rotation yet; old
+   backups need to be deleted manually occasionally (see the cleanup
+   sketch below).
+ * `/srv/faf/lob/InvalidUReport/` - Invalid reports, can be pretty big.
+   No automatic removal either; needs to be purged manually occasionally.
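+ 
+ A minimal cleanup sketch for both directories; the 30-day retention is an
+ arbitrary choice, adjust it before running anything:
+ 
+ ....
+ # remove DB backups and invalid reports older than 30 days
+ sudo find /srv/faf/db-backup/ -type f -mtime +30 -delete
+ sudo find /srv/faf/lob/InvalidUReport/ -type f -mtime +30 -delete
+ ....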

+ 

+ == Front-page

+ 

+ The main web page is handled by the _abrt-server-info-page_

+ package, which can be controlled using:

+ 

+ ....

+ /usr/lib/python2.7/site-packages/abrt-server-info-page/config.py

+ ....

+ 

+ == DB

+ 

+ Only _FAF_ uses a database. We use our own instance of PostgreSQL. You can

+ connect to it using:

+ 

+ ....

+ sudo -u faf psql faf

+ ....

@@ -0,0 +1,310 @@ 

+ = SCM Admin SOP

+ 

+ [WARNING]

+ ====

+ Most information here (probably 1.4 and later) is not updated for pkgdb2

+ and therefore not correct anymore.

+ ====

+ 

+ TODO: This whole SOP needs a revision

+ 

+ == Contents

+ 

+ * <<_creating_new_packages>>

+ ** <<_obtaining_process_git_requests>>

+ ** <<_prerequisites>>

+ ** <<_running_the_script>>

+ ** <<_steps_for_manual_processing>>

+ *** <<_using_pkgdb_client>>

+ *** <<_using_pkgdb2branch>>

+ *** <<_update_koji>>

+ ** <<_helper_scripts>>

+ *** <<_mkbranchwrapper>>

+ *** <<_setup_package>>

+ ** <<_pseudo_users_for_sigs>>

+ * <<_deprecate_packages>>

+ * <<_undeprecate_packages>>

+ * <<_performing_mass_comaintainer_requests>>

+ 

+ == Creating New Packages

+ 

+ Package creation is mostly automatic and most details are handled by a

+ script.

+ 

+ === Obtaining process-git-requests

+ 

+ The script is not currently packaged; it lives in the rel-eng git
+ repository. You can check it out with:

+ 

+ ....

+ git clone https://git.fedorahosted.org/git/releng

+ ....

+ 

+ and keep this up to date by running:

+ 

+ ....

+ git pull

+ ....

+ 

+ occasionally, somewhere in the checked-out tree, before processing new
+ requests.

+ 

+ The script lives in `scripts/process-git-requests`.

+ 

+ === Prerequisites

+ 

+ You must have the `python-bugzilla` and `python-fedora` packages installed.

+ 

+ Before running `process-git-requests`, you should run:

+ 

+ ....

+ bugzilla login

+ ....

+ 

+ The "Username" you will be prompted for is the email address attached to

+ your bugzilla account. This will obtain a cookie so that the script can

+ update bugzilla tickets. The cookie is good for quite some time (at

+ least a month); if you wish to remove it, delete the

+ `~/.bugzillacookies` file.

+ 

+ It is also advantageous to have your Fedora ssh key loaded so that you

+ can ssh into _pkgs.fedoraproject.org_ without being prompted for a

+ password.

+ 

+ It perhaps goes without saying that you will need unfirewalled and

+ unproxied access to ports 22, 80 and 443 on various Fedora machines.

+ 

+ === Running the script

+ 

+ Simply execute the `process-git-requests` script and follow the prompts.

+ It can provide the text of all comments in the bugzilla ticket for

+ inspection and will perform various useful checks on the ticket and the

+ included SCM request. If there are warnings present, you will need to

+ accept them before being allowed to process the request.
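+ 
+ A minimal invocation, assuming the releng checkout from above sits in the
+ current directory and the script is executable:
+ 
+ ....
+ cd releng/scripts
+ ./process-git-requests
+ ....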

+ 

+ Note that the script only looks at the final request in a ticket; this

+ permits users to tack on a new request at any time and re-raise the

+ fedora-cvs flag. Packagers do not always understand this, though, so it

+ is necessary to read through the ticket contents to make sure the final
+ request matches reality.

+ 

+ After a request has been accepted, the script will create the package in

+ pkgdb (which may require your password) and attempt to log into the SCM

+ server to create the repository. If this does not succeed, the package

+ name is saved and when you finish processing a command line will be

+ output with instructions on creating the repositories manually. If you

+ hit Ctrl-C or the script otherwise aborts, you may miss this

+ information. If so, see below for information on running `pkgdb2branch.py`

+ on the SCM server; you will need to run it for each package you created.

+ 

+ === Steps for manual processing

+ 

+ It is still useful to document the process of handling these requests

+ manually in the case that `process-git-requests` has issues.

+ 

+ [arabic]

+ . Check Bugzilla Ticket to make sure it looks ok

+ . Add the package information to the packagedb with `pkgdb-client`

+ . Use `pkgdb2branch` to create the branches on the cvs server

+ 

+ ____

+ [WARNING]

+ ====

+ Do not run multiple instances of pkgdb2branch in parallel! This will

+ cause them to fail due to mismatching 'modules' files. It's not a good

+ idea to run addpackage, mkbranchwrapper, or setup_package by themselves

+ as it could lead to packages that don't match their packagedb entry.

+ ====

+ ____

+ 

+ [arabic, start=4]

+ . Update koji.

+ 

+ ==== Using pkgdb-client

+ 

+ Use pkgdb-client to update the pkgdb with new information. For instance,

+ to add a new package:

+ 

+ ....

+ pkgdb-client edit -u toshio -o terjeros \

+  -d 'Python module to extract EXIF information' \

+  -b F-10 -b F-11 -b devel python-exif

+ ....

+ 

+ To update that package later and add someone to the initial CC list, do:

+ 

+ ....

+ pkgdb-client edit -u toshio -c kevin python-exif

+ ....

+ 

+ To add a new branch for a package:

+ 

+ ....

+ pkgdb-client edit -u toshio -b F-10 -b EL-5 python-exif

+ ....

+ 

+ To allow provenpackager to edit a branch:

+ 

+ ....

+ pkgdb-client edit -u toshio -b devel -a provenpackager python-exif

+ ....

+ 

+ To remove provenpackager commit rights on a branch:

+ 

+ ....

+ pkgdb-client edit -u toshio -b EL-5 -b EL-4 -r provenpackager python-exif

+ ....

+ 

+ More options can be found by running `pkgdb-client --help`

+ 

+ You must be in the cvsadmin group to use pkgdb-client. It can be run on

+ a non-Fedora Infrastructure box if you set the PACKAGEDBURL environment

+ variable to the public URL:

+ 

+ ....

+ export PACKAGEDBURL=https://admin.fedoraproject.org/pkgdb

+ ....

+ 

+ [NOTE]

+ ====

+ You may be asked to CC fedora-perl-devel-list on a perl package. This

+ can be done with the username "perl-sig". This is presently a user, not

+ a group so it cannot be used as an owner or comaintainer, only for CC.

+ ====

+ 

+ ==== Using pkgdb2branch

+ 

+ Use `pkgdb2branch.py` to create branches for a package. `pkgdb2branch.py`

+ takes a list of package names on the command line and creates the

+ branches that are specified in the packagedb. The script lives in

+ `/usr/local/bin` on the SCM server (_pkgs.fedoraproject.org_) and must be

+ run there.

+ 

+ For instance, `pkgdb2branch.py python-exif qa-assistant` will create

+ branches specified in the packagedb for _python-exif_ and _qa-assistant_.

+ 

+ `pkgdb2branch` can only be run from _pkgs.fedoraproject.org_.

+ 

+ ==== Update Koji

+ 

+ Optionally you can synchronize pkgdb and koji by hand: it is done

+ automatically hourly by a cronjob. There is a script for this in the

+ `admin/` directory of the CVSROOT module.

+ 

+ Since _dist-f13_ and later inherit from _dist-f12_, and currently _dist-f12_

+ is the basis of our stack, it's easiest to just call:

+ 

+ ....

+ ./owner-sync-pkgdb dist-f12

+ ....

+ 

+ Just run `./owner-sync-pkgdb` for usage output.

+ 

+ This script requires that you have a properly configured koji client

+ installed.

+ 

+ `owner-sync-pkgdb` requires the koji client libraries which are not

+ available on the cvs server. So you need to run this from one of your

+ machines.

+ 

+ === Helper Scripts

+ 

+ These scripts are invoked by the scripts above, doing some of the heavy

+ lifting. They should not ordinarily be called on their own.

+ 

+ ==== mkbranchwrapper

+ 

+ `/usr/local/bin/mkbranchwrapper` is a shell script which takes a list of

+ packages and branches. For instance:

+ 

+ ....

+ mkbranchwrapper foo bar EL-5 F-11

+ ....

+ 

+ will create modules foo and bar for devel if they don't exist and branch
+ them for each additional branch passed to the script. If the devel branch
+ already exists, only the requested branches are created. If no branches
+ are passed, the module is created in devel only.

+ 

+ `mkbranchwrapper` has to be run from cvs-int.

+ 

+ [IMPORTANT]

+ ====

+ mkbranchwrapper is not used by any current programs. Use pkgdb2branch

+ instead.

+ ====

+ 

+ ==== setup_package

+ 

+ `setup_package` creates a new blank module in devel only. It can be run

+ from any host. To create a new package run:

+ 

+ ....

+ setup_package foo

+ ....

+ 

+ `setup_package` needs to be called once for each package. It could be
+ wrapped in a shell script similar to:

+ 

+ ....

+ #!/bin/bash
+ 
+ # call setup_package once for each package name passed on the command line
+ PACKAGES="$@"
+ 
+ echo "packages=$PACKAGES"
+ 
+ for package in $PACKAGES; do
+   ~/bin/setup_package "$package"
+ done

+ ....

+ 

+ then call the script with all the package names after it.

+ 

+ [NOTE]

+ ====

+ `setup_package` is currently called from `pkgdb2branch`.

+ ====

+ 

+ === Pseudo Users for SIGs

+ 

+ See Package_SCM_admin_requests#Pseudo-users_for_SIGs for the current

+ list.

+ 

+ == Deprecate Packages

+ 

+ Any packager can deprecate a package. Click on the deprecate package
+ button for the package in the web UI. There is currently no `pkgdb-client`
+ command to deprecate a package.

+ 

+ == Undeprecate Packages

+ 

+ Any cvsadmin can undeprecate a package. Simply use `pkgdb-client` to

+ assign an owner and the package will be undeprecated:

+ 

+ ....

+ pkgdb-client -o toshio -b devel qa-assistant

+ ....

+ 

+ As a cvsadmin you can also log into the pkgdb webui and click on the

+ unretire package button. Once clicked, the package will be orphaned

+ rather than deprecated.

+ 

+ == Performing mass comaintainer requests

+ 

+ * Confirm that the requestor has 'approveacls' on all packages they wish

+ to operate on. If they do not, they MUST request the change via FESCo.

+ * Mail maintainers/co-maintainers affected by the change to inform them

+ of who requested the change and why.

+ * Download a copy of this script:

+ http://git.fedorahosted.org/git/?p=fedora-infrastructure.git;a=blob;f=scripts/pkgdb_bulk_comaint/comaint.py;hb=HEAD

+ * Edit the script to have the proper package owners and package name

+ pattern.

+ * Edit the script to have the proper new comaintainers.

+ * Ask someone in `sysadmin-web` to disable email sending on bapp01 for

+ the pkgdb (following the instructions in comments in the script)

+ * Copy the script to an infrastructure host (like cvs01) that can

+ contact bapp01 and run it.

@@ -0,0 +1,125 @@ 

+ = SELinux Infrastructure SOP

+ 

+ SELinux is a fundamental part of our Operating System but still has a

+ large learning curve and remains quite intimidating to both developers

+ and system administrators. Fedora's Infrastructure has been growing at

+ an unfathomable rate, and is full of custom software that needs to be

+ locked down. The goal of this SOP is to make it simple to track down and

+ fix SELinux policy related issues within Fedora's Infrastructure.

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_step_one_realizing_you_have_a_problem>>

+ * <<_step_two_tracking_down_the_violation>>

+ * <<_step_three_fixing_the_violation>>

+ ** <<_toggling_an_selinux_boolean>>

+ ** <<_setting_custom_context>>

+ ** <<_fixing_odd_errors_from_the_logs>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-main, sysadmin-web groups

+ Purpose::

+   To ensure that we are able to fully wield the power of SELinux within

+   our infrastructure.

+ 

+ == Step One: Realizing you have a problem

+ 

+ If you are trying to find a specific problem on a host, look in the
+ per-host audit.log on our central log server. See the syslog SOP for more
+ information.
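+ 
+ For example, to pull recent AVC denials for a particular host out of the
+ merged log on the log server (the merged audit log path is described in
+ the Log Infrastructure SOP; the hostname is a placeholder):
+ 
+ ....
+ grep 'type=AVC' /var/log/merged/audit.log | grep <hostname> | tail -n 20
+ ....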

+ 

+ == Step Two: Tracking down the violation

+ 

+ Generate SELinux policy allow rules from logs of denied operations. This

+ is useful for getting a quick overview of what has been getting denied

+ on the local machine:

+ 

+ ....

+ audit2allow -la

+ ....

+ 

+ You can obtain more detailed audit messages by using ausearch to get the

+ most recent violations:

+ 

+ ....

+ ausearch -m avc -ts recent

+ ....

+ 

+ Again, see the syslog SOP for more information.

+ 

+ == Step Three: Fixing the violation

+ 

+ Below are examples of using our current ansible configuration to make

+ SELinux deployment changes. These constructs are currently home-brewed,

+ and do not exist in upstream Ansible. For these functions to work, you

+ must ensure that the host or servergroup is configured with 'include

+ selinux', which will enable SELinux in permissive mode. Once a host is

+ properly configured, this can be changed to 'include selinux-enforcing'

+ to enable SELinux Enforcing mode.

+ 

+ [NOTE]

+ ====

+ Most services have $service_selinux manpages that are automatically

+ generated from policy.

+ ====

+ 

+ === Toggling an SELinux boolean

+ 

+ SELinux booleans, which can be viewed by running `semanage boolean -l`,

+ can easily be configured using the following syntax within

+ your ansible configuration:

+ 

+ ....

+ seboolean: name=httpd_can_network_connect_db state=yes persistent=yes

+ ....

+ 

+ === Setting custom context

+ 

+ Our infrastructure contains many custom applications, which may utilize

+ non-standard file locations. These issues can lead to trouble with

+ SELinux, but they can easily be resolved by setting custom file

+ context:

+ 

+ ....

+ "file: path=/var/tmp/l10n-data recurse=yes setype=httpd_sys_content_t"

+ ....

+ 

+ === Fixing odd errors from the logs

+ 

+ If you see messages like this in the log reports:

+ 

+ ....

+ restorecon:/etc/selinux/targeted/contexts/files/file_contexts: Multiple same / specifications for /home/fedora.

+ matchpathcon: / /etc/selinux/targeted/contexts/files/file_contexts: Multiple same / / specifications for /home/fedora.

+ ....

+ 

+ Then it is likely you have an overlapping filecontext in your local

+ selinux context configuration - in this case likely one added by ansible

+ accidentally.

+ 

+ To find it run this:

+ 

+ ....

+ semanage fcontext -l | grep /path/being/complained/about

+ ....

+ 

+ Sometimes it is just an ordering problem and reversing the entries solves
+ it; other times it is a genuine overlap.
+ 
+ Look at the contexts and either reorder them or delete the one you do not
+ want.

+ 

+ To delete an entry, run:

+ 

+ ....

+ semanage fcontext -d '/entry/you/wish/to/delete'

+ ....

+ 

+ This just removes that filecontext - no need to worry about files being

+ deleted.

+ 

+ Then rerun the triggering command and see if the problem is solved.

@@ -0,0 +1,78 @@ 

+ = Sigul servers upgrades/reboots

+ 

+ Fedora currently has 1 sign-bridge and 2 sign-vault machines for
+ primary; there is a similar setup for secondary architectures. When

+ upgrading or rebooting these machines, some special steps must be taken

+ to ensure everything is working as expected.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Release Engineering

+ Contact::

+   #fedora-admin, #fedora-noc

+ Servers::

+   sign-vault03, sign-vault04, sign-bridge02, secondary-bridge01.qa

+ Purpose::

+   Upgrade or restart sign servers

+ 

+ == Description

+ 

+ . Coordinate with releng on timing. Make sure no signing is

+ happening, and none is planned for a bit.

+ +

+ ____

+ Sign-bridge02, secondary-bridge01.qa:

+ ____

+ +

+ ____

+ [arabic]

+ . Apply updates or changes

+ . Reboot virtual instance

+ . Once it comes back, start the sigul_bridge service and enter empty

+ password.

+ ____

+ +

+ ____

+ Sign-vault03/04:

+ ____

+ +

+ ____

+ [arabic]

+ . Determine which server is currently primary. It's the one that has

+   the floating ip address for sign-vault02 on it.

+ . Login to the non primary server via serial or management console.

+   (There is no ssh access to these servers)

+ . Take a lvm snapshot:

+ +

+ ....

+ lvcreate --size 5G --snapshot --name YYYYMMDD /dev/mapper/vg_signvault04-lv_root
+ 
+ Replace YYYYMMDD with today's year, month and day, and the vg with the correct name.
+ Then apply updates.

+ ....

+ .  Confirm the server comes back up ok, login to serial console or management

+   console and start the sigul_server process. Enter password when

+   prompted.

+ . On the primary server, down the floating ip address:

+ +

+ ....

+ ip addr del 10.5.125.75 dev eth0

+ ....

+ . On the secondary server, up the floating ip address:

+ +

+ ....

+ ip addr add 10.5.125.75 dev eth0

+ ....

+ . Have rel-eng folks sign some packages to confirm all is working.

+ . Update/reboot the old primary server and confirm it comes back up ok.

+ ____

+ 

+ [NOTE]

+ ====

+ Changes to database

+ 

+ When making any changes to the database (new keys, etc), it's important

+ to sync the data from the primary to the secondary server. This process

+ is currently manual.

+ ====

@@ -0,0 +1,48 @@ 

+ = simple-koji-ci

+ 

+ _simple-koji-ci_ is a small service running in our infra cloud that

+ listens for fedmsg messages coming from pagure on dist-git about new

+ pull-requests. It then creates a SRPM based on the content of each

+ pull-request, kicks off a scratch build in koji and reports the outcome

+ of that build on the pull-request.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, #fedora-apps

+ Persons::

+   pingou

+ Location::

+   the cloud ☁

+ Servers::

+   * simple-koji-ci-dev.fedorainfracloud.org

+   * simple-koji-ci-prod.fedorainfracloud.org

+ Purpose::

+   Performs scratch builds for pull-request opened on dist-git

+ 

+ == Hosts

+ 

+ The current deployment is made in a single host:

+ 

+ * `simple-koji-ci-prod.fedorainfracloud.org` for prod

+   * simple-koji-ci-dev.fedorainfracloud.org

+ 

+ == Service

+ 

+ _simple-koji-ci_ is a fedmsg-based service, so it can be turned on or off

+ via the `fedmsg-hub` service.
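+ 
+ For example, to stop or start the consumer (a sketch; simple-koji-ci runs
+ inside the fedmsg-hub service on the hosts listed above):
+ 
+ ....
+ sudo systemctl stop fedmsg-hub
+ sudo systemctl start fedmsg-hub
+ ....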

+ 

+ It interacts with koji via a keytab created by the `keytab/service` role

+ in ansible.

+ 

+ The configuration of the service (including the weight of the builds

+ kicked off in koji) is located at `/etc/fedmsg.d/simple_koji_ci.py`.

+ 

+ One can monitor the service using: `journalctl -lfu fedmsg-hub`.

+ 

+ == Impact

+ 

+ This service is purely informative; nothing does or should rely on it.

+ If anything goes wrong, there are no consequences for stopping it.

@@ -0,0 +1,153 @@ 

+ = SSH Access Infrastructure SOP

+ 

+ == Contents

+ 


+ * <<_contact_information>>

+ * <<_introduction>>

+ * <<_ssh_configuration>>

+ * <<_ssh_agent_forwarding>>

+ * <<_troubleshooting>>

+ 

+ == Contact Information

+ 

+ Owner::

+   sysadmin-main

+ Contact::

+   #fedora-admin or admin@fedoraproject.org

+ Location::

+   IAD2

+ Servers::

+   All IAD2 and VPN Fedora machines

+ Purpose::

+   Access via ssh to Fedora project machines.

+ 

+ == Introduction

+ 

+ This page contains some useful instructions about how to safely log in
+ to Fedora IAD2 machines using public key authentication. As of
+ 2011-05-27, all machines require an SSH key for access; password
+ authentication no longer works. Note that this SOP

+ has nothing to do with actually gaining access to specific machines. For

+ that you MUST be in the correct group for shell access to that machine.

+ This SOP simply describes the process once you do have valid and

+ appropriate shell access to a machine.

+ 

+ == SSH configuration

+ 

+ First of all: (on your local machine):

+ 

+ ....

+ vi ~/.ssh/config

+ ....

+ 

+ [NOTE]

+ ====

+ This file, and any keys, need to be chmod 600, or you will get a "Bad

+ owner or permissions" error. The .ssh directory must be mode 700.

+ ====

+ 

+ then, add the following:

+ 

+ ....

+ Host bastion.fedoraproject.org

+   HostName bastion-iad01.fedoraproject.org

+   User FAS_USERNAME (all lowercase)

+   ProxyCommand none

+   ForwardAgent no

+ Host *.iad2.fedoraproject.org *.qa.fedoraproject.org 10.3.160.* 10.3.161.* 10.3.163.* 10.3.165.* 10.3.167.* *.vpn.fedoraproject.org batcave01

+   User FAS_USERNAME (all lowercase)

+   ProxyCommand ssh -W %h:%p bastion.fedoraproject.org

+ ....

+ 

+ How does ProxyCommand work?

+ 

+ A connection is established to the bastion host:

+ 

+ ....

+ +-------+            +--------------+

+ |  you  | ---ssh---> | bastion host |

+ +-------+            +--------------+

+ ....

+ 

+ The bastion host establishes a connection to the target server:

+ 

+ ....

+ +--------------+          +--------+

+ | bastion host | -------> | server |

+ +--------------+          +--------+

+ ....

+ 

+ Your client then connects through the Bastion and reaches the target

+ server:

+ 

+ ....

+ +-----+                  +--------------+                +--------+

+ | you |                  | bastion host |                | server |

+ |     | ===ssh=over=bastion============================> |        |

+ +-----+                  +--------------+                +--------+

+ ....
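+ 
+ With the configuration above in place, a quick way to confirm the proxy
+ hop works is to connect to a host behind bastion with verbose output
+ (batcave01 matches the second Host block):
+ 
+ ....
+ ssh -vv batcave01
+ ....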

+ 

+ == PuTTY SSH configuration

+ 

+ You can configure Putty the same way by doing this:

+ 

+ [arabic, start=0]

+ . In the session section type _batcave01.fedoraproject.org_ port 22

+ . In Connection:Data enter your FAS_USERNAME

+ . In Connection:Proxy add the proxy settings

+ 

+ ____

+ * ProxyHostname is bastion-iad01.fedoraproject.org

+ * Port 22

+ * Username FAS_USERNAME

+ * Proxy Command `plink %user@%proxyhost %host:%port`

+ ____

+ 

+ [arabic, start=3]

+ . In Connection:SSH:Auth remember to insert the same key file for

+ authentication you have used on FAS profile

+ 

+ == SSH Agent forwarding

+ 

+ You should normally have:

+ 

+ ....

+ ForwardAgent no

+ ....

+ 

+ For Fedora hosts (this is the default in OpenSSH). You can override this

+ on a per-session basis by using '-A' with ssh. SSH agents could be

+ misused if you connect to a compromised host with forwarding on (the

+ attacker can use your agent to authenticate them to anything you have

+ access to as long as you are logged in). Additionally, if you do need

+ SSH agent forwarding (say for copying files between machines), you

+ should remember to logout as soon as you are done to not leave your

+ agent exposed.

+ 

+ == Troubleshooting

+ 

+ * 'channel 0: open failed: administratively prohibited: open failed'

+ +

+ ____

+ If you receive this message for a machine proxied through bastion, then

+ bastion was unable to connect to the host. This most likely means you
+ tried to SSH to a nonexistent machine. You can debug this by trying to

+ connect to that machine from bastion.

+ ____

+ * if your local username is different from the one registered in FAS,

+ please remember to set up a User variable (like above) where you

+ specify your FAS username. If that is missing, SSH will try to log in
+ using your local username, and it will fail.

+ * `ssh -vv` is very handy for debugging what sections are matching and

+ what are not.

+ * If you get access denied several times in a row, please consult with

+ #fedora-admin. If you try too many times with an invalid config your

+ IP could be added to denyhosts.

+ * If you are running an OpenSSH version less than 5.4, then the -W

+ option is not available. In that case, use the following ProxyCommand

+ line instead:

+ +

+ ....

+ ProxyCommand ssh -q bastion.fedoraproject.org exec nc %h %p

+ ....

@@ -0,0 +1,26 @@ 

+ = SSH known hosts Infrastructure SOP

+ 

+ Provides Known Hosts file that is globally deployed and publicly

+ available at https://admin.fedoraproject.org/ssh_known_hosts

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-admin, sysadmin group

+ Location:::

+   all

+ Servers:::

+   all

+ Purpose:::

+   Provides Known Hosts file that is globally deployed.

+ 

+ == Adding a host alias to the ssh_known_hosts

+ 

+ If you need to add a host alias to a host in _ssh_known_hosts_ simply go

+ to the dir for the host in _infra-hosts_ and add a file named host_aliases

+ to the git repo in that dir. Put one alias per line and save.
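+ 
+ For example, a hypothetical host_aliases file (the alias names below are
+ purely illustrative):
+ 
+ ....
+ proxy01.fedoraproject.org
+ proxy01.vpn.fedoraproject.org
+ ....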

+ 

+ Then the next time fetch-ssh-keys runs it will add those aliases to

+ known hosts.

@@ -0,0 +1,128 @@ 

+ = Staging SOP

+ 

+ == Introduction

+ 

+ Fedora uses a set of staging servers for several purposes:

+ 

+ * When applications are initially being deployed, the staging version of

+ those applications is set up on a staging server that is used to

+ create the initial Ansible configuration for the application/service.

+ * Established applications/services use staging for testing. This

+ testing includes:

+ ** Bugfix updates

+ ** Configuration changes managed by Ansible

+ ** Upstream updates to dependent packages (httpd changes for example)

+ 

+ == Goals

+ 

+ The staging servers should be self contained and have all the needed

+ databases and such to function. At no time should staging resources talk

+ to production instances. We use firewall rules on our production servers

+ to make sure no access is made from staging.

+ 

+ Staging instances do often use dumps of production databases and data,

+ and thus access to resources in staging should be controlled as it is in

+ production.

+ 

+ == DNS and naming

+ 

+ All staging servers should be in the `stg.fedoraproject.org`

+ domain. `/etc/hosts` files are used on stg servers to override dns in

+ cases where staging resources should talk to the staging version of a

+ service instead of the production one. In some cases, one staging server

+ may be aliased to several services or applications that are on different

+ machines in production.

+ 

+ == Syncing databases

+ 

+ === Syncing FAS

+ 

+ Sometimes you want to resync the staging fas server with what's on

+ production. To do that, dump what's in the production db and then import

+ it into the staging db. Note that resyncing the information will remove

+ any of the information that has been added to the staging fas servers.

+ So it's good to mention that you're doing this on the infra list or to

+ people who you know are working on the staging fas servers so they can

+ either save their changes or ask you to hold off for a while.

+ 

+ On db01:

+ 

+ ....

+ $ ssh db01

+ $ sudo -u postgres pg_dump -C fas2 |xz -c fas2.dump.xz

+ $ scp fas2.dump.xz db02.stg:

+ ....

+ 

+ On fas01.stg (postgres won't drop the database if something is accessing

+ it) (ATM, fas in staging is not load balanced so we only have to do this

+ on one server):

+ 

+ ....

+ $ sudo /etc/init.d/httpd stop

+ ....

+ 

+ On db02.stg:

+ 

+ ....

+ $ echo 'drop database fas2' |sudo -u postgres psql

+ $ xzcat fas2.dump.xz | sudo -u postgres psql

+ ....

+ 

+ On fas01.stg:

+ 

+ ....

+ $ sudo /etc/init.d/httpd start

+ ....

+ 

+ Other databases behave similarly.

+ 

+ == External access

+ 

+ There is http/https access from the internet to staging instances to

+ allow testing. Simply replace the production resource domain with

+ stg.fedoraproject.org and it should go to the staging version (if any)

+ of that resource.

+ 

+ == Ansible and Staging

+ 

+ All staging machine configuration is now in the same branch as

+ master/production.

+ 

+ There is a 'staging' environment - Ansible variable "env" is equal to

+ "staging" in playbooks for staging things. This variable can be used to

+ differentiate between production and staging systems.

+ 

+ == Workflow for staging changes

+ 

+ [arabic]

+ . If you don't need to make any Ansible related config changes, don't do

+ anything. (ie, a new version of an app that uses the same config files,

+ etc). Just update on the host and test.

+ . If you need to make Ansible changes, either in the playbook of the

+ application or outside of your module:

+ +

+ ____

+ * Make use of files ending with .staging (see resolv.conf in global for

+ an example). So, if there are persistent differences between staging and
+ production, like a different config file, use this.

+ * Conditionalize on environment:

+ +

+ ....

+ - name: your task

+   ...

+   when: env == "staging"

+ 

+ - name: production-only task

+   ...

+   when: env != "staging"

+ ....

+ * These changes can stay if they are helpful for further testing down

+ the road. Ideally, the normal case is that staging and production are
+ configured in the same host group from the same Ansible playbook.

+ ____

+ 

+ == Time limits on staging changes

+ 

+ There is no hard limit on time spent in staging, but where possible we

+ should limit the time in staging so we are not carrying changes that
+ diverge from production for a long time, possibly affecting other
+ staging work.

@@ -0,0 +1,166 @@ 

+ = Fedora Status Service - SOP

+ 

+ Fedora-Status is the software that generates the page at

+ http://status.fedoraproject.org/. This page should be kept up to date

+ with the current status of the services run by Fedora Infrastructure.

+ 

+ This page is hosted at AWS.

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-admin, #fedora-noc

+ Servers:::

+   AWS S3/CloudFront

+ Purpose:::

+   Give status information to users about the current status of our

+   public services.

+ Repository:::

+   https://github.com/fedora-infra/statusfpo

+ 

+ == How it works

+ 

+ To keep this website as stable as can be, the page is hosted external to

+ our main infrastructure, in AWS.

+ 

+ It is based on an S3 bucket with the files, fronted by a CloudFront

+ distribution for TLS termination and CNAMEs.

+ 

+ The website is statically generated using Pelican on your local machine,

+ and then pushed to S3.

+ 

+ == Adding and changing outages

+ 

+ === Making Changes

+ 

+ Before pushing changes live to S3, use Pelican's devserver to stage

+ and view changes.

+ 

+ [arabic]

+ . Install the packages you need to run the devserver with:

+ +

+ ....

+ sudo dnf install pelican python-packaging

+ ....

+ . Check out the repo at:

+ +

+ ....

+ git@github.com:fedora-infra/statusfpo.git

+ ....

+ . Run the devserver with:

+ +

+ ....

+ make devserver

+ ....

+ . View the generated site at http://0.0.0.0:8000. Note that any changes

+ to the content and theme will automatically regenerate.

+ . Commit changes (or open a Pull Request) to

+ https://github.com/fedora-infra/statusfpo

+ 

+ === Create a new outage

+ 

+ [arabic]

+ . Add a markdown file to either content/planned/, content/ongoing/, or

+ content/resolved/. The name of the file needs to be unique, so check the

+ resolved outages for an idea on how to name your file.

+ . Add your outage notice to the markdown file, for example:

+ +

+ ....

+ Title: Buzzilla Slow

+ Date: 2021-04-28 10:22+0000

+ OutageFinish: 2021-04-28 13:30+0000

+ Ticket: 123456

+ 

+ A swarm of bees have taken up residence in one of 

+ the Buzzilla Server rooms. Consequently, some 

+ requests to Buzzilla may respond slower than

+ usual. An apiarist has been called to capture 

+ and relocate the swarm.

+ ....

+ +

+ ____

+ * Note that OutageFinish is optional, but should really only be omitted
+ if the projected or actual outage end time is unknown.
+ * When providing dates, keep the timezone offset at +0000 (UTC) for all
+ datetimes.

+ ____

+ 

+ === Moving an outage

+ 

+ To move an outage, say from Planned to Ongoing, simply move the markdown

+ file into a different status directory in `content/`, and regenerate.
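+ 
+ For example, to promote the hypothetical outage from the example above
+ (the file name is illustrative):
+ 
+ ....
+ git mv content/planned/buzzilla-slow.md content/ongoing/
+ make devserver
+ ....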

+ 

+ == Publishing

+ 

+ Only members of sysadmin-main and people given the AWS credentials can

+ update the status website.

+ 

+ === Initial Configuration for Publishing

+ 

+ [arabic]

+ . First, install the AWS command line tool with:

+ +

+ ....

+ sudo dnf install aws-cli

+ ....

+ . Grab `ansible-private/files/aws-status-credentials` and store in

+ `~/.aws/credentials`.

+ . Run:

+ +

+ ....

+ aws configure set preview.cloudfront true

+ ....

+ 

+ === Publishing changes live

+ 

+ Once you are satisfied with your changes and how they look on the

+ devserver, and they have been committed to Git, push the built
+ changes live with the command:

+ 

+ ....

+ make upload

+ ....

+ 

+ Note that this command only updates content changes (i.e. adding /

+ moving outages)

+ 

+ === Publishing theme changes

+ 

+ If your changes involve changes to the theme, run the following command

+ to upload both content and theme changes to the live server:

+ 

+ ....

+ make upload-theme

+ ....

+ 

+ == Renewing SSL certificate

+ 

+ [arabic]

+ . Run certbot to generate a certificate and have it signed by Let's Encrypt
+ (you can run this command anywhere certbot is installed; you can use

+ your laptop or _certgetter01.iad2.fedoraproject.org_):

+ +

+ ....

+ rm -rf ~/certbot

+ certbot certonly --agree-tos -m admin@fedoraproject.org --no-eff-email --manual --manual-public-ip-logging-ok -d status.fedoraproject.org -d www.fedorastatus.org --preferred-challenges http-01 --config-dir ~/certbot/conf --work-dir ~/certbot/work --logs-dir ~/certbot/log

+ ....

+ . You will be asked to make a specific file available under a specific URL.
+ In a different terminal, upload the requested file to the AWS S3 bucket:

+ +

+ ....

+ echo SOME_VALUE >myfile

+ aws --profile statusfpo s3 cp myfile s3://status.fedoraproject.org/.well-known/acme-challenge/SOME_FILE

+ ....

+ . Verify that the uploaded file is available under the right URL. If the
+ previous certificate has already expired, you may need to run curl with the -k option:

+ +

+ ....

+ curl -kL http://www.fedorastatus.org/.well-known/acme-challenge/SOME_FILE

+ ....

+ . After making sure that curl outputs the expected value, go back to the
+ certbot run and continue by pressing Enter. You will be asked to repeat
+ steps 2 and 3 for the other domain. Note that the S3 bucket name should
+ stay the same.
+ . Deploy the generated certificate to AWS. This requires additional
+ permissions on AWS.

@@ -0,0 +1,161 @@ 

+ = Log Infrastructure SOP

+ 

+ Logs are centrally forwarded to our loghost and managed from there by

+ rsyslog to create several log outputs.

+ 

+ Epylog provides twice-daily log reports of activities on our systems. It

+ runs on our central loghost and generates reports on all systems

+ centrally logging.

+ 

+ == Contact Information

+ 

+ Owner:::

+   Fedora Infrastructure Team

+ Contact:::

+   #fedora-admin, sysadmin-main

+ Servers:::

+   log01.iad2.fedoraproject.org

+ Purpose:::

+   Provides our central logs and reporting

+ 

+ == Essential data/locations

+ 

+ * Logs compiled using rsyslog on log01 into a single set of logs for all

+ systems:

+ +

+ ....

+ /var/log/merged/

+ ....

+ +

+ These logs are rotated every day and kept for only 2 days. This set of

+ logs is only used for immediate analysis and more trivial 'tailing' of

+ the log file to watch for events.

+ * Logs for each system separately in `/var/log/hosts`

+ +

+ These logs are maintained forever, practically, or for as long as we

+ possibly can. They are broken out into a `$hostname/$YEAR/$MON/$DAY`

+ directory structure so we can locate a specific day's log immediately.

+ * Log reports generated by epylog: Log reports generated by epylog are

+ outputted to `/srv/web/epylog/merged`

+ +

+ This path requires a username and a password to access. To add your

+ username and password you must first join the sysadmin-logs group then

+ login to `log01.iad2.fedoraproject.org` and run this command:

+ +

+ ....

+ htpasswd -m /srv/web/epylog/.htpasswd $your_username

+ ....

+ +

+ when prompted for a password please input a password which is NOT YOUR

+ FEDORA ACCOUNT SYSTEM PASSWORD.

+ 

+ [IMPORTANT]

+ ====

+ Let's say that again to be sure you got it:

+ 

+ DO *NOT* HAVE THIS BE THE SAME AS YOUR FAS PASSWORD

+ ====

+ 

+ == Configs

+ 

+ Epylog configs are controlled by ansible - please see the ansible epylog

+ module for more details. Specifically the files in

+ `roles/epylog/files/merged/`

+ 

+ === Generating a one-off epylog report

+ 

+ If you wish to generate a specific log report you will need to run the

+ following command on log01:

+ 

+ ....

+ sudo /usr/sbin/epylog -c /etc/epylog/merged/epylog.conf --last 5h

+ ....

+ 

+ You can replace '5h' with other time measurements to control the amount

+ of time you want to view from the merged logs. This will mail a report

+ notification to all the people in the sysadmin-logs group.

+ 

+ === Audit logs, centrally

+ 

+ We've taken the audit logs and enabled our rsyslogd on the hosts to

+ relay the audit log contents to our central log server.

+ 

+ Here's how we did that:

+ 

+ [arabic]

+ . modify the selinux policy so that rsyslogd can read the file(s) in

+ `/var/log/audit/audit.log`

+ 

+ ____

+ BEGIN Selinux policy module:

+ 

+ ....

+ module audit_via_syslog 1.0;

+ 

+ require {

+     type syslogd_t;

+     type auditd_log_t;

+     class dir { search };

+     class file { getattr read open };

+ 

+ }

+ 

+ #============= syslogd_t ==============

+ allow syslogd_t auditd_log_t:dir search;

+ allow syslogd_t auditd_log_t:file { getattr read open };

+ ....

+ 

+ END selinux policy module

+ ____

+ 

+ . add config to rsyslog on the clients to repeatedly send all changes to

+ their audit.log file to the central syslog server as local6:

+ +

+ ....

+ # monitor auditd log and send out over local6 to central loghost

+ $ModLoad imfile.so

+ 

+ # auditd audit.log

+ $InputFileName /var/log/audit/audit.log

+ $InputFileTag tag_audit_log:

+ $InputFileStateFile audit_log

+ $InputFileSeverity info

+ $InputFileFacility local6

+ $InputRunFileMonitor

+ ....

+ 

+ ____

+ then modify your emitter to the syslog server to send local6.* there

+ ____

+ 

+ . on the syslog server - setup log destinations for:

+ * merged audit logs of all hosts (explicitly drop any non-AVC audit
+ messages here); the magic exclude line is:

+ +

+ ....

+ :msg, !contains, "type=AVC" ~

+ ....

+ +

+ that line must be directly above the log entry you want to filter and it

+ has a cascade effect on everything below it unless you disable the

+ filter

+ +

+ ____

+ * per-host audit logs - this is everything from audit.log

+ ____

+ . On the syslog server - we can run audit2allow/audit2why on the audit

+ logs sent there by doing this:

+ +

+ ....

+ grep 'hostname' /var/log/merged/audit.log | sed 's/^.*tag_audit_log: //' | audit2allow

+ ....

+ +

+ The sed is there to remove the log prefix added by syslog when
+ transferring the message.

+ 

+ == Future

+ 

+ * additional log reports for errors from http processes or servers

+ * SEC (Simple Event Correlator) to report, immediately, on events from a

+ log stream - available in fedora/epel.

+ * New report modules within epylog

@@ -0,0 +1,32 @@ 

+ = Tag2DistRepo Infrastructure SOP

+ 

+ == Contents

+ 

+ * <<_contact_information>>

+ * <<_description>>

+ * <<_configuration>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Primary upstream contact::

+   Patrick Uiterwijk - FAS: puiterwijk

+ Servers::

+   bodhi-backend02.iad2.fedoraproject.org

+ Purpose::

+   Tag2DistRepo is a Fedmsg Consumer that waits for tag operations in

+   specific tags, and then instructs Koji to create Distro Repos.

+ 

+ == Description

+ 

+ Tag2DistRepo is a Fedmsg Consumer that waits for tag operations in

+ specific tags, and then instructs Koji to create Distro Repos.

+ 

+ == Configuration

+ 

+ Configuration is handled by the `bodhi-backend.yaml` playbook in Ansible.

+ This can also be used to reconfigure the application, if that becomes
+ necessary.

@@ -0,0 +1,138 @@ 

+ = Torrent Releases Infrastructure SOP

+ 

+ 

+ http://torrent.fedoraproject.org/ is our master torrent server for

+ Fedora distribution. It runs out of ibiblio.

+ 

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, sysadmin-torrent group

+ Location::

+   ibiblio

+ Servers::

+   torrent.fedoraproject.org

+ Purpose::

+   Provides the torrent master server for Fedora distribution

+ 

+ == Torrent Release

+ 

+ When you want to add a new torrent to the tracker at

+ http://torrent.fedoraproject.org you need to take the following

+ steps to have it listed correctly:

+ 

+ [arabic]

+ . Log in to _torrent02.fedoraproject.org_. If you are unable to do so, please

+ contact the fedora infrastructure group about access. This procedure

+ requires membership in the torrentadmin group.

+ . Change the group ID to torrentadmin

+ +

+ ....

+ newgrp torrentadmin

+ ....

+ . Remove everything from the working directory `/srv/torrent/new/fedora/`

+ +

+ ....

+ rm -r /srv/torrent/new/fedora/*

+ ....

+ . rsync all the ISOs from ibiblio

+ +

+ ....

+ rsync -avhHP rsync://download-ib01.fedoraproject.org/fedora-stage/<Version>_<Release>-<Label>/*/*/iso/ /srv/torrent/new/fedora/

+ ....

+ . cd into `/srv/torrent/new/fedora/` to change the directory structure

+ +

+ ....

+ cd /srv/torrent/new/fedora/

+ ....

+ . Create directories by removing the Label from the ISO names

+ +

+ ....

+ for iso in $(ls *iso); do dest=$(echo $iso|sed -e 's|-<Label>.iso||g' ); mkdir $dest; mv $iso $dest; done

+ ....

+ . Copy the checksums into the associated directories

+ +

+ ....

+ for checksum in $(ls *CHECKSUM); do for file in $(grep "SHA256 (" $checksum |sed -e 's|SHA256 (||g' -e 's|-<Label>.*||g' ); do cp $checksum $file ; done; done

+ ....

+ . Verify that all the checksums were copied into the right locations

+ +

+ ....

+ ls */

+ ....

+ . Remove the manifest files and checksums for netinst (since we don't

+ mirror netinst images) and other files

+ +

+ ....

+ rm -rf *manifest *netinst* *CHECKSUM *i386 *x86_64

+ ....

+ . Run the maketorrent script from `/srv/torrent/new/fedora/`

+ +

+ ....

+ ../maketorrent *

+ ....

+ +

+ ____

+ [NOTE]

+ ====

+ Next steps should be run 12 hours before the release time which is

+ generally 14:00 UTC on Tuesday.

+ ====

+ ____

+ 

+ . Grab `fedora-torrent-ini.py` from the releng scripts and make it

+ executable

+ +

+ ....

+ cd ~

+ wget https://pagure.io/releng/raw/master/f/scripts/fedora-torrent-ini.py

+ chmod 755 ~/fedora-torrent-ini.py

+ ....

+ . Run the following command from `/srv/torrent/new/fedora/`

+ +

+ ....

+ ~/fedora-torrent-ini.py <Version>_<Release> <Current_Date> > <Version>_<Release>.ini

+ ....

+ . Copy all the torrents to `/srv/web/torrents/`

+ +

+ ....

+ cp *torrent /srv/web/torrents/

+ ....

+ . Copy everything in `/srv/torrent/new/fedora/` to `/srv/torrent/btholding/`

+ +

+ ....

+ cp -rl * /srv/torrent/btholding/

+ ....

+ . Copy the .ini file created in step 12 to `/srv/torrent/torrent-generator/`

+ +

+ ....

+ sudo cp <Version>_<Release>.ini /srv/torrent/torrent-generator/

+ ....

+ . Restart rtorrent and opentracker services

+ +

+ ....

+ systemctl restart opentracker-ipv4 opentracker-ipv6

+ sudo -i

+ su -s /bin/bash torrent

+ tmux (or tmux attach if the session is already running)

+ control-q if rtorrent is already running.

+ cd /srv/torrent/btholding

+ rtorrent *.torrent

+ control-b d (disconnect from tmux)

+ ....

+ 

+ [NOTE]

+ ====

+ For final release, remove all the alpha and beta directories and torrent

+ files corresponding to the release in `/srv/torrent/btholding/` directory.

+ ====

+ 

+ [NOTE]

+ ====

+ At EOL of a release, remove all the directories and torrent files

+ corresponding to the release in `/srv/torrent/btholding/` directory.

+ ====

+ 

@@ -0,0 +1,17 @@ 

+ = Fedora Infra Unbound Notes

+ 

+ Sometimes, especially after updates/reboots you will see alerts like

+ this:

+ 

+ ....

+ 18:46:55 < zodbot> PROBLEM - unbound-tummy01.fedoraproject.org/Unbound 443/tcp is WARNING: DNS WARNING - 0.037 seconds response time (dig returned an error status) (noc01)

+ 18:51:06 < zodbot> PROBLEM - unbound-tummy01.fedoraproject.org/Unbound 80/tcp is WARNING: DNS WARNING - 0.035 seconds response time (dig returned an error status) (noc01)

+ ....

+ 

+ To correct this, restart unbound on the relevant node (in the example

+ case above, _unbound-tummy01_), by running the `restart_unbound` Ansible

+ playbook from _batcave01_:

+ 

+ ....

+ sudo -i ansible-playbook /srv/web/infra/ansible/playbooks/restart_unbound.yml --extra-vars="target=unbound-tummy01.fedoraproject.org"

+ ....

@@ -0,0 +1,72 @@ 

+ = Fedora Infrastructure Kpartx Notes

+ 

+ == How to mount virtual partitions

+ 

+ There can be multiple reasons you need to work with the contents of a

+ virtual machine without that machine running.

+ 

+ [arabic]

+ . You have decommissioned the system and found you need to get something

+   that was not backed up.

+ . The system is for some reason unbootable and you need to change some

+   file to make it work.

+ . Forensics work of some sort.

+ 

+ In the case of 1 and 2 the following commands and tools are invaluable.

+ In the case of 3, you should work with the Fedora Security Team and

+ follow their instructions completely.

+ 

+ == Steps to Work With Virtual System

+ 

+ [arabic]

+ . Find out what physical server the virtual machine image is on.

+ +

+ ____

+ [upperalpha]

+ .. Log into _batcave01.iad2.fedoraproject.org_

+ .. search for the hostname in the file `/var/log/virthost-lists.out`:

+ +

+ ....

+ $ grep proxy01.phx2.fedoraproject.org /var/log/virthost-lists.out

+ virthost05.phx2.fedoraproject.org:proxy01.phx2.fedoraproject.org:running:1

+ ....

+ .. If the image does not show up in the list then most likely it is an

+ image which has been decommissioned. You will need to search the virtual

+ hosts more directly:

+ +

+ ....

+ # for i in `awk -F: '{print $1}' /var/log/virthost-lists.out |

+         sort -u`; do

+             ansible $i -m shell -a 'lvs | grep proxy01.phx2'

+         done

+ ....

+ ____

+ 

+ . Log into the virtual server and make sure the image is shut down. Even
+ in cases where the system is not working correctly, it may still
+ have a running qemu on the physical server. It is best to confirm that

+ the box is dead.

+ +

+ ....

+ # virsh destroy <hostname>

+ ....

+ . We will be using the kpartx command to make the guest image ready for

+ mounting.

+ +

+ ....

+ # lvs | grep <hostname>

+ # kpartx -l /dev/mapper/<volume>-<hostname>

+ # kpartx -a /dev/mapper/<volume>-<hostname>

+ # vgscan

+ # vgchange -ay /dev/mapper/<new volume-name>

+ # mount /dev/mapper/<partition we want> /mnt

+ ....

+ . Edit the files as needed.

+ . Tear down the tree.

+ +

+ ....

+ # umount /mnt

+ # vgchange -an <volume-name>

+ # vgscan

+ # kpartx -d /dev/mapper/<volume>-<hostname>

+ ....
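+ 
+ For illustration, a hedged walk-through of the same sequence for a
+ hypothetical guest named proxy01 in volume group vg_guests (your volume,
+ guest and partition names will differ; check the `lvs` and `kpartx -l`
+ output before each step):
+ 
+ ....
+ # lvs | grep proxy01
+ # kpartx -l /dev/mapper/vg_guests-proxy01
+ # kpartx -a /dev/mapper/vg_guests-proxy01
+ # vgscan
+ # vgchange -ay vg_proxy01
+ # mount /dev/mapper/vg_proxy01-root /mnt
+ # ... edit files under /mnt ...
+ # umount /mnt
+ # vgchange -an vg_proxy01
+ # vgscan
+ # kpartx -d /dev/mapper/vg_guests-proxy01
+ ....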

@@ -0,0 +1,119 @@ 

+ = Fedora Infrastructure Libvirt Notes

+ 

+ Notes/FAQ on using `libvirt/virsh/virt-manager` in our environment

+ 

+ == How do I migrate a guest from one virthost to another

+ 

+ . Set up an unpassworded root ssh key to allow communication between
+ the two virthosts as root. This is only temporary, so while scary, it
+ is not a big deal. Right now, this also means setting
+ `PermitRootLogin without-password` in `/etc/ssh/sshd_config`.

+ . Determine whatever changes need to be made to the guest. This can be

+ the number of cpus, the amount of memory, or the disk location as this

+ may not be standard on the current server.

+ . Make a dump of the current virtual guest using the virsh

+ command. Use `virsh dumpxml --migratable guestname` and then edit any

+ changes in disk layout, memory and cpu needed.

+ . Set up storage on the destination end to match the source storage.
+ `lvs` will give the amount of disk space. Due to some vagaries in disk
+ sizes, it is always better to round up, so if the original server says it
+ is using 19.85 GB, make the new image 20 GB. On the new server, use

+ `lvcreate -L+$\{SIZE}GB -n $\{FQDN} vg_guests`

+ . As root on the source location, run:

+ +

+ ....

+ virsh -c qemu:///system migrate --xml ${XML_FILE_FROM_3} \
+   --copy-storage-all ${GUESTNAME} \
+   qemu+ssh://root@destinationvirthost/system
+ ....
+ +
+ This starts the migration process, but it prints nothing on the command
+ line to tell you so. On the destination system, watch
+ `/var/log/libvirt/qemu/<guestname>.log` (`tail -f` will show the progress
+ as a percentage completed).

+ . Once the migration is complete you will probably need to run this on

+ the new virthost:

+ +

+ ....

+ scp ${XML_FILE_FROM_3} root@destinationvirthost:

+ 

+ ssh root@destinationvirthost

+ virsh define ${XML_FILE_FROM_3}

+ virsh autostart ${GUESTNAME}

+ ....

+ . Edit ansible host_vars of the guest and make sure that the associated

+ values are correct:

+ +

+ ....

+ volgroup: /dev/vg_guests

+ vmhost: virthost??.phx2.fedoraproject.org

+ ....

+ . Run the `noc.yml` ansible playbook to update nagios.

+ 

+ This should work for most systems. However, in some cases the virtual
+ servers on either side may have too much activity to 'settle' down
+ enough for a migration to work. In other cases the guest may be on a
+ disk like iSCSI which may not allow for direct migration. In these cases
+ you will need to move the data more directly.

+ 

+ [arabic]

+ . Schedule outage time if any. This will need to be long enough to copy

+ the data from one host to another, so will depend on guest disk size.

+ . Turn off monitoring in nagios

+ . Set up an unpassworded root ssh key to allow communication between
+ the two virthosts as root. This is only temporary, so while scary, it
+ is not a big deal. Right now, this also means setting
+ `PermitRootLogin without-password` in `/etc/ssh/sshd_config`.

+ . Determine whatever changes need to be made to the guest. This can be

+ the number of cpus, the amount of memory, or the disk location as this

+ may not be standard on the current server.

+ . Make a dump of the current virtual guest using the virsh

+ command. Use `virsh dumpxml --migratable guestname` and then edit any

+ changes in disk layout, memory and cpu needed.

+ . Set up storage on the destination end to match the source storage.
+ `lvs` will give the amount of disk space. Due to some vagaries in disk
+ sizes, it is always better to round up, so if the original server says it
+ is using 19.85 GB, make the new image 20 GB. On the new server, use

+ `lvcreate -L+$\{SIZE}GB -n $\{FQDN} vg_guests`

+ . Shutdown the guest.

+ . Insert iptables rule for nc transfer:

+ +

+ ....

+ iptables -I INPUT 14 -s <source host> -m tcp -p tcp --dport 11111 -j ACCEPT

+ ....

+ . On the destination host:

+ +

+ ____

+ * RHEL-7:

+ +

+ ....

+ nc -l 11111 | dd of=/dev/<guest_vg>/<guest-partition>

+ ....

+ ____

+ . On the source host:

+ +

+ ....

+ dd if=/dev/<guest_vg>/<guest-partition> | nc desthost 11111

+ ....

+ +

+ Wait for the copy to finish. To track progress, find the dd pid and send
+ it a 'kill -USR1'; dd will then report how much it has copied (see the
+ sketch after this list).

+ . Once the migration is complete you will probably need to run this on

+ the new virthost:

+ +

+ ....

+ scp ${XML_FILE_FROM_3} root@destinationvirthost:

+ 

+ ssh root@destinationvirthost

+ virsh define ${XML_FILE_FROM_3}

+ virsh autostart ${GUESTNAME}

+ ....

+ . Edit ansible host_vars of the guest and make sure that the associated

+ values are correct:

+ +

+ ....

+ volgroup: /dev/vg_guests

+ vmhost: virthost??.iad2.fedoraproject.org

+ ....

+ . Run the `noc.yml` ansible playbook to update nagios.
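+ 
+ A minimal sketch of the dd progress check mentioned in the list above
+ (run on the source host; `pkill -x dd` signals every running dd, so make
+ sure no other dd jobs are running there):
+ 
+ ....
+ # pkill -USR1 -x dd
+ # dd then prints its current record counts and throughput to its stderr
+ ....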

@@ -0,0 +1,19 @@ 

+ = Virtio Notes

+ 

+ We have found that virtio is faster/more stable than emulating other

+ cards on our VMs.

+ 

+ To switch a VM to virtio:

+ 

+ * Remove from DNS if it's a proxy

+ * Log into the vm and shut it down

+ * Log into the virthost that the VM is on, and

+ `sudo virsh edit <VM FQDN>`

+ * Add this line to the appropriate bridge interface(s) (see the example after this list):

+ +

+ ....

+ <model type='virtio'/>

+ ....

+ * Save/quit the editor

+ * `sudo virsh start <VM FQDN>`

+ * Re-add to DNS if it's a proxy
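+ 
+ For reference, an edited interface stanza might look roughly like the
+ following (the MAC address and bridge name here are illustrative, not
+ values from our hosts):
+ 
+ ....
+ <interface type='bridge'>
+   <mac address='52:54:00:xx:xx:xx'/>
+   <source bridge='br0'/>
+   <model type='virtio'/>
+ </interface>
+ ....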

@@ -0,0 +1,183 @@ 

+ = Voting Infrastructure SOP

+ 

+ The live voting instance can be found at https://admin.fedoraproject.org/voting
+ and the staging instance at https://admin.stg.fedoraproject.org/voting/.

+ 

+ The code base can be found at
+ https://pagure.io/elections

+ 

+ == Contents

+ 


+ * <<_contact_information>>

+ * <<_creating_a_new_election>>

+ ** <<_creating_the_elections>>

+ ** <<_adding_candidates>>

+ ** <<_who_can_vote>>

+ * <<_modifying_an_election>>

+ ** <<_edit_a_candidate>>

+ ** <<_removing_a_candidate>>

+ ** <<_releasing_the_results_of_an_embargoed_election>>

+ * <<_results>>

+ * <<_legacy>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin, elections

+ Servers::

+   elections0\{1,2}, elections01.stg, db02

+ Purpose::

+   Provides a system for voting on Fedora matters

+ 

+ == Creating a new election

+ 

+ === Creating the elections

+ 

+ * Log in

+ * Go to "Admin" in the menu at the top, select "Create new election" and

+ fill in the form.

+ * The "usefas" option results in candidate names being looked up as FAS
+ usernames and displayed as their real names.

+ * An alias should be added when creating a new election as this is used

+ in the link on the page of listed elections on the frontpage.

+ * Complete the election form:

+ +

+ Alias::

+   A short name for the election. It is the name that will be used in the

+   templates.

+ +

+ `Example: FESCo2014`

+ Summary::

+   A simple name that will be used in the URLs and as in the links in the

+   application

+ +

+ `Example: FESCo elections 2014`

+ Description::

+   A short description about the elections that will be displayed above

+   the choices in the voting page

+ Type::
+   Allows setting the type of the election (more on that below)
+ Maximum Range/Votes::
+   Allows setting options for some election types (more on that below)

+ URL::

+   A URL pointing to more information about the election

+ +

+ `Example: the wiki page presenting the election`

+ Start Date::

+   The Start of the elections (UTC)

+ End Date::

+   The Close of the elections (UTC)

+ Number Elected::

+   The number of seats that will be selected among the candidates after

+   the election

+ Candidates are FAS users?::

+   Checkbox enabling integration with FAS so that candidate names are
+   retrieved from their FAS accounts.

+ Embargo results::

+   If this is set then it will require manual intervention to release the

+   results of the election

+ Legal voters groups::

+   Used to restrict the votes to one or more FAS groups.

+ Admin groups::

+   Give admin rights on that election to one or more FAS groups

+ 

+ === Adding Candidates

+ 

+ The list of all the elections can be found at `voting/admin/`

+ 

+ Click on the election of interest and select "Add a candidate".

+ 

+ Each candidate is added with a name and a URL. The name can be their
+ FAS username (useful if the "Candidates are FAS users?" checkbox was
+ checked when creating the election) or something else.

+ 

+ The URL can be a reference to the wiki page where they nominated

+ themselves.

+ 

+ This will add extra candidates to the available list.

+ 

+ === Who can vote

+ 

+ If no 'Legal voters groups' have been defined when creating the
+ election, the election is open to anyone who has signed the CLA and is
+ in at least one other group (commonly referred to as CLA+1).

+ 

+ == Modifying an Election

+ 

+ To change the details of an election, proceed as follows.

+ 

+ [NOTE]

+ ====

+ This page can also be used to verify details of an election before it

+ opens for voting.

+ ====

+ The list of all the elections can be found at `/voting/admin/`

+ 

+ After finding the right election, click on it to see the overview and
+ select "Edit election" under the description.

+ 

+ === Edit a candidate

+ 

+ On the election overview page found via `/voting/admin/` (and clicking

+ on the election of interest), next to each candidate is an

+ _edit_ button allowing the admins to edit the information

+ relative to the candidate.

+ 

+ === Removing a candidate

+ 

+ On the election overview page found via `/voting/admin/` (and clicking

+ on the election of interest), next to each candidate is an

+ _x_ button allowing the admins to remove the candidate

+ from the election.

+ 

+ === Releasing the results of an embargoed election

+ 

+ Visit the elections admin interface and edit the election to uncheck the

+ 'Embargo results?' checkbox.

+ 

+ == Results

+ 

+ Admins have early access to the results of the elections (regardless of

+ the embargo status).

+ 

+ The list of the closed elections can be found at `/voting/archives`.

+ 

+ Find the election of interest there and click on the "Results" link in
+ the last column of the table. This will show you the Results page,
+ including who was elected based on the number of seats entered when
+ creating the election.
+ 
+ You may use this information to send out the results email.

+ 

+ == Legacy

+ 

+ [NOTE]

+ ====

+ The information below should now be included in the Results page (see
+ above), but it is kept here just in case.

+ === Other things you might need to query

+ 

+ The current election software doesn't retrieve all of the information

+ that we like to include in our results emails. So we have to query the

+ database for the extra information. You can use something like this to

+ retrieve the total number of voters for the election:

+ 

+ ....

+ SELECT e.id, e.shortdesc, COUNT(distinct v.voter) FROM elections AS e LEFT

+ JOIN votes AS v ON e.id=v.election_id WHERE e.shortdesc in ('FAmSCo - February

+ 2014') GROUP BY e.id, e.shortdesc;

+ ....

+ 

+ You may also want to include the vote tally per candidate for

+ convenience when the FPL emails the election results:

+ 

+ ....

+ SELECT e.id, e.shortdesc, c.name, c.novotes FROM elections AS e LEFT JOIN

+ fvotecount AS c ON e.id=c.election_id WHERE e.shortdesc in ('FAmSCo - February

+ 2014', 'FESCo - February 2014') ;

+ ....
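+ 
+ These queries are run directly against the elections database. A
+ hypothetical session on the database host listed under Servers above
+ (the exact database name is an assumption; check the application
+ configuration for the real value):
+ 
+ ....
+ [root@db02 ~]# sudo -u postgres psql elections
+ elections=# \d
+ ....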

@@ -0,0 +1,110 @@ 

+ = WaiverDB SOP

+ 

+ WaiverDB is a service for recording waivers, from humans, that

+ correspond with results in ResultsDB.

+ 

+ On its own, it doesn't do much.

+ 

+ Importantly, the _Greenwave_ service queries resultsdb _and_ waiverdb

+ and makes decisions (for _Bodhi_ and other tools) based on the

+ combination of data from the two sources. A failing result in resultsdb
+ counts against a decision unless it has been waived in waiverdb.

+ 

+ == Contact Information

+ 

+ Owner::

+   Factory2 Team, Fedora QA Team, Infrastructure Team

+ Contact::

+   #fedora-qa, #fedora-admin

+ Persons::

+   dcallagh, gnaponie (giulia), lholecek, ralph (threebean)

+ Public addresses::

+   * https://waiverdb-web-waiverdb.app.os.fedoraproject.org/api/v1.0/about

+   * https://waiverdb-web-waiverdb.app.os.fedoraproject.org/api/v1.0/waivers

+ Servers::

+   * In OpenShift.

+ Purpose::

+   Record waivers and respond to queries about them.

+ 

+ == Description

+ 

+ See the https://waiverdb.readthedocs.io/en/latest/index.html[upstream
+ API docs] for detailed information. The information here is specific to
+ the Fedora environment.

+ 

+ There _will be_ two ways of inserting waivers into waiverdb:

+ 

+ First, a CLI tool, which performs an HTTP POST from the packager's
+ machine.

+ 

+ Second, a proxied request from bodhi. In this case, the packager will

+ click a button in the Bodhi UI (next to a failing test result). Bodhi

+ will receive the request from the user and in turn submit a POST to

+ waiverdb on the user's behalf. Here, the Bodhi Server will authenticate

+ _as_ the bodhi user, but request that the waiver be recorded as having

+ been submitted _by_ the original packager. Bodhi's account will have to

+ be given special _proxy_ privileges in waiverdb. See

+ https://pagure.io/waiverdb/issue/77

+ 

+ == Observing WaiverDB Behavior

+ 

+ Log in to _os-master01.iad2.fedoraproject.org_ as
+ _root_ (or authenticate remotely with OpenShift using
+ `oc login https://os.fedoraproject.org`), and run:

+ 

+ ....

+ $ oc project waiverdb

+ $ oc status -v

+ $ oc logs -f dc/waiverdb-web

+ ....

+ 

+ == Removing erroneous waivers

+ 

+ In general, don't do this. But if for some reason we _really_ need to,

+ the database for waiverdb lives outside of openshift in our standard

+ environment. Connect to db01:

+ 

+ ....

+ [root@db01 ~][PROD]# sudo -u postgres psql waiverdb

+ 

+ waiverdb=# \d

+               List of relations

+  Schema |     Name      |   Type   |  Owner

+ --------+---------------+----------+----------

+  public | waiver        | table    | waiverdb

+  public | waiver_id_seq | sequence | waiverdb

+ (2 rows)

+ 

+ waiverdb=# select * from waiver;

+ ....

+ 

+ Be careful. You can delete individual waivers with SQL.

+ 
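+ A hypothetical removal of a single waiver by id (fetch and double-check
+ the row with a SELECT before deleting it):
+ 
+ ....
+ waiverdb=# SELECT * FROM waiver WHERE id = 1234;
+ waiverdb=# DELETE FROM waiver WHERE id = 1234;
+ ....
+ 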

+ == Upgrading

+ 

+ You can roll out configuration changes by changing the files in

+ https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps/waiverdb[roles/openshift-apps/waiverdb/]

+ and running the `playbooks/openshift-apps/waiverdb.yml` playbook.

+ 

+ To understand how the software is deployed, take a look at these two

+ files:

+ 

+ * `roles/openshift-apps/waiverdb/templates/imagestream.yml`

+ * `roles/openshift-apps/waiverdb/templates/buildconfig.yml`

+ 

+ See that we build a fedora-infra specific image on top of an app image

+ published by upstream. The _latest_ tag is automatically

+ deployed to staging. This should represent the latest commit to the

+ _master_ branch of the upstream git repo that passed its

+ unit and functional tests.

+ 

+ The _prod_ tag is manually controlled. To upgrade prod to

+ match what is in stage, move the _prod_ tag to point to the

+ same image as the _latest_ tag. Our buildconfig is

+ configured to poll that tag, so a new os.fp.o build and deployment

+ should be automatically created.

+ 
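+ Moving the tag can be done with `oc tag`; a sketch, assuming the
+ imagestream is named waiverdb-app in the waiverdb project (check
+ `imagestream.yml` above for the real name):
+ 
+ ....
+ $ oc -n waiverdb tag waiverdb-app:latest waiverdb-app:prod
+ ....
+ 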

+ You can watch the build and deployment with _oc_ commands.

+ 

+ You can poll this URL to see what version is live at the moment:

+ https://waiverdb-web-waiverdb.app.os.fedoraproject.org/api/v1.0/about

@@ -0,0 +1,90 @@ 

+ = What Can I Do For Fedora SOP

+ 

+ == Contents

+ 


+ * <<_contact_information>>

+ * <<_introduction>>

+ * <<_deployment>>

+ * <<_initial_deployment>>

+ * <<_logs>>

+ 

+ == Contact Information

+ 

+ Owner::

+   sysadmin-main

+ Contact::

+   #fedora-admin, #fedora-noc or admin@fedoraproject.org

+ Location::

+   Openshift

+ Public addresses::

+   * whatcanidoforfedora.org

+   * stg.whatcanidoforfedora.org

+ Server(s)::

+   * os.fedoraproject.org

+   * os.stg.fedoraproject.org

+ Purpose::

+   To explain the overall function of the whatcanidoforfedora.org webpage,
+   including some back story, how to build your own, and site navigation.

+ 

+ == Introduction

+ 

+ The 'What Can I Do For Fedora' (https://whatcanidoforfedora.org) page was the

+ brainchild of Ralph Bean after getting inspiration from

+ 'whatcanidoformozilla.org' created by Josh Matthews, Henri Koivuneva and

+ a few others. Ralph wanted to make whatcanidoforfedora (wcidff) as
+ configurable as possible. The purpose of this site is to assist, in as
+ user-friendly a way as possible, new and prospective community members
+ and help them realize what skills they may possess that can be helpful
+ to the Fedora Project.

+ 

+ == Deployment

+ 

+ The application deployment is managed from the GitHub repository, using
+ the 'staging' and 'production' branches to deploy new versions.

+ 

+ For example, a new deployment to staging would look like this:

+ 

+ ....

+ git clone git@github.com:fedora-infra/asknot-ng.git

+ cd asknot-ng

+ git checkout staging

+ git rebase develop

+ git push origin staging

+ ....

+ 

+ The GitHub repository has a webhook configured to send push
+ notifications to our OpenShift instance. Once OpenShift receives the
+ webhook request, it triggers a new build using the repository's
+ https://github.com/fedora-infra/asknot-ng/blob/develop/Dockerfile[Dockerfile].

+ 

+ The 'asknot-ng' container runs the Apache HTTP web server, and its
+ configuration is stored in the
+ https://github.com/fedora-infra/asknot-ng/tree/develop/container[git
+ repository].

+ 
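+ If a push does not trigger a build (for example, the webhook delivery
+ failed), a build can usually be started by hand; a sketch, assuming the
+ BuildConfig is named asknot (check with `oc -n asknot get bc`):
+ 
+ ....
+ $ oc -n asknot start-build asknot
+ ....
+ 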

+ == Initial Deployment

+ 

+ The following playbook is used to create the initial Openshift project

+ with the correct configuration:

+ 

+ ....

+ sudo rbac-playbook openshift-apps/asknot.yml

+ ....

+ 

+ == Logs

+ 

+ Logs can be retrieved by accessing the
+ https://os.fedoraproject.org[OpenShift web console] or by using the
+ OpenShift command line:

+ 

+ ....

+ $ oc login os-master01.iad2.fedoraproject.org

+ You must obtain an API token by visiting https://os.fedoraproject.org/oauth/token/request

+ 

+ $ oc login os-master01.iad2.fedoraproject.org --token=<Your token here>

+ $ oc -n asknot get pods

+ asknot-28-bfj52          1/1       Running     522        28d

+ $ oc logs asknot-28-bfj52

+ ....

@@ -0,0 +1,37 @@ 

+ = Wiki Infrastructure SOP

+ 

+ Managing our wiki.

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team / Fedora Website Team

+ Contact::

+   #fedora-admin or #fedora-websites on libera.chat 

+ Location::

+   https://fedoraproject.org/wiki/

+ Servers::

+   proxy[1-3] app[1-2,4]

+ Purpose::

+   Provides our production wiki

+ 

+ == Description

+ 

+ Our wiki currently runs mediawiki.

+ 

+ [IMPORTANT]

+ ====

+ Whenever you change anything on the wiki (bugfix, configuration,
+ plugins, ...), please update the page at
+ https://fedoraproject.org/wiki/WikiChanges .

+ ==== 

+ 

+ == Dealing with Spammers

+ 

+ If you find a spammer is editing pages in the wiki do the following:

+ 

+ [arabic]

+ . Admin disable their account in fas, add 'wiki spammer' as the comment

+ . Block their account in the wiki from editing any additional pages

+ . Go to the list of pages they've edited and roll back their changes
+ one by one. If there are many, get someone to help you.

@@ -0,0 +1,104 @@ 

+ = Zodbot Infrastructure SOP

+ 

+ Zodbot is a supybot based irc bot that we use in our #fedora channels.

+ 

+ == Contents

+ 


+ * <<_contact_information>>

+ * <<_description>>

+ * <<_startup_issues>>

+ * <<_processing_interrupted_meeting_logs>>

+ * <<_becoming_an_admin>>

+ 

+ == Contact Information

+ 

+ Owner::

+   Fedora Infrastructure Team

+ Contact::

+   #fedora-admin

+ Servers::

+   value01

+ Purpose::

+   Provides our IRC bot

+ 

+ == Description

+ 

+ Zodbot is a supybot based irc bot that we use in our #fedora channels.

+ It runs on _value01_ as the daemon user. We do not manage the

+ `zodbot.conf` because supybot makes changes to it on its own. Therefore it

+ gets backed up and is treated as data.

+ 

+ shutdown::

+   `killall supybot`

+ startup::

+ +

+ ....

+ cd /srv/web/meetbot

+ # zodbot currently needs to be started in the meetbot directory.
+ # This requirement will go away in a later meetbot release.

+ sudo -u daemon supybot -d /var/lib/zodbot/conf/zodbot.conf

+ ....

+ 

+ == Startup issues

+ 

+ If the bot won't connect, with an error like:

+ 

+ ....

+ "Nick/channel is temporarily unavailable"

+ ....

+ 

+ found in `/var/lib/zodbot/logs/messages.log`, hop on libera.chat (with your

+ own IRC client) and do the following:

+ 

+ ....

+ /msg nickserv release zodbot [the password]

+ ....

+ 

+ The password can be found on the bot's host in

+ `/var/lib/zodbot/conf/zodbot.conf`

+ 

+ This should allow the bot to connect again.

+ 

+ == Processing interrupted meeting logs

+ 

+ zodbot forgets about meetings if they are in progress when the bot goes

+ down; therefore, the meetings never get processed. Users may open a
+ ticket in our Trac instance to request that the meeting logs be processed.

+ 

+ Trac tickets for meeting log processing should consist of a URL where

+ zodbot had saved the log so far and an uploaded file containing the rest

+ of the log. The logs are stored in `/srv/web/meetbot`. Append the

+ remainder of the log uploaded to Trac (don't worry too much about

+ formatting; `meeting.py` works well with irssi- and XChat-like logs), then

+ run:

+ 

+ ....

+ sudo python /usr/lib/python2.7/site-packages/supybot/plugins/MeetBot/meeting.py replay /path/to/fixed.log.txt

+ ....

+ 

+ Close the Trac ticket, letting the user know that the logs are processed

+ in the same directory as the URL they gave you.

+ 

+ == Becoming an admin

+ 

+ Register with zodbot on IRC:

+ 

+ ....

+ /msg zodbot misc help register

+ ....

+ 

+ You have to identify to the bot to do any admin type commands, and you

+ need to have done so before anyone can give you privs.

+ 
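+ For example, using supybot's standard identify command (replace the
+ placeholders with the username and password you registered with the bot):
+ 
+ ....
+ /msg zodbot identify <yourusername> <yourpassword>
+ ....
+ 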

+ After doing this, ask in #fedora-admin on IRC and someone will grant you

+ privs if you need them. You'll likely be added to the admin group, which

+ has the following capabilities (the below snippet is from an IRC log

+ illustrating how to get the list of capabilities).

+ 

+ ....

+ 21:57 < nirik> .list admin

+ 21:57 < zodbot> nirik: capability add, capability remove, channels, ignore add,

+ ignore list, ignore remove, join, nick, and part

+ ....

There are many additions as this repo was mostly empty.

The bulk import was done from https://pagure.io/infra-docs/blob/main/f/docs/sysadmin-guide/sops with autoconversion from rst to adoc with pandoc.

The repo is currently rendered in https://gifted-engelbart-b02e8b.netlify.app/infra/

Any help in reviewing is appreciated :-)

A great start!

I notice some things redirect/link to readthedocs... is that intended? Ideally we would drop that too when we move infra-docs...

The SOPs have 'SOP in review'? Is that to make sure we review them, or some artifact of conversion?

The following sops can just be dropped:

askbot
basset
bladecenter
cloud
communishift
cyclades
darkserver
denyhosts
fedmsg* (possibly we should keep this for now since we do still run fedmsg)
fedparapackages
fedorapastebin
fpdc
freemedia
freshmaker
gitweb
hosted_git_to_svn
hubs
ibm_rsa_ii
infra-yubikey
linktracking
loopabull
privatefedorahosted
reviewboard

orientation and index might be better up a level from sops?

Thanks for doing this, it's a good place to start with...

2 new commits added

  • Incorporate Kevins comments and remove obsolete guides
  • Remove old sysadmin guide
2 years ago

yeah i'm +1 for this.

get the content in there, then we can go from there with re-arranging it and working on checking and updating each SOP

1 new commit added

  • Review ansible SOP
2 years ago

6 new commits added

  • Review blockerbugs SOP
  • Review bastio-hosts-info SOP
  • Review aws-access SOP
  • Review arm SOP
  • Review archive-old-fedora SOP
  • Review apps-fp-o SOP
2 years ago

26 new commits added

  • Review fmn SOP
  • Review fedorawebsites SOP
  • Review fedora-releases SOP
  • Review fedocal SOP
  • Review fedmsg-websocket SOP
  • Review fedmsg-relay SOP
  • Review fedmsg-new-message-type SOP
  • Review fedmsg-irc SOP
  • Review fedmsg-introduction SOP
  • Review fedmsg-gateway SOP
  • Review fedmsg-certs SOP
  • Review fas-openid SOP
  • Review fas-notes SOP
  • Review docs SOP
  • Review dns SOP
  • Review departing-admin SOP
  • Review debuginfod SOP
  • Review datanommer SOP
  • Review database SOP
  • Review copr SOP
  • Review contenthosting SOP
  • Review compose-tracker SOP
  • Review collectd SOP
  • Review bugzilla SOP
  • Review bugzilla2fedmsg SOP
  • Review bodhi SOP
2 years ago

1 new commit added

  • Remove freenode irc SOP
2 years ago

2 new commits added

  • Review gdpr_delete SOP
  • Remove gather-easyfix SOP
2 years ago

5 new commits added

  • Review greenwave SOP
  • Review github SOP
  • Review github2fedmsg SOP
  • Review geoip-city-wsgi SOP
  • Review gdpr_sar SOP
2 years ago

11 new commits added

  • Review infra-retiremachine SOP
  • Review infra-repo SOP
  • Review infra-raidmismatch SOP
  • Review infra-hostrename SOP
  • Review infra-git-repo SOP
  • Review index page
  • Review hotness SOP
  • Review hotfix SOP
  • Review haproxy SOP
  • Review guestedit SOP
  • Review guestdisk SOP
2 years ago

4 new commits added

  • Review kerneltest-harness SOP
  • Review jenkins-fedmsg SOP
  • Review iscsi SOP
  • Review ipsilon SOP
2 years ago

5 new commits added

  • Review koschei SOP
  • Review koji-builder-setup SOP
  • Review koji-archive SOP
  • Review koji SOP
  • Review kickstarts SOP
2 years ago

21 new commits added

  • Review openvpn SOP
  • Review openshift SOP
  • Review odcs SOP
  • Review nuancier SOP
  • Review nonhumanaccounts SOP
  • Review new-hosts SOP
  • Review netapp SOP
  • Review nagios SOP
  • Review mote SOP
  • Review mirrormanager SOP
  • Review mirrormanager-S3-EC2-netblocks SOP
  • Review mirrorhiding SOP
  • Review message-tagging-service SOP
  • Review memcached SOP
  • Review mbs SOP
  • Review mastermirror SOP
  • Review massupgrade SOP
  • Review making-ssl-certificates SOP
  • Review mailman SOP
  • Remove librariesio2fedmsg SOP
  • Review layered-image-buildsys SOP
2 years ago

7 new commits added

  • Review planetsubgroup SOP
  • Review pesign-upgrade SOP
  • Review pdc SOP
  • Review pagure SOP
  • Review packagereview SOP
  • Remove packagedatabase SOP
  • Review outage SOP
2 years ago

Metadata Update from @pbokoc:
- Request assigned

2 years ago

27 new commits added

  • Review zodbot SOP
  • Review wiki SOP
  • Review wcidff SOP
  • Review waiverdb SOP
  • Review voting SOP
  • Review virt-notes SOP
  • Review virtio SOP
  • Review virt-image SOP
  • Review unbound SOP
  • Review torrentrelease SOP
  • Review tag2distrepo SOP
  • Review syslog SOP
  • Review status-fedora SOP
  • Review staging SOP
  • Review sshknownhosts SOP
  • Review sshaccess SOP
  • Review simple_koji_ci SOP
  • Review sigul-upgrade SOP
  • Review selinux SOP
  • Review scmadmin SOP
  • Review retrace SOP
  • Review resultsdb SOP
  • Review requestforresources SOP
  • Review registry SOP
  • Review rdiff-backup SOP
  • Review rabbitmq SOP
  • Review publictest-dev-stg-production SOP
2 years ago

I went through the migrated docs and tried to fix anything strange I found: removing SOPs that are no longer relevant, fixing URLs where I was able to find the replacement, and adding TODOs in a few places.

The docs still need to be revised by somebody who has knowledge about the topic of each SOP, but at least they are now rendered without issues.

IMHO we should merge this now/asap. It's better than what we have now and will give us a base to build on/fix.

@kevin I concur, let's squash the commits and get this in. I'm being entirely selfish btw, the DNF Counting SOP skeleton + review process doc PR #9 builds on top of it :wink:.

@pbokoc Do you need help with the review?

Damn, that's a big one. Everything looks fine on a quick glance. Thanks everyone!

Pull-Request has been merged by pbokoc

2 years ago

@pbokoc Is this already somewhere on docs.fp.o?

Metadata
Changes Summary 127
+0 -2
file changed
antora.yml
+3 -5
file changed
modules/ROOT/pages/services.adoc
-0
file removed
modules/communishift/assets/images/.keep
-1
file removed
modules/communishift/nav.adoc
-10
file removed
modules/communishift/pages/index.adoc
-0
file removed
modules/old_sysadmin_guide/assets/images/.keep
-1
file removed
modules/old_sysadmin_guide/nav.adoc
+0
file added
modules/sysadmin_guide/assets/images/ansible-repositories.png
+118 -1
file changed
modules/sysadmin_guide/nav.adoc
+98
file added
modules/sysadmin_guide/pages/2-factor.adoc
+294
file added
modules/sysadmin_guide/pages/accountdeletion.adoc
+210
file added
modules/sysadmin_guide/pages/anitya.adoc
+249
file added
modules/sysadmin_guide/pages/ansible.adoc
+31
file added
modules/sysadmin_guide/pages/apps-fp-o.adoc
+104
file added
modules/sysadmin_guide/pages/archive-old-fedora.adoc
+206
file added
modules/sysadmin_guide/pages/arm.adoc
+152
file added
modules/sysadmin_guide/pages/aws-access.adoc
+43
file added
modules/sysadmin_guide/pages/bastion-hosts-info.adoc
+156
file added
modules/sysadmin_guide/pages/blockerbugs.adoc
+431
file added
modules/sysadmin_guide/pages/bodhi.adoc
+116
file added
modules/sysadmin_guide/pages/bugzilla.adoc
+73
file added
modules/sysadmin_guide/pages/bugzilla2fedmsg.adoc
+68
file added
modules/sysadmin_guide/pages/collectd.adoc
+26
file added
modules/sysadmin_guide/pages/compose-tracker.adoc
+126
file added
modules/sysadmin_guide/pages/contenthosting.adoc
+427
file added
modules/sysadmin_guide/pages/copr.adoc
+219
file added
modules/sysadmin_guide/pages/database.adoc
+121
file added
modules/sysadmin_guide/pages/datanommer.adoc
+133
file added
modules/sysadmin_guide/pages/debuginfod.adoc
+62
file added
modules/sysadmin_guide/pages/departing-admin.adoc
+332
file added
modules/sysadmin_guide/pages/dns.adoc
+66
file added
modules/sysadmin_guide/pages/docs.fedoraproject.org.adoc
+154
file added
modules/sysadmin_guide/pages/fas-notes.adoc
+42
file added
modules/sysadmin_guide/pages/fas-openid.adoc
+178
file added
modules/sysadmin_guide/pages/fedmsg-certs.adoc
+104
file added
modules/sysadmin_guide/pages/fedmsg-gateway.adoc
+57
file added
modules/sysadmin_guide/pages/fedmsg-introduction.adoc
+29
file added
modules/sysadmin_guide/pages/fedmsg-irc.adoc
+73
file added
modules/sysadmin_guide/pages/fedmsg-new-message-type.adoc
+56
file added
modules/sysadmin_guide/pages/fedmsg-relay.adoc
+70
file added
modules/sysadmin_guide/pages/fedmsg-websocket.adoc
+35
file added
modules/sysadmin_guide/pages/fedocal.adoc
+364
file added
modules/sysadmin_guide/pages/fedora-releases.adoc
+304
file added
modules/sysadmin_guide/pages/fedorawebsites.adoc
+198
file added
modules/sysadmin_guide/pages/fmn.adoc
+33
file added
modules/sysadmin_guide/pages/gather-easyfix.adoc
+110
file added
modules/sysadmin_guide/pages/gdpr_delete.adoc
+153
file added
modules/sysadmin_guide/pages/gdpr_sar.adoc
+57
file added
modules/sysadmin_guide/pages/geoip-city-wsgi.adoc
+70
file added
modules/sysadmin_guide/pages/github.adoc
+51
file added
modules/sysadmin_guide/pages/github2fedmsg.adoc
+111
file added
modules/sysadmin_guide/pages/greenwave.adoc
+138
file added
modules/sysadmin_guide/pages/guestdisk.adoc
+72
file added
modules/sysadmin_guide/pages/guestedit.adoc
+139
file added
modules/sysadmin_guide/pages/haproxy.adoc
+51
file added
modules/sysadmin_guide/pages/hotfix.adoc
+136
file added
modules/sysadmin_guide/pages/hotness.adoc
+72
file added
modules/sysadmin_guide/pages/index.adoc
+55
file added
modules/sysadmin_guide/pages/infra-git-repo.adoc
+113
file added
modules/sysadmin_guide/pages/infra-hostrename.adoc
+68
file added
modules/sysadmin_guide/pages/infra-raidmismatch.adoc
+112
file added
modules/sysadmin_guide/pages/infra-repo.adoc
+34
file added
modules/sysadmin_guide/pages/infra-retiremachine.adoc
+215
file added
modules/sysadmin_guide/pages/ipsilon.adoc
+132
file added
modules/sysadmin_guide/pages/iscsi.adoc
+40
file added
modules/sysadmin_guide/pages/jenkins-fedmsg.adoc
+61
file added
modules/sysadmin_guide/pages/kerneltest-harness.adoc
+170
file added
modules/sysadmin_guide/pages/kickstarts.adoc
+44
file added
modules/sysadmin_guide/pages/koji-archive.adoc
+119
file added
modules/sysadmin_guide/pages/koji-builder-setup.adoc
+206
file added
modules/sysadmin_guide/pages/koji.adoc
+210
file added
modules/sysadmin_guide/pages/koschei.adoc
+280
file added
modules/sysadmin_guide/pages/layered-image-buildsys.adoc
+112
file added
modules/sysadmin_guide/pages/mailman.adoc
+53
file added
modules/sysadmin_guide/pages/making-ssl-certificates.adoc
+412
file added
modules/sysadmin_guide/pages/massupgrade.adoc
+74
file added
modules/sysadmin_guide/pages/mastermirror.adoc
+204
file added
modules/sysadmin_guide/pages/mbs.adoc
+69
file added
modules/sysadmin_guide/pages/memcached.adoc
+83
file added
modules/sysadmin_guide/pages/message-tagging-service.adoc
+34
file added
modules/sysadmin_guide/pages/mirrorhiding.adoc
+20
file added
modules/sysadmin_guide/pages/mirrormanager-S3-EC2-netblocks.adoc
+171
file added
modules/sysadmin_guide/pages/mirrormanager.adoc
+103
file added
modules/sysadmin_guide/pages/mote.adoc
+89
file added
modules/sysadmin_guide/pages/nagios.adoc
+131
file added
modules/sysadmin_guide/pages/netapp.adoc
+337
file added
modules/sysadmin_guide/pages/new-hosts.adoc
+25
file added
modules/sysadmin_guide/pages/nonhumanaccounts.adoc
+142
file added
modules/sysadmin_guide/pages/nuancier.adoc
+139
file added
modules/sysadmin_guide/pages/odcs.adoc
+370
file added
modules/sysadmin_guide/pages/openqa.adoc
+73
file added
modules/sysadmin_guide/pages/openshift.adoc
+143
file added
modules/sysadmin_guide/pages/openvpn.adoc
+170
file added
modules/sysadmin_guide/pages/orientation.adoc
+249
file added
modules/sysadmin_guide/pages/outage.adoc
+37
file added
modules/sysadmin_guide/pages/packagereview.adoc
+109
file added
modules/sysadmin_guide/pages/pagure.adoc
+153
file added
modules/sysadmin_guide/pages/pdc.adoc
+62
file added
modules/sysadmin_guide/pages/pesign-upgrade.adoc
+56
file added
modules/sysadmin_guide/pages/planetsubgroup.adoc
+82
file added
modules/sysadmin_guide/pages/publictest-dev-stg-production.adoc
+129
file added
modules/sysadmin_guide/pages/rabbitmq.adoc
+96
file added
modules/sysadmin_guide/pages/rdiff-backup.adoc
+81
file added
modules/sysadmin_guide/pages/registry.adoc
+184
file added
modules/sysadmin_guide/pages/requestforresources.adoc
+45
file added
modules/sysadmin_guide/pages/resultsdb.adoc
+128
file added
modules/sysadmin_guide/pages/retrace.adoc
+310
file added
modules/sysadmin_guide/pages/scmadmin.adoc
+125
file added
modules/sysadmin_guide/pages/selinux.adoc
+78
file added
modules/sysadmin_guide/pages/sigul-upgrade.adoc
+48
file added
modules/sysadmin_guide/pages/simple_koji_ci.adoc
+153
file added
modules/sysadmin_guide/pages/sshaccess.adoc
+26
file added
modules/sysadmin_guide/pages/sshknownhosts.adoc
+128
file added
modules/sysadmin_guide/pages/staging.adoc
+166
file added
modules/sysadmin_guide/pages/status-fedora.adoc
+161
file added
modules/sysadmin_guide/pages/syslog.adoc
+32
file added
modules/sysadmin_guide/pages/tag2distrepo.adoc
+138
file added
modules/sysadmin_guide/pages/torrentrelease.adoc
+17
file added
modules/sysadmin_guide/pages/unbound.adoc
+72
file added
modules/sysadmin_guide/pages/virt-image.adoc
+119
file added
modules/sysadmin_guide/pages/virt-notes.adoc
+19
file added
modules/sysadmin_guide/pages/virtio.adoc
+183
file added
modules/sysadmin_guide/pages/voting.adoc
+110
file added
modules/sysadmin_guide/pages/waiverdb.adoc
+90
file added
modules/sysadmin_guide/pages/wcidff.adoc
+37
file added
modules/sysadmin_guide/pages/wiki.adoc
+104
file added
modules/sysadmin_guide/pages/zodbot.adoc