#5747 How to publish html to our web infrastructure
Closed: Fixed 6 years ago Opened 7 years ago by bex.

I've been working on an ascii_binder-generated website for the budget. I am about ready to push to stage and then start doing it in production.

The end result of the build process in Jenkins is a "pile of html" that is website ready. What is the best way to get this to the web serving infrastructure? We have the ability to put a private key into the Jenkins job. Should we do that? To what location would the files be uploaded?

What is the next step?


So, our regular static websites build on an internal machine called 'sundries01'. It's just a cron job that pulls from git, runs the make, etc., and outputs the end-result files to a directory. Then there are cron jobs on our proxies that pull from that machine and serve the static content locally.
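
Roughly, the flow is something like the sketch below; the paths and layout are just illustrative guesses, not the actual sundries01 setup:

```bash
#!/bin/bash
# Sketch of the sundries01-style build cron job described above.
# Paths and repo layout are hypothetical.
set -euo pipefail

SRC=/srv/web/site-src      # hypothetical working clone of the site sources
OUT=/srv/web/site-output   # hypothetical directory the proxy cron jobs pull from

# Pull the latest sources and rebuild the static HTML.
git -C "$SRC" pull --ff-only
make -C "$SRC"

# Drop the built files where the proxies can sync them from.
rsync -a --delete "$SRC/out/" "$OUT/"
```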

For the developer site, they build somewhere and push the built version into a GitHub git repo; we pull that on sundries01 and the proxies sync it.

So I guess for this it would be easiest if the Jenkins job could push the completed output to a git repo, and we can just pull that like we do for the developer site?
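
Something along these lines in the Jenkins job would do it, as a rough sketch; the output repo URL, branch, and build directory below are placeholders:

```bash
# Rough sketch of a Jenkins shell step that publishes the built HTML into a
# separate output repo for the proxies to pull. All names are placeholders.
set -euo pipefail

BUILD_DIR=output/html                              # wherever ascii_binder left the built site
OUT_REPO=ssh://git@pagure.io/some-site-output.git  # placeholder output repo
BRANCH=master

git clone "$OUT_REPO" publish
cd publish
git checkout -B "$BRANCH"
rsync -a --delete --exclude .git ../"$BUILD_DIR"/ .
git add -A
git commit -m "Automated publish from Jenkins build ${BUILD_NUMBER:-manual}" || echo "nothing new to publish"
git push origin "$BRANCH"
```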

This works for me. Two questions:

  1. I haven't found docs describing staging for our sites. How do I get something published to stage versus production?
  2. What key can I use to push to pagure.io that isn't my key?

Staging is set up just like production, except with stg in the name... so https://stg.getfedora.org/ is the stg version of https://getfedora.org.

The websites team usually adjusts the build script (in this case running on sundries01.stg) to use a different branch so they can make changes just in stg, then they merge back to master for production.
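
In other words, the build script ends up doing something like this; the hostname test and path are only illustrative:

```bash
# Sketch of the branch-per-environment idea: staging hosts build from the stg
# branch, production hosts build from master. Hostname test and path are made up.
if [[ $(hostname) == *stg* ]]; then
    BRANCH=stg
else
    BRANCH=master
fi

git -C /srv/web/site-src fetch origin
git -C /srv/web/site-src checkout "$BRANCH"
git -C /srv/web/site-src pull --ff-only origin "$BRANCH"
```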

I am not sure on key. I know pagure allows you to get API tokens, but not sure if you can commit changes with those or not. Perhaps @pingou could chime in on that or suggest something?

In your settings page you can add new public SSH keys that will be linked to your account; this allows you to have multiple keys.

I am hesitant to have a key whose password is known to others, not just me, when that key has access to everything I have beyond this project.

I completely understand that concern, however, I am not sure that gitolite supports a different workflow (maybe worth checking) and that's what we rely on.

Do we have a way to create an ID which we could restrict to this repository and have that ID carry the key? Essentially, we need a FAS account for a non-person, as I understand it.

Instead of creating a new account for this, I would suggest looking at filing a Pagure RFE for project push keys (or something).

@puiterwijk I do not think gitolite supports this

It sounds like the preferred solution to this is going to take time to research, architect, and engineer. Today, for testing, I scp the output to a VM running a web server, using a key unique to that VM. Can we put an interim solution in place that is similar?
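
For reference, the interim flow is basically just the following (hostname, user, and paths are made up):

```bash
# Minimal sketch of the current test setup: copy the built HTML to a test VM
# using a key dedicated to that host. Hostname, user, and paths are invented.
scp -i ~/.ssh/budget_test_key -r output/html/. deploy@test-vm.example.org:/var/www/html/
```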

This would also allow us to expand the problem space to rethink the publishing of sites that are built as opposed to maintained statically.

Also, for consideration, we'd like to be able to do pushes to zanata as well. This will also require some form of secrets management. So it may be the case that this is the problem we wish to generally solve.

@bex https://pagure.io/pagure/pull-request/1873

I am going to say -1 to adding an SSH key out-of-band to any servers.
Currently, the SSH keys are all managed by fasClient, installed from FAS, and having a unique key on a single server would require out-of-band keys.
I would suggest just using the Deploy Keys feature I just filed for Pagure.
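
Once a repo has a deploy key, the Jenkins job could pin that key for its pushes with something like this (key path and branch are illustrative):

```bash
# Sketch of pushing with a repository-scoped deploy key rather than a personal
# key. The key path is hypothetical; IdentitiesOnly keeps ssh from offering any
# other keys.
export GIT_SSH_COMMAND="ssh -i /var/lib/jenkins/keys/budget-deploy -o IdentitiesOnly=yes"
git push origin stg
```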

I am +1 on Deploy Keys if they can be implemented in relatively short order. I hate to block this if it is going to be a long process. That said, it looks like you are both making short work of this. Thank you!!! I cannot wait to get something publishing to stage!

And... deploy keys are now live in production.

What are next steps here?

I am finishing tests this week of the new deploy key features. I had hit a bug and haven't had time to cycle back since a fix was applied. My goal is to be able to supply a repo and branch for use with staging publishing to test.

What is the next step?

Ok, it is working. I am pushing to pagure.io/fedora-budget-site in the stg branch.

ok, so now we should pull that and publish it somewhere in our stg.

Should this go to budget.stg.fedoraproject.org ? Or some other URI?

I need to talk to the websig about using that name - they currently have it on a static site missing the new data hotness.

Is a temp name easy, or should this wait until that conversation resolves?

We could do another name, but it would be somewhat of a hassle. I'd prefer to wait if that's ok.

What's the status here? Anything further to do until the conversation finishes in docs?

I have a test push in pagure.io/fedora-budget-site that will be used for budget.fedoraproject.org. The site is being reviewed for content and needs a few CSS tweaks still. I also need to engage with the websites group about replacing the site with this one. I'll try to open that email this week.

I am blocked on getting pagure to trigger jenkins jobs on pushes. @pingou and I have gone around on this several times but I am still doing something wrong. I do have a push deploy key working, so once this is unblocked, the repo will auto-publish.
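
As a stopgap while that gets sorted out, one option might be Jenkins' "trigger builds remotely" token URL, which a webhook relay or post-receive hook could hit; the host, job name, and token below are placeholders:

```bash
# Hypothetical trigger: POST to the Jenkins remote-build URL for the job.
curl -X POST "https://jenkins.example.org/job/fedora-budget-site/build?token=SECRET_TOKEN"
```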

Do we have an idea of whether we want to implement push-based publishing (having my -site repo trigger a job to do the publish) or what?

I hope to use this as a test for the docs publishing theory as it is a smaller and more self-contained project.

/cc @immanetize, @ryanlerch

Is this all done now? Or is there something more pending?

I believe there are three open questions:

  1. Should the deploy key be stored in git or somehow stored in Jenkins (maybe as a cat in the script)?

  2. This ticket or a new one needs to be opened, blocked on either getting a webhook in Pagure that can fire Jenkins or getting loopabull in place.

  3. Open planning question, are we considering moving to an on-demand (possibly rate limited) publishing or sticking with cron?

  1. I guess it should be in Jenkins, otherwise anyone could get it and deploy over the valid stuff (see the sketch after this list).

  2. I don't know the state of webhooks in pagure for this. Is there a pagure ticket?
    Loopabull should be along before too long...

  3. I like the idea of on-demand/on-changes over cron jobs, so if we can do that I am in favor.
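
For 1, the shape of it would be something like the sketch below, with Jenkins handing the job a path to the key rather than the key living in the repo (DEPLOY_KEY_FILE is a made-up name for however the credential gets injected):

```bash
# Sketch for question 1: the deploy key never lives in git; Jenkins provides it
# to the build (e.g. via a credentials/secret-file binding) and the job only
# references the injected path. DEPLOY_KEY_FILE is a hypothetical variable name.
export GIT_SSH_COMMAND="ssh -i ${DEPLOY_KEY_FILE} -o IdentitiesOnly=yes"
git push origin stg
```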

I guess we solved/did 1 and 2 here... leaving question 3..

Now that we have an openshift instance I wonder if it might make sense to use that.
We could possibly just build on demand there and then sync out to proxies from there, or even just build and use those instances directly. It would mean not having so much geographic diversity, but static docs should be pretty easy to serve...
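
Very rough sketch of what that could look like with the oc CLI, just to make the idea concrete; the builder image, name, and flow are assumptions, not a design:

```bash
# Build the static site in OpenShift straight from the repo using a builder
# image that serves static content (httpd here, as an assumption), then rebuild
# on demand instead of via cron. Everything here is illustrative.
oc new-app httpd~https://pagure.io/fedora-budget-site.git --name=budget-site
oc start-build budget-site --follow
```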

I'm 100% in favor of this for stage (stages with branches). I leave it up to infra to decide how prod should work.

I believe there is prior art in the Atomic Docs we can borrow. I'll get on this when I'm back after the 16th.

@kevin what were you thinking in terms of on demand building? What is the trigger mechanism supported by our openshift? Maybe we should have a quick meeting to discuss some ideas (and related issues around PRs and stage)?

Lots of ways we could do it, and perhaps we should wait until we have our new cloud up and a dev openshift so we can try various things more easily.

That said, I would think it would make sense to have the dev openshift trigger on git commits and build and publish, then if you push that same commit to staging or prod branches those trigger and build and publish.

Let's discuss further at some point and file a new ticket with any needed changes?

(I don't think tickets are a great way to discuss ideas, they are much better for concrete actions after those ideas are decided on).

Hopefully in the next few months we will have a new cloud/dev openshift and it should be much easier to fire off proof of concept things.

:ticket:

Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)
