#5970 Let's put OSTrees on mirrorlist and dl.fpo
Opened 2 years ago by bowlofeggs. Modified 5 months ago

There was a discussion today about how we could start including OSTrees in mirrorlist, and have only two mirrors backing it for Fedora 26:

  • One mirror, served by dl.fpo, would contain static-delta OSTree repos generated by the two-week-atomic release script.
  • The fallback mirror, served by kojipkgs, would contain the full OSTree repo.

I talked with @puiterwijk and he suggested that we would need infra to make these changes:

[11:02] <puiterwijk> For dl.fp.o: we want a location to throw it in, and then an rsync module.
[11:02] <puiterwijk> For mirror lists: after that module is created, it needs to be made known to mirrorlists

This would help accomplish https://pagure.io/atomic-wg/issue/265.


I think we agreed to:

  • 1st - have mirrorlist with only 1 mirror in it (kojipkgs)
  • 2nd - modify static delta generation to not fall back to small files (so that we can get a static-delta-only repo)
  • 3rd - after confirming 1 is working, add dl.fp.o with the static-delta-only repo to the mirrorlist
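
For reference, the rsync module @puiterwijk mentions could look something like this sketch (the module name and path here are made up for illustration, not the real infra config):

```ini
# /etc/rsyncd.conf on dl.fedoraproject.org -- hypothetical module
[fedora-atomic-ostree]
    path = /srv/pub/fedora/atomic
    comment = Fedora Atomic OSTree repo (static deltas only)
    read only = yes
    list = yes
```

Once a module like this exists, mirrormanager can crawl it the same way it crawls the other dl.fp.o modules.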

How do we test this in fedora 26 before we start doing fedora 26 releases?

another option is that we just mirror the entire repo into amazon cloudfront since they are offering that service to us: https://pagure.io/fedora-infrastructure/issue/6022

ok I just did a test with cloudfront. I configured it to front our unified repo and then kicked off an update from 27.1.6 (our first F27AW released commit) to 27.88 from today.

on the first run I get:

3134 metadata, 18591 content objects fetched; 816389 KiB transferred in 410 seconds

I then trash the machine and re-run the update (this time the content should be cached in cloudfront):

3134 metadata, 18591 content objects fetched; 816389 KiB transferred in 160 seconds

so from 410 to 160 seconds is a decent improvement (sample size 1, very scientific)
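
Converting those two runs into throughput (quick arithmetic on the figures quoted above, nothing more):

```python
# Back-of-the-envelope numbers for the cloudfront test above
# (sample size 1, as noted -- this just restates the quoted figures).
kib_transferred = 816389       # KiB moved in each run
cold_s, warm_s = 410, 160      # uncached vs. cloudfront-cached run

cold_mib_s = kib_transferred / 1024 / cold_s
warm_mib_s = kib_transferred / 1024 / warm_s

print(f"cold: {cold_mib_s:.1f} MiB/s, warm: {warm_mib_s:.1f} MiB/s")
print(f"speedup: {cold_s / warm_s:.2f}x")
```

So roughly 1.9 MiB/s cold vs 5.0 MiB/s warm, about a 2.5x speedup.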

+1 yes let's enable this

How do we get the logs of downloads from cloudfront?

How do we get the logs of downloads from cloudfront?

API call to amazon using amazon tools. This should be pretty easy to do.
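
CloudFront's standard access logs are delivered as gzipped, tab-separated files into an S3 bucket you configure on the distribution; once fetched, tallying downloads is simple. A sketch (the sample line is made up; field order follows the standard log format):

```python
# Sketch: tally requests and bytes from CloudFront standard access logs.
# The SAMPLE_LOG content below is fabricated for illustration.
from collections import Counter

SAMPLE_LOG = """\
#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status
2018-07-01\t12:00:01\tIAD89-C1\t4096\t192.0.2.10\tGET\td111111abcdef8.cloudfront.net\t/repo/objects/ab/cdef.filez\t200
2018-07-01\t12:00:02\tIAD89-C1\t8192\t192.0.2.11\tGET\td111111abcdef8.cloudfront.net\t/repo/objects/ab/cdef.filez\t200
"""

def tally(log_text):
    requests = Counter()
    total_bytes = 0
    for line in log_text.splitlines():
        if line.startswith("#"):      # skip the #Version/#Fields headers
            continue
        fields = line.split("\t")
        requests[fields[7]] += 1      # cs-uri-stem
        total_bytes += int(fields[3]) # sc-bytes
    return requests, total_bytes

reqs, nbytes = tally(SAMPLE_LOG)
print(reqs.most_common(1), nbytes)
```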

+1 to mirroring the entire repo. It's only going to be fetching what is requested.
You can break the various Fedora releases into Cloudront distributions to keep them distinct if that helps.

examples of users having download issues:

2018-02-08 13:09:13     Prudhvi_        Ouch. I should have used the ISO from that url AdrianCeleste. My rpm-ostree upgrade is taking forever
2018-03-03 09:40:00     alatiera        dustymabe: yea,  I am currently running atomic-WS rawhide, I was hoping having to avoid installing the f27 iso since it's from November
2018-03-03 09:40:20     alatiera        last time ti took me like 5hours to download the updates with my crappy DSL
2018-03-03 09:50:01     alatiera        dustymabe: I think I will end up writting a small script that restarts rpm-ostree upgrade when the net will timeout and watch a movie :P

I don't think we should/could do this right now without doing the other work.

My main reason against switching this around at the moment is that we really want either metalink support (which is why that was the very first point on the list, and I've yet to see mirrormanager patches for it), or we want to make very sure that at least the repo config and refs/ files are requested from the master mirror, so that an outdated or malicious mirror can't serve stale refs that would contain known-vulnerable software.
This was the whole reason we agreed that metalink support would be action point number one.

While I don't believe that cloudfront would do something like that on purpose, your proposal has us set the cache time on their side very high, meaning users would still get stale refs.
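
For what it's worth, one mechanism that approximates this property on the client side (not a substitute for metalink, just noting it) is ostree's url/contenturl split: as I understand it, the summary and refs are fetched from `url` while objects and static deltas come from `contenturl`. A sketch of a client remote config, with a made-up CloudFront hostname:

```ini
# /etc/ostree/remotes.d/fedora-atomic.conf -- sketch; the cloudfront
# hostname below is hypothetical
[remote "fedora-atomic"]
url=https://dl.fedoraproject.org/atomic/repo/
contenturl=https://d111111abcdef8.cloudfront.net/repo/
gpg-verify=true
```

That way a stale CDN cache can only slow down object fetches, not pin clients to an old ref.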

I know there was discussion about this at the recent hackfest... what did we decide to do?
who is doing it and what is its status?

:interrobang:

Metadata Update from @kevin:
- Issue priority set to: Waiting on Assignee

8 months ago

we currently have https://dl.fedoraproject.org/atomic/repo/ configured as the repo for our atomic host nodes and the objects directory (https://dl.fedoraproject.org/atomic/repo/objects) is fronted by cloudfront. I think longer term we'd like to have proper mirroring, but considering the rojig work, it might not be necessary. I think for now this one is just WAITING on further input from the Atomic WG.

Metadata Update from @kevin:
- Issue priority set to: Waiting on External (was: Waiting on Assignee)

8 months ago

We currently have /objects/ and /deltas/ fronted by cloudfront.

How is this working out?

In the month of june I see: ~600GB transferred from cloudfront to users over ~17.7 million https requests. The traffic to our origin was pretty close to nothing ( 0.0002 GB ).
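
Rough arithmetic on those figures: ~600 GB over ~17.7 million requests works out to only ~33 KiB per request on average, which fits OSTree's many-small-objects layout:

```python
# Average object size implied by the June cloudfront stats above.
gb_transferred = 600    # ~600 GB delivered to users
requests = 17.7e6       # ~17.7 million https requests

avg_kib = gb_transferred * 1e9 / requests / 1024
print(f"~{avg_kib:.0f} KiB per request on average")
```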

Shall we just close this now until we have metalink support available?

Most of the things discussed here I do not understand, but I see that MirrorManager and metalink is mentioned a few times. I do not see any issues or PRs opened against MirrorManager concerning this topic. If somebody who actually understands what is required could open a ticket at https://github.com/fedora-infra/mirrormanager2 with the necessary details, that would be really good.

@kevin
We currently have /objects/ and /deltas/ fronted by cloudfront.
How is this working out?

seems to be working ok, but there are some optimizations we probably could make either in infra or ostree itself to make things a little more speedy. see https://github.com/ostreedev/ostree/issues/1541

In the month of june I see: ~600GB transferred from cloudfront to users over ~17.7 million https requests. The traffic to our origin was pretty close to nothing ( 0.0002 GB ).

hmm. all of the content needed to be transferred to the cloudfront endpoints, didn't it? For some reason I would have expected more than 0.0002 GB. Did you make any optimizations in your calculation?

Shall we just close this now until we have metalink support available?

probably?? @walters or @puiterwijk have any input?

@adrian
I do not see any issues or PRs opened against MirrorManager concerning this topic.

you are right. I don't think we have any feature requests for you at this time.
