#9392 low bandwidth when rsyncing from dl-iad0[45].fedoraproject.org
Closed: Fixed 3 years ago by smooge. Opened 3 years ago by bellet.

Hi,

I'm running the fr2.rpmfind.net mirror in France, and I have difficulty keeping this tier1 mirror up to date with the bandwidth I currently observe when rsyncing from dl-iad04 or dl-iad05 (i.e. dl-tier1). Typically the bandwidth is between 5-10 Mbit/s, very variable, with occasional peaks at 60 Mbit/s (this mirror, site 105 in mirrormanager, is located in the French educational network).

With the amount of content generated daily on the master site, this bandwidth makes each run of quick-fedora-mirror take a significant number of hours to complete. I think this undermines the point of running quick-fedora-mirror frequently to pick up urgent package updates. Moreover, during this long rsync run the data has enough time to change upstream, causing download errors for files that are no longer available. In that case the synchronisation completes only partially (I think the final delete phase is not executed when there are download errors).

I tried to switch over to download-ib01, and I get much better bandwidth (100-200 Mbit/s); however, I sometimes see quick-fedora-mirror errors stating that the file list is outdated, so I'm not sure I should stay with ib01 as a tier1 mirror.

This issue is not new to me; it is a slow degradation I have been observing from my point of view.


I can be pinged as 'fab' on freenode if needed.

Could you please attach an mtr / mtr -n (or something equivalent) so we can see which path is congested? The download servers are pretty 'clear' in use and bandwidth 'locally' seems ok, so I am guessing a congested outbound network is causing the problems.
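For reference, a non-interactive report like the one below is the easiest thing to paste into the ticket (the target host is just an example; any of the dl-iad hosts works):

# mtr in report mode (-r), numeric output (-n), 35 probe cycles (-c)
mtr -n -r -c 35 dl-iad04.fedoraproject.org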

Metadata Update from @smooge:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: medium-gain, medium-trouble, ops

3 years ago

Does this issue need to be private? There's some other folks seeing slow transfers also that might be able to add info here if we make it public...

Here is the mtr -n:

 Host                                        Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 195.220.108.1                             2.9%    35    0.2   0.4   0.1   2.0   0.4
 2. 134.214.201.169                           0.0%    35    0.4   1.1   0.3  11.9   2.3
 3. 172.16.17.10                              2.9%    35    0.8   3.3   0.6  41.3   8.9
 4. 193.55.215.222                            2.9%    35    1.1   1.2   0.8   3.2   0.4
 5. 193.51.183.250                            0.0%    35    1.3   1.6   1.1   3.7   0.6
 6. 193.51.180.4                              0.0%    35    1.1   2.2   1.0  10.5   1.8
 7. 193.51.177.222                            0.0%    35    5.5   6.8   5.1  28.2   4.7
 8. 149.6.155.233                             0.0%    35    5.7   6.0   5.4   8.1   0.6
 9. 154.54.38.169                             0.0%    34    5.8   6.1   5.5   8.3   0.6
10. 154.54.57.206                             2.9%    34   17.4  17.8  17.1  20.5   0.6
11. 154.54.57.241                             0.0%    34   21.7  22.0  21.6  23.4   0.4
12. 154.54.61.133                             0.0%    34   26.6  26.9  26.4  29.0   0.5
13. 154.54.85.245                             2.9%    34   97.0  97.1  96.4  99.0   0.6
14. 154.54.30.66                              0.0%    34   97.5  98.1  97.5 101.8   0.9
15. 38.32.106.90                              0.0%    34   97.2  99.5  97.1 155.6  10.0
16. 209.132.185.254                           0.0%    34  118.7 106.7  98.1 190.7  16.6
17. 38.145.60.25                              0.0%    34  103.9 104.1  98.0 127.7   5.6

Metadata Update from @bellet:
- Issue private status set to: False (was: True)

3 years ago

OK, the problem is a maxed-out network connection going into Fedora networks. For some reason various BGP routes are defaulting to go through cogentco.com, and we are at a 2 Gbit max on that line.

Interesting. I have also seen slow rates on my mirror using dl-tier1.fedoraproject.org, which points at exactly those two hosts. I haven't looked closer at it; it just felt slower, and it is always difficult to tell why rsync takes a long time without checking whether it is just a lot of local (or remote) I/O or limited network speed.

 1. 129.143.116.9                   0.0%    34    0.3   0.3   0.2   0.7   0.1
 2. 134.108.47.1                    0.0%    34    0.4   0.4   0.3   0.7   0.1
 3. 129.143.116.5                   0.0%    34    1.5   1.5   1.3   1.6   0.1
 4. 129.143.60.70                   0.0%    33    1.4   1.4   1.3   1.5   0.1
 5. 149.6.20.5                      0.0%    33    1.8   1.8   1.6   1.9   0.1
 6. 154.54.37.101                   0.0%    33    4.6   4.6   4.4   6.4   0.5
 7. 130.117.48.210                  0.0%    33    4.7   4.7   4.5   4.9   0.1
 8. 154.54.58.238                   0.0%    33   13.8  13.8  13.6  14.1   0.1 
 9. 154.54.61.117                   0.0%    33   26.1  26.2  26.1  26.5   0.1 
10. 154.54.85.245                   0.0%    33   93.5  93.3  93.1  93.6   0.1 
11. 154.54.30.66                    0.0%    33   97.5  97.5  97.3  97.7   0.1 
12. 38.32.106.90                    0.0%    33   97.2  98.4  97.0 120.8   5.0 
13. 209.132.185.254                 0.0%    33   94.2 103.4  94.2 198.1  21.6 
14. 38.145.60.25                    0.0%    33  104.4 110.8  97.9 195.0  23.5 

Because of that I switched from running quick-fedora-mirror for all modules at once to running it for each module separately.
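In case it helps anyone else, the pattern I use is roughly the sketch below. It assumes quick-fedora-mirror accepts a config file via -c and that MODULES can be overridden per config; the module names and paths are placeholders from a stock setup, so adjust to whatever your installed version expects.

# One quick-fedora-mirror run per module (hypothetical wrapper)
for module in fedora-enchilada fedora-epel fedora-secondary; do
    # clone the base config, restricting it to a single module
    sed "s/^MODULES=.*/MODULES=($module)/" /etc/quick-fedora-mirror.conf \
        > /tmp/qfm-$module.conf
    quick-fedora-mirror -c /tmp/qfm-$module.conf
done

The upside is that one module failing with "file list is outdated" no longer blocks the other modules from finishing their run.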

I am looking to see if we can figure out why we have 2 Gbit of bandwidth being used nearly constantly. I will also see if traffic shaping to particular servers is possible and helpful.

@smooge is there an internal ticket on this? perhaps we should file one and raise things with mgmt? (If we need to try and get additional BW resources)

It would be good to sort this before the f33 release. ;(

Additionally, will enabling ipv6 help out here? Does that use the same routing/upstream?

So the previous site had 5 Gbit for all of PHX2, which was split between all Red Hat traffic and Fedora. We now have two 2 Gbit links to the internet, but are at the whims of upstream BGP routing as to which one gets used. As far as I can tell, something is weighting all the traffic toward cogentco versus the other link.

Bandwidth costs money and takes contract changes, which will take time with lawyers etc. I don't think we will have this fixed by F33; we may have it dealt with by F34. I would say it is good to put it into a ticket and get it in front of Engineering management that there is an X cost per month we need to take into consideration.

Adding ipv6 would not help, as it would go through the same BGP AS destination. Turning on ipv6 is a very complicated change: we found lots of applications inside of PHX2 sending some traffic over ipv4 and some over ipv6, so builds and other traffic failed. I would recommend we look at that when we have a full 2 month focus for it. [AKA we need to put that in as a Q1 project for our sprints.]

> So the previous site had 5 Gbit for all of PHX2, which was split between all Red Hat traffic and Fedora. We now have two 2 Gbit links to the internet, but are at the whims of upstream BGP routing as to which one gets used. As far as I can tell, something is weighting all the traffic toward cogentco versus the other link.

Yeah, we should be able to weight the ASN of one less and balance that down more, I would hope?

> Bandwidth costs money and takes contract changes, which will take time with lawyers etc. I don't think we will have this fixed by F33; we may have it dealt with by F34. I would say it is good to put it into a ticket and get it in front of Engineering management that there is an X cost per month we need to take into consideration.

Well, one of the reasons we are in the datacenter we are is that we were told turning up new connections would be easy for them to do. I'm sure there's delay and legal and finance, which is a reason to start right now?

> Adding ipv6 would not help, as it would go through the same BGP AS destination. Turning on ipv6 is a very complicated change: we found lots of applications inside of PHX2 sending some traffic over ipv4 and some over ipv6, so builds and other traffic failed. I would recommend we look at that when we have a full 2 month focus for it. [AKA we need to put that in as a Q1 project for our sprints.]

Often ipv6 routes are very different from ipv4 routes. From here to rdu-cc is 22 hops over ipv4, and 10 over ipv6. So, yes, they would use the same pipes at the IAD2 end, but they might be distributed differently across them.

The problem when we had ipv6 enabled before was that there was no actual working ipv6 route, so nothing could reach the outside.

I agree we need to come up with a plan for enabling it, but I don't think it will take months. ;)

I was going off that F33 is supposed to be released next week, or at worst 2-3 weeks from now. That is where getting larger bandwidth may be a problem. Getting the BGP routes changed is faster to fix, but I will have to hand that off to someone else to put the tickets in.

I wanted to add that QFM tosses out "looks like the file list is outdated" when it tries to transfer something that no longer exists on the master. It won't retry with the current file lists; it just exits so that the next run will pick up the new lists. This has always happened to some degree, as the amount of churn on the mirrors has been very high for years now, but it is manageable with sufficient bandwidth. If you don't have that bandwidth, mirrors are just going to be out of date while they sync over a million files of alt/stage content, which doesn't seem to be hardlinked even though it should be identical to files already in the archive.

Really, fixing bandwidth issues is only part of the solution here. Since alt is a dumping ground, it may be prudent to simply say that nobody should ever mirror it.

I'll be happy to no longer mirror alt, if in fact that's the recommended way of saving churn and bandwidth. Just let me know if that's indeed the official recommendation of the Fedora Project.

I've filed an internal ticket on this to get more visibility internally and see if there's any options.

I am hardlinking the RCs, as this is a big delta that could be causing a lot more BW usage.
@mohanboddu when/if you make another RC, can you hardlink stage/ after it?
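For reference, something along these lines with the util-linux hardlink tool should do it; the paths are only illustrative, not the real layout on the masters:

# dry-run first (-n), then link identical files in stage/ against releases/
hardlink -n -v /srv/pub/alt/stage /srv/pub/fedora/linux/releases
hardlink -v /srv/pub/alt/stage /srv/pub/fedora/linux/releases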

> I'll be happy to no longer mirror alt, if in fact that's the recommended way of saving churn and bandwidth. Just let me know if that's indeed the official recommendation of the Fedora Project.

I would not say that. There are things in alt that are nice to mirror and that some people use. That said, I am pretty sure alt has fewer users than any other module.
The RC hardlinking should help make alt more manageable, I hope.
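For mirrors that do decide to skip alt, it should just be a matter of dropping fedora-alt from the MODULES list in the quick-fedora-mirror config; a sketch, with module names taken from a stock config (verify against your own):

# quick-fedora-mirror.conf: sync everything except alt (and archive, if unwanted)
MODULES=(fedora-enchilada fedora-epel fedora-secondary)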

An update:

  • Internally quotes and approvals are being gathered to see if we can just increase BW on all our upstream pipes.

  • I have disabled rsync on dl01/02/03 for now to get more BW for master mirrors. If we get increased BW we can re-enable this (I hope we can)

  • I've asked for any other short term options from networking folks.

  • Some fullfile lists were not right, I have regenerated them (see https://pagure.io/fedora-infrastructure/issue/9407)

Also: Can folks try using:

download-ib01.fedoraproject.org (has everything)
or
download-cc-rdu01.fedoraproject.org (has everything except archive)

They should be up to date.
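If it helps, switching a quick-fedora-mirror setup over should be a one-line config change; REMOTE is the variable name in a stock config, so double-check against your own before relying on it:

# quick-fedora-mirror.conf: pull from an alternate master
REMOTE='rsync://download-ib01.fedoraproject.org'

A one-off plain rsync of a small directory against each host is also a quick way to compare transfer rates before committing to the change.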

Yes, I get dramatically better results with either of the two hosts you listed -- I get 200+ Mbps transfer speeds instead of 10-12 Mbps with tier1. I'm hoping that we'll be able to get mirrors.kernel.org caught up today.

I confirm that download-ib01 and download-cc-rdu01 are very fast (the transatlantic network path is geant.net --> internet2.edu in my case), and dl04/05 are still slow.

Yesterday there were a few routing problems (mainly looping and some crazy paths needlessly bouncing through Europe) but after everything settled down I've seen much better transfer rates to dl-iad04 and dl-iad05. Not nearly as good as the rates were before the data center move but still at least five times what it was a couple of days ago.

This afternoon the new daily rawhide content, approximately 50 GB including fedora-secondary, was synced in only 2 hours, at a rate of about 60 Mbit/s. I still see the same traceroute.

Now that dl01/02/03 are re-enabled, I observe that the bandwidth is back to its previous low value, around 7 Mbit/s: 12 GB synced in 4 hours, started at 7AM CET, still running.

Thankfully, my mirror is in sync with F33.

I also see much lower transfer rates again for my mirror.

Kevin pointed out that IP traffic can be plotted from our collectd data:

Link to premade collectd graph

I am thinking that we should move dl03 into the tier1 tree to cut down on the congestion, and look at ways to limit rsync bandwidth on dl01/dl02.
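For the rate limiting, one option would be kernel traffic shaping on the hosts themselves, since that works regardless of how rsync is invoked. A rough tc sketch (interface name and rates are placeholders, not our real numbers):

# cap traffic leaving the rsync daemon port in an HTB class on eth0
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:10 htb rate 500mbit ceil 800mbit
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip sport 873 0xffff flowid 1:10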

So, how do things look for everyone now? Still slow?

Manageable for now? or you can't get fully synced?

I don't think moving 03 to tier1 will help much; it's not cpu or load, it's just sheer bandwidth.

If the tier1s can keep up now, we can just wait for more BW to be enabled. If they cannot, then I think we need to disable the non-tier1 rsyncs until we have more BW.

Works for my mirror. Not really faster, but the main problem was the large amount of changes leading up to the F33 release. Now, with just the usual updates coming through, it is not such a big problem. Files are synced in a reasonable time.

For my mirror, it works too. The whole thing is under control, mainly thanks to splitting the sync into separate parts.

I am going to say this is closed as fixed then. I think that for our next release we will look at cutting down the rsync to tier 1 for those 2 weeks and expanding the tier 1.

Metadata Update from @smooge:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

3 years ago

Just for the record, quick-fedora-mirror logs some useful transfer statistics, so (assuming you are logging to the journal) you can get a quick picture with:

journalctl -t quick-fedora-mirror | grep -A2 'stat: downloaded'

This gives both an idea of mirror churn and a good picture of the average bandwidth you're getting. I'm sure someone who knows rrdtool or the like could turn these into graphs in a few minutes. If you can give me some hints, let me know; I've wanted to make a monitoring utility for some time.
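Not a full monitoring utility, but the rrdtool side really is only a few commands; BYTES_DOWNLOADED below is a placeholder for whatever number you parse out of the 'stat: downloaded' lines, since the exact log format is the part you'd have to adapt:

# one-time: an RRD with one hourly gauge, kept for a year
rrdtool create qfm.rrd --step 3600 \
    DS:downloaded:GAUGE:7200:0:U \
    RRA:AVERAGE:0.5:1:8760

# after each run: feed in the parsed byte count
rrdtool update qfm.rrd N:"$BYTES_DOWNLOADED"

# render a simple graph of the last week
rrdtool graph qfm.png --start -1w \
    DEF:dl=qfm.rrd:downloaded:AVERAGE \
    LINE2:dl#0000ff:"bytes per run"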

