#1084 Fix proxy -> app docs
Closed: Fixed. Opened 15 years ago by mmcgrath.

Our proxy/app setup has changed quite a bit from what's documented on our wiki/Infrastructure/Architecture page.

  • We have enabled some caching, and haproxy now does the load balancing (a rough sketch follows this list).
  • We have proxy servers at 4 different sites.
  • We have app servers at multiple sites.
  • A VPN connects all of the app servers and proxy servers.
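For illustration only, here is roughly what the haproxy piece of that might look like. This is a made-up sketch; the frontend/backend names, addresses, and ports are assumptions, not our actual configuration:

{{{
# Hypothetical haproxy sketch: apache on a proxy server hands requests to
# this frontend, which balances them across app servers reached via the VPN.
frontend fp-wiki
    bind 127.0.0.1:10001
    default_backend wiki

backend wiki
    balance roundrobin
    # illustrative VPN addresses, not our real app server hosts
    server app1 192.168.1.11:80 check
    server app2 192.168.1.12:80 check
}}}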

This might be a good overview to make use of:

{{{
21:24 < G_work> sspreitzer: we have 3x db servers, db1 - mysql, db2 - postgres, db3 - postgres (for koji only)
21:24 < sspreitzer> mind is I keep the log of this convo to remind later?
21:24 < sspreitzer> *if
21:24 < G_work> sspreitzer: sure
21:24 < sspreitzer> thanks
21:24 < G_work> it's not really a secret :)
21:25 < G_work> we have 6 proxy servers, 3 at PHX, 1 in Germany, 1 in Denver, and one in
21:25 < G_work> it's in the US though
21:26 < sspreitzer> ok
21:26 < G_work> the proxys have the static web content (like fp.o)
21:27 < G_work> and pass on requests to the webapps via the VPN etc
21:27 < G_work> we have 6 app servers (plus a 7th which is a hotbackup that is used mainly to run scripts on)
21:28 < G_work> 4 are in PHX, 1 in Denver, and the other in Germany
21:28 < G_work> the 7th is also in PHX
21:29 < G_work> all the apps except for FAS, Wiki and Bodhi run on all the app servers, wiki and bodhi run only in PHX
21:29 < G_work> FAS runs on their own dedicated servers, and ditto for koji
}}}

Slight correction: We have only 5 proxy servers, proxy{1,2} in PHX, proxy3 in Denver, proxy4 in Raleigh, proxy5 in Germany.

Bodhi currently runs on app1 and app2 (in PHX), with a masher instance running on releng2.

[[BR]]
Here's what I have so far for the FIG physical inventory:

Phoenix Datacenter:

2 Proxy Servers[[BR]]
4 App Servers (1 Hot Spare)

Denver Datacenter:

1 Proxy Server[[BR]]
1 App Server

Raleigh Datacenter:

1 Proxy Server

Germany Datacenter:

1 Proxy Server[[BR]]
1 App Server

The goal is to create a comprehensive list of:

  1. Hardware[[BR]]
  2. Software[[BR]]
  3. Associated hardware and software, i.e. which apps / agents run on which servers[[BR]]
  4. Logical network diagram

Items needed to continue:

  1. Installed applications on each node [[BR]]
  2. Hostnames[[BR]]
  3. Virtual server Inventory?[[BR]]
    A. Which virtual servers are on which physical hosts[[BR]]
    B. Same info as physical inventory in regards to apps and agents[[BR]]
  4. Any other equipment (vpn hardware, storage, etc...)[[BR]]
  5. What types of servers are they? Dell, HP, IBM, etc..?[[BR]]
  6. Where are the database servers located?[[BR]]
    A. DB{1,2,3}[[BR]]
  7. Are there clustered servers / services that need to be documented?[[BR]]
  8. Are there asterisk servers that need to be documented?[[BR]]
  9. Is there a test environment that needs to be documented?[[BR]]
  10. Whatever else that I'm missing that needs to be documented in regards to FIG inventory.

So I think the biggest initial project should just be service / location overviews. Proxy servers, app servers, where they are, etc.

After that, we should probably document each app at the app level, explaining each bit of that app. For example, smolts.org might have:

Smolts -
Proxy servers
App servers (basic info about rpms, etc)
Database (db server, name)

That sort of thing. If we can, I'd like to focus on stuff that can lead to other things. For example, for smolt we don't need to list the config files, because with the RPM we can just run rpm -qcl smolt-server to get that info. And if we do it at the RPM level, it's always up to date, whereas the docs might not be.
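If it helps, these are the standard rpm queries that cover that (the package name is just the one mentioned above):

{{{
# list only the config files owned by the package
rpm -qc smolt-server

# list every file the package installs
rpm -ql smolt-server

# show package metadata (version, build host, etc.)
rpm -qi smolt-server
}}}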

I've installed OCS and GLPI on publictest10. I installed agents on a few other public test boxes. Need to test FAS integration after the freeze is lifted.
[[BR]]
http://publictest10.fedoraproject.org/ocsreports/
[[BR]]
http://publictest10.fedoraproject.org/glpi/
[[BR]]
username: fedora
[[BR]]
Password: Get with me in IRC, nick: collier_s

adding note that this ticket is tied to ticket 1171

changing target finish date to F12.

Sorry boodle, I think this ticket may have been a bit confusing; this one was less about inventory management and more about architectural documentation.

Any further word on this ticket?
Seems to have dropped off the radar.

Replying to [comment:10 kevin]:

Any further word on this ticket?
Seems to have dropped off the radar.

Hey Kevin, it has been a while, sorry about that. How about I hop on IRC and we can discuss whether or not it's still needed and the best way to proceed? I'll try to get on tonight; if not, I'll definitely be on tomorrow night.

Just poking some old tickets here... are you still able to work on this?

Just looking through the easy fixes and saw this one hadn't been touched in a while but looks like one I could do. What needs to be accomplished at this point?

Just poking through easy-fixes, and wondering if there is something I could do on this ticket?

Sure.

I think it's a bit daunting because new folks who might work on it don't have all the info they need. ;)

However, if you would like to, that's great, and I'm happy to provide info for doing so.

Basically we have:

http://fedoraproject.org/wiki/Infrastructure/Architecture

It's old and out of date. We want an updated set of diagrams showing how things are spread out.

Here's a bit of an info dump:

fedoraproject.org and admin.fedoraproject.org are round-robin DNS entries.

They are populated based on GeoIP information. For example, people in North America get a pool of servers in North America.

Each of those servers in DNS is a proxy server. It accepts connections using apache; apache uses haproxy as a backend, and in turn some (but not all) services use varnish for caching. Requests are answered from cache if varnish has the content cached; otherwise the request is sent on to a backend application server. Many of these application servers are in the main datacenter in phx2; some are at other sites. The application server processes the request and sends the response back.
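As a rough, hypothetical sketch of the apache side of that chain (the paths, port, and ServerName here are illustrative assumptions, not pulled from our configs):

{{{
# Hypothetical apache vhost on a proxy server: static content is served
# locally, while app requests are handed to the local haproxy listener,
# which picks an app server (possibly via varnish) behind the scenes.
<VirtualHost *:80>
    ServerName fedoraproject.org
    DocumentRoot /srv/web/fedoraproject.org

    ProxyPass        /wiki http://127.0.0.1:10001/wiki
    ProxyPassReverse /wiki http://127.0.0.1:10001/wiki
</VirtualHost>
}}}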

For sites/datacenters you should be able to look in puppet for the various datacenter definitions.
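For example (purely hypothetical names; the real node and class names live in the puppet repo):

{{{
# hypothetical puppet sketch of a datacenter definition; the actual
# classes and variables in our repo will look different
node 'proxy3.fedoraproject.org' {
    $datacenter = 'tummy'
    include proxyserver
}
}}}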

Hope that helps some with background.

hello folks!

I have started working on this issue. Please let me know as soon as possible if anyone else is already working on it.

Updated!!

So, we have the following datacenters set up:

phx2 - our main datacenter in Phoenix[[BR]]
2 Proxy Servers[[BR]]
4 App Servers (1 Hot Spare)[[BR]]
these servers reside in the 10.5.126.0/24 network

rdu - a datacenter in North Carolina[[BR]]
download servers

tummy - Colorado, USA[[BR]]
1 Proxy Server[[BR]]
1 App Server

serverbeach - San Antonio, TX, USA[[BR]]
download servers

telia - Germany[[BR]]
1 Proxy Server[[BR]]
1 App Server

osuosl - Oregon, USA[[BR]]
1 Proxy Server[[BR]]
1 App Server

bodhost - UK[[BR]]
1 Proxy Server[[BR]]
1 App Server

ibiblio - North Carolina, USA[[BR]]
1 Proxy Server[[BR]]
1 App Server

internetx - Germany[[BR]]
1 Proxy Server[[BR]]
1 App Server

colocation america - LA, USA[[BR]]
1 Proxy Server[[BR]]
1 App Server

What does a download server do? Do we have database servers as well, and if so, where are they?

All these proxy and app servers are connected via VPN, and load balancing is done using [http://haproxy.1wt.eu/ haproxy].

Please correct the information and provide additional information, if there is any.

Replying to [comment:18 vipin]:

so, we have following datacenters setup

phx2 - our main datacenter in phoenix
2 Proxy Servers[[BR]]

4 App Servers (1 Hot Spare)[[BR]]

these servers reside in one of the 10.5.125.0/24, 10.5.126.0/24, 10.5.127.0/24 or 10.5.124.128/25 networks[[BR]]

Yes, all of them are in the 10.5.126.0/24 network.

rdu - a datacenter in north carolina
1 Proxy Server

No proxy in rdu. We only have download servers there.

tummy - colorado, usa
1 Proxy Server[[BR]]

1 App Server

serverbeach - san antonio, tx, usa[[BR]]

telia - germany[[BR]]

osuosl - oregon, usa[[BR]]

bodhost - UK[[BR]]

ibiblio - north carolina, usa[[BR]]

internetx - germany[[BR]]

colocation america - LA, USA

1 proxy server in telia-Germany or internetx-Germany?[[BR]]

1 app server in telia-Germany or internetx-Germany?[[BR]]

yes. 1 of each at each.

And what about the other datacenters? Don't they have proxy and/or app servers?[[BR]][[BR]]

serverbeach has 0 app or proxy servers. All the rest have 1 app server and 1 proxy.

10.5.125.0/24 - build network (build hosts and release engineering are on this network)[[BR]]

10.5.126.0/24 - main network (most hosts are here)[[BR]]

10.5.127.0/24 - storage network for nfs mounts and storage only.[[BR]]

10.5.124.128/25 - qa / community network (qa machines, secondary arch machines are here)

Please correct the information and provide additional information, if there is any.

See above. ;)

Thanks again for gathering this info.

Replying to [comment:18 vipin]:

Updated!!
...
serverbeach - san antonio, tx, usa
download servers
At serverbeach we have our fedorahosted and collab servers (mailing list processing).

...

What does a download server do? Do we have database servers as well, and if so, where are they?

Download servers are the servers that handle downloads for people wishing to download Fedora, our packages, etc. Yes, we do have database servers; they are all in phx2.

All these proxy and app servers are connected via VPN, and load balancing is done using [http://haproxy.1wt.eu/ haproxy].

Well, load balancing is done via:

a) DNS. There's per-region DNS that points people in a given region to the proxy servers that are 'nearby' them.
So, for example, if you are in Europe, you would get a proxy server list containing the ones in Germany and the UK.

b) After someone gets a proxy server from DNS, the request goes to that proxy and hits apache/haproxy. haproxy there load balances requests over the app servers, some of which it reaches over the VPN. In some cases there is also varnish between haproxy and the app servers to help cache information.
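For the varnish piece, here is a minimal varnish 3-style VCL sketch of the "cache between haproxy and an app server" idea; the backend address and TTL are assumptions for illustration only:

{{{
# Hypothetical varnish sketch: haproxy sends requests here, and cached
# responses are served without touching the app server behind it.
backend app {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_fetch {
    # keep responses briefly so repeated requests are answered from cache
    set beresp.ttl = 5m;
    return (deliver);
}
}}}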

Please correct the information and provide additional information, if there is any.

See above. Note that mostly we want an updated picture of our proxy/app services here, so I wouldn't worry about sites with no proxy/app servers or get into too many database details.
