@ngompa and I are in the process of setting up some infra on AWS to produce compose-like artifacts for Hyperscale, and we'd like these to be accessible from https://hyperscale.sig.centos.org . Filing this ticket to get the process started, will fill in the actual IPs later once everything is agreed upon. Meanwhile, here's the template:
This template outlines the conditions for a SIG managing infra itself, on a centos.org subdomain, for which the CentOS team is not directly responsible.
These can also be found at https://sigs.centos.org/guide/
@dcavalca and @ngompa will be the PoCs for this
Agreed
We should be able to enforce this via a security group, which will block incoming traffic from subnets we can't serve to. Do you happen to have a list already we can consume?
<sig_name>.sig.centos.org will be a CNAME to <sig_name>.unmanaged-by.centos.org
How does this work in practice? Should I get my own SSL certificate for hyperscale.sig.centos.org, or should I use a preexisting certificate supplied by infra? I can issue certs pretty easily via ACM, but it feels iffy to get a certificate for a domain I don't actually own (and we'd still need to sort out how to do validation in that case).
By creating this ticket you are agreeing to the terms laid out above
Please provide us with the full domain name required and the A/AAAA record you wish to use. For example:
Domain Name: hyperscale.sig.centos.org
ip4: tbd
ip6: tbd
Point of Contact: @dcavalca, @ngompa
> No content should be served to any T5 country
>
> We should be able to enforce this via a security group, which will block incoming traffic from subnets we can't serve to. Do you happen to have a list already we can consume?

Turns out this is easier: we can use WAF to apply a geographic restriction, which will take care of it automatically.
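For reference, here's roughly what that looks like via boto3 (a sketch only; the ACL name is made up, and the country codes are example ISO codes rather than the actual T5 list):

```python
import boto3

# WebACLs with Scope=CLOUDFRONT must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="hyperscale-geo-block",  # hypothetical name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},  # allow anything the rules below don't block
    Rules=[{
        "Name": "block-restricted-countries",
        "Priority": 0,
        # Example ISO 3166 codes only; substitute the real restricted list.
        "Statement": {"GeoMatchStatement": {"CountryCodes": ["CU", "KP"]}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "geo-block",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "hyperscale-geo-block",
    },
)
```

The ARN of the resulting WebACL then gets set as the distribution's WebACLId, so CloudFront enforces the block at the edge.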
Also one more clarification:

> TLS v1.2

there's more than one: do we want TLSv1.2_2018, TLSv1.2_2019 or TLSv1.2_2021?
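(For context: this setting is the CloudFront "security policy", i.e. ViewerCertificate.MinimumProtocolVersion on the distribution. A minimal sketch of flipping it, with a placeholder distribution ID:)

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Fetch the current config plus the ETag needed for the conditional update.
resp = cloudfront.get_distribution_config(Id="DISTRIBUTION_ID")  # placeholder ID
config, etag = resp["DistributionConfig"], resp["ETag"]

# Only meaningful once a custom (e.g. ACM) certificate is attached; with the
# default *.cloudfront.net certificate this field is fixed by AWS.
config["ViewerCertificate"]["MinimumProtocolVersion"] = "TLSv1.2_2021"

cloudfront.update_distribution(
    Id="DISTRIBUTION_ID",
    DistributionConfig=config,
    IfMatch=etag,  # optimistic-locking token from get_distribution_config
)
```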
Metadata Update from @arrfab: - Issue assigned to arrfab
Metadata Update from @arrfab: - Issue tagged with: feature-request, low-gain, low-trouble
I should have a look at this myself, but the higher the better :) Just communicate the IPv4/IPv6 addresses you'd like to use for the hyperscale.sig.centos.org record and we can proceed whenever you want. Just curious (more than anything else): is that to build and distribute something that can't be built through koji and distributed to mirrors?
Metadata Update from @arrfab: - Issue priority set to: Waiting on Reporter (was: Needs Review)
Yeah, the tl;dr is that @ngompa is building a tool to create compose-like artifacts of SIG content, which can be combined with an existing compose of the base CentOS system and with EPEL into a sort of unified artifact that one can then reference (and, say, qualify against, or deploy, or whatever). Right now we're prototyping this on AWS in a Meta-sponsored account. Down the road, if it proves useful, we'd like to extend it to other SIGs if there's interest, and potentially look at whether it'd make sense to host it on CentOS infra and/or make it more official.
I've spun up CloudFront on https://d28cd39727j177.cloudfront.net/ (which isn't serving anything useful at the moment, just a stub). I'm trying to find out if there's a way to get static IPs assigned to it; if not, we might have to use a CNAME for this.
CNAME is the way to go for CloudFront:

```
$ dig +short +noshort mirror.stream.centos.org
mirror.stream.centos.org. 515 IN CNAME d10wicvahrqhw.cloudfront.net.
```
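That *.cloudfront.net name is simply whatever AWS assigned to the distribution; if it helps, it can be read back from the API rather than copy/pasted (a sketch, with a placeholder distribution ID):

```python
import boto3

cloudfront = boto3.client("cloudfront")

# The distribution's auto-assigned domain name is the CNAME target.
dist = cloudfront.get_distribution(Id="DISTRIBUTION_ID")["Distribution"]
print(dist["DomainName"])  # e.g. d28cd39727j177.cloudfront.net
```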
Do you know when you'd like that DNS record to be pushed? Marking the ticket as "blocked" for now (so that we can ignore it until it's updated with info).
Metadata Update from @arrfab: - Issue tagged with: blocked
@arrfab we need to figure out what to do about the SSL certificate first. I can get one from ACM, but that requires going through domain validation (https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html). Alternatively, if you already have a certificate (or a way to make one), I can deploy that instead, I just need the key and cert. What do you prefer?
For our own CloudFront setup, I always used ACM, as it's managed automatically by AWS (so nothing to upload/modify/update like a Let's Encrypt cert). If you want, we can still do that, but ACM is a better choice (imho); otherwise we'd have to reach out on a regular basis so that you'd then update CloudFront with the new key/cert. As DNS validation happens only once (for ACM), feel free to use that and reach out for the one-time modification, and we'll be able to proceed.
Sounds good, let's do ACM then. I'll set things up on my end and ping you when validation is needed.
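For reference, this is roughly what I'll be running on my side (a sketch; the region is pinned to us-east-1 because CloudFront only accepts ACM certificates from that region):

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")

# Request a certificate validated via DNS rather than email.
resp = acm.request_certificate(
    DomainName="hyperscale.sig.centos.org",
    ValidationMethod="DNS",
)
arn = resp["CertificateArn"]

# Once ACM has generated it, the CNAME record the DNS owner must publish
# shows up under DomainValidationOptions.
cert = acm.describe_certificate(CertificateArn=arn)["Certificate"]
for opt in cert["DomainValidationOptions"]:
    rr = opt.get("ResourceRecord")
    if rr:
        print(rr["Name"], rr["Type"], rr["Value"])
```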
Ok looks like we need _7691e165ee66915de11eb38bdbf26d07.hyperscale.sig.centos.org. to be a CNAME to _b69e35366c2f6183c9e7fa73909cd0a6.djqtsrsxkq.acm-validations.aws. for validation to pass.
Pushed:

```
$ dig +short +noshort _7691e165ee66915de11eb38bdbf26d07.hyperscale.sig.centos.org
_7691e165ee66915de11eb38bdbf26d07.hyperscale.sig.centos.org. 60 IN CNAME _b69e35366c2f6183c9e7fa73909cd0a6.djqtsrsxkq.acm-validations.aws.
```
@dcavalca: looking at open tickets and just waiting on your feedback about the IP address/CNAME to put in (if that's for CloudFront). Let us know if you plan on doing this soon, or if we can just close this ticket and you'll reopen it once you're ready on your side? What do you think?
I have the CloudFront stuff set up already, but there's only a placeholder there for now, as this is pending @ngompa finalizing the deployment. If we don't mind having a placeholder up, we can put in the CNAME now; it should point to d28cd39727j177.cloudfront.net. Thanks!
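The remaining step on my side is wiring the validated cert and the public hostname onto the distribution; roughly this (a sketch; DISTRIBUTION_ID and CERT_ARN are placeholders):

```python
import boto3

cloudfront = boto3.client("cloudfront")

resp = cloudfront.get_distribution_config(Id="DISTRIBUTION_ID")  # placeholder
config, etag = resp["DistributionConfig"], resp["ETag"]

# Serve under the public hostname and present the validated ACM certificate.
config["Aliases"] = {"Quantity": 1, "Items": ["hyperscale.sig.centos.org"]}
config["ViewerCertificate"] = {
    "ACMCertificateArn": "CERT_ARN",  # placeholder; the us-east-1 ACM cert
    "SSLSupportMethod": "sni-only",
    "MinimumProtocolVersion": "TLSv1.2_2021",
}

cloudfront.update_distribution(
    Id="DISTRIBUTION_ID",
    DistributionConfig=config,
    IfMatch=etag,
)
```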
Done:

```
$ dig +short +noshort @ns1.centos.org hyperscale.sig.centos.org
hyperscale.sig.centos.org. 600 IN CNAME hyperscale.sig.unmanaged-by.centos.org.
hyperscale.sig.unmanaged-by.centos.org. 600 IN CNAME d28cd39727j177.cloudfront.net.
```
Metadata Update from @arrfab: - Issue close_status updated to: Fixed with Explanation - Issue status updated to: Closed (was: Open)