#10303 AWS: please create a new VPC for fedora-ci/testing-farm
Closed: Fixed 2 years ago by mobrien. Opened 2 years ago by mvadkert.

Similarly to https://pagure.io/fedora-infrastructure/issue/9466

My team is now working on automating our deployments, streamlining development, and so on, and we will start hitting the 254 IP limit on the VPC we currently use:

fedora-ci-eks-stg - vpc-0618f77c2f99c9956

Could we ask for a new VPC, ideally again in us-east-2 ...

Thanks
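
For context, a minimal boto3 sketch of how one could confirm the CIDR block behind that limit; the region and VPC ID come from the ticket, while the arithmetic in the comments is just the usual /24 reasoning, not something verified against the account.

```python
# Minimal sketch, assuming credentials with ec2:DescribeVpcs in us-east-2.
import ipaddress

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

resp = ec2.describe_vpcs(VpcIds=["vpc-0618f77c2f99c9956"])
cidr = resp["Vpcs"][0]["CidrBlock"]

# A /24 block holds 256 addresses, i.e. at most 254 usable hosts in classic
# terms (AWS additionally reserves 5 addresses per subnet).
print(cidr, ipaddress.ip_network(cidr).num_addresses)
```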


Metadata Update from @kevin:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: low-gain, low-trouble, ops

2 years ago

Metadata Update from @mohanboddu:
- Issue priority set to: Waiting on External (was: Waiting on Assignee)

2 years ago

Metadata Update from @mohanboddu:
- Issue assigned to mobrien
- Issue priority set to: Waiting on Assignee (was: Waiting on External)

2 years ago

I have created the new VPC in us-east-2

fedora-ci-eks-stg-02: vpc-0709d5affa675f857

subnets:
subnet-05d489836b767cfb6
subnet-09239976b98871543

The route table is left at the default, which allows all VPC-internal traffic.

There were no internet/NAT gateways set up on the VPC currently in use, so I didn't set them up here either. Let me know if you need them.
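
A rough boto3 sketch of that setup, assuming the work was done via the API; the CIDR blocks below are placeholders rather than the values actually used, and no availability zone is pinned for the subnets.

```python
# Rough sketch; CIDR blocks are placeholders, not the real allocation.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

vpc = ec2.create_vpc(CidrBlock="10.1.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id],
                Tags=[{"Key": "Name", "Value": "fedora-ci-eks-stg-02"}])

# Two subnets; the main route table created with the VPC already carries
# the "local" route, so intra-VPC traffic works with no further routes.
for cidr in ("10.1.0.0/24", "10.1.1.0/24"):
    ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr)

# No internet gateway or NAT gateway is attached, matching the old VPC.
```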

Metadata Update from @mobrien:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

2 years ago

@mobrien sorry this took so long to test. The subnets must be in different AZs; sorry for not mentioning it right away, this is an EKS requirement.

Metadata Update from @mvadkert:
- Issue status updated to: Open (was: Closed)

2 years ago

@mvadkert sorry, I should have done that by default for high availability anyway.

So subnet-05d489836b767cfb6 (az2) is still there, and the other subnet has been replaced with subnet-06ed190439a0e2037 (az3).

Hopefully that should all be OK for you now.
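
A small verification sketch, assuming ec2:DescribeSubnets access, showing how one could confirm that the two subnets now span different availability zones (the EKS requirement mentioned above).

```python
# Verify the two subnets land in distinct availability zones.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

resp = ec2.describe_subnets(
    SubnetIds=["subnet-05d489836b767cfb6", "subnet-06ed190439a0e2037"]
)
for subnet in resp["Subnets"]:
    print(subnet["SubnetId"], subnet["AvailabilityZone"])

azs = {s["AvailabilityZone"] for s in resp["Subnets"]}
assert len(azs) == 2, "EKS needs subnets in at least two different AZs"
```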

Metadata Update from @mobrien:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

2 years ago
