#8898 AWS: Add internet gateway to our vpc-0896aedab4753e76f
Closed: Fixed 2 months ago by kevin. Opened 2 months ago by mvadkert.

After a detailed look, we found out that our VPC has no internet access; this can cause networking issues between the cluster and the workers.

This should be done here; we do not have the permissions:
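For reference, the missing piece would be something like this (the gateway ID is a placeholder; the VPC ID is the one from the ticket title):

```shell
# Create an internet gateway and attach it to our VPC.
# igw-EXAMPLE stands in for the ID returned by create-internet-gateway.
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway \
    --internet-gateway-id igw-EXAMPLE \
    --vpc-id vpc-0896aedab4753e76f
```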

After that, hopefully we will be able to apply the required settings:


$ aws ec2 modify-subnet-attribute --map-public-ip-on-launch --subnet-id subnet-0b84fdcd88b5803c2
$ aws ec2 modify-subnet-attribute --map-public-ip-on-launch --subnet-id subnet-03089904253762f32

These commands do not work for us at the moment.
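A read-only check like the one below should work even without modify permissions, and would confirm whether the attribute got set (sketch, assuming the two subnet IDs from above):

```shell
# Show the MapPublicIpOnLaunch flag for both worker subnets.
aws ec2 describe-subnets \
    --subnet-ids subnet-0b84fdcd88b5803c2 subnet-03089904253762f32 \
    --query 'Subnets[].[SubnetId,MapPublicIpOnLaunch]' \
    --output table
```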

@kevin another day, another try :( still not in a working state


Metadata Update from @smooge:
- Issue assigned to kevin
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: aws, medium-gain, medium-trouble

2 months ago

Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

2 months ago

@kevin thanks, can you also run those 2 commands, please?

$ aws ec2 modify-subnet-attribute --map-public-ip-on-launch --subnet-id subnet-03089904253762f32
$ aws ec2 modify-subnet-attribute --map-public-ip-on-launch --subnet-id subnet-0b84fdcd88b5803c2

I am not able to do that:

An error occurred (UnauthorizedOperation) when calling the ModifySubnetAttribute operation: You are not authorized to perform this operation.

Metadata Update from @mvadkert:
- Issue status updated to: Open (was: Closed)

2 months ago

Done. Try now?

I'll leave open for results.

@kevin thanks, machines now get public IPs, but I am not able to connect to them. I believe what's missing now is a route table with a gateway route, similarly to how taiga and docs have it?


These 2 worker nodes should then be pingable if added correctly:

$ ping
$ ping


When an internet gateway is created, it needs to be referenced in a route table, normally as the target of the default (0.0.0.0/0) CIDR block route.

This route table then needs to be associated with a subnet, making it a public subnet.

For the nodes to be accessible, they will then need to be in this public subnet.
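The steps above would look roughly like this (route table and gateway IDs are placeholders; the subnet IDs are the ones from this ticket):

```shell
# 1. Add a default route through the internet gateway.
aws ec2 create-route \
    --route-table-id rtb-EXAMPLE \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-EXAMPLE

# 2. Associate the route table with the subnets, making them public.
aws ec2 associate-route-table --route-table-id rtb-EXAMPLE --subnet-id subnet-0b84fdcd88b5803c2
aws ec2 associate-route-table --route-table-id rtb-EXAMPLE --subnet-id subnet-03089904253762f32
```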

I just added the route there... so it all looks in place, but I still can't ping those IPs. Not sure if I missed something, or if they need to be rebooted/restarted?

Are the nodes with those IPs in the subnet that is associated with the route table?

Also, are the security groups opened up to allow ping, or are there any ACL rules blocking it? I've had similar issues before.
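If the security group is the culprit, allowing ICMP would be something like this (the security group ID is a placeholder for whatever group the workers use):

```shell
# Allow ICMP (ping) from anywhere; --port -1 means all ICMP types/codes.
aws ec2 authorize-security-group-ingress \
    --group-id sg-EXAMPLE \
    --protocol icmp \
    --port -1 \
    --cidr 0.0.0.0/0
```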

@kevin thanks!

It seems the last thing that confused me was AWS complaining that the nodes did not connect ...

But kubectl actually now shows them \o/

$ kubectl get nodes
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-123-0-222.ec2.internal   Ready    <none>   29m   v1.15.10-eks-bac369
ip-10-123-0-60.ec2.internal    Ready    <none>   29m   v1.15.10-eks-bac369

And I just deployed Ansible AWX \o/ via helm

$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
awx-1588889288-memcached-0             1/1     Running   0          105s
awx-1588889288-memcached-1             1/1     Running   0          86s
awx-1588889288-memcached-2             0/1     Pending   0          63s
awx-1588889288-postgresql-0            1/1     Running   0          105s
awx-1588889288-rabbitmq-0              1/1     Running   0          105s
awx-1588889288-task-7687b6bdc8-c6l75   1/1     Running   0          105s
awx-1588889288-task-7687b6bdc8-qt6wq   1/1     Running   0          105s
awx-1588889288-web-6c44fc4f4-67bmh     0/1     Running   1          105s


@mobrien @kevin thanks for all the pointers and help, it seems I am done here.

I will open new issues if I encounter any.

Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

2 months ago
