#9035 redo maintainer-test instances with large / partition
Closed: Fixed 3 years ago by mobrien. Opened 3 years ago by kevin.

We run some maintainer-test instances in the us-west-1 AWS region.

These are for current Fedora/EL releases and allow maintainers to test things.

Currently we have:
f30-test.fedorainfracloud.org (should be retired)
f31-test.fedorainfracloud.org
f32-test.fedorainfracloud.org
rawhide-test.fedorainfracloud.org
el6-test.fedorainfracloud.org
el7-test.fedorainfracloud.org

There's an Ansible playbook to set them up.

Currently they have:
nvme0n1 259:2 0 6G 0 disk
└─nvme0n1p1 259:3 0 6G 0 part /
nvme1n1 259:0 0 46.6G 0 disk
└─nvme1n1p1 259:1 0 46.6G 0 part /var/lib/mock

But people just fill up /, so it might be good to find a flavor that has just a single 50-100GB disk.
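For reference, one way to survey which flavors ship a larger single attached disk is the AWS CLI's describe-instance-types call. This is only a sketch; the region and filter are assumptions, and the output columns shown are just the ones relevant here:

aws ec2 describe-instance-types --region us-west-2 \
    --filters Name=instance-storage-supported,Values=true \
    --query 'InstanceTypes[].[InstanceType,InstanceStorageInfo.TotalSizeInGB,MemoryInfo.SizeInMiB]' \
    --output table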


I can't find where this is set in Ansible. There are a couple of options we could take here:

1) We could relaunch fresh instances with EBS volumes instead of the attached instance storage; these could be resized easily in future (see the sketch below). EBS can be slightly slower (but is still very quick), so it depends on the expected performance of the machine.

2) We could use the m5d.large instance type, which has a 75GB attached SSD compared to the current 50GB and also doubles the memory to 8GB. However, every time we wish to grow the storage we would need a new instance and the data would have to be migrated manually.

Both options have their merits, so we can go with whichever you prefer.
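To make option 1 concrete, growing an EBS-backed root in place looks roughly like the following. This is only a sketch; the volume id, device name and filesystem are placeholders, not values taken from these hosts:

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100
# then, on the instance itself, grow the partition and filesystem:
sudo growpart /dev/nvme0n1 1
sudo xfs_growfs /        # or: sudo resize2fs /dev/nvme0n1p1 for ext4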

These are in us-west-2, btw.

We don't spin up/down/define any of these in AWS currently; we manually create them and then run Ansible against them. ;(

  1. Sure, that would be fine.

  2. That would also be fine; no data needs migrating. These are test instances and we reserve the right to wipe them at any time.

Possibly 2 is easier?

Metadata Update from @kevin:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: groomed, low-trouble, medium-gain

3 years ago

Metadata Update from @mobrien:
- Issue assigned to mobrien

3 years ago

I ended up going with option 1. These are created from EBS-backed AMIs, so they require an EBS volume as the root device anyway; I thought it best to just have a single 100GB volume rather than two medium-sized volumes.
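For the record, launching an instance with a single enlarged EBS root volume looks roughly like this with the AWS CLI. The AMI id, instance type, key name and root device name below are placeholders/assumptions, not the exact values used for these hosts:

aws ec2 run-instances \
    --region us-west-2 \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --key-name fedora-infra-placeholder \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100,"DeleteOnTermination":true}}]'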

I didn't have access to the AMIs for rawhide or f32 for some reason, so I couldn't do those ones.

I created new instances for

el6-test.fedorainfracloud.org: 54.244.110.185
el7-test.fedorainfracloud.org: 52.38.92.7
f31-test.fedorainfracloud.org: 34.220.67.238

I don't have permissions to update the DNS as far as I know, so I left the old instances there until that's done. I can delete them once the DNS is updated.

Also, I didn't create a new f30 one since it needs to be retired anyway. A note on that one: if you delete it, you will also need to release the Elastic IP attached to it. It was attached manually to this instance only, for some reason.
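Cleanup for that one would be roughly the following two CLI calls; the instance and allocation ids are placeholders:

aws ec2 terminate-instances --region us-west-2 --instance-ids i-0123456789abcdef0
aws ec2 release-address --region us-west-2 --allocation-id eipalloc-0123456789abcdef0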

Mark, I just pushed out a DNS update for el6/el7/f31.

Thanks nb

I have updated the Ansible playbook to cater for el7 now having only one disk, but I don't have the permissions to run the playbook.

OK. I added you to sysadmin-noc and allowed that group to run the playbook. You will need to use rbac-playbook, like:

sudo rbac-playbook groups/maintainer-test.yml

The hosts I created are using the same AMIs as the ones they replaced, but Ansible is failing to find a Python interpreter. Is there a setup playbook that needs to be run in advance that would set a Python path?

[WARNING]: Unhandled error in Python interpreter discovery for host el6-test.fedorainfracloud.org: unexpected output from Python interpreter discovery
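One way around that warning is to pin Ansible's ansible_python_interpreter for the affected hosts instead of relying on interpreter discovery. A minimal sketch, assuming the playbook can be run with ansible-playbook directly and that /usr/bin/python exists on the EL6 host (both assumptions, not verified here):

ansible-playbook groups/maintainer-test.yml \
    -l el6-test.fedorainfracloud.org \
    -e ansible_python_interpreter=/usr/bin/python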

These have all been updated and should now be running with a single large disk, with the exception of f30, which is end of life.

Metadata Update from @mobrien:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

3 years ago

I thought I nuked the f30 instance... but in case I didn't, can you make sure it's terminated?

I have nuked that instance now.
