#10203 Increased storage on Datanommer required
Closed: Fixed 2 years ago by kevin. Opened 2 years ago by scoady.

Describe what you would like us to do:


The datanommer/datagrepper initiative has discovered that datanommer, once upgraded to the new version and the migration is complete, will require roughly double the storage currently available on the datanommer server.

We are requesting the storage be doubled on that server.

When do you need this to be done by? (2021/09/14)


We are planning to go to production next week and it would probably be easier for the team but also for the admins if the space is increased before we start migrating data.

cc @abompard @amoloney


There is currently 2TB allocated to that server; I'm not sure we would have the capacity to give it 4TB.

Metadata Update from @mohanboddu:
- Issue priority set to: Waiting on Assignee (was: Needs Review)
- Issue tagged with: medium-gain, medium-trouble, ops

2 years ago

We currently have no machines that would have 4TB free space on them. ;(

So, you mean the db itself would double? or ?

Right now on the host:

/dev/mapper/GuestVolGroup00-root 2.0T 990G 961G 51% /

xz compressed backup is in /backups:
53G /backups/

But... you need a new host, right? This existing host needs to stay in operation for the migration.

We may be able to scrape up 3TB on an older machine. vmhost-x86-08/09 need to be reinstalled with rhel8, but they each have 3TB space available on them. If only the db is growing that should work for now... ?

Thoughts?

Honestly if 3 to 4 TB is needed for the DB, then something needs to be done about what data is being kept and why it is using so much space. [Because if it is going to double now it will grow beyond 4TB in a year or so].

The 3 to 4TB was because the data needs to be duplicated: one copy of the DB for
the current version and one copy for the new version.
Once the migration is done, we'll be back to the current size.

One thing we could check is: the current disk size is 2TB, but how much of that
is used by PostgreSQL? Because that's the part that needs to be doubled (so if
PostgreSQL only uses 1TB then having 2TB/2.5TB should be enough).
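One way to check that is to measure the PostgreSQL data directory itself, since only that part gets duplicated. A minimal sketch, assuming the default RHEL PGDATA path (adjust for the actual host layout):

```shell
#!/bin/sh
# Sketch: measure the on-disk size of the PostgreSQL cluster rather
# than the whole filesystem. The PGDATA path is an assumption.
PGDATA=${PGDATA:-/var/lib/pgsql/data}
if [ -d "$PGDATA" ]; then
    du -sh "$PGDATA"      # on-disk size of the cluster
else
    echo "no PostgreSQL data directory at $PGDATA"
fi
```

From inside the server, `SELECT pg_size_pretty(pg_database_size('datanommer'));` gives a per-database answer (database name assumed here).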

The current disk usage is ~1TB, so we could grow to 2.5-3TB for the duration of the migration and then reduce it afterwards?
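A back-of-envelope check using the numbers from the df output quoted above (a sketch, not a sizing guarantee):

```shell
#!/bin/sh
# Peak usage during a side-by-side migration: both copies of the
# data exist at once, so expect roughly double the current usage.
used_gb=990                  # "990G" used, from the df output above
peak_gb=$(( used_gb * 2 ))
echo "expected peak during migration: ~${peak_gb}G"
```

That lands just under 2TB of data at peak, which is why 2.5-3TB leaves working headroom.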

When we did this for the ARC team we used a 3TB drive and had no issues with space

Sorry, I should have been more explicit in my original issue. Yes, that's correct: it's just for the DB migration; essentially, while we migrate there is going to be double what is currently there.

I think growing to double while the migration is happening and then deleting all the old data before we shrink will work for sure.

Wrt the freeze, how soon do we think we could do that? We were discussing it this morning and were wondering what impact it would have on other applications (if any). Essentially, how soon are you happy with us to do this? :)

We should have enough space there to grow another little bit. Changing the disk size would cause a small outage to the server though so would need a freeze break request.

so just to note... that machine is using XFS. You cannot shrink xfs ever. :)
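Given that, any resize on this host is grow-only. A dry-run sketch of what that would look like (LV path inferred from the df output above; the commands are only echoed here, not executed):

```shell
#!/bin/sh
# Grow-only resize sketch: extend the LV, then grow XFS online.
# XFS can never be shrunk, so size the extension conservatively.
LV=/dev/GuestVolGroup00/root
for cmd in "lvextend -L +1T $LV" "xfs_growfs /"; do
    echo "would run: $cmd"   # dry run; run for real as root on the host
done
```

Shrinking afterwards would mean rebuilding the filesystem (or a new host), which is why the "grow then shrink" plan above needs care on XFS.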

db-datanommer02 is up and running. Thanks @mobrien !

Metadata Update from @kevin:
- Issue close_status updated to: Fixed
- Issue status updated to: Closed (was: Open)

2 years ago
