List tables quadruple in size when upgrading LabKey from v17 to v19.1

LabKey Support Forum
List tables quadruple in size when upgrading LabKey from v17 to v19.1 qing chang  2020-10-06 07:19
Status: Closed
 
We have recently upgraded one of our LabKey servers from v17 to v20.7. We had to upgrade to v19.1 first before going to 20.7. We use PostgreSQL 9.6 as the backend.

We noticed a big jump in storage usage after upgrading to 19.1. A closer look revealed that ALL list-type tables quadrupled in storage usage without any change in contents. One table was 10 GB before the upgrade and became 40 GB after:
-------
 list.c185d2069_modc_test | 10 GB   (before upgrade)
 list.c185d2069_modc_test | 40 GB   (after upgrade)
-------
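For reference, the sizes above can be read from PostgreSQL with something like the query below; pg_total_relation_size includes indexes and TOAST data, so it reflects the full on-disk footprint of the table.
-----
-- Total on-disk size of the list table (heap + indexes + TOAST),
-- plus the size of the heap alone. Substitute your own schema.table.
SELECT pg_size_pretty(pg_total_relation_size('list.c185d2069_modc_test')) AS total_size,
       pg_size_pretty(pg_relation_size('list.c185d2069_modc_test'))       AS heap_only;
-----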

The row count is 50 million before and after the upgrade. The first 10 rows are identical, and I have no reason to believe the other rows differ.

Can someone shed some light on this?

Thanks in advance.

Qing Chang
 
 
Jon (LabKey DevOps) responded:  2020-10-27 23:00
Hello Qing,

Is it possible for you to send us your labkey.log file? It should have recorded full log activity from when the server was upgraded from v17 to v19.1.

The list module hasn't had any SQL-related updates since 2014, so if the table has increased in size, some other SQL must have caused it. The labkey.log file should be able to confirm what ran.

Also, what specific version of LabKey 17 were you originally on? 17.1? 17.2? 17.3?

Regards,

Jon
 
qing chang responded:  2020-10-28 07:15
Status: Active
Hi Jon,

Thanks for responding. My apologies; the upgrade was in fact from 16.3 to 19.1. Are there known issues with going from 16.3 to 19.1?

For some reason the labkey logs did not rotate properly, so I don't have the logs covering the change from 16.3 to 19.1. If they are vital to troubleshooting, I'll try to make a clone and recreate the issue.

There are other log files, listed below. Are any of them useful?
-----
labkeyMemory.log
labkey-audit.log
labkey-action-stats.tsv.3
labkey-query-stats.tsv.3
labkey-action-stats.tsv.2
labkey-query-stats.tsv.2
labkey-errors.log.3
labkey-action-stats.tsv.1
labkey-query-stats.tsv.1
labkey-errors.log.2
labkey-errors.log.1
labkey-action-stats.tsv
labkey-query-stats.tsv
labkey.log.1
labkey-errors.log
labkey.log
-----

Regards,
Qing
 
Jon (LabKey DevOps) responded:  2020-10-29 10:24
Status: Closed
Hi Qing,

Ideally, we would want the labkey.log and the rotated labkey.log files (such as labkey.log.1) that contain the entries corresponding with the upgrade that took place.

That said, did you by chance do any kind of database maintenance? Specifically running a VACUUM on your PostgreSQL database to have it reclaim space?

https://www.postgresql.org/docs/current/sql-vacuum.html

Oftentimes, much of that table bloat is due to dead rows that aren't fully removed until a VACUUM is run. Although some people enable the AUTOVACUUM feature, it doesn't always get turned on properly.
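To check whether autovacuum has actually been running against that table, a query along these lines against pg_stat_user_tables should show the last vacuum times and the number of dead rows (table name taken from your first post):
-----
-- When the table was last vacuumed (manually or by autovacuum) and
-- how many dead rows are currently waiting to be reclaimed.
SELECT relname, last_vacuum, last_autovacuum, n_live_tup, n_dead_tup
FROM   pg_stat_user_tables
WHERE  schemaname = 'list' AND relname = 'c185d2069_modc_test';
-----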

If you haven't run a VACUUM on your database, give that a try and then recheck your table size. Keep in mind that a VACUUM, and especially a VACUUM FULL, can lock tables or the whole database, so it's recommended to do this during off-hours when the server isn't in use.
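As a rough sketch (adjust the table name as needed), the sequence might look like the following. Note that a plain VACUUM normally only marks dead rows as reusable, while VACUUM FULL rewrites the table and actually returns the space to the operating system.
-----
-- Plain VACUUM: reclaims dead rows for reuse, does not usually shrink the file.
VACUUM (VERBOSE, ANALYZE) list.c185d2069_modc_test;

-- If the size doesn't drop, the heavier option rewrites the table and returns
-- space to the OS, but takes an exclusive lock on the table while it runs.
VACUUM FULL VERBOSE list.c185d2069_modc_test;

-- Recheck the size afterwards.
SELECT pg_size_pretty(pg_total_relation_size('list.c185d2069_modc_test'));
-----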

Regards,

Jon
 
qing chang responded:  2020-11-02 13:26
Hi Jon,

Running a VACUUM FULL recovered the space.

Thanks,
Qing