Why is Postgres unable to archive write-ahead logs fast enough?

Issue

Heroku Postgres is unable to archive write-ahead logs fast enough, resulting in a risk of database shutdown and data loss.

Resolution

The connection limit for the default credential, and for all of your user-created credentials, has been temporarily reduced to prevent WAL storage from filling up and causing your database to crash.

Heroku Postgres will automatically remove this connection limitation once WAL archiving is able to keep up with database writes.

Consider taking the following actions.

Reduce database load

If you have recently kicked off a process that produces a large volume of writes, such as an ETL job, consider stopping it and re-running it in smaller batches spread over a longer window of time. Also be aware of database migrations that update a very large table or tables (such as adding a new column with a NOT NULL constraint), as these can generate a similar burst of write volume.
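One way to spread a bulk update over a longer window is to work in small batches driven from application code rather than a single large statement. A minimal sketch, using a hypothetical events table and processed column (not from this article):

```sql
-- Hypothetical example: instead of one giant UPDATE that rewrites
-- every row at once, touch a bounded number of rows per statement.
UPDATE events
SET processed = true
WHERE id IN (
  SELECT id
  FROM events
  WHERE processed = false
  LIMIT 10000
);
```

Run the statement repeatedly from application code, pausing between batches, until it affects zero rows; the pauses give WAL archiving time to catch up with the write volume.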

Ensure writes are not redundant

Unnecessary writes, where a column is updated to the value it already holds, still generate WAL and can be a cause of excess write volume. Ensure that you are writing only a true delta of changes.
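One way to skip no-op writes at the database level is to filter on the value actually changing. A sketch, using a hypothetical accounts table and status column:

```sql
-- Hypothetical example: only rows whose value actually differs are
-- rewritten; unchanged rows generate no WAL for this statement.
-- IS DISTINCT FROM also treats NULL values sensibly, unlike <>.
UPDATE accounts
SET status = 'active'
WHERE status IS DISTINCT FROM 'active';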

Disable triggers

If the tables that you are writing to have triggers that themselves produce writes, you may be suffering from write amplification due to the triggers. Consider disabling the triggers until your write process is complete.
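Postgres lets the table owner disable and re-enable user-defined triggers around a bulk write. A sketch, using a hypothetical orders table:

```sql
-- Hypothetical example: DISABLE TRIGGER USER turns off all
-- user-defined (non-system) triggers on the table; the statement
-- must be run by the table owner.
ALTER TABLE orders DISABLE TRIGGER USER;

-- ... run the bulk write here ...

-- Re-enable the triggers once the bulk write is complete. Note that
-- disabled triggers do not fire retroactively for rows written
-- while they were off.
ALTER TABLE orders ENABLE TRIGGER USER;
```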

Prefer DROP and TRUNCATE for deletion

Where possible, prefer TRUNCATE or DROP on entire tables instead of DELETE FROM: DELETE writes WAL for every removed row, while TRUNCATE and DROP log only a small amount of metadata, greatly reducing write volume.
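For example, clearing out a hypothetical staging table:

```sql
-- Hypothetical example: DELETE generates WAL for each row removed...
--   DELETE FROM staging_rows;
-- ...whereas TRUNCATE empties the table with minimal WAL.
-- Note: TRUNCATE takes an ACCESS EXCLUSIVE lock on the table.
TRUNCATE TABLE staging_rows;
```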

For more information, see https://devcenter.heroku.com/articles/postgres-write-ahead-log-usage
