Sometime last week, an attempted automated upgrade of GitLab (running via Docker) failed catastrophically and left that Docker image in a pretty botched state. Kudos to the GitLab team for making it so many years without this being a problem. It was a real surprise to see it broken!
This took a fair bit of troubleshooting to track down, partly because I’m a noob, and partly because the sheer volume of logs to sift through on any single restart of GitLab is hilariously huge.
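If you're stuck sifting through the same haystack, the combined container logs can at least be filtered from the host. A minimal sketch, assuming the container is named `gitlab` as in the compose file further down:

# Dump everything the container has logged and filter for errors
# (docker logs splits stdout and stderr, hence the redirect)
docker logs gitlab 2>&1 | grep -i error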
Doing a `grep` for `error` helped me discover a few of these when trying to run `gitlab-rake db:migrate`:

PG::InternalError: ERROR: no unpinned buffers available
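For context, the rake commands mentioned in this post are run inside the container via `docker exec`, roughly like so:

# Run the pending database migrations inside the GitLab container
docker exec -it gitlab gitlab-rake db:migrate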
I also noticed that `gitlab-rake db:migrate:status` listed quite a few migrations as down instead of up, so I could tell things weren't able to finish; I just didn't really understand how that could happen.
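A quick sketch of that check, again assuming a container named `gitlab`; anything that never finished shows up as down:

# List every migration and its status, keeping only the ones that never ran
docker exec -it gitlab gitlab-rake db:migrate:status | grep -w down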
Then I found this issue…
I ended up needing to modify my `docker-compose.yml` file to bump up the amount of shared memory made available to PostgreSQL:
version: '3.6'
services:
  gitlab:
    container_name: gitlab
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'https://jjj.software'
        postgresql['shared_buffers'] = "512MB"
That last line is new, and is annoyingly necessary because GitLab's internal upgrade routines are now increasingly likely to bump into the shared memory limit of GitLab's bundled PostgreSQL (its `shared_buffers` setting), which appears to be 256MB by default.
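If you want to verify the change took, here's a rough sequence for applying it and checking the live value. This assumes the standard Omnibus image, which ships the `gitlab-psql` wrapper:

# Recreate the container so the updated GITLAB_OMNIBUS_CONFIG is picked up
# (use docker-compose instead of "docker compose" if you're on the older CLI)
docker compose up -d gitlab

# Confirm the setting PostgreSQL is actually running with
docker exec -it gitlab gitlab-psql -c 'SHOW shared_buffers;'

# Re-run the migrations that were previously stuck
docker exec -it gitlab gitlab-rake db:migrate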
Now that that’s fixed, I get to start some fun hobby project stuff while I’m on vacation this week!