GitLab pipeline exiting with error code 137 when running Cypress

I'm creating a Docker image based on alpine:3.13, which is used for my test stage, running in a pipeline on GitLab.

There I install all the dependencies. The app consists of two components, which I will call front and back.

I run the following command to set up front and back and finally execute Cypress in headless mode.

"e2e:run": "concurrently -n front, back \"yarn front\" \"yarn back\" \"yarn front:wait && yarn back:wait && yarn cypress:run\""

It builds front and back fine, but then the job log doesn't show any progress for a few minutes until I finally get this exit code:

ERROR: Job failed: command terminated with exit code 137

From my research so far, I've concluded that it seems to be related to a lack of memory.

  1. Is there any other plausible explanation?
  2. What could I do to provide more memory or reduce memory consumption?

As @SamBob mentioned, this issue is likely due to low memory within the running Docker container, and the shm_size parameter can be used to increase the available shared memory. However, since you're not running the image in the job yourself (i.e., invoking docker run directly) but rather the gitlab-runner process is, you'll have to set the shm_size parameter in the runner's configuration for the Docker executor. To do this, you'll also have to run your own runners if you aren't already.
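For comparison, if you were starting the container yourself you could set this directly on the command line; docker run accepts an --shm-size flag, so something like the following (the image name is just a placeholder) would work when reproducing the tests locally:

docker run --shm-size=2g my-cypress-test-image

In a CI job, however, the container is created by the gitlab-runner process, so the equivalent setting has to go into the runner's config.toml instead.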

When running your own runners, each will have a config.toml file in /etc/gitlab-runner that looks like this by default:

listen_address = ":9252"
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "runner-1"
  url = "https://gitlab.example.com"
  token = "TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.docker]
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0

As you can see, by default the shm_size parameter is set to 0 bytes. You can edit this file to increase the shm_size, then restart the gitlab-runner service to reload the new config.
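For example, to give job containers roughly 2 GB of shared memory (the exact value is an assumption; it's specified in bytes, so size it to what your jobs actually need), the relevant line would become:

[runners.docker]
  shm_size = 2000000000

Afterwards, restart the service so the change takes effect, e.g. sudo gitlab-runner restart (or sudo systemctl restart gitlab-runner, depending on how the runner was installed).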

One other thing I do is add a shm-increased tag to the runners where I've increased shm_size, since only a couple of jobs in my pipelines need the extra shared memory; jobs that do need it can then be routed specifically to those runners (see the sketch below).
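As a rough sketch (the job name, stage, and image are placeholders for your own pipeline), a Cypress job in .gitlab-ci.yml can then be routed to those runners with the same tag:

e2e:
  stage: test
  image: my-alpine-test-image   # placeholder for the alpine:3.13-based image from the question
  tags:
    - shm-increased             # only picked up by runners carrying this tag
  script:
    - yarn e2e:run

Untagged jobs keep running on the regular runners, so only the jobs that actually need the extra shared memory land on the modified ones.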

To see more information on running your own GitLab Runners, see here.

To see more on the shm_size parameter for GitLab Runners, and other advanced runner configuration options, see here.

To see information on tagging runners and jobs, see here.