Laravel / PHP / MySQL - Packets out of order [duplicate]

I have a Laravel Spark project that uses Horizon to manage a job queue with Redis.

Locally (on my Homestead box, macOS) everything works as expected, but on our new Digital Ocean Droplet (provisioned via Forge), a memory-optimized VPS with 256GB RAM, 32 vCPUs, 10TB transfer, and 1x 800GB disk, I keep getting the error:

PDOException: Packets out of order. Expected 0 received 1. Packet size=23

Or some variation of that error, where the packet size info may be different.

After many hours/days of debugging and research, I have come across many posts on Stack Overflow and elsewhere that seem to indicate this can be fixed by doing a number of things, listed below:

  1. Set PDO::ATTR_EMULATE_PREPARES to true in my database.php config (see the database.php sketch below the list). This has absolutely no effect on the problem, and actually introduces another issue whereby integers are cast as strings.

  2. Set DB_HOST to 127.0.0.1 instead of localhost, so that it uses TCP instead of a UNIX socket. Again, this has no effect.

  3. Set DB_SOCKET to the socket path reported by MySQL (MariaDB) itself: logging in and running show variables like '%socket%'; lists the socket path as /run/mysqld/mysqld.sock. I also leave DB_HOST set to localhost (again, see the database.php sketch below the list). This has no effect either. One thing I did note was that the pdo_mysql.default_socket variable is set to /var/run/mysqld/mysqld.sock; I'm not sure if this is part of the problem.

  4. I have massively increased the MySQL configuration settings found in /etc/mysql/mariadb.conf.d/50-server.cnf to the following (shown as a [mysqld] snippet after this list):

    • key_buffer_size = 2048M
    • max_allowed_packet = 2048M
    • max_connections = 1000
    • thread_concurrency = 100
    • query_cache_size = 256M
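
In 50-server.cnf these sit under the [mysqld] section, i.e.:

[mysqld]
key_buffer_size    = 2048M
max_allowed_packet = 2048M
max_connections    = 1000
thread_concurrency = 100
query_cache_size   = 256M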

I must admit that changing these settings was a last-resort, clutching-at-straws type of move. However, it did alleviate the issue to some degree, but it did not fix it completely; MySQL still fails 99% of the time, albeit at a later stage.
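
For reference, items 1-3 above come down to the mysql connection in config/database.php. Trimmed down, and with placeholder values, the relevant part looks roughly like this:

'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),    // item 2: 127.0.0.1 forces TCP instead of a socket
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'unix_socket' => env('DB_SOCKET', ''),    // item 3: DB_SOCKET=/run/mysqld/mysqld.sock in .env
    'options' => [
        PDO::ATTR_EMULATE_PREPARES => true,   // item 1: no effect, and integers come back as strings
    ],
],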

In terms of the queue, I have a total of 1,136 workers split between 6 supervisors/queues, all handled via Laravel Horizon, which is run as a daemon.
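
For context, each supervisor is defined in config/horizon.php along these lines (an illustrative sketch only; the supervisor name, queue name, and process count below are placeholders, not my actual values):

'environments' => [
    'production' => [
        'supervisor-example' => [
            'connection' => 'redis',
            'queue' => ['example'],
            'balance' => 'auto',
            'processes' => 200,   // placeholder; my six supervisors total 1,136 workers
            'tries' => 3,
        ],
        // ...five more supervisor entries, one per queue...
    ],
],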

I am also using the Laravel Websockets PHP package for broadcasting, which is likewise run as a daemon.

My current environment configuration is as follows (sensitive info omitted).

APP_NAME="App Name"
APP_ENV=production
APP_DEBUG=false
APP_KEY=thekey
APP_URL=https://appurl.com
LOG_CHANNEL=single

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=database
DB_USERNAME=username
DB_PASSWORD=password

BROADCAST_DRIVER=pusher
CACHE_DRIVER=file
QUEUE_CONNECTION=redis
SESSION_DRIVER=file
SESSION_LIFETIME=120

REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379

MAIL_MAILER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
[email protected]
MAIL_PASSWORD=password
MAIL_ENCRYPTION=tls
[email protected]
MAIL_FROM_NAME="${APP_NAME}"

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION="us-east-1"
AWS_BUCKET=

PUSHER_APP_ID=appid
PUSHER_APP_KEY=appkey
PUSHER_APP_SECRET=appsecret
PUSHER_APP_CLUSTER=mt1

MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"

AUTHY_SECRET=

CASHIER_CURRENCY=usd
CASHIER_CURRENCY_LOCALE=en
CASHIER_MODEL=App\Models\User
STRIPE_KEY=stripekey
STRIPE_SECRET=stripesecret

# ECHO SERVER
LARAVEL_WEBSOCKETS_PORT=port

The server setup is as follows:

  • Max File Upload Size: 1024 MB
  • Max Execution Time: 300 seconds
  • PHP Version: 7.4
  • MariaDB Version: 10.3.22

I have checked all logs (see below) at the time the MySQL server crashes/goes away, and there is nothing in the MySQL logs at all. No error whatsoever. I also don't see anything in:

  • /var/log/nginx/error.log
  • /var/log/nginx/access.log
  • /var/log/php7.4-fpm.log
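
For completeness, the file MariaDB writes its error log to can be confirmed from inside MySQL with:

SHOW VARIABLES LIKE 'log_error';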

I'm currently still digging through and debugging, but right now, I'm stumped. This is the first time I've ever come across this error.

Could this be down to hitting the database (read/write) too fast?

A little information on how the queues work:

  1. I have an initial controller that dispatches a job to the queue.
  2. Once this job completes, it fires an event, which kicks off a sequence of further listeners/events; each one depends on the previous job completing before the next event is fired and the next listener/job takes up the work (see the sketch after this list).
  3. In total, there are 30 events that are broadcast.
  4. In total, there are 30 listeners.
  5. In total there are 5 jobs.

These all work sequentially based on the listener/job that was run and the event that it fires.
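
To illustrate the pattern (the class names below are made up for illustration; they are not my real classes), each step looks roughly like this:

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class ProcessStepOne implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public $batchId;

    public function __construct($batchId)
    {
        $this->batchId = $batchId;
    }

    public function handle()
    {
        // ...do the work for this step (this is where the database reads/writes happen)...

        // Fire the "step complete" event once the work is done.
        event(new StepOneCompleted($this->batchId));
    }
}

class StepOneCompletedListener implements ShouldQueue
{
    use InteractsWithQueue;

    public function handle(StepOneCompleted $event)
    {
        // Each listener dispatches the next job in the sequence.
        ProcessStepTwo::dispatch($event->batchId);
    }
}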

I have also monitored the laravel.log live, and when the crash occurs nothing is logged at all. I do occasionally get production.ERROR: Failed to connect to Pusher. whether MySQL crashes or not, so I don't think that has any bearing on this problem.

I even noticed that the Laravel API rate limit was being hit, so I drastically increased it from 60 to 500. Still no joy.
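
For reference, on a stock Laravel install that limit is the throttle middleware on the api group in app/Http/Kernel.php, so the change amounts to something like:

// app/Http/Kernel.php
protected $middlewareGroups = [
    // ...
    'api' => [
        'throttle:500,1',   // raised from the default throttle:60,1 (60 requests per minute)
        'bindings',
    ],
];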

Lastly, it doesn't seem to matter which Event, Job, or Listener is running, as the error occurs on random ones. So I'm not sure it's code-specific, although it may well be.

Hopefully, I've provided enough background and detailed information to get some help with this, but if I've missed anything, please do let me know and I'll add it to the question. Thanks.


For me what fixed it was increasing the max packet size.

In my.cnf, I added:

max_allowed_packet=200M

Then I ran service mysql stop followed by service mysql start, and it worked :)
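
You can confirm the new value took effect after the restart with:

SHOW VARIABLES LIKE 'max_allowed_packet';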


We were getting a similar PHP warning about packets out of order. What solved it for us was increasing max_connections in the MySQL my.cnf. Your current max_connections value is probably 1024. We increased ours to 4096 and the warning went away. In MySQL, you can see your current max_connections with this command:

SHOW VARIABLES LIKE "%max_connections%";
or
mysqladmin variables | grep max_connections
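
To raise it, set it under the [mysqld] section of my.cnf and restart MySQL, or change it at runtime (note that a runtime change does not persist across a restart):

[mysqld]
max_connections = 4096

SET GLOBAL max_connections = 4096;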