Dovecot auth-sql driver not respecting sql username options, how do I get around this?

Dovecot is running in a jail and set up properly for SQL connections.

dovecot-sql.conf.ext has the appropriate options; the problematic one is connect.

connect = host=127.0.0.1 dbname=mailserver user=mailuser password=password
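For completeness, the rest of the file is the usual shape; everything other than connect behaves as expected. The sketch below is illustrative of my setup rather than verbatim (the query and scheme lines are typical for a virtual_users table like the one shown further down):

-----------------------------------
# dovecot-sql.conf.ext (query/scheme illustrative; connect is verbatim)
driver = mysql
connect = host=127.0.0.1 dbname=mailserver user=mailuser password=password
default_pass_scheme = SHA256-CRYPT
password_query = SELECT email AS user, password FROM virtual_users WHERE email = '%u';
-----------------------------------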

The SQL user is set to 'mailuser'@'127.0.0.1' so that Dovecot and Postfix connect over TCP rather than trying to access a socket they can't reach from the jail.

Dovecot starts up with no issues. Attempting an IMAP login returns a temporary authentication failure.

Logs read as follows:

dovecot: auth-worker(1295): Error: mysql(127.0.0.1): Connect failed to database (mailserver): Access denied for user 'mailuser'@'localhost' (using password: YES).

Does anyone know of a way to force Dovecot to use a %u (username = user@domain) format for the sql-driver username instead of %n (user)@'localhost'?

I've tried everything I can think of or find, including diving into the source to change the 'localhost' parameter. It seems to be immutable.

The option_file parameter looked promising, but testing shows it doesn't actually read most of the connection parameters, and there's no documentation on the format it expects beyond the fact that it must start with an option_group of [client] to avoid a fatal error.
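For reference, what I was pointing option_file at is a standard my.cnf-style [client] section along these lines (path and values illustrative):

-----------------------------------
# dovecot-sql.conf.ext (alternative connect line I tested)
connect = option_file=/etc/dovecot/mysql-client.cnf option_group=client

# /etc/dovecot/mysql-client.cnf -- standard my.cnf [client] syntax
[client]
host=127.0.0.1
user=mailuser
password=password
database=mailserver
-----------------------------------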

If at all possible, I'd rather not have to move an SQL socket into the Dovecot folder and create a separate SQL user just so Dovecot can make queries.

I'm hoping someone here might have an idea about how to work around this...

For reference, I'm using the 2.2.33.2 package available on Bionic. I plan to compile the newest version of Dovecot tomorrow as I get time (though I found no bug reports or issues about this).

Edit: @anx, I've included the output of SELECT User,Host,Plugin FROM mysql.user; below (I had to grant additional privileges to pull this).

Edit: I've adjusted the MySQL tests to include the dbname; I had previously simply typed USE mailserver;

+-----------+-----------+-------------+
| user      | host      | plugin      |
+-----------+-----------+-------------+
| root      | localhost | unix_socket |
| mailuser  | 127.0.0.1 |             |
| mailadmin | localhost |             |
+-----------+-----------+-------------+
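(Note there is no 'mailuser'@'localhost' row. If the server were reverse-resolving 127.0.0.1 to localhost, a matching grant along the lines below would presumably mask the symptom; I'm treating it as a hypothetical workaround of last resort, untested here:)

-----------------------------------
-- hypothetical workaround: add a matching localhost grant
GRANT SELECT ON mailserver.* TO 'mailuser'@'localhost'
    IDENTIFIED BY 'password';
-----------------------------------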

The commands I used to test the login as mailuser are below; both succeed.

-----------------------------------
mysql -u mailuser -p -h 127.0.0.1
MariaDB[(none)]> USE mailserver;
-----------------------------------
mysql -u mailuser -p -h 127.0.0.1 --database='mailserver'
-----------------------------------

(Same output for both commands)
MariaDB[mailserver]> SELECT * from virtual_users;
+----+-----------+------------------+------------------+
| id | domain_id | email            | password         |
+----+-----------+------------------+------------------+
|  1 |         1 | [email protected] | {SHA256-CRYPT}.. |
+----+-----------+------------------+------------------+

Testing the authentication through dovecot was done as follows:

openssl s_client -connect 127.0.0.1:993 -crlf
IMAP> a login [email protected] password

Temporary Authentication Failure
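(For anyone reproducing this: doveadm can exercise the same passdb lookup without TLS in the way. The command below uses 2.2's doveadm-auth subcommand with the same placeholder credentials as above:)

-----------------------------------
doveadm auth test [email protected] password
-----------------------------------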

The Dovecot MySQL connect string above already contains the dbname.

Logs show many entries like the one below, where authentication to SQL fails because the client is not being identified correctly.

dovecot: auth-worker(1394): Error: mysql(127.0.0.1): Connect failed to database (mailserver): Access denied for user 'mailuser'@'localhost' (using password: YES) - waiting for 125 seconds before retry.

EDIT: See the accepted answer for details. TL;DR: the issue was a hardware (ASPM) / Docker network corruption issue.


Thanks Michael, I've adjusted the post accordingly.

Basically, the aforementioned stack was a Postfix/Dovecot/MySQL stack that had been containerized and running for a few years. The build was updated recently; it would pass tests but fail once deployed.

The issue was a strange one: authenticating against Dovecot would fail during manual testing.

Dovecot authentication worked without issue when the components were in separate containers, but failed when connecting or testing over the loopback adapter within the container.

About a week after the post, I'd worked down the stack and ended up taking tcpdump captures at various locations and stages during the authentication process.
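(For reference, a capture along these lines run inside the container should be enough to surface this kind of thing; -vv makes tcpdump print checksum validation results. Interface and port here are illustrative, and note that checksum offloading can produce false "incorrect" warnings for locally generated packets, so captures on the receiving side are the meaningful ones.)

-----------------------------------
# inside the container: watch IMAPS traffic on the loopback;
# -vv annotates packets whose checksums fail validation
tcpdump -i lo -nn -vv 'tcp port 993'
-----------------------------------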

Someone from the Dovecot developer list noticed there were checksum errors in the captures; the corrupted packets were not being discarded, and they were causing the container services running on the loopback to fail.

Around the same time, while digging into this issue, I noticed a PCIe bus error on the host, where an L2 error was being written to the kernel ring buffer with status code 00001100, intermittently and at random.
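(For anyone checking their own hosts: these corrected AER events land in the kernel log, though the exact wording varies by kernel version:)

-----------------------------------
# look for corrected/uncorrected PCIe AER events on the host
dmesg | grep -iE 'pcie bus error|aer'
-----------------------------------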

Eventually it became clear the errors were not completely random: they tended to occur (inconsistently) during times when a large number of containers were being created or destroyed.

The error showed as a corrected error, and normal TCP/UDP/ICMP tests all succeeded; no other problems were present, which is why it wasn't looked at previously.

I moved the image to another host with different hardware and the issue disappeared.

Returning to the original host and digging in, I found ASPM was the cause of the error, thanks to a post by Thomas Krenn; passing the pcie_aspm=off option to the kernel resolved the error.
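(On this host that meant the usual GRUB route, sketched below; adjust for your bootloader, and keep whatever kernel options you already have:)

-----------------------------------
# /etc/default/grub -- append pcie_aspm=off to the existing options
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=off"

# regenerate the grub config and reboot for it to take effect
sudo update-grub
sudo reboot
-----------------------------------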

Re-testing afterwards showed the problem was gone, and three weeks later it has not resurfaced.

The TL;DR is that this wasn't a Dovecot issue but an underlying hardware issue that triggered packet corruption on some Docker networking interfaces, and for some reason the corrupted packets were not being discarded.

In our case, tests run from the host or passed in from a runner had no issues, while tests that traversed the loopback, whether initiated from an interactive console in the container or run by the container's own service, would unexpectedly fail.

If you are using any kind of VM or containerized infrastructure, keep this in mind: in a perfect world virtual networking would behave just like physical networking, and it's definitely not a perfect world.

Thanks to everyone that helped.