Don’t break production: learn about MySQL locks

Simon Ninon
9 min read · Aug 1, 2021

One web application I used to maintain had been using MySQL 5.6 as its main database for a couple of years, but we decided to take a leap and upgrade to MySQL 8.0 to benefit from various improvements: performance, extended online DDL operations, JSON support, …

After weeks of testing, once everything looked good, we rolled out the upgrade and served all our production database traffic from MySQL 8.0. We closely monitored for any issues in the following weeks: everything was running smoothly, and we could finally put the upgrade behind us! 🥳

It was not until a few more weeks had passed that it all fell apart. About 1.5 months after rolling out the upgrade, our service got disrupted after running a database migration during a regular deployment: our main production replica seemed stuck, with thousands of queries hanging indefinitely, causing our webservers to hang as well and leaving us with no choice other than to reboot the replica. Two days later, while we were still troubleshooting the root cause, the same incident happened: our replica went nuts after running migrations during a deployment.

We immediately connected the incident to the MySQL 8.0 upgrade, but it was unclear to us why exactly. In particular, we had been running MySQL 8.0 in production, and had performed a few migrations, for more than 1.5 months without any issue. So, what was causing this, and why now?

It turned out that MySQL 8 comes with some changes in how metadata locks are implemented. That, combined with some long-running transactions, can cause read queries on some specific tables to be blocked for quite a long time. These incidents were a good opportunity for us to learn more about the internal locking mechanisms of MySQL. They were also a solid reminder that we have a lot to learn about our own database, how our teams are using it, and how these different usages can conflict with one another.

MySQL 8.0.3+ Metadata Lock Extension

Metadata locks are locks on the table itself, preventing concurrent changes to its structure. They are typically taken when running DDL (Data Definition Language) statements, such as ALTER TABLE, since those modify the schema.

In MySQL 8.0.3, MySQL made some changes related to metadata locks, as described in the changelog:

MySQL now extends metadata locks, as necessary, to tables that are related by a foreign key constraint. Extending metadata locks prevents conflicting DML and DDL operations from executing concurrently on related tables. This feature also enables updates to foreign key metadata when a parent table is modified.

Previously, foreign key metadata, which is owned by the child table, could not be updated safely.

If a table is locked explicitly with LOCK TABLES, any tables related by a foreign key constraint are now opened and locked implicitly. For foreign key checks, a shared read-only lock (LOCK TABLES READ) is taken on related tables. For cascading updates, a shared-nothing write lock (LOCK TABLES WRITE) is taken on related tables that are involved in the operation.

If LOCK TABLES is active for a table in a foreign key relationship, ALTER TABLE … RENAME is not permitted for that table. This is a temporary restriction, lifted in MySQL 8.0.4 by the patch for Bug #26647340.

Similarly, the MySQL 8.0 Metadata Locking documentation states the following:

Metadata locks are extended, as necessary, to tables related by a foreign key constraint to prevent conflicting DML and DDL operations from executing concurrently on the related tables. When updating a parent table, a metadata lock is taken on the child table while updating foreign key metadata. Foreign key metadata is owned by the child table.

One main implication of this change shows up when running DDL statements like the following: ALTER TABLE tbl_1 ADD CONSTRAINT some_constraint FOREIGN KEY (col_tbl_1) REFERENCES tbl_2 (col_tbl_2);. On previous MySQL versions, a metadata lock is requested on tbl_1 only. Since MySQL 8.0.3, a metadata lock is requested on both tbl_1 and tbl_2.
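
As an illustration, here is a minimal sketch using the hypothetical tbl_1 and tbl_2 tables from the statement above; the metadata locks requested by the DDL can be observed through performance_schema (the metadata lock instrument is enabled by default on MySQL 8.0):

```sql
-- Hypothetical tables, used only for illustration.
CREATE TABLE tbl_2 (col_tbl_2 INT PRIMARY KEY);
CREATE TABLE tbl_1 (col_tbl_1 INT PRIMARY KEY);

-- The DDL discussed above: on MySQL 8.0.3+, it requests a metadata lock
-- on tbl_2 (the referenced parent table) in addition to tbl_1.
ALTER TABLE tbl_1
  ADD CONSTRAINT some_constraint
  FOREIGN KEY (col_tbl_1) REFERENCES tbl_2 (col_tbl_2);

-- From another session, while the ALTER TABLE is running (or blocked),
-- the metadata locks currently held or requested can be listed:
SELECT object_schema, object_name, lock_type, lock_duration, lock_status
FROM performance_schema.metadata_locks
WHERE object_name IN ('tbl_1', 'tbl_2');
```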

Shared, Exclusive, and Implicit Locks

MySQL handles concurrent operations with various locks. Not all locks detailed in the documentation are related to our incident, and explaining all of them is beyond the scope of this article (and my knowledge!). Instead, the MySQL documentation provides some good resources detailing them: InnoDB Locking and Metadata Locking.

That being said, to better understand this incident, it is important to know some basics about MySQL locks: shared, exclusive, and implicit locks.

Shared Lock

A kind of lock that allows other transactions to read the locked object, and to also acquire other shared locks on it, but not to write to it. The opposite of exclusive lock.

Source: MySQL Documentation

Exclusive Lock

A kind of lock that prevents any other transaction from locking the same row. Depending on the transaction isolation level, this kind of lock might block other transactions from writing to the same row, or might also block other transactions from reading the same row. The default InnoDB isolation level, REPEATABLE READ, enables higher concurrency by allowing transactions to read rows that have exclusive locks, a technique known as consistent read.

Source: MySQL Documentation
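
To make these two definitions more concrete, here is a minimal sketch reusing the hypothetical tbl_1 and tbl_2 tables from the previous example; FOR SHARE is the MySQL 8.0 syntax for requesting a shared row lock (older versions use LOCK IN SHARE MODE):

```sql
-- Setup: one row to lock (tbl_2 first, because of the foreign key).
INSERT INTO tbl_2 VALUES (42);
INSERT INTO tbl_1 VALUES (42);

-- Session A: shared row lock. Other transactions can still read the row
-- and take their own shared locks on it, but cannot modify it.
START TRANSACTION;
SELECT * FROM tbl_1 WHERE col_tbl_1 = 42 FOR SHARE;

-- Session B: exclusive row lock. This request blocks until session A
-- commits or rolls back, since it conflicts with the shared lock.
START TRANSACTION;
SELECT * FROM tbl_1 WHERE col_tbl_1 = 42 FOR UPDATE;

-- Session A: releasing the shared lock unblocks session B.
COMMIT;
```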

Implicit Lock

DDL (Data Definition Language) and DML (Data Manipulation Language) queries will implicitly lock some resources (rows, tables, …) without you requesting it explicitly.

For example, MySQL defines an implicit row lock as follows:

A row lock that InnoDB acquires to ensure consistency, without you specifically requesting it.
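
As a small illustration, a plain DELETE inside a transaction acquires exclusive row locks without asking for them; this sketch reuses the hypothetical tbl_1 table from the earlier examples and the performance_schema.data_locks table available since MySQL 8.0:

```sql
-- Nothing in this statement requests a lock, yet InnoDB implicitly
-- acquires an exclusive lock on the affected row (and an intention
-- lock on the table) until the transaction ends.
START TRANSACTION;
DELETE FROM tbl_1 WHERE col_tbl_1 = 42;

-- The locks held by open transactions can then be inspected:
SELECT object_name, index_name, lock_type, lock_mode, lock_status, lock_data
FROM performance_schema.data_locks;

ROLLBACK;  -- release the implicit locks and keep the row for later examples
```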

Waiting for table metadata lock

In our Django application, we defined a migration adding a foreign key from the example_table_article table to the example_table_user table. This migration translates to a DDL query similar to the following: ALTER TABLE example_table_article ADD CONSTRAINT some_constraint FOREIGN KEY (author) REFERENCES example_table_user;.

Thus, this migration will hold an exclusive metadata lock on both example_table_article and example_table_user tables.

The exclusive metadata lock requested by this DDL query can only be acquired once no ongoing request holds an exclusive or shared metadata lock on these tables. Read queries do hold a shared metadata lock, but most of them execute fast and release it upon completion, allowing the DDL query to run as soon as all the read transactions received before it have completed.

This is backed by the MySQL 8.0 Metadata Locking documentation:

If there are multiple waiters for a given lock, the highest-priority lock request is satisfied first, with an exception related to the max_write_lock_count system variable. Write lock requests have higher priority than read lock requests. However, if max_write_lock_count is set to some low value (say, 10), read lock requests may be preferred over pending write lock requests if the read lock requests have already been passed over in favor of 10 write lock requests. Normally this behavior does not occur because max_write_lock_count by default has a very large value.

Statements acquire metadata locks one by one, not simultaneously, and perform deadlock detection in the process.

DML statements normally acquire locks in the order in which tables are mentioned in the statement.

DDL statements, LOCK TABLES, and other similar statements try to reduce the number of possible deadlocks between concurrent DDL statements by acquiring locks on explicitly named tables in name order. Locks might be acquired in a different order for implicitly used tables (such as tables in foreign key relationships that also must be locked).

However, if someone starts running a long transaction and queries the example_table_user table at some point, the shared metadata lock on the example_table_user table will be held for the whole transaction duration, until it is committed or rolled back.

In this situation, the DDL operation (ALTER TABLE ...) will hang and wait until it can acquire the lock. But worse, all the read queries on the example_table_user table received after the DDL statement will also wait, because MySQL prioritizes the write operation (otherwise, it might never be able to acquire the lock): that's how we ended up with all the queries on the example_table_user table suddenly stuck, waiting for the metadata lock, as shown below:
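
On the server, the stuck queries all report the 'Waiting for table metadata lock' state in the processlist; for example:

```sql
-- The State column of the stuck queries reports the metadata lock wait.
SHOW FULL PROCESSLIST;

-- Or, to list only the queries stuck on a metadata lock:
SELECT id, time, state, info
FROM information_schema.processlist
WHERE state = 'Waiting for table metadata lock';
```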

We were able to reproduce this behavior with 3 MySQL shell sessions:

  • Session 1
    Start a transaction and read something from the tbl_1 table, but do not commit or roll back the transaction yet: we just leave it open
  • Session 2
    Run a CREATE TABLE statement adding a foreign key constraint referencing tbl_1: this statement will just hang, waiting to acquire the metadata lock on tbl_1, which conflicts with the shared metadata lock already held by the transaction in the first session
  • Session 3
    Try to read something from the tbl_1 table: this one will also hang, because MySQL queues the query until the second session can acquire its lock on tbl_1 (writes are prioritized over reads)
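
In SQL terms, the three sessions run statements along these lines (a sketch, reusing the hypothetical tbl_1 table from the earlier examples; tbl_3 and fk_tbl_3 are made-up names for the example):

```sql
-- Session 1: open a transaction, read from tbl_1, and leave the
-- transaction open. The SELECT takes a shared metadata lock on tbl_1
-- that is only released when the transaction commits or rolls back.
START TRANSACTION;
SELECT * FROM tbl_1 LIMIT 1;

-- Session 2: DDL referencing tbl_1 through a foreign key. On MySQL
-- 8.0.3+, it also requests a metadata lock on tbl_1 that conflicts with
-- the shared lock held by session 1, so it hangs.
CREATE TABLE tbl_3 (
  col_tbl_3 INT,
  CONSTRAINT fk_tbl_3 FOREIGN KEY (col_tbl_3) REFERENCES tbl_1 (col_tbl_1)
);

-- Session 3: a simple read on tbl_1. It queues behind session 2's
-- pending lock request (writes are prioritized over reads) and hangs too.
SELECT * FROM tbl_1 LIMIT 1;

-- Session 1: ending the transaction releases the shared metadata lock,
-- letting sessions 2 and 3 complete.
COMMIT;
```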

This scenario is shown in the screenshot below:

As soon as we end the transaction in the first session, the DDL and DML queries from the 2nd and 3rd sessions can complete, as shown below.

When the queries of the 2nd and 3rd sessions are pending, we can also confirm that they are waiting to acquire the metadata lock that the transaction of the 1st session is holding:
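
One way to check this on MySQL 8.0 is to query performance_schema.metadata_locks while the queries are hanging; a minimal sketch:

```sql
-- The shared metadata lock held by session 1's open transaction shows up
-- as GRANTED, while the requests of sessions 2 and 3 show up as PENDING.
SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks
WHERE object_name = 'tbl_1';
```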

Note that we performed this experiment on MySQL 5.6, 5.7, and 8.0, and were only able to reproduce it on MySQL 8.0.3+, confirming that the behavior was introduced in MySQL 8.0.3.

Solving the incident

Following this experiment, and given that this issue consistently occurred every time we performed a migration adding foreign keys referencing the example_table_user table after upgrading to MySQL 8.0.21, we expected to find some long-running transactions in production holding a shared metadata lock on the example_table_user table.

This is exactly what we found! Solving the incident was much easier from there, as we just needed to track down where the transaction was coming from. In our case, one of our services was running an old Django version (1.5) configured not to use auto-commit, effectively holding hours-long transactions whenever a Django shell session was kept open for a while.
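
For reference, one simple way to track down such transactions is to query information_schema.innodb_trx, which lists the transactions currently open in InnoDB; a minimal sketch (the 10-minute threshold is an arbitrary example):

```sql
-- Transactions open for more than 10 minutes, with the connection they
-- belong to (trx_mysql_thread_id matches the processlist Id) and the
-- statement they are currently running, if any.
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.innodb_trx
WHERE trx_started < NOW() - INTERVAL 10 MINUTE
ORDER BY trx_started;
```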

Of course, besides dropping these long-running transactions, there is more to be done to prevent similar incidents from happening again. Among other things:

  • Our production services have some database routing logic that decides which database node to send a query to. However, this routing logic failed to identify misbehaving nodes: as we continued to forward traffic to the faulty replica, our service got disrupted.
  • Monitoring to detect such unexpected long-running transactions can be helpful.
  • Having teams use their own databases rather than sharing one central database would provide better isolation and reduce both the likelihood and the impact of such incidents.

Epilogue

As always, a production incident is a good opportunity to learn and grow better. I have personally learned a lot in the past few years by investigating production incidents as deeply as possible.

The database is usually a core part of an infrastructure, but mastering it can prove challenging. Besides the complex internal details of the database engine, many different teams rely on the same database to run various services and platforms. This can lead to many blind spots that could result in a service interruption sooner or later.

Learning more about the database internals, as well as how different teams use that database, is most likely a key part of building reliable and performant infrastructures.
