Status

Check the status of our services at a glance

Title: mysql2 down
ID: Operation #111
State: completed
Start date: 06/18/2013 12:05 a.m.
End date: 06/18/2013 1:41 a.m.
Affected servers:
  • mysql2

Messages

06/18/2013 12:15 a.m.

Kernel panic. We’ve rebooted the server.

06/18/2013 12:32 a.m.

The server has panicked again. We’re investigating.

06/18/2013 12:38 a.m.

MySQL is suddenly using more than the available RAM, which is causing the issue. We’re still investigating why.
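
For the curious, here is a minimal watchdog sketch (in Python) of the kind of check that catches this: poll mysqld’s resident memory against the machine’s total RAM. The process name, threshold, and interval are illustrative assumptions, not our exact monitoring setup.

    #!/usr/bin/env python3
    # Hypothetical watchdog sketch: poll mysqld's resident memory (VmRSS)
    # against the machine's total RAM, so a runaway process is noticed
    # before the kernel starts killing things. Linux-only; the 90%
    # threshold and 60 s interval are illustrative assumptions.
    import subprocess
    import time

    THRESHOLD = 0.90  # warn when mysqld holds over 90% of total RAM

    def mem_total_kb():
        # MemTotal line in /proc/meminfo, e.g. "MemTotal: 8176444 kB"
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1])

    def mysqld_rss_kb():
        # pidof prints the PID(s) of the running mysqld
        pid = subprocess.check_output(["pidof", "mysqld"]).split()[0].decode()
        with open("/proc/%s/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])

    while True:
        usage = mysqld_rss_kb() / mem_total_kb()
        if usage > THRESHOLD:
            print("WARNING: mysqld is using %.0f%% of RAM" % (usage * 100))
        time.sleep(60)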

06/18/2013 1:02 a.m.

MySQL has been upgraded, to no avail. We’re still investigating and trying solutions.

06/18/2013 1:13 a.m.

We’ve blocked SQL requests from http10 to mysql2, as a specific account on that server is probably triggering a MySQL bug.
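
One way to apply such a block is a firewall rule on mysql2; the rough Python sketch below shows the idea. The address used for http10 is a made-up placeholder, not its real one, and other mechanisms (e.g. revoking MySQL grants) would work too.

    # Hypothetical sketch: drop MySQL traffic (TCP port 3306) coming from
    # http10 at mysql2's firewall. 10.0.0.10 is a placeholder address for
    # http10, not the real one; the rule must be added as root.
    import subprocess

    HTTP10_ADDR = "10.0.0.10"  # assumed address of http10

    subprocess.check_call([
        "iptables", "-A", "INPUT",       # append to the INPUT chain
        "-s", HTTP10_ADDR,               # match traffic from http10 only
        "-p", "tcp", "--dport", "3306",  # MySQL's default port
        "-j", "DROP",                    # silently drop matching packets
    ])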

06/18/2013 1:25 a.m.

We may have isolated the account sending the requests that trigger the bug.

06/18/2013 1:41 a.m.

The account has been confirmed as the source. The customer has been contacted, and we will investigate further together tomorrow.

06/18/2013 1:00 p.m.

After more investigation, we’ve isolated the query that triggers the bug. The query itself is rather simple, but an index created recently causes it to hit the bug. This is a known MySQL issue.
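
For the curious, the sketch below shows how such a case can be inspected: run EXPLAIN on the query and check which index the optimizer picks. Everything in it (user, database, table, column, and the query itself) is a made-up placeholder; we’re not publishing the customer’s actual schema.

    # Hypothetical diagnostic sketch: EXPLAIN the suspect query to see
    # whether the optimizer picks the newly created index. Host aside,
    # every name below (user, database, table, column, the query) is a
    # made-up placeholder, not the customer's actual schema.
    import mysql.connector

    conn = mysql.connector.connect(
        host="mysql2", user="debug", password="secret", database="customer_db"
    )
    cur = conn.cursor()

    # A deliberately simple query of the kind described above; the index
    # recently added on `created_at` is what exposes the MySQL bug.
    cur.execute(
        "EXPLAIN SELECT * FROM orders WHERE created_at > %s",
        ("2013-06-01",),
    )
    for row in cur.fetchall():
        print(row)  # the `key` column shows which index MySQL chose

    cur.close()
    conn.close()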

Note that PostgreSQL does not suffer from this kind of issue (a memory leak): thanks to its multi-process model, each connection is served by its own backend process, and the operating system reclaims that memory when the process exits.