Server status

Check the status of our services at a glance

ID: 111
Title: mysql2 down
Status: completed
Start date: 06/18/2013 12:05 a.m.
End date: 06/18/2013 1:41 a.m.
Involved servers: mysql2

Updates

06/18/2013 0:15
Kernel panic. We’ve rebooted the server.

06/18/2013 0:32
It happened again. We’re investigating.

06/18/2013 0:38
MySQL is suddenly using more than the available RAM, which is causing the issue. We’re still investigating why.
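
For context, this kind of memory pressure shows up as growth in the mysqld process’s resident set size. Below is a minimal monitoring sketch in Python, assuming the psutil package and a purely illustrative alert threshold (neither is part of our actual tooling):

    import time
    import psutil

    THRESHOLD_MB = 12 * 1024  # illustrative threshold: alert above 12 GB

    def mysqld_rss_mb():
        """Return the resident set size of the mysqld process, in MB."""
        for proc in psutil.process_iter(["name", "memory_info"]):
            if proc.info["name"] == "mysqld":
                return proc.info["memory_info"].rss / (1024 * 1024)
        return None  # mysqld not running

    # Poll every 30 seconds and warn when mysqld exceeds the threshold.
    while True:
        rss = mysqld_rss_mb()
        if rss is not None and rss > THRESHOLD_MB:
            print(f"WARNING: mysqld is using {rss:.0f} MB")
        time.sleep(30)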

06/18/2013 1:02
MySQL has been upgraded, to no avail. We’re still investigating and trying solutions.

06/18/2013 1:13
We’ve blocked SQL requests from http10 to mysql2, as a specific account on http10 is probably triggering a MySQL bug.
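
One way to implement such a block (a sketch of one plausible approach, not a record of the exact commands used) is a firewall rule on mysql2 that drops MySQL traffic coming from http10. The address below is a placeholder:

    import subprocess

    # Placeholder address for http10; not the real one.
    HTTP10_ADDR = "192.0.2.10"

    # Drop incoming MySQL (port 3306) traffic from http10 on mysql2.
    subprocess.run(
        ["iptables", "-A", "INPUT", "-s", HTTP10_ADDR,
         "-p", "tcp", "--dport", "3306", "-j", "DROP"],
        check=True,
    )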

06/18/2013 1:25
We may have isolated the account sending the requests that trigger the bug.
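
To illustrate how such an account can be singled out, one approach is to group live connections by user in information_schema.PROCESSLIST. A sketch assuming the pymysql package and placeholder credentials:

    import pymysql

    # Placeholder credentials, not our actual configuration.
    conn = pymysql.connect(host="mysql2", user="admin", password="secret")

    with conn.cursor() as cur:
        # Count active connections per account to spot the outlier.
        cur.execute(
            "SELECT USER, COUNT(*) AS connections "
            "FROM information_schema.PROCESSLIST "
            "GROUP BY USER ORDER BY connections DESC"
        )
        for user, connections in cur.fetchall():
            print(f"{user}: {connections} connections")

    conn.close()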

06/18/2013 1:41
The account has been confirmed as the source. The customer has been contacted, and we will investigate further together tomorrow.

06/18/2013 13:00
After more investigation, we’ve isolated the query that triggers the bug. The query itself is rather simple, but an index created recently is what triggers the bug. This is a known MySQL issue.
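
As an illustration of the kind of workaround available while waiting for an upstream fix, MySQL’s IGNORE INDEX hint tells the optimizer to plan the query without the offending index. The table and index names below are hypothetical stand-ins for the customer’s schema, again using pymysql:

    import pymysql

    conn = pymysql.connect(host="mysql2", user="admin", password="secret")

    with conn.cursor() as cur:
        # EXPLAIN shows whether the recently created index would be chosen.
        cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
        print(cur.fetchall())

        # IGNORE INDEX forces a plan without that index, sidestepping the
        # bug until the index can be dropped or MySQL patched.
        cur.execute(
            "EXPLAIN SELECT * FROM orders IGNORE INDEX (idx_customer_id) "
            "WHERE customer_id = 42"
        )
        print(cur.fetchall())

    conn.close()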

Note that PostgreSQL cannot suffer from this kind of memory leak, thanks to its multi-process model.