[TIP] Alter huge table

This is a bit off-topic, but I figured someone else might run into the same issue.

A database used for Packr has a table that is more than 5 GB. To add a new feature to the app, it became necessary to add a new column to this table.
On my test server, I tried a simple ALTER TABLE … ADD COLUMN.
It took more than an hour, and of course the table was locked the whole time. That was not an option on the production server, since the table is used almost every minute of every day.
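For reference, the straightforward statement I tried looked something like this (the table name is a placeholder, since the real one isn't shown in this post):

```sql
-- Plain ALTER TABLE: on older MySQL versions this rebuilds the
-- entire table and locks it for the duration of the rebuild.
ALTER TABLE my_table
    ADD COLUMN travelerID INT NULL DEFAULT NULL AFTER userID;
```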

So I tried a different approach:
- create an empty copy of the table and add the new column
- copy all data from old_table to new_table in chunks of 1 million rows (this took two hours)
- swap old_table and new_table
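A minimal SQL sketch of those steps, assuming a hypothetical table my_table with an integer primary key id and a couple of illustrative columns (the real schema isn't shown in this post):

```sql
-- 1. Create an empty copy and add the new column to it
CREATE TABLE my_table_new LIKE my_table;
ALTER TABLE my_table_new
    ADD COLUMN travelerID INT NULL DEFAULT NULL AFTER userID;

-- 2. Copy rows in 1-million-row chunks, walking the primary key;
--    loop in application code, advancing the id range each pass
INSERT INTO my_table_new (id, userID, data)
SELECT id, userID, data
FROM my_table
WHERE id > 0 AND id <= 1000000;

-- 3. Swap the tables in a single atomic rename
RENAME TABLE my_table TO my_table_old,
             my_table_new TO my_table;
```

RENAME TABLE swaps both names in one atomic operation, so readers never see a missing table during the switch.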

The advantage of this approach was that there was no downtime (no long table lock), but it did not account for any INSERT/UPDATE/DELETE hitting old_table while the data was being copied.

[h]TL;DR[/h]

Finally I discovered Percona Toolkit, a set of free command-line tools for managing MySQL databases.

It includes a very handy tool, pt-online-schema-change, that follows my second approach and also handles INSERT/UPDATE/DELETE during the copy by placing triggers on the original table. The basic command looks like this:

pt-online-schema-change --execute --alter "ADD COLUMN travelerID INT NULL DEFAULT NULL AFTER userID" D=DB_NAME,t=TABLE_NAME

Yes, I do it like jeremie describes, plus I save, drop, and recreate the triggers and indexes in code. You also want to make sure autocommit is OFF before you copy rows from the old table into the new one.
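A sketch of what that autocommit handling looks like around each chunked copy (standard MySQL session syntax; the actual INSERT is whatever chunked statement you use):

```sql
-- Disable autocommit so each million-row chunk is one transaction
-- instead of a million individual commits
SET autocommit = 0;

-- ... run one chunked INSERT ... SELECT here ...

COMMIT;  -- commit the chunk explicitly

SET autocommit = 1;  -- restore the default when done
```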

If you are using MySQL 5.6 or later, you can use InnoDB Online DDL to add columns/indexes without locking the table.
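For the Online DDL route, you can state your expectations explicitly with the ALGORITHM and LOCK clauses, so MySQL errors out immediately instead of silently falling back to a locking table rebuild (table name is a placeholder):

```sql
-- MySQL 5.6+ InnoDB Online DDL: request an in-place, non-locking
-- change; the statement fails fast if that isn't supported
ALTER TABLE my_table
    ADD COLUMN travelerID INT NULL DEFAULT NULL AFTER userID,
    ALGORITHM=INPLACE, LOCK=NONE;
```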