| name | Backend Migration Standards |
| description | Create and manage database migrations with reversible changes, proper naming conventions, and zero-downtime deployment strategies. Use this skill when creating database migration files, modifying schema, adding or removing tables/columns, managing indexes, or handling data migrations. Apply when working with migration files (e.g., db/migrate/, migrations/, alembic/, sequelize migrations), schema changes, database versioning, rollback implementations, or when you need to ensure backwards compatibility during deployments. Use for any task involving database structure changes, index creation, constraint modifications, or data transformation scripts. |
Apply these rules when creating or modifying database migrations. Migrations are permanent records of schema evolution and must be treated with extreme care. This Skill gives Claude Code specific guidance on the coding standards that apply to backend migrations.
Reversibility is Mandatory: Every migration MUST have a working rollback method. Test the down migration immediately after writing the up migration. If a change cannot be reversed safely (e.g., dropping a column with data), document why in comments and consider a multi-step approach.
One Logical Change Per Migration: Each migration should do exactly one thing - add a table, add a column, create an index, etc. This makes debugging easier, rollbacks safer, and code review clearer. If you need to make multiple related changes, create multiple migrations.
Never Modify Deployed Migrations: Once a migration runs in any shared environment (staging, production), it becomes immutable. Create a new migration to fix issues. Modifying deployed migrations breaks version control and causes deployment failures.
Naming Convention: Use timestamps and descriptive names that indicate the change:
- 20241118120000_add_email_to_users.py
- 20241118120100_create_orders_table.rb
- 20241118120200_add_index_on_users_email.js

The name should answer "what does this migration do?" without reading the code.
File Organization:

- migrations/schema/ (structural changes)
- migrations/data/ (data transformations and backfills)

Adding Columns: Always specify default values for NOT NULL columns on existing tables to avoid locking issues:
```python
# BAD - fails on existing rows (NOT NULL with no default)
op.add_column('users', sa.Column('status', sa.String(), nullable=False))

# GOOD - server default fills existing rows, no long lock
op.add_column('users', sa.Column('status', sa.String(), nullable=False, server_default='active'))
```
Removing Columns: Use a multi-step approach for zero-downtime:

1. Deploy code that no longer reads or writes the column.
2. In a later release, ship a migration that drops the column.
3. If the column held data, document in the migration why the downgrade cannot restore it.
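The final drop step might look like this in Alembic (table and column names are illustrative); the downgrade restores the structure but not the data, which the comment records:

```python
def upgrade():
    op.drop_column('users', 'legacy_flag')

def downgrade():
    # Irreversible in practice: the column comes back, but its
    # original values are lost unless restored from a backup.
    op.add_column('users', sa.Column('legacy_flag', sa.Boolean(), nullable=True))
```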
Renaming Columns: Treat as add + remove for zero-downtime:

1. Add the new column.
2. Deploy code that writes to both columns and reads from the new one; backfill existing rows.
3. Once the backfill is verified, remove the old column using the multi-step removal above.
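The expand half of that sequence as an Alembic sketch (column names are illustrative); the backfill and the eventual removal each get their own migrations:

```python
def upgrade():
    # Step 1: add the new column alongside the old one; code dual-writes next
    op.add_column('users', sa.Column('full_name', sa.String(255), nullable=True))

def downgrade():
    op.drop_column('users', 'full_name')
```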
Creating Indexes: Use concurrent index creation on large tables to avoid blocking writes:

```python
# PostgreSQL - note that CONCURRENTLY must run outside a transaction
# (in Alembic, use an autocommit block)
op.create_index('idx_users_email', 'users', ['email'], postgresql_using='btree', postgresql_concurrently=True)

# MySQL
op.create_index('idx_users_email', 'users', ['email'], mysql_algorithm='INPLACE', mysql_lock='NONE')
```
Index Naming: Use pattern idx_<table>_<column(s)> for clarity:
- idx_users_email
- idx_orders_user_id_created_at

When to Index: Add indexes for:

- Foreign key columns
- Columns frequently filtered, joined, or sorted on (WHERE, JOIN, ORDER BY)
- Columns backing uniqueness constraints
Separate from Schema: Never mix schema and data changes in one migration. Schema changes are structural and fast; data changes are operational and slow.
Batch Processing: Process large datasets in batches to avoid memory issues and long-running transactions:
```python
def upgrade():
    batch_size = 1000
    connection = op.get_bind()
    while True:
        # MySQL-style batched UPDATE; PostgreSQL has no LIMIT on UPDATE,
        # so restrict via a subquery on the primary key instead.
        result = connection.execute(
            sa.text("UPDATE users SET status = 'active' WHERE status IS NULL LIMIT :batch"),
            {"batch": batch_size},
        )
        if result.rowcount == 0:
            break
```
Idempotency: Data migrations should be safe to run multiple times:

```python
# BAD - fails on second run
op.execute("INSERT INTO settings (key, value) VALUES ('feature_flag', 'true')")

# GOOD - idempotent (PostgreSQL syntax; MySQL uses INSERT IGNORE)
op.execute("INSERT INTO settings (key, value) VALUES ('feature_flag', 'true') ON CONFLICT (key) DO NOTHING")
```
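The idempotency guarantee is easy to check locally; a minimal, runnable sketch using SQLite's equivalent upsert syntax (the settings table here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")

insert = ("INSERT INTO settings (key, value) VALUES ('feature_flag', 'true') "
          "ON CONFLICT (key) DO NOTHING")
conn.execute(insert)  # first run inserts the row
conn.execute(insert)  # second run is a no-op instead of raising an error

print(conn.execute("SELECT COUNT(*) FROM settings").fetchone()[0])  # → 1
```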
Backwards Compatibility: New migrations must work with the currently deployed code version. Deploy order:

1. Run the migration (the schema now supports both old and new code).
2. Deploy the new application code.
3. Ship any cleanup migration only after the old code is fully retired.
Additive Changes First: When changing column types or constraints:

1. Add a new column with the desired type or constraint.
2. Dual-write and backfill until the new column is authoritative.
3. Drop the old column in a later migration.
Foreign Key Constraints: Add in separate migration after data is consistent to avoid validation failures.
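On PostgreSQL this can be done without a long validation lock by adding the constraint as NOT VALID and validating it in a second step; a sketch (constraint and table names are illustrative):

```python
def upgrade():
    # Fast: existing rows are not checked yet, so only a brief lock is taken
    op.execute(
        "ALTER TABLE orders ADD CONSTRAINT fk_orders_user_id "
        "FOREIGN KEY (user_id) REFERENCES users (id) NOT VALID"
    )
    # Slower, but validates existing rows without blocking writes
    op.execute("ALTER TABLE orders VALIDATE CONSTRAINT fk_orders_user_id")

def downgrade():
    op.execute("ALTER TABLE orders DROP CONSTRAINT fk_orders_user_id")
```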
Before Committing:

- Run the migration (rake db:migrate or equivalent).
- Run the rollback (rake db:rollback or equivalent), then migrate again to confirm reversibility.

Test with Production-Like Data: Use an anonymized production data dump to test migrations against realistic data volumes and edge cases.
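For an Alembic project the same round-trip check might look like the following (assuming a standard alembic.ini is already configured):

```shell
alembic upgrade head    # apply the new migration
alembic downgrade -1    # verify the rollback works
alembic upgrade head    # re-apply to confirm a clean round trip
```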
Alembic (Python):

```python
from alembic import op
import sqlalchemy as sa

def upgrade():
    op.add_column('users', sa.Column('email', sa.String(255), nullable=True))
    op.create_index('idx_users_email', 'users', ['email'])

def downgrade():
    op.drop_index('idx_users_email', 'users')
    op.drop_column('users', 'email')
```
Rails (Ruby):

```ruby
def change
  add_column :users, :email, :string
  add_index :users, :email
end
```
Sequelize (JavaScript):

```javascript
module.exports = {
  up: async (queryInterface, Sequelize) => {
    await queryInterface.addColumn('users', 'email', {
      type: Sequelize.STRING,
      allowNull: true
    });
    await queryInterface.addIndex('users', ['email']);
  },

  down: async (queryInterface, Sequelize) => {
    await queryInterface.removeIndex('users', ['email']);
    await queryInterface.removeColumn('users', 'email');
  }
};
```
If you're about to:

- edit a migration that has already run in a shared environment,
- mix schema and data changes in a single migration,
- drop or rename a column in one step, or
- write a migration with no working rollback,

STOP. Review this document and plan a safer approach.