hanami-db
// Expert guidance on Hanami DB: configuration, relations, queries, joins, combines, migrations, transactions, commands, changesets, repositories, and structs
| name | hanami-db |
| description | Expert guidance on Hanami DB: configuration, relations, queries, joins, combines, migrations, transactions, commands, changesets, repositories, and structs |
This skill provides expert guidance on Hanami's persistence layer built on ROM (Ruby Object Mapper). It covers database configuration, relations (schemas, associations, querying, joins, combines, datasets, scopes), migrations, transactions, commands, changesets (validation, mapping, associations), repositories, and structs.
Create a new app with a database
hanami new bookshelf --database=sqlite # default
hanami new bookshelf --database=postgres
hanami new bookshelf --database=mysql
hanami new bookshelf --skip-db # no DB layer
Configure DATABASE_URL
# .env (development)
DATABASE_URL=sqlite://config/db/development.sqlite
DATABASE_URL=postgres://localhost/bookshelf_development
DATABASE_URL=mysql2://user:password@localhost/bookshelf_dev
- The test environment uses the development database name with `_test` appended (e.g., `bookshelf_test`)
- MySQL requires the `mysql2://` prefix
Configure slice-specific databases
# For a slice named `main`
MAIN__DATABASE_URL=sqlite://slices/main/config/db/development.sqlite
- Slice databases follow the `SLICE_NAME__DATABASE_URL` convention
- Slice database files live under `slices/<name>/config/db`
Add advanced configuration via providers
# config/providers/db.rb (app-level)
# or slices/<name>/config/providers/db.rb (slice-level)
Hanami.app.configure_provider :db do
config.gateway :default do |gw|
gw.database_url = "postgres://localhost:5432/mydb"
gw.adapter :sql do |sql|
# ROM plugins
sql.plugin relations: :auto_restrictions
# Sequel extensions
sql.extension :caller_logging, :error_sql, :sql_comments
sql.extension :pg_array, :pg_enum, :pg_json, :pg_range
end
end
end
Skip defaults when needed
gw.adapter :sql do |sql|
sql.skip_defaults # skip everything
sql.skip_defaults :plugins # skip ROM plugins only
sql.skip_defaults :extensions # skip Sequel extensions only
end
Configure multiple gateways
DATABASE_URL=postgres://localhost:5432/bookshelf_development
DATABASE_URL__LEGACY=mysql://localhost:3306/legacy
# In a relation
class LegacyUsers < Hanami::DB::Relation
gateway :legacy
schema infer: true
end
Set connection options
Hanami.app.configure_provider :db do
config.gateway :default do |gw|
gw.connection_options search_path: ['public', 'alt']
end
config.gateway :legacy do |gw|
gw.connection_options max_connections: 4
gw.adapter :sql
end
end
Access DB via container keys
- `db.config` — final ROM configuration object
- `db.rom` — ROM instance for the slice
- `db.gateway` — default DB gateway
- `db.gateways.default` — explicitly-named default gateway
- `db.gateways.<name>` — any additional gateway
Define a basic relation
# app/relations/books.rb
module Bookshelf
module Relations
class Books < Hanami::DB::Relation
schema :books, infer: true
end
end
end
- Place relations in `app/relations/` or `slices/<name>/relations/`
- Relations register under the `relations` namespace (e.g., `relations.books`)
Customize schema with type coercion
class Books < Hanami::DB::Relation
schema :books, infer: true do
primary_key :id
attribute :status, Types::String, read: Types::Coercible::Symbol
end
end
- The `read:` type is applied when reading from the database
- Types come from `ROM::SQL::Types` (built on dry-types)
Define custom types for complex data
class Credentials < Hanami::DB::Relation
JWKS = Types.define(JWT::JWK::Set) do
input { |jwks| Types::PG::JSONB[jwks.export] }
output { |jsonb| JWT::JWK::Set.new(jsonb.to_h) }
end
schema infer: true do
attribute :jwks, JWKS
end
end
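The `input`/`output` pair acts as a two-way codec between a Ruby object and its database representation. A plain-Ruby sketch of the same round-trip idea, using JSON instead of JWKS (no JWT or ROM dependency; all names here are illustrative):

```ruby
require "json"

# A minimal two-way codec: input serializes for the DB column,
# output rebuilds the Ruby value from what was stored.
Codec = Struct.new(:input, :output)

json_codec = Codec.new(
  ->(hash) { JSON.generate(hash) },                      # Ruby -> DB value
  ->(text) { JSON.parse(text, symbolize_names: true) }   # DB value -> Ruby
)

stored = json_codec.input.call(kid: "abc-123")
loaded = json_codec.output.call(stored)
# loaded == { kid: "abc-123" }
```

The real `Types.define` block plays the same role: `input` runs on write, `output` runs on read, and the two should round-trip.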
Define one-to-many associations
class Publishers < Hanami::DB::Relation
schema :publishers, infer: true do
associations do
has_many :books
end
end
end
- `has_many` is an alias for `one_to_many`
Define many-to-one associations
class Books < Hanami::DB::Relation
schema :books, infer: true do
associations do
belongs_to :language
end
end
end
class Languages < Hanami::DB::Relation
schema :languages, infer: true do
associations do
has_many :books
end
end
end
- `belongs_to :language` is a shortcut for `many_to_one :languages, as: :language`
Define many-to-many associations through a join table
class Books < Hanami::DB::Relation
schema :books, infer: true do
associations do
has_many :authors, through: :authorships
end
end
end
class Authorships < Hanami::DB::Relation
schema :authorships do
primary_key :id
attribute :book_id, Types.ForeignKey(:books)
attribute :author_id, Types.ForeignKey(:authors)
attribute :order, Types::Integer
associations do
belongs_to :book
belongs_to :author
end
end
end
class Authors < Hanami::DB::Relation
  schema :authors, infer: true do
    associations do
      has_many :books, through: :authorships
    end
  end
end
Use custom foreign keys
class Credentials < Hanami::DB::Relation
schema :credentials, infer: true do
attribute :user_id, Types.ForeignKey(:users, Types::PG::UUID)
end
end
Alias relations
class Authorships < Hanami::DB::Relation
schema :books_authors, infer: true, as: :authorships
end
class Books < Hanami::DB::Relation
schema :books, infer: true do
associations do
has_many :books_authors, as: :authorships, relation: :authorships
end
end
end
Query with hash-based syntax
books.where(publication_date: Date.new(2024, 11, 5))
books.where(id: 1).one
books.where(title: "Hanami").to_a
books.exclude(pages: ...1000)
books.fetch(1) # shortcut for where(id: 1).one
Query with expression-based syntax (Sequel VirtualRows)
books.where { publication_date.is(Date.new(2024, 11, 5)) }
books.where { date_part('year', publication_date) > 2020 }
books.exclude { pages < 1000 }
Select specific columns
books.select(:id, :title).first
books.select { [id, title] }.first
# => { id: 1, title: "To Kill a Mockingbird" }
books.select(:id, :title).select_append(:pages).first
- Repeated `select` calls replace the existing projection
- Use `select_append` to add columns
Order results
books.order(:title)
books.order { [publication_date.desc, title.asc] }
books.unordered # remove ordering
Use dynamic typed columns
books.select {[
integer::count(:id).as(:total),
integer::count(:id).filter(pages < 300).as(:short),
integer::count(:id).filter(pages > 300).as(:long)
]}.unordered.one
# => { total: 2, short: 1, long: 1 }
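For intuition, the filtered counts above compute the same numbers as this plain-Ruby aggregation over an in-memory dataset (illustrative data, not tied to any schema):

```ruby
# Two books: one short (under 300 pages), one long.
books = [
  { id: 1, pages: 250 },
  { id: 2, pages: 400 }
]

# Mirrors COUNT(...) FILTER (WHERE ...) from the SQL query.
totals = {
  total: books.count,
  short: books.count { |b| b[:pages] < 300 },
  long:  books.count { |b| b[:pages] > 300 }
}
# totals == { total: 2, short: 1, long: 1 }
```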
- Available typed-column helpers: `bool`, `date`, `datetime`, `decimal`, `float`, `hash`, `integer`, `json`, `range`, `serial`, `string`, `time`
Use case expressions
books.select {[
id,
title,
string::case(
quantity.is(0) => "out-of-stock",
(quantity < 100) => "low-stock",
else: "in-stock"
).as(:status)
]}.to_a
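The generated SQL CASE maps quantities to status strings exactly like this plain-Ruby equivalent (a sketch for intuition only, not ROM code):

```ruby
# Mirrors the CASE expression: 0 -> out-of-stock, < 100 -> low-stock, else in-stock.
def stock_status(quantity)
  case
  when quantity.zero? then "out-of-stock"
  when quantity < 100 then "low-stock"
  else "in-stock"
  end
end

stock_status(0)   # => "out-of-stock"
stock_status(42)  # => "low-stock"
stock_status(500) # => "in-stock"
```

As in SQL, branch order matters: the `quantity.zero?` branch must come before `quantity < 100`, or zero would be classified as low-stock.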
Inspect generated SQL
books.dataset.sql
Join relations at the SQL level
class Users < Hanami::DB::Relation
schema :users, infer: true do
associations do
has_many :tasks
has_many :posts
end
end
def with_tasks
join(tasks)
end
def with_posts
left_join(posts)
end
end
users.with_tasks.to_a # INNER JOIN
users.with_posts.to_a # LEFT JOIN
Join with explicit options
class Users < Hanami::DB::Relation
schema :users, infer: true do
associations do
has_many :tasks
end
end
def with_tasks
join(:tasks, { id: :user_id }, table_alias: :user_tasks)
end
end
Use right_join when needed
users.right_join(posts)
Combine relations to load nested data
# Load a user with their projects (loaded in a separate query — no N+1)
users.by_id(2).combine(:projects).one
# => {:id=>2, :username=>"john", :projects=>[{:id=>1, :user_id=>2, :name=>"Project A"}]}
# ROM never loads associated data unless explicitly combined
users.by_id(2).one
# => {:id=>2, :username=>"john"} (no :projects key)
Nested combine
users.by_id(2).combine(projects: :project_tasks).one
# Complex nesting
users.by_id(2).combine(
projects: [{ project_tasks: :reviewed_by }, :reviewed_by]
).one
Adjust nested data with node
users.by_id(2)
.combine(projects: :project_tasks)
.node(projects: :project_tasks) { |tasks_rel|
tasks_rel.where { description == 'Task 1' }
}
.one
- `node` must come after `combine` in the call chain
- Use `node` with `select` to limit columns in nested relations
Set default dataset
class Books < Hanami::DB::Relation
schema :books, infer: true
dataset do
select(:id, :title, :publication_date).order(:publication_date)
end
end
Simulate soft deletes with dataset
class Books < Hanami::DB::Relation
schema :books, infer: true
dataset { where(archived_at: nil) }
end
# Bypass default filter
books.unfiltered.exclude(archived_at: nil)
Define custom scopes
class Books < Hanami::DB::Relation
schema :books, infer: true
def recent
where { publication_date > Date.new(2020, 1, 1) }
end
end
books.recent
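Scopes compose because every query method returns a new relation instead of mutating the receiver. A plain-Ruby sketch of that pattern (a hypothetical `FakeRelation`, no ROM dependency):

```ruby
# Each query method returns a fresh relation, so scopes chain freely.
class FakeRelation
  def initialize(rows)
    @rows = rows
  end

  def where(&predicate)
    self.class.new(@rows.select(&predicate))
  end

  # A custom scope, analogous to Books#recent above.
  def recent
    where { |row| row[:year] > 2020 }
  end

  def to_a
    @rows
  end
end

rel = FakeRelation.new([{ title: "A", year: 2024 }, { title: "B", year: 2019 }])
rel.recent.to_a # => [{ title: "A", year: 2024 }]
```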
- Relations get `by_pk` automatically
- Materialize results with `one`, `to_a`, or `each`
Generate migrations via CLI
hanami generate migration create_users
hanami generate migration create_posts
hanami db migrate # run pending migrations
hanami db migrate --target 20240101000000 # migrate down to a target version
hanami db seed # run seed files
Write a basic migration (auto-inferable down)
ROM::SQL.migration do
change do
create_table :users do
primary_key :id
foreign_key :account_id, :accounts, on_delete: :cascade, null: false
column :given_name, String, null: false
column :family_name, String, null: false
column :email, "citext", null: false
end
end
end
Write migration with explicit up/down
ROM::SQL.migration do
up do
alter_table :users do
add_unique_constraint [:email], name: :users_email_uniq
end
end
down do
alter_table :users do
drop_constraint :users_email_uniq
end
end
end
Run migration outside a transaction
ROM::SQL.migration do
no_transaction
up do
alter_table :users do
add_index :email, concurrently: true
end
end
down do
alter_table :users do
drop_index :email, concurrently: true
end
end
end
Use raw SQL as an escape hatch
ROM::SQL.migration do
up do
execute <<~SQL
  -- NOTE: a WHEN clause comparing OLD/NEW is only valid on UPDATE-only triggers
  CREATE TRIGGER posts_tsvector_update
  BEFORE INSERT OR UPDATE ON public.posts
  FOR EACH ROW
  EXECUTE PROCEDURE tsvector_update_trigger(search_tsvector, 'pg_catalog.english', title, content)
SQL
end
down do
execute "DROP TRIGGER posts_tsvector_update ON public.posts"
end
end
Column type options in migrations
create_table :users do
# Explicit SQL type
column :email, "varchar(255)", null: false
# Ruby type (inferred SQL)
column :email, String, null: false
# Helper method (no inference, SQL type: text)
text :email, null: false
# Ruby type method (inferred SQL)
String :email, null: false
end
Define constraints in migrations
create_table :users do
primary_key :id
column :name, String, null: false
constraint(:name_min_length) { char_length(name) > 2 }
end
Use structure.sql for database schema snapshots
- After migrations run, the schema snapshot is dumped to `config/db/structure.sql`
Use transactions
# Automatic rollback on error
users.transaction do |txn|
users.command(:create).call(name: "Jane")
tasks.command(:create).call(title: "Task 1", user_id: 1)
end
# Manual rollback
users.transaction do |txn|
users.command(:create).call(name: "Jane")
txn.rollback! # everything rolled back
end
- Any relation exposes `relation.transaction { ... }`
- Changesets can also be committed within a transaction block
Define a repository
class UserRepo < Hanami::DB::Repo
def find(email)
users.where(email:).one
end
end
- Repositories inherit from `Hanami::DB::Repo`
- Relations are available by name inside the repository (e.g., `users`)
Repository provides a stable API
# Before: emails as identity
class UserRepo < Hanami::DB::Repo
def find(email)
users.where(email:).one
end
end
# After: usernames as identity — only this file changes
class UserRepo < Hanami::DB::Repo
def find(username)
users.where(username:).one
end
end
Use commands for direct write operations
# Create
users.command(:create).call(name: "Jane", email: "jane@example.com")
# Create multiple at once
create = users.command(:create, result: :many)
create.call([
{ name: "Jane", email: "jane@example.com" },
{ name: "John", email: "john@example.com" }
])
# Update
users.by_pk(1).command(:update).call(name: "Jane Doe")
# Delete
users.by_pk(1).command(:delete).call
- Core commands: `command(:create)`, `command(:update)`, `command(:delete)`
- `result: :many` enables batch inserts
Persist nested data with combined commands
# Persist a user with associated tasks in one call
users.combine(:tasks).command(:create).call(
name: "Jane",
email: "jane@example.com",
tasks: [
{ title: "Task 1" },
{ title: "Task 2" }
]
)
- Combined commands support `:create` only
- For `:update`/`:delete` on aggregates, use changesets
Define custom command types
class CustomCreate < ROM::SQL::Commands::Create
relation :users
register_as :custom_create
def execute(tuple)
tuple[:slug] = tuple[:name].downcase.gsub(/\s+/, '-')
super
end
end
users.command(:custom_create).call(name: "Jane")
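The slug logic in `execute` is ordinary string manipulation, so it can be checked in isolation (extracted here as a bare method, outside any command class):

```ruby
# Same transformation the custom command applies before insertion:
# lowercase the name and collapse runs of whitespace into hyphens.
def slugify(name)
  name.downcase.gsub(/\s+/, '-')
end

slugify("Jane Doe")          # => "jane-doe"
slugify("The  Great Gatsby") # => "the-great-gatsby"
```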
Use changesets for data validation and mapping
# Create a changeset with validation
changeset = users.changeset(:create, name: "Jane", email: "jane@example.com")
# Validate
changeset.validate
# => true/false
# Commit (persist)
changeset.commit
# => #<ROM::Struct[User] id=1 name="Jane" email="jane@example.com">
# With transaction
tasks.transaction do
user = users.changeset(:create, name: "Jane").commit
tasks.changeset(:create, title: "Task 1").associate(user).commit
end
- Call `changeset.validate` before `commit` for explicit validation
- `associate(other_struct)` auto-sets foreign keys based on schema associations
Pre-configured mapping in changesets
class NewUserChangeset < ROM::Changeset::Create
map do
unwrap :address, prefix: true # { address: { city: "NYC" } } → address_city: "NYC"
add_timestamps # sets created_at and updated_at
end
end
users.changeset(NewUserChangeset, name: "Jane", address: { city: "NYC" })
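The `unwrap :address, prefix: true` step flattens a nested hash into prefixed top-level keys. A plain-Ruby sketch of that transformation (a simplified stand-in, not ROM's implementation):

```ruby
# Flattens { address: { city: "NYC" } } into { address_city: "NYC" },
# leaving all other keys untouched.
def unwrap_prefixed(tuple, key)
  nested = tuple.fetch(key, {})
  rest = tuple.reject { |k, _| k == key }
  nested.each_with_object(rest) { |(k, v), acc| acc[:"#{key}_#{k}"] = v }
end

unwrap_prefixed({ name: "Jane", address: { city: "NYC" } }, :address)
# => { name: "Jane", address_city: "NYC" }
```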
On-demand mapping
users
.changeset(:create, name: "Joe", email: "joe@example.com")
.map(:add_timestamps)
.commit
- Built-in mapping steps include `:add_timestamps`, `:touch`, and `unwrap`
Custom mapping block
class NewUserChangeset < ROM::Changeset::Create
map do |tuple|
tuple.merge(created_on: Date.today)
end
end
Define a custom struct
module Main
module Structs
class User < Hanami::DB::Struct
def full_name
"#{given_name} #{family_name}"
end
def mailbox
"#{full_name} <#{email}>"
end
end
end
end
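Since `full_name` and `mailbox` only compose plain attribute readers, their behavior can be exercised with a lightweight stand-in (a plain Ruby `Struct`, no Hanami dependency; `UserLike` is an illustrative name):

```ruby
# Stand-in exposing the same readers a User struct would have.
UserLike = Struct.new(:given_name, :family_name, :email, keyword_init: true) do
  def full_name
    "#{given_name} #{family_name}"
  end

  def mailbox
    "#{full_name} <#{email}>"
  end
end

u = UserLike.new(given_name: "Ada", family_name: "Lovelace", email: "ada@example.com")
u.mailbox # => "Ada Lovelace <ada@example.com>"
```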
Use structs as different projections
- `User` for general display
- `Credential` for authentication
- `Role` for authorization
- `Visitor` for identity display
When assisting with Hanami DB tasks, follow this workflow:
Identify the persistence layer need
Configure the database
Define relations
- Use `schema :table, infer: true` as the baseline
- Use `Types.define` for complex data transformations
Build queries through relations
- Use `join`/`left_join` for SQL-level joins
- Use `combine` for loading nested associated data (avoids N+1)
- Use `node` to adjust nested data within combines
Write data
- Use commands (`relation.command(:create)`) for bulk operations (~1.5–2x faster)
- Use `relation.transaction` for atomicity
- Use `combine(:tasks).command(:create)` for simple nested creates
- Call `changeset.validate` before `commit` for explicit validation
Write migrations
- Use `change` blocks when the reverse can be inferred
- Use `up`/`down` for complex operations
- Use `no_transaction` for operations that don't support transactions (e.g., concurrent index creation)
Create repositories and structs
Review and refine
When detailed information is needed about specific topics, consult the Hanami DB documentation:
Additional ROM references:
- Prefer `infer: true` as the starting point
- Use `change` blocks in migrations when the reverse operation can be inferred
- Use `up`/`down` for non-inferable migrations
- Use `no_transaction` only for operations that genuinely cannot run in transactions
- Configure additional gateways via env vars (`DATABASE_URL__GATEWAY`) when possible
- Let slices inherit app-level DB configuration (`configure_from_parent = true`)
- Use `schema :table, infer: true` and override only what is necessary
- Use `read:` type coercion when SQL and Ruby types differ
- Use `Types.define` for complex data (JSONB, UUID, etc.)
- Use the `ForeignKey` type for non-integer foreign keys
- Use `unfiltered` to bypass default dataset filters when needed
- Use `one` for single records, `to_a` for collections
- Use `join`/`left_join`/`right_join` for SQL-level joins in scopes
- Use `combine` to load nested associated data (prevents N+1 queries)
- Remember that associated data loads only when you `combine` explicitly
- Use `node` after `combine` to filter or limit columns in nested relations
- Use `relation.transaction` for atomicity
- Use `associate` for automatic foreign key setting in transactions
- Use `combine(:tasks).command(:create)` for simple nested creates
- Use `change` blocks for simple migrations with auto-inferable reversals
- Use `up`/`down` for complex or non-inferable operations
- Use `no_transaction` only for DDL operations that don't support transactions
- Update `structure.sql` when adding migrations to maintain schema snapshots