Introduction

Last updated: 2022-12-29

Slinq is a feature-rich PostgreSQL query builder for Scala that takes a different approach from traditional ORMs and database libraries. Instead of abstracting SQL behind collection-like APIs or object-relational mappings, Slinq mirrors SQL structure directly in your Scala code. This design makes queries highly readable, the resulting SQL predictable, and complex database operations practical to write and maintain.

The library embraces PostgreSQL's powerful features rather than limiting itself to a lowest-common-denominator SQL subset. It provides comprehensive support for JSONB fields, array operations, advanced datetime handling, subqueries, and streaming—all while maintaining compile-time type safety and delivering performance comparable to raw JDBC through intelligent query caching.

Slinq is available with native integration for ZIO 1 and ZIO 2, as well as in a version that uses only the Scala ExecutionContext.

Key features:

  • Query caching - SQL strings are built only once and reused for optimal performance
  • Extensive JSONB support - Query, update, and return JSON data with full PostgreSQL JSONB capabilities
  • Streaming - Stream large datasets to and from the database with configurable batch sizes
  • Array fields - Complete support for PostgreSQL array types with rich operators
  • Advanced datetime - Comprehensive date, time, and timestamp operations with intervals and formatting
  • Subqueries - Use subqueries in WHERE conditions and SELECT columns
  • Transactions - Execute multiple operations atomically with full streaming support
  • Flexible results - Return rows as tuples, case classes, vectors, or JSON strings
  • Type safety - Comprehensive compile-time type checking without confusing wildcard types

Slinq projects on GitHub

On GitHub for ZIO 1: https://github.com/karimagnusson/kuzminki-zio

On GitHub for ZIO 2: https://github.com/karimagnusson/kuzminki-zio-2

On GitHub for EC: https://github.com/karimagnusson/kuzminki-ec

Slinq is a query builder for PostgreSQL that mirrors SQL structure directly in Scala code. Instead of learning library-specific abstractions, you write code that follows SQL's familiar structure, making complex queries practical and readable with compile-time type checking.

Design Philosophy

Slinq supports only PostgreSQL (and compatible databases like CockroachDB), allowing it to fully leverage PostgreSQL's capabilities—JSONB, arrays, window functions, and advanced data types. The API prioritizes clarity over abstraction: you already know SQL, and Slinq lets you apply that knowledge directly in type-safe Scala.

Performance

Query caching builds each SQL string once, then reuses it. After initial compilation, queries execute at raw JDBC speed. All statements can be cached except SELECT queries with optional WHERE conditions. Streaming processes large datasets in configurable batches.
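
As a taste of the pattern (covered in detail in the Cache section), a minimal sketch assuming a registered user model:

val byCountry = sql
  .select(user)
  .cols1(_.username)
  .all
  .pickWhere1(_.country.use === Arg)
  .cache  // SQL string is built once, here

byCountry.run("IS")  // executes like a plain JDBC prepared statement
byCountry.run("NO")  // same statement, new argument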

JSONB Support

Slinq provides comprehensive JSONB support for querying, updating, and returning JSON data. Return rows directly as formatted JSON strings, build nested objects from multiple tables, and serve JSON to clients without transformations. See the zio-http demo project for examples.

Type Safety

Column references, operators, and result types are verified at compile time. Results can be returned as tuples, case classes, Seq, or JSON strings. The library avoids wildcard types that produce unclear errors.

Installation

Slinq is available for both Scala 2.13 and Scala 3. Use %% for cross-compilation.

For ZIO 1
libraryDependencies += "io.github.karimagnusson" %% "kuzminki-zio" % "0.9.5"
For ZIO 2
libraryDependencies += "io.github.karimagnusson" %% "kuzminki-zio-2" % "0.9.5"
For EC
libraryDependencies += "io.github.karimagnusson" %% "kuzminki-ec" % "0.9.5"

Connecting to the database

Config

Configure your database connection using DbConfig. The configuration uses a fluent API with sensible defaults for common settings.

val dbConfig = DbConfig
  .forDb("{DB-NAME}")
  .withMaxPoolSize(10) // default = 10
  .withMinPoolSize(4)  // default = 4
  .withHost("{HOST}")  // default = localhost
  .withPort("{PORT}")  // default = 5432
  .withUser("{USER}")
  .withPassword("{PASSWORD}")
  .withOptions(Map(...))

Layer

Slinq integrates seamlessly with ZIO's layer system for dependency injection. The Slinq.layer method creates a managed resource layer that handles connection pooling using ZIO's ZPool. Connection lifecycles, including acquisition and release, are automatically managed with proper resource cleanup on application shutdown.

Provide the layer to your application using .provide(), making the Slinq service available to all database operations in your ZIO workflow.

For ZIO 2

object MyApp extends ZIOAppDefault {
  val dbLayer = Slinq.layer(DbConfig.forDb("company"))

  val job = for {
    results <- sql.select(client).cols3(_.all).run
  } yield results

  def run = job.provide(dbLayer)
}
              

For ZIO 1

object MyApp extends zio.App {
  val dbLayer = Slinq.layer(DbConfig.forDb("company"))
  val job = // ...

  override def run(args: List[String]): ZIO[ZEnv, Nothing, ExitCode] = {
    job.provideCustomLayer(dbLayer).exitCode
  }
}
            	

For EC (ExecutionContext)

Slinq-ec uses Scala's standard ExecutionContext and Future, making it compatible with Akka, Pekko, Play Framework, and plain Scala applications. Use Slinq.create to initialize a connection pool with HikariCP, providing both a standard ExecutionContext and a blocking dispatcher for database I/O operations.

import kuzminki.api._
import org.apache.pekko.actor.ActorSystem

object MyApp extends App {
  implicit val system = ActorSystem()
  implicit val ec = system.dispatcher
  
  // Create database connection with blocking dispatcher for DB operations
  implicit val db = Slinq.create(
    DbConfig.forDb("company"),
    system.dispatchers.lookup("pekko.actor.default-blocking-io-dispatcher")
  )

  val job = for {
    results <- sql.select(client).cols3(_.all).run
  } yield results

  job.onComplete { _ =>
    db.close()  // Close connection pool
    system.terminate()  // Shutdown actor system
  }
}
            	

Model

Creating a model

Models provide type-safe mappings from your database tables to Scala classes. Define a model by extending Model with the table name as a parameter, then declare each column using column[Type]("column_name"). The column types are verified at compile time, ensuring type safety throughout your queries.

Models are typically registered once at application startup using Model.register[YourModel], which stores a singleton instance for reuse. See Data types for all available column types.

import kuzminki.api._
import java.sql.Timestamp

class User extends Model("user_profile") {
  val id = column[Int]("id")
  val username = column[String]("username")
  val email = column[String]("email")
  val name = column[String]("name")
  val age = column[Int]("age")
  val gender = column[String]("gender")
  val country = column[String]("country")
  val city = column[String]("city")
  val discount = column[Int]("discount")
  val isActive = column[Boolean]("is_active")
  val created = column[Timestamp]("created")
}

Model.register[User]
            	

Custom methods

Define custom methods in your model to create reusable column selections. This is particularly useful for common query patterns—instead of repeatedly specifying the same columns, define helper methods that return frequently used column combinations.

These methods are especially valuable when you consistently select the same set of columns across multiple queries, or when you want to create semantic groupings like "summary fields" or "detail fields".

class Customer extends Model("customer") {
  val id = column[Int]("id")
  val userId = column[Int]("user_id")
  val name = column[String]("name")
  val email = column[String]("email")
  val spending = column[Int]("spending")
  val created = column[Timestamp]("created")

  // Common selections
  def all = (id, userId, name, email, spending, created)
  def summary = (id, name, spending)
  def contact = (name, email)
}

// Use in queries
sql.select(customer).cols6(_.all).run
// List[(Int, Int, String, String, Int, Timestamp)]

sql.select(customer).cols3(_.summary).where(_.spending > 1000).run
// List[(Int, String, Int)]

sql.select(customer).cols2(_.contact).where(_.id === 42).run
// List[(String, String)]
            	

Create a model instance

Models are accessed in queries using Model.get[User], which retrieves a cached singleton instance. If the model hasn't been created yet, Model.get automatically instantiates and caches it.

Using Model.register[User] at application startup is optional but recommended in concurrent applications. While Model.get handles lazy creation, explicitly registering models upfront prevents potential race conditions when multiple threads call Model.get simultaneously for the same model type.

// At application startup
Model.register[User]
Model.register[Customer]

// Later, in your application code
val user = Model.get[User]
val customer = Model.get[Customer]

// Use in queries
sql.select(user).cols3(_.all).where(_.age > 18).run
sql.select(customer).cols2(t => (t.name, t.email)).run
            	

Select

Select query

SELECT queries in Slinq follow SQL's familiar structure: select columns, specify conditions, add ordering, and apply limits. Column selections are specified as tuples, and the query returns a list of tuples matching those types.

Use cols2, cols3, etc. based on the number of columns (up to 22). For more columns, use colsSeq to return vectors. Ordering defaults to DESC; use .asc for ascending order.

sql
  .select(user)
  .cols2(t => (
    t.id,
    t.username
  ))
  .where(t => Seq(
    t.gender === "f",
    t.age > 25
  ))
  .orderBy(_.age.desc)
  .limit(10)
  .run
// List[(Int, String)]
            	
SELECT
  "id",
  "username"
FROM "user_profile"
WHERE "gender" = 'f'
AND "age" > 25
ORDER BY "age" DESC
LIMIT 10
            	

Row as case class

The runType method automatically converts query results to case classes using compile-time derivation. The columns in your tuple must match the case class fields in both order and type; the compiler verifies this for you.

This uses Shapeless Generic in Scala 2 and Mirror in Scala 3 for automatic derivation, eliminating the need for manual mapping code.

case class SimpleUser(id: Int, username: String, email: String)

sql
  .select(user)
  .cols3(t => (
    t.id,
    t.username,
    t.email
  ))
  .where(_.age < 25)
  .runType[SimpleUser]
// List[SimpleUser]
              

Row as Seq

Use colsSeq when you need to work with a dynamic number of columns or when you exceed Scala's 22-field tuple limit. Each row is returned as a Vector[Any], requiring runtime type checking to access values.

This approach trades compile-time type safety for flexibility—useful when column selections are determined at runtime or when working with very wide tables.

sql
  .select(user)
  .colsSeq(t => Seq(
    t.id,
    t.username,
    t.email
  ))
  .where(_.age < 25)
  .run
// List[Vector[Any]]
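
Values in each Vector[Any] row must then be recovered at runtime, for example by pattern matching; an illustrative sketch:

rows.map { row =>
  (row(0), row(1), row(2)) match {
    case (id: Int, username: String, email: String) =>
      s"$id $username <$email>"
    case other =>
      throw new Exception(s"unexpected row shape: $other")
  }
}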
              

Row as JSON

Slinq can return query results directly as JSON, which is particularly useful when building REST APIs or serving data to frontend applications. This eliminates the need to manually convert database rows to JSON format.

There are two main approaches: colsJson returns rows as JSON strings using PostgreSQL's native JSON functions (highly efficient), while colsNamed with runAs converts rows to your preferred JSON library's types.

Basic JSON output

Use colsJson to get rows as JSON strings. Column names become JSON keys automatically.

sql
  .select(user)
  .colsJson(t => Seq(
    t.id,
    t.username
  ))
  .where(_.id === 5)
  .runHead
  // {"id": 5, "username": "Joe"}
              
Custom keys and nested objects

Rename keys using "key" -> column or column.as("key"), and create nested JSON objects with Fn.json.

.colsJson(t => Seq(
  "id" -> t.id,
  t.username.as("username"),
  Fn.json(Seq(
    t.country,
    t.city
  )).as("location")
))
// {"id": 5, "username": "Joe", "location": {"country": "IT", "city": "Rome"}}
              
Nested arrays from subqueries

One of Slinq's most powerful features is the ability to construct complex nested JSON structures in a single query. You can embed related data from other tables as nested JSON arrays or objects, eliminating the need for multiple queries and manual data aggregation.

The pattern uses a subquery with colsJson that correlates to the outer query (using <=> for column correlation), then calls asColumn to convert it into a selectable column. The subquery returns all matching rows as a JSON array. Add .first after asColumn if you only need a single nested object instead of an array.

This approach builds the entire JSON structure in PostgreSQL and returns it as a single result, which is significantly more efficient than fetching data separately and assembling it in application code.

// Returns a user with all their travels as a nested array
sql
  .select(user)
  .colsJson(t => Seq(
    t.id,
    t.username,
    sql
      .select(travels)
      .colsJson(s => Seq(
        s.country,
        s.year
      ))
      .where(_.userId <=> t.id)  // Correlate with outer query
      .orderBy(_.year.desc)
      .asColumn  // Convert subquery to a column
      .as("travels")
  ))
  .where(_.id === 5)
  .runHead

// Result:
// {"id": 5, "username": "Joe", "travels": [
//   {"country": "France", "year": 2023},
//   {"country": "Spain", "year": 2022},
//   {"country": "Italy", "year": 2021}
// ]}
              
Using your own JSON library

While colsJson uses PostgreSQL's native JSON functions and returns JSON strings, you may prefer to work with your application's JSON library types (Play JSON's JsValue, Circe's Json, etc.). This gives you more control over serialization, allows you to manipulate the JSON before sending it, and integrates better with existing application code.

The pattern uses colsNamed to select columns with their names as keys, then runAs with an implicit converter that transforms the Seq[(String, Any)] into your JSON type. Column names are used as JSON keys by default, or you can specify custom keys with "key" -> column.

// Define an implicit converter (see implementation below)
implicit val loadJson: Seq[(String, Any)] => JsValue = PlayJsonLoader.load

sql
  .select(user)
  .colsNamed(t => Seq(
    t.id,
    t.username,
    t.email.as("email_address")  // Custom key
  ))
  .where(_.age < 25)
  .runAs[JsValue]
// List[JsValue] - Each row is a JsValue you can manipulate

// You can now work with the result using your JSON library's API
// result.map(json => (json \ "username").as[String])
              
Example JSON converter

Here's a complete implementation of a converter for Play JSON. The converter handles all common database types including primitives, dates, UUIDs, Options, Sequences, and even JSONB fields. You can adapt this pattern for other libraries like Circe or uPickle by replacing the Play JSON specific types (JsValue, JsString, etc.) with your library's equivalents.

import java.util.UUID
import java.sql.Time
import java.sql.Date
import java.sql.Timestamp
import play.api.libs.json._
import kuzminki.api.Jsonb

object PlayJsonLoader {

  val toJsValue: Any => JsValue = {
    case v: String      => JsString(v)
    case v: Boolean     => JsBoolean(v)
    case v: Short       => JsNumber(v)
    case v: Int         => JsNumber(v)
    case v: Long        => JsNumber(v)
    case v: Float       => JsNumber(v)
    case v: Double      => JsNumber(v)
    case v: BigDecimal  => JsNumber(v)
    case v: Time        => Json.toJson(v)
    case v: Date        => Json.toJson(v)
    case v: Timestamp   => Json.toJson(v)
    case v: UUID        => JsString(v.toString)
    case v: Jsonb       => Json.parse(v.value)
    case v: Option[_]   => v.map(toJsValue).getOrElse(JsNull)
    case v: Seq[_]      => JsArray(v.map(toJsValue))
    case v: JsValue     => v
    case _              => throw new Exception("Cannot convert to JsValue")
  }

  def load(data: Seq[Tuple2[String, Any]]): JsValue = {
    JsObject(data.map(p => (p._1, toJsValue(p._2))))
  }
}
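
The same pattern adapts to other libraries. A minimal sketch of an equivalent converter for Circe (assuming circe-core and circe-parser are on the classpath; only common types are handled, so extend it as needed):

import io.circe.Json
import kuzminki.api.Jsonb

object CirceJsonLoader {

  val toJson: Any => Json = {
    case v: String     => Json.fromString(v)
    case v: Boolean    => Json.fromBoolean(v)
    case v: Int        => Json.fromInt(v)
    case v: Long       => Json.fromLong(v)
    case v: Double     => Json.fromDoubleOrNull(v)
    case v: BigDecimal => Json.fromBigDecimal(v)
    case v: Jsonb      => io.circe.parser.parse(v.value).getOrElse(Json.Null)
    case v: Option[_]  => v.map(toJson).getOrElse(Json.Null)
    case v: Seq[_]     => Json.fromValues(v.map(toJson))
    case _             => Json.Null
  }

  def load(data: Seq[(String, Any)]): Json =
    Json.fromFields(data.map { case (k, v) => (k, toJson(v)) })
}

// implicit val loadJson: Seq[(String, Any)] => Json = CirceJsonLoader.load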
              

Where

The where clause filters query results using conditions on columns. For a single condition, pass it directly. For multiple conditions, pass them as a Seq—all conditions are combined with AND. Slinq provides a rich set of type-safe operators that mirror SQL's comparison and logical operators.

See Operators for the complete list of available operators including equality, comparison, pattern matching, and more.

// Single condition
.where(_.id > 100)

// Multiple conditions (combined with AND)
.where(t => Seq(
  t.gender === "f",
  t.age > 25,
  t.isActive === true
))
// Equivalent to: WHERE gender = 'f' AND age > 25 AND is_active = true
            	

GROUP BY / HAVING

Use groupBy to aggregate rows by one or more columns, typically combined with aggregation functions like Agg.count, Agg.avg, Agg.sum, Agg.min, or Agg.max. The having clause filters groups after aggregation, similar to how where filters rows before aggregation.

Important distinction: where filters individual rows before grouping, while having filters grouped results. You can use both in the same query.

sql
  .select(user)
  .cols2(t => (
    t.gender,
    Agg.avg(t.age)
  ))
  .where(_.age > 0)        // Filter rows before grouping
  .groupBy(_.gender)        // Group by gender
  .having(_.gender !== "")  // Filter groups after aggregation
  .orderBy(t => Agg.avg(t.age).desc)
  .run
// List[(String, BigDecimal)]
              
                SELECT "gender", avg("age")::numeric
FROM "user_profile"
WHERE "age" > 0
GROUP BY "gender"
HAVING "gender" != ''
ORDER BY avg("age")::numeric DESC
              

AND / OR

By default, conditions in a Seq are combined with AND. To create OR conditions, use the Or() function from kuzminki.fn. You can nest And() and Or() to build complex logical expressions with proper parentheses in the generated SQL.

This approach gives you full control over operator precedence and logical grouping, ensuring your conditions are evaluated exactly as intended.

import kuzminki.fn._

// Mix AND with OR
.where(t => Seq(
  t.age > 25,
  Or(
    t.country === "RU",
    t.country === "FR"
  )
))
// WHERE "age" > 25 AND ("country" = 'RU' OR "country" = 'FR')

// Nested AND/OR for complex logic
.where(t => Or(
  And(
    t.country === "RU",
    t.city === "Moscow"
  ),
  And(
    t.country === "FR",
    t.city === "Paris"
  )
))
// WHERE ("country" = 'RU' AND "city" = 'Moscow') OR ("country" = 'FR' AND "city" = 'Paris')
            	

Optional conditions

When building queries with conditional filters (such as from HTTP query parameters or user preferences), use whereOpt to handle Option values. Conditions with None are automatically excluded from the WHERE clause, while conditions with Some values are included.

This is particularly useful for search forms or REST APIs where filters are optional. Instead of building multiple query variations or using if/else logic, you can construct a single query where filters conditionally apply based on whether values are present.

// Single optional condition
.whereOpt(_.id > Some(100))
// WHERE "id" > 100

// Multiple optional conditions - None values are excluded
.whereOpt(t => Seq(
  t.gender === None,        // Excluded
  t.age > Some(25)          // Included
))
// WHERE "age" > 25

// Optional OR conditions
.whereOpt(t => Seq(
  t.age > Some(25),
  Or.opt(
    t.country === Some("RU"),
    t.country === Some("FR")
  )
))
// WHERE "age" > 25 AND ("country" = 'RU' OR "country" = 'FR')

// Practical example with HTTP query params
case class SearchParams(
  minAge: Option[Int],
  country: Option[String],
  isActive: Option[Boolean]
)

def searchUsers(params: SearchParams) = {
  sql.select(user)
    .cols3(_.all)
    .whereOpt(t => Seq(
      t.age > params.minAge,
      t.country === params.country,
      t.isActive === params.isActive
    ))
    .run
}
            	

Distinct

Use distinct to eliminate duplicate rows from your result set. This is particularly useful when joining tables or selecting combinations of values where duplicates might occur.

PostgreSQL's DISTINCT ON is a powerful extension that returns the first row for each unique value (or combination of values) in the specified column(s). This is especially useful for "latest record per group" queries where you want one row per category, user, or identifier.

sql
  .select(user)
  .cols2(t => (
    t.username,
    t.age
  ))
  .distinct
  .all
  .orderBy(_.age.asc)
  .run
              
SELECT DISTINCT "username", "age"
FROM "user_profile"
ORDER BY "age" ASC
              
sql
  .select(user)
  .cols2(t => (
    t.username,
    t.age
  ))
  .distinctOn(_.age)
  .all
  .orderBy(_.age.asc)
  .run
              
// Returns one row per unique age value
// Combined with ORDER BY, you get the first username for each age
// List[(String, Int)]
              
SELECT DISTINCT ON ("age") "username", "age"
FROM "user_profile"
ORDER BY "age" ASC
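
A sketch of the "latest record per group" pattern mentioned above, assuming orderBy accepts multiple sort columns as a tuple:

// One row per country: the most recently created user in each
sql
  .select(user)
  .cols3(t => (
    t.country,
    t.username,
    t.created
  ))
  .distinctOn(_.country)
  .all
  .orderBy(t => (t.country.asc, t.created.desc))
  .run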
              

Nested query

Nested queries (also called subqueries) allow you to use the result of one query as part of another query's condition. This is particularly useful for filtering based on data from related tables without using joins.

Slinq supports subqueries in WHERE clauses using operators like in, notIn, and comparison operators. The inner query must select a single column that matches the type of the outer query's comparison field.

class Newsletter extends Model("newsletter") {
  val email = column[String]("email")
  val isSubscribed = column[Boolean]("is_subscribed")
}

val newsletter = Model.get[Newsletter]

sql
  .select(user)
  .cols1(_.username)
  .where(_.email.in(
    sql
      .select(newsletter)
      .cols1(_.email)  // Subquery must return single column
      .where(_.isSubscribed === true)
  ))
  .run
// Returns usernames for all users whose email appears in the newsletter table
// with is_subscribed = true
// List[String]
            	
								SELECT "username"
FROM "user_profile"
WHERE "email" = ANY(
  SELECT "email"
  FROM "newsletter"
  WHERE "is_subscribed" = true
)
            	

Pages

The asPages method provides pagination support for SELECT queries. Instead of loading all results at once, you can efficiently retrieve data page by page, which is essential for handling large datasets and building paginated user interfaces.

The pages object maintains internal state to track the current position and provides methods for sequential navigation (next, prev) as well as random access (page(n)).

val pages = sql
  .select(user)
  .cols3(t => (
    t.id,
    t.firstName,
    t.lastName
  ))
  .orderBy(_.id.asc)  // ORDER BY is required for consistent pagination
  .asPages(10)         // 10 rows per page

val job = for {
  next  <- pages.next     // Get next page (uses LIMIT/OFFSET)
  page3 <- pages.page(3)  // Jump to specific page (page 3)
} yield (next, page3)
// Both return: List[(Int, String, String)]
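
Sequential navigation with next and prev follows the same pattern; a small illustrative sketch:

val job = for {
  page1 <- pages.next   // first page
  page2 <- pages.next   // second page
  back  <- pages.prev   // back to the first page
} yield (page1, page2, back)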
              

Cache

Calling cache on a query converts it into a prepared statement that can be reused with different parameters. The SQL string is compiled only once, giving you the same performance as raw JDBC prepared statements while maintaining Slinq's type safety.

This is particularly valuable for queries executed repeatedly with different parameters (e.g., search filters, user lookups). The cached statement is stored and reused, avoiding the overhead of rebuilding the SQL string on each execution.

val stm = sql
  .select(user)
  .cols1(_.username)
  .all
  .orderBy(_.age.asc)
  .pickWhere2(t => (
    t.country.use === Arg,  // First parameter
    t.age.use > Arg         // Second parameter
  ))
  .cache  // SQL compiled once, reusable with different params

stm.run(("CN", 25))  // List[String]
stm.run(("US", 30))  // Reuse same statement with different values
            	
								SELECT "username"
FROM "user_profile"
WHERE "country" = 'CN'
AND "age" > 25
ORDER BY "age" ASC
            	

You can also cache queries with no parameters. This is useful for frequently-run queries where you want to avoid the SQL compilation overhead.

val stm = sql
  .select(user)
  .cols2(t => (
    t.username,
    t.created
  ))
  .all
  .orderBy(_.created.desc)
  .cache

stm.run  // List[(String, Timestamp)]
// Reuse multiple times without recompiling SQL
              

Cached queries can include subqueries with their own parameters. The parameters are flattened in order: first the outer query parameters, then the subquery parameters.

val stm = sql
  .select(user)
  .cols1(_.username)
  .all
  .pickWhere2(t => (
    t.age.use > Arg,
    t.email.use.in(
      sql
        .select(newsletter)
        .cols1(_.email)
        .all
        .pickWhere1(_.isSubscribed.use === Arg)
    )
  ))
  .cache

stm.run((25, true))  // First param: age, second param: isSubscribed
// List[String]
              
                SELECT "username" FROM "user_profile"
WHERE "age" > 25
AND "email" = ANY(
  SELECT "a"."email"
  FROM "client" "a"
  WHERE "a"."is_active" > true
)
              

You can combine cached parameters with static WHERE conditions, GROUP BY, and HAVING clauses. The parameterized conditions (defined with pickWhere) work alongside your static filters.

val stm = sql
  .select(user)
  .cols2(t => (
    t.gender,
    Agg.avg(t.age)
  ))
  .all
  .groupBy(_.gender)
  .having(_.gender !== "")
  .orderBy(t => Agg.avg(t.age).desc)
  .pickWhere1(_.age.use > Arg)  // Parameterized condition
  .cache

stm.run(18)  // List[(String, BigDecimal)]
// WHERE clause filters data before GROUP BY aggregation
              
                SELECT "gender", avg("age")::numeric
FROM "user_profile" WHERE "age" > 18
GROUP BY "gender"
HAVING "gender" != ''
ORDER BY avg("age")::numeric ASC
              

Cached with WHERE

Combine static where conditions with parameterized pickWhere conditions. The static conditions are always applied, while the parameterized conditions change based on the runtime arguments you provide.

This pattern is ideal when you have some filters that never change (like age > 25) combined with dynamic filters (like country selection).

val stm = sql
  .select(user)
  .cols1(_.username)
  .where(_.age > 25)              // Static condition (always applied)
  .orderBy(_.age.asc)
  .pickWhere1(_.country.use === Arg)  // Parameterized condition
  .cache

stm.run("CN")  // List[String]
stm.run("US")  // Both queries include age > 25 filter
            	
								SELECT "username"
FROM "user_profile"
WHERE "age" > 25
AND "country" = 'CN'
ORDER BY "age" ASC
            	

Results

Slinq offers several methods to execute queries and retrieve results, each suited to different use cases. All methods are type-safe—the return types are determined by the columns you select.

The basic run method returns all matching rows as tuples in a list. When you need only the first result, use runHead, which will throw an exception if no rows are found, or runHeadOpt to safely get an Option.

To convert rows to case classes instead of tuples, use runType[MyType] which automatically derives the conversion using Shapeless (Scala 2) or Mirror (Scala 3). For custom transformation logic, provide an implicit converter function and use runAs[MyType].

When executing update or delete operations, run performs the operation without returning a value, while runNum returns the count of affected rows as an Int.

// query
val stm = sql.select(client).cols3(_.all).where(_.age > 25).limit(5)
stm.run: List[T]
stm.runHead: T
stm.runHeadOpt: Option[T]

// row as type (checked by the compiler)
stm.runType[MyType]: List[MyType]
stm.runHeadType[MyType]: MyType
stm.runHeadOptType[MyType]: Option[MyType]

// modify row
implicit val toMyType: SomeRow => MyType = row => ??? // SomeRow = the selected row tuple type
stm.runAs[MyType]: List[MyType]
stm.runHeadAs[MyType]: MyType
stm.runHeadOptAs[MyType]: Option[MyType]

// operation
val stm = sql.update(client).set(_.age ==> 24).where(_.id === 4)
stm.run: Unit
stm.runNum: Int
              

Join

Select join

Slinq supports joins between tables by passing multiple model instances to the select method. The first model is accessible as a, the second as b, and so on for additional models.

Use joinOn to specify the join condition, connecting columns from the two tables. You can then select columns from either model, filter on both, and order by any selected column.

sql
  .select(user, customer)
  .cols3(t => (
    t.a.id,
    t.a.username,
    t.b.spending
  ))
  .joinOn(_.id, _.userId)  // First arg is 'a' (user), second is 'b' (customer)
  .where(t => Seq(
    t.a.age > 25,
    t.b.spending > 1000
  ))
  .orderBy(_.b.spending.desc)
  .limit(10)
  .run
// List[(Int, String, Int)]
            	
SELECT
  "a"."id",
  "a"."username",
  "b"."spending"
FROM "user_profile" "a"
INNER JOIN "customer" "b"
ON "a"."id" = "b"."user_id"
WHERE "a"."age" > 25
AND "b"."spending" > 1000
ORDER BY "b"."spending" DESC
LIMIT 10
            	

Join types

Slinq supports all standard SQL join types. Choose the appropriate join based on whether you want to include unmatched rows from the left table, right table, both, or neither.

Important: When using outer joins (LEFT, RIGHT, FULL OUTER), columns from the nullable side may be NULL. Refer to the Null values section to handle these cases properly.

.joinOn(_.id, _.userId) // INNER JOIN

.innerJoinOn(_.id, _.userId) // INNER JOIN

.leftJoinOn(_.id, _.userId) // LEFT JOIN

.leftOuterJoinOn(_.id, _.userId) // LEFT OUTER JOIN
 
.rightJoinOn(_.id, _.userId) // RIGHT JOIN

.rightOuterJoinOn(_.id, _.userId) // RIGHT OUTER JOIN

.fullOuterJoinOn(_.id, _.userId) // FULL OUTER JOIN

.crossJoin // CROSS JOIN
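
For example, with a LEFT JOIN the columns on the right side may be NULL, so mark them optional with .asOpt (see Null values); a sketch:

sql
  .select(user, customer)
  .cols2(t => (
    t.a.username,
    t.b.spending.asOpt  // Option[Int]: NULL when no customer row matches
  ))
  .leftJoinOn(_.id, _.userId)
  .run
// List[(String, Option[Int])]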
            	

Insert

Basic insert

Insert new rows by specifying the columns to populate and their values. The cols method defines which columns to insert into, and values provides the data as a tuple.

sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .values(("bob", "bob@mail.com"))
  .run  // Unit
            

For tables with more than 22 columns (Scala's tuple limit), use the data method instead of cols. This accepts a sequence of column-value pairs using the ==> operator.

sql
  .insert(user)
  .data(t => Seq(
    t.username ==> "bob",
    t.email ==> "bob@mail.com"
    // ... add as many columns as needed
  ))
  .run  // Unit
            

Cache insert statements to reuse them with different values, avoiding the overhead of rebuilding the SQL for each insert.

val stm = sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .cache

stm.run(("bob", "bob@mail.com"))  // Unit
stm.run(("alice", "alice@mail.com"))  // Reuse same prepared statement
            	
INSERT INTO "user_profile" ("username", "email") VALUES ('bob', 'bob@mail.com')
            	

Insert case class

Instead of tuples, you can insert case class instances directly. The case class fields must match the column types in the same order as specified in cols.

When cached, use runType to insert a single case class or runListType to insert multiple instances in a batch.

case class NewUser(name: String, email: String)

// One-time insert
sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .valuesType(NewUser("Bob", "bob@mail.com"))
  .run

// Cached for reuse
val stm = sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .cache

stm.runType(NewUser("Bob", "bob@mail.com"))
stm.runListType(List(
  NewUser("Bob", "bob@mail.com"),
  NewUser("Alice", "alice@mail.com")
))  // Batch insert
              
                INSERT INTO "product" ("name", "price") VALUES ('Banana', 12.5)
              

Insert returning

Use RETURNING to get back the inserted row's values immediately after insertion. This is particularly useful for retrieving auto-generated IDs or default values set by the database.

Use runHead for single inserts or run for batch inserts with RETURNING.

sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .values(("bob", "bob@mail.com"))
  .returning3(t => (
    t.id,        // Auto-generated ID
    t.username,
    t.email
  ))
  .runHead  // (Int, String, String)

// Cached version
val stm = sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .returning3(t => (
    t.id,
    t.username,
    t.email
  ))
  .cache

stm.runHead(("bob", "bob@mail.com"))  // (Int, String, String)
            	
INSERT INTO "user_profile"
("username", "email")
VALUES ('bob', 'bob@mail.com')
RETURNING
  "id",
  "username",
  "email"
            	

Insert on conflict do nothing

Use onConflictDoNothing to silently ignore insert attempts that would violate a UNIQUE constraint. Instead of raising an error, PostgreSQL will skip the conflicting row.

This is useful when you want to insert data but don't care if it already exists (e.g., ensuring a record exists without checking first).

sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .values(("bob", "bob@mail.com"))
  .onConflictDoNothing  // If username or email already exists, skip insert
  .run  // Unit

// Cached version
val stm = sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .onConflictDoNothing
  .cache

stm.run(("bob", "bob@mail.com"))
            	
INSERT INTO "user_profile"
("username", "email")
VALUES ('bob', 'bob@mail.com')
ON CONFLICT DO NOTHING
            	

Upsert

An upsert (INSERT ... ON CONFLICT ... DO UPDATE) allows you to insert a row or update it if it already exists. Specify the conflict column (typically a UNIQUE or PRIMARY KEY column) and which columns to update on conflict.

Note: The updated column must be one of the columns you're inserting.

sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .values(("bob", "bob@hotmail.com"))
  .onConflictOnColumn(_.username)  // If username exists...
  .doUpdate(_.email)                // ...update the email (can update multiple fields)
  .run  // Unit

// Cached version
val stm = sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .onConflictOnColumn(_.username)
  .doUpdate(_.email)
  .cache

stm.run(("bob", "bob@hotmail.com"))
            	
INSERT INTO "user_profile"
("username", "email")
VALUES ('bob', 'bob@hotmail.com')
ON CONFLICT ("username")
DO UPDATE SET "email" = 'bob@hotmail.com'
            	

Insert where not exists

Use whereNotExists when you need to prevent duplicates on a column that doesn't have a UNIQUE constraint. This approach is also safer for concurrent inserts (e.g., from streams) where onConflictDoNothing might not behave as expected due to race conditions.

The insert will only execute if no row matching the specified condition exists.

sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .values(("bob", "bob@mail.com"))
  .whereNotExists(_.username)
  .run

// cache
val stm = sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .whereNotExists(_.username)
  .cache

stm.run(("bob", "bob@mail.com"))
            	
INSERT INTO "user_profile"
("username", "email")
SELECT 'bob', 'bob@mail.com'
WHERE NOT EXISTS (
  SELECT 1
  FROM "user_profile"
  WHERE "username" = 'bob'
)
            	

Insert from select

Instead of providing explicit values, you can insert rows directly from a SELECT query. This is useful for copying or transforming data from one table to another, or for populating a table based on query results.

The columns selected must match the types and order of the insert columns.

sql
  .insert(newsletter)
  .cols1(_.email)
  .fromSelect(
    sql
      .select(user)
      .cols1(_.email)  // Must match newsletter.email type
      .where(_.isActive === true)
  )
  .run  // Unit
// Inserts all active user emails into newsletter table

// Cached version with parameterized SELECT
val stm = sql
  .insert(newsletter)
  .cols1(_.email)
  .pickSelect(  // Use pickSelect for cached INSERT...SELECT
    sql
      .select(user)
      .cols1(_.email)
      .pickWhere1(_.isActive.use === Arg)
  )
  .cache

stm.run(true)  // Insert emails where isActive = true
stm.run(false) // Insert emails where isActive = false
            	
INSERT INTO "newsletter" ("email")
SELECT "email"
FROM "user_profile"
WHERE "is_active" = true
            	

Insert many

Use runList to insert multiple rows in a batch operation. This is more efficient than running individual insert statements in a loop.

All inserts are executed within a transaction, so either all succeed or all fail together.

val stm = sql
  .insert(user)
  .cols2(t => (
    t.username,
    t.email
  ))
  .cache

stm.runList(Seq(
  ("bob", "bob@mail.com"),
  ("jane", "jane@mail.com"),
  ("jack", "jack@mail.com")
))  // Unit
// All three rows inserted in a single transaction
              

Update

Update statement

Update rows by specifying which columns to modify with set and which rows to update with where. You can update multiple columns by passing a sequence to set.

See Update operators for advanced update operations like incrementing, appending to arrays, etc.

sql
  .update(user)
  .set(_.country ==> "JP")  // Update single column
  .where(_.id === 103)
  .run  // Unit

// Cached version with parameters
val stm = sql
  .update(user)
  .pickSet1(_.country.use ==> Arg)   // First param: new country value
  .pickWhere1(_.id.use === Arg)      // Second param: user ID
  .cache

stm.run("JP", 103)  // Update user 103's country to "JP"
stm.run("US", 104)  // Update user 104's country to "US"
            	
								UPDATE "user" SET "country" = 'JP' WHERE id = 103
            	

Update returning

Use RETURNING to retrieve the updated row's values immediately after the update. This is useful for getting the current state of a row after modification, especially when other columns might have changed due to triggers or default values.

Use runHeadOpt when updating a single row (returns Option in case no rows matched), or run for potentially multiple rows.

sql
  .update(user)
  .set(t => Seq(
    t.country ==> "IS",
    t.city ==> "Reykjavik"
  ))
  .where(_.id === 31)
  .returning3(t => (
    t.id,
    t.country,
    t.city
  ))
  .runHeadOpt  // Option[(Int, String, String)]

// Cached version
val stm = sql
  .update(user)
  .pickSet2(t => (
    t.country.use ==> Arg,  // First two params: country and city
    t.city.use ==> Arg
  ))
  .pickWhere1(_.id.use === Arg)  // Third param: user ID
  .returning3(t => (
    t.id,
    t.country,
    t.city
  ))
  .cache

stm.runHeadOpt((("IS", "Reykjavik"), 31))  // Option[(Int, String, String)]
            	
								UPDATE "user_profile"
SET country = 'IS',
       city = 'Reykjavik'
WHERE "id" = 31
RETURNING
  "id",
  "country",
  "city"
            	

Delete

Delete statement

Delete rows from a table using the where clause to specify which rows to remove. To delete all rows from a table, you must explicitly use .all instead of .where.

sql
  .delete(user)
  .where(_.id === 103)  // Delete specific user
  .run  // Unit

// Cached version
val stm = sql
  .delete(user)
  .pickWhere1(_.id.use === Arg)
  .cache

stm.run(103)  // Delete user 103
stm.run(104)  // Delete user 104
            	
DELETE FROM "user_profile" WHERE "id" = 103
            	

Delete returning

Use RETURNING to retrieve column values from the deleted row(s). This is useful for logging, auditing, or confirming what was deleted.

Use runHeadOpt when deleting a single row (returns None if no rows matched), or run for potentially multiple rows.

sql
  .delete(user)
  .where(_.id === 103)
  .returning1(_.email)  // Get the deleted user's email
  .runHeadOpt  // Option[String]

// Cached version
val stm = sql
  .delete(user)
  .pickWhere1(_.id.use === Arg)
  .returning1(_.email)
  .cache

stm.runHeadOpt(103)  // Option[String] - email of deleted user
            	
DELETE FROM "user_profile"
WHERE "id" = 103
RETURNING "email"
            	

Null values

Null values in results

When a column can contain NULL values (especially common with outer joins or nullable columns), you need to handle them explicitly in your queries.

Use .asOpt to convert a column from type T to Option[T]. This is required for columns that might be NULL, such as columns from the nullable side of an outer join.

sql
  .select(user)
  .cols2(t => (
    t.username,      // String (never NULL)
    t.city.asOpt     // Option[String] (might be NULL)
  ))
  .run
// List[(String, Option[String])]
              

Alternatively, use PostgreSQL's COALESCE function via .default() to provide a fallback value when the column is NULL. This keeps the type as T instead of Option[T].

sql
  .select(user)
  .cols2(t => (
    t.username,
    t.city.default("No city")  // Returns "No city" if city is NULL
  ))
  .run
// List[(String, String)] - no Option needed
              

Insert and update null values

To insert or update a column with a NULL value, use .asOpt on the column and pass None as the value.

This is useful when you want to explicitly set a column to NULL or when working with optional data from forms or APIs.

// Insert with NULL value
sql
  .insert(client)
  .cols2(t => (
    t.username,     // Required field
    t.age.asOpt     // Optional field (can be NULL)
  ))
  .values(("Joe", None))  // age will be NULL
  .run
    
// Update to NULL
sql
  .update(client)
  .set(_.age.asOpt ==> None)  // Set age to NULL
  .where(_.id === 4)
  .run

// Update with Some value
sql
  .update(client)
  .set(_.age.asOpt ==> Some(30))  // Set age to 30
  .where(_.id === 4)
  .run
              

Aggregation

Count

Use count to get the number of rows matching a condition. This is a shorthand for SELECT count(*) and always returns a single Long value.

sql
  .count(user)
  .where(_.country === "IT")
  .runHead  // Long
            	
SELECT count(*) FROM "user_profile" WHERE "country" = 'IT'
            	

Avg Max Min

Use aggregation functions from Agg to compute statistics over columns. These functions work on entire result sets or within groups when combined with GROUP BY.

Available aggregation functions: avg, max, min, sum, count, and more.

import kuzminki.api._
import kuzminki.fn._

sql
  .select(user)
  .cols3(t => (
    Agg.avg(t.age),  // Average age
    Agg.max(t.age),  // Maximum age
    Agg.min(t.age)   // Minimum age
  ))
  .where(_.country === "US")
  .runHead  // (BigDecimal, Int, Int)
            	
SELECT
  avg("age"),
  max("age"),
  min("age")
FROM "user_profile"
WHERE "country" = 'US'
            	

Streaming

Stream from the database

Instead of loading all results into memory at once, use stream to process query results incrementally as a ZIO Stream. This is essential for handling large datasets efficiently.

Streaming fetches data in batches from the database. The default batch size is 100 rows, but you can customize it with .stream(1000) for larger batches.

sql
  .select(client)
  .cols3(_.all)
  .all
  .orderBy(_.id.asc)
  .stream        // ZStream[Slinq, SQLException, (Int, String, Int)], default batch size 100
  .map(makeLine) // Transform each row
  .run(fileSink(Paths.get("clients.txt")))

// Larger batch size for better performance with large datasets
sql
  .select(client)
  .cols3(_.all)
  .all
  .orderBy(_.id.asc)
  .stream(1000)  // Fetch 1000 rows per batch
              

You can stream results directly into case class instances using streamType:

case class UserInfo(name: String, age: Int)

sql
  .select(oldUser)
  .cols2(t => (
    t.name,
    t.age
  ))
  .all
  .orderBy(_.id.asc)
  .streamType[UserInfo]  // ZStream[Slinq, SQLException, UserInfo]
// .streamType[UserInfo](1000) for custom batch size
              

Stream into the database

Use ZIO Streams to insert data into the database from external sources (files, APIs, other databases). Cached insert statements can be converted into sinks that consume streams.

You can insert rows one at a time with asSink, or batch them for better performance using asChunkSink with transactions. The same pattern works for UPDATE and DELETE operations.

val insertStm = sql
  .insert(client)
  .cols2(t => (t.username, t.age))
  .cache

// Insert one row at a time
readFileIntoStream("clients.txt")
  .map(makeTupleFromLine)
  .run(insertStm.asSink)

// Insert in batches of 100 using transactions (more efficient)
readFileIntoStream("clients.txt")
  .map(makeTupleFromLine)
  .transduce(insertStm.collect(100))
  .run(insertStm.asChunkSink)  // Each chunk in its own transaction
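
As noted above, the same pattern works for UPDATE and DELETE. A sketch with a cached update used as a sink (parseAgeAndId is a hypothetical parser returning (Int, Int) tuples):

val updateStm = sql
  .update(client)
  .pickSet1(_.age.use ==> Arg)
  .pickWhere1(_.id.use === Arg)
  .cache

readFileIntoStream("ages.txt")
  .map(parseAgeAndId)     // (newAge, id)
  .run(updateStm.asSink)  // one UPDATE per stream element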
              

Stream case classes directly into the database using type-safe sinks:

case class UserInfo(name: String, age: Int)

// Stream from one table to another
sql
  .select(oldUser)
  .cols2(t => (
    t.name,
    t.age
  ))
  .all
  .orderBy(_.id.asc)
  .streamType[UserInfo]  // Stream as case classes
  .run(
    sql
      .insert(user)
      .cols2(t => (
        t.name,
        t.age
      ))
      .cache
      .asTypeSink[UserInfo]  // Insert case classes one at a time
      // Use .asTypeChunkSink[UserInfo] for batched inserts
  )
              

Transaction

Execute multiple operations

Transactions ensure that a group of operations either all succeed or all fail together, maintaining data consistency. Use sql.transaction to execute multiple INSERT, UPDATE, and DELETE operations atomically.

If any operation in the transaction fails, all changes are rolled back automatically.

sql.transaction(
  sql.insert(user).cols2(t => (t.username, t.age)).values(("Joe", 25)),
  sql.update(user).set(_.age ==> 31).where(_.id === 45),
  sql.delete(user).where(_.id === 83)
).run  // Unit
// All three operations succeed together or all fail together
              

For dynamic lists of operations, use transactionList with rendered cached statements. This is useful when you need to execute a variable number of similar operations.

val stm = sql
  .delete(user)
  .pickWhere1(_.id.use === Arg)
  .cache

// Generate 10 delete statements with different IDs
val statements = (1 to 10).map(id => stm.render(id))

sql.transactionList(statements).run  // Unit
// Delete users 1-10 in a single transaction
              

Debug

Print the query

Use debug methods to inspect the SQL that Slinq generates. These methods print to stdout and return the query unchanged, so they can be chained into your query pipeline.

Available debug methods:

  • printSql - Print the SQL statement with placeholders (?)
  • printSqlWithArgs - Print the SQL with arguments interpolated
  • printSqlAndArgs - Print the SQL and arguments separately

// Print SQL with arguments interpolated
sql
  .select(client)
  .cols3(_.all)
  .where(_.age > 25)
  .limit(5)
  .printSqlWithArgs  // Prints to stdout, returns query
  .run
// SELECT "id", "username", "age" FROM "client" WHERE "age" > 25 LIMIT 5

// Print SQL with placeholders
sql
  .update(client)
  .set(_.age ==> 24)
  .where(_.id === 4)
  .printSql  // Shows prepared statement format
  .run
// UPDATE "client" SET "age" = ? WHERE "id" = ?
              

Fields

Jsonb field

Slinq provides comprehensive support for PostgreSQL's JSONB type, allowing you to store, query, and update JSON data efficiently. Use standard PostgreSQL JSONB operators like -> for object field access, ->> for text extraction, #> and #>> for path-based access, and operators like +=, -=, and #-= for updates.

Reference: PostgreSQL JSON Functions

{
  "name": "Angela Barton",
  "is_active": true,
  "company": "Magnafone",
  "address": "178 Howard Place, Gulf, Washington, 702",
  "latitude": 19.793713,
  "longitude": 86.513373,
  "tags": ["enim", "aliquip", "qui"],
  "residents": {
    "name": "Rick",
    "age": 31
  }
}
              
class Customer extends Model("customer") {
  val id    = column[Int]("id")
  val data  = column[Jsonb]("data")
  val other = column[Jsonb]("other")  // second JSONB column, used in the concat example below
}

// Insert JSON data
sql
  .insert(customer)
  .cols1(_.data)
  .values(Jsonb(jsonString))
  .run

// SELECT - Extract text values

sql
  .select(customer)
  .cols1(_.data ->> "company")  // Extract top-level field as text
  .where(_.id === 3)
  .runHead  // String: "Magnafone"

sql
  .select(customer)
  .cols1(_.data #>> Seq("residents", "name"))  // Extract nested field as text
  .where(_.id === 3)
  .runHead  // String: "Rick"

sql
  .select(customer)
  .cols1(_.data -> "tags" ->> 1)  // Access array element by index
  .where(_.id === 3)
  .runHead  // String: "aliquip"

// SELECT - Extract JSON objects

sql
  .select(customer)
  .cols1(_.data -> "residents")  // Extract nested object as Jsonb
  .where(_.id === 3)
  .runHead  // Jsonb({"name" : "Rick", "age" : 31})

sql
  .select(customer)
  .cols1(_.data - "residents")  // Remove key, return remaining JSON
  .where(_.id === 3)
  .runHead  // Jsonb (without "residents" field)

sql
  .select(customer)
  .cols1(t => t.data || t.other)  // Concatenate two JSONB columns
  .where(_.id === 3)
  .runHead  // Jsonb (merged)

// UPDATE - Modify JSON data

sql
  .update(customer)
  .set(_.data += Jsonb("""{"address": "Somewhere 12"}"""))  // Add/update field
  .where(_.id === 3)
  .run

sql
  .update(customer)
  .set(_.data -= "address")  // Remove top-level key
  .where(_.id === 3)
  .run

sql
  .update(customer)
  .set(_.data #-= Seq("residents", "age"))  // Remove nested field by path
  .where(_.id === 3)
  .run
              

Array field

PostgreSQL array fields are supported for all standard types, including JSONB. Use column[Seq[T]] to define an array column in your model.

Common array operations include appending elements (+=), checking for element existence (?), and accessing by index.

class Nums extends Model("demo") {
  val id = column[Int]("id")
  val numbers = column[Seq[Int]]("numbers")  // Array of integers
}

val nums = Model.get[Nums]

for {
  id <- sql
    .insert(nums)
    .cols1(_.numbers)
    .values(List(1, 2, 3))  // Insert array
    .returning1(_.id)
    .runHead  // Int

  _ <- sql
    .update(nums)
    .set(_.numbers += 4)  // Append element to array
    .where(_.id === id)
    .run

  numbers <- sql
    .select(nums)
    .cols1(_.numbers)
    .where(_.numbers ? 2)  // Check if array contains 2
    .run
} yield numbers  // List[Vector[Int]]
              

Unique and sorted arrays

Slinq provides specialized methods for maintaining unique and/or sorted arrays. These methods automatically deduplicate elements and can maintain sort order, which is particularly useful for tag systems, ranked lists, or ordered collections.

// Add element and ensure uniqueness (removes duplicates)
.set(_.numbers.add(4))

// Add multiple elements and ensure all are unique
// If array is [1, 2, 3], adding [8, 7, 7, 2] results in [1, 2, 3, 8, 7]
.set(_.numbers.add(List(8, 7, 7, 2)))

// Add element, ensure uniqueness, and maintain ascending sort order
.set(_.numbers.addAsc(4))

// Add element, ensure uniqueness, and maintain descending sort order
.set(_.numbers.addDesc(4))

// For JSONB arrays: specify a key for uniqueness check and sorting
// Useful for arrays of objects where you want unique items by a specific field
.set(_.jsonArray.addDesc(jsonObj, "index"))

// Cached versions with parameters
.pickSet1(_.numbers.addAsc(Arg))  // Single element parameter

// Use ArgSeq for list/sequence parameters
.pickSet1(_.numbers.addAsc(ArgSeq))  // List parameter
              

Timestamp, Date, Time

Slinq provides rich support for PostgreSQL's temporal types: Timestamp, Date, and Time. You can perform date arithmetic, extract components, and format output.

Use Fn.interval() to add or subtract time periods from temporal columns in both queries and updates.

class Demo extends Model("demo") {
  val id = column[Int]("id")
  val eventTime = column[Time]("event_time")
  val eventDate = column[Date]("event_date")
  val updatedAt = column[Timestamp]("updated_at")
}

val demo = Model.get[Demo]

// Date arithmetic in updates
sql
  .update(demo)
  .set(t => Seq(
    t.eventTime += Fn.interval(hours = 3, minutes = 10),
    t.eventDate += Fn.interval(years = 1, days = 2),
    t.updatedAt += Fn.interval(months = 4, hours = 5)
  ))
  .where(_.id === 25)
  .run

// Format temporal values for display
sql
  .select(demo)
  .cols3(t => (
    t.eventTime.format("HH24:MI"),           // "14:30"
    t.eventDate.format("DD Mon YYYY"),        // "15 Jan 2023"
    t.updatedAt.format("DD Mon YYYY HH24:MI") // "15 Jan 2023 14:30"
  ))
  .where(_.id === 25)
  .runHead  // (String, String, String)

// Extract components and perform date arithmetic in queries
sql
  .select(demo)
  .cols4(t => (
    t.id,
    t.eventDate.month,                          // Extract month (1-12)
    t.updatedAt.week,                           // Extract week number
    t.updatedAt + Fn.interval(days = 10)        // Add 10 days
  ))
  .where(t => Seq(
    t.eventDate.year === 2022,    // Filter by year
    t.eventDate.quarter === 2     // Filter by quarter (Q2)
  ))
  .run  // List[(Int, Int, Int, Timestamp)]
              

Functions

Postgres functions

Slinq provides access to PostgreSQL's built-in functions through the Fn object. Use these functions to transform data in your queries - string manipulation, type conversion, mathematical operations, and more.

Import kuzminki.fn._ to access all available functions. See the Available functions section for a complete list.

import kuzminki.api._
import kuzminki.fn._

class Profile extends Model("profile") {
  val firstName = column[String]("first_name")
  val lastName = column[String]("last_name")
  val bigNum = column[BigDecimal]("big_num")
}

val profile = Model.get[Profile]

sql
  .select(profile)
  .cols3(t => (
    Fn.concatWs(" ", t.firstName, t.lastName),  // Concatenate with separator
    Fn.initcap(t.lastName),                      // Capitalize first letter
    Cast.asString(t.bigNum)                      // Convert BigDecimal to String
  ))
  .all
  .run  // List[(String, String, String)]
              

Create functions

When the built-in functions don't cover your needs, you can create custom function wrappers. This allows you to use any PostgreSQL function or even your own custom database functions.

Extend the appropriate base class based on the return type: StringFn, IntFn, BooleanFn, etc.

import kuzminki.fn.StringFn

// Custom function with hardcoded values
case class FullName(
  title: String,
  first: TypeCol[String],
  second: TypeCol[String]
) extends StringFn {
  val name = "full_name"
  val template = s"concat_ws(' ', '$title', %s, %s)"  // %s replaced by columns
  val cols = Vector(first, second)
}

sql
  .select(user)
  .cols2(t => (
    t.id,
    FullName("Mr", t.firstName, t.lastName)  // "Mr John Doe"
  ))
  .where(_.id === 10)
  .runHead  // (Int, String)

// Custom function with parameterized values (for caching)
// Use when you need the driver to fill in arguments via prepared statement

case class FullNameParam(
  title: String,
  first: TypeCol[String],
  second: TypeCol[String]
) extends StringParamsFn {
  val name = "full_name"
  val template = s"concat_ws(' ', ?, %s, %s)"  // ? becomes a parameter
  val cols = Vector(first, second)
  val params = Vector(title)  // Values passed to driver as parameters
}
              

Available functions

Below is a list of commonly used PostgreSQL functions available in Slinq. For complete PostgreSQL function documentation, see: PostgreSQL String Functions

Function | Column type | Description
Fn.coalesce(col, default) | String | Return default value if column is NULL
Fn.concat(col, ...) | String | Concatenate multiple strings
Fn.concatWs(glue, col, ...) | String | Concatenate with separator (e.g., " ", ", ")
Fn.substr(col, start) | String | Extract substring from position to end
Fn.substr(col, start, len) | String | Extract substring with specified length
Fn.trim(col) | String | Remove leading and trailing whitespace
Fn.upper(col) | String | Convert to uppercase
Fn.lower(col) | String | Convert to lowercase
Fn.initcap(col) | String | Capitalize first letter of each word
Fn.round(col) | BigDecimal | Round to nearest integer, returns BigDecimal
Fn.roundStr(col) | BigDecimal | Round to nearest integer, returns String
Cast.asString(col) | Any | Cast to String (text)
Cast.asShort(col) | Any | Cast to Short (smallint)
Cast.asInt(col) | Any | Cast to Int (integer)
Cast.asLong(col) | Any | Cast to Long (bigint)
Cast.asFloat(col) | Any | Cast to Float (real)
Cast.asDouble(col) | Any | Cast to Double (double precision)
Cast.asBigDecimal(col) | Any | Cast to BigDecimal (numeric)
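
As a further sketch, a few of the functions above applied to the Profile model defined earlier in this section:

sql
  .select(profile)
  .cols3(t => (
    Fn.coalesce(t.firstName, "unknown"),  // default when NULL
    Fn.substr(t.lastName, 1, 3),          // first three characters
    Fn.upper(t.lastName)                  // convert to uppercase
  ))
  .all
  .run  // List[(String, String, String)]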

Raw SQL

Select

When Slinq's query builder doesn't support a specific PostgreSQL feature you need, you can use raw SQL with the rsql string interpolator. This creates a prepared statement with safe parameter binding.

Use db.query() to execute raw SELECT statements and get results as a list of rows.

// Define raw SQL query with interpolated parameters
def rawStm(country: String, minAge: Int) =
  rsql"""SELECT * FROM "user_profile" WHERE country = $country AND age > $minAge"""
  // Parameters are safely bound (prevents SQL injection)

val job = for {
  users <- db.query(rawStm("TR", 25))  // List[Vector[Any]]
} yield users

Operations

Use db.exec() to execute raw INSERT, UPDATE, or DELETE statements. Like queries, parameters are safely bound through prepared statements.

val username = "bob"
val email = "bob@mail.com"

// Execute INSERT with interpolated parameters
db.exec(
  rsql"""INSERT INTO "user_profile" ("username", "email") VALUES ($username, $email)"""
)  // Unit

// Works with UPDATE and DELETE too
db.exec(
  rsql"""UPDATE "user_profile" SET "email" = $email WHERE "username" = $username"""
)

db.exec(
  rsql"""DELETE FROM "user_profile" WHERE "username" = $username"""
)

Data types and operators

Data types

Slinq provides automatic mapping between PostgreSQL and Scala types. When you define a column with column[T]("name"), the type T determines both the Scala type and the corresponding PostgreSQL type.

Postgres | Scala
varchar / text | String
bool | Boolean
int2 | Short
int4 | Int
int8 | Long
float4 | Float
float8 | Double
numeric | BigDecimal
time | java.sql.Time
date | java.sql.Date
timestamp | java.sql.Timestamp
uuid | java.util.UUID
jsonb | kuzminki.api.Jsonb(value: String)
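
For illustration, a hypothetical model touching several of these mappings (the table and column names are invented for this sketch):

import java.sql.Timestamp
import java.util.UUID
import kuzminki.api._

class Event extends Model("event") {
  val id = column[UUID]("id")                    // uuid
  val title = column[String]("title")            // varchar / text
  val seats = column[Int]("seats")               // int4
  val price = column[BigDecimal]("price")        // numeric
  val startsAt = column[Timestamp]("starts_at")  // timestamp
  val meta = column[Jsonb]("meta")               // jsonb
}

val event = Model.get[Event]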

Operators

Slinq provides type-safe operators for WHERE clause conditions. All operators use Scala-friendly syntax while generating the correct PostgreSQL operators.

Operator | Column type | Argument | Description
=== | Any | T | Equals (SQL: =)
!== | Any | T | Not equals (SQL: !=)
in | Any | Seq[T], Subquery[T] | Value in list or subquery
notIn | Any | Seq[T], Subquery[T] | Value not in list or subquery
> | Numbers and time | T | Greater than
< | Numbers and time | T | Less than
>= | Numbers and time | T | Greater than or equal
<= | Numbers and time | T | Less than or equal
~ | String | String | Matches POSIX regex
~* | String | String | Matches POSIX regex, case-insensitive
!~ | String | String | Does not match POSIX regex
!~* | String | String | Does not match POSIX regex, case-insensitive
like | String | String | SQL LIKE pattern matching
begins | String | String | Starts with prefix (appends % wildcard)
ends | String | String | Ends with suffix (prepends % wildcard)
similarTo | String | String | SQL SIMILAR TO (regex-like pattern)
? | Array | T | Array contains element
!? | Array | T | Array does not contain element
@> | Array | Seq[T] | Array contains all elements
!@> | Array | Seq[T] | Array does not contain all elements
<@ | Array | Seq[T] | Array is contained by (subset of)
!<@ | Array | Seq[T] | Array is not contained by
&& | Array | Seq[T] | Arrays have any elements in common
!&& | Array | Seq[T] | Arrays have no elements in common
? | Jsonb | String | JSONB contains key
!? | Jsonb | String | JSONB does not contain key
?| | Jsonb | Seq[String] | JSONB contains any of the keys
?& | Jsonb | Seq[String] | JSONB contains all of the keys
@> | Jsonb | Jsonb | JSONB contains all key-value pairs
!@> | Jsonb | Jsonb | JSONB does not contain all key-value pairs
<@ | Jsonb | Jsonb | JSONB is contained by (subset of)
!<@ | Jsonb | Jsonb | JSONB is not contained by
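
A sketch combining several of these operators in a WHERE clause; the client model and its id (Int), name (String), age (Int), tags (array), and doc (Jsonb) columns are hypothetical:

sql
  .select(client)
  .cols2(t => (t.id, t.name))
  .where(t => Seq(
    t.age >= 18,                    // numeric comparison
    t.name ~* "^jo",                // case-insensitive regex match
    t.tags ? "vip",                 // array contains element
    t.doc ?& Seq("email", "phone")  // JSONB contains all of the keys
  ))
  .run  // List[(Int, String)]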

Update operators

Use these operators in UPDATE statements with the .set() method to modify column values. Operators support arithmetic, array manipulation, JSONB updates, and more.

Operator | Column type | Argument | Description
==> | Any | T | Set column to value (SQL: =)
+= | Numbers | T | Increment by value
-= | Numbers | T | Decrement by value
setNow | Timestamp, Date, Time | - | Set to current time/date (SQL: NOW())
+= | Timestamp, Date, Time | Fn.interval | Add time interval
-= | Timestamp, Date, Time | Fn.interval | Subtract time interval
+= | Array | T, Seq[T] | Append element(s) to end of array
:= | Array | T, Seq[T] | Prepend element(s) to start of array
-= | Array | T | Remove all occurrences of element
add | Array | T, Seq[T] | Add element(s), ensuring uniqueness
addAsc | Array | T, Seq[T] | Add unique element(s) and sort ascending
addDesc | Array | T, Seq[T] | Add unique element(s) and sort descending
+= | Jsonb | Jsonb | Merge JSONB objects (concatenate)
-= | Jsonb | String, Int | Remove key (String) or array element (Int)
#-= | Jsonb | Seq[String] | Remove nested field by path (e.g., Seq("user", "age"))
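
A sketch of an UPDATE using a few of these operators on the same hypothetical client model (with an added updatedAt timestamp column):

sql
  .update(client)
  .set(t => Seq(
    t.name ==> "Joe",    // plain assignment
    t.age += 1,          // increment
    t.tags add "vip",    // append, keeping elements unique
    t.updatedAt.setNow   // SET updated_at = NOW()
  ))
  .where(_.id === 10)
  .run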

Modifiers

Modifiers are methods you can call on columns in SELECT queries to transform values. They're similar to functions but called directly on column objects (e.g., t.username.upper).

Modifier | Column type | Argument | Description
default | Any | T | Return default value if NULL (COALESCE)
concat | String | String, ... | Concatenate strings
concatWs | String | (String, Any, ...) | Concatenate with separator (first arg)
substr | String | Int, (Int, Int) | Extract substring (start) or (start, length)
replace | String | (String, String) | Replace all occurrences (from, to)
trim | String | - | Remove leading/trailing whitespace
upper | String | - | Convert to uppercase
lower | String | - | Convert to lowercase
initcap | String | - | Capitalize first letter of each word
asShort | String | - | Cast to Short (smallint)
asInt | String | - | Cast to Int (integer)
asLong | String | - | Cast to Long (bigint)
asFloat | String | - | Cast to Float (real)
asDouble | String | - | Cast to Double (double precision)
asBigDecimal | String | - | Cast to BigDecimal (numeric)
round | Numbers | Int | Round to decimal places, returns BigDecimal
roundStr | Numbers | Int | Round to decimal places, returns String
age | Timestamp | - | Time elapsed since timestamp (PgInterval)
epochSecs | Timestamp | - | Unix timestamp in seconds (Long)
epochMillis | Timestamp | - | Unix timestamp in milliseconds (Long)
century | Timestamp, Date | - | Extract century (Int)
decade | Timestamp, Date | - | Extract decade (Int)
year | Timestamp, Date | - | Extract year (Int)
quarter | Timestamp, Date | - | Extract quarter 1-4 (Int)
month | Timestamp, Date | - | Extract month 1-12 (Int)
week | Timestamp, Date | - | Extract week of year (Int)
day | Timestamp, Date | - | Extract day of month 1-31 (Int)
dow | Timestamp, Date | - | Day of week 0-6 (Sunday=0) (Int)
doy | Timestamp, Date | - | Day of year 1-366 (Int)
isoDow | Timestamp, Date | - | ISO day of week 1-7 (Monday=1) (Int)
isoYear | Timestamp, Date | - | ISO 8601 week-numbering year (Int)
hour | Timestamp, Time | - | Extract hour 0-23 (Int)
minute | Timestamp, Time | - | Extract minute 0-59 (Int)
second | Timestamp, Time | - | Extract second 0-59 (Int)
microseconds | Timestamp, Time | - | Extract microseconds (Int)
milliseconds | Timestamp, Time | - | Extract milliseconds (Int)
asDate | Timestamp | - | Convert to Date (date part only)
asTime | Timestamp | - | Convert to Time (time part only)
asTimestamp | Timestamp | - | Cast to Timestamp (explicit cast)
asString | Timestamp, Date, Time | - | Cast to String (text)
format | Timestamp, Date, Time | String | Format using PostgreSQL pattern (e.g., "YYYY-MM-DD")
unnest | Array | - | Expand array to rows (one element per row)
length | Array | - | Number of elements in array (Int)
trim | Array | Int | Get first N elements (truncate array)
get | Array | Int | Get element at index (1-based)
pos | Array | T | Find index of element (1-based, 0 if not found)
first | Array | - | Get first element
last | Array | - | Get last element
join | Array | String | Join array elements into string with separator
|| | Array | TypeCol[T] | Concatenate two arrays
-> | Jsonb | String, Int | Get JSONB object field or array element (returns Jsonb)
->> | Jsonb | String, Int | Get JSONB object field or array element (returns String)
#> | Jsonb | Seq[String] | Get JSONB object at path (returns Jsonb)
#>> | Jsonb | Seq[String] | Get JSONB object at path (returns String)
|| | Jsonb | TypeCol[Jsonb] | Concatenate/merge two JSONB objects
- | Jsonb | String, Int, Seq[String] | Remove key, array element, or keys
#- | Jsonb | Seq[String] | Remove object field at path
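
Finally, a sketch of modifiers called directly on columns, again with hypothetical column names (name as String, createdAt as Timestamp, tags as an array):

sql
  .select(client)
  .cols4(t => (
    t.name.upper,                      // UPPER("name")
    t.name.substr(1, 3),               // substring from position 1, length 3
    t.createdAt.format("YYYY-MM-DD"),  // format timestamp as text
    t.tags.length                      // number of array elements
  ))
  .all
  .run  // List[(String, String, String, Int)]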