Category Archives: Slick

Slick(2) – study note

1. Database Connection:

There are three ways to do Database Connection:

  1. using a JDBC URL
  2. using a DataSource
  3. using a JNDI name
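
As a sketch, the three styles look roughly like this in the Slick 2.x lifted embedding (the URL, driver class, `myDataSource`, and JNDI name below are placeholders, not values from this post):

```scala
import slick.driver.MySQLDriver.simple._

// 1. JDBC URL (placeholder URL and driver class)
val db1 = Database.forURL("jdbc:mysql://localhost/test", driver = "com.mysql.jdbc.Driver")

// 2. DataSource, e.g. one managed by the application server or Play
// (myDataSource: javax.sql.DataSource is assumed to exist)
val db2 = Database.forDataSource(myDataSource)

// 3. JNDI name (placeholder name)
val db3 = Database.forName("java:comp/env/jdbc/mydb")
```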


  1. We can use the Session object’s withTransaction method to create a transaction. The block passed to it is executed in a single transaction.
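
For example, assuming a `users` TableQuery mapped to a simple `User(id, name)` case class and a `db: Database` (both hypothetical here), a transaction sketch looks like:

```scala
// Both inserts commit together; if the second fails, the first is rolled back.
db.withTransaction { implicit session =>
  users += User(1, "alice")
  users += User(2, "bob")
}
```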

2. Queries:

  • sorting and filtering
  • joining:
    • explicit join:
      • innerJoin
      • leftJoin
      • rightJoin
      • outerJoin
      • zip
    • implicit join: an implicit cross-join is created with a flatMap operation on a Query (i.e. by introducing more than one generator in a for comprehension)
  • unions:
    • Two queries can be concatenated with the ++ (or unionAll) and union operators if they have compatible types. Unlike union which filters out duplicate values, ++ simply concatenates the results of the individual queries, which is usually more efficient.
  • aggregation:
    • min
    • max
    • sum
    • avg
    • length
    • exists
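
A sketch of these operators, assuming `coffees` and `suppliers` tables like the ones in the Slick documentation examples (all names here are assumptions, not from this post):

```scala
// Queries are composed lazily; nothing runs until invoked with a Session.
val cheapQ = coffees.filter(_.price < 9.0).sortBy(            // filtering + sorting
val joinQ  = coffees innerJoin suppliers on (_.supID ===       // explicit join
val crossQ = for {                                                   // implicit join (flatMap)
  c <- coffees
  s <- suppliers if c.supID ===
} yield (,
val unionQ = coffees.filter(_.price < 5.0) union coffees.filter(_.price > 9.0)
val minQ   =                                      // aggregation
```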

3. Deleting:

A query for deleting must only select from a single table.
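
For instance, with a hypothetical `users` table:

```scala
// The query selects from a single table, so .delete is available.
val toDelete = users.filter(_.id === 42)
toDelete.delete  // returns the number of deleted rows
```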

4. Inserting:

Inserts are done based on a projection of columns from a single table. When you include an AutoInc column in an insert operation, it is silently ignored, so that the database can generate the proper value.
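
A sketch with a hypothetical `users` table mapped to `User(id, name)`, where `id` is AutoInc:

```scala
// The id value 0 passed here is ignored; the database generates the real one.
users += User(0, "alice")

// To get the generated key back, insert through `returning`:
val newId = (users returning += User(0, "bob")
```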

5. Updating:

The query must only return raw columns (not computed values) selected from a single table.
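
A sketch, again with a hypothetical `users` table:

```scala
// Project raw columns from one table, then update them together.
users.filter(_.id === 42)
  .map(u => (,
  .update(("alice", ""))
```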



Slick (3) – connection pool

Before diving into the details of connection pooling in Slick, we need to be clear about two things. First, Slick does not provide a connection pool implementation of its own. Second, JDBC does not provide an asynchronous database driver for the JVM. So in a Play Framework application we need a thread pool to help improve performance.

1. Add connection pool to Slick:

Every Database contains an AsyncExecutor that manages the thread pool for asynchronous execution of Database I/O Actions. Its size is the main parameter to tune for the best performance of the Database object. It should be set to the value that we would use for the size of the connection pool in a traditional, blocking application. Slick’s default connection pool is HikariCP, a “zero-overhead” production-quality connection pool.

import play.api.db.DB
import play.api.Play.current
import slick.driver.MySQLDriver.simple._

object Contexts {
  // DB.getDataSource() returns Play's pooled DataSource (HikariCP by default);
  // the executor's <num_threads> and <queue_size> are tuned via configuration.
  def withSession[Res](f: Session => Res): Res =
    Database.forDataSource(DB.getDataSource()) withSession f
}

2. connection pool configuration

There are some important default configuration values we need to know. Of course, we can override them for tuning.

# HikariCP connection pool configure
play.db.hikaricp.connectionTimeout=30 seconds
play.db.hikaricp.maxLifetime=30 minutes
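
Other commonly tuned HikariCP settings in the same configuration namespace (the values below are illustrative assumptions, not recommendations):

```
# HikariCP connection pool configure (illustrative values)
play.db.hikaricp.maximumPoolSize=50
play.db.hikaricp.minimumIdle=10
play.db.hikaricp.idleTimeout=10 minutes
```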

3. thread pool vs connection pool

The dependency between these two pool settings lies in their maximum attributes. Assume your database supports a maximum of 50 connections; then it is straightforward to set the connection pool maximum to 50. But how does this setting affect the thread pool? Suppose the application server thread pool is configured with a maximum of 100. The application server will then allow 100 requests to be processed simultaneously, but the database has only 50 connections. Only 50 threads will get a connection; the other 50 will fight for one, so connections are switched between threads frequently, which slows down overall application performance.

Just adding database connections will not improve performance, and can compromise availability because of the need to reestablish all the connections during failover.

Now assume we set the thread pool maximum to the same value as the connection pool maximum, 50. The application server will allow 50 threads to run simultaneously and the remaining requests will wait. All 50 threads get a database connection immediately, so they are processed quickly.

The example above assumes each thread uses one database connection at a time (possibly several, but sequentially); if each thread in your application uses two database connections in parallel, configure the thread pool maximum as half of the connection pool maximum.

Setting the thread pool size too large can cause performance problems: with too many concurrent threads, task-switching overhead becomes a serious bottleneck. Any more threads than the number of available connections would be wasteful, given the contention for connections. Fewer threads than connections simply leave some connections unused.
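
The rule of thumb above can be captured in a toy Scala helper (the function name and numbers are illustrative):

```scala
// If each request thread holds `connectionsPerThread` connections at once,
// the thread pool maximum should not exceed this value.
def maxThreads(maxConnections: Int, connectionsPerThread: Int): Int =
  maxConnections / connectionsPerThread

// 50 connections, one per thread: up to 50 threads are useful.
// 50 connections, two in parallel per thread: only 25 threads.
val oneEach = maxThreads(50, 1)
val twoEach = maxThreads(50, 2)
```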

4. number of processor core vs the size of a thread pool

  • More threads mean more memory usage. Each thread requires a thread stack. For recent HotSpot JVMs, the minimum thread stack size is 64KB, and the default can be as much as 1MB. That can be significant. In addition, any thread that is alive is likely to own or share objects in the heap whether or not it is currently runnable. Therefore it is reasonable to expect that more threads mean a larger memory working set.
  • A JVM cannot have more threads actually running than there are cores (or hyperthreaded cores) on the execution hardware.

5. Involve connection pool to database querying

def delete(userID: Int): Boolean = Contexts.withSession { implicit session =>
  val users = TableQuery[Users]
  val filterInfo = users.filter(_.id === userID)

  if (filterInfo.delete > 0) true else false
}

6. Reuse session within one logic

We know that the more sessions we use, the faster the connection pool is used up. So within one logical operation, we suggest passing the session around to finish a single purpose. In order to reuse the session, we pass it as an implicit parameter.

def save(traitInfo: TraitInfo)(implicit session: Session): Int = {
  val traits = TableQuery[Traits]
  (traits returning += traitInfo
}

7. Additional important things to know

Some Concepts on Slick we need to know/understand:

  1. For every database session (each withSession block), Slick opens a new connection to the database server.
  2. Session Handling:
    1. The Database object’s withSession method creates a Session, passes it to a given function and closes it afterward. If we use a connection pool, closing the Session returns the connection to the pool.
    2. Only the method actually executing the query in the database requires a Session. The executing methods are made available via implicit conversions.
    3. It is not recommended to handle the lifetime of a Session manually.
  3. All methods that execute a query take an implicit Session value. Of course, you can also pass a session explicitly if you prefer.
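
A sketch of the implicit vs. explicit styles, assuming a hypothetical `countUsers` method and a `db: Database`:

```scala
// A query method that takes the Session implicitly.
def countUsers(implicit session: Session): Int =

db.withSession { implicit session => countUsers }    // session passed implicitly
db.withSession { session => countUsers(session) }    // same call, passed explicitly
```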

Basic Knowledge on connection pool:

  1. Connection pools are common in network applications and they’re often tricky to implement. For example, it is often desirable to have timeouts on acquisition from the pool since various clients have different latency requirements.
  2. Pools are simple in principle: we maintain a queue of connections and we satisfy waiters as they come in. With additional synchronization primitives this typically involves keeping two queues: one of waiters (when there are no connections), and one of connections (when there are no waiters).
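
The two-queue idea can be illustrated with a toy, single-threaded sketch (all names are made up here; a real pool needs synchronization and acquisition timeouts):

```scala
import scala.collection.mutable

class ToyPool[A](initial: Seq[A]) {
  private val connections = mutable.Queue(initial: _*)   // idle connections
  private val waiters = mutable.Queue.empty[A => Unit]   // callbacks waiting for one

  // Serve the caller immediately if a connection is free, else enqueue it.
  def acquire(k: A => Unit): Unit =
    if (connections.nonEmpty) k(connections.dequeue()) else waiters.enqueue(k)

  // Hand a returned connection to the oldest waiter, else put it back.
  def release(a: A): Unit =
    if (waiters.nonEmpty) waiters.dequeue()(a) else connections.enqueue(a)
}
```

Acquisitions that arrive while the pool is empty queue up and are satisfied in FIFO order as connections are released.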

8. Todo In Future:

There are some points we can optimize.

  1. Slick uses prepared statements wherever possible but it does not cache them on its own. We should therefore enable prepared statement caching in the connection pool’s configuration and select a sufficiently large pool size.
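
Assuming MySQL Connector/J behind HikariCP, statement caching is enabled by passing driver properties through the pool; the keys and values below are illustrative assumptions:

```
# Pass prepared-statement cache settings through to the JDBC driver
play.db.hikaricp.dataSource.cachePrepStmts=true
play.db.hikaricp.dataSource.prepStmtCacheSize=250
play.db.hikaricp.dataSource.prepStmtCacheSqlLimit=2048
```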

Slick (1) – fix more than 22 columns case

Slick is a modern database query and access library for Scala, so its purpose is clearly to query databases. This post explains how to access a table that has more than 22 columns. First, we give the known solution for the fewer-than-22-columns case. Then we explain why this solution does not apply when there are more than 22 columns. Last, we give a modified solution for the more-than-22-columns case.

Solution – less than 22 columns case

Here is the schema of this simple table: friends(id, name, first_name, last_name, create_time).

Here is the solution by Slick to access this table.

case class Friend(
 id: Int,
 name: String,
 firstName: String,
 lastName: String,
 createTime: String)

class Friends(tag: Tag) extends Table[Friend](tag, "friends") {
 def id = column[Int]("id", O.AutoInc)
 def name = column[String]("name")
 def firstName = column[String]("first_name")
 def lastName = column[String]("last_name")
 def createTime = column[String]("create_time")

 def * = (id, name, firstName, lastName, createTime) <> (Friend.tupled, Friend.unapply)
}


The * method defines the default projection of the table, mapped to a custom type. The <> operator converts between the Scala tuple produced by the projection and that type: the result row of a query comes back as a tuple whose type matches the projection, and <> turns it into a Friend via Friend.tupled (and back via Friend.unapply). The root limitation is tupled: the Scala language caps tuples at 22 elements, and tuples are how Slick represents table rows, so you can’t use tupled and unapply when a table has more than 22 columns.
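
What tupled does can be seen in isolation with plain Scala, no Slick (the two-column `Friend2` class here is a made-up example):

```scala
case class Friend2(id: Int, name: String)  // hypothetical 2-column row

// `tupled` turns the N-argument constructor into a function from a TupleN,
// which is exactly how a projection tuple becomes a row class instance.
val fromRow: ((Int, String)) => Friend2 = (Friend2.apply _).tupled
val f = fromRow((1, "alice"))
// Tuple22 is the largest tuple Scala provides, so this trick stops at 22 columns.
```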

Solution – more than 22 columns case

Here we give the solution directly. The important part is to build a custom type that satisfies the * projection: we group some of the columns into intermediate case classes and then apply the tupled method to each group. Here I use three case classes grouped by meaning; you can decide for yourself which columns to group together and which not. These case classes do not affect the real physical columns of the table.

Here is the code; the three column groups in the projection correspond to the three case classes BasicInfo, Oauth1Info, and Oauth2Info defined below.

case class User(
                 id: Int,
                 var basicInfo: BasicInfo,
                 var oauth1Info: Oauth1Info,
                 var oauth2Info: Oauth2Info)

class Users(tag: Tag) extends Table[User](tag, "users") {
  def id = column[Int]("id", O.AutoInc)
  def name = column[String]("name")
  def firstName = column[String]("first_name")
  def lastName = column[String]("last_name")
  def email = column[String]("email")
  def avatarUrl = column[String]("avatar_url")

  def timeZone = column[Int]("time_zone")
  def token = column[String]("token")
  def showStatus = column[String]("show_status")
  def showTutorial = column[String]("show_tutorial")
  def createTime = column[String]("create_time")
  def updateTime = column[String]("update_time")

  def provider = column[String]("provider")
  def password = column[String]("password")
  def teamIDs = column[String]("team_ids")

  def oauth1Id = column[String]("oauth1_id")
  def oauth1Token = column[String]("oauth1_token")
  def oauth1Secret = column[String]("oauth1_secret")

  def oauth2Id = column[String]("oauth2_id")
  def oauth2AccessToken = column[String]("oauth2_access_token")
  def oauth2Scope = column[String]("oauth2_scope")
  def oauth2ExpiresIn = column[String]("oauth2_expires_in")
  def oauth2LongLivedToken = column[String]("oauth2_long_lived_token")

  def * = (id,
    (name, firstName, lastName, email, avatarUrl, timeZone, token, showStatus, showTutorial,
      createTime, updateTime, provider, password, teamIDs),
    (oauth1Id, oauth1Token, oauth1Secret),
    (oauth2Id, oauth2AccessToken, oauth2Scope, oauth2ExpiresIn, oauth2LongLivedToken)).shaped <> (
    { case (id, basicInfo, oauth1Info, oauth2Info) =>
      User(id, BasicInfo.tupled.apply(basicInfo),
        Oauth1Info.tupled.apply(oauth1Info), Oauth2Info.tupled.apply(oauth2Info)) },
    { u: User =>
      def f1(p: BasicInfo) = BasicInfo.unapply(p).get
      def f2(p: Oauth1Info) = Oauth1Info.unapply(p).get
      def f3(p: Oauth2Info) = Oauth2Info.unapply(p).get
      Some((, f1(u.basicInfo), f2(u.oauth1Info), f3(u.oauth2Info)))
    })
}

case class BasicInfo(
                      name: String,
                      firstName: String,
                      lastName: String,
                      email: String,
                      avatarUrl: String,
                      timeZone: Int,
                      token: String,
                      showStatus: String,
                      showTutorial: String,
                      createTime: String,
                      updateTime: String,
                      provider: String,
                      password: String,
                      teamIDs: String)

case class Oauth1Info(
                       oauth1Id: String,
                       oauth1Token: String,
                       oauth1Secret: String)

case class Oauth2Info(
                       oauth2Id: String,
                       oauth2AccessToken: String,
                       oauth2Scope: String,
                       oauth2ExpiresIn: String,
                       oauth2LongLivedToken: String)

Another important thing: when you use filter to query or update with Slick, the additional case classes do not affect it. Only when you save or fetch do you need to unpack each case class. Here are two examples. The first uses filter.

def isExist(token: String): Boolean = {
  implicit val session = Database.forDataSource(DB.getDataSource()).createSession()
  val users = TableQuery[Users]
  var findFlag = false
  val filterInfo = users.filter(_.token === token)
  if (filterInfo.exists.run) findFlag = true
  session.close()
  findFlag
}

The second one is to fetch.

def search(name: String): Option[JsArray] = {
  implicit val session = Database.forDataSource(DB.getDataSource()).createSession()
  val users = TableQuery[Users]
  val filterInfo = users.filter( === name)
  val result =
    if (filterInfo.exists.run) {
      var resultInfo = Json.arr()
      filterInfo.foreach { f =>
        val tempInfo = Json.obj(
          "id" ->,
          "avatar_url" -> f.basicInfo.avatarUrl)
        resultInfo ++= Json.arr(tempInfo)
      }
      Some(resultInfo)
    } else None
  session.close()
  result
}


Here are some useful links which greatly helped me understand and fix this problem. I recommend reading them if you have time.

Scala Slick method I can not understand so far

Custom mapping to nested case class structure in Slick (more than 22 columns)