Monthly Archives: April 2016

Scala (20) – Execution Context

Execution Context:

  • An ExecutionContext is similar to an Executor: it is free to execute computations in a new thread, in a pooled thread or in the current thread (although executing the computation in the current thread is discouraged)

The Global Execution Context:

  • is an ExecutionContext backed by a ForkJoinPool. It should be sufficient for most situations but requires some care.
    A ForkJoinPool manages a limited number of threads (the maximum number of threads being referred to as the parallelism level). The number of concurrently blocking computations can exceed the parallelism level only if each blocking call is wrapped inside a blocking construct. Otherwise, there is a risk that the thread pool in the global execution context is starved, and no computation can proceed.
  • By default, ExecutionContext.global sets the parallelism level of its underlying fork-join pool to the number of available processors (Runtime.availableProcessors). This configuration can be overridden by setting the following VM attributes: scala.concurrent.context.minThreads, scala.concurrent.context.numThreads, scala.concurrent.context.maxThreads.
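The blocking-call guidance above can be sketched as follows (a minimal example, assuming the standard scala.concurrent API; the sleep stands in for any blocking I/O):

```scala
import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object BlockingDemo extends App {
  // Wrapping a blocking call in `blocking { ... }` hints the global
  // fork-join pool to spawn temporary threads, so blocked futures
  // do not starve the pool's fixed parallelism level.
  val fs = (1 to 4).map { i =>
    Future { blocking { Thread.sleep(100); i } }
  }
  val sum = Await.result(Future.sequence(fs), 5.seconds).sum
  println(sum) // prints 10
}
```

Without the `blocking` wrapper the code still runs, but on a machine with few cores the sleeping futures could occupy every pool thread at once.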

Thread Pool:

  • If each incoming request results in a multitude of requests to another tier of systems, thread pools must be managed so that they are balanced according to the ratios of requests in each tier: mismanagement of one thread pool bleeds into another.

Scala (19) – Futures


  • They hold the promise for the result of a computation that is not yet complete. They are a simple container, a placeholder. A computation could fail, of course, and this must also be encoded. A Future can be in exactly one of 3 states:
    • pending
    • failed
    • completed
  • With flatMap we can define a Future that is the result of two futures sequenced, the second future computed based on the result of the first one.
  • Future defines many useful methods:
    • Use Future.value() and Future.exception() to create pre-satisfied futures
    • Future.collect() and Future.join() provide combinators that turn many futures into one (i.e. the gather part of a scatter-gather operation)
  • By default, futures and promises are non-blocking, making use of callbacks instead of typical blocking operations. Scala provides combinators such as flatMap, foreach and filter used to compose futures in a non-blocking manner.
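The flatMap sequencing described above can be shown with a short sketch (the user-id/profile names are purely illustrative):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FlatMapDemo extends App {
  // First future: fetch a (hypothetical) user id.
  val userId: Future[Int] = Future { 42 }

  // flatMap sequences a second future that is computed
  // from the result of the first one.
  val profile: Future[String] =
    userId.flatMap(id => Future { s"profile-$id" })

  println(Await.result(profile, 2.seconds)) // prints "profile-42"
}
```

The Await here is only to make the example observable; in real code you would keep composing or attach a callback instead of blocking.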

Akka (7) -Configuration

There are several places where we can configure Akka:

  • log level and logger backend
  • enable remote
  • message serializers
  • definition of routers
  • tuning of dispatchers

Two important concepts we need to understand when we do configuration:

  • Throughput
    It defines the number of messages that are processed in a batch before the thread is returned to the pool.
  • Parallelism factor
    The parallelism factor is used to determine the thread pool size using the formula ceil(available processors * factor). The resulting size is then bounded by the parallelism-min and parallelism-max values.
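Both settings live in a dispatcher block in application.conf. A sketch (the dispatcher name and the values are examples, not recommendations):

```
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    # pool size = ceil(available processors * parallelism-factor),
    # then clamped to [parallelism-min, parallelism-max]
    parallelism-factor = 2.0
    parallelism-max = 10
  }
  # messages processed per actor before the thread returns to the pool
  throughput = 5
}
```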

Play Framework (11) – Basic Concept

Play Framework is an event-driven server (as is NodeJS), in contrast to traditional threaded servers.


Non-blocking IO

  • built on top of Netty
    Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.
  • no sensitivity to downstream slowness
  • easy to parallelize IO and issue parallel requests
  • supports many concurrent and long-running connections, enabling WebSockets, Comet, and server-sent events.

JVM Memory Management (2)

This is a study note from “Everything I Ever Learned about JVM Performance Tuning @Twitter”.

  • Adaptive Sizing Policy:
    Throughput collectors can automatically tune themselves:

    • -XX:+UseAdaptiveSizePolicy
    • -XX:MaxGCPauseMillis=… (e.g. 100)
    • -XX:GCTimeRatio=… (e.g. 19)
  • tools for tuning the young generation:
    • enable -XX:+PrintGCDetails, -XX:+PrintHeapAtGC, and -XX:+PrintTenuringDistribution
    • watch survivor size
    • watch the tenuring threshold; you might need to tune it to tenure long-lived objects faster.
  • some rules:
    • Too many live objects during young generation GC:
      Reduce NewSize, reduce survivor spaces, reduce tenuring threshold.
    • Too many threads:
      Find the minimal concurrency level, or split the service into several JVMs.
    • Eden should be big enough to hold more than one batch of in-flight transactions. Then most allocations die young without triggering stop-the-world collections, and throughput stays high.
    • Each survivor should be big enough to maintain alive objects and aged objects.
    • Decreasing the tenuring threshold moves long-lived objects to the old generation sooner, releasing more space in the survivor spaces.
  • We now use 64-bit JVMs; with 64-bit pointers, fewer references fit in the CPU caches than with 32-bit pointers. Enabling -XX:+UseCompressedOops stores references compressed to 32 bits in the heap, decoding them back to 64-bit addresses when they are used.
    An object stored in memory is split into 3 parts:

    • header: mark word + klass pointer
      • mark word stores runtime data for the object itself.
      • klass pointer points to the object’s class metadata.
    • instance data: the object's field values.
    • padding: 0 <= padding <= 8
      (header + instance data + padding) % 8 == 0
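The alignment rule above can be worked through with a tiny sketch (the header size used below assumes a 64-bit JVM with compressed oops, where the header is commonly 12 bytes):

```scala
object ObjectSizeDemo extends App {
  // 8-byte alignment: total size = header + instance data + padding,
  // rounded up to the next multiple of 8.
  def paddedSize(headerBytes: Int, instanceBytes: Int): Int =
    ((headerBytes + instanceBytes + 7) / 8) * 8

  // e.g. a 12-byte header plus one 4-byte int field needs no padding:
  println(paddedSize(12, 4)) // prints 16
}
```

For real measurements a tool such as JOL is the reliable way to inspect actual object layout; this function only illustrates the arithmetic.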

Akka (6) – Mailbox

An Akka Mailbox holds the messages that are destined for an Actor. Normally each Actor has its own mailbox, but with, for example, a BalancingPool all routees will share a single mailbox instance.

When no mailbox is specified, the default mailbox is used. By default it is an unbounded mailbox, which is backed by a java.util.concurrent.ConcurrentLinkedQueue.

Using bounded mailbox:

  • When mailbox is full, messages go to DeadLetters.
  • mailbox-push-timeout-time: how long a send waits when the mailbox is full
  • Doesn’t work for distributed Akka systems.
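A bounded mailbox is configured in application.conf; a sketch (the mailbox name and values are examples):

```
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 1000
  # how long a send may wait when the mailbox is full before the
  # message is routed to DeadLetters
  mailbox-push-timeout-time = 10s
}
```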


Akka (5) – Scheduler

The default implementation of Scheduler used by Akka is based on job buckets which are emptied according to a fixed schedule. The scheduler method returns an instance of akka.actor.Scheduler, which is unique per ActorSystem and is used internally for scheduling things to happen at specific points in time.

Scheduler implementations are loaded reflectively at ActorSystem start-up with the following constructor arguments:

  • the system’s com.typesafe.config.Config (from system.settings.config)
  • an akka.event.LoggingAdapter
  • a java.util.concurrent.ThreadFactory

You can schedule sending of messages to actors and execution of tasks (functions or Runnable). You get a Cancellable back, on which you can call cancel to cancel the scheduled operation. cancel returns true if the cancellation was successful; if this Cancellable was (concurrently) cancelled already, the method returns false, though isCancelled will return true.
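The cancellation semantics can be illustrated with a JDK analogue (a sketch using ScheduledExecutorService rather than Akka's own Scheduler, so it runs with no extra dependencies; the semantics of cancelling a not-yet-run task are parallel):

```scala
import java.util.concurrent.{Executors, TimeUnit}

object CancelDemo extends App {
  val scheduler = Executors.newSingleThreadScheduledExecutor()

  // Schedule a task one second out.
  val task = scheduler.schedule(new Runnable {
    def run(): Unit = println("tick")
  }, 1, TimeUnit.SECONDS)

  // Cancelling before it fires succeeds, and the task then
  // reports itself as cancelled.
  println(task.cancel(false)) // prints "true"
  println(task.isCancelled)   // prints "true"
  scheduler.shutdown()
}
```

With Akka you would instead call system.scheduler.scheduleOnce(...) and cancel the returned Cancellable, but the success/already-cancelled distinction is the same idea.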

Cancellation flow:

  • a cancellation signal sent by a consumer is propagated to its producer.
  • the producer uses onCancellation on the Promise to listen for this signal and act accordingly.