- An ExecutionContext is similar to an Executor: it is free to execute computations in a new thread, in a pooled thread or in the current thread (although executing the computation in the current thread is discouraged)
The Global Execution Context:
- ExecutionContext.global is an ExecutionContext backed by a ForkJoinPool. It should be sufficient for most situations but requires some care.
A ForkJoinPool manages a limited number of threads (the maximum number of threads being referred to as the parallelism level). The number of concurrently blocking computations can exceed the parallelism level only if each blocking call is wrapped inside a `blocking { }` block. Otherwise, there is a risk that the thread pool of the global execution context is starved and no computation can proceed.
- By default, ExecutionContext.global sets the parallelism level of its underlying fork-join pool to the number of available processors (Runtime.getRuntime.availableProcessors). This configuration can be overridden by setting the following system properties: scala.concurrent.context.minThreads, scala.concurrent.context.numThreads, scala.concurrent.context.maxThreads.
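As a sketch, the standard library's `scala.concurrent.blocking` marks a blocking region so the global ForkJoinPool can compensate with an extra thread (the sleep stands in for real blocking I/O):

```scala
import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Wrapping the blocking call in `blocking { ... }` hints the underlying
// ForkJoinPool to spawn a compensatory thread, so the pool is not starved.
val result: Future[Int] = Future {
  blocking {
    Thread.sleep(100) // stand-in for a blocking I/O call
    42
  }
}

// Awaiting here is for illustration only; blocking on a Future is
// itself discouraged in real code.
println(Await.result(result, 1.second)) // prints 42
```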
- If each incoming request results in a multitude of requests to another tier of systems, the thread pools in those systems must be sized so that they are balanced according to the ratio of requests between tiers: mismanagement of one thread pool bleeds into the others.
There are several places where we can configure Akka:
- log level and logger backend
- enable remote
- message serializers
- definition of routers
- tuning of dispatchers
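As a sketch, a minimal application.conf touching several of these areas might look like the following (the /workers router and the serializer binding are illustrative names, not from the source):

```hocon
akka {
  # log level and logger backend
  loglevel = "DEBUG"
  loggers = ["akka.event.slf4j.Slf4jLogger"]

  actor {
    # enable remoting
    provider = remote

    # message serializers
    serializers {
      proto = "akka.remote.serialization.ProtobufSerializer"
    }

    # definition of routers
    deployment {
      /workers {
        router = round-robin-pool
        nr-of-instances = 5
      }
    }
  }
}
```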
Two important concepts we need to understand when doing configuration:
- throughput
It defines the number of messages an actor processes in a batch before the thread is returned to the pool.
- parallelism factor
The parallelism factor is used to determine the thread pool size using the following formula: ceil(available processors * factor). The resulting size is then bounded by the parallelism-min and parallelism-max values.
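Both knobs appear in a dispatcher configuration; a sketch might look like this (the dispatcher name is hypothetical):

```hocon
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    # pool size = ceil(available processors * 3.0), then clamped to [2, 24]
    parallelism-factor = 3.0
    parallelism-max = 24
  }
  # messages processed per actor before the thread is returned to the pool
  throughput = 5
}
```

For example, on an 8-processor machine this yields ceil(8 * 3.0) = 24 threads, which the parallelism-max of 24 still allows.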
Play Framework is an event-driven server, similar to NodeJS, rather than a traditional thread-per-request server.
- built on top of Netty
Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.
- no sensitivity to downstream slowness
- easy to parallelize IO and make parallel requests
- supports many concurrent and long-running connections, enabling WebSockets, Comet, and Server-Sent Events.
This is a study note from “Everything I Ever Learned about JVM Performance Tuning @Twitter“.
- Adaptive Sizing Policy:
Throughput collectors can automatically tune themselves:
- -XX:MaxGCPauseMillis=… (e.g. 100)
- -XX:GCTimeRatio=… (e.g. 19, i.e. at most 1/(1+19) = 5% of time spent in GC)
- tuning the young generation:
- enable -XX:+PrintGCDetails, -XX:+PrintHeapAtGC, and -XX:+PrintTenuringDistribution
- watch survivor size
- watch the tenuring threshold; you might need to tune it to tenure long-lived objects faster.
- some rules:
- Too many live objects during young generation GC:
Reduce NewSize, reduce survivor spaces, reduce tenuring threshold.
- Too many threads:
Find the minimal concurrency level, or split the service into several JVMs.
- Eden should be big enough to hold more than one transaction's worth of objects at a time. That way most objects die before a minor GC, there are fewer stop-the-world pauses, and throughput is higher.
- Each survivor space should be big enough to hold the live and aging objects.
- Lowering the tenuring threshold promotes long-lived objects to the old generation sooner, freeing up survivor space.
- On a 64-bit JVM, 64-bit pointers make worse use of CPU caches than 32-bit pointers. Enabling -XX:+UseCompressedOops compresses 64-bit references down to 32 bits, although the JVM still addresses a 64-bit memory space.
An object stored in memory is split into 3 parts:
- header: mark word + klass pointer
- the mark word stores runtime data for the object itself.
- klass pointer points to the object’s class metadata.
- instance data:
- padding: 0 <= padding <= 8
(header + instance data + padding) % 8 == 0
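A small sketch of this 8-byte alignment rule (the 12-byte header assumes a 64-bit JVM with compressed oops: an 8-byte mark word plus a 4-byte klass pointer):

```scala
// Round header + instance data up to the next multiple of 8 bytes;
// the difference is the padding.
def paddedSize(headerBytes: Int, instanceBytes: Int): Int = {
  val raw = headerBytes + instanceBytes
  (raw + 7) / 8 * 8
}

// e.g. a class with a single Int field: 12-byte header + 4 bytes of data
println(paddedSize(12, 4)) // prints 16 (padding = 0)
// a class with a single Byte field: 12 + 1 = 13, rounded up
println(paddedSize(12, 1)) // prints 16 (padding = 3)
```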
An Akka Mailbox holds the messages that are destined for an Actor. Normally each Actor has its own mailbox, but with, for example, a BalancingPool, all routees share a single mailbox instance.
When no mailbox is specified, the default mailbox is used. By default it is an unbounded mailbox, backed by a java.util.concurrent.ConcurrentLinkedQueue.
Using bounded mailbox:
- When mailbox is full, messages go to DeadLetters.
- mailbox-push-timeout-time: how long the sender waits when the mailbox is full before the message goes to DeadLetters
- Doesn’t work for distributed Akka systems.
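As a sketch, a bounded mailbox could be configured like this (the mailbox name is hypothetical; the values are illustrative):

```hocon
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 1000
  # how long a sender waits for space before the message goes to DeadLetters
  mailbox-push-timeout-time = 10s
}
```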
The default implementation of Scheduler used by Akka is based on job buckets which are emptied according to a fixed schedule. The scheduler method returns an instance of akka.actor.Scheduler which is unique per ActorSystem and is used internally for scheduling things to happen at specific points in time.
Scheduler implementations are loaded reflectively at ActorSystem start-up with the following constructor arguments:
- the system’s com.typesafe.config.Config (from system.settings.config)
- an akka.event.LoggingAdapter
- a java.util.concurrent.ThreadFactory
You can schedule sending of messages to actors and execution of tasks (functions or Runnable). You get back a Cancellable on which you can call cancel to cancel the scheduled operation. cancel returns true if the cancellation was successful; if the Cancellable was (concurrently) cancelled already, cancel returns false, though isCancelled will still return true.
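A sketch of scheduling and cancelling (requires the Akka dependency; the system name is arbitrary, and system.deadLetters stands in for a real target actor):

```scala
import akka.actor.{ActorRef, ActorSystem, Cancellable}
import scala.concurrent.duration._

val system: ActorSystem = ActorSystem("example")
val someActor: ActorRef = system.deadLetters // placeholder target actor

import system.dispatcher // ExecutionContext used by the scheduler

// schedule a one-off message 50 milliseconds from now
val task: Cancellable =
  system.scheduler.scheduleOnce(50.millis, someActor, "tick")

val first  = task.cancel()    // true if this call cancelled the task
val second = task.cancel()    // false: it was already cancelled...
val flag   = task.isCancelled // ...but isCancelled is still true

system.terminate()
```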
- a cancellation signal set by a consumer is propagated to its producer.
- the producer uses onCancellation on Promise to listen to this signal and act accordingly.