By Manuel Bernhardt

This article was excerpted from the book “Reactive Web Applications.”

 

A stateless web application can keep its state in several places, as shown in Figure 1:


 


Figure 1 Location of state in a stateless web application


 

Traditionally, most data is kept in a (relational) database (1), which may or may not be replicated. Many applications also make use of a memory-based caching layer (2) using technologies such as memcached or Redis. In a stateless application, “traditional” server-side state (3) should be avoided, as a node may go up or down at any time, but we will see that there are some exceptions to this, provided that state is managed in such a way that it can be recovered after a crash or when a node comes up.

Client-side state plays an important role in modern web applications: HTML5 introduced local storage (4), which can be very useful in making an application resilient to temporary network outages. Finally, a cookie-based client-side session (5) acts as a substitute for the server-side session typically found in Java’s HttpSession.
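As a minimal sketch, here is what storing and reading a user identifier in Play’s signed session cookie could look like (the controller, action and session key names are assumptions made for illustration):

import play.api.mvc._

class Authentication extends Controller {

  // After a (hypothetical) credential check, store the user name in the signed session cookie
  def login(username: String) = Action { implicit request =>
    Ok("Logged in").withSession("username" -> username)
  }

  // On subsequent requests the state travels with the cookie instead of living on the server
  def index = Action { implicit request =>
    request.session.get("username") match {
      case Some(user) => Ok(s"Hello, $user")
      case None       => Unauthorized("Please log in first")
    }
  }
}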

Let’s see how to make use of these different mechanisms by exploring a simple yet extremely common use case: user authentication.


 

Databases

We need to take certain precautions with databases. Some may have asynchronous drivers that fit nicely into our asynchronous application model, while others may not – in which case we have to configure things properly.

A WORD ON ASYNCHRONOUS DATABASE ACCESS

Asynchronous database access helps reduce the resource consumption – in terms of thread and memory usage – of the client application accessing the database. It also makes it possible to take advantage of asynchronous data manipulation techniques.

The availability of asynchronous drivers is highly dependent on the database itself. Younger DBMSs such as MongoDB and Couchbase have asynchronous drivers, such as ReactiveMongo and ReactiveCouchbase. When such drivers are available, their usage is pretty easy: they offer APIs that rely on Futures in order to provide asynchronous query and statement interfaces.
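As an illustration, a Future-based query with ReactiveMongo could look roughly as follows (the collection and field names are assumptions, and API details vary between ReactiveMongo versions):

import scala.concurrent.{ExecutionContext, Future}
import reactivemongo.api.collections.bson.BSONCollection
import reactivemongo.bson._

// Looks up a single user document without ever blocking the calling thread;
// the result becomes available when the Future completes
def findUser(users: BSONCollection, name: String)
            (implicit ec: ExecutionContext): Future[Option[BSONDocument]] =
  users.find(BSONDocument("name" -> name)).one[BSONDocument]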

For some of the more traditional RDBMSs such as MySQL and PostgreSQL, the mysql-async and postgresql-async community drivers offer an asynchronous alternative to their blocking JDBC counterparts. These two drivers don’t influence how these DBMSs work (i.e., the communication isn’t truly asynchronous on the server side), but they make use of Netty in order to offer asynchronous communication between the client and the server, which has the advantage of not holding onto threads while the database executes a query or statement. Other RDBMSs such as Microsoft SQL Server support asynchronous operations natively and offer asynchronous drivers.

Finally, another alternative for asynchronous access is offered by Slick, which leverages the Reactive Streams API in order to stream data from the database while building on top of existing (blocking) JDBC drivers.
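A minimal sketch of Slick 3’s streaming API could look as follows (the table definition, the "mydb" configuration key and the Slick 3.1-era import are assumptions; newer Slick versions use slick.jdbc.PostgresProfile.api._ instead):

import scala.concurrent.ExecutionContext.Implicits.global
import slick.driver.PostgresDriver.api._

object StreamingExample extends App {

  // A hypothetical "users" table with an id and a name column
  class Users(tag: Tag) extends Table[(Long, String)](tag, "users") {
    def id   = column[Long]("id", O.PrimaryKey)
    def name = column[String]("name")
    def *    = (id, name)
  }
  val users = TableQuery[Users]

  // Connection settings are read from application.conf
  val db = Database.forConfig("mydb")

  // db.stream exposes the query results as a Reactive Streams publisher,
  // even though the underlying JDBC driver itself is blocking
  val publisher = db.stream(users.result)
  publisher.foreach { case (id, name) => println(s"$id: $name") }
}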

At the time this article was written, asynchronous database drivers for the most popular RDBMSs are still at an early stage of development and not feature-complete, and they may not cover all the richness of features that these systems have to offer. Instead of exploring technologies that are prone to rapid change, let’s focus on a much more common use case: accessing relational databases in an optimal manner and with a solid toolset.

CONFIGURING PLAY FOR SYNCHRONOUS DATABASE ACCESS

As of version 2.4, Play uses HikariCP as a connection pool to manage database connections. A connection pool is used in order to optimize the costs associated with establishing and maintaining connections to a database. Most relational databases (or their driver implementations) are synchronous in their communication over a connection – you send a query and get back a result on that same connection. This means that a thread on the JVM executing one statement is coupled to one database connection. It isn’t really possible to have more than one thread talk to a database connection at the same time: if a client sent a new statement while the previous query had not yet been answered, the database would have no way of telling which query a result set belongs to.

Threads that issue statements to a database are short-lived in comparison to the life-span of a database connection. Since those connections are expensive to create, it makes sense to re-use them across threads (one after another, that is), which is exactly what a pool such as HikariCP does.

When it comes to figuring out how many threads to provide to the context that will interact with the database, we first have to consider how large the database connection pool itself should be – after all, there is likely going to be a relation between those two figures. There is an excellent article in HikariCP’s documentation that discusses connection pool sizing. Although seemingly counter-intuitive, it is in practice much more performant to have a small connection pool rather than a large one, because at the end of the day the number of real parallelizable operations across these connections depends on the real number of CPU cores.

As you may have guessed, then, until many-core architectures become mainstream, the connection pool size is going to be rather small. The PostgreSQL project discusses this subject and proposes a formula that provides a starting point for establishing a connection pool size:

connections = ((core_count * 2) + effective_spindle_count)

where:

  • connections is the size of the pool
  • core_count is the number of physical cores, without taking hyperthreading into account
  • effective_spindle_count is the number of hard drives (spindles)

According to that formula, if you have a server with a quad-core CPU and one hard drive (or a RAID 1 setup), then a good starting point for the connection pool is (4 * 2) + 1 = 9 connections.
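Applied to Play 2.4’s default HikariCP pool, this starting point could be expressed in application.conf roughly as follows (the database name "default" and the value of 9 are merely a starting point to be tuned later):

# application.conf – size the default connection pool according to the formula above
db.default.hikaricp.maximumPoolSize = 9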

To paraphrase the PostgreSQL documentation:

A formula which has held up pretty well across a lot of benchmarks for years is that for optimal throughput the number of active connections should be somewhere near

((core_count * 2) + effective_spindle_count).

Core count should not include [hyperthreaded] threads, even if hyperthreading is enabled. Effective spindle count is zero if the active data set is fully cached, and approaches the actual number of spindles as the cache hit rate falls. There hasn’t been any analysis so far regarding how well the formula works with SSDs.

The consequence this has for our thread pool configuration – in other words, for the ExecutionContext we are going to use in Play – is that we will configure a dedicated thread pool of the same size as the maximum number of available connections. The reason is simple: if we had fewer threads than available connections, some connections could never be used, and if we had more, threads would potentially contend for the same connection, which would also be sub-optimal since contention is known to impact performance negatively.
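As a sketch, such a dedicated execution context could be declared as an Akka dispatcher in application.conf and then looked up from the actor system (the dispatcher name "contexts.database" and the pool size of 9 are assumptions matching the earlier example):

# application.conf – a fixed-size dispatcher matching the 9 connections computed above
contexts {
  database {
    executor = "thread-pool-executor"
    thread-pool-executor {
      fixed-pool-size = 9
    }
  }
}

import akka.actor.ActorSystem
import scala.concurrent.ExecutionContext

// Akka dispatchers implement ExecutionContext, so the lookup can be used directly
// to run blocking JDBC calls without starving Play's default dispatcher
def databaseContext(system: ActorSystem): ExecutionContext =
  system.dispatchers.lookup("contexts.database")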

NOTE: ON REAL-WORLD CONNECTION POOL SIZING

Although this formula should help you get started quickly, it does not replace tuning the connection pool size for your specific application and deployment. In this regard, the FlexyPool tool can be quite useful, as it lets you monitor pools and resize them dynamically.