
Your first microservices using Scala and Lagom

Back in the day, business was much simpler and called for all-in-one, straightforward solutions that usually ended up as monoliths. Supporting those systems was easier in the beginning, but over time you had a much higher chance of ending up with a clumsy, tightly coupled system. Today markets change rapidly: you either adapt quickly or you go out of business. Software has to keep up with this new reality.

Changing the way you develop your systems means you need to change your mindset, learn new things and apply best practices. This is where a dedicated platform can help newcomers.


Here comes Lagom

Lagom is a platform that gives you not only a microservice framework but also a whole toolset for developing applications and for creating, managing and monitoring your services. Although it is a pretty young project (version 1.0.0-M2) which currently targets Java developers, we decided to give it a try and pair it with the power of Scala.

The platform is based on popular technologies, mostly from the Scala ecosystem:

  • sbt (Scala's build system, project definitions),
  • Play (REST endpoints, Guice dependency injection),
  • Akka (processing),
  • Cassandra (default data storage).

Most of these technologies are hidden behind interfaces, and most newcomers won't need to deal with them directly, although you still can. This is extremely important for developers working on a monolithic J2EE codebase: Lagom allows them to split a problematic domain into services easily without having to learn a lot up front. It also comes with a set of conveniences: hot reload, easier testing and application management.

How is it built?

Since Lagom is built specifically for microservices, the whole concept is asynchronous by nature while still being based on the familiar request-response cycle. Services communicate in a non-blocking way, which makes the whole application more efficient. Inter-service communication can happen over plain HTTP or by injecting another service's API reference and invoking its methods.
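
A minimal sketch of the second option, assuming the Lagom 1.0-style Java DSL used from Scala (CalculatorServiceImpl, ExchangeRateService and ExchangeRate are illustrative names here; the exchange rates API is sketched later in this post):

  import javax.inject.Inject
  import java.util.concurrent.CompletionStage

  // Illustrative sketch: one service implementation gets another service's API
  // injected (via Guice) and talks to it simply by invoking its ServiceCalls.
  // The call returns a CompletionStage, so nothing blocks while waiting.
  class CalculatorServiceImpl @Inject() (exchangeRates: ExchangeRateService) {

    def currentRate(from: String, to: String): CompletionStage[ExchangeRate] =
      exchangeRates.rate(from, to).invoke()
  }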

By convention every microservice is divided into two parts: the API and the implementation. The API is the formal contract; it tells other developers and teams what the given service can do for them and how to interact with it. The implementation is where the code lives. It is strongly decoupled from the API, so you can evolve it separately as long as the contract holds.

To build an entry point we use a Service trait that allows us to define external endpoints, and ServiceCalls which declare how to transform a request into a response. The process closely resembles preparing a service block and delegating each call to the proper service function.

When you access any path on your service, the Play router takes responsibility for matching it against a path descriptor and delegating the call to the proper method. Processing is based on Akka's infrastructure, the message-driven actor system, which ensures the calls are processed concurrently, reliably and fast.

Service call declaration example:
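
Since we use the Java DSL from Scala, a declaration boils down to something like the following sketch (the 1.0-style ServiceCall with a request and a response type parameter is assumed; early milestones used slightly different signatures):

  import java.util.concurrent.{CompletableFuture, CompletionStage}
  import akka.NotUsed
  import com.lightbend.lagom.javadsl.api.ServiceCall

  object ServiceCallSketch {
    // A ServiceCall[Request, Response] declares how a request is transformed
    // into an asynchronous response (a CompletionStage). Here the request has
    // no body (NotUsed) and the response is a plain String.
    def hello(id: String): ServiceCall[NotUsed, String] =
      new ServiceCall[NotUsed, String] {
        override def invoke(request: NotUsed): CompletionStage[String] =
          CompletableFuture.completedFuture(s"Hello, $id")
      }
  }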

Descriptor example:
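
And the corresponding descriptor, again as a sketch of the 1.0-style builder API (in 1.0.0-M2 some builder methods had different names; passing hello _ from Scala relies on the glue conversions described near the end of this post):

  import akka.NotUsed
  import com.lightbend.lagom.javadsl.api.{Descriptor, Service, ServiceCall}
  import com.lightbend.lagom.javadsl.api.Service._

  trait HelloService extends Service {

    def hello(id: String): ServiceCall[NotUsed, String]

    // The descriptor names the service and maps external paths to service calls;
    // withAutoAcl makes the endpoint reachable through the service gateway.
    override def descriptor: Descriptor =
      named("helloservice")
        .withCalls(pathCall("/api/hello/:id", hello _))
        .withAutoAcl(true)
  }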

When it comes to reliability, by default all failures are handled by Lagom's exception handler and returned to you as HTTP 500 responses. The exception handler can easily be replaced if you wish, which is extremely useful when you want fine-grained control over your failures. By implementing your own ExceptionSerializer you can control not only response codes but also the message body, or return different responses per accepted content type.

Lagom uses a Persistence module to support your storage, backed by Cassandra, a scalable, fault-tolerant database. With the Persistence module Lagom brings in two concepts central to the whole platform: Event Sourcing and CQRS.

Event sourcing is a way of treating your storage as a log. This means you deal with immutable domain events from which the state can be derived. The implementation stays much simpler and you get a clear state history and good write performance.

CQRS (Command Query Responsibility Segregation) brings the advantage of separating reads from writes. This means we can treat the two sides differently, e.g. scale them independently or focus our attention on either writes or reads. Thanks to that we can easily add more processing on the read side without impacting the writes themselves.
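
To make both ideas concrete, here is a tiny, framework-free Scala sketch (Lagom's Persistence module provides the real machinery around persistent entities; the event and view types below are illustrative):

  object EventSourcingAndCqrsSketch {
    // Event sourcing: immutable domain events are the only thing we persist.
    sealed trait RateEvent
    final case class RateSet(from: String, to: String, rate: BigDecimal) extends RateEvent

    // The current state is not stored as such; it is derived by folding over
    // the event log, so the full history stays available.
    def replay(events: Seq[RateEvent]): Map[(String, String), BigDecimal] =
      events.foldLeft(Map.empty[(String, String), BigDecimal]) {
        case (state, RateSet(from, to, rate)) => state + ((from, to) -> rate)
      }

    // CQRS: the read side is a separate view updated from the event stream,
    // which can be optimised and scaled independently of the write side.
    final class RateView {
      private var rates = Map.empty[(String, String), BigDecimal]
      def onEvent(event: RateEvent): Unit = event match {
        case RateSet(from, to, rate) => rates += ((from, to) -> rate)
      }
      def currentRate(from: String, to: String): Option[BigDecimal] = rates.get((from, to))
    }
  }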

If you want to opt out of Event Sourcing, or you need a different database, you can introduce it yourself by using a suitable driver.

Your first microservices

For our purposes we've created a new project which converts values between currencies. The project is based on two microservices: the first one is responsible for the conversion and the second one delivers currency exchange rates. We should be able to modify the rates on the fly, and our changes should be reflected in any calls made after the modification.

Let's see what our calculator declaration looks like:
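
Reproduced here as a sketch rather than verbatim (Lagom 1.0-style Java DSL used from Scala; CurrencyValue is an illustrative type, and the serializers referenced below are sketched in the following sections):

  import akka.NotUsed
  import com.lightbend.lagom.javadsl.api.{Descriptor, Service, ServiceCall}
  import com.lightbend.lagom.javadsl.api.Service._
  import com.lightbend.lagom.javadsl.api.transport.Method

  // Illustrative response type: the converted amount and its currency unit.
  final case class CurrencyValue(value: BigDecimal, currencyUnit: String)

  trait CalculatorService extends Service {

    // GET /api/calculator/exchange?fromValue&fromUnit&toUnit
    def calculate(fromValue: BigDecimal, fromUnit: String, toUnit: String): ServiceCall[NotUsed, CurrencyValue]

    override def descriptor: Descriptor =
      named("calculator")
        .withCalls(
          // `calculate _` relies on the Scala-to-Java function conversions from the glue code
          restCall(Method.GET, "/api/calculator/exchange?fromValue&fromUnit&toUnit", calculate _))
        // 1. String <-> BigDecimal handling for the fromValue parameter (see "Custom parameter serializer")
        .withPathParamSerializer(classOf[BigDecimal], CalculatorSerializers.bigDecimalSerializer)
        // 2. custom handling of calls that end exceptionally (see "Custom exception handler")
        .withExceptionSerializer(new CalculatorExceptionSerializer)
        // 3. auto-generate access rules for the declared endpoints
        .withAutoAcl(true)
  }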

As you can see, we define only one REST endpoint, for the GET method "/api/calculator/exchange?fromValue&fromUnit&toUnit", which returns the fromValue amount converted into the toUnit currency. Besides the endpoint declaration we have also used some builder methods on the descriptor:

  1. a path param serializer from String to BigDecimal, needed to deserialize our fromValue parameter before passing it to the calculate method,
  2. a custom exception serializer, which customizes the way we handle results that ended exceptionally,
  3. auto ACL set to true: by default Lagom services do not have any access control rules allowing access to the given resources, so our requests would be denied. By setting auto ACL we enable automatic generation of access rules for our endpoints. The other option is to define those rules yourself.

Our exchange rates microservice declaration is relatively similar to the previous one:
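
Again as a sketch (ExchangeRate and RateUpdate are illustrative message types):

  import akka.NotUsed
  import com.lightbend.lagom.javadsl.api.{Descriptor, Service, ServiceCall}
  import com.lightbend.lagom.javadsl.api.Service._
  import com.lightbend.lagom.javadsl.api.transport.Method

  // Illustrative message types for the two endpoints.
  final case class ExchangeRate(rate: BigDecimal)
  final case class RateUpdate(rate: BigDecimal)

  trait ExchangeRateService extends Service {

    // GET /api/exchangerates/:fromUnit/:toUnit -> the current rate
    def rate(fromUnit: String, toUnit: String): ServiceCall[NotUsed, ExchangeRate]

    // PUT /api/exchangerates/:fromUnit/:toUnit with a body like { "rate": 1.23 }
    def setRate(fromUnit: String, toUnit: String): ServiceCall[RateUpdate, NotUsed]

    override def descriptor: Descriptor =
      named("exchangerates")
        .withCalls(
          restCall(Method.GET, "/api/exchangerates/:fromUnit/:toUnit", rate _),
          restCall(Method.PUT, "/api/exchangerates/:fromUnit/:toUnit", setRate _))
        .withAutoAcl(true)
  }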

Here we define two endpoints:

  1. a GET endpoint, "/api/exchangerates/:fromUnit/:toUnit", to retrieve the current rate for the given from and to units,
  2. a PUT endpoint, "/api/exchangerates/:fromUnit/:toUnit", to set a new rate for the given from and to units, which requires a body like { "rate": 1.23 }.

As you can see our example is simple, based on REST and default (compile-time) validation, yet complex enough to show the concept.

Custom parameter serializer

There are times when you would like to customize the way request and response parameters are handled. This is easily achievable by preparing your own param serializer, making use of Lagom's path param serializer concept.
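
A sketch of what this can look like, assuming the javadsl PathParamSerializers.required factory (plain Java functions are used here to stay self-contained; our actual code passes Scala functions through the glue conversions instead):

  import java.util.function.{Function => JFunction}
  import com.lightbend.lagom.javadsl.api.deser.{PathParamSerializer, PathParamSerializers}

  object CalculatorSerializers {

    // Built with the `required` factory: a name, a String -> BigDecimal function
    // used when reading the URL parameter, and a BigDecimal -> String function
    // used when writing it back.
    val bigDecimalSerializer: PathParamSerializer[BigDecimal] =
      PathParamSerializers.required(
        "BigDecimal",
        new JFunction[String, BigDecimal] { override def apply(s: String): BigDecimal = BigDecimal(s) },
        new JFunction[BigDecimal, String] { override def apply(value: BigDecimal): String = value.toString }
      )
  }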

As you can see, we used the required factory method, which creates a required path parameter serializer for BigDecimal and takes two functions (Scala's Function1 in our code). The first one transforms the String URL parameter into a BigDecimal and the second one does the opposite: from a BigDecimal it returns its String representation.

This serializer allows us to turn path params like ?value=1.23 into BigDecimals, so we can declare a service method that simply takes a BigDecimal as its input.

Custom exception handler

Lagom equips you with a default exception-handling mechanism, but as your ecosystem grows you will probably prepare your own implementation. This approach gives you a variety of ways to keep fine-grained control over your stack.
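
Sketched below is roughly what such a serializer can look like (ServerError and the JSON body shape are assumptions of this sketch, the actual implementation lives in the example repository, and the javadsl class names may differ slightly between Lagom versions):

  import java.util.Optional
  import java.util.concurrent.CompletionException
  import akka.util.ByteString
  import com.lightbend.lagom.javadsl.api.deser.{ExceptionSerializer, RawExceptionMessage}
  import com.lightbend.lagom.javadsl.api.transport.{MessageProtocol, TransportErrorCode}

  // A hypothetical domain exception carrying a message we want to expose.
  final case class ServerError(msg: String) extends RuntimeException(msg)

  class CalculatorExceptionSerializer extends ExceptionSerializer {

    private val jsonProtocol =
      new MessageProtocol(Optional.of("application/json"), Optional.of("utf-8"), Optional.empty())

    override def serialize(exception: Throwable, accept: java.util.Collection[MessageProtocol]): RawExceptionMessage =
      exception match {
        // failed futures arrive wrapped in a CompletionException
        case ce: CompletionException if ce.getCause.isInstanceOf[ServerError] =>
          toMessage(ce.getCause.getMessage)
        case _ =>
          toMessage("Internal server error")
      }

    override def deserialize(message: RawExceptionMessage): Throwable =
      ServerError(message.message.utf8String)

    private def toMessage(body: String): RawExceptionMessage =
      new RawExceptionMessage(
        TransportErrorCode.InternalServerError,            // error code for the transport layer
        jsonProtocol,                                      // protocol
        ByteString.fromString(s"""{"error": "$body"}"""))  // response message
  }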

Above you can see a sketch of custom exception handling. It is worth mentioning that in actor systems like this one most exceptions will probably be caused by failed futures (it is a common pattern to signal a failure by completing a future exceptionally).

In our implementation we wanted to have two flows: the first one is the default flow, which returns the default message, and the second one processes CompletionExceptions coming from a CompletableFuture. If we can match a ServerError in the second flow we return a custom message, otherwise we fall back to the default one. By doing this we expose only the information we really care about.

As you can see, a RawExceptionMessage (the main entity of the exception serializer) allows you to specify:

  1. an error code for the transport layer,
  2. the protocol,
  3. the response message.

As you may have noticed, our implementation is really simple, but it shows the core idea of exception serializers.

Let’s use it:

  1. download our example, unzip it and navigate to the unzipped folder,
  2. run sbt runAll,
  3. set an exchange rate via the REST endpoint with the PUT method, filling in the proper values: PUT /api/exchangerates/:fromUnit/:toUnit

    e.g. /api/exchangerates/EUR/PLN with body {"rate": 1.23},

  4. call the calculator, once more filling in the proper values: GET /api/calculator/exchange?fromValue&fromUnit&toUnit

    e.g. /api/calculator/exchange?fromValue=1&fromUnit=EUR&toUnit=PLN,

  5. you should receive a JSON response like { "value": 1.23, "currencyUnit": "PLN" }.

Congratulations, you have just used our example!

Scala glue code

As you probably recall, we mentioned that Lagom is pretty young (an M2 artifact) and currently targets Java developers. That's why we needed some glue code to provide a more Scala-like syntax while coding our services. This glue code can be found in the utils project.
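
What such glue can look like, sketched (the actual utils project likely differs in names and scope):

  import java.util.function.{BiConsumer, BiFunction, Function => JFunction}
  import scala.language.implicitConversions

  object JavaFunctionConversions {

    // Implicit conversions so that plain Scala functions can be passed wherever
    // the Lagom Java DSL (or Java 8 APIs in general) expect functional interfaces.
    implicit def toJavaFunction[A, B](f: A => B): JFunction[A, B] =
      new JFunction[A, B] { override def apply(a: A): B = f(a) }

    implicit def toJavaBiFunction[A, B, C](f: (A, B) => C): BiFunction[A, B, C] =
      new BiFunction[A, B, C] { override def apply(a: A, b: B): C = f(a, b) }

    implicit def toJavaBiConsumer[A, B](f: (A, B) => Unit): BiConsumer[A, B] =
      new BiConsumer[A, B] { override def accept(a: A, b: B): Unit = f(a, b) }
  }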

The glue contains implicit conversions from Scala functions to Java functional interfaces like BiConsumer, BiFunction and a few more, but in some cases it was also necessary to specify the return type or the parameters explicitly, since the type system had trouble inferring them (e.g. in ExchangeStorage).

The utils project should go away once the Scala DSL is ready, and at that point some places in the exchange rates and calculator projects should get much simpler as well.

Platform or separate libs

You may ask what the benefits of using Lagom over plain Akka and Play are.

Lagom focuses entirely on microservices. You won't build a web application using Lagom alone (which is possible with Play); you will probably pair Lagom with AngularJS or any other modern front-end framework. When it comes to Akka, Lagom hides the complexity of actors by exposing only a minimal set of functions, like ask. Every developer to whom the microservices concept is new should be able to pick up Lagom easily. If you don't know Scala, or your company isn't ready to adopt it, you can still use Lagom with the official Java DSL. Add ConductR magic to it (a post on ConductR is coming soon), which handles the dev-ops side for you, and you have a Play and Akka based platform for microservices that can be seriously considered as the main tool for a transition from a monolith to a service-based architecture.

Personally, I see another benefit of using Lagom. Imagine a team of Java developers who want to split a monolithic application quickly, but at the same time plan on trying the Scala ecosystem. I think Lagom can deliver such a Scala DSL in the future, and they will be able to stay productive with the Java DSL for the old parts while starting with the Scala DSL for newly designed microservices, all within the same ecosystem and stack, using the same technologies and similar documentation, just with different languages, designed and tested by a well-known company. Doesn't that sound good? This can be an important argument for management to give it a try, explore new areas and choose the most valuable long-term solution from a business point of view.

Summary

Lagom certainly has potential, which I believe will show over the next couple of months. The most important milestones will be the 1.0 release (completing the Java DSL) and the first artifact of the Scala DSL. After that I expect a phase of extending the ecosystem, e.g. with additional database support. It is good to remember that Lagom is not just a library or a framework; it is a full-blown platform delivering a whole, ready-to-use toolset, which can be a key player when it comes to making decisions about big changes in not-so-small companies.

You like this post? Want to stay updated? Follow us on Twitter or subscribe to our Feed.

