
A Better Docker Container Tagging Strategy for CI/CD

Continuous delivery is difficult, but if your applications are containerized with Docker you’re moving in the right direction to make things easier! Containers provide a ton of flexibility and portability, but they can become a nightmare once you realize the pain of container management. One thing that makes it easier is a standard container tagging strategy, which provides common assumptions and vernacular amongst the team.

Docker container build pipelines, tagging strategies, and CI/CD should go hand-in-hand.

Do I Need a Better Tagging Strategy?

You might want to rethink your Docker Image tagging strategy if you don’t immediately know the answer to the following questions:

  1. “What git commit hash of our app is currently running in production?”
  2. “Which container version in our registry is currently running in production?”

Strategy: Release Candidate Lifecycle Tagging

The tagging method I find most attractive is what I call “Release Candidate Lifecycle Tagging”. The tag values follow a flavor of release-candidate terminology along the delivery pipeline, similar to the following:

Build Stage | Tag Value(s) | Development Stage
Initial Build | <Commit Hash>, unstable | “Alpha”
Passed Tests (Contract, Integration, Service) | stable | “Release Candidate”
Deployed to Production + Smoke Tested | live | “GA” (General Availability)

What it Looks like in a Build Pipeline:

In the following example of a release of the app “app”, the current git checkout SHA hash is “ff613f”. Initially building the Docker Image with the git SHA hash is a pivotal piece: it lets teams know where and how to check out the exact version of the application for local or remote debugging.

A CI/CD build pipeline with incremental image tagging.
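As a concrete sketch of the promotion flow with plain docker commands (the registry host “registry.example.com” and the app name are illustrative placeholders):

# Initial build: tag with the commit hash and "unstable" ("Alpha")
docker build -t registry.example.com/app:ff613f .
docker tag registry.example.com/app:ff613f registry.example.com/app:unstable
docker push registry.example.com/app:ff613f
docker push registry.example.com/app:unstable

# After contract/integration/service tests pass ("Release Candidate"):
docker tag registry.example.com/app:ff613f registry.example.com/app:stable
docker push registry.example.com/app:stable

# After a production deploy and smoke tests ("GA"):
docker tag registry.example.com/app:ff613f registry.example.com/app:live
docker push registry.example.com/app:live

Note that tags are additive: one image can carry the commit hash and a lifecycle tag simultaneously, so both questions at the top of this post are answered with a single registry lookup.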

Taking it Further

Post-production Tags

With canary or blue/green deployments, additional tagging stages could be added incrementally to reflect not only that containers have made it to production, but that they have reached levels of validity or traffic-based performance thresholds.

Retiring Images

Once an app image has been replaced by its upgraded successor, the previous image needs to remain in the Docker registry for an arbitrary amount of time in case a rollback is required. This can be accomplished by adding another tag after the image is retired, like “retired-<RETIRED_DATE>”. A reaping process could then take advantage of this tag and only remove images older than X days, as sketched below.
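A hedged sketch of what that could look like (registry host, date format, and retention policy are illustrative):

# When the new release goes live, mark the outgoing image for future cleanup:
docker tag registry.example.com/app:ff613f registry.example.com/app:retired-2018-03-01
docker push registry.example.com/app:retired-2018-03-01
# A scheduled reaper job can then list "retired-*" tags in the registry
# and delete any image whose retired date is more than X days in the past.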

 


Credit

I want to give a shout-out to Daniel Nephin, as his detailed and explanatory Github comments and issue discussions have helped me resolve many issues around Docker and container strategy.


From Junior to Senior: Software Engineering Must-Knows

* This is a living document and will be updated over time *

Why these Resources?

Along a software developer’s journey from post-grad to seasoned vet, you come across articles and literature that enlighten you, propelling your skills forward by miles rather than inches. This is a collection of those essential resources that I feel a software engineer should know to be an informed, efficient, and effective engineer.

Contents

  1. Maintaining Clean Code
  2. Database Design
  3. Lean Engineering
  4. Testing
  5. Technical Decision Making
  6. Managing Deployments
  7. Container Orchestration
  8. JVM
    1. JVM Tuning
    2. Scala
  9. Machine Learning

Resources

1. Maintaining Clean Code

Clean Code (Book by Robert Martin)

“Clean Code” is one of those books that after reading it, you come out with an immediate feeling of both excitement (You know how to write maintainable code now!), and regret (you realize the code you have been writing your whole life is smelly!). While a few chapters are pretty dated technically, it successfully outlines sound practices to maintain hygienic object oriented codebases that can be borrowed for other programming paradigms. This book is a must-know!

https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882

Dependency Injection (DI)/Inversion of Control (IoC)

https://www.martinfowler.com/articles/injection.html

2. Database Design

Normalization

Normalization is easy to skip early on, but its effects are tough to ignore later down the road. When designing databases, five extra minutes spent thinking about and adhering to normalization will save days, if not weeks, of redesign and data integrity issue resolution later on. Trust me.
Short walkthrough on Normalization:

http://www.informit.com/articles/article.aspx?p=30646
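The linked walkthrough covers the formal normal forms; as a loose illustration of the core idea, here is a sketch using Scala case classes standing in for tables (the domain is invented):

// Denormalized: the artist's name and country are repeated on every album row,
// so renaming an artist means updating many rows (an update anomaly).
case class AlbumRowDenormalized(albumTitle: String, artistName: String, artistCountry: String)

// Normalized: each fact lives in exactly one place, referenced by key.
case class Artist(id: Int, name: String, country: String)
case class Album(id: Int, title: String, artistId: Int) // artistId references Artist.id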

3. Lean Engineering

Implementing Lean Software Development (Book by Mary and Tom Poppendieck)

http://tinyurl.com/y9xdf7ed

4. Testing

Testing Quadrants

Those needing to prune their test suites, or cherry-pick certain testing practices into their operations, can benefit from the “Agile Testing Quadrants” diagram. It outlines each test type’s organizational boundaries, initiation mechanism, and outcomes.

http://searchsoftwarequality.techtarget.com/tip/Agile-testing-quadrants-Guiding-managers-and-teams-in-test-strategies

Is Unit Testing Worth it?

Chances are you eventually started work at a company whose culture had a baked-in focus on quality, where you set off following orders to test and realized the benefits later. For some of you, though, you are one of the testing thought-leaders at your organization and have to sell the benefit! This article gives you the points that express why unit testing is more than a nicety.

https://stackoverflow.com/questions/67299/is-unit-testing-worth-the-effort

Testing in a Microservices Architecture

https://martinfowler.com/articles/microservice-testing/

5. Technical Decision Making

Building Consensus Before Commitment

Encroaching on the famed “How to Win Friends and Influence People” genre, this article explains how and why you should take a holistic approach to presentations and to decisions that affect multiple organizations.

https://www.kitchensoap.com/2017/08/12/multiple-perspectives-on-technical-problems-and-solutions/

Technology Radar

A must in every developer’s toolkit. The Thoughtworks team hand-curates languages, frameworks, and practices that organizations should adopt, trial, and assess.

https://www.thoughtworks.com/radar

Site Reliability Engineering Learnings

http://danluu.com/google-sre-book/

6. Managing Deployments

Continuous Integration

https://www.thoughtworks.com/continuous-integration

Git Workflows

https://www.atlassian.com/git/tutorials/comparing-workflows

Terraform Up-and-Running

While not critical to know intimately, Terraform is an amazing option as a multi-PaaS hosting framework and infrastructure-as-code management tool.

https://www.terraformupandrunning.com/

7. Container Orchestration

Kubernetes vs ECS

While this article will quickly grow stale, it is a great comparison of two of the leaders in cloud container orchestration and hosting.

https://platform9.com/blog/kubernetes-vs-ecs/

8. JVM

Scala

Class and Package Naming Strategies

While we all like to think we always execute the best file and class packaging practices, this naming and scoping refresher from Nikita Volkov can keep you sharp!

https://stackoverflow.com/questions/17121773/scalas-naming-convention-for-traits

Scala Interview Questions

https://www.journaldev.com/8958/scala-interview-questions-answers

Effective Scala

http://twitter.github.io/effectivescala

Profiling

Extensive Learnings from JVM Performance Tuning

https://www.infoq.com/presentations/JVM-Performance-Tuning-twitter

Profiling with VisualVM

This tool is awesome for investigating how Java options affect performance, and for getting a feel for your app’s overall health.

Walkthrough: https://www.youtube.com/watch?v=z8n7Bg7-A4I

https://visualvm.github.io/documentation.html

9. Machine Learning

10 Algorithms Software Engineers must know

https://www.kdnuggets.com/2016/08/10-algorithms-machine-learning-engineers.html

Disclaimer on References

The resources in this list are intended to be self-referencing; the original authors are the ones due an immense amount of credit.

Think a resource should be added to this article? Please submit it here:


Be like Bill. Don’t cherry pick Technologies!

(Image: the “Be like Bill” meme.)

Resist the urge! Include the entire business in your local decisions and focus on optimizing the constraints you have now, not those in your “future”.


Why Spark can’t foldLeft: Monoids and Associativity.

Apache Spark is the elephant in a room full of data processing engines, yet Spark does not supply a foldLeft() or foldRight() method on its RDD class. Strange, right? Such a fundamental collection method. How could it be forgotten? Or was this not an accident?

(The original code screenshot is gone; it showed a scoreAverageByPlayer() that would take an RDD of player scores and return an RDD of tuples of each player with their average score. Note: foldLeft() is not an available method on the scores RDD class.)

Remembering associativity

Let’s dig deeper, because the realization of the answer is more useful than the answer itself.

Back to algebra class we go! Associativity is one of the many algebraic properties underpinning functional mathematics and therefore functional programming. Without understanding it, we cannot truly appreciate parallelism in computing and its limitations.

Mathematical Associativity:
"When the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is, rearranging the parentheses in such an expression will not change its value."
The following expressions are associative:

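The original snippet was an image; a minimal Scala REPL reconstruction of the idea (the specific numbers are illustrative):

scala> val res1 = (1 + 2) + 3
res1: Int = 6

scala> val res2 = 1 + (2 + 3)
res2: Int = 6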

Even though the parentheses were rearranged in the equation for res2, the values of res1 and res2 remained equivalent. It can then be said that the act of addition of real numbers is an associative operation.

How Spark achieves Parallelism

In order for Spark to become a leader in computational speed, it needed to incorporate operational parallelism. Parallelism will ultimately be the reason foldLeft is not found on the RDD class.

Clustering

At a high level, Spark clusters computational “worker” nodes or machines, partitions the data to be computed on in the master, distributes the data partitions from the master to the worker nodes where the computations are done on each node’s respective shard of data, then aggregates the resulting dataset(s) on the master node.

Parallelization

You can force Spark to parallelize computation on an RDD by using parallelize() on a SparkContext.

val scores = Array(68, 71, 73)
val parScores = sc.parallelize(scores)

Below is a function f being applied to an input dataset concurrently on a spark cluster. This can be thought of as a map transformation.

Parallelization Visualized


Parallelizing reduce() in Spark

Let’s look at how spark parallelizes the reduce operation on an RDD.

reduce() from the Spark Documentation

Action | Meaning
reduce(func) | Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel.

“The function should be commutative and associative so that it can be computed correctly in parallel.”

Signature of reduce()

def reduce[A](op: (A, A) => A): A

This reads:

Execute the function “op” on each element (type A), with the result of the previous op computation (accumulator of type A) and respective element (type A) as inputs, eventually returning the resulting accumulator value from the last iteration (type A).

Spark’s reduce() in action

Now let’s say we have a set of “score” integers and want to determine the lowest score. We can execute a reduce action on the RDD with a monoid findMin() (more on monoids later) as an operational parameter to solve this.

In code we would solve this like:

val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
def findMin(first: Int, second: Int): Int = first.min(second)

val min = distData.reduce(findMin)

This would evaluate in Spark as:

(Diagram: the reduce executing as pairwise findMin operations combined in parallel.)

Monoids

We know what parallelism looks like in Spark, but why can’t we use foldLeft()? This will come together, but we need to understand Monoids first.

The laws surrounding Monoids are tightly coupled to associativity, and state that a Monoid:

  • Is of some type A
  • Consists of an operation, op, taking two values of type A and combining them into a single value of type A, such that op(op(x, y), z) == op(x, op(y, z)) for any x, y, and z of type A (associativity)
  • Has an identity element, zero, that maintains: op(x, zero) == x and op(zero, x) == x for any x of type A
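These laws translate directly to code. A minimal Scala sketch (the trait and the instance are illustrative, not from the original post):

trait Monoid[A] {
  def op(x: A, y: A): A // must be associative
  def zero: A           // identity element for op
}

// Example instance: integer minimum, with Int.MaxValue as the identity,
// since op(x, Int.MaxValue) == x for any Int x.
val minMonoid: Monoid[Int] = new Monoid[Int] {
  def op(x: Int, y: Int): Int = x.min(y)
  val zero: Int = Int.MaxValue
}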

Balanced Folds

reduce() can be categorized as a balanced fold, or a fold that allows for parallelism. Compare the following for the Sequence (a, b, c, d).

foldLeft() with the operation op would look like:

op(op(op(a, b), c), d)

While a balancedFold looks like:

op(op(a, b), op(c, d))

Can you see how this operation could be parallelized with a fork-join data structure?
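Scala’s parallel collections make this concrete: reduce on a parallel collection combines results in exactly this balanced, tree-shaped fashion. A small sketch, reusing the findMin from earlier:

def findMin(first: Int, second: Int): Int = first.min(second)

// .par converts the List to a parallel collection; its reduce may fork
// pairwise combinations across threads, precisely the balanced shape above,
// because findMin is associative.
val min = List(66, 72, 81, 68).par.reduce(findMin)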

Thinking of reduce() as a Balanced Fold

If we look back at the reduce of the findMin operation, its operational execution looks like:

Parallelizing reduce - Visualized

This is a balanced fold! Which can be written as either:

List(
  List(66,72).reduce(findMin),
  List(81,68).reduce(findMin)
).reduce(findMin)

or:

List(66,72,81,68).reduce(findMin)

foldLeft()/foldRight()

foldLeft/right are methods made available on many monadic collections. However, let’s focus on the List collection which provides the following signature and implementation of foldLeft():

def foldLeft[B](z: B)(op: (B, A) => B): B =
  if (this.isEmpty) z
  else tail.foldLeft(op(z, head))(op)

This reads:

foldLeft returns a value of type B: it takes an initial accumulator z of type B, applies the operation op to the accumulator (type B) and each element (type A) in turn, and eventually returns the final accumulator of type B.

foldLeft()’s operation is NOT a Monoid

We know that foldLeft’s operation has a non-Monoidal signature: op: (B, A) => B combines two different types, so the monoid laws cannot hold for it. But why again is this not a transformation supported by Spark? Simply put, it’s because foldLeft is not sufficiently parallelizable!

Looking back at the associativity law (the second Monoid law above), it states the following must be true:

op(op(x,y), z) == op(x, op(y,z))

This law is what drives the ability to parallelize. Spark can fork a monoidal operation across a dataset into n number of operations and join the resulting values within the master. This fork-join parallelization results in a best-case decrease in execution time by a factor of n.

foldLeft() cannot be Parallelized

If we pretend foldLeft were a transformation available on Spark collections and visually walk through its execution, we can more easily understand its limitations.

Given the following collection and transformation, let’s see it in action:

val nums = List(2.2, 3.3, 4.4)
nums.foldLeft(1)((agg, next) => (agg * next).toInt)

Parallelizing foldLeft - Visualized

We can see why foldLeft() was never implemented within Spark, as the fork-join execution model would still result in serial blocking of computation!
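Worth noting: Spark’s RDD does provide fold() and aggregate(). aggregate() even allows the result type to differ from the element type, but both of its operations must remain associative so partial results can be merged across partitions. A small sketch, assuming a SparkContext named sc:

// Averaging scores with aggregate(): seqOp folds elements into a
// (sum, count) accumulator within each partition, while combOp merges
// accumulators across partitions and must be associative.
val scores = sc.parallelize(Seq(66, 72, 81, 68))
val (sum, count) = scores.aggregate((0, 0))(
  (acc, score) => (acc._1 + score, acc._2 + 1), // seqOp
  (a, b) => (a._1 + b._1, a._2 + b._2))         // combOp
val average = sum.toDouble / count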


Zookeeper in AWS: Practices for High Availability with Exhibitor


Overview

Zookeeper is a distributed, sequentially consistent system developed to attack many of the tough use cases surrounding distributed systems, such as leader election in a cluster, configuration, and distributed locking. For more Zookeeper recipes visit: http://zookeeper.apache.org/doc/current/recipes.html. Zookeeper clusters (ensembles) can be made up of any number of nodes, but typically take the form of a three or five node ensemble, where a minority of nodes can fail while the Zookeeper service continues serving traffic.

How we use Zookeeper

On the Search team at Careerbuilder we use Zookeeper as a configuration service replacing static configuration file deployment. We also have plans to move to Solr Cloud, which requires a Zookeeper Ensemble for its election and configuration tasks. Below is a system diagram for the entire Zookeeper/Exhibitor service in AWS with an auto-scaling group and consuming client libraries.

(Diagram: the Zookeeper/Exhibitor service in AWS with an auto-scaling group and consuming client libraries.)
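As a loose illustration of the configuration-service use case, a client could read a value with the Apache Curator library (a hypothetical sketch; the connection string and znode path are placeholders):

import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry

// Connect to the ensemble with retries, read one config znode, and close.
val client = CuratorFrameworkFactory.newClient(
  "zk1:2181,zk2:2181,zk3:2181",         // ensemble connection string (placeholder hosts)
  new ExponentialBackoffRetry(1000, 3)) // 1s base sleep, up to 3 retries
client.start()
val endpoint = new String(client.getData.forPath("/config/search/endpoint"))
client.close()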

Difficulties with Zookeeper

While Zookeeper provides a high level of reliability and availability through redundancy and leader election patterns, its critical use cases introduce a high level of risk for outages and mass failures. To mitigate much of that risk, the open source Exhibitor application should be used as a supervisory process to monitor and restart each Zookeeper process, as well as provide data backups, recovery, and automatic ensemble configuration. Exhibitor will be explained in more detail later.

Exhibitor: https://github.com/soabase/exhibitor/wiki.

Assumptions

  • Deployment will take advantage of AWS resources (Currently avoiding containerization)
  • Exhibitor will be deployed alongside Zookeeper to serve as its supervisory process and management UI (https://github.com/soabase/exhibitor/wiki)

Deployment

Infrastructure as Code: Frameworks

Infrastructure automation can be achieved through many open and closed source frameworks such as Ansible, Puppet, and Chef. For deployments on the Search team at Careerbuilder we use Chef to set up environments on self-hosted boxes in AWS. The Chef + Ruby combo is a robust tool for managing infrastructure as code, and I highly suggest it: https://learn.chef.io/.

Artifacts

To deploy a Zookeeper ensemble, the following artifacts and dependencies must be installed on each node in the cluster. After each artifact is built, it can be uploaded to an artifact repository (in our case, an S3 bucket populated by a continuous integration build) to be downloaded and installed by Chef during deployment.

  • JAVA

    • Zookeeper and Exhibitor are Java Virtual Machine (JVM) applications, therefore Java must be installed on all machines running them.
  • Zookeeper

  • Exhibitor

    • Exhibitor can be run as a WAR file or with an embedded Jetty server. After finding the Tomcat/WAR based server cumbersome, we chose the Jetty-based Exhibitor moving forward, and it has paid off through its ease of configuration.
    • Steps on how to build the Exhibitor artifact can be found on Github: https://github.com/soabase/exhibitor/wiki/Building-Exhibitor

Cloud Formation

At a higher level of infrastructure as code lies the ability to orchestrate resource allocation, bring-up, and system integration. AWS CloudFormation is a means to perform these deployment actions internal to AWS.

Generic Templates

By generalizing CloudFormation template files for your specific use cases and parameterizing them, you can use a homegrown application or Integration processes to regularly phoenix or A-B deploy your resources with ease.

Here at Careerbuilder we have developed an in-house JAVA application known as Nimbus that pulls our generic CF Stack templates from Github, populates their parameters with values from parameter files and triggers a CloudFormation Stack creation in AWS. This abstracts a lot of CloudFormation’s unused Stack template complexity.

Cluster Discovery through Instance Tagging

In many cases a clustered distributed system requires assigning an instance index to each machine, due either to dataset sharding or to static clustering. It is possible to deploy Zookeeper with Exhibitor in a static ensemble that does not utilize a shared S3 exhibitor.properties configuration file (more on this later), and this would require instance tagging. Each node could be tagged with an attribute, say {“application”=”zookeeper-development”}, by CloudFormation to be used as a selector during bring-up. The Chef/Ansible/Puppet process running on each node could then acquire the IPs of the other nodes in the ensemble by looping through a query-check-sleep cycle until the correct number of Zookeeper boxes are up and running. However, this deployment model is clumsy and error-prone, so the shared Exhibitor configuration file in S3 is highly suggested, as server IPs are added to and removed from the config file dynamically.

Starting Exhibitor and Zookeeper

Exhibitor will start and restart the Zookeeper process periodically during clustering tasks and rolling configuration changes. Therefore, all you need to do is start Exhibitor on each node; each node will start Zookeeper, then register itself with the shared config file specified by the “--s3config” parameter. Below is some example Ruby/Chef code that can be used to start Exhibitor:

execute "Start Exhibitor" do
  command "nohup java -jar #{node[:search][:exhibitor_dir]}/exhibitor.jar " \
          "--hostname #{node['ipaddress']} --configtype s3 --s3config #{bucket} --s3backup true " \
          "> /opt/search/zookeeper/exhibitorNohup.log 2> /opt/search/zookeeper/exhibitorNohup.error.log &"
  action :run
end

Since Exhibitor manages the Zookeeper process, it is important not to create any configuration files manually, nor to expect any manual configurations made to Zookeeper to persist after deployment or during runtime.

Exhibitor

Security

Securing Exhibitor/Zookeeper is an extensive topic left to the reader’s specific implementation. However, the Exhibitor wiki lists command parameters that can be used to enable and configure security features within Exhibitor, and I suggest giving it a look.

https://github.com/soabase/exhibitor/wiki/Running-Exhibitor

Configuration

A typical Exhibitor bucket root will look like the following, with an exhibitor.properties config file that you have uploaded containing a default level of configuration. I suggest also maintaining a “last known default” config file in a “base-config” directory as a backup of exhibitor.properties, in the event it gets corrupted.

(Screenshot: the S3 bucket root listing exhibitor.properties alongside a base-config/ directory.)

An example exhibitor.properties file:

com.netflix.exhibitor-hostnames=
com.netflix.exhibitor-hostnames-index=0
com.netflix.exhibitor.auto-manage-instances-apply-all-at-once=1
com.netflix.exhibitor.auto-manage-instances-fixed-ensemble-size=5
com.netflix.exhibitor.auto-manage-instances-settling-period-ms=60000
com.netflix.exhibitor.auto-manage-instances=1
com.netflix.exhibitor.backup-extra=
com.netflix.exhibitor.backup-max-store-ms=86400000
com.netflix.exhibitor.backup-period-ms=60000
com.netflix.exhibitor.check-ms=30000
com.netflix.exhibitor.cleanup-max-files=3
com.netflix.exhibitor.cleanup-period-ms=43200000
com.netflix.exhibitor.client-port=2181
com.netflix.exhibitor.connect-port=2888
com.netflix.exhibitor.election-port=3888
com.netflix.exhibitor.java-environment=
com.netflix.exhibitor.log-index-directory=/opt/search/zookeeper/logIndex/
com.netflix.exhibitor.log4j-properties=
com.netflix.exhibitor.observer-threshold=999
com.netflix.exhibitor.servers-spec=
com.netflix.exhibitor.zoo-cfg-extra=syncLimit\=5&tickTime\=2000&initLimit\=10
com.netflix.exhibitor.zookeeper-data-directory=/opt/search/zookeeper_data
com.netflix.exhibitor.zookeeper-install-directory=/opt/search/zookeeper
com.netflix.exhibitor.zookeeper-log-directory=/opt/search/zookeeper

Pro Tip: A great benefit we have found is the ability to modify Zookeeper Java Environment settings through Exhibitor for performance and Garbage Collection tuning.

Ensemble Registration

The following images outline the steps of the consensus process during Ensemble deployment.

(Diagrams: Step 1, initial gossip; Step 2, first node added; Step 3, IP is pulled down; Step N, all boxes are up.)

Self Healing

The following images outline the event of a node failure in an auto-managed ensemble.

(Diagrams: Step 1, node falls out of service; Step 2, the ASG brings a new node up; Step 3, the new node is registered; Step 4, denial of re-entry.)


A Web Server in 5 Minutes with Scala + Jetty + SBT

Recently, I was tasked with developing a load generation tool on top of Twitter’s open source Iago project. I initially validated the request rates of the app using a separate local Play! app as a victim server, with restful endpoints summing the requests. But this setup wasn’t going to cut it within my acceptance test suite.

Solution: Embedded Jetty Server.

Goal

  • Programmatically stand up a web server with minimal restful endpoints/routes, as concisely as possible.

Overview

For this example our Jetty server will act as a “Counter” and have the following characteristics:

  • Listen on a random port
  • Two restful endpoints
    • /increment
      • Will increment the counter by one and return the new count in the response body
    • /reset
      • Will reset the counter back to zero
Embedded Jetty Server Component Diagram

Prerequisites

  • An SBT/Maven based Scala Project
    • Scala version 2.11.8

           Note: Setting up a project is out of the scope of this article

Steps:

1. Add Project Dependencies

Jetty has changed ownership a few times (it currently lives at Eclipse), and you can get Jetty working in a number of ways, but this example assumes the latest Jetty release from Eclipse.

Maven

<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-server</artifactId>
  <version>9.3.12.v20160915</version>
</dependency>
<dependency>
  <groupId>org.eclipse.jetty</groupId>
  <artifactId>jetty-servlet</artifactId>
  <version>9.3.12.v20160915</version>
</dependency>

build.sbt

libraryDependencies ++= Seq(
"org.eclipse.jetty" % "jetty-servlet" % "9.3.12.v20160915",
"org.eclipse.jetty" % "jetty-server" % "9.3.12.v20160915"
)

2. Create a Jetty manager Object

The first thing we’ll do is create an object that will contain the Jetty Server, configuration variables, and the Servlets (which hold the route definitions). For this example I named it JettyExample.

Since the server will have two functions:

  1. Incrementing the counter
  2. Resetting the counter

we should define the string literals for those routes.


object JettyExample {
  val incrementRoute = "/increment"
  val resetRoute = "/reset"

  def main(args: Array[String]) = {
  }
}

3. Add Helper Functions

Add a createServer() function and assign its result to a server variable scoped to the object. The Server class will need an import reference as well.

Tip: You could add action methods similar to createServer() that Start or Stop the server, or execute handler functionality programmatically rather than through web requests.


import org.eclipse.jetty.server.Server

object JettyExample {
  val server = createServer()
  {…}
  def createServer() = new Server(0) // 0 for random port
}

Since the server was told to start on a random port, we need a function that grabs this port from the server instance. We will use this to print the port number later. Also, add the import reference to NetworkConnector.


import org.eclipse.jetty.server.{NetworkConnector, Server}

object JettyExample {
  {…}
  def port() = {
    val conn = server.getConnectors()(0).asInstanceOf[NetworkConnector]
    conn.getLocalPort()
  }
}

4. Create the Servlets

Servlet Container

CREATE THE SERVLETS! Wait, wait, wait… first let’s create a Servlet container object inside our JettyExample that will encapsulate our private counter variable as well as our endpoint Servlets. Don’t forget we need a thread-safe variable to eliminate race conditions.


import java.util.concurrent.atomic.AtomicInteger

object JettyExample {
  {…}
  object CounterServlets {
    private val requestCount = new AtomicInteger(0) // encapsulate the state in a thread-safe way
    // TODO: Servlet classes to go here
  }
}

IncrementServlet

Add the local requestCount variable, and the Servlet that handles the increment logic and then returns HTML containing the new counter value. This Servlet will serve GET request types. Also, you must include the imports required for defining an HttpServlet.


import java.util.concurrent.atomic.AtomicInteger
import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

object JettyExample {
  {…}
  object CounterServlets {
    private val requestCount = new AtomicInteger(0)

    class IncrementServlet extends HttpServlet {
      override protected def doGet(request: HttpServletRequest, response: HttpServletResponse): Unit = {
        requestCount.getAndIncrement()
        response.setContentType("text/html")
        response.setStatus(HttpServletResponse.SC_OK)
        response.getWriter().println(s"<h2>Increment performed. Count is now $requestCount.</h2>")
      }
    }
  }
}

ResetServlet

Add another HttpServlet titled “ResetServlet”. This Servlet will reset the counter, and return an OK message and HTML stating the counter has been reset.

At this point the Servlet Classes are defined, but they are not bound to the server with a ServletHandler as a route definition. We’ll do that next.


{…}
object JettyExample {
  {…}
  object CounterServlets {
    {…}
    class ResetServlet extends HttpServlet {
      override protected def doGet(request: HttpServletRequest, response: HttpServletResponse): Unit = {
        requestCount.set(0) // reset, rather than increment
        response.setContentType("text/html")
        response.setStatus(HttpServletResponse.SC_OK)
        response.getWriter().println(s"<h2>Counter reset to 0.</h2>")
      }
    }
  }
}

5. Bind the Servlets with a ServletHandler

Before adding the handler bindings, add an import declaration for the CounterServlets internal to the JettyExample so we have access to the Servlet classes.

Now add a ServletHandler variable to the JettyExample scope. Within the main method assign this handler to the current server instance, and add the increment Servlet to the server after being mapped to the route “/increment”. Do the same with the reset Servlet. ServletHandler requires one import.

Both endpoints are now configured, the last thing to do is Start the server, and have the server wait to terminate.


import org.eclipse.jetty.servlet.ServletHandler

object JettyExample {
  {…}
  import CounterServlets._

  val incrementRoute = "/increment"
  val resetRoute = "/reset"
  val handler = new ServletHandler()

  def main(args: Array[String]) = {
    server.setHandler(handler)
    handler.addServletWithMapping(classOf[IncrementServlet], incrementRoute)
    handler.addServletWithMapping(classOf[ResetServlet], resetRoute)
  }
}

6. Start the Server

Fire it up! Start the server and have it wait for termination. We also should print out some diagnostic info about the port so we can hit the endpoint from a browser or other HTTP client.

Note: server.join() blocks the main thread until the Jetty server thread terminates, keeping the JVM alive while the server runs.


{…}
object JettyExample {
  {…}
  val server = createServer()

  def main(args: Array[String]) = {
    {…}
    server.start()
    println(s"Server started on ${port()} with endpoints: '$incrementRoute' and '$resetRoute'")
    server.join()
  }
}

When you start the server you’ll see a terminal printout similar to:

(Screenshot: terminal output showing the server startup and its chosen port.)

In a browser, go to http://localhost:{your port}/increment. Then refresh the page a few times. You should see something similar to the following:

(Screenshot: the browser showing the increment message with the running count.)

To reset the count go to http://localhost:{your port}/reset . You should see

(Screenshot: the browser showing the counter-reset message.)

Complete Code


import org.eclipse.jetty.servlet.ServletHandler
import org.eclipse.jetty.server.{NetworkConnector, Server}
import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}
object JettyExample {
import CounterServlets._
val incrementRoute = "/increment"
val resetRoute = "/reset"
val server = createServer()
val handler = new ServletHandler()
def main(args: Array[String]) ={
server.setHandler(handler)
handler.addServletWithMapping(classOf[IncrementServlet], incrementRoute)
handler.addServletWithMapping(classOf[ResetServlet], resetRoute)
server.start()
println(s"Server started on ${port()} with endpoints: '$incrementRoute' and '$resetRoute'")
server.join()
}
def port() = {
val conn = server.getConnectors()(0).asInstanceOf[NetworkConnector]
conn.getLocalPort()
}
def createServer() = new Server(0)
object CounterServlets{
private var requestCount: Int = AtomicInteger(0)
class IncrementServlet extends HttpServlet {
override protected def doGet(request: HttpServletRequest, response: HttpServletResponse):Unit = {
requestCount.getAndIncrement() // Thread-Safe Increment
response.setContentType("text/html")
response.setStatus(HttpServletResponse.SC_OK)
response.getWriter().println(s"<h2>Increment received. Count is now $requestCount.</h2>")
}
}
class ResetServlet extends HttpServlet {
override protected def doGet(request: HttpServletRequest, response: HttpServletResponse):Unit = {
requestCount.getAndIncrement() // Thread-Safe Increment
response.setContentType("text/html")
response.setStatus(HttpServletResponse.SC_OK)
response.getWriter().println(s"<h2>Counter reset to 0.</h2>")
}
}
}
}

 

Resources:

Maven Jetty Server Repo

Twitter’s Iago | Load Generation for Engineers