Overview
Zookeeper is a distributed, sequentially consistent system developed to address the tough use cases that surround distributed systems, such as leader election in a cluster, configuration management, and distributed locking. For more Zookeeper recipes visit: http://zookeeper.apache.org/doc/current/recipes.html. Zookeeper clusters (ensembles) can be made up of any number of nodes, but typically take the form of a three or five node ensemble; as long as only a minority of nodes fails, a quorum remains and the Zookeeper service continues serving traffic (for example, a five node ensemble tolerates two failed nodes).
How we use Zookeeper
On the Search team at Careerbuilder we use Zookeeper as a configuration service, replacing static configuration file deployment. We also plan to move to SolrCloud, which requires a Zookeeper ensemble for its election and configuration tasks. Below is a system diagram for the entire Zookeeper/Exhibitor service in AWS with an auto-scaling group and consuming client libraries.
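To make the "configuration service" idea concrete, here is a minimal sketch of a consuming client reading a configuration value directly from the ensemble instead of from a deployed file. It assumes the Ruby 'zk' gem, a placeholder connection string, and a hypothetical /config/search/feature-flags znode; it is illustrative, not our production client code.

require 'zk'  # Ruby Zookeeper client gem (illustrative choice)

# Connect to the ensemble; the host list is a placeholder.
zk = ZK.new('zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181')

# Read a hypothetical configuration znode and set a watch so the client
# is notified when the value changes.
data, _stat = zk.get('/config/search/feature-flags', watch: true)
puts "current config: #{data}"

zk.close!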
Difficulties with Zookeeper
While Zookeeper provides a high level of reliability and availability through redundancy and leader election patterns, its critical use cases introduce a high level of risk for outages and mass failures. To mitigate much of this risk, the open source Exhibitor application should be used as a supervisory process that monitors and restarts each Zookeeper process, and that provides data backups, recovery, and automatic ensemble configuration. Exhibitor is explained in more detail later.
Exhibitor: https://github.com/soabase/exhibitor/wiki.
Assumptions
- Deployment will take advantage of AWS resources (Currently avoiding containerization)
- Exhibitor will be deployed alongside Zookeeper to serve as its supervisory process and management UI (https://github.com/soabase/exhibitor/wiki)
Deployment
Infrastructure as Code: Frameworks
Infrastructure automation can be achieved through many open and closed source frameworks such as Ansible, Puppet, and Chef. For deployments on the Search team at Careerbuilder we use Chef to set up environments on self-hosted boxes in AWS. The Chef + Ruby combination is a robust way to manage infrastructure as code, and I highly recommend it: https://learn.chef.io/.
Artifacts
To deploy a Zookeeper ensemble, the following artifacts and dependencies must be installed on each node in the cluster. After each artifact is built, it can be uploaded to an artifact repository (in our case an S3 bucket, populated by a continuous integration build) and later downloaded and installed by Chef during deployment; a sketch of that download step follows the list below.
- Java
  - Zookeeper and Exhibitor are Java Virtual Machine (JVM) applications; therefore, Java must be installed on all machines running them.
- Zookeeper
  - Download and install from https://zookeeper.apache.org/releases.html
- Exhibitor
  - Exhibitor can be run as a WAR file or with an embedded Jetty server. After finding the Tomcat/WAR-based server cumbersome, we chose the Jetty-based Exhibitor, and it has paid off through its ease of configuration.
  - Steps for building the Exhibitor artifact can be found on GitHub: https://github.com/soabase/exhibitor/wiki/Building-Exhibitor
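As a rough sketch of the Chef download-and-install step mentioned above, the recipe can be as simple as a few package, remote_file, and execute resources. The bucket URL, artifact file names, and OpenJDK package name below are placeholder assumptions, not our actual cookbook:

# Install Java (package name is platform dependent; OpenJDK shown as an assumption).
package 'java-1.8.0-openjdk'

directory '/opt/search/exhibitor' do
  recursive true
end

directory '/opt/search/zookeeper' do
  recursive true
end

# Pull the CI-built artifacts from the artifact bucket (placeholder URLs).
remote_file '/opt/search/zookeeper.tar.gz' do
  source 'https://s3.amazonaws.com/example-artifact-bucket/zookeeper-3.4.x.tar.gz'
end

remote_file '/opt/search/exhibitor/exhibitor.jar' do
  source 'https://s3.amazonaws.com/example-artifact-bucket/exhibitor.jar'
end

# Unpack Zookeeper into the install directory Exhibitor will manage.
execute 'extract zookeeper' do
  command 'tar -xzf /opt/search/zookeeper.tar.gz -C /opt/search/zookeeper --strip-components=1'
  creates '/opt/search/zookeeper/bin/zkServer.sh'
end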
Cloud Formation
At a higher level of infrastructure as code lies the ability to orchestrate resource allocation, bring-up, and system integration. AWS CloudFormation is a means of performing these deployment actions natively within AWS.
Generic Templates
By generalizing CloudFormation template files for your specific use cases and parameterizing them, you can use a homegrown application or integration processes to regularly phoenix (tear down and rebuild) or A/B deploy your resources with ease.
Here at Careerbuilder we have developed an in-house Java application known as Nimbus that pulls our generic CloudFormation stack templates from GitHub, populates their parameters with values from parameter files, and triggers a CloudFormation stack creation in AWS. This abstracts away much of the stack template complexity we do not use.
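As a simplified illustration of what such a generic template can look like, the fragment below parameterizes the environment name and ensemble size and applies the application tag used later for discovery. The resource names and values are hypothetical, and the referenced launch configuration is omitted; this is not our actual template.

{
  "Parameters": {
    "Environment":  { "Type": "String", "Default": "development" },
    "EnsembleSize": { "Type": "Number", "Default": 3 }
  },
  "Resources": {
    "ZookeeperGroup": {
      "Type": "AWS::AutoScaling::AutoScalingGroup",
      "Properties": {
        "MinSize": { "Ref": "EnsembleSize" },
        "MaxSize": { "Ref": "EnsembleSize" },
        "LaunchConfigurationName": { "Ref": "ZookeeperLaunchConfig" },
        "AvailabilityZones": { "Fn::GetAZs": "" },
        "Tags": [
          { "Key": "application",
            "Value": { "Fn::Join": ["-", ["zookeeper", { "Ref": "Environment" }]] },
            "PropagateAtLaunch": true }
        ]
      }
    }
  }
}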
Cluster Discovery through Instance Tagging
In many cases you must assign an instance index to each machine in a clustered distributed system, either because of dataset sharding or static clustering. It is possible to deploy Zookeeper with Exhibitor as a static ensemble that does not use a shared exhibitor.properties configuration file in S3 (more on this later), and this approach requires instance tagging. CloudFormation can tag each node with an attribute, say {"application"="zookeeper-development"}, to be used as a selector during bring-up. The Chef/Ansible/Puppet process running on each node can then acquire the IPs of all nodes in the ensemble by running a query-check-sleep cycle until the expected number of Zookeeper boxes is up and running (a rough sketch follows below). However, this deployment model is clumsy and error prone, so the shared Exhibitor configuration file in S3 is highly recommended, since server IPs are added to and removed from the config file dynamically.
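For completeness, a tag-based discovery loop of the kind described above might look roughly like this, using the Ruby aws-sdk (v3 shown); the region, tag value, expected node count, and sleep interval are placeholder assumptions:

require 'aws-sdk-ec2'  # AWS SDK for Ruby (illustrative)

ec2 = Aws::EC2::Client.new(region: 'us-east-1')
expected_nodes = 3
ips = []

# Query-check-sleep until the expected number of tagged Zookeeper nodes is running.
loop do
  resp = ec2.describe_instances(filters: [
    { name: 'tag:application', values: ['zookeeper-development'] },
    { name: 'instance-state-name', values: ['running'] }
  ])
  ips = resp.reservations.flat_map(&:instances).map(&:private_ip_address)
  break if ips.size >= expected_nodes
  sleep 15
end

puts "ensemble members: #{ips.join(',')}"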
Starting Exhibitor and Zookeeper
Exhibitor starts and restarts the Zookeeper process periodically during clustering tasks and rolling configuration changes. Therefore, all you need to do is start Exhibitor on each node; each node will then start Zookeeper and register itself with the shared config file specified by the "--s3config" parameter. Below is some example Ruby/Chef code that can be used to start Exhibitor:
execute "Start Exhibitor" do
  command "nohup java -jar #{node[:search][:exhibitor_dir]}/exhibitor.jar --hostname #{node['ipaddress']} --configtype s3 --s3config #{bucket} --s3backup true > /opt/search/zookeeper/exhibitorNohup.log 2> /opt/search/zookeeper/exhibitorNohup.error.log &"
  action :run
end
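Once Exhibitor is up, a quick sanity check is Zookeeper's four-letter-word command ruok against the client port (2181 in our configuration); a healthy server answers imok:

echo ruok | nc localhost 2181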
Since Exhibitor manages the Zookeeper process, it is important not to create any configuration files manually, nor to expect any manual configuration changes made to Zookeeper to persist after deployment or during runtime.
Exhibitor
Security
Securing Exhibitor/Zookeeper is an extensive topic that depends on the reader's specific deployment. However, the Exhibitor wiki lists command-line parameters that can be used to enable and configure security features within Exhibitor, and we suggest giving it a look.
https://github.com/soabase/exhibitor/wiki/Running-Exhibitor
Configuration
A typical Exhibitor bucket root will look like the following, with an exhibitor.properties config file that you have uploaded containing a default level of configuration. I also suggest maintaining a "last known default" config file in a "base-config" directory as a backup of the exhibitor.properties file, in the event it gets corrupted.
An example exhibitor.properties file:
com.netflix.exhibitor-hostnames=
com.netflix.exhibitor-hostnames-index=0
com.netflix.exhibitor.auto-manage-instances-apply-all-at-once=1
com.netflix.exhibitor.auto-manage-instances-fixed-ensemble-size=5
com.netflix.exhibitor.auto-manage-instances-settling-period-ms=60000
com.netflix.exhibitor.auto-manage-instances=1
com.netflix.exhibitor.backup-extra=
com.netflix.exhibitor.backup-max-store-ms=86400000
com.netflix.exhibitor.backup-period-ms=60000
com.netflix.exhibitor.check-ms=30000
com.netflix.exhibitor.cleanup-max-files=3
com.netflix.exhibitor.cleanup-period-ms=43200000
com.netflix.exhibitor.client-port=2181
com.netflix.exhibitor.connect-port=2888
com.netflix.exhibitor.election-port=3888
com.netflix.exhibitor.java-environment=
com.netflix.exhibitor.log-index-directory=/opt/search/zookeeper/logIndex/
com.netflix.exhibitor.log4j-properties=
com.netflix.exhibitor.observer-threshold=999
com.netflix.exhibitor.servers-spec=
com.netflix.exhibitor.zoo-cfg-extra=syncLimit\=5&tickTime\=2000&initLimit\=10
com.netflix.exhibitor.zookeeper-data-directory=/opt/search/zookeeper_data
com.netflix.exhibitor.zookeeper-install-directory=/opt/search/zookeeper
com.netflix.exhibitor.zookeeper-log-directory=/opt/search/zookeeper
Pro Tip: A great benefit we have found is the ability to modify Zookeeper's Java environment settings through Exhibitor for performance and garbage collection tuning.
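The java-environment field (the "java.env script" in the Exhibitor UI) is written to Zookeeper's conf/java.env and sourced at startup, so heap sizing and collector choices can be rolled out through Exhibitor rather than by hand. The values below are illustrative only, not a tuning recommendation:

export JVMFLAGS="-Xms512m -Xmx512m -XX:+UseG1GC -verbose:gc"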
Ensemble Registration
The following images outline the steps of the consensus process during Ensemble deployment.
Self Healing
The following images outline the event of a node failure in an auto-managed ensemble.