Configuration of HDFS cluster using Ansible

Divya Kurothe
3 min read · Jan 19, 2021

Ansible:

Ansible is an open-source IT automation engine, which can remove drudgery from your work life, and will also dramatically improve the scalability, consistency, and reliability of your IT environment. Ansible can automate IT environments whether they are hosted on traditional bare metal servers, virtualization platforms, or in the cloud. It can also automate the configuration of a wide range of systems and devices such as databases, storage devices, networks, firewalls, and many others.

Hadoop:

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.

In this article, we will configure Hadoop and start the HDFS cluster services using Ansible playbooks.

For the initial installation of Ansible on the base OS and for writing the inventory file, you can refer to:

My inventory file for this practical looks like:
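The original inventory screenshot is not reproduced here; a minimal sketch of what such an inventory might look like is shown below. The group names and IP addresses are assumptions for illustration, not the author's exact values.

```
[namenode]
192.168.1.10

[datanode]
192.168.1.11
192.168.1.12

[client]
192.168.1.13
```

Each group is then targeted by its own playbook via `hosts: namenode`, `hosts: datanode`, and `hosts: client`.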

Here is the playbook for configuring the NameNode of the HDFS cluster.

namenode playbook code
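Since the playbook screenshot is not reproduced here, the following is a hedged sketch of what such a NameNode playbook typically contains. The RPM file names, directory paths, and port are assumptions, not the author's exact code.

```yaml
# Hypothetical NameNode playbook sketch; file names, paths, and the
# port number are placeholders for illustration.
- hosts: namenode
  tasks:
    - name: Copy the JDK and Hadoop installers to the node
      copy:
        src: "{{ item }}"
        dest: /root/
      loop:
        - jdk-8u171-linux-x64.rpm
        - hadoop-1.2.1-1.x86_64.rpm

    - name: Install JDK
      command: rpm -ivh /root/jdk-8u171-linux-x64.rpm
      args:
        creates: /usr/java

    - name: Install Hadoop (--force skips the dependency check)
      command: rpm -ivh /root/hadoop-1.2.1-1.x86_64.rpm --force
      args:
        creates: /usr/bin/hadoop

    - name: Create the NameNode metadata directory
      file:
        path: /nn
        state: directory

    - name: Configure hdfs-site.xml with the metadata directory
      copy:
        dest: /etc/hadoop/hdfs-site.xml
        content: |
          <configuration>
            <property>
              <name>dfs.name.dir</name>
              <value>/nn</value>
            </property>
          </configuration>

    - name: Configure core-site.xml with the NameNode address
      copy:
        dest: /etc/hadoop/core-site.xml
        content: |
          <configuration>
            <property>
              <name>fs.default.name</name>
              <value>hdfs://0.0.0.0:9001</value>
            </property>
          </configuration>

    - name: Format the NameNode directory (first run only)
      shell: echo Y | hadoop namenode -format
      args:
        creates: /nn/current

    - name: Start the NameNode daemon
      command: hadoop-daemon.sh start namenode
```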

Output after running the playbook

Playbook for configuring a DataNode of the HDFS cluster.

datanode playbook code
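As the screenshot is not reproduced here, the following is a hedged sketch of a DataNode playbook. The JDK/Hadoop installation tasks mirror the NameNode playbook and are omitted; the NameNode IP and storage path are placeholders.

```yaml
# Hypothetical DataNode playbook sketch; the NameNode IP, storage
# directory, and port are placeholders for illustration.
- hosts: datanode
  vars:
    namenode_ip: 192.168.1.10
  tasks:
    - name: Create the DataNode storage directory
      file:
        path: /dn
        state: directory

    - name: Configure hdfs-site.xml with the storage directory
      copy:
        dest: /etc/hadoop/hdfs-site.xml
        content: |
          <configuration>
            <property>
              <name>dfs.data.dir</name>
              <value>/dn</value>
            </property>
          </configuration>

    - name: Point core-site.xml at the NameNode
      copy:
        dest: /etc/hadoop/core-site.xml
        content: |
          <configuration>
            <property>
              <name>fs.default.name</name>
              <value>hdfs://{{ namenode_ip }}:9001</value>
            </property>
          </configuration>

    - name: Start the DataNode daemon
      command: hadoop-daemon.sh start datanode
```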

Output after running the datanode playbook

Playbook for configuring a client node of the HDFS cluster.

client playbook code
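The client only needs to know where the NameNode is, so its playbook is the shortest. A hedged sketch, with the NameNode IP as a placeholder:

```yaml
# Hypothetical client playbook sketch; the NameNode IP and port are
# placeholders for illustration.
- hosts: client
  vars:
    namenode_ip: 192.168.1.10
  tasks:
    - name: Point core-site.xml at the NameNode
      copy:
        dest: /etc/hadoop/core-site.xml
        content: |
          <configuration>
            <property>
              <name>fs.default.name</name>
              <value>hdfs://{{ namenode_ip }}:9001</value>
            </property>
          </configuration>
```

Once this is applied, the client can upload files with `hadoop fs -put <file> /` and check the cluster state with `hadoop dfsadmin -report`.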

Output after running client playbook

Now let's have a look at each node's OS:

1. NameNode OS:

2. DataNode OS:

3. Client OS:

That’s all folks. Thank you for reading :)
