
Hadoop Engineer

Contract: Charlotte, North Carolina, US

Salary Range: 65.00 - 70.00 | Per Hour

Job Code: 366263

End Date: 2026-01-08


Details: 
Client: Bank  
Job Title: Hadoop Engineer
Location: Chicago, IL and Charlotte, NC - Onsite
Duration: 12 Months (Extension/Conversion will be based on performance)
Pay Range: $65 - $70/hr
Benefits: The Company offers the following benefits for this position, subject to applicable eligibility requirements: medical insurance, dental insurance, vision insurance, 401(k) retirement plan, life insurance, long-term disability insurance, short-term disability insurance, paid parking/public transportation, paid time off, paid sick and safe time, paid vacation time, paid parental leave, and paid holidays annually (as applicable)

Mission: 

Seeking a Hadoop Engineer (SME) to support our NextGen Platforms built on Big Data technologies, including Hadoop, Spark, Kafka, Impala, HBase, Docker containers, and Ansible, on a project focused on platform modernization and consolidation.

Day-to-Day:

  • Work on complex, major, or highly visible tasks in support of multiple projects requiring multiple areas of expertise.
  • Provide subject matter expertise in managing Hadoop and Data Science Platform operations, focusing on Cloudera Hadoop, Jupyter Notebook, OpenShift, Docker-Container Cluster Management, and Administration.
  • Integrate solutions with other applications and platforms outside the framework.
  • Manage day-to-day operations for platforms built on Hadoop, Spark, Kafka, Kubernetes/OpenShift, Docker/Podman, and Jupyter Notebook.
  • Support and maintain AI/ML platforms such as Cloudera, DataRobot, C3 AI, Panopticon, Talend, Trifacta, Selerity, ELK, KPMG Ignite, and others.
  • Automate platform tasks using tools like Ansible, shell scripting, and Python.

Must Haves:

  • Strong knowledge of Hadoop Architecture, HDFS, Hadoop Cluster, and Hadoop Administrator's role
  • In-depth knowledge of fully integrated AD/Kerberos authentication.
  • Experience setting up optimal cluster configurations
  • Expert-level knowledge of Cloudera Hadoop components such as HDFS, Sentry, HBase, Kafka, Impala, SOLR, Hue, Spark, Hive, YARN, Zookeeper, and Postgres.
  • Hands-on experience analyzing various Hadoop log files, compression, encoding, and file formats.
  • Strong proficiency with Unix/SQL scripting
Job Requirements:
  • HDFS
  • Hadoop
  • Cloudera
  • Unix
  • SQL
Reach Out to a Recruiter
  • Recruiter: Vivek Singh
  • Email: vivek.ksingh@collabera.com
Apply Now