The right talent can transform your business—and we make that happen. At Collabera, we go beyond staffing to deliver strategic workforce solutions that drive growth, innovation, and agility. With deep industry expertise, a global talent network, and a people-first approach, we connect you with professionals who don’t just fit the role but elevate your business. Partner with us and build a workforce that powers success.
Hadoop Data Engineer
Contract: Charlotte, North Carolina, US
Salary Range: 60.00 - 63.00 | Per Hour
Job Code: 369348
End Date: 2026-06-12
Days Left: 28 days, 15 hours
Job Title: Hadoop Data Engineer
Location: Chicago, Denver, Jacksonville, Charlotte, Addison
Work Arrangement: Onsite from Day 1
Client Industry: Banking
Duration: 12-18 Months (Possibility of Full-Time Conversion)
About the Role:
We are actively looking for an experienced Hadoop Data Engineer to join a high-performing enterprise data engineering team. The ideal candidate will have strong expertise in Big Data technologies, distributed systems, and building scalable batch and near real-time data pipelines.
What We’re Looking For:
- Strong hands-on experience with Hadoop and Big Data ecosystems
- Expertise in Spark Structured Streaming and Apache Spark
- Strong SQL skills with Hive, Impala, MySQL, or Spark SQL
- Experience with Kafka, Sqoop, MapReduce, HDFS, HBase, SOLR
- Experience working with Cloudera/Hortonworks platforms (CDP/HDP)
- Knowledge of Elasticsearch and Kibana is a plus
- Strong programming experience in Scala, Python, or PHP
- Experience working in distributed systems and large-scale data environments
Responsibilities:
- Design, develop, and maintain batch and near real-time data pipelines using Spark Structured Streaming, MapReduce, and Hadoop technologies
- Ingest data from multiple sources including Kafka/message queues, REST APIs, relational databases, and file systems
- Transform, validate, and process large datasets using Hive, Impala, Spark SQL, and HDFS
- Work with structured and semi-structured data formats such as JSON, CSV, and XML
- Perform data profiling, validation, and troubleshooting for Spark applications and SQL jobs
- Optimize data processing workflows and resolve performance bottlenecks in distributed environments
Compensation:
Hourly Rate: $60 – $65 per hour
This range reflects base compensation and may vary based on location, market conditions, experience, and candidate qualifications.
Benefits:
The Company offers the following benefits for this position, subject to applicable eligibility requirements: medical insurance, dental insurance, vision insurance, 401(k) retirement plan, life insurance, long-term disability insurance, short-term disability insurance, paid parking/public transportation, and paid time off (including paid sick and safe time, paid vacation time, paid parental leave, and paid holidays, as applicable).
About Us
At Collabera, we don’t just offer jobs—we build careers. As a global leader in talent solutions, we provide opportunities to work with top organizations, cutting-edge technologies, and dynamic teams. Our culture thrives on innovation, collaboration, and a commitment to excellence. With continuous learning, career growth, and a people-first approach, we empower you to achieve your full potential. Join us and be part of a company that values passion, integrity, and making an impact.
Ready to Apply?
Apply now or reach out to Mritunjay Kumar at mritunjay.kumar@collabera.com or 973 381 7213 for more information. We look forward to speaking with you!
Job Requirement
- Apache Spark & Spark Structured Streaming
- Hadoop Ecosystem
- Kafka & Data Ingestion Pipelines
- Strong SQL
- Scala/Python Programming Skills
Reach Out to a Recruiter
- Recruiter: Mritunjay Kumar
- Email: mritunjay.kumar@collabera.com
- Phone: 973 381 7213