The right talent can transform your business—and we make that happen. At Collabera, we go beyond staffing to deliver strategic workforce solutions that drive growth, innovation, and agility. With deep industry expertise, a global talent network, and a people-first approach, we connect you with professionals who don’t just fit the role but elevate your business. Partner with us and build a workforce that powers success.
Hadoop Developer
Contract: Charlotte, North Carolina, US
Salary Range: $65.00 - $68.00 | Per Hour
Job Code: 361508
End Date: 2025-06-07
Days Left: 28 days, 12 hours
Job Title: Hadoop Developer (Contract-to-Hire)
Location: Hybrid – Charlotte, NC | Wilmington, DE
Pay Range: $65–68 per hour
Job Description:
We are seeking an experienced Hadoop Developer to join a dynamic and collaborative data engineering team focused on building high-performance, scalable Big Data solutions. This is a contract-to-hire opportunity ideal for professionals passionate about architecting and delivering robust data pipelines and platforms using the latest technologies.
Key Responsibilities:
- Design, develop, and deploy large-scale Big Data applications and data processing pipelines.
- Work with distributed systems to build reliable and scalable solutions using Hadoop and Spark.
- Create and optimize data ingestion and streaming solutions using Kafka.
- Develop reusable scripts and tools using Python and/or Scala.
- Collaborate with cross-functional teams (business, QA, offshore) to define technical solutions and ensure delivery against milestones.
- Maintain and enhance existing Big Data applications in production environments.
- Provide mentorship to junior developers and contribute to technical roadmaps and strategies.
Required Skills:
- Strong hands-on experience with Hadoop ecosystem tools (HDFS, Spark, Hive, Oozie, Sqoop, Impala, etc.).
- Solid knowledge of Kafka for streaming data pipelines.
- Proficiency in Python for scripting and data processing.
- Experience in performance tuning and troubleshooting distributed data processing jobs.
Preferred Skills:
- Exposure to NoSQL technologies (e.g., HBase, MongoDB, SingleStore).
- Knowledge of Unix/Linux environments and Shell scripting.
- Development experience using Scala is a plus.
Job Requirements
- Hadoop
- Spark
- Kafka
- Python
- HDFS
- Hive
- Sqoop
- Oozie
Reach Out to a Recruiter
- Recruiter: Tanupriya Ganguly
- Email: tanupriya.ganguly@collabera.com
- Phone: 703-289-0198
Apply Now
