The right talent can transform your business—and we make that happen. At Collabera, we go beyond staffing to deliver strategic workforce solutions that drive growth, innovation, and agility. With deep industry expertise, a global talent network, and a people-first approach, we connect you with professionals who don’t just fit the role but elevate your business. Partner with us and build a workforce that powers success.
Developer
Contract: Plano, Texas, US
Salary Range: 50.00 - 54.00 | Per Hour
Job Code: 349783
End Date: 2024-06-27
Job Status: Expired
This Job is no longer accepting applications
Day-to-Day Responsibilities:
We are seeking a skilled PySpark Developer to join our dynamic team. As a PySpark Developer, you will be responsible for developing, implementing, and maintaining PySpark applications to support our clients' data processing needs. The ideal candidate should have a strong background in Python programming and experience working with the Spark framework.
Responsibilities:
- Develop PySpark applications to process large volumes of data efficiently.
- Design and implement data pipelines for data ingestion, transformation, and analysis.
- Optimize PySpark jobs for performance and scalability.
- Collaborate with data engineers and data scientists to understand business requirements and translate them into technical solutions.
- Troubleshoot and debug PySpark applications to ensure reliability and accuracy.
- Stay updated with the latest trends and best practices in PySpark and big data technologies.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5-7 years of experience working as a PySpark Developer or in a similar role.
- Proficiency in Python programming language.
- Experience working with the Spark framework and its ecosystem (PySpark, Spark SQL, Spark Streaming).
- Strong understanding of distributed computing principles.
- Experience with data modeling, ETL processes, and data warehousing concepts.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and big data tools (e.g., Hadoop, Hive, Kafka) is a plus.
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork abilities.
Must-Have:
- Python
- Spark
- ETL
Good to Have:
- Hive
- Kafka
- Hadoop
Job Requirements
- pyspark
- python
- etl
Reach Out to a Recruiter
Recruiter: Raju Yadav
Email: raju.yadav@collabera.com
Phone: 4155234502
