Data Scientist

Contract: Charlotte, North Carolina, US

Salary Range: $65.00 - $70.00 per hour

Job Code: 360330

End Date: 2025-04-25

Days Left: 21 days, 1 hour

Position Details:
Client: Financial
Job Title: Data Scientist
Duration: 6 Months (Possible Extension or Conversion)
Location: Charlotte, NC
Pay Range: $65-$70/hr.

About the Role:

  • This role focuses on optimizing and deploying AI models on GPU clusters for large-scale deep learning and generative AI applications.
  • The position involves working with cloud platforms and distributed computing frameworks to enhance AI/ML workloads.

Responsibilities:

  1. AI Model Optimization and Deployment:

    • Optimize and deploy AI models on GPU clusters.
    • Leverage parallel processing capabilities for deep learning and generative AI applications.
  2. Multi-GPU Training and Distributed Computing:

    • Utilize frameworks such as TensorFlow Distributed, PyTorch Distributed, and Horovod.
    • Accelerate AI/ML workloads using distributed computing techniques.
  3. Infrastructure Management:

    • Configure and manage NVIDIA GPUs.
    • Work with GCP, including TPUs and GPU instances.
  4. API Development and Cloud Architecture:

    • Develop APIs and design cloud-native architectures.
    • Implement generative AI frameworks such as LLaMA and Mistral.
  5. Utilizing Frameworks and Tools:

    • Work with FastAPI, Uvicorn, Swagger, Apache Spark (PySpark), Kubernetes, and Django.
    • Set up and manage Apache Kafka for real-time data streaming.

Required Qualifications:

  • Proficiency in Python.
  • Experience with Apache Spark (PySpark) for distributed data processing.
  • Familiarity with Kubernetes for container orchestration.
  • Knowledge of Django for web application development.
  • Experience with FastAPI, Uvicorn, and Swagger for API development.
  • Understanding of generative AI frameworks, such as LLaMA and Mistral.
  • Experience with distributed computing frameworks such as TensorFlow Distributed, PyTorch Distributed, and Horovod.
  • Experience with configuring and managing NVIDIA GPUs.
  • Familiarity with GCP, including TPUs and GPU instances.
  • Experience with Apache Kafka for real-time data streaming.

 

Job Requirements
  • Python
  • PySpark
  • Kubernetes
  • generative AI frameworks
  • Apache Kafka
  • GCP
Reach Out to a Recruiter
  • Recruiter: Abhishek Naik
  • Email: abhishek.naik@collabera.com
  • Phone:
Apply Now

©2025 Collabera. All rights reserved.