Senior Data Engineer

Contract: Toronto, Ontario, CA

Salary Range: $70.00 - $80.00 per hour

Job Code: 356403

End Date: 2024-12-01

Job Status: Expired

This Job is no longer accepting applications

What is the Opportunity?

This is an opportunity within the Data & Analytics team at Global Asset Management. We are seeking a hands-on, highly skilled, and motivated individual to join our dynamic team as a Senior Data Engineer. As a key member of our data engineering team, you will play a crucial role in the development and implementation of GAM’s Data Platform. As the successful candidate, you will partner with key business stakeholders to understand the business drivers and focus on designing, implementing, and maintaining robust data processing pipelines that support the organization’s data needs and advance our data engineering, BI, machine learning, and AI capabilities.

What you will do:

  • Work with business stakeholders and cross-functional teams to understand data requirements and deliver scalable data solutions. 
  • Design, develop, and maintain robust ETL processes to extract, transform, and load data from various sources into our data platform.
  • Build large-scale batch and event-driven data pipelines using cloud and on-premises hybrid data platform topology.
  • Work closely with data architects to review solutions and data models and ensure adherence to data platform architecture guidelines and engineering best practices. 
  • Take ownership of end-to-end deliverables and ensure high-quality software development while fulfilling all operational and functional requirements in a timely manner.
  • Implement and enforce data quality standards and best practices while collaborating with data governance teams to ensure compliance with data policies and regulations. 
  • Optimize data integration workflows for performance and reliability.
  • Troubleshoot and resolve data integration and data processing issues.
  • Leverage best practices in continuous integration and delivery using DataOps pipelines.
  • Apply design-thinking and agile mindset in working with other engineers and business stakeholders to continuously experiment, iterate and deliver on new initiatives.
  • Stay informed about emerging technologies and trends in the data engineering domain.
  • Lead, mentor, and inspire a team of data engineers to achieve high performance levels. 

Must-haves:

  • Previous asset management experience is preferred.
  • 5-7 years of experience building batch and real-time data pipelines, leveraging big data technologies and distributed data processing with Spark, Hadoop, Airflow, NiFi, and Kafka.
  • Proficiency in writing and optimizing SQL queries and in at least one programming language such as Python and/or Scala.
  • Experience with cloud-based data platforms (Snowflake, Databricks, AWS, Azure, GCP).
  • Expertise using CI/CD tools and working with Docker and Kubernetes platforms.
  • Experience following DevOps and agile best practices.
  • Experience with data modelling tools and methodologies.

Nice-to-have:

  • Experience with OpenShift, S3, Trino, Ranger, and Hive.
  • Knowledge of machine learning and data science concepts and tools.
  • Knowledge of BI & Analytics tools such as Tableau and Superset.

Job Requirements

  • Big Data Technologies
  • Spark
  • Hadoop
  • Snowflake
  • AWS
  • SQL
  • Docker
  • Kubernetes
  • Asset Management
  • Python
  • Scala
  • Airflow
  • Data Modelling

Reach Out to a Recruiter

  • Recruiter: Sthitapragnya Pattanaik
  • Email: sthitapragnya.p@collabera.com