Title: Data Engineer (Python, AWS)
Client: Investment Industry Client
Type: 6-month contract (22 weeks) + strong opportunity to extend or convert
Location: Downtown Toronto, ON
Work Model: Hybrid: 4 days/week on-site, Fridays WFH
# of Openings: 1
Why Is This Role Open?
• Backfill for an employee on extended leave (22-week coverage).
• The role was originally scoped as a Quality Engineer (QE), but responsibilities have evolved into a full Data Engineer role.
• Senior-level hire needed to support ongoing data quality and data engineering initiatives.
• Strong chance of extension or conversion pending budget.
Current Problem to Solve
Client’s risk & data teams rely on data flowing from multiple upstream sources. Incorrect data anywhere in the pipeline breaks models, calculations, and downstream reporting.
The team is shifting toward a “shift-left” model, embedding quality checks closer to the raw data layer. This engineer will be central to implementing that strategy.
What They Will Accomplish (High-Level)
• Solve organization-wide data quality issues across data products.
• Implement data quality checks and alerts.
• Help transition unstructured data → structured data as new enterprise tools are onboarded.
• Strengthen data pipelines to support operational due-diligence data products.
Day-to-Day Responsibilities
• Build and enhance data pipelines supporting risk and operational data.
• Implement and maintain data quality rules within existing frameworks.
• Add data validations (e.g., null checks, schema checks, upstream dependency validation).
• Set up alerts/notifications for data quality issues (e.g., AWS SNS, including SMS delivery).
• Work with large, high-volume datasets.
• Support ingestion of new third-party tools and convert unstructured outputs into structured data.
• Partner with data engineers and leads to ensure consistency across data products.
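The validation and alerting responsibilities above can be sketched in plain Python. This is an illustrative example only, not the client's framework: the function names are hypothetical, and in practice the checks would run inside Airflow tasks, with alerts published to an SNS topic via boto3's `sns.publish`.

```python
# Hypothetical sketch of pipeline-embedded data quality checks:
# null checks, schema checks, and an alert message for SNS.

def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_schema(rows, expected_columns):
    """Return the set of expected columns absent from at least one row."""
    missing = set()
    for row in rows:
        missing |= expected_columns - row.keys()
    return missing

def build_alert(dataset, failures):
    """Format a message suitable for publishing to an SNS topic,
    e.g. boto3: sns.publish(TopicArn=..., Message=msg)."""
    return f"Data quality failure in {dataset}: {failures}"

rows = [
    {"trade_id": 1, "notional": 100.0},
    {"trade_id": 2, "notional": None},
]
null_rows = check_not_null(rows, "notional")        # row 1 fails
missing = check_schema(rows, {"trade_id", "notional", "currency"})
msg = build_alert("positions", {"notional": null_rows, "schema": missing})
```

In a real pipeline these checks would be wired to upstream dependency validation as well, so a failed check blocks downstream model and reporting jobs instead of letting bad data propagate.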
Must-Haves
• Strong data engineering experience
• Python
• Airflow (core requirement)
• AWS data stack including:
– AWS Glue
– Lake Formation
• Experience with high-volume data processing
• Experience building and supporting data pipelines
Nice-to-Haves
• Glue/Athena/Table formats (Arcaid tables)
• S3 expertise
• Ability to set up SNS notifications
• Broader AWS ecosystem exposure
• Experience in data quality engineering (integrated into pipelines)
Role Focus
• 70–80% data engineering
• 20–30% data quality engineering
• Ensuring quality checks are built into pipelines rather than treated as a separate function.
• A data quality framework already exists; the engineer defines and applies rules rather than building from scratch.
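"Define and apply rules, not build from scratch" typically means authoring declarative rule definitions that an existing framework executes. The shape below is purely illustrative (the client's actual rule schema is not specified in this posting):

```python
# Hypothetical declarative rules, applied by a minimal runner.
# Real frameworks (e.g. Great Expectations) have richer rule types.
RULES = [
    {"column": "notional", "rule": "not_null"},
    {"column": "trade_date", "rule": "not_null"},
]

def apply_rules(rows, rules):
    """Return (column, rule) pairs that failed on at least one row."""
    failures = []
    for r in rules:
        if r["rule"] == "not_null":
            if any(row.get(r["column"]) is None for row in rows):
                failures.append((r["column"], r["rule"]))
    return failures
```

The point of the declarative style is that adding coverage for a new dataset is a configuration change, not new pipeline code, which keeps quality checks embedded in the pipelines rather than bolted on afterward.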