Sr. Data Engineer

 

Description:


We are seeking an experienced, high-performing individual to join our team as a Senior Data Engineer. This senior technical role is responsible for the design, build, deployment, and support of data integration for BI reporting and analytics solutions, as well as application integration solutions, using Python, AWS Glue, and supporting tools and services. The Delivery team works on exciting projects using leading-edge technologies, leveraging our AWS cloud-based data platform for advanced analytics and data science.
 
What will you do?
 

  • Code, design, and guide the construction and maintenance of robust, efficient data applications and reusable frameworks
  • Develop data pipelines in an AWS cloud environment using Python and AWS Glue (PySpark)
  • Coordinate or participate in all aspects of the development cycle, from design and development to release planning and implementation of data systems
  • Mentor and guide less experienced data engineers across locations to ensure all code follows applicable standards and is efficient and easily maintainable
  • Translate requirements into detailed functional and technical designs using architecturally approved technology
  • Provide high-level solution options and estimates for project proposals, and detailed work estimates in support of assigned work
  • Deliver solutions according to the Systems Development Life Cycle (SDLC) methodology for either waterfall or agile projects
  • Provide consultation on the evaluation of data and software systems
  • Develop and maintain effective working relationships with the departments, groups, and personnel with whom work must be coordinated

What do you need to succeed?
 

  • 5 to 7 or more years of progressive experience developing data warehouse load and system integration solutions using ETL tools
  • 2 or more years of experience developing data pipelines using AWS Glue
  • Minimum 2 years of experience developing Python scripts using PySpark and Python libraries, including configuration-driven and object-oriented ETL
  • Demonstrated strong core competency in SQL (essential)
  • Minimum 3 years of experience with Big Data, including knowledge of Hive
  • Experience creating complex data frames/structures in Hadoop for data integration and complex calculations
  • Experience with HDFS, Tez, and Spark is an asset
  • An understanding of and/or hands-on experience with AWS Step Functions and Lambda is an asset
  • Experience with data modeling concepts and data structure design to support high-performing read queries
  • Advanced SQL writing skills for handling large volumes of data efficiently
  • Ability to dive deep into existing data integration code to analyze and reverse-engineer it
  • Experience handling complex, multi-level data transformations that integrate source system data to meet business needs
  • Experience with change management processes for production implementations
  • Experience with project management and the software development life cycle (SDLC) in an Agile environment
  • Strong analytical skills, including conceptual thinking, requirements interpretation, solution creation, and problem solving
  • Excellent collaboration and leadership skills, including coaching and mentoring, and a proven ability to adapt to challenges
  • Ability to work in a global, multi-site, matrixed environment with an onshore/offshore IT delivery model
  • Ability to lead a team with diverse skill sets and interface with peripheral technical teams

Organization Sun Life
Industry IT / Telecom / Software Jobs
Occupational Category Data Engineer
Job Location Ontario, Canada
Shift Type Morning
Job Type Full Time
Gender No Preference
Career Level Intermediate
Experience 2 Years
Posted at 2025-02-11 9:21 pm
Expires on 2025-03-28