Description:
Clutch is Canada’s largest online used car retailer, delivering a seamless, hassle-free car-buying experience to drivers everywhere. Customers can browse hundreds of cars from the comfort of their home, get the right one delivered to their door, and enjoy peace of mind with our 10-Day Money-Back Guarantee… and that’s just the beginning.
Named one of Canada’s Top Growing Companies two years in a row and awarded a spot on LinkedIn’s Top Canadian Startups list, we’re looking to add curious, hard-working, and driven individuals to our growing team.
Headquartered in Toronto, Clutch was founded in 2017. Clutch is backed by a number of world-class investors, including Canaan, BrandProject, Real Ventures, D1 Capital, and Upper90. To learn more, visit clutch.ca.
What You'll Do
- Lead the development, testing, and maintenance of complex data management solutions that support business goals and drive decision-making processes at scale.
- Architect, design, and implement sophisticated ETL/ELT processes to manage complex data transformations, ensuring efficiency, reliability, and scalability of data pipelines.
- Proactively identify and resolve critical data quality issues, implementing data governance practices and leading regular audits to maintain data accuracy and integrity across multiple data sources.
- Optimize and evolve data integration processes, applying best practices for performance, scalability, and security to meet growing business demands.
- Collaborate closely with data architects and other senior stakeholders to design and implement data transformations that align with evolving business requirements and future-proof data architecture.
- Lead the adoption and implementation of modern data frameworks, including data lakes, data warehouses, and cloud-based architectures, to enhance business intelligence and analytics capabilities.
- Champion the use of DevOps tools (e.g., Git, GitHub Actions, Docker) for code versioning, deployment automation, and ensuring continuous integration, delivery, and real-time monitoring of data pipelines.
- Document and standardize data definitions, processes, and solutions, driving the establishment of clear data standards and ensuring cross-team communication and knowledge sharing.
- Ensure data solutions adhere to the highest standards for security, scalability, and reliability, guiding teams in following industry best practices and company policies.
What We're Looking For
- Bachelor’s or Master’s degree in Computer Science, Mathematics, Information Systems, or a related technical field.
- 5+ years of experience in data engineering, data architecture, or a related field, with demonstrated leadership experience in complex data initiatives.
- Advanced programming skills in languages such as Python, TypeScript, and JavaScript, with a deep understanding of software engineering principles.
- Expert-level SQL knowledge, with a proven track record of writing and optimizing complex queries and database performance for large-scale systems (e.g., PostgreSQL, MySQL, SQL Server).
- Extensive experience with modern database technologies, including both relational databases (e.g., Oracle, PostgreSQL, AWS Aurora) and NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB), as well as cloud-based data solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- In-depth experience with ETL/ELT tools such as Apache Airflow, Talend, or Informatica, and a demonstrated ability to design efficient data flows for large datasets.
- Hands-on experience with cloud platforms such as AWS (preferred), GCP, or Azure, and deep knowledge of their data services (e.g., AWS Glue, AWS S3, SageMaker, Azure Data Factory, GCP Dataflow).
- Familiarity with data warehousing and big data technologies, including Hadoop, Spark, and Kafka, for both real-time data streaming and batch processing.
- Strong understanding of DevOps methodologies, with hands-on experience in using tools like Docker, Kubernetes, Datadog, Terraform, and GitHub Actions for managing and optimizing data pipeline deployments.
- Proven ability to lead and mentor junior engineers, driving innovation and best practices in data engineering.