Sezzle

Sezzle is a payments company revolutionizing the buy now, pay later experience with interest-free installment plans, empowering consumers and merchants alike.

Diversified Financial Services
251–1,000 employees
Founded 2016

Description

  • Own the end-to-end database and data warehousing infrastructure, including KPIs and SLAs for MySQL, Postgres, change data capture (CDC), and Redshift.
  • Lead database and query performance optimizations.
  • Build and maintain tooling to manage, migrate, and upgrade data infrastructure.
  • Partner with development teams to ensure production-ready queries and a stable schema for product and business needs.
  • Evolve data systems to handle high volumes of writes, events, and complex ETL as the company scales.
  • Evaluate and integrate new technologies to guide the evolution of Sezzle’s data infrastructure.
  • Support data engineers and analysts with Redshift tuning, warehouse performance improvements, modeling, and cost management.
  • Contribute to the design of scalable, fault-tolerant data pipelines and modern data platform patterns.

Requirements

  • 12+ years of experience in DBA, SRE, or Data Engineering roles with a track record of scaling production-grade systems.
  • Deep expertise with MySQL, Postgres, and AWS Redshift or similar products.
  • Advanced proficiency in SQL.
  • Hands-on experience with data replication and ETL/ELT frameworks such as dbt and AWS DMS.
  • Strong understanding of data modeling, distributed systems, and warehouse/lake design patterns.
  • Excellent communication and documentation skills in a fast-paced, collaborative environment.
  • Prior experience in high-growth, data-intensive fintech or similar regulated environments is preferred.
  • Knowledge of lakehouse architectures and tools such as Snowflake, Databricks, Iceberg, or Delta Lake is preferred.
  • Experience designing scalable, fault-tolerant data pipelines with orchestration tools such as Airflow, Dagster, or Prefect is preferred.
  • Familiarity with streaming technologies such as Kafka, Kinesis, Flink, or Spark Streaming is preferred.

Benefits

  • Salary range of $6,000-$12,500 USD per month gross, based on location and experience level.
  • Full-time remote role.
  • Opportunity to work on a rapidly growing data infrastructure team at a high-growth fintech company.
  • Exposure to modern data tooling and the chance to help shape future architecture.
  • Open-source-first engineering culture.

Interested in this position?

Apply directly on the company website

