Sezzle

Sezzle is a payments company revolutionizing the buy now, pay later experience with interest-free installment plans, empowering consumers and merchants alike.

Diversified Financial Services
251-1,000 employees
Founded 2016

Description

  • Own end-to-end database and data warehousing infrastructure, including KPIs and SLAs for MySQL, Postgres, change data capture (CDC) pipelines, and Redshift.
  • Lead database and query performance optimizations.
  • Build and maintain tooling to manage, upgrade, migrate, and optimize data infrastructure.
  • Partner with development teams to ensure queries and schemas support product and business needs.
  • Evolve data systems to handle higher volumes of writes, events, and complex ETL workloads.
  • Evaluate and integrate new technologies to advance Sezzle’s data infrastructure.
  • Support data engineers and analysts with Redshift optimization, query tuning, modeling improvements, and cost management.

Requirements

  • 12+ years of experience in DBA, SRE, or Data Engineering roles with a track record of scaling production-grade systems.
  • Deep expertise with MySQL, Postgres, and AWS Redshift or similar systems, including performance tuning, table design, and workload management.
  • Advanced proficiency in SQL.
  • Strong hands-on experience with data replication and ETL/ELT frameworks, especially dbt and AWS DMS or similar tools.
  • Strong understanding of data modeling, distributed systems, and warehouse/lake design patterns.
  • Experience working in a fast-paced, collaborative environment with strong communication and documentation skills.
  • Preferred experience in high-growth, data-intensive fintech or other regulated environments.
  • Preferred knowledge of lakehouse architectures and tools such as Snowflake, Databricks, Iceberg, or Delta Lake.
  • Preferred experience designing scalable, fault-tolerant data pipelines with orchestration tools such as Airflow, Dagster, or Prefect.
  • Preferred familiarity with streaming technologies such as Kafka, Kinesis, Flink, or Spark Streaming.
  • Preferred enthusiasm for using AI to improve productivity.

Benefits

  • Salary range of $6,000-$12,500 USD gross per month, based on location and experience level.
  • Full-time remote role.
  • Opportunity for career advancement in a rapidly growing team.
  • Work on modern data infrastructure and new tooling initiatives.
  • Open-source-first engineering culture.

Interested in this position?

Apply directly on the company website


