Wave

Wave is a small business software company that offers a suite of money management tools, including invoicing, accounting, credit card processing, payroll, receipt scanning, and personal finance tools. With 2.5 million customers worldwide, Wave provides ...

Internet Software & Services
251-1K
Founded 2010
$80M raised

Description

  • Design, build, and deploy components of a modern data platform, including CDC-based ingestion, a centralized data lake, and batch, incremental, and streaming pipelines.
  • Maintain and enhance the existing Amazon Redshift data warehouse and legacy Python ELT pipelines while supporting the transition to Databricks and dbt.
  • Build fault-tolerant, scalable, and cost-efficient data systems with improved observability, performance, and reliability.
  • Collaborate with cross-functional partners to plan and deliver data infrastructure and processing pipelines for analytics, machine learning, and GenAI use cases.
  • Support teams across Wave by ensuring data and insights are delivered accurately and on time.
  • Work independently to identify opportunities to optimize pipelines and improve data workflows under evolving requirements and tight timelines.
  • Respond to PagerDuty alerts, troubleshoot incidents, and implement monitoring and alerting to reduce incidents and maintain high availability.
  • Provide technical guidance to colleagues and help resolve issues through clear communication with technical and non-technical stakeholders.
  • Assess existing systems and improve data accessibility to support internal decision-making and customer-facing outcomes.
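The first responsibility above centers on CDC-based ingestion. As a minimal, illustrative sketch (not Wave's actual code), the core of a CDC apply step is upserting or deleting rows keyed by primary key; the `op`/`before`/`after` event shape below loosely mirrors Debezium's change-event envelope, and all field names are hypothetical:

```python
# Minimal sketch of applying CDC change events to a keyed target table.
# Event shape (op / before / after) loosely mirrors Debezium's envelope;
# the schema and field names here are illustrative only.

def apply_cdc_events(table: dict, events: list[dict]) -> dict:
    """Apply create/update/delete events to a table keyed by primary key."""
    for event in events:
        op = event["op"]  # "c" = create, "u" = update, "d" = delete
        if op in ("c", "u"):
            row = event["after"]
            table[row["id"]] = row       # upsert on primary key
        elif op == "d":
            table.pop(event["before"]["id"], None)  # tolerate missing rows
    return table

# Example replay of a small change stream:
events = [
    {"op": "c", "after": {"id": 1, "email": "a@example.com"}},
    {"op": "u", "after": {"id": 1, "email": "a+new@example.com"}},
    {"op": "c", "after": {"id": 2, "email": "b@example.com"}},
    {"op": "d", "before": {"id": 2}},
]
state = apply_cdc_events({}, events)
```

In a production lake, the same upsert/delete semantics would typically be handled by a table format such as Apache Hudi rather than hand-rolled logic.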

Requirements

  • 6+ years of experience building data pipelines and managing a secure, modern data stack.
  • At least 3 years of experience working with AWS cloud infrastructure.
  • Experience with CDC streaming ingestion using tools like Debezium into a data warehouse supporting AI/ML workloads.
  • Experience with Kafka (MSK), Spark or AWS Glue, and infrastructure as code using Terraform.
  • Fluency in SQL and a strong understanding of data modelling principles and data storage structures for OLTP and OLAP.
  • Experience developing or maintaining a production data system on Databricks.
  • Strong coding skills in Python, SQL, and dbt, with the ability to write and review maintainable code.
  • Prior experience building data lakes on S3 using Apache Hudi with Parquet, Avro, JSON, and CSV file formats.
  • Experience developing and deploying data pipeline solutions using CI/CD best practices.
  • Familiarity with data governance practices, including data quality, lineage, privacy, and cataloging tools (preferred).
  • Working knowledge of Stitch and Segment CDP for integrating diverse data sources (preferred).
  • Experience with Looker, Power BI, Athena, Redshift, or Sagemaker Feature Store is a plus.
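The requirements above repeatedly reference incremental pipelines. A common pattern behind them is watermark-based extraction: each run pulls only rows changed since the last recorded high-water mark. The sketch below is a hypothetical illustration (table and column names are assumptions, not from the posting):

```python
# Illustrative watermark-based incremental extract. Only rows updated
# after the stored watermark are pulled on each run; the new watermark
# is the max updated_at seen. Column names are hypothetical.

def extract_incremental(rows: list[dict], watermark: str) -> tuple[list[dict], str]:
    """Return rows updated after `watermark`, plus the new watermark."""
    new_rows = [r for r in rows if r["updated_at"] > watermark]
    # ISO-8601 timestamps compare correctly as strings.
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

source = [
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-03-05"},
]
changed, next_mark = extract_incremental(source, "2024-02-01")
```

Tools like dbt express the same idea declaratively via incremental models, with the warehouse tracking what has already been loaded.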

Benefits

  • $145,000 - $154,000 annual salary, with final compensation based on experience and role alignment.
  • Bonus structure.
  • Employer-paid benefits plan.
  • Health & Wellness Flex Account.
  • Professional Development Account.
  • Wellness Days, Holiday Shutdown, and Wave Days with extra summer vacation days.
  • Get A-Wave program offering work from anywhere in the world for up to 90 days.

Interested in this position?

Apply directly on the company website

Similar Roles

Senior Data Engineer

Plain Concepts 251-1K Internet Software & Services

Plain Concepts is hiring a Data Engineer to design and build custom data-powered solutions for international client projects within a multidisciplinary, agile team.

Agile Apache Spark AWS Azure CI/CD dbt Python Scala Snowflake SQL

Rust Engineer - Data Interoperability (Remote EU)

Retinai 11-50 Internet Software & Services

Ikerian AG is hiring a Rust Engineer to develop robust data interoperability systems that support healthcare screening and monitoring through high-performance AI and data management solutions.

Computer Vision DNS Git GraphQL Python REST API Rust TCP/IP TLS

AWS Data Engineer (Senior)

Mactores 51-250 IT Services

Mactores is hiring a Senior AWS Data Engineer in Seattle/remote to build and support modern data platforms and pipelines that enable automated, secure data solutions for customer digital transformation initiatives.

Agile Apache Airflow Apache Spark AWS Presto Snowflake SQL

Data Engineer

MUTT DATA 51-250 Internet Software & Services

Mutt Data is hiring a remote Data Engineer in Argentina to build and improve cloud-based data systems and machine learning solutions for clients across forecasting, e-commerce, and automation use cases.

Apache Airflow Apache Spark AWS Azure CI/CD Databricks dbt Docker GCP Jupyter NumPy Pandas Prefect Python SQL
