Wave

Wave is a small business software company that offers a suite of money management tools including invoicing, accounting, credit card processing, payroll, receipt scanning, and personal finance tools. With 2.5 million customers worldwide, Wave provides ...

Internet Software & Services
251–1,000 employees
Founded 2010
$80M raised

Description

  • Design, build, and deploy components of a modern data platform, including CDC-based ingestion (e.g., Debezium/Kafka), a centralized Hudi-based data lake, and batch, incremental, and streaming pipelines.
  • Maintain and enhance the Amazon Redshift warehouse and legacy Python ELT pipelines while driving the transition to a Databricks and dbt-based analytics environment.
  • Build fault-tolerant, scalable, and cost-efficient data systems and continuously improve observability, performance, and reliability across legacy and modern platforms.
  • Partner with cross-functional teams to design and deliver data infrastructure and pipelines that support analytics, machine learning, and GenAI use cases.
  • Identify and implement optimizations to data pipelines and workflows, working autonomously under tight timelines and evolving requirements.
  • Respond to PagerDuty alerts, troubleshoot incidents, and implement monitoring and alerting to minimize downtime and maintain high availability.
  • Provide technical guidance and clear communication to technical and non-technical stakeholders to resolve issues and align on solutions.
  • Assess existing systems to improve data accessibility and deliver practical solutions that enable internal teams to generate actionable insights.
  • Own parts of the data platform lifecycle, from development through deployment and operational support.
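The CDC-based ingestion mentioned above can be illustrated in miniature: Debezium emits change events with an `op` field (`c` = create, `u` = update, `d` = delete, `r` = snapshot read) plus `before`/`after` row images, and a consumer applies them to a keyed store. This is a hypothetical, dependency-free Python sketch of that event-application logic, not Wave's actual pipeline (a real system would consume from Kafka/MSK and write to a Hudi table on S3):

```python
# Minimal sketch of applying Debezium-style CDC events to a keyed table.
# Event shape and field names follow Debezium's convention; the in-memory
# dict stands in for a data-lake table.

def apply_cdc_event(table: dict, event: dict) -> None:
    """Apply one change event, keyed by the primary key 'id'."""
    op = event["op"]
    if op in ("c", "u", "r"):          # create, update, snapshot read
        row = event["after"]
        table[row["id"]] = row
    elif op == "d":                    # delete
        table.pop(event["before"]["id"], None)

events = [
    {"op": "c", "before": None, "after": {"id": 1, "email": "a@example.com"}},
    {"op": "u", "before": {"id": 1}, "after": {"id": 1, "email": "b@example.com"}},
    {"op": "c", "before": None, "after": {"id": 2, "email": "c@example.com"}},
    {"op": "d", "before": {"id": 2}, "after": None},
]

table: dict = {}
for e in events:
    apply_cdc_event(table, e)

print(table)  # one row remains: id 1 with the updated email
```

The same replay-in-order idempotence is what an incremental Hudi upsert pipeline provides at scale.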

Requirements

  • 3+ years of experience building data pipelines and managing a secure, modern data stack, including CDC streaming ingestion (e.g., Debezium) into data warehouses supporting AI/ML workloads.
  • At least 3 years of experience with AWS cloud infrastructure, including Kafka (MSK), Spark / AWS Glue, and infrastructure-as-code using Terraform.
  • Fluency in SQL and a strong understanding of data modeling principles and storage structures for both OLTP and OLAP systems.
  • Experience writing and reviewing production-quality code in Python and SQL, and familiarity with dbt.
  • Experience developing or maintaining a production data system on Databricks (significant plus).
  • Prior experience building data lakes on S3 using Apache Hudi and handling Parquet, Avro, JSON, and CSV file formats.
  • Experience implementing CI/CD best practices for deploying and maintaining data pipeline solutions.
  • Familiarity with data governance practices (data quality, lineage, privacy) and experience with data cataloging tools (preferred).
  • Working knowledge of data integration tools such as Stitch and Segment CDP (preferred).
  • Experience with analytics and ML tools such as Athena, Redshift, or SageMaker Feature Store (preferred).
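The SQL fluency and incremental-loading requirements above come together in the idempotent upsert pattern that tools like dbt incremental models rely on: re-running the same batch must not duplicate rows. A hedged sketch, with SQLite standing in for Redshift/Databricks (table and column names are illustrative only):

```python
import sqlite3

# Idempotent upsert sketch: running the same load twice leaves the
# warehouse table unchanged. SQLite stands in for Redshift/Databricks.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, email TEXT)")

batch = [(1, "a@example.com"), (2, "b@example.com")]
for _ in range(2):  # run the same batch twice to demonstrate idempotence
    conn.executemany(
        "INSERT INTO dim_customer (id, email) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET email = excluded.email",
        batch,
    )

rows = conn.execute("SELECT id, email FROM dim_customer ORDER BY id").fetchall()
print(rows)  # two rows, no duplicates after the second run
```

In a Redshift or Databricks deployment the same effect comes from a `MERGE` statement or a dbt incremental model with a `unique_key`.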

Benefits

  • $101,000 - $113,000 a year (final compensation determined by experience, expertise, and role alignment)
  • Bonus structure
  • Employer-paid benefits plan and Health & Wellness Flex Account
  • Professional development account and wellness days
  • Holiday shutdown and Wave Days (extra vacation days in the summer)
  • Get A-Wave program: work from anywhere in the world for up to 90 days
  • Remote, full-time position
  • Inclusive and accessible recruitment process with accommodations available

Interested in this position?

Apply directly on the company website.

Similar Roles

Sr. Data Engineer I (6436)

MetroStar 251-1K IT Services

MetroStar is hiring a Sr. Data Engineer I to support an enterprise AI-enabled financial compliance initiative for the Department of War, building the data foundation for compliance modernization across 180+ systems.

PostgreSQL Python SQLAlchemy XML YAML

Staff Software Engineer - Product Analytics

Datadog 5K-10K IT Services

Datadog is hiring a Staff Engineer to lead the backend technical direction for its Product Analytics platform, building systems that help customers analyze user behavior, retention, and growth at scale.

SQL

Senior Staff Data Engineer

SoFi 1K-5K Capital Markets

SoFi is seeking a Senior Staff Data Engineer to lead the architecture and evolution of its AI-powered Data Platform, advancing data reliability, governance, and scalable data experiences for members.

Apache Airflow Apache Spark AWS GCP GitLab Hadoop Kafka Python Snowflake SQL

Principal Engineer, Ads Measurement

Unity 5K-10K Internet Software & Services

Unity is hiring a Principal Engineer for Ads Measurement to lead the development of self-attribution and install measurement systems that help the company independently evaluate ad performance and support optimization across its ads platform.

C++ Go Java
