Level Access

Level Access provides a comprehensive digital accessibility platform, automated scans, and expert-led services to help organizations achieve and maintain accessibility compliance. With a focus on IT systems, Level Access has been a leader in the accessibility...

Internet Software & Services
251–1,000 employees
Founded 1997
$72M raised

Description

  • Design, build, and maintain a scalable data platform on Databricks and AWS using Medallion architecture principles.
  • Develop and manage data ingestion and ETL/ELT pipelines from core business systems.
  • Implement data quality monitoring and automated alerting for freshness, completeness, validity, and anomaly detection.
  • Establish and maintain Golden Record identity resolution and reverse synchronization across source systems.
  • Govern metadata, lineage, and permissions using Databricks Unity Catalog and platform standards.
  • Collaborate with cross-functional teams to deliver analytics- and AI-ready datasets.
  • Apply infrastructure-as-code, CI/CD, and version control best practices for data pipelines.
  • Communicate with stakeholders, manage ambiguity, and drive results in a fast-paced environment.
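The responsibilities above combine Medallion-style pipelines with automated data-quality monitoring for freshness and completeness. As a minimal, hypothetical sketch of that monitoring idea (the table name, thresholds, and required fields below are illustrative assumptions, not details from this posting), a per-table check might look like:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class QualityReport:
    table: str
    fresh: bool            # was the last load recent enough?
    completeness: float    # share of rows with all required fields populated
    passed: bool

def check_quality(table: str, rows: list[dict], last_loaded: datetime,
                  required_fields: tuple[str, ...],
                  max_age: timedelta = timedelta(hours=24),
                  min_completeness: float = 0.95) -> QualityReport:
    """Flag a table whose load is stale or whose required fields are too sparse."""
    fresh = datetime.now(timezone.utc) - last_loaded <= max_age
    if rows:
        filled = sum(
            all(r.get(f) not in (None, "") for f in required_fields) for r in rows
        )
        completeness = filled / len(rows)
    else:
        completeness = 0.0
    return QualityReport(table, fresh, completeness,
                         passed=fresh and completeness >= min_completeness)
```

In a Databricks deployment this logic would typically run as a scheduled job over Delta tables, with failed reports routed to an alerting channel rather than returned in-process.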

Requirements

  • 8+ years of experience in data engineering building production-grade ETL/ELT pipelines at scale.
  • Strong proficiency in Python, including PySpark, scripting, API integrations, and pipeline automation.
  • Hands-on experience with Databricks, including notebooks, jobs, workflows, and Unity Catalog.
  • Hands-on experience with AWS services including Lambda, ECS/Fargate, S3, Glue, Athena, DynamoDB, EventBridge, Step Functions, and IAM.
  • Expertise in data modeling and lakehouse architectures, including Medallion, Delta Lake, Parquet, schema evolution, and incremental upserts.
  • Proficiency in SQL and dbt for building modular, tested, and documented models.
  • Experience with REST API integration, entity resolution, and record matching strategies.
  • Familiarity with Git-based version control and CI/CD for data pipelines.
  • Preferred: experience with advanced Databricks features such as Delta Live Tables and MLflow integration.
  • Preferred: Salesforce API experience, including Bulk API 2.0, REST API, and SOQL.
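Several requirements touch on entity resolution and record matching, the core of the Golden Record work described above. A toy sketch of the idea (the field name, threshold, and fuzzy-matching approach are illustrative assumptions, not Level Access's actual method):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.casefold().strip(), b.casefold().strip()).ratio()

def match_records(source: list[dict], master: list[dict],
                  key: str = "name", threshold: float = 0.85) -> list[tuple[dict, dict]]:
    """Pair each source record with its best-matching master ('golden') record,
    keeping the pair only if the best score clears the threshold."""
    pairs = []
    for rec in source:
        best, score = None, 0.0
        for gold in master:
            s = similarity(rec[key], gold[key])
            if s > score:
                best, score = gold, s
        if best is not None and score >= threshold:
            pairs.append((rec, best))
    return pairs
```

In practice this kind of matching would run over normalized keys (email, domain, external IDs) rather than raw names, with survivorship rules deciding which source system wins each field in the golden record, and the results reverse-synced to the source systems.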

Benefits

  • Full-time, salaried position.
  • Competitive benefits package.
  • Bonus opportunities.
  • Generous paid time off.
  • Paid holidays.
  • Programs that support employee well-being and work-life balance.

Interested in this position?

Apply directly on the company website
