tapouts

tapouts is a company dedicated to preparing one million children by 2025 to face future challenges by transforming stress into success. Its methodology integrates psychology, neuroscience, and education to deliver impactful coaching sessions that pro...

Family Services
11-50
Founded 2021

Responsibilities

  • Design, build, and maintain scalable batch and real-time data pipelines.
  • Design and develop dashboards that track key business metrics and support decision-making.
  • Develop and optimize complex SQL queries, stored procedures, and data models.
  • Write production-grade Python code for data ingestion, transformation, and automation.
  • Build and manage cloud-native data infrastructure across AWS, GCP, or Azure.
  • Implement and maintain data lakehouse architectures such as Delta Lake or Apache Iceberg.
  • Support ML workflows, including feature engineering, training pipelines, and MLOps integration.
  • Ensure data quality, governance, and lineage tracking across data assets.
  • Collaborate with data scientists and analysts to deliver trusted, well-documented datasets.
  • Monitor pipeline performance, troubleshoot issues, and optimize for cost and efficiency.

Requirements

  • 5+ years of experience in data engineering or a related field.
  • Advanced English proficiency.
  • Strong proficiency in SQL, including complex queries, performance optimization, and data modeling.
  • Strong proficiency in Python for ETL/ELT pipelines, scripting, and automation.
  • Experience with cloud platforms such as AWS, GCP, or Azure.
  • Hands-on experience with orchestration tools such as Apache Airflow, Prefect, or similar.
  • Experience with big data frameworks such as Apache Spark, Kafka, Flink, or similar.
  • Familiarity with data warehousing solutions such as Snowflake, BigQuery, Redshift, or similar.
  • Strong understanding of data modeling, schema design, and data architecture principles.
  • Experience with dbt and the modern data stack is preferred.
  • Familiarity with streaming and event-driven architectures is preferred.
  • Knowledge of MLOps and AI pipeline support is preferred.
  • Experience with data mesh or data platform engineering is preferred.
  • Familiarity with data governance tools and frameworks, including data lineage and data cataloging, is preferred.

Benefits

  • Remote full-time position.
  • Open only to candidates based in Latin America.
  • Equal opportunity employer committed to diversity and non-discrimination.
