[Job - 29349] Senior Data Developer (AWS), Brazil

Full-time
Senior
Software Development
CI&T

CI&T is a global digital technology company that partners with leading enterprises to enable agile growth through advanced technologies, with a team of 2,000 experts worldwide.

Internet Software & Services · 5K–10K employees · Founded 1995

Description

  • Design and build end-to-end data pipelines across the Raw, Silver, and Gold layers of the Medallion Architecture.
  • Architect data ingestion, transformation, standardization, and serving processes from diverse and heterogeneous sources.
  • Model data for analytical consumption using Data Warehouse best practices, including Star Schema and dimensional modeling.
  • Identify, evaluate, consolidate, and validate new data sources aligned with agronomic business objectives.
  • Translate business and domain requirements into data architecture decisions in collaboration with stakeholders and client leadership.
  • Optimize and serve data in formats such as Parquet, CSV, and geospatial datasets for downstream AI and map-based applications.
  • Manage and configure AWS cloud infrastructure, including storage, compute, access control, serverless functions, data cataloging, and event-driven processing.
  • Own deployment and CI/CD practices for data pipelines using GitLab, including branching strategy, test gates, and automated deployments.
  • Support the data layer that feeds AI/ML applications by ensuring data quality, structure, and availability.
  • Operate proactively in a greenfield environment by questioning assumptions, proposing solutions, experimenting, and iterating with the team.
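The Raw → Silver → Gold layering described above can be sketched, in very simplified form, with plain Python. Field names such as `field_id` and `yield_kg` are invented for illustration; a production pipeline for this role would implement the same steps in PySpark over S3/Parquet.

```python
# Minimal illustration of Medallion layering: Raw -> Silver -> Gold.
# Field names (field_id, yield_kg) are hypothetical examples.

RAW = [  # Raw layer: data as ingested — untyped, inconsistent casing, gaps
    {"field_id": "A-1", "yield_kg": "1200", "date": "2024-05-01"},
    {"field_id": "a-1", "yield_kg": "1300", "date": "2024-05-02"},
    {"field_id": "B-2", "yield_kg": "",     "date": "2024-05-01"},
]

def to_silver(raw):
    """Silver layer: standardize keys, cast types, drop invalid rows."""
    silver = []
    for row in raw:
        if not row["yield_kg"]:
            continue  # data-quality gate: drop rows with missing yield
        silver.append({
            "field_id": row["field_id"].upper(),  # normalize identifier casing
            "yield_kg": float(row["yield_kg"]),   # cast string to numeric
            "date": row["date"],
        })
    return silver

def to_gold(silver):
    """Gold layer: aggregate into an analytics-ready serving shape."""
    totals = {}
    for row in silver:
        totals[row["field_id"]] = totals.get(row["field_id"], 0.0) + row["yield_kg"]
    return totals

print(to_gold(to_silver(RAW)))  # {'A-1': 2500.0}
```

The point of the layering is that each stage has one responsibility: Raw preserves the source faithfully, Silver enforces schema and quality, and Gold serves aggregated, consumption-ready data.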

Requirements

  • B2 level or above English proficiency for technical communication with international stakeholders.
  • Solid hands-on experience with AWS, including S3, IAM, Redshift, Lambda, and Glue.
  • Experience with Terraform or equivalent Infrastructure-as-Code tooling in real data engineering projects.
  • Strong proficiency with GitLab for source control, CI/CD configuration, deployment workflows, and test gate management.
  • Strong proficiency in SQL, including complex queries, analytical transformations, and performance tuning for data warehouses.
  • Strong proficiency in PySpark for large-scale distributed data processing and performance optimization.
  • Experience with Databricks in data engineering pipelines and lakehouse architectures, including migration and deployment scenarios.
  • Analytical data modeling expertise with Star Schema and dimensional modeling.
  • Hands-on experience with the Medallion Architecture and manipulation of Parquet and CSV files.
  • Experience integrating and consolidating data from multiple heterogeneous sources.
  • Comfort operating in greenfield projects with ambiguity and ownership.
  • Familiarity with SnapLogic or comparable ETL/orchestration tools such as Pentaho, Airflow, or Alteryx (preferred).
  • Experience with geospatial data processing and map-based analytical environments (preferred).
  • Knowledge of DuckDB for in-process analytical queries (preferred).
  • Background in agribusiness or precision agriculture data projects (preferred).
  • Exposure to predictive modeling workflows as a data provider to ML pipelines, not as a model developer (preferred).
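To illustrate the Star Schema modeling the requirements call for, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are invented for the example; in this role the warehouse would be Redshift or a Databricks lakehouse, not SQLite.

```python
import sqlite3

# Hypothetical star schema: one fact table (fact_harvest) referencing
# one dimension table (dim_field) via a surrogate key.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_field (
        field_key  INTEGER PRIMARY KEY,
        field_name TEXT,
        region     TEXT
    );
    CREATE TABLE fact_harvest (
        field_key    INTEGER REFERENCES dim_field(field_key),
        harvest_date TEXT,
        yield_kg     REAL
    );
    INSERT INTO dim_field VALUES (1, 'North Plot', 'MG'), (2, 'South Plot', 'SP');
    INSERT INTO fact_harvest VALUES (1, '2024-05-01', 1200.0),
                                    (1, '2024-05-02', 1300.0),
                                    (2, '2024-05-01', 900.0);
""")

# Typical analytical query: aggregate fact measures by a dimension attribute.
rows = con.execute("""
    SELECT d.region, SUM(f.yield_kg) AS total_yield
    FROM fact_harvest AS f
    JOIN dim_field AS d USING (field_key)
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
print(rows)  # [('MG', 2500.0), ('SP', 900.0)]
```

The design choice behind the star shape is that measures live in narrow fact tables and descriptive attributes live in denormalized dimensions, keeping analytical joins shallow and GROUP BY queries fast.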

Benefits

  • Health and dental insurance.
  • Meal and food allowance.
  • Childcare assistance.
  • Extended paternity leave.
  • Gym and wellness partnerships via Wellhub (Gympass) and TotalPass.
  • Profit sharing and results participation (PLR).
  • Life insurance.
  • Continuous learning platform, partnerships with online education providers, and language learning support.

Interested in this position?

Apply directly on the company website
