Trustly

Trustly specializes in developing and providing online payment solutions that leverage Open Banking technology to enhance payment processes, reduce costs, and streamline financial services for consumers, merchants, and banks.

Diversified Financial Services
251–1,000 employees
Founded 2008

Description

  • Design, implement, and maintain scalable data platform infrastructure across AWS and Kubernetes (EKS, EC2, EMR, RDS, Redshift, Glue).
  • Design, develop, and maintain modular, analytics-ready data models and transformations in the Redshift data warehouse (using dbt and SQL).
  • Build and maintain secure, automated CI/CD pipelines and infrastructure-as-code for data components and platform resources.
  • Operate and evolve data processing and consumer-facing tools and frameworks (QuickSight, Redshift Editor, Athena, Metabase, Sagemaker Studio).
  • Manage batch and streaming orchestration (Airflow and Kafka) and automate data workloads, integrations, and release workflows for producers and consumers.
  • Implement data quality, validation checks, unit tests, and observability/alerting to ensure high availability, performance, and SLA compliance.
  • Collaborate with DevOps, Security, and other stakeholders to ensure compliance, reliability, and scalable developer workflows.
  • Maintain data catalogs, lineage documentation, architectural documentation, and investigate/resolve pipeline issues and incidents.
  • Create abstraction layers and tooling to simplify data creation, improve developer experience, and increase business area productivity.

Requirements

  • Bachelor’s or Master’s degree in IT, Computer Science, Engineering, Mathematics, or related technical discipline.
  • Proven track record of building big data pipelines and orchestrating data workloads across batch and streaming systems.
  • Experience with AWS services and big data tools (EKS, EC2, EMR, RDS, Redshift, Glue) and familiarity with Spark.
  • Strong SQL skills, data modeling experience, and experience with data warehouses (preferably Redshift) and relational databases (preferably Postgres).
  • Experience with dbt for transformations and writing automated tests (dbt tests) and unit tests for data pipelines.
  • Experience with Infrastructure as Code (Terraform or similar) and CI/CD tooling to automate deployments.
  • Experience with Kubernetes and Docker for running and maintaining services.
  • Experience with automation and workflow/orchestration tools (Airflow, Sagemaker, Kafka).
  • Python programming skills for building and automating data workloads.
  • Professional working proficiency in English for daily collaboration across a distributed global team.

Benefits

  • Bradesco health and dental plan for the employee and dependents, with no co-payment.
  • Life insurance with enhanced coverage.
  • Meal voucher and supermarket voucher.
  • Home Office Allowance and remote-first flexible hours (work from any city in Brazil).
  • Wellhub access for physical activities and online classes, plus Trustly Club discounts at educational institutions and partner stores.
  • English program with online group classes and a private teacher.
  • Extended maternity and paternity leave and Birthday Off.
  • Welcome kit with Apple equipment (MacBook Pro, iPhone) and a referral program with rewards.

Interested in this position?

Apply directly on the company website
