Trustly

Trustly specializes in developing and providing online payment solutions that leverage Open Banking technology to enhance payment processes, reduce costs, and streamline financial services for consumers, merchants, and banks.

Diversified Financial Services
251–1,000 employees
Founded 2008

Description

  • Design, implement, and maintain scalable data platform infrastructure across AWS and Kubernetes (EKS, EC2, EMR, RDS, Redshift, Glue).
  • Design, develop, and maintain modular, analytics-ready data models and transformations in the Redshift data warehouse (using dbt and SQL).
  • Build and maintain secure, automated CI/CD pipelines and infrastructure-as-code for data components and platform resources.
  • Operate and evolve data-processing and consumer-facing tools and frameworks (QuickSight, Redshift Query Editor, Athena, Metabase, SageMaker Studio).
  • Manage batch and streaming orchestration (Airflow and Kafka) and automate data workloads, integrations, and release workflows for producers and consumers.
  • Implement data quality, validation checks, unit tests, and observability/alerting to ensure high availability, performance, and SLA compliance.
  • Collaborate with DevOps, Security, and other stakeholders to ensure compliance, reliability, and scalable developer workflows.
  • Maintain data catalogs, lineage documentation, and architectural documentation, and investigate and resolve pipeline issues and incidents.
  • Create abstraction layers and tooling to simplify data creation, improve developer experience, and increase business area productivity.
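As a purely illustrative sketch of the data-quality and validation checks mentioned in the responsibilities above — the function, schema, and column names here are hypothetical, not Trustly's actual tooling — a minimal batch-level check might look like:

```python
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    """Outcome of a batch-level data-quality check."""
    issues: list = field(default_factory=list)

    @property
    def passed(self):
        return not self.issues

def validate_batch(rows, required_cols, min_rows=1):
    """Run simple quality checks on a batch of row dicts:
    minimum row count, required columns present, and no nulls
    in required columns. Hypothetical example for illustration;
    real platforms would wire results into observability/alerting."""
    result = ValidationResult()
    if len(rows) < min_rows:
        result.issues.append(
            f"expected at least {min_rows} rows, got {len(rows)}"
        )
    for i, row in enumerate(rows):
        for col in required_cols:
            if col not in row:
                result.issues.append(f"row {i}: missing column '{col}'")
            elif row[col] is None:
                result.issues.append(f"row {i}: null value in '{col}'")
    return result
```

A check like this would typically run as a gate in an orchestrated workflow, failing the task (and triggering an alert) when `passed` is false.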

Requirements

  • Bachelor’s or Master’s degree in IT, Computer Science, Engineering, Mathematics, or a related technical discipline.
  • Proven track record of building big data pipelines and orchestrating data workloads across batch and streaming systems.
  • Experience with AWS services and big data tools (EKS, EC2, EMR, RDS, Redshift, Glue) and familiarity with Spark.
  • Strong SQL skills, data modeling experience, and experience with data warehouses (preferably Redshift) and relational databases (preferably Postgres).
  • Experience with dbt for transformations, including writing automated dbt tests and unit tests for data pipelines.
  • Experience with Infrastructure as Code (Terraform or similar) and CI/CD tooling to automate deployments.
  • Experience with Kubernetes and Docker for running and maintaining services.
  • Experience with automation and workflow/orchestration tools (Airflow, SageMaker, Kafka).
  • Python programming skills for building and automating data workloads.
  • Professional working proficiency in English for daily collaboration across a distributed global team.
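To make the "unit tests for data pipelines" requirement concrete — the transformation, field names, and schema below are invented for illustration and are not Trustly's actual code — a small, pure transformation with a pinned-down test might look like:

```python
def normalize_payment(record):
    """Example transformation: convert an amount from minor units
    (cents) to a decimal major-unit amount and upper-case the
    currency code. Hypothetical schema, for illustration only."""
    return {
        "payment_id": record["payment_id"],
        "amount": record["amount_cents"] / 100,
        "currency": record["currency"].upper(),
    }

def test_normalize_payment():
    # A unit test pins down the transformation's contract, so later
    # refactors (e.g. moving the logic into a dbt model) can be
    # verified against the same expected output.
    out = normalize_payment(
        {"payment_id": "p1", "amount_cents": 1250, "currency": "brl"}
    )
    assert out == {"payment_id": "p1", "amount": 12.5, "currency": "BRL"}

test_normalize_payment()
```

Keeping transformations as pure functions like this is what makes them easy to unit-test independently of the warehouse or orchestrator.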

Benefits

  • Bradesco health and dental plan for the employee and dependents, with no co-payment cost.
  • Life insurance with enhanced coverage.
  • Meal voucher and supermarket voucher.
  • Home Office Allowance and remote-first flexible hours (work from any city in Brazil).
  • Wellhub access for physical activities and online classes, plus Trustly Club discounts at educational institutions and partner stores.
  • English program with online group classes and a private teacher.
  • Extended maternity and paternity leave and Birthday Off.
  • Welcome kit with Apple equipment (MacBook Pro, iPhone) and a referral program with rewards.

Interested in this position?

Apply directly on the company website


Similar Roles

Senior Information Security Engineer – Data

Rubrik 1K-5K IT Services

Rubrik is hiring a Senior Security Engineer to operate its SIEM environment and help build a Security Data Lake platform that supports security monitoring, analytics, and automated SecOps across a global multi-cloud footprint.

AWS Azure CI/CD Databricks Elasticsearch GCP Kubernetes LLM Python SIEM Snowflake Splunk Terraform

Data Engineer

NEORIS 5K-10K Internet Software & Services

NEORIS is looking for a Data Engineer to design, develop, and deploy data solutions in a Big Data and Cloud environment, aligned with the data architecture and oriented toward efficiency and maintainability.

Agile Apache Spark AWS Azure Cassandra Elasticsearch GCP Hadoop HDFS MongoDB Neo4j Oracle PostgreSQL Python SQL Server

Vice President, Data Engineering

TASQ Staffing Solutions 11-50 Professional Services

The VP, Data Engineering at the company will lead enterprise-wide data, analytics, and AI strategy across multiple business units to modernize reporting, enable self-service insights, and turn data into measurable business outcomes.

Azure Databricks Machine Learning Power BI Snowflake SQL

Data Engineer

Soros Fund Management Capital Markets

Soros Fund Management is hiring an experienced Data Engineer to build and modernize data systems that support trading, risk, research, and accounting operations across the firm.

Apache Spark Databricks dbt Docker FastAPI Kubernetes PostgreSQL Python Snowflake SQL SQL Server
