Kpler

Kpler provides a comprehensive platform for global trade intelligence, offering real-time data and analytics that empower businesses to plan, grow, and operate sustainably across various commodities and markets.

Professional Services
251–1,000 employees
Founded 2014

Description

  • Build and maintain core datasets for vessel characteristics, companies, and geospatial data.
  • Create and maintain REST APIs, streaming pipelines with Kafka Streams, and Spark batch pipelines (see the streaming sketch after this list).
  • Own development tasks end to end, from understanding requirements through implementation, testing, deployment, and review.
  • Design and build APIs and data processing components for development environments and peer/product testing.
  • Write and execute unit, integration, functional, and end-to-end tests based on defined scenarios.
  • Monitor system performance, alerts, and SLOs after release to ensure reliability and optimal functionality.
  • Deliver well-documented, maintainable code following TDD principles and clean code standards.
  • Implement data schema evolution and versioning strategies to support reliable data exchange (see the Avro compatibility sketch after this list).
  • Develop and maintain batch and streaming data pipelines, including backpressure handling, orchestration, retries, and data quality controls.
  • Instrument services with metrics, logs, and traces, and contribute to CI/CD and incident response.
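
To make the pipeline bullets concrete, here is a minimal Kafka Streams topology in Scala, assuming the kafka-streams-scala DSL. It is an illustrative sketch only: the topic names and the trivial filter/enrichment step are hypothetical stand-ins, not Kpler's actual pipeline.

    import java.util.Properties
    import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
    import org.apache.kafka.streams.scala.ImplicitConversions._
    import org.apache.kafka.streams.scala.StreamsBuilder
    import org.apache.kafka.streams.scala.serialization.Serdes._

    object VesselPositionEnricher extends App {
      val props = new Properties()
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "vessel-position-enricher")
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

      val builder = new StreamsBuilder()

      // Raw position messages keyed by vessel id (topic names are invented).
      val positions = builder.stream[String, String]("vessel-positions-raw")

      positions
        .filter((_, value) => value.nonEmpty) // basic data-quality gate
        .mapValues(_.trim)                    // stand-in for real enrichment logic
        .to("vessel-positions-enriched")

      val streams = new KafkaStreams(builder.build(), props)
      streams.start()
      sys.addShutdownHook(streams.close())
    }

A production version would add the error handling, retries, and monitoring hooks the bullets above describe; this sketch only shows the shape of the topology.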

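The schema-evolution bullet can be illustrated the same way. The sketch below uses Avro's built-in compatibility checker to confirm that a new schema version adding an optional field (the "deadweight" field is invented for this example) stays readable by consumers still on the old version.

    import org.apache.avro.{Schema, SchemaCompatibility}

    object SchemaEvolutionCheck extends App {
      // v1 of a hypothetical vessel record.
      val v1 = new Schema.Parser().parse(
        """{"type": "record", "name": "Vessel", "fields": [
          |  {"name": "imo", "type": "string"},
          |  {"name": "name", "type": "string"}
          |]}""".stripMargin)

      // v2 adds an optional field with a default, keeping the change backward compatible.
      val v2 = new Schema.Parser().parse(
        """{"type": "record", "name": "Vessel", "fields": [
          |  {"name": "imo", "type": "string"},
          |  {"name": "name", "type": "string"},
          |  {"name": "deadweight", "type": ["null", "long"], "default": null}
          |]}""".stripMargin)

      // Can a v1 reader decode data written with v2?
      val result = SchemaCompatibility.checkReaderWriterCompatibility(v1, v2)
      println(result.getType) // prints COMPATIBLE
    }

Adding fields with defaults (or making them nullable) is the standard way to keep Avro changes backward compatible; removing or retyping fields is what breaks existing readers.
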
Requirements

  • 3–5 years of experience in data-focused software engineering roles.
  • Strong programming skills in Scala or other JVM languages; Python experience is preferred.
  • Experience designing and operating RESTful APIs and versioned interfaces (see the URL-versioning sketch after this list).
  • Good understanding of data modeling, schema evolution, and serialization technologies such as Avro or Protobuf.
  • Experience building and maintaining batch or streaming data systems.
  • Familiarity with CI/CD pipelines and modern monitoring and alerting practices.
  • Proficiency with Git-based workflows, code reviews, and Agile development methodologies.
  • Strong ownership mindset with pragmatic problem-solving skills and the ability to deliver end-to-end solutions.
  • Excellent communication skills and fluency in English.
  • Experience with Apache Airflow is a plus.
  • Exposure to cloud platforms, preferably AWS, and infrastructure as code with Terraform is a plus.
  • Experience with Docker and Kubernetes in production environments is a plus.
  • Hands-on knowledge of Kafka and event-driven or microservices architectures is a plus.
  • Familiarity with JVM build tools such as Gradle or Maven is a plus.
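
On the "versioned interfaces" point, one common pattern is URL-based versioning, where /v1 endpoints keep their contract while /v2 evolves alongside them. The Akka HTTP sketch below shows one way to structure this on the JVM; the routes and payloads are invented for illustration and say nothing about Kpler's actual API.

    import akka.actor.ActorSystem
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.server.Directives._

    object VesselApi extends App {
      implicit val system: ActorSystem = ActorSystem("vessel-api")

      // /v1 keeps its original contract; /v2 adds a field without breaking v1 clients.
      val routes =
        path("v1" / "vessels" / Segment) { imo =>
          get { complete(s"""{"imo": "$imo", "name": "EXAMPLE"}""") }
        } ~
        path("v2" / "vessels" / Segment) { imo =>
          get { complete(s"""{"imo": "$imo", "name": "EXAMPLE", "deadweight": null}""") }
        }

      Http().newServerAt("localhost", 8080).bind(routes)
    }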

Benefits

  • Remote full-time role based in Germany.
  • Opportunity to work on high-impact data products in commodities, energy, and maritime markets.
  • Work at a global company with more than 700 experts across 35+ countries.
  • Inclusive and diverse work environment.
  • Supportive culture that emphasizes collaboration and helping colleagues and clients.
  • Encouragement to apply even if you do not meet every requirement.

Interested in this position?

Apply directly on the company website
