Irth

Irth Solutions is a leading provider of cloud-based SaaS asset protection solutions focused on damage prevention, risk analysis, and network infrastructure resilience. With nearly three decades of experience, Irth Solutions offers a wide range of fe...

Diversified Telecommunication Services
51-250
Founded 1985

Description

  • Build and maintain ingestion pipelines across AWS, Azure, and GCP.
  • Implement batch and streaming data pipelines using Databricks, Spark/PySpark, SQL, Delta Live Tables, and Lakeflow.
  • Apply medallion architecture patterns to transform structured data.
  • Implement CDC, SCD Type 1/2, schema evolution, and data validation processes.
  • Manage Delta Lake storage, tables, partitions, and performance optimization tasks such as OPTIMIZE, Z-ORDER, and VACUUM.
  • Support metadata management, lineage tracking, and cataloging using Unity Catalog or similar tools.
  • Assist with multi-cloud integrations between cloud storage platforms and Databricks.
  • Implement data quality validation, profiling, monitoring, and governance controls.
  • Build and manage orchestration workflows and support CI/CD and deployment automation.
  • Troubleshoot pipeline issues, recover failed jobs, and support performance tuning.
  • Work closely with the Senior Data Architect on architecture implementation and design reviews.
  • Document pipelines, data models, transformation logic, and operational procedures.
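The SCD Type 2 work named above can be sketched in plain Python to show the intent (record shapes, field names, and the `scd2_upsert` helper are illustrative assumptions, not Irth's implementation; on Databricks this logic is typically expressed as a Delta Lake `MERGE`):

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, key="id", tracked=("city",), today=None):
    """SCD Type 2 sketch: close out changed current rows, append new versions.

    dim_rows: list of dicts with the key, tracked attributes, 'valid_from',
              'valid_to', and 'is_current' (mutated in place for closed rows).
    incoming: list of dicts carrying the key and tracked attributes (the change feed).
    """
    today = today or date.today().isoformat()
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for row in incoming:
        cur = current.get(row[key])
        if cur and all(cur[a] == row[a] for a in tracked):
            continue  # no change in tracked attributes: keep the current version
        if cur:
            cur["valid_to"] = today      # close out the superseded version
            cur["is_current"] = False
        out.append({key: row[key], **{a: row[a] for a in tracked},
                    "valid_from": today, "valid_to": None, "is_current": True})
    return out
```

A Delta `MERGE` would express the same two branches declaratively (WHEN MATCHED: close the row; WHEN NOT MATCHED: insert a new current version), with the lakehouse handling concurrency and history.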

Requirements

  • 3–5 years of experience in data engineering, ETL, or cloud data platforms.
  • Experience with Databricks, Spark, PySpark, or distributed data processing.
  • Strong SQL and structured data transformation skills.
  • Experience with at least one cloud platform, with Azure preferred and AWS or GCP acceptable.
  • Knowledge of data modeling, schema evolution, pipeline troubleshooting, and data quality.
  • Understanding of security practices including RBAC, encryption, and credential management.
  • Experience with Delta Lake, medallion architecture, and lakehouse design is preferred.
  • Familiarity with Unity Catalog, Purview, Glue Catalog, or similar metadata tools is preferred.
  • Experience with orchestration tools such as ADF, Airflow, or Databricks Workflows is preferred.
  • Experience with Git, CI/CD, and DevOps practices is preferred.
  • Knowledge of Power BI, geospatial data, or AI/ML data preparation is a plus.
  • Relevant cloud or Databricks certifications such as DP-203 or Data Engineer Associate are preferred.
  • Bachelor’s or master’s degree in Computer Science, Software Engineering, or a related field, or equivalent professional experience.

Benefits

  • Be an integral part of a dynamic, growing company that is well respected in its industry.
  • Competitive pay based on experience.

Interested in this position?

Apply directly on the company website

