Irth

Irth Solutions is a leading provider of cloud-based SaaS asset protection solutions focused on damage prevention, risk analysis, and network infrastructure resilience. With nearly three decades of experience, Irth Solutions offers a wide range of fe...

Diversified Telecommunication Services
51-250
Founded 1985

Description

  • Build and maintain ingestion pipelines across AWS, Azure, and GCP.
  • Implement batch and streaming data pipelines using Databricks, Spark/PySpark, SQL, Delta Live Tables, and Lakeflow.
  • Apply medallion architecture patterns to transform structured data.
  • Implement CDC, SCD Type 1/2, schema evolution, and data validation processes.
  • Manage Delta Lake storage, tables, partitions, and performance optimization tasks such as OPTIMIZE, Z-ORDER, and VACUUM.
  • Support metadata management, lineage tracking, and cataloging using Unity Catalog or similar tools.
  • Assist with multi-cloud integrations between cloud storage platforms and Databricks.
  • Implement data quality validation, profiling, monitoring, and governance controls.
  • Build and manage orchestration workflows and support CI/CD and deployment automation.
  • Troubleshoot pipeline issues, recover failed jobs, and support performance tuning.
  • Work closely with the Senior Data Architect on architecture implementation and design reviews.
  • Document pipelines, data models, transformation logic, and operational procedures.
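The CDC and SCD Type 2 work listed above would typically be done with a Delta Lake `MERGE` in Databricks, but the underlying versioning logic can be sketched in plain Python without Spark. This is a minimal illustration, not the employer's actual implementation; the column names `valid_from`, `valid_to`, and `is_current` are conventional SCD Type 2 bookkeeping fields assumed for the example:

```python
from datetime import date

def scd2_upsert(dim, incoming, key, tracked, today=None):
    """Apply SCD Type 2 logic to a list of dict rows: when a tracked
    attribute changes, expire the current row and append a new current
    version; brand-new keys are inserted as the first version."""
    today = today or date.today().isoformat()
    # Index the currently-active row for each business key.
    current = {r[key]: r for r in dim if r["is_current"]}
    out = list(dim)  # shallow copy; expired rows are updated in place
    for new in incoming:
        old = current.get(new[key])
        if old is None:
            # New key: insert as the first current version.
            out.append({**new, "valid_from": today, "valid_to": None,
                        "is_current": True})
        elif any(old[c] != new[c] for c in tracked):
            # Changed attribute: close the old row, add a new version.
            old["valid_to"] = today
            old["is_current"] = False
            out.append({**new, "valid_from": today, "valid_to": None,
                        "is_current": True})
        # Rows with no tracked changes are left untouched.
    return out
```

For example, feeding in an updated `city` for an existing key yields two rows for that key: the expired original (with `valid_to` set) and a new current version. In Delta Lake the same effect is achieved with a `MERGE INTO` that pairs an update-to-expire clause with an insert of the new version.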

Requirements

  • 3–5 years of experience in data engineering, ETL, or cloud data platforms.
  • Experience with Databricks, Spark, PySpark, or distributed data processing.
  • Strong SQL and structured data transformation skills.
  • Experience with at least one cloud platform (Azure preferred; AWS or GCP acceptable).
  • Knowledge of data modeling, schema evolution, pipeline troubleshooting, and data quality.
  • Understanding of security practices including RBAC, encryption, and credential management.
  • Experience with Delta Lake, medallion architecture, and lakehouse design is preferred.
  • Familiarity with Unity Catalog, Purview, Glue Catalog, or similar metadata tools is preferred.
  • Experience with orchestration tools such as ADF, Airflow, or Databricks Workflows is preferred.
  • Experience with Git, CI/CD, and DevOps practices is preferred.
  • Knowledge of Power BI, geospatial data, or AI/ML data preparation is a plus.
  • Relevant cloud or Databricks certifications such as DP-203 or Data Engineer Associate are preferred.
  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field, or equivalent professional experience.

Benefits

  • The opportunity to be an integral part of a dynamic, growing company that is well respected in its industry.
  • Competitive pay based on experience.

Interested in this position? Apply directly on the company website.

