ShyftLabs

ShyftLabs is a strategic partner in digital transformation, delivering tailored, end-to-end software solutions that integrate seamlessly to accelerate value creation, particularly in the retail sector. Their e...

IT Services
51-250
Founded 2018

Description

  • Design, build, and maintain scalable batch and real-time ETL/ELT data pipelines using cloud services.
  • Architect and implement data infrastructure for high-volume data ingestion and processing.
  • Develop and manage the central data warehouse in Google BigQuery.
  • Design data models, schemas, and table structures optimized for performance and maintainability.
  • Write clean, efficient SQL and Python code to transform raw data into curated datasets.
  • Build transformation workflows that support analytics, reporting, and data science initiatives.
  • Monitor, troubleshoot, and optimize data infrastructure for performance, reliability, and cost efficiency.
  • Implement data quality checks, validation rules, monitoring, governance, observability, and lineage tracking.
  • Collaborate with engineers, analysts, and data scientists to deliver data products and infrastructure.
  • Lead client and stakeholder communication and align technical solutions with business strategy.
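The responsibilities above center on transforming raw data into curated datasets with SQL and Python, guarded by data-quality checks. As an illustrative sketch only, here is a tiny batch transform in that spirit, using the standard library's sqlite3 as a local stand-in for a warehouse like BigQuery; the table and column names are invented for the example and are not from the posting.

```python
import sqlite3

def load_and_curate(raw_rows):
    """Load raw order rows, validate them, and emit a curated daily summary.

    Sketch only: sqlite3 stands in for a cloud warehouse; in practice the
    same pattern runs as a pipeline task against BigQuery or similar.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE raw_orders (order_id TEXT, order_date TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_rows)

    # Data-quality check: fail the batch if required fields are missing
    # or amounts are negative, rather than letting bad rows flow downstream.
    bad = conn.execute(
        "SELECT COUNT(*) FROM raw_orders "
        "WHERE order_id IS NULL OR order_date IS NULL OR amount < 0"
    ).fetchone()[0]
    if bad:
        raise ValueError(f"{bad} rows failed validation")

    # Transform: aggregate raw events into an analytics-ready summary.
    return conn.execute(
        "SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue "
        "FROM raw_orders GROUP BY order_date ORDER BY order_date"
    ).fetchall()

rows = [
    ("o1", "2024-01-01", 10.0),
    ("o2", "2024-01-01", 5.5),
    ("o3", "2024-01-02", 7.0),
]
print(load_and_curate(rows))  # [('2024-01-01', 2, 15.5), ('2024-01-02', 1, 7.0)]
```

In a production setting the validation and aggregation steps would typically run as separate, monitored tasks in an orchestrator such as Airflow, which is how the quality, observability, and lineage responsibilities above tend to be realized.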

Requirements

  • 5+ years of hands-on experience in data engineering, data integration, or data platform development.
  • Degree in Computer Science, Engineering, Mathematics, or a related STEM discipline.
  • Strong programming and query skills in SQL and Python.
  • Experience with distributed version control systems such as Git in an Agile/Scrum environment.
  • Experience designing and orchestrating ETL pipelines, particularly with Databricks.
  • Experience working in cloud environments such as GCP, AWS, or Azure.
  • Experience with database systems such as MongoDB and Elasticsearch.
  • Strong understanding of data warehousing and dimensional modeling methodologies.
  • Hands-on experience with Airflow and Hadoop.
  • Experience using Docker for containerized workflows and reproducible environments.
  • Ability to identify opportunities to improve data quality, reliability, and automation.
  • Strong business awareness and communication skills with the ability to collaborate with technical teams and business stakeholders.
  • Experience within the retail industry is a plus.
  • Master’s degree in Computer Science, Engineering, or a related discipline is preferred.
  • Experience with enterprise-scale data platforms and Fortune 500 clients is preferred.
  • Familiarity with Druid and its Python API, including Kafka integrations, is preferred.
  • Strong experience using Apache Spark for large-scale data processing is preferred.
  • Experience designing real-time streaming data architectures is preferred.
  • Experience supporting AI/ML systems or agentic AI workflows is preferred.

Benefits

  • Fully remote work arrangement with the possibility of transitioning to a hybrid model in the future.
  • 100% employer-paid health, dental, and vision insurance premiums for employees and dependents.
  • Coverage available from day one.
  • Access to extensive learning and development resources.
  • Opportunity to work with Fortune 500 clients and influence strategy as the company scales.
  • Equal-opportunity, inclusive work environment with accommodation support during the interview process.

Interested in this position?

Apply directly on the company website

