MediaRadar

MediaRadar: Revolutionizing ad sales with comprehensive advertising intelligence for media buyers and sellers.

Media · 51-250 employees · Founded 2006 · $7M raised

Description

  • Architect and implement complex end-to-end data pipelines using Azure Databricks and PySpark.
  • Design, build, and maintain a scalable Medallion Architecture (Bronze/Silver/Gold) for data processing and delivery.
  • Spend 70-80% of time coding and performing technical stewardship as a hands-on 'player-coach' for the team.
  • Optimize Apache Spark jobs, tune Databricks unit (DBU) consumption, define cluster policies, and implement caching strategies to minimize compute costs and improve performance.
  • Proactively audit and refactor pipelines every 3-6 months to maintain effectiveness and reduce cloud costs.
  • Develop a proactive monitoring and alerts framework to achieve 99.9% reliability and mitigate system issues before they impact end users.
  • Build and maintain an end-to-end Data Validation Framework (e.g., Great Expectations) to enforce data accuracy and consistency.
  • Ensure data is available in the Gold layer within the required 24-hour turnaround time and minimize job failure rates.
  • Architect high-performance PostgreSQL schemas (indexing, partitioning) and optimize complex analytical queries.
  • Lead a lean, cross-trained team: manage sprint cycles, conduct code reviews, enforce CI/CD best practices, translate business requirements into technical user stories, and collaborate with ML and offshore teams on model integration and knowledge transfer.
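
The Data Validation Framework bullet above can be sketched library-agnostically in plain Python; in practice the team would likely use Great Expectations suites, but the core idea of declarative, row-level expectations producing a pass/fail report looks roughly like this (all field names and rules are illustrative, not MediaRadar's actual schema):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Expectation:
    """One declarative data-quality rule, in the spirit of Great Expectations."""
    name: str
    check: Callable[[dict], bool]  # returns True if a row passes the rule

def validate(rows: list[dict], expectations: list[Expectation]) -> dict:
    """Run every expectation against every row; return a pass/fail summary."""
    failures = {e.name: 0 for e in expectations}
    for row in rows:
        for e in expectations:
            if not e.check(row):
                failures[e.name] += 1
    return {
        "rows_checked": len(rows),
        "failures": failures,
        "passed": all(count == 0 for count in failures.values()),
    }

# Hypothetical Silver-layer rows and rules (names are illustrative only).
rows = [
    {"ad_id": 1, "spend_usd": 120.0},
    {"ad_id": 2, "spend_usd": -5.0},  # should fail the non-negative check
]
rules = [
    Expectation("ad_id_not_null", lambda r: r.get("ad_id") is not None),
    Expectation("spend_non_negative", lambda r: r.get("spend_usd", 0) >= 0),
]
report = validate(rows, rules)
```

A gating step like this would typically run between the Silver and Gold layers, failing the job (rather than publishing bad data) when `passed` is false.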

Requirements

  • 10+ years of experience in Data or Software Engineering with deep codebase involvement.
  • 3+ years as a Technical Lead managing agile teams.
  • Mandatory experience with Python.
  • Mandatory experience with PostgreSQL and pgvector.
  • Mandatory experience with Azure Databricks, PySpark, and Delta Lake.
  • Experience with Docker, Git, Azure DevOps, and CI/CD pipelines.
  • Proven ability to lead lean, high-impact teams while maintaining high individual output and advocating cross-training.
  • Experience scaling data processing through automation and performance/cost optimization.
  • Desired: experience with Apache Airflow (workflow orchestration) and familiarity with Azure Kubernetes Service (AKS).
  • Eligible to work remotely within the USA.
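
The PostgreSQL and pgvector requirements above imply schema work along these lines; a hedged sketch only, with illustrative table and column names that are not taken from the posting:

```sql
-- Illustrative only: table/column names are hypothetical.
CREATE EXTENSION IF NOT EXISTS vector;

-- Range-partitioned analytical table, per the indexing/partitioning bullet.
CREATE TABLE ad_spend (
    ad_id      bigint  NOT NULL,
    spend_usd  numeric NOT NULL,
    spend_date date    NOT NULL,
    embedding  vector(768)          -- pgvector column for similarity search
) PARTITION BY RANGE (spend_date);

CREATE TABLE ad_spend_2024 PARTITION OF ad_spend
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- B-tree index for analytical filters; IVFFlat index for vector similarity.
CREATE INDEX ON ad_spend_2024 (ad_id, spend_date);
CREATE INDEX ON ad_spend_2024 USING ivfflat (embedding vector_cosine_ops);
```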

Similar Roles

Senior Data Engineer – Backend (Python / Typescript / Big Data / AWS / Kubernetes)

Varicent · 251-1K · Professional Services

Senior Software Engineer at Varicent working on the ELT data platform to design, scale, and advance backend services and data-processing systems that accelerate data workflows and enable high-throughput, cloud-native data pipelines.

Apache Spark AWS CI/CD Docker Kafka Kubernetes Microservices Node.js Python REST API Terraform TypeScript
