Later

Later is a leading social media management and influencer marketing platform that simplifies visual content marketing for Instagram, Facebook, Twitter, and Pinterest. With over 2 million users globally, including renowned brands like Yelp and The Huffington Post, La...

Media
51-250
Founded 2014

Description

  • Define and own the long-term ML infrastructure roadmap for experimentation and future AI initiatives.
  • Establish best practices for model lifecycle management, deployment standards, monitoring, and governance.
  • Design scalable solutions to fill infrastructure gaps and support faster ML development.
  • Build and maintain production-grade model deployment and inference systems using CI/CD, Docker, and APIs.
  • Automate ML workflows for training, validation, registry management, deployment, and rollback.
  • Implement monitoring for model performance, latency, drift, and infrastructure health.
  • Operate ML workloads across AWS and GCP, including GPU-based infrastructure and BigQuery datasets.
  • Develop and maintain infrastructure-as-code to create scalable, repeatable, and secure cloud environments.
  • Optimize CI/CD workflows for ML and infrastructure automation.
  • Partner with data scientists, analysts, platform engineers, and product engineers to translate experimentation needs into production-ready systems.
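The monitoring responsibility above calls out model drift specifically. As a rough illustration of what a drift check can look like, here is a minimal, stdlib-only sketch of the Population Stability Index (PSI), a common drift metric; the function name, bin count, and the 0.2 alert threshold are illustrative conventions, not a description of Later's actual tooling.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ("expected") sample and a live
    ("actual") sample. A PSI above ~0.2 is a common rule-of-thumb
    signal of significant distribution drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def proportions(values):
        counts = [0] * bins
        for v in values:
            # Clamp the top edge into the last bucket.
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice a job like this wires such a metric into scheduled checks against a training-time baseline, with alerts routed through the observability stack (CloudWatch, Prometheus, Datadog).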

Requirements

  • 4+ years of experience in ML Ops, ML infrastructure, backend engineering, or a related role supporting production ML systems.
  • Experience working in cloud-native environments with AWS and/or GCP.
  • Proven experience designing and implementing CI/CD pipelines for ML systems.
  • Strong experience with Amazon SageMaker, Docker, Flask-based APIs, and infrastructure automation tools.
  • Hands-on experience with ML lifecycle tooling such as MLflow, SageMaker Studio, or Weights & Biases.
  • Experience managing container orchestration platforms such as Kubernetes, EKS, or GKE.
  • Strong programming experience in Python; additional experience in Go, Java, or Scala is a plus.
  • Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
  • Familiarity with observability tools such as CloudWatch, Prometheus, Grafana, Datadog, or centralized logging platforms.
  • Experience managing GPU-based workloads and scaling training and inference systems.
  • Familiarity with data infrastructure tools such as BigQuery and cloud-native data pipelines.
  • Bonus: experience supporting LLMs or generative AI pipelines, distributed training systems, feature stores like Feast, real-time inference systems, or ML governance frameworks.
  • A mindset focused on automation, reliability, performance, and continuous improvement in fast-scaling environments.
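The ML lifecycle tooling named above (MLflow, SageMaker Studio, Weights & Biases) revolves around stage-based version promotion and rollback. The following is a toy, pure-Python sketch of that pattern only; `ModelRegistry` is a hypothetical illustration, not the API of any of those products.

```python
class ModelRegistry:
    """Tracks model versions per deployment stage so a bad deploy
    can be rolled back to the previously promoted version."""

    def __init__(self):
        self._versions = []   # all registered version ids, in order
        self._history = {}    # stage -> promotion history (oldest first)

    def register(self, version):
        self._versions.append(version)

    def promote(self, version, stage="production"):
        if version not in self._versions:
            raise ValueError(f"unknown version: {version}")
        self._history.setdefault(stage, []).append(version)

    def current(self, stage="production"):
        history = self._history.get(stage, [])
        return history[-1] if history else None

    def rollback(self, stage="production"):
        history = self._history.get(stage, [])
        if len(history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        history.pop()  # discard the bad deploy
        return history[-1]
```

Real registries add artifact storage, approval gates, and audit metadata, but the promote/current/rollback state machine is the core idea the listed tools automate.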

Benefits

  • Salary range of $145,000 to $165,000.
  • Market-based, data-driven compensation approach with biannual review.
  • Permanent team members are eligible for a broader benefits package.
  • Fully remote option available for select positions.
  • Offices in Boston, Vancouver (BC), Chicago, and Vancouver (WA).
  • Inclusive, equal opportunity workplace with accommodations available during the recruitment process.

Interested in this position?

Apply directly on the company website

Similar Roles

Director, Machine Learning Engineering - Surfaces Foundation

Spotify · Media

Spotify is hiring a Director of Machine Learning Engineering to lead the Surfaces Foundation team in building the platform systems that power personalized recommendations across its consumer audio experience.

Machine Learning

Machine Learning Principal Solutions Architect

phData · 251-1K · IT Services

phData is hiring a Principal Solutions Architect to lead delivery of AI/ML solutions for enterprise clients while also driving strategic account growth and client engagement.

AWS · Azure · Databricks · dbt · Django · Docker · Flask · GCP · Java · Keras · Kubernetes · Machine Learning · MLflow · Python · SageMaker · Scala · Scikit-learn · Snowflake · Spring · TensorFlow · Vertex AI

Senior AI/ML Engineer (LLM, GenAI, and Agentic Systems)

Astro Sirens Staffing and Consulting · IT services, staffing, and consulting

Astro Sirens is hiring a Senior AI/ML Engineer to design and deploy advanced AI solutions for U.S. company projects, with a focus on modern large language models, generative AI, and intelligent agent systems.

Apache Spark · AWS · Azure · CI/CD · Deep Learning · Docker · GCP · Generative AI · Hugging Face · Kubernetes · Machine Learning · Microservices · MLOps · Python · SQL

Machine Learning Engineering Manager - Personalization

Spotify · Media

Spotify’s Personalization team is hiring a Machine Learning Engineering Manager in New York or Boston to lead safety-focused ML systems for recommendations, search, and emerging AI experiences.

Generative AI · LLM · Machine Learning
9 hours, 16 minutes ago
