Prolific

Prolific is a platform that enables researchers to quickly find trustworthy research participants. With a pool of over 120,000 active and verified participants, Prolific ensures high-quality responses through continuous monitoring and engagement. The p...

Professional Services
51-250
Founded 1997
$0M raised

Description

  • Design and maintain scalable cloud infrastructure using Infrastructure as Code on GCP and AWS with Terraform.
  • Manage GPU and TPU resource allocation for model training, fine-tuning, and interactive notebooks.
  • Build internal services and CLI tools that improve the AI team’s developer experience.
  • Design CI/CD/CT pipelines for machine learning and continuous training workflows.
  • Implement and support model-serving patterns and manage service deployments to Kubernetes.
  • Manage and optimize vector databases and embedding pipelines for retrieval-augmented generation systems.
  • Improve model inference performance by reducing latency and increasing throughput.
  • Solve scaling bottlenecks for serverless and containerized model deployments.
  • Optimize GPU utilization and cloud spend while maintaining performance.
  • Monitor model drift, data skew, resource utilization, and LLM service health, including prompt and agent tracing.

Requirements

  • 5+ years of experience with cloud infrastructure and Infrastructure as Code.
  • Experience across the ML and LLM lifecycle, including training, hosting, optimization, and observability.
  • Experience working closely with researchers and data scientists to move experiments into production.
  • Strong understanding of machine learning fundamentals and the modern GenAI stack.
  • Experience with GCP and/or AWS is required.
  • Experience with Terraform is required.
  • Experience with CI/CD and ML pipeline tooling such as GitHub Actions, MLflow, or Vertex AI Pipelines is preferred.
  • Experience with Kubernetes and model-serving patterns is preferred.
  • Experience with vector databases and embedding pipelines for RAG systems is preferred.
  • Familiarity with LLM tracing and production monitoring is preferred.

Benefits

  • Competitive salary.
  • Benefits package.
  • Remote working.
  • Mission-driven culture.
  • Access to a unique human data platform.
  • Opportunities for groundbreaking research.

Interested in this position?

Apply directly on the company website

Similar Roles

Staff Machine Learning Engineer

Samsara 1K-5K IT Services

Samsara is hiring a Staff Machine Learning Engineer to develop end-to-end AI solutions and core ML infrastructure for physical operations customers using large-scale sensor, video, diagnostic, and text data.

Apache Spark C++ Computer Vision Machine Learning Python Rust
20 hours, 33 minutes ago

Senior Intelligent Process Automation Engineer (IPA)

GlobalDev Tech 51-250 Internet Software & Services

Senior Intelligent Process Automation Engineer at a transportation and logistics company, responsible for designing integration-first automation solutions that connect multiple systems into end-to-end workflows and support intelligent document processing.

Docker Kubernetes Machine Learning Microservices NLP REST API
20 hours, 34 minutes ago

Principal Machine Learning Engineer

Qodea is seeking a Principal Machine Learning Engineer to lead the architecture and evolution of large-scale data and ML systems that improve data quality, enrichment, and intelligent product linking within its Knowledge domain.

CI/CD Docker GCP Go GraphQL Kubernetes LLM Machine Learning NLP Node.js Python Redis REST API Scala SQL
20 hours, 48 minutes ago

Voice AI Engineer (Mid/Senior)

Lucidya 51-250 Media

Lucidya is hiring an AI Engineer specializing in Voice Agents to design, build, and deploy production-ready voice AI solutions for enterprise customer experience use cases across the MENA region.

AWS Azure GCP gRPC Java LLM MLOps NLP Python WebSockets
21 hours, 4 minutes ago
