Quanata

Quanata is a San Francisco-based software development company specializing in context-based insurance solutions. It leverages AI, real-time telematics, and data science to sharpen risk prediction, promote safer driving behaviors, and build modern insurance products, aiming to transform the insurance industry through positive behaviors and better digital experiences. The company develops software platforms and tools for insurers, including AI-powered risk assessment, telematics for driver monitoring, and claims solutions that optimize and automate processes. It also focuses on customer engagement through personalized products and retention tools, supporting insurtech modernization with big-data analytics and cloud-native platforms. With a team of around 26 professionals, Quanata draws on Silicon Valley talent to drive innovation in the insurance sector.

Industry: Information Technology & Services
Company size: 201-500 employees

Description

  • Operationalize data science solutions that support underwriting, pricing, claims routing, and marketing products.
  • Design and build machine learning pipelines using AWS services, MLflow, and Snowflake.
  • Stand up and operate a shared feature store supporting both batch and real-time feature retrieval.
  • Own real-time inference services and manage low-latency endpoint deployments on SageMaker or EKS.
  • Implement testing strategies for machine learning systems, including unit, integration, data validation, model validation, and performance tests.
  • Build and maintain CI/CD pipelines for machine learning platform quality and release automation.
  • Manage model and data versioning, experiment tracking, and reproducibility to support ML governance.
  • Implement event-driven orchestration for automated retraining, evaluation, and redeployment.
  • Monitor production models for performance, drift, and data quality, and drive automated remediation.
  • Partner with data engineers and data scientists to improve the model development and delivery platform.
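The monitoring responsibility above is commonly implemented with a drift statistic such as the Population Stability Index (PSI), which compares a feature's production distribution against a training-time baseline. A minimal, dependency-free sketch of the idea follows; the function name, binning scheme, and epsilon are illustrative assumptions, not Quanata's actual tooling:

```python
import math

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline ("expected") sample and a production
    ("actual") sample of one numeric feature.

    Bins are equal-width over the baseline's range; production values
    outside that range are clipped into the edge bins. A small epsilon
    keeps empty bins from producing log(0).
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant baseline

    def bin_fractions(values):
        counts = [0] * n_bins
        for v in values:
            idx = int((v - lo) / width)
            counts[min(max(idx, 0), n_bins - 1)] += 1
        eps = 1e-6
        return [max(c / len(values), eps) for c in counts]

    base = bin_fractions(expected)
    prod = bin_fractions(actual)
    # PSI term per bin: (actual% - expected%) * ln(actual% / expected%)
    return sum((p - b) * math.log(p / b) for b, p in zip(base, prod))
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as significant drift worth triggering the automated remediation the posting mentions; the actual thresholds would be a team decision.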

Requirements

  • Bachelor’s degree or equivalent relevant experience.
  • Eight years of industry experience (or equivalent), including at least two years focused on MLOps and two years in software engineering.
  • Strong experience with Python and Docker.
  • Familiarity with Bash scripting and build tools such as Bazel.
  • Advanced proficiency with infrastructure-as-code principles and tools such as Terraform.
  • Demonstrated experience designing, deploying, and managing scalable, resilient MLOps solutions on AWS.
  • Applied expertise across the end-to-end machine learning lifecycle, including data ingestion, preprocessing, training, deployment, and production monitoring.
  • Excellent written and verbal communication skills with a strong collaborative mindset.
  • Experience designing and implementing workflows with AWS Step Functions.
  • Experience with CI/CD for machine learning systems, including automating model training, validation, and deployment.
  • Bonus points for experience with large-scale distributed systems, complex APIs, or platform-level software engineering.
  • Bonus points for Snowflake advanced ML capabilities such as Snowpark, UDFs for in-database scoring, or integration with external training and serving platforms.
  • Bonus points for experience in insurance or another highly regulated environment.
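The Step Functions and ML CI/CD items above describe an automated retrain → evaluate → gate → deploy loop. In production that control flow would live in an AWS state machine; the sketch below models it locally in plain Python with injected callables standing in for the real pipeline steps. All names and the threshold are hypothetical, not the company's actual pipeline:

```python
def run_retraining_workflow(train, evaluate, deploy, min_score):
    """Retrain -> evaluate -> quality gate -> deploy-or-reject.

    `train`, `evaluate`, and `deploy` are injected callables standing in
    for real workflow steps (e.g. a training job and an endpoint update);
    the gate blocks deployment of any model that fails the evaluation
    threshold, mirroring a Choice state in a state machine.
    """
    model = train()
    score = evaluate(model)
    if score >= min_score:
        deploy(model)
        return {"status": "deployed", "score": score}
    return {"status": "rejected", "score": score}


# Toy run: a stub "model" that scores 0.91 against a 0.85 gate.
result = run_retraining_workflow(
    train=lambda: {"weights": [0.1, 0.2]},
    evaluate=lambda model: 0.91,
    deploy=lambda model: None,
    min_score=0.85,
)
```

Keeping the steps as injected callables makes the gate logic unit-testable without cloud infrastructure, which is one way to satisfy the posting's testing-strategy bullet for orchestration code.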

Benefits

  • Salary range of $213,000 to $300,000.
  • Medical, dental, vision, life insurance, and supplemental income plans for employees and dependents.
  • Headspace app subscription and a monthly wellness allowance.
  • 401(k) plan with company match.
  • One-time $2,000 work-from-home equipment stipend plus a fully provisioned MacBook Pro.
  • Four weeks of PTO in the first year.
  • Twelve weeks of fully paid parental leave for both birthing and non-birthing parents.
  • Up to $5,000 per year for professional learning, continuing education, and career development, plus LinkedIn Learning and BetterUp coaching.
  • Remote-first work environment with the option to work from anywhere in the U.S., excluding U.S. territories.
  • Core collaboration hours from 9 AM to 2 PM Pacific time.

Interested in this position?

Apply directly on the company website.
