PointClickCare

PointClickCare provides a leading cloud-based healthcare software platform that enables long-term and post-acute care providers to effectively manage the complete lifecycle of resident care while enhancing operational efficiency and improving resident ...

Health Care Providers & Services
1K-5K
Founded 2000
$232M raised

Description

  • Lead and guide the design and implementation of scalable streaming data pipelines and real-time data solutions.
  • Engineer and optimize streaming workloads using frameworks such as Apache Kafka, Flink, and Spark Streaming.
  • Enhance and implement batch and real-time data solutions as part of ongoing modernization efforts and adoption of event-driven architectures.
  • Collaborate cross-functionally with product, analytics, and AI teams to ensure data supports business goals and analytics needs.
  • Embed data quality into processing pipelines by defining schema contracts, implementing transformation tests and data assertions, and enforcing backward-compatible schema evolution.
  • Establish and maintain observability for data pipelines by implementing metrics, logging, distributed tracing, SLAs/SLOs, alerting, and dashboards for proactive monitoring and rapid incident response.
  • Drive adoption of best practices in data governance, CI/CD integration, performance tuning, and operational excellence for streaming and batch environments.
  • Mentor and provide technical leadership to engineers, foster a culture of quality through peer reviews, and evangelize modern data practices across teams.
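The data-quality responsibility above (schema contracts, data assertions, backward-compatible schema evolution) can be sketched in miniature. This is a hedged, dependency-free Python illustration, not PointClickCare's actual implementation; the topic name, field names, and contract shape are all hypothetical, and a real pipeline would use a schema registry (e.g. Confluent Schema Registry with Avro) rather than plain dicts.

```python
# Hypothetical schema contract for a "resident_event" topic (illustrative only).
CONTRACT_V1 = {"event_id": str, "facility_id": str, "ts": int}

# v2 adds a field but preserves every v1 field and type, so existing
# consumers can still read v2 records -- backward-compatible evolution.
CONTRACT_V2 = {**CONTRACT_V1, "source": str}


def validate(record: dict, contract: dict) -> list[str]:
    """Return a list of data-assertion failures for one record."""
    errors = []
    for field, expected_type in contract.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for {field}: {type(record[field]).__name__}")
    return errors


def is_backward_compatible(old: dict, new: dict) -> bool:
    """A new schema may add fields but must keep every old field and its type."""
    return all(new.get(field) == old[field] for field in old)
```

In practice these checks would run inside the pipeline (as transformation tests in dbt, expectations in Great Expectations, or registry-enforced compatibility rules in Kafka) rather than as ad hoc functions, but the contract-and-assertion idea is the same.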

Requirements

  • 10+ years of professional experience in software or data engineering, including a minimum of 4 years focused on streaming and real-time data systems.
  • Proven experience driving technical direction and mentoring engineers while delivering complex, high-scale solutions as a hands-on contributor.
  • Deep expertise with streaming technologies such as Apache Kafka, Flink, and Spark Streaming.
  • Strong understanding and hands-on experience with event-driven architectures and building resilient, low-latency distributed systems.
  • Practical experience with cloud platforms (AWS, Azure, or GCP) and containerized deployments for data workloads.
  • Fluency in data quality practices and CI/CD integration, including schema management and automated testing and validation frameworks (e.g., dbt, Great Expectations).
  • Operational excellence in observability with experience implementing metrics, logging, tracing, and alerting for data pipelines.
  • Experience with Lakehouse architectures and related technologies, including Databricks, Azure ADLS Gen2, and Apache Hudi.
  • Solid foundation in data governance, performance optimization, and scalability across batch and streaming environments.
  • Strong collaboration, communication, analytical, and problem-solving skills; ability to learn quickly, work autonomously, and leverage AI tools to accelerate development.
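The streaming expertise listed above centers on windowed, event-time processing of the kind Flink and Spark Structured Streaming provide. As a toy, self-contained sketch of that idea (not a real Flink/Spark job; event tuples and the window size are made up for illustration), a tumbling-window count in plain Python looks like this:

```python
from collections import defaultdict


def tumbling_window_counts(events, window_size_s=60):
    """Count events per key in fixed (tumbling) event-time windows.

    `events` is an iterable of (event_time_seconds, key) tuples -- a
    stand-in for records consumed from a Kafka topic. Each event lands in
    exactly one window, identified by its start timestamp.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size_s) * window_size_s
        counts[(window_start, key)] += 1
    return dict(counts)
```

A production engine adds what this sketch omits: out-of-order handling via watermarks, state checkpointing for fault tolerance, and distributed execution, which is precisely the expertise the role asks for.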

Benefits

  • CAD base salary range $156,000–$174,000 plus bonus and benefits.
  • Benefits starting from day 1, including retirement plan matching.
  • Flexible paid time off, plus wellness support programs and resources.
  • Parental and caregiver leaves, fertility and adoption support.
  • Continuous development support program and opportunities for growth.
  • Employee Assistance Program, allyship and inclusion communities, and employee recognition programs.
  • Flexible work options (remote or hybrid) with periodic in-office events; accommodations available on request.

Interested in this position?

Apply directly on the company website

