Trinetix

Trinetix provides comprehensive software product engineering and design services to Fortune 500 companies and emerging brands, enabling them to innovate and enhance their digital operations for sustainable growth in a competitive landscape.

Internet Software & Services
251–1,000 employees
Founded 2011

Description

  • Design and develop production ETL/ELT pipelines integrating Snowflake, Snowpipe, internal systems, Salesforce, SharePoint, and DocuSign.
  • Build and maintain dimensional data models (Kimball methodology, SCDs) in Snowflake using dbt and implement data quality checks.
  • Implement CDC patterns for near real-time data synchronization across systems.
  • Manage and evolve the data platform spanning an S3 data lake (Apache Iceberg) and a Snowflake data warehouse.
  • Design and maintain a Medallion-architecture data lake in Snowflake.
  • Prepare and publish ML features using SageMaker Feature Store to support models in production.
  • Develop analytical dashboards and reports in Power BI for business stakeholders.
  • Partner with engineering and business stakeholders to take forecasting, asset cascading, contract analysis, and risk detection models from concept to production on AWS.

Requirements

  • 5+ years of experience in data analysis or data engineering.
  • 3+ years of hands-on experience building and supporting production ETL/ELT pipelines.
  • Advanced SQL skills including CTEs, window functions, and performance optimization.
  • Strong Python skills, including pandas and API integrations.
  • Proven experience with Snowflake (schema design, Snowpipe, Streams, Tasks, performance tuning, and data quality).
  • Solid knowledge of AWS services: S3, Lambda, EventBridge, IAM, CloudWatch, and Step Functions.
  • Strong understanding of dimensional data modeling (Kimball methodology, SCDs).
  • Experience working with enterprise systems such as ERP or CRM.
  • Nice-to-have: experience with data quality frameworks (Great Expectations, Deequ), CDC tools/concepts (AWS DMS, Kafka, Debezium), data lake technologies (Apache Iceberg, Parquet), SageMaker Feature Store exposure, and document processing tools like Amazon Textract.
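The advanced-SQL requirement names window functions specifically. For candidates unfamiliar with the term, the semantics of `SUM(amount) OVER (PARTITION BY account ORDER BY day)` can be mirrored in plain Python (the ledger rows and `running_total` helper below are illustrative assumptions, not from the posting):

```python
from collections import defaultdict

# Hypothetical ledger rows: (account, day, amount).
rows = [
    ("a", 1, 100), ("a", 2, 50), ("b", 1, 70), ("a", 3, 25), ("b", 2, 30),
]

def running_total(rows):
    """Mirror SUM(amount) OVER (PARTITION BY account ORDER BY day)."""
    totals = defaultdict(int)
    out = []
    # Order within each partition, then accumulate per account.
    for account, day, amount in sorted(rows, key=lambda r: (r[0], r[1])):
        totals[account] += amount
        out.append((account, day, totals[account]))
    return out

print(running_total(rows))
# → [('a', 1, 100), ('a', 2, 150), ('a', 3, 175), ('b', 1, 70), ('b', 2, 100)]
```

Unlike `GROUP BY`, the window form keeps every input row and attaches the per-partition running aggregate to each one, which is what makes it useful for cumulative metrics and deduplication patterns.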

Benefits

  • Continuous learning and career growth opportunities.
  • Professional training and English/Spanish language classes.
  • Comprehensive medical insurance.
  • Mental health support.
  • Specialized benefits program with compensation for fitness activities, hobbies, pet care, and more.
  • Flexible working hours.
  • Inclusive and supportive culture.


