Software Engineer I - Data

Full-time
Junior
Software Development

Precision AQ

Precision AQ is a life sciences commercialization and healthcare consulting company that helps biopharma companies bring therapies to market and improve patient access. It describes itself as a commercialization partner for biopharma innovators, market access teams, and brand strategists, with expertise spanning access strategy, HEOR, medical communications, market access, data and analytics, branding, investor relations, and omnichannel/product solutions.

Industry: Business Consulting and Services
Company size: 1,001–5,000 employees

Description

  • Prepare, transform, and structure client-specific data according to requirements while adhering to MLR guidelines.
  • Design, generate, and maintain baseline and customized messaging datasets for demos, frameworks, and production use.
  • Process and integrate data from third-party vendors using business rules and transformation logic to produce client-ready datasets.
  • Represent the Data Operations / Data Engineering function on active client and product initiatives.
  • Participate in daily stand-ups to review data readiness, defects, transformations, and deployments.
  • Follow change-management procedures for data updates, transformations, and environment promotions.
  • Document data pipelines, transformation logic, and standard operating procedures.
  • Create and maintain wiki articles for reusable processes, transformation patterns, and troubleshooting guidance.
  • Coordinate with development, product, and front-end teams to optimize data structures for UI rendering and configurability.
  • Support data deployments across Development, UAT, and Production using Octopus Deploy and controlled release processes.
  • Manage and validate data flows from Goliath servers to Core servers and into Salesforce through stored procedures and Boomi ETL workflows.
  • Set up and maintain data configurations for new client implementations, including environment-specific variations and vendor inputs.
  • Assist in developing automated data validation checks and regression test datasets.
  • Measure and track data quality and operational KPIs such as transformation accuracy, defect leakage, re-processing rates, and turnaround times.
  • Support production issue investigation, root cause analysis, escalation, and client communication in partnership with infrastructure support teams.
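The "automated data validation checks" bullet above can be pictured with a small sketch. This is purely illustrative, assuming an in-memory SQLite database; the `messages` table and its column names are invented, and the posting does not specify any implementation:

```python
import sqlite3

def validate_messages(conn):
    """Run basic data-quality checks on a hypothetical messaging table.

    Returns a list of defect descriptions; an empty list means the
    dataset passed.
    """
    defects = []

    # Check 1: required fields must not be NULL.
    nulls = conn.execute(
        "SELECT COUNT(*) FROM messages "
        "WHERE message_text IS NULL OR client_id IS NULL"
    ).fetchone()[0]
    if nulls:
        defects.append(f"{nulls} row(s) with missing required fields")

    # Check 2: no duplicate message IDs within a client.
    dupes = conn.execute(
        "SELECT COUNT(*) FROM ("
        "  SELECT client_id, message_id FROM messages"
        "  GROUP BY client_id, message_id HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    if dupes:
        defects.append(f"{dupes} duplicate (client_id, message_id) pair(s)")

    return defects

# Build a tiny in-memory dataset to exercise the checks.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (client_id TEXT, message_id TEXT, message_text TEXT)"
)
conn.executemany(
    "INSERT INTO messages VALUES (?, ?, ?)",
    [("acme", "m1", "Efficacy claim A"),
     ("acme", "m1", "Efficacy claim A (dupe)"),
     ("acme", "m2", None)],
)
print(validate_messages(conn))
```

In practice a check like this would run as part of a release gate, with defects fed into the quality KPIs (defect leakage, re-processing rates) mentioned above.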

Requirements

  • Bachelor’s degree in Technology, Engineering, Data Science, Computer Science, or a related discipline.
  • 1–3 years of experience in a data-centric role such as Data Engineer, Data Analyst, Application Support (Data), or ETL/Integration Engineer.
  • Strong working knowledge of SQL, including complex queries, stored procedures, and data validation logic.
  • Hands-on experience with data transformation and processing using SQL, scripting, or backend technologies such as .NET / C#.
  • Experience working with relational databases, with SQL Server preferred.
  • Understanding of Agile development methodologies and sprint-based delivery.
  • Experience with multi-environment data handling across Dev, UAT, and Prod.
  • Operational knowledge of Azure DevOps, Team Foundation Server, Octopus Deploy, or similar tools.
  • Basic understanding of HTML tags and structured content for front-end rendering support.
  • Exposure to continuous integration and deployment concepts for data pipelines.
  • Experience with Salesforce or Veeva CRM data models is a plus.
  • Working knowledge of US Healthcare or Market Access data concepts is a strong advantage.
  • Familiarity with Azure hosting components and data-related services.
  • Ability to analyze and reconcile large datasets from multiple vendors and sources.
  • Experience with ETL/ELT tools such as Boomi, SSIS, or similar integration platforms.
  • Experience working with complex data models supporting front-end applications.
  • Knowledge of C#, .NET, or scripting used for data processing and transformation logic.
  • Experience supporting Salesforce data integrations and environment-specific configurations.
  • Understanding of healthcare data standards and structures such as HL7 or FHIR (preferred).
  • Experience designing scalable, reusable, enterprise-grade data solutions.
  • Exposure to workflow engines, message-driven architectures, message queues, or event-based data processing.
  • Strong analytical and problem-solving skills with attention to detail and data quality.
  • May require domestic and/or international travel, including overnight stays, up to 5%.

To apply, visit the company website directly.
