Data Engineer, Azure - Remote, Latin America

Full-time
Mid Level
Software Development
Bluelight Consulting

Bluelight Consulting is a leading nearshore software development company that provides access to highly skilled nearshore tech talent working in US time zones. It offers services such as Nearshore Staffing, Cloud consulting, Cloud migration strategies, and ...

Internet Software & Services
11-50
Founded 2015

Description

  • Develop and maintain ETL processes using Python (PySpark) in Azure Synapse Analytics Notebooks and Pipelines.
  • Extract, transform, and load data from REST APIs, SQL database tables, and CSV files.
  • Design and build data warehouse structures using star schemas, facts, and dimensions in an MPP SQL pool.
  • Optimize Azure Synapse Analytics notebooks and pipelines for scalability, performance, and SLA attainment.
  • Contribute to data fabric capabilities such as data lakes, lakehouses, delta lakes, and data cataloging.
  • Collaborate with data architects to define data models and schemas aligned with business requirements.
  • Implement data quality checks and validation processes to ensure accurate and consistent data.
  • Monitor ETL jobs, troubleshoot issues, and resolve pipeline performance bottlenecks.
  • Maintain documentation for data flows, transformations, and ETL processes.
  • Work with cross-functional teams on data requirements, support data initiatives, and ensure security and compliance standards are met.
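The data quality checks and validation processes mentioned above might look something like the following minimal sketch. It uses only the Python standard library so it stands alone; in the role itself, equivalent rules would typically be expressed as PySpark DataFrame filters in a Synapse notebook. The field names and rules (`order_id`, `amount`) are illustrative assumptions, not taken from the posting:

```python
import csv
import io

def validate_row(row):
    """Return a list of data quality issues found in one record.

    The rules here are hypothetical examples: a required key
    and a numeric, non-negative measure.
    """
    issues = []
    if not row.get("order_id"):
        issues.append("missing order_id")
    try:
        if float(row.get("amount", "")) < 0:
            issues.append("negative amount")
    except ValueError:
        issues.append("non-numeric amount")
    return issues

def run_quality_checks(csv_text):
    """Split incoming CSV rows into valid records and rejects with reasons."""
    valid, rejects = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        issues = validate_row(row)
        (rejects if issues else valid).append((row, issues))
    return valid, rejects

sample = "order_id,amount\nA1,10.5\n,3.0\nA3,-2\n"
valid, rejects = run_quality_checks(sample)
# valid keeps A1; rejects capture the missing-id and negative-amount rows
```

Routing rejects to a quarantine table with their reasons, rather than silently dropping them, is what makes checks like these auditable and supports the monitoring and troubleshooting duties listed above.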

Requirements

  • Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
  • Certifications related to data engineering or data science, such as Azure Data Engineer, are a plus.
  • Proven ETL data engineering experience with strong Python (PySpark) skills.
  • Experience extracting, transforming, and loading data from REST APIs, SQL database tables, and CSV files.
  • Proficiency with Azure Synapse Analytics, including Notebooks, Pipelines, Linked Services, and Azure Key Vault.
  • Ability to write complex SQL queries and optimize performance using SparkSQL and MS SQL.
  • Knowledge of data integration best practices and tools.
  • Experience with version control systems such as Git and Azure DevOps.
  • Strong problem-solving and analytical skills with keen attention to detail.
  • Excellent verbal and written communication skills, with the ability to collaborate effectively amid changing priorities.
  • Familiarity with big data technologies, machine learning, and data analysis is preferred.
  • Experience with data visualization tools such as Power BI or Tableau, and with Agile methodologies, is a plus.
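The star-schema querying skills the role calls for (a fact table joined to its dimensions, with an aggregated measure) can be sketched with Python's built-in sqlite3 module. Table and column names here are invented for illustration; in practice the same shape of query would run against a Synapse dedicated SQL pool or via SparkSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical star schema: one fact table referencing two dimensions.
cur.executescript("""
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        date_key INTEGER REFERENCES dim_date(date_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        amount REAL
    );
    INSERT INTO dim_date VALUES (1, 2024), (2, 2025);
    INSERT INTO dim_product VALUES (10, 'widget'), (11, 'gadget');
    INSERT INTO fact_sales VALUES (1, 10, 100.0), (2, 10, 50.0), (2, 11, 25.0);
""")

# The canonical star-schema query: join facts to dimensions, aggregate a measure.
cur.execute("""
    SELECT d.year, p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, p.name
    ORDER BY d.year, p.name
""")
rows = cur.fetchall()
# rows == [(2024, 'widget', 100.0), (2025, 'gadget', 25.0), (2025, 'widget', 50.0)]
```

Keeping measures in narrow fact tables keyed to small dimension tables is what lets an MPP SQL pool distribute the fact table and replicate the dimensions, which is the performance pattern the Description's warehouse-design bullet points at.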

Benefits

  • Competitive salary with bonuses and performance-based salary increases.
  • Generous paid time off policy.
  • Flexible working hours.
  • Remote work.
  • Continuing education, training, and conference opportunities.
  • Company-sponsored coursework, exams, and certifications.

Interested in this position?

Apply directly on the company website

