Data Engineer, Azure - Remote, Latin America

Full-time
Mid Level
Software Development
Bluelight Consulting

Bluelight Consulting is a leading Nearshore Software Development company that provides access to highly skilled Nearshore Tech Talent in the US timezone. They offer services such as Nearshore Staffing, Cloud consulting, Cloud migration strategies, and ...

Internet Software & Services
11-50
Founded 2015

Description

  • Develop and maintain ETL processes using Python (PySpark) in Azure Synapse Analytics notebooks and pipelines.
  • Design and build data storage structures in an MPP SQL pool using data warehousing concepts such as star schemas, facts, and dimensions.
  • Extract data from REST APIs, SQL database tables, and CSV files.
  • Design and optimize Azure Synapse Analytics notebooks and pipelines for scalability and performance.
  • Contribute to data fabric initiatives including data lakes, lakehouses, delta lakes, and data cataloging.
  • Collaborate with data architects to create data models and schemas aligned with business requirements.
  • Implement data quality checks and validation processes to ensure accuracy and consistency.
  • Identify and resolve performance bottlenecks and troubleshoot ETL jobs to meet SLAs.
  • Maintain documentation for ETL processes, data flows, and data transformations.
  • Work with cross-functional teams on data-related initiatives and ensure security, governance, and privacy compliance.
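
As a minimal illustration of the warehousing work described above (not part of the original posting; all data and names here are hypothetical, and plain-Python stand-ins are used instead of PySpark so the sketch stays self-contained), splitting a flat extract into a star schema's dimension and fact tables might look like:

```python
import csv
import io

# Hypothetical raw extract; in practice this would arrive from a REST API,
# a SQL table, or a CSV file, as the posting describes.
raw_csv = """order_id,customer,amount
1,Acme,120.50
2,Globex,75.00
3,Acme,30.25
"""

def build_star_schema(csv_text):
    """Split flat rows into a customer dimension and an orders fact table."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))

    # Dimension table: assign one surrogate key per distinct customer.
    dim_customer = {}
    for row in rows:
        dim_customer.setdefault(row["customer"], len(dim_customer) + 1)

    # Fact table: one row per order, referencing the dimension by surrogate key.
    fact_orders = [
        {
            "order_id": int(row["order_id"]),
            "customer_key": dim_customer[row["customer"]],
            "amount": float(row["amount"]),
        }
        for row in rows
    ]
    return dim_customer, fact_orders

dim, fact = build_star_schema(raw_csv)
print(dim)        # {'Acme': 1, 'Globex': 2}
print(len(fact))  # 3
```

In a Synapse notebook the same split would typically be expressed with PySpark DataFrames and written to dedicated SQL pool tables, but the dimension/fact separation is the same idea.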

Requirements

  • Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
  • Certification related to data engineering or data science, such as Azure Data Engineer, is a plus.
  • Proven experience in ETL data engineering with strong Python (PySpark) expertise.
  • Experience extracting, transforming, and loading data from REST APIs, SQL database tables, and CSV files.
  • Proficiency with Azure Synapse Analytics resources, including Notebooks, Pipelines, Linked Services, and Azure Key Vault.
  • Ability to write complex SQL queries and optimize query performance using SparkSQL and MS SQL.
  • Knowledge of data integration best practices and tools.
  • Experience with version control systems such as Git and Azure DevOps.
  • Strong problem-solving and analytical skills with attention to detail.
  • Excellent verbal and written communication skills and ability to work in a team with shifting priorities.
  • Familiarity with big data technologies, machine learning, and data analysis is preferred.
  • Experience with data visualization tools such as Power BI or Tableau is a plus.
  • Experience with Agile methodologies is a plus.

Benefits

  • Competitive salary and bonuses, including performance-based salary increases.
  • Generous paid time off policy.
  • Flexible working hours.
  • Remote work arrangement.
  • Continuing education, training, and conference opportunities.
  • Company-sponsored coursework, exams, and certifications.
