Data Engineer, Azure - Remote, Latin America

Full-time
Mid Level
Software Development
Bluelight Consulting

Bluelight Consulting is a leading nearshore software development company that provides access to highly skilled tech talent working in US time zones. Its services include nearshore staffing, cloud consulting, cloud migration strategies, and ...

Internet Software & Services
11-50
Founded 2015

Description

  • Develop and maintain ETL processes using Python (PySpark) in Azure Synapse Analytics notebooks and pipelines.
  • Design and build data warehousing structures using star schemas, facts, and dimensions in an MPP SQL pool.
  • Extract and integrate data from REST APIs, SQL database tables, and CSV files.
  • Design and optimize Azure Synapse Analytics notebooks and pipelines for scalability and performance.
  • Contribute to data fabric capabilities such as data lakes, lakehouses, delta lakes, and data cataloging.
  • Collaborate with data architects to create data models and schemas aligned with business requirements.
  • Implement data quality checks and validation processes to ensure accuracy and consistency.
  • Identify and resolve performance bottlenecks and troubleshoot ETL jobs to meet SLAs.
  • Maintain documentation for ETL processes, data flows, and transformations.
  • Work with cross-functional teams to support data-related initiatives and ensure security and compliance standards are met.
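The star-schema work in the first bullets can be sketched in miniature. This is an illustrative, dependency-free Python version of the dimension/fact split (the real pipeline would run as PySpark in Synapse notebooks); the table names `dim_customer` and `fact_sales` and the sample data are assumptions for the example, not the company's actual schema.

```python
# Minimal sketch of a star-schema ETL: split raw order rows into a
# customer dimension (with surrogate keys) and a sales fact table.
# Spark-free on purpose -- this only shows the shape of the transform.
import csv
import io

# Hypothetical raw extract, standing in for a CSV file or REST API payload.
RAW_CSV = """order_id,customer,amount
1001,Acme,250.00
1002,Globex,99.50
1003,Acme,40.00
"""

def build_star_schema(raw_csv: str):
    """Return (dimension, facts): customers keyed by surrogate key,
    and fact rows referencing the dimension via that key."""
    dim_customer = {}  # natural key (name) -> surrogate key
    fact_sales = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        # Dimension load: assign a surrogate key on first sight of the customer.
        sk = dim_customer.setdefault(row["customer"], len(dim_customer) + 1)
        # Fact load: store measures keyed by the dimension's surrogate key.
        fact_sales.append({
            "order_id": int(row["order_id"]),
            "customer_sk": sk,
            "amount": float(row["amount"]),
        })
    return dim_customer, fact_sales

dim, facts = build_star_schema(RAW_CSV)
```

The same pattern scales to PySpark by replacing the loop with DataFrame joins against the dimension table.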

Requirements

  • Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
  • Certifications related to data engineering or data science, such as Azure Data Engineer, are a plus.
  • Proven experience in ETL data engineering using Python (PySpark).
  • Experience extracting, transforming, and loading data from REST APIs, SQL database tables, and CSV files.
  • Proficiency with Azure Synapse Analytics resources, including Notebooks, Pipelines, Linked Services, and Azure Key Vault.
  • Ability to write complex SQL queries and optimize query performance.
  • Experience working with both SparkSQL and MS SQL.
  • Knowledge of data integration best practices and tools.
  • Experience with version control systems such as Git and Azure DevOps.
  • Strong problem-solving, analytical, and communication skills with attention to detail.
  • Familiarity with big data technologies, machine learning, and data analysis is preferred.
  • Experience with data visualization tools such as Power BI or Tableau is a plus.
  • Experience with Agile methodologies is a plus.
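The SQL requirements above can be illustrated with a small example: a fact-to-dimension join with an aggregate, the kind of query run against a dedicated SQL pool. SQLite is used here purely as a stand-in so the snippet is self-contained; the tables, index, and data are assumptions for the example only.

```python
# Illustrative star-schema query: join fact to dimension and aggregate.
# SQLite stands in for the MPP SQL pool; the index on the join key is the
# kind of optimization the role's "optimize query performance" bullet implies.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_sk INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (order_id INTEGER, customer_sk INTEGER, amount REAL);
CREATE INDEX idx_fact_customer ON fact_sales (customer_sk);
INSERT INTO dim_customer VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO fact_sales VALUES (1001, 1, 250.0), (1002, 2, 99.5), (1003, 1, 40.0);
""")

# Total sales per customer -- a typical aggregate a BI tool would issue.
rows = conn.execute("""
    SELECT d.name, SUM(f.amount) AS total
    FROM fact_sales AS f
    JOIN dim_customer AS d ON d.customer_sk = f.customer_sk
    GROUP BY d.name
    ORDER BY total DESC
""").fetchall()
```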

Benefits

  • Competitive salary and bonuses, including performance-based salary increases.
  • Generous paid time off policy.
  • Flexible working hours.
  • Remote work.
  • Continuing education, training, and conference opportunities.
  • Company-sponsored coursework, exams, and certifications.
