Data Engineer (Databricks | PySpark | ADF | Airflow)
Location: Lisbon (Hybrid/Remote options depending on profile)
Type: Full-time, long-term
We are looking for an experienced Data Engineer to join our growing data team and contribute to the design, optimization, and operation of scalable data platforms. You will work with modern data technologies, supporting complex business use cases and ensuring high-performance, reliable data pipelines.
Key Responsibilities
Databricks & Spark
- Optimize long-running Spark jobs to improve performance and cost efficiency.
- Implement storage optimization using Delta Lake features such as Z-ordering and partitioning (see the sketch after this list).
- Manage and fine-tune Databricks cluster configurations.
- Install and manage libraries across environments.
- Apply and maintain Medallion Architecture (Bronze, Silver, Gold layers).
- Translate complex business requirements into efficient data processing logic.
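To give a flavour of the Delta Lake optimization work listed above, here is a minimal, illustrative sketch only; the table path, column name, and session configuration are hypothetical, and assume a Databricks-style environment with the delta-spark package available.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

# Spark session with Delta Lake support (preconfigured on Databricks clusters;
# the extra configs below are only needed when running elsewhere).
spark = (
    SparkSession.builder
    .appName("delta-optimization-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Hypothetical Silver-layer Delta table.
table = DeltaTable.forPath(spark, "/mnt/silver/events")

# Compact small files and Z-order by a frequently filtered column
# to reduce the data scanned by downstream queries.
table.optimize().executeZOrderBy("customer_id")

# Remove files no longer referenced by the table (default retention applies).
table.vacuum()
```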
PySpark & SQL
- Design efficient data transformations using PySpark.
- Apply partitioning and bucketing strategies for performance optimization.
- Handle skewed data and optimize large-scale joins.
- Develop complex joins, advanced SQL queries, and window functions (see the sketch after this list).
- Use broadcast joins and repartitioning where appropriate.
- Work with CTEs and structured/semi-structured data, including JSON formats.
- Perform detailed requirement analysis for complex business processes.
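As a rough sketch of the PySpark techniques named above (broadcast joins, repartitioning, window functions); the input paths, schemas, and column names are invented for illustration.

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("pyspark-sketch").getOrCreate()

# Hypothetical inputs: a large fact table of orders and a small country dimension.
orders = spark.read.parquet("/data/orders")        # order_id, customer_id, country_code, amount, order_ts
countries = spark.read.parquet("/data/countries")  # country_code, country_name

# Broadcast the small dimension so the join avoids shuffling the large side.
enriched = orders.join(F.broadcast(countries), "country_code")

# Window function: keep only each customer's most recent order.
w = Window.partitionBy("customer_id").orderBy(F.col("order_ts").desc())
latest_orders = (
    enriched
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Repartition on a well-distributed key before writing to keep file sizes even.
latest_orders.repartition("country_code").write.mode("overwrite").parquet("/data/latest_orders")
```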
Azure Data Factory (ADF)
- Develop, debug, and optimize ADF pipelines.
- Implement event-based triggers and incremental data loading strategies.
- Create and manage ADF pipelines and orchestrate multiple dependent pipelines.
- Design reusable and generic ADF pipelines to support multiple use cases.
Airflow
- Develop and maintain Airflow DAGs.
- Write custom operators and sensors.
- Implement dynamic DAGs.
- Handle task retries, failure scenarios, and recovery strategies.
- Use XComs for cross-task data sharing (see the sketch after this list).
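A minimal sketch of an Airflow DAG showing task retries and XCom-based data sharing through the TaskFlow API (assumes Airflow 2.4+; the DAG id, schedule, and task logic are purely illustrative).

```python
from datetime import datetime, timedelta

from airflow.decorators import dag, task


@dag(
    dag_id="example_xcom_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
)
def example_xcom_pipeline():
    @task
    def extract() -> dict:
        # The return value is pushed to XCom automatically.
        return {"rows": 1234, "source": "orders"}

    @task
    def load(stats: dict) -> None:
        # The argument is pulled from XCom behind the scenes.
        print(f"Loaded {stats['rows']} rows from {stats['source']}")

    load(extract())


example_xcom_pipeline()
```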
Requirements
- Proven experience as a Data Engineer working with large-scale data platforms.
- Strong hands-on experience with Databricks, Spark, and PySpark.
- Solid knowledge of Delta Lake and data lake best practices.
- Experience with Azure Data Factory for pipeline orchestration.
- Strong experience with Apache Airflow.
- Advanced SQL skills and strong understanding of data modeling.
- Ability to analyze complex business requirements and translate them into technical solutions.
- Strong problem-solving skills and attention to performance optimization.
Nice to Have
- Experience with cloud platforms (Azure preferred).
- Knowledge of CI/CD for data pipelines.
- Experience working in Agile environments.
- Familiarity with data governance and data quality frameworks.
What We Offer
- Opportunity to work on complex, high-impact data projects.
- Modern data stack and exposure to world-class technologies.
- Competitive salary and benefits package.
- Career growth, continuous learning, and technical development.
- Hybrid or remote working model (depending on location).
How to Apply
Send your CV to Subject: Data Engineer (Banking)
Job posting details
Company: Optiwisers
Location: Setúbal, Setubal, Portugal
Published: 2. 1. 2026