
Data Engineer
- Madrid
- Permanent
- Full-time

Responsibilities:
- Developing Data Solutions: Collaborate with internal teams and stakeholders to understand business requirements and develop effective, optimized, and scalable data solutions.
- Implementing Data Pipelines: Build robust data pipelines using Azure Data Factory, Databricks, Microsoft Fabric, or similar technologies, as well as advanced SQL and Python, with a focus on PySpark.
- Creating Data Models: Design and develop optimized data models to support advanced analytics and BI applications, ensuring data integrity and quality.
- Optimizing Performance: Identify and address performance bottlenecks in pipelines and databases, implementing optimization techniques to enhance efficiency and scalability.
- Automating Processes: Develop scripts and automation tools for deployment, monitoring, and continuous maintenance of data pipelines and processes, improving operational efficiency.
- Collaborating with Cross-Functional Teams: Work closely with development teams, data analysts, data scientists, and other stakeholders to understand their data needs and provide effective, business-driven solutions.
- Providing Technical Support: Offer technical guidance and support to team members in designing, developing, and maintaining cloud-based data solutions.
- Proactive Learning: Stay up to date with the latest trends and technologies in data engineering and cloud computing, participating in training and certification programs as needed.
- Contributing to a Data-Driven Culture: Promote a data-driven culture within the company, encouraging best practices in data management and usage for informed decision-making.
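To give a flavor of the ingestion, transformation, and enrichment work described above, here is a minimal, hypothetical sketch in plain Python (a stand-in for what a PySpark job would do at scale; all field names and values are illustrative, not part of the role):

```python
from datetime import datetime, timezone

def transform_orders(raw_rows):
    """Deduplicate by order_id, normalize fields, and enrich with a load timestamp.

    Illustrative only: a plain-Python stand-in for the kind of cleaning and
    enrichment step a PySpark pipeline would run on real data volumes.
    """
    seen = set()
    cleaned = []
    for row in raw_rows:
        order_id = row.get("order_id")
        if order_id is None or order_id in seen:
            continue  # drop malformed or duplicate records
        seen.add(order_id)
        cleaned.append({
            "order_id": order_id,
            "customer": str(row.get("customer", "")).strip().lower(),  # normalize
            "amount_eur": round(float(row.get("amount_eur", 0.0)), 2),
            "loaded_at": datetime.now(timezone.utc).isoformat(),  # enrichment: audit column
        })
    return cleaned

# Hypothetical input: one valid record, one duplicate, one malformed record.
raw = [
    {"order_id": 1, "customer": " Acme ", "amount_eur": "19.999"},
    {"order_id": 1, "customer": "Acme", "amount_eur": "19.999"},  # duplicate
    {"customer": "NoId", "amount_eur": "5"},                      # missing order_id
]
rows = transform_orders(raw)
```

The same dedupe/normalize/enrich pattern maps directly onto PySpark operations (`dropDuplicates`, column expressions, `withColumn`) when the data no longer fits on one machine.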

Requirements:
- Experience in building, optimizing, and maintaining efficient data pipelines using services such as Azure Data Factory, Databricks, BigQuery, AWS Glue, Fabric, dbt, or similar technologies for data ingestion, preprocessing, transformation, and enrichment. Experience with analytics tools (Tableau, Looker Studio, Power BI, etc.) is a plus.
- Demonstrable experience (preferably 5 years) in a similar role as a Data Engineer, ideally with a focus on public cloud platforms (AWS, GCP, and/or Azure).
- Deep knowledge of data lifecycle services and tools related to AI in at least one of the public clouds (AWS, GCP, and/or Azure).
- Experience working with scalable, high-performance big data architectures and Data Governance—preferably in a digital transformation context.
- Strong programming skills in languages such as SQL and Python (with PySpark and/or Pandas).
- Excellent problem-solving skills, good programming practices, and attention to detail.
- Ability to work independently and collaboratively in a dynamic and proactive environment.
- Relevant certifications in AWS, Azure, GCP, and/or Databricks will be highly valued in the selection process.
- Experience working on Generative AI projects is a plus.
- Experience in public cloud platforms (AWS, Azure, GCP).
- Strong soft skills, including oral and written communication.
- A proactive, transparent profile with leadership qualities.
- High level of spoken and written English.