Azure Data Engineer

Hourly rate: members only

Availability: members only

Willingness to travel: Anywhere in the world

Professional status: Employer

Last updated: March 28, 2025

Total work experience: 3 year(s)

Language skills: English

Professional summary

• Nearly 4 years of experience developing, analyzing, designing, and implementing Azure Data Engineering and Analytics solutions using Azure Data Factory, Databricks, Azure Synapse, ADL, Azure SQL, and Power BI

• Proficient in designing and implementing data integration solutions with Azure Data Factory, with expertise in orchestrating and automating complex ETL workflows, optimizing data pipelines, and ensuring seamless data movement across on-premises and cloud environments

• Experienced in using Snowflake, a leading cloud data warehousing platform, to efficiently manage and analyze large volumes of data, enabling data-driven decision-making and improving overall business intelligence

• Seasoned in designing and implementing end-to-end data pipelines: integrating data from diverse sources, transforming and cleaning data, and ensuring seamless data flow, resulting in improved data accuracy and timely insights for informed business decisions

• Accomplished in leveraging Apache Spark within Azure Databricks to process and analyze large datasets, enabling data-driven insights and supporting the development of machine learning models for enhanced business intelligence

• Skilled in architecting and implementing data solutions with Azure Synapse Analytics, with expertise in developing scalable data warehouses for efficient storage, processing, and analysis of vast datasets

• Proficient in using Informatica to enhance ETL processes, improving data integration efficiency and raising overall data quality by 15%, contributing to the successful implementation of Azure Data Engineering solutions

• Competent in leading and executing complex data migration projects, ensuring a seamless transition of data between systems, meticulous data validation, and minimal downtime, resulting in successful migrations with minimal disruption to business operations

• Versatile in leveraging Python and PySpark for scalable data processing and analysis, combining Python's scripting capabilities with PySpark's distributed computing framework to handle large datasets, optimize performance, and deliver actionable insights in complex data environments

• Worked in an Agile environment with good insight into Agile methodologies and Lean working techniques; participated in Agile ceremonies and Scrum meetings

Education: College/University in Computer Sciences

Language skills

English

Fluent