We're looking for someone who has these abilities and skills:

Required Skills and Abilities
- Strong background in software development, with experience ingesting, transforming, and storing data from large datasets using PySpark in Azure Databricks, and a solid grasp of distributed computing concepts.
- Hands-on experience designing and developing ETL pipelines with PySpark in Azure Databricks, with strong Python scripting skills (e.g., list comprehensions, dictionaries).
- Relevant experience and good proficiency in data warehousing concepts.
- Proficient in SQL and database design concepts.

Desired Skills and Abilities
- Hands-on experience and good proficiency with Delta tables and Delta file operations such as merge, insert overwrite, and partition overwrite.
- Hands-on experience with CI/CD in Azure DevOps/Harness, and with ADF/Stonebranch for orchestration.
- Knowledge of the Azure cloud computing platform, including Azure Synapse and ADLS.
- Knowledge of GitHub and build management.
- Exposure to any Informatica ETL tool is a plus.
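As a concrete illustration of the Python scripting skills called out above (list comprehensions and dictionary handling for transforming records), here is a minimal sketch; the record fields and values are hypothetical examples, not part of the role description:

```python
# Hypothetical sample records, as might appear mid-transformation in an ETL step.
records = [
    {"id": 1, "amount": "10.50", "region": "east"},
    {"id": 2, "amount": "3.25", "region": "west"},
    {"id": 3, "amount": "7.00", "region": "east"},
]

# List comprehension: convert string amounts to floats.
amounts = [float(r["amount"]) for r in records]

# Dictionary handling: accumulate a total amount per region.
totals = {}
for r in records:
    totals[r["region"]] = totals.get(r["region"], 0.0) + float(r["amount"])

print(amounts)  # [10.5, 3.25, 7.0]
print(totals)   # {'east': 17.5, 'west': 3.25}
```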
Employment Category:
Employment Type: Full time
Industry: Others
Role Category: Others
Functional Area: Not Applicable
Role/Responsibilities: Engineer - Data Warehousing