Domain Specialization: (cloud experience mandatory; preferably AWS or Azure)
Hadoop Ecosystem, Data Warehousing, Cloud Computing Platforms (Azure/AWS/GCP), Spark, NiFi, Kafka, Python, Business Intelligence
Role:
To provide best-fit architectural solutions for one or more projects; provide technology consultation; assist in defining the scope and sizing of work; develop Proof of Concept builds; and support opportunity identification and pursuit processes. Collaborate with teams to create and implement innovative, high-quality solutions, and lead and participate in pre-sales activities (at times) and pursuits focused on our clients' business needs. Location for this position is Pune. (Occasional travel to the U.S. may be required for client meetings.)
Skills required:
Basic:
Bachelor's or Master's degree in Computer Science, Computer Science & Engineering, or equivalent. One year of relevant work experience will also be considered in lieu of every year of education.
At least 10 years of experience working across, and understanding of, one or more ETL/Big Data tools and Cloud Computing Platforms.
Preferred:
At least 5 years of experience working in Big Data Technologies
At least 5 years of experience in Data warehousing
Strong understanding of and hands-on experience with the Big Data stack (HDFS, Sqoop, Hive, Java, etc.)
Big Data solution design and architecture
Design, sizing, and implementation of Big Data platforms
Deep understanding of the Cloudera and/or Hortonworks stack (Spark, installation and configuration, Navigator, Oozie, Ranger, etc.)
Experience extracting data from feeds into a Data Lake using Kafka and other open-source components
Understanding of data ingestion patterns and experience building pipelines
Experience configuring Azure or AWS components and managing data flows. Knowledge of Google Cloud Platform is a plus.
Experience working on production-grade projects with terabyte- to petabyte-scale data sets.
Keyskills: GCP, Spark, Kafka, Hadoop Ecosystem, Cloud Computing, AWS, NiFi, Data Warehousing, Azure