You will work closely with Software Engineers and ML Engineers to build data infrastructure that fuels the needs of multiple teams, systems and products
You will automate manual processes, optimize data delivery, and build the infrastructure required for efficient extraction, transformation and loading (ETL) of data for a wide variety of use cases using SQL/Spark (an illustrative sketch follows this list)
You will build stream processing pipelines and tools to support a wide variety of analytics and audit use cases
You will continuously evaluate relevant technologies, and influence and drive architecture and design discussions
You will work in a cross-functional team and collaborate with peers throughout the entire SDLC
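The snippet below is a minimal, illustrative sketch of the kind of SQL/Spark batch ETL work described in the responsibilities above, not a prescribed implementation; the S3 paths, column names, and aggregation are hypothetical placeholders.

```python
# Minimal sketch of a Spark batch ETL job (hypothetical paths and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-etl-sketch").getOrCreate()

# Extract: read raw events from a hypothetical S3 location
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Transform: aggregate daily activity per user
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "user_id")
    .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned output for downstream analytics consumers
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_user_activity/"
)
```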
What to Bring
BE/B.Tech/BS/MS/PhD in Computer Science or a related field (preferred)
6-10 years of work experience building data warehouse and BI systems
Strong Java skills
Experience in either Go or Python (nice to have)
Experience in Apache Spark, Hadoop, Redshift, Athena
Strong understanding of database and storage fundamentals
Experience with the AWS stack
Ability to create data-flow designs and write complex SQL/Spark-based transformations
Experience working on real-time streaming data pipelines using Spark Streaming or Storm (an illustrative sketch follows this list)
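The snippet below is a minimal sketch of the kind of streaming pipeline referenced above, using Spark Structured Streaming; the Kafka broker, topic, schema, and windowing parameters are hypothetical placeholders, and running it would also require the spark-sql-kafka connector package.

```python
# Minimal sketch of a Spark Structured Streaming audit pipeline
# (hypothetical Kafka broker, topic, and event schema).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("streaming-audit-sketch").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read a stream of JSON audit events from a hypothetical Kafka topic
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "audit-events")
    .load()
)

events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Count actions per 5-minute window, tolerating 10 minutes of late data
counts = (
    events
    .withWatermark("event_ts", "10 minutes")
    .groupBy(F.window("event_ts", "5 minutes"), "action")
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/audit-sketch")
    .start()
)
query.awaitTermination()
```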
Job Classification
Industry: Miscellaneous
Functional Area: Product Management
Role Category: Digital Product Management
Role: Digital Product Management
Employment Type: Full time
Education
Under Graduation: B.Tech/B.E. in Production/Industrial, Any Graduate
Post Graduation: Any Postgraduate
Doctorate: Doctorate Not Required, Any Doctorate