Job Description
4+ years of experience in IT programming and application/product development.
At least 2 years' experience working in big data cloud ecosystems such as AWS or GCP.
Develop and deploy batch and streaming data pipelines in a cloud ecosystem.
Automate manual processes and performance-tune existing pipelines.
Load and process data from multiple source locations into the data lake, data mart, and data warehouse while keeping cost, performance, and security in mind.
Develop and automate analytics tools, and occasionally assist with visualization setup.
Develop processes for migrating on-premises data to the cloud environment.
Strong in SQL/RDBMS and at least one programming language such as Java, Python, or Scala.
Experience with one or more big data tools such as Hadoop, Kafka, Spark, or Beam.
Good experience with AWS services such as EC2, EMR, and Redshift, or with equivalent GCP services such as Compute Engine, BigQuery, and Dataflow.
Experience working with multiple operating systems (Windows, Linux, Unix) and good scripting knowledge, including shell and Bash.
Basic knowledge of web and server-side frameworks such as AngularJS, ReactJS, Node.js, or Django.
Good knowledge of NoSQL database concepts.
Very good communication and teamwork skills.
Employment Category:
Employment Type: Full time
Industry: IT
Functional Area: IT
Role Category: Software Engineer
Role/Responsibilities: Data Engineer
Contact Details:
Company: Agilisium Consulting
Location(s): Chennai