IMMEDIATE JOINERS OR CANDIDATES WITH A NOTICE PERIOD OF 30 DAYS OR LESS REQUIRED.
Spark Scala: Job Description
1. Overall 5+ years of IT experience, including 2+ years of relevant experience with big data technologies (such as Spark with Scala, Hadoop, Hive, HBase, Python, Kafka)
2. Hands-on experience in basic and advanced Java
3. Hands-on experience in Apache Spark with Scala and a deep understanding of distributed systems
4. Hands-on experience with the core Spark API and the Spark Streaming APIs (see the streaming sketch after this list)
5. Relevant experience handling different file formats: JSON, Avro, Parquet, CSV, XML, and plain text (see the file-format sketch after this list)
6. Experience working with HDFS, Hive, S3, MongoDB, and SQL database integration
7. Hands-on experience creating Scala/Spark jobs for data transformation and aggregation (see the aggregation sketch after this list)
8. Sound exposure to ETL pipeline implementation, batch scheduling, and automation
9. Proficiency in writing Hive queries and an understanding of Hive internals (see the Hive query sketch after this list)
10. Good working exposure to data integration/data acquisition frameworks or similar frameworks
11. Strong understanding of big data concepts and distributed computing
12. Experience with source control management tools such as Git and Bitbucket
13. Familiarity with all phases of the development life cycle and with agile development methodologies
14. Excellent problem-solving skills and attention to detail
15. Strong communication and collaboration skills
16. Strong exposure to Unix commands and Unix shell scripting
17. Strong exposure to Oracle/SQL databases
18. Must be able to code and to assist other team members
19. Good to have: experience with Snowflake and AWS cloud data platform services
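
For illustration only, a minimal sketch of reading the formats named in item 5, assuming hypothetical HDFS paths; the Avro and XML readers further assume the spark-avro and spark-xml packages are on the classpath:

```scala
import org.apache.spark.sql.SparkSession

object FormatReaders {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("FormatReaders").getOrCreate()

    // All paths below are hypothetical placeholders.
    val json = spark.read.json("hdfs:///data/events.json")

    val csv = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/events.csv")

    val parquet = spark.read.parquet("hdfs:///data/events.parquet")

    // Avro needs the spark-avro module (e.g. --packages org.apache.spark:spark-avro_2.12:<version>).
    val avro = spark.read.format("avro").load("hdfs:///data/events.avro")

    // XML needs the external spark-xml package; "record" is a hypothetical row tag.
    val xml = spark.read.format("xml").option("rowTag", "record").load("hdfs:///data/catalog.xml")

    val text = spark.read.text("hdfs:///data/notes.txt")

    json.printSchema()
    spark.stop()
  }
}
```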
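A minimal sketch of the kind of transformation-and-aggregation job item 7 refers to, assuming a hypothetical Parquet source with event_ts, region, and amount columns:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailySalesAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailySalesAggregation")
      .getOrCreate()

    // Source path and column names are hypothetical.
    val sales = spark.read.parquet("s3a://example-bucket/sales/")

    // Transform: derive a calendar day, then aggregate revenue and order counts.
    val daily = sales
      .withColumn("day", to_date(col("event_ts")))
      .groupBy("region", "day")
      .agg(
        sum("amount").as("total_revenue"),
        count("*").as("order_count")
      )

    // Write the result back as Parquet, partitioned by day.
    daily.write.mode("overwrite").partitionBy("day")
      .parquet("s3a://example-bucket/aggregates/daily_sales/")

    spark.stop()
  }
}
```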
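A minimal sketch of running a Hive query from Spark (item 9), assuming a hypothetical sales.orders table registered in the Hive metastore:

```scala
import org.apache.spark.sql.SparkSession

object HiveQueryExample {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport wires Spark SQL to the Hive metastore.
    val spark = SparkSession.builder()
      .appName("HiveQueryExample")
      .enableHiveSupport()
      .getOrCreate()

    // Table and column names are hypothetical.
    val topCustomers = spark.sql(
      """SELECT customer_id, SUM(amount) AS total
        |FROM sales.orders
        |GROUP BY customer_id
        |ORDER BY total DESC
        |LIMIT 10""".stripMargin)

    topCustomers.show()
    spark.stop()
  }
}
```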
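For item 4, a minimal sketch using Structured Streaming (the newer streaming API, shown here in place of the DStream-based Spark Streaming API), assuming a hypothetical Kafka broker and orders topic plus the spark-sql-kafka-0-10 package:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window

object OrderStreamCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("OrderStreamCounts").getOrCreate()
    import spark.implicits._

    // Broker address and topic name are hypothetical.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "orders")
      .load()

    // Count records per one-minute window of the Kafka ingestion timestamp,
    // with a watermark so that old aggregation state can be dropped.
    val counts = raw
      .withWatermark("timestamp", "2 minutes")
      .groupBy(window($"timestamp", "1 minute"))
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```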