Desired Candidate Profile
Responsibilities include:
1. Develop big data solutions for near real-time stream processing, as well as batch processing, on the Big Data platform.
2. Analyse problems and engineer highly flexible solutions
3. Set up and run Big Data development frameworks such as Hive, Sqoop, Pig, Mahout, Scala, Spark, streaming mechanisms, and others.
4. Work with business domain experts, data scientists, and application developers to identify data relevant for analysis and develop the Big Data solution.
5. Coordinate effectively with project team members, customers, and business partners.
6. Adapt to and learn new technologies in the Big Data ecosystem.
7. Take initiative in running the project and thrive in a start-up environment.
Skills Required:
1. Minimum of 3 years of professional experience, including 2 years of Hadoop project experience.
2. Experience with Big Data technologies such as HDFS, Hadoop, Hive, Pig, Sqoop, Flume, Spark, etc.
3. Must have core Java or advanced Java experience.
4. Experience in developing and managing scalable Hadoop cluster environments and other scalable, supportable infrastructure.
5. Working knowledge of setting up and running Hadoop clusters.
6. Familiarity with data warehousing concepts, distributed systems, data pipelines, and ETL.
7. Good communication (written and oral) and interpersonal skills.
8. Highly analytical, with strong business sense.
Total Experience - 3+ years
Notice Period - Immediate to 30 days (candidates currently serving their notice period will also be considered)
If you are interested, please share your updated profile at mohammed.azgar@tigeranalytics.com.
If you know someone who would be a good fit for this role, please refer them.
Education:
UG: Any Graduate - Any Specialization
Keyskills:
Python
Hive
Spark
Scala
Big Data
Flume
Hadoop
Elasticsearch
SQL
HBase