Job Description
Dear Candidates
We have an immediate requirement for a Hadoop Developer (Apache Spark) for one of our clients at the Bangalore location.
Summary
Location: Bangalore
Experience: 5+ years required.
Position: Hadoop Developer (Apache Spark)
Immediate joiners preferred; joiners within 15 days acceptable.
JD for Hadoop Developer (Apache Spark) :
Primary Responsibilities:
- Design and build large-scale data processing systems (real-time and batch) to address the growing AI/ML and data needs of a Fortune 500 company
- Build a product to process large volumes of data/events for AI/ML and data consumption
- Automate test coverage (90%+) for data pipelines; establish best practices and frameworks for unit, functional, and integration tests
- Automate CI and deployment processes and establish best practices for the production data pipelines
- Build an AI/ML model-based alerting mechanism and anomaly detection system for the product, with the goal of a self-healing product
Basic Qualifications:
- Bachelor's degree, or equivalent work experience
- 3 or more years of relevant experience
Required Skills/Experience:
- 6 to 8 years of overall experience in software development, with 3 or more years of relevant experience in designing, developing, deploying, and operating large-scale data processing pipelines
- 3 or more years of experience with Apache Spark for streaming and batch processing
- Good knowledge of Apache Kafka
- Strong background in programming (Scala/Java)
- Experience building reusable data frameworks/modules
- Experience with the Airflow scheduler
- Experience with Containers, Kubernetes and scaling elastically
- Strong background in algorithms and data structures
- Strong analytical and problem solving skills
- Strong bent toward engineering solutions that increase the productivity of data consumers
- Strong bent toward fully automated code deployment and testing (DevOps, CI/CD)
- Passion for data engineering and for enabling others by making their data easier to access.
- Some experience working with and operating workflow or orchestration frameworks, including open-source tools such as Activiti, Spring Boot, Airflow, and Luigi, or commercial enterprise tools
- Excellent communication (writing, conversation, presentation) skills, consensus builder
- Demonstrated ability to tackle tough coding challenges independently and work closely with others on a highly productive coding team
Required Skills Summary: Apache Spark, Apache Kafka, Scala/Java, NoSQL Databases, Elasticsearch & Kibana, Kubernetes, Docker Containers
Preferred Skills / Experience:
Knowledge of API Development
Apache Flink experience
Cloud experience
DevOps skills
Any other streaming technologies/tools experience
Regards
Naveen Kumar N
Mail id: na*****k@tr****t.com
Contact no: 9108228***
Job Classification
Industry: IT-Software, Software Services
Functional Area: IT Software - Application Programming, Maintenance
Role Category: Programming & Design
Role: Programming & Design
Employment Type: Full time
Education
Under Graduation: Any Graduate in Any Specialization
Post Graduation: Post Graduation Not Required, Any Postgraduate in Any Specialization
Doctorate: Doctorate Not Required, Any Doctorate in Any Specialization
Contact Details:
Company: Trigent Software Limited
Address: No. 49, 1st Floor, Khanija Bhavana, Race Course Road, Bangalore, Karnataka, India
Location(s): Bengaluru
Keyskills:
Streaming
Java
Kibana
CD
orchestration frameworks
workflow
Apache Flink
Scala
CI
Hadoop
scaling elastically
Airflow scheduler
Hadoop Developer
DevOps
Apache Spark
Elasticsearch
NoSQL Databases
Cloud
CI/CD
Apache Kafka
Docker Containers
API Development
Kubernetes