
Senior Data Engineer - Hadoop/Spark @ Orcapod Consulting


Job Description

Senior Data Engineer - Location: Bangalore - Experience: 5+ years

We are seeking a highly skilled Senior Data Engineer to join our team in Bangalore. The ideal candidate will be adept at using large data sets to find opportunities for product and process optimization, and at using models to test the effectiveness of different courses of action. Your primary focus will be on selecting the optimal solutions for these purposes, then implementing, maintaining, and monitoring them.

Key Responsibilities

  • Design, construct, install, and maintain large-scale processing systems: ensure architectures meet business needs and can handle large datasets.
  • Build large-scale data processing systems: use Apache Spark, Hadoop, AWS EMR, and other technologies to process large datasets for both batch and stream processing.
  • Implement data workflows and scheduling: with tools like Airflow, manage and optimize data pipelines, ensuring data is delivered on time and accurately (see the pipeline sketch below).
  • Optimize data delivery and architect reusable code: work on performance tuning, infrastructure selection, and setting up frameworks for data delivery.
  • Operate and optimize data storage: handle storage solutions such as Hive and Hadoop Distributed File System (HDFS)/S3 EMRFS, and ensure they integrate well with the data infrastructure.
  • AWS data platforms: experience with AWS tools and platforms for data projects, such as EMR, EKS, DynamoDB, and AWS Glue.
  • Collaborate with product/service teams: work closely with other teams to support their data infrastructure needs and assist them with data-related optimizations.
  • Ensure data security and compliance: uphold best practices for data privacy and meet relevant data protection regulations.
  • Stay up to date with the latest technologies: continuously learn about new technologies and incorporate them where beneficial.

Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 5+ years of experience in a data engineering role.
  • Expertise in big data tools: Apache Spark, Hadoop, Hive.
  • Good working experience with Airflow.
  • Experience with AWS data services: EMR, DynamoDB, AWS Glue, etc.
  • Strong analytical skills for working with unstructured datasets.
  • Proficiency in SQL and scripting languages (Python/Scala).
  • Strong project management and organizational skills.
  • Excellent communication skills, both verbal and written.

(ref: hirist.com)
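To illustrate the Airflow-and-Spark workflow responsibilities above, here is a minimal sketch of a daily batch pipeline, assuming Airflow 2.x with spark-submit and the Hive CLI available on the worker. The DAG id, S3 path, and Hive table name are hypothetical placeholders and are not part of this posting.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily batch pipeline: run a Spark job over the day's raw
    # events in S3, then refresh Hive partitions for downstream consumers.
    default_args = {
        "owner": "data-engineering",
        "retries": 2,
        "retry_delay": timedelta(minutes=10),
    }

    with DAG(
        dag_id="daily_events_batch",      # placeholder name
        default_args=default_args,
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Submit the Spark batch job; {{ ds }} passes the logical run date.
        transform_events = BashOperator(
            task_id="transform_events",
            bash_command=(
                "spark-submit --deploy-mode cluster "
                "s3://example-bucket/jobs/transform_events.py "  # placeholder path
                "--date {{ ds }}"
            ),
        )

        # Make the new partition visible in the Hive metastore.
        repair_hive_table = BashOperator(
            task_id="repair_hive_table",
            bash_command="hive -e 'MSCK REPAIR TABLE analytics.events'",
        )

        transform_events >> repair_hive_table

In an AWS deployment of the kind described in the posting, the Spark step would more likely be submitted through EMR (for example as an EMR step), but a plain spark-submit keeps this sketch self-contained.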

Employment Category:

Employment Type: Full time
Industry: Others
Role Category: Application Programming / Maintenance
Functional Area: Not Applicable
Role/Responsibilities: Senior Data Engineer - Hadoop/Spark



Keyskills: Apache Spark, Hadoop, Airflow, Hive, SQL, Python, Scala, AWS EMR, AWS Glue


₹ 10 - 16 Lakh/Yr

Similar positions

Senior/Lead Software Engineer

  • Ntech It Solutions
  • 5 to 9 Yrs
  • Noida, Gurugram
  • 2 days ago
₹ Not Specified

Senior Member of Technical Staff - Oracle

  • Wen Womentech
  • 5 to 9 Yrs
  • Other Karnataka
  • 2 days ago
₹ Not Specified

Senior .Net Core Developer

  • White Horse Manpower
  • 5 to 10 Yrs
  • Bengaluru
  • 3 days ago
₹ Not Specified

Senior Angular Developer

  • White Horse Manpower
  • 5 to 10 Yrs
  • Bengaluru
  • 3 days ago
₹ Not Specified

Orcapod Consulting

Orcapod Consulting Services Private Limited ...