If you are highly interested and available immediately, please submit your resume along with your total experience, current CTC, notice period, and current location to hidden_email.

Key Responsibilities:
- Design, develop, and optimize data pipelines and ETL workflows.
- Work with Apache Hadoop, Airflow, Kubernetes, and containers to streamline data processing.
- Implement data analytics and mining techniques to drive business insights.
- Manage cloud-based big data solutions on GCP and Azure.
- Troubleshoot Hadoop log files and work with multiple data processing engines to build scalable data solutions.

Required Skills & Qualifications:
- Proficiency in Scala, Spark, PySpark, Python, and SQL.
- Strong hands-on experience with the Hadoop ecosystem, Hive, Pig, and MapReduce.
- Experience in ETL, data warehouse design, and data cleansing.
- Familiarity with data pipeline orchestration tools such as Apache Airflow.
- Knowledge of Kubernetes, containers, and cloud platforms such as GCP and Azure.

If you are a seasoned big data engineer with a passion for Scala and cloud technologies, we invite you to apply for this exciting opportunity!
Employment Category:
Employment Type: Full time
Industry: IT Services & Consulting
Role Category: Not Specified
Functional Area: Not Specified
Role/Responsibilities: Scala Big Data Lead Engineer - 7 YoE