Company Overview:
Since 1993, EPAM Systems, Inc. (NYSE: EPAM) has leveraged its advanced software engineering heritage to become the foremost global digital transformation services provider, leading the industry in digital and physical product development and digital platform engineering services. Through its innovative strategy; integrated advisory, consulting, and design capabilities; and unique Engineering DNA, EPAM's globally deployed hybrid teams help make the future real for clients and communities around the world by powering better enterprise, education, and health platforms that connect people, optimize experiences, and improve people's lives. Selected by Newsweek as a 2021 Most Loved Workplace, EPAM's global multi-disciplinary teams serve customers in more than 40 countries across five continents. As a recognized leader, EPAM is listed among the top 15 companies in Information Technology Services on the Fortune 1000 and ranked as the top IT services company on Fortune's 100 Fastest-Growing Companies list for the last three consecutive years. EPAM is also listed among Ad Age's top 25 World's Largest Agency Companies, and in 2020, Consulting Magazine named EPAM Continuum a top 20 Fastest-Growing Firm. Learn more at www.epam.com.
Job Profile:
4+ years of experience in Big Data and related technologies
Expert-level understanding of distributed computing principles
Expert-level knowledge of and experience with Apache Spark
Hands-on programming experience with Python, Java, or Scala
Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
Experience with messaging systems, such as Kafka or RabbitMQ
Good understanding of Big Data querying tools, such as Hive and Impala
Experience integrating data from multiple sources, such as RDBMS (SQL Server, Oracle), ERP systems, and files
Good understanding of SQL queries, joins, stored procedures, relational schemas
Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
Knowledge of ETL techniques and frameworks
Experience with performance tuning of Spark jobs
Experience with native cloud data services on AWS, Azure, or GCP (preferred)
Ability to lead a team efficiently
Experience with designing and implementing Big data solutions
Practitioner of Agile methodology
If you feel your profile is suitable, please send your updated CV to an********o@ep*m.com along with the following details. If not, we encourage you to share references, if any.
1. Experience in Spark:
2. Experience in programming language (Python, Java, Scala):
3. Current & Expected CTC:
4. Notice Period:
5. Current & Preferred location:
Thanks and Regards,
Anupama Rao