What you will do:
Build large-scale batch and real-time data pipelines using the latest technologies to support production systems.
Apply design thinking and an agile mindset when working with other engineers and business stakeholders to continuously experiment, iterate, and deliver on new initiatives.
Leverage best practices in continuous integration and delivery.
Help drive transformation by continuously looking for ways to automate existing processes and testing, and to optimize data quality.
Explore new capabilities and technologies to drive innovation.
What you need:
5 to 7 years of experience building data products leveraging big data technologies (Hadoop, Spark, Kafka, Elasticsearch)
Experience writing clean and concise code using Java / Scala / Python
Experience with pipeline tools (Airflow / Luigi / Nifi)
Experience with modern data warehousing tools (Presto / Snowflake / Redshift)
Experience with public cloud environments
Nice to have:
Knowledgeable about container orchestration (Docker, Kubernetes, Mesos)
Knowledgeable about data modeling, data access, and data storage techniques.
A continuous-learning mindset and enjoyment of working on open-ended problems.
Keyskills: continuous integration, orchestration, Spark, data modeling, Hadoop, cloud, data quality, big data, data warehousing, Python
Accion Labs is a product engineering company helping to transform businesses through emerging technologies, including Web 2.0, open source, SaaS/cloud, mobility, IT operations management/ITSM, big data, and traditional BI/DW. Through nine global offices and a rapid-response delivery model, Accio...