
Impetus Technologies is HIRING!!

Role: Big Data Engineers

  • Experience working with Hadoop distributions, with a good understanding of core concepts and best practices.
  • Good experience building and tuning Spark pipelines in Scala/Python.
  • Good experience writing complex Hive queries to derive business-critical insights.
  • Good programming experience in Java, Python, Scala, PySpark, or SQL.
  • Understanding of Data Lake vs. Data Warehouse concepts.
  • Experience with NoSQL technologies – MongoDB, DynamoDB.
  • Good to have: experience with AWS Cloud; exposure to Lambda/EMR/Kinesis.
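The Hadoop/MapReduce skills listed above all revolve around the same map–shuffle–reduce pattern. As a rough, framework-free sketch (plain Python standing in for what Hadoop or Spark would distribute across a cluster; the function names are illustrative, not part of any framework API):

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) pairs for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big pipelines", "data lakes and data warehouses"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["data"])  # 3
```

In Hadoop or Spark the map and reduce steps run in parallel on partitions of the data, and the shuffle moves keyed records between nodes; the logic per record is the same as above.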

Skills Required:

spark, hive, kafka, mapreduce, java, python, pyspark, hadoop, cloud

Role:

  • Design and implement solutions for problems arising from large-scale data processing.
  • Attend/drive architectural, design, and status calls with multiple stakeholders.
  • Take end-to-end ownership of all assigned tasks.
  • Design, build, and maintain efficient, reusable, and reliable code.
  • Test implementations, and troubleshoot and correct problems.
  • Work effectively both as an individual contributor and within a team.
  • Ensure high-quality software development with complete documentation and traceability.
  • Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups).
  • Conduct technical trainings/sessions and write whitepapers, case studies, blogs, etc.

AntWak Data Engineer Program: a LIVE online Data Engineering course covering Data Warehousing, Data Lakes, Data Processing, Big Data & Hadoop, Advanced SQL, Data Visualization, and more.

Learn about our AntWak Experiential Program Data Engineering here
