About us:

At Impetus, we create software products and solutions that enable a real-time, data-driven, intelligent enterprise on the cloud. We partner with several Fortune 100 companies.

Impetus Technologies is focused on delivering a unified, clear, and present view for the intelligent enterprise through data warehouse modernization, unification of data sources, self-service ETL, advanced analytics, and BI consumption. For more than a decade, Impetus has been the ‘Partner of Choice’ for several Fortune 500 enterprises in transforming their data and analytics lifecycle.

Location: Indore / Noida / Bangalore / Gurugram / Chennai / Hyderabad / Pune / Nagpur / Jaipur / Chandigarh

Must have:

  • Experience working with a Hadoop distribution (CDH/HDP/MapR)
  • Hands-on experience with MapReduce, Hive 2.x, Spark 2.x
  • Conceptual knowledge of data structures & algorithms
  • In-depth knowledge of various design patterns and data processing patterns (batch/NRT/RT processing), with the ability to provide design & architecture for typical business problems

  • Knowledge of and experience with NoSQL databases (Cassandra/HBase/MongoDB/CouchDB/Neo4j) and SQL databases (MySQL/Oracle)

  • Programming experience with Python/Java
  • Excellent communication, problem-solving & analytical skills, with the ability to thrive in a fast-paced, dynamic environment and operate under stringent deadlines
  • Confident, highly motivated and passionate about delivery and customer satisfaction
  • Strong development experience writing performant code that follows coding best practices
  • Out-of-the-box thinker, not limited to the work done in existing assignment(s)

What you will do:

  • Design and implement solutions for problems arising out of large-scale data processing
  • Provide the team with technical directions/approaches and guide them in resolving queries and issues
  • Attend/drive various architectural, design and status calls with multiple stakeholders
  • Take end-to-end ownership of all assigned tasks
  • Design, build & maintain efficient, reusable & reliable code
  • Test implementations, troubleshoot & correct problems
  • Work effectively both as an individual contributor and within a team
  • Ensure high quality software development with complete documentation and traceability
  • Fulfil organizational responsibilities (sharing knowledge & experience with other teams / groups)
  • Conduct technical trainings/sessions; write whitepapers, case studies, blogs, etc.

Good to have:

  • Knowledge of/experience with search platforms (Solr/Elasticsearch), as well as designing and implementing RESTful APIs
  • Experience with cloud environments (AWS/GCP/Azure); exposure to containers & container management platforms (Docker/Kubernetes)
  • Understanding of data lake vs. data warehouse concepts, the ability to perform comparative analysis of data stores, and knowledge/experience creating & maintaining them
  • Experience with Big Data ML toolkits (SparkML/Mahout)
  • Knowledge of data privacy, data governance, data compliance & security
  • Programming experience with Python/Scala
  • Experience building & maintaining optimal, reliable data pipelines to deliver solutions on the fly
  • Experience working on open source products

Apply online: click here
