Big Data Engineer

1 Nos.
25634
Full Time
5.0 Year(s) To 8.0 Year(s)
9.00 LPA TO 16.00 LPA
IT Software - Project & Program Mgt / Other
IT-Software/Software Services
Job Description:
Skills Required:  
  • 5-7 years of experience as a Big Data Engineer or similar role
  • 5-7 years of experience programming and/or architecting in a back-end language (Java, J2EE, Core Java)
  • University degree in Computer Science, Engineering, or equivalent (preferred)
  • Experience with Java-oriented technologies (ORM frameworks such as Hibernate, REST/SOAP)
  • Ability to gather accurate requirements and work closely with stakeholders to prioritize tasks and the scope of development
  • Experience with relational and non-relational databases (SQL, MySQL, NoSQL, Hadoop, MongoDB, CouchDB)
  • Experience with Spark, the Hadoop ecosystem, and similar frameworks
  • Object-oriented analysis and design using common design patterns
  • Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
  • Experience with cloud infrastructure (AWS/Google Cloud/Azure)
  • Exposure to Python and AI/ML frameworks is a plus
  • Creative and innovative approach to problem-solving
  • Performance: hands-on experience with Java profiling tools (JProfiler, Eclipse Memory Analyzer, DynaTrace, Introscope, Ganglia, etc.), deep knowledge of JVM internals and GC, and excellent thread- and heap-dump analysis skills
  • Experience with source control, testing, and deployment tools (CVS, SVN, JUnit, jMock, Ant, Maven)

Day to Day Work Responsibilities:

  • Build reusable frameworks for interaction between the different components of the platform
  • Apply data cleaning, wrangling, visualization, and reporting, with an understanding of the most efficient use of the associated tools for machine-learning pipelines and time-series data analysis
  • Monitor data performance and define infrastructure needs
  • Contribute to our pluggable ELT framework, which extracts data from various sources and stores it in the data lake
  • Work closely with the data science team to build data-preparation pipelines at scale
  • Stream data from events generated across sources using tools such as Kafka
Company Profile

As a company, they are bringing the next wave of technological disruption by being “OUTCOMES OBSESSED”. They aim to create an “Outcomes Economy” by building enduring platforms for innovation that pursue excellence, creating products, systems, and solutions that delight their customers and exceed their expectations.

Apply Now

  • Interested candidates are requested to apply for this job.
  • Recruiters will evaluate your candidature and get in touch with you.