Big Data Hadoop Architect



Work type: Information & Communication Technology

Salary: $130000 - $150000 per annum

Job reference:

Start date:

Contact: Peter Li

We are seeking a Big Data Architect (Hadoop) for large-scale data and analytics work at a large enterprise. Our client is building a team to support their next-generation data platform initiatives and is looking for a Big Data Architect (Hadoop) who can lead the charge and take their existing architecture, design, and cloud-based data engineering to the next level.
In this role, you will have the opportunity to shape the future of advanced data analytics. We are seeking a highly skilled technologist who can map technology capabilities to business objectives.
As the Big Data Architect, you will own the design and architecture strategy of the big data platform. This is a critical role, focused on delivering scalable solutions that unify and link all of the client's existing data assets across the company.
You will provide technical leadership and mentor others on the team. You will be expected to have experience designing and delivering scalable solutions using technologies in the Hadoop ecosystem (Spark, Kafka, Flume, Oozie, HBase).
  • Exciting, varied work every day
  • Dynamic and fast-growing organisation
  • Diverse role with an opportunity to make a significant impact
  • Architect and lead the development of a highly scalable, next-generation data platform.
  • Lead, manage, and mentor the WB Data Engineering team in big data technologies.
  • Work with other teams to ensure delivery targets are met.
  • Design and implement complex, scalable statistical models, such as (but not limited to) recommendation and classification models.
  • Design and implement solutions that meet security compliance guidelines.
  • Bachelor's degree in Computer Science or related field.
  • Minimum of 2 years' experience architecting and delivering solutions on the Hadoop platform.
  • Minimum of 1.5 years' experience working with the Apache Hadoop framework.
  • Solid experience with Apache Spark and Hive.
  • Experience using Sqoop, Kafka, Flume, Oozie, HBase, and Apache Phoenix.
  • Strong communication skills.
  • Ability to work independently or collaboratively.
  • Detail oriented with strong organization and prioritization skills.
  • Demonstrated ability to choose technologies and implementation approaches based on long-term strategic objectives, while weighing short-term implications for ongoing or planned implementations.
  • Demonstrated ability to apply technology in solving business problems and proven ability to communicate with both technical and non-technical audiences.
  • In-depth knowledge of software development technology, principles, methods, tools, and practices, as well as industry standards and trends in the big data space.
If you possess the above skills and are interested in a challenging and rewarding opportunity with an exciting, fast-growing organisation, please apply below or call Peter Li on 03 9236 7726.
