Cloud Data Engineer - DevOps

Location: Australian Capital Territory
Category: Information & Communication Technology
Rate: $600.00 - $650.00 per day
Contact: Steven Jobson

  • Big Data Integration
  • Permanent Residents can apply
  • 2-month contract
About the Organisation
Our client is one of the world's foremost providers of consulting, technology, outsourcing and local professional services.

About the Role
As a Data Engineer, you will expand and optimise our client's data and data pipeline architecture, as well as optimise their data flow and collection for cross-functional teams.

Key Responsibilities
  • Build robust, efficient and reliable data pipelines that ingest and process data from diverse sources into a cloud-based data lake platform
  • Design and develop real-time streaming and batch processing pipeline solutions
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Build DevOps pipelines
Skills & Experience
  • Proven working experience as a Big Data Engineer, preferably building data lake solutions by ingesting and processing data from various source systems
  • Experience with multiple Big Data technologies and concepts such as HDFS, Hive, MapReduce, Spark, Spark Streaming and NoSQL databases such as HBase
  • Experience building data platforms on any public cloud (AWS, Azure or GCP)
  • Experience in one or more of Java, Scala, Python and Bash
Desirable skills:
  • Knowledge of and/or experience with Big Data integration and streaming technologies (e.g. Kafka, Flume)
  • Experience building a data ingestion framework for an enterprise data lake
How to Apply

To apply for this opportunity, please submit your application to Isabelle Barling at Talent International by clicking the "APPLY NOW" button below. Alternatively, you can contact Isabelle on 02 6285 3500 for further information.
