What type of candidate are you?
The role is well suited to an eager learner with a growth mindset. You will have the autonomy to explore innovative solutions and recommend technologies and methods that improve the business process.
You will contribute to the design, implementation, and operation of a platform to supercharge analytics across the company. You will work closely with the end-users to understand their challenges and align the evolution of the platform to their needs.
What will you be doing?
- Develop creative solutions and optimisations to improve the performance, elasticity, fault-tolerance, and user experience of existing and new infrastructure
- Implement and promote operational excellence through the adoption of enterprise-grade systems, automation, and troubleshooting processes
- Engineer, operate, and optimise large-scale data pipelines
- Work closely with end-users to promote the adoption of modern technologies, techniques, and best practices
- Establish close cross-functional relationships with developers and data analysts to deliver solutions aligned to the business needs
What skills will you have?
- Strong development skills using Python, Java or Scala
- Experience working with Linux-based operating systems and tools
- Good understanding of data lakes and foundational technologies such as Hadoop, S3, Hive, and EMR
- Proficiency in building ETL processes in batch and streaming contexts using Spark, Flink, or similar technologies
- Excellent collaborator and communicator, able to share and explain complex solutions to business and technology stakeholders
- Preferably, experience in data modelling or data warehousing techniques
What technologies are we looking for?
- Familiarity with AWS analytics services (SageMaker, MSK, Redshift, Athena, Glue, etc.)
- Experience working with Docker and Kubernetes