
Principal Data Engineer at Just Eat

Position: Principal Data Engineer

Department: Customer Platforms

Location: London

Closing Date: 22/02/2019

Salary: £80,000 - £100,000

The Opportunity

Just Eat is searching for a skilled engineer to join the Data Engineering team as a Principal Data Engineer. Ideal candidates are passionate about modern big data technologies and engineering practices, and relish the challenge of building scalable, reliable solutions that support real-time analytics, advanced data science and critical data-dependent operational projects.

What We Do

The Data Engineering team's role is to build a transformational data platform that democratises data across Just Eat. We work to the following principles:

  • Open Data: We ingest all data produced across Just Eat using batch and real-time pipelines and make it available to every employee. This data then drives analytics, business intelligence, data science and critical business operations (there is a minimal batch sketch after this list).

  • Self Service: We build tools, frameworks and processes to support self-service modelling and activation of data. Our goal is to empower our users to find, process and consume our data without barriers.

  • Single Truth: We build services that host all metadata about Just Eat's data in a single store and promote governance, data culture and a Single Source of Truth.

  • Intelligent Personalisation: We build and maintain a machine learning platform which supports data scientists in developing and deploying ML models at production scale. This allows us to deliver insights, personalisation and predictions to our customers.
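
To give a flavour of the batch side mentioned above, here is a minimal sketch of a daily ingestion job expressed as an Airflow DAG. This is illustrative only: it assumes Apache Airflow 1.10.x, and the DAG id, partition logic and load_orders callable are hypothetical rather than our production code.

```python
# Illustrative only: a daily batch DAG, assuming Apache Airflow 1.10.x.
# The DAG id and the load_orders callable are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def load_orders(ds, **kwargs):
    # Placeholder body: load one day's raw order events into the warehouse.
    print("Loading orders partition for %s" % ds)


with DAG(
    dag_id="orders_daily_ingest",       # hypothetical pipeline name
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_orders",
        python_callable=load_orders,
        provide_context=True,           # passes ds (the run date) to the callable
    )
```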

How We Do It

Our team is built on the following tenets:

  • Innovate: We are always on the lookout for new technologies that help us achieve our goals. Our team is always learning and growing; we're inquisitive and we're not afraid of new tech and open-source tooling. We're looking for like-minded engineers with a passion for keeping our code-base and infrastructure best in class.

  • Build for Scale: All our tools and components are built for scale, and we use Kubernetes and related tooling to scale automatically based on usage.

  • Serverless: We don't manage servers by hand; we treat them as cattle, not pets. We run multiple Kubernetes clusters hosting our Airflow infrastructure, numerous microservices and the surrounding tooling. In addition, we take advantage of the great serverless products available in GCP, including BigQuery, Dataflow (Apache Beam), Pub/Sub and Datastore (see the streaming sketch after this list).

  • Infrastructure as Code: We practise a DevOps-first culture, with everyone in the team helping to deploy our infrastructure with Terraform and our CI/CD pipelines with Jenkins.

  • Collaboration & Ownership: All code is owned by the team, and we have multiple avenues for collaboration: rotation, pairing and technical showcases. We also encourage team members to take ownership of their own code and promote self-governance.
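
To make the serverless tenet above concrete, below is a minimal sketch of a streaming pipeline on that stack: Pub/Sub in, BigQuery out, runnable on Dataflow. It assumes the Apache Beam Python SDK with the GCP extras (apache-beam[gcp]); the project, topic and table names are hypothetical, and the destination table is assumed to already exist.

```python
# Illustrative only: a streaming Beam pipeline on the GCP stack above.
# Project, topic and table names are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# streaming=True because Pub/Sub is an unbounded source; add runner and
# project options to run this on Dataflow rather than locally.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/order-events")
        # Messages are assumed to be UTF-8 JSON payloads.
        | "ParseJson" >> beam.Map(json.loads)
        | "WriteRows" >> beam.io.WriteToBigQuery(
            "example-project:analytics.order_events",
            # CREATE_NEVER: we assume the destination table already exists.
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
    )
```

The same pipeline shape runs locally on the DirectRunner or as a fully managed Dataflow job, switched purely through pipeline options.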

What We Are Looking For

  • Great coding ability - We expect you to write well-tested, readable and performant production code to process large volumes of data. Our code is currently a mix of Scala and Python, and we love polyglots.

  • Experience working with a public cloud - AWS, Azure or Google Cloud. We use Google Cloud for all our deployments, with a mix of services: Kubernetes, Dataflow, Pub/Sub, etc.

  • Ability to contribute to architecture discussions and to influence peers and stakeholders towards better decisions.

  • Inclination to collaborate and ability to communicate technical ideas clearly.

  • An understanding of systems end to end, beyond the code itself (e.g. infrastructure, CI, deployment, monitoring and alerting), and a willingness to take ownership of them.

  • A solid understanding of the fundamentals of computing and distributed systems.
