Job summary
The EU Economics team is a central science team working across a variety of topics in the EU Retail business and beyond. We work closely with EU business leaders to drive change at Amazon, focusing on long-term, ambiguous, and challenging problems. Key topics include pricing, product selection, delivery speed, profitability, and customer experience. We tackle these by building novel econometric models, machine learning systems, and high-impact experiments, which we integrate into business, financial, and system-level decision making. Our work is highly collaborative, and we regularly partner with EU- and US-based interdisciplinary teams.
We are looking for a Data Engineer to work closely with our scientists and software engineers to accelerate and scale our impact. We build innovative prototype products based on economic science and want to collaborate with a dynamic data engineer who can supercharge those products with high-quality, scalable data systems. The ideal candidate will have a strong bias for action, work with scientists and engineers day to day to enhance scientific approaches, and set out a roadmap for the data infrastructure that supports the team's overall ambitions.
If you have an entrepreneurial spirit, you know how to deliver results fast, and you are keen to learn how to build complex science-based solutions, we want to talk to you.
Key job responsibilities
* Work closely with scientists and software engineers to design and implement large-scale, high-volume, high-performance data models for greenfield projects
* Use AWS technologies to build, operate, and monitor reliable, scalable ETL pipelines
* Help establish a long-term strategy for our data infrastructure, best practices, and operational excellence
* Be prepared to challenge norms and innovate, becoming the team's technical lead for all things data
Qualifications
* Degree in Computer Science, Engineering, Mathematics, or a related field, or 5+ years of industry experience
* Demonstrated strength in data modelling, ETL development, and data warehousing
* Advanced SQL and query performance tuning skills
* Coding proficiency in at least one modern programming language (e.g. Python, Java)
* Experience building and operating highly available, distributed systems for extracting, ingesting, and processing large data sets
* Experience leading large-scale data warehousing and machine learning projects.
* Experience using AWS data technologies (e.g. DynamoDB, Redshift, S3, Glue)
* Experience using Apache Spark (Spark SQL, Spark Scala, or PySpark)
Job ID: 93220