Tel Aviv, Israel
Product

Senior Data Engineer

About the position

The future of commerce will be networks of localized inventory, determined by advanced data intelligence and combined with automated, highly efficient micro-fulfillment centers and last-mile delivery platforms. The result will be massive improvements in consumer experience (selection, speed, and cost), retailer operating margins, and capital efficiency. Advances in AI-driven operating networks will enable retailers to predict and generate demand in near real-time while stocking and fulfilling inventory just as efficiently - dramatically reducing backorders and the need for buffer stock while accelerating profitability. The future will finally enable having the right inventory in the right place at the right time.

A little bit about us - Founded in 2015, Fabric has raised $138 million to date and is backed by Aleph, Corner Ventures, Canada Pension Plan Investment Board (CPPIB), Innovation Endeavors, La Maison, Playground Ventures, and Temasek. With offices in New York City and Tel Aviv and operations in four US metros, Fabric is rapidly expanding its U.S. operations, with more than 200 team members globally and 15 sites under development or contract, including four live micro-fulfillment centers.

Job Overview

Fabric is looking for senior engineers to work with the company’s Director of Data Strategy and Engineering to drive Fabric’s data architecture and engineering capabilities. These capabilities serve two main purposes: internally, supporting Fabric’s own operations management and robotics performance; and externally, building the data streaming, visualization, and data insights capabilities we deliver to our customers.

Responsibilities

  • Design the data architecture, working closely with the product organization to define what needs to be built and the path for execution
  • Contribute to the process methodology for Data Engineering. Fabric’s success depends on data processes and an implementation methodology that are repeatable, scalable, and efficient - this is of paramount importance for the product overall.
  • Align and collaborate with the Applied Research and Engineering organizations to define and deliver optimal processes, and partner with them to develop and enhance tools and methodologies that improve our products and services for our customers
  • Build pipelines to gather, refine, and transform large and diverse datasets into simplified, meaningful, and actionable data models.
  • Work with engineers and data analysts/scientists to understand data needs
  • Build and operate large-scale data infrastructure in production (performance, reliability, monitoring)
  • Improve data quality and optimization, and integrate data from multiple data sources
  • Design and develop data models that enrich the data available to the team

Requirements

  • 6+ years of data engineering experience
  • B.Sc. or M.Sc. in a technical field such as engineering, mathematics, statistics, computer science, physical science, or operations research
  • Experience with ETL design, implementation, and maintenance. Familiarity with Airflow - an advantage
  • Significant experience with schema design and dimensional data modeling.
  • Experience with one or more columnar or relational databases (Redshift, Snowflake, MySQL, MS-SQL)
  • A self-directed learner with strong technical skills
  • Experience working in a Linux-based environment
  • Advantage: understanding of, and experience with, modern cloud data infrastructure: AWS, Docker, Kubernetes, Spark, Presto, Kafka, etc.
  • Advantage: experience with Elasticsearch, Tableau, BigQuery, Looker, Sisense
  • Strong customer-facing skills, including working directly with customers to define their data needs
  • An excellent collaborator with great communication skills, who can take part in diverse cross-functional teams
  • Self-motivated and able to thrive under pressure and function effectively in a fast-paced environment.
  • Proven ability to deliver results in a high-performance organization