Research SW Engineer

Israel

About The Position

Zebra has set out on a mission to help hundreds of millions of people gain access to fast, accurate medical diagnosis by teaching computers to read and diagnose medical imaging data.

We are looking for a strong software engineer to develop research and deep learning infrastructure that optimizes both research at scale and our deployed production systems.

Your mission will be to build world-class infrastructure for developing, testing and delivering deep learning algorithms - making the process highly scalable, efficient and reproducible - using state-of-the-art tools and practices from the leading companies pioneering AI at scale.

Responsibilities

  • Drive a 10x improvement in the number of experiments per researcher
  • Automate parts of the research process, from creating simple baselines for algorithms to running grid searches over hyperparameters
  • Create state-of-the-art experiment-tracking infrastructure that enables reproducible experiments, as well as an experiment dashboard showing progress across all projects
  • Build infrastructure that keeps our tens of GPUs at 100% utilization for experiments
  • Develop and adopt innovative research tools - from image visualization tools to neural network debugging capabilities
  • Be the center of excellence for engineering in the algorithms team - for tools, libraries, pipelines, pre-processing and DevOps - and mentor algorithm researchers on engineering practices

Requirements

  • You are passionate about and experienced in a variety of technologies - most likely having worked heavily with Python and data over the last few years.
  • You have authored large-scale data pipelines on a modern data stack such as Hadoop/Spark.
  • You like DevOps and have worked with Docker and Kubernetes.
  • You love and understand data - you are used to diving deep into data behavior and semantics, and you have profiled and debugged complex data.
  • You have worked very closely with data scientists to develop, train and evaluate models.
  • You are curious about deep learning, have probably taken at least an online course, and have developed a few models yourself in Keras/TensorFlow/PyTorch.

Skills

  • Very strong data engineer
  • Expert in Python
  • Experience with DevOps, especially Docker and ideally Kubernetes
  • 4+ years working on data infrastructure, pipelines and machine learning
  • Mentor, facilitator, enabler personality
  • Nice to have: deep learning, C/C++, a solid understanding of hardware and networking
