Intel® Nervana™ Graph: A Universal Tensor JIT Compiler Webinar

Description

Deep learning is rapidly advancing traditionally hard problems in artificial intelligence such as image recognition, playing the board game Go, speech recognition, and machine translation, and it is driving a large increase in demand for high-performance compute hardware. Given the fragmented software landscape around deep learning and Intel’s upcoming expansion into the deep learning accelerator space, Intel needs to deliver software that enables maximum performance across a wide variety of customer use cases and hardware platforms.

The Intel® Nervana™ Graph project is designed to solve this problem by establishing an Intermediate Representation (IR) for deep learning that all frameworks can target, allowing them to execute seamlessly and efficiently across the platforms of today and tomorrow with minimal effort. In addition to this IR, the project offers connectors to popular frameworks such as TensorFlow* and Intel’s reference framework Neon™, as well as back ends for compiling and executing the IR on Intel® architecture (IA) CPUs, GPUs, and future Intel hardware.
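To make the IR idea concrete, the sketch below shows what building and executing a tiny graph looks like through the project's Python front end. It is a minimal sketch in the spirit of the ngraph-python walkthrough from around the time of this webinar; the module layout and calls shown (ngraph, ngraph.transformers, placeholder, make_transformer, computation) are assumptions drawn from that documentation and may have changed since.

    # Minimal sketch, assuming the ngraph-python front end circa 2017.
    # Names used here (ngraph, ngraph.transformers, placeholder,
    # make_transformer, computation) are assumptions and may differ
    # from the current API.
    import ngraph as ng
    import ngraph.transformers as ngt

    # Build a tiny framework-neutral graph: a scalar placeholder plus one.
    x = ng.placeholder(())        # scalar input node in the IR
    x_plus_one = x + 1            # graph node; nothing is computed yet

    # A transformer is a back end (here the default CPU back end) that
    # compiles the graph and returns a callable computation.
    transformer = ngt.make_transformer()
    plus_one = transformer.computation(x_plus_one, x)

    for i in range(5):
        print(plus_one(i))        # runs the compiled graph on each input

The point of the design is that the same graph could be handed to a different transformer, for example a GPU or future accelerator back end, without changing the framework-facing code.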

What you will learn:

  • The deep learning ecosystem, how the field is paralleling the programming language community, and how Nervana™ Graph acts as a compiler for deep learning workloads to address the combinatorial problem of supporting every framework on every hardware platform.
  • Why build a deep learning compiler when we already have things like LLVM, CUDA*, cuDNN, Intel® MKL-DNN, and TensorFlow*?
  • What goes into making a deep learning compiler and what can end users expect from it?


Register Here

When and Where

  • Start Time: Aug 1, 2017, 9:00 AM PDT (America/Los_Angeles)
  • End Time: Aug 1, 2017, 10:00 AM PDT (America/Los_Angeles)
  • Location: Online

Event Info

  • Event Type: Webinar
  • Event Visibility & Attendance Policy: Open
