|Full Title:||NEAT Topics|
|Date & Time:||23 Jun 2017 at 13:30|
|Event Topic(s):||Computing Techniques Seminar|
|Event Info:||Speaker: Dr. Abhinav Vishnu (PNNL)|
Abstract: Deep Learning (DL) is ubiquitous. Yet leveraging distributed-memory systems for DL algorithms is incredibly hard. In this talk, we will present approaches to bridge this critical gap. We will start by scaling DL algorithms on large-scale systems such as supercomputers at leadership-class facilities. Specifically, we will: 1) present our TensorFlow and Keras runtime extensions, which require negligible changes in user code for scaling DL implementations, 2) present communication-reducing/avoiding techniques for scaling DL implementations, 3) present research on semi-automatic generation of DNN topologies, and 4) discuss their applicability to science use cases. Our results will include validation on several US supercomputer sites, such as Berkeley's NERSC, the Oak Ridge Leadership Computing Facility, and PNNL Institutional Computing. We will provide pointers and discussion on the general availability of our research under the umbrella of the Machine Learning Toolkit for Extreme Scale (MaTEx), available at http://github.com/matex-org/matex.
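The runtime extensions described in the talk hide data-parallel communication behind the usual TensorFlow/Keras interface. As a rough, hedged illustration of the underlying pattern (not MaTEx's actual API; all names here are hypothetical, and the cross-worker allreduce is simulated in-process rather than over MPI), each worker computes gradients on its own data shard and the gradients are averaged before every update, so all replicas stay in sync:

```python
import numpy as np

def allreduce_average(local_grads):
    # Average per-worker gradients; in a real distributed runtime this
    # would be an MPI_Allreduce collective, simulated in-process here.
    return np.mean(np.stack(local_grads), axis=0)

def sgd_step(weights, grads, lr=0.1):
    # Every worker applies the identical averaged gradient, keeping
    # all model replicas consistent.
    return weights - lr * grads

# Simulate 4 workers, each holding the gradient from its own data shard.
rng = np.random.default_rng(0)
weights = np.zeros(3)
local_grads = [rng.normal(size=3) for _ in range(4)]

avg = allreduce_average(local_grads)
weights = sgd_step(weights, avg)
```

Communication-reducing/avoiding variants of this pattern (also covered in the talk) trade off how often or how precisely this averaging is performed.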
Dr. Vishnu's research interests are in designing scalable, fault-tolerant, and energy-efficient Machine Learning and Data Mining (MLDM) algorithms. A few examples include Deep Learning algorithms (with TensorFlow), Support Vector Machines (SVM), Frequent Pattern Mining (FP-Growth), and several others such as K-Nearest Neighbors (k-NN) and k-means, using MPI and PGAS models such as Global Arrays. The MLDM research is integrated in the Machine Learning Toolkit for Extreme Scale (MaTEx). He is also interested in applications of Machine Learning, including fault and performance modeling and domain sciences. He has also been involved in designing scalable programming models and communication subsystems. A by-product of his research in PGAS programming models (Global Arrays) is the Communication Runtime for Extreme Scale (ComEx), which is released with Global Arrays. During his PhD, he was heavily involved in designing MPI runtimes for InfiniBand and other interconnects. His research is integrated with MVAPICH, a high-performance MPI implementation for InfiniBand, RoCE, Intel Omni-Path, and other high-performance interconnects.
Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/6306058062 or https://fnal.zoom.us/my/gabrielperduemtg
Or iPhone one-tap (US Toll): +16465588656,6306058062# or +14086380968,6306058062#
Or Telephone:
Dial: +1 646 558 8656 (US Toll) or +1 408 638 0968 (US Toll)
Meeting ID: 630 605 8062
International numbers available: https://zoom.us/zoomconference?m=vQVYNrinJQOOLVeXXkMy7ccAFGdjNzAB
Or an H.323/SIP room system:
H.323: 22.214.171.124 (US West) or 126.96.36.199 (US East)
Meeting ID: 630 605 8062