SysML: On System and Algorithm co-design and Automatic Machine Learning
published: Sept. 24, 2018, recorded: August 2018, views: 948
The rise of Big Data and AI computing has created new demands on Machine Learning systems: they must learn complex models with millions to billions of parameters that promise adequate capacity to digest massive datasets and deliver powerful, real-time predictive analytics. In this talk, I discuss a recent trend toward building new distributed frameworks for AI at massive scale, known as "system and ML algorithm co-design", or SysML: system designs are tailored to the unique properties of ML algorithms, and algorithms are re-designed to fit better into the system architecture. I show how one can exploit the statistical and algorithmic characteristics unique to ML programs, but absent from traditional computer programs, when designing the system architecture, achieving significant, universal, and theoretically sound speed-ups of ML programs across the board.

I also give a brief introduction to the Petuum system, which is built on such interdisciplinary innovations and aims to dramatically improve the adoption of AI solutions by lowering the barrier of entry to AI technologies via Automatic Machine Learning. I show how, through automatable, product-grade, hardware-agnostic, standardized building blocks that can be assembled and customized, AI users can free themselves from the demanding work of algorithm programming and system tuning, and easily experiment with different AI methods, parameters, and speed/resource trade-offs, either by themselves or automatically.

To put this in a broader context, recent discussions about AI in both the research community and the general public have championed a novelistic view of AI: that AI can mimic, surpass, threaten, or even destroy mankind. Such discussions are fueled mainly by recent advances in deep learning experimentation and applications, which are, however, often plagued by craftiness, un-interpretability, and poor generalizability.
I will discuss a different view of AI as a rigorous engineering discipline and as a commodity, where standardization, modularity, repeatability, reusability, and transparency are commonly expected, just as in civil engineering, where builders apply principles and techniques from all the sciences to construct reliable structures. I will discuss how such a view sets a different focus, approach, metrics, and expectations for AI research and engineering, which we have practiced in our SysML work.