GCASR 2018


Prasanna Balaprakash

Computer Scientist, Mathematics and Computer Science Division, Argonne National Laboratory

Title: Machine Learning in High-Performance Computing

Abstract: Over the last few years, modeling and analysis of compute-communication performance, scalability, job interference, input/output (I/O), and job scheduling have become critical in the Department of Energy's supercomputing facilities. The key challenge lies in finding new proactive and predictive methodologies that will improve the efficacy of large supercomputers. In this talk, we will present our current work on machine learning approaches for modeling the performance of compute, communication, and I/O. We will end with pressing challenges and potential avenues for future research.

Bio: Prasanna Balaprakash is a computer scientist with a joint appointment in the Mathematics and Computer Science Division and the Leadership Computing Facility at Argonne National Laboratory. His research interests span the areas of artificial intelligence, machine learning, optimization, and high-performance computing. Currently, his research focus is on the automated design and development of scalable algorithms for solving large-scale problems that arise in scientific data analysis and in automating application performance modeling and tuning. He received a Bachelor's degree in computer science engineering from Periyar University in Salem, India; a Master's degree in computer science from the Otto-von-Guericke University Magdeburg in Germany; and a Ph.D. in engineering sciences from CoDE-IRIDIA (AI Lab), Université Libre de Bruxelles, Brussels, Belgium, where he was a Marie Curie fellow and later an FNRS Aspirant. He was the chief technology officer at Mentis Sprl., a data analytics startup in Brussels, Belgium, for a year before moving to Argonne in late 2010, where he was a postdoc until late 2013.

Xin Chen

Director of Engineering, HERE Technologies

Adjunct Faculty, Northwestern University and Illinois Institute of Technology

Title: HD Live Maps for Automated Driving: An AI Approach

Abstract: HD Maps, one of the key components of automated driving and a life-saving safety feature, serve as the hub for sensing, perception, and decision-making. Making and maintaining a near-real-time HD map on a global scale is an extremely challenging task. I will present how we apply AI technologies to automate the creation of HD Live Maps using both industrial capture and crowd-sourced data collection. A Quality Index is introduced to provide automated driving customers with confidence in HD map accuracy and reliability in a dynamic world. We implement low-power and high-throughput edge perception as a reference implementation to enable crowd-sourced HD map maintenance. Finally, I will share best practices for democratizing AI in our engineering organization and transitioning research into production in the creation of a half million kilometers of HD maps in 2017.

Bio: Dr. Xin Chen is a Director of Engineering in the Highly Automated Driving organization at HERE Technologies, where his team is completing pioneering work to automate next-generation map creation using computer vision and machine learning technologies. He holds over 50 U.S. patents in LIDAR and image analysis for mapping, and he has served on an NSF (National Science Foundation) panel to evaluate and award funding to multi-million-dollar projects advancing research in these areas. Xin received the 2010 and 2011 IMPACT awards recognizing "employees making outstanding contributions", an award recognizing "Significant Intellectual Property Contributors" for 2011-2012, the 2013 and 2014 company-wide Hack Week top awards, and the 2015 Berkeley Office Hackathon top award. He has numerous publications in CVPR and CVIU. Xin is an adjunct professor and Ph.D. advisor at Northwestern University and the Illinois Institute of Technology, teaching "Geospatial Vision and Visualization" and "Biometrics" courses. Xin obtained his Ph.D. in Computer Science and Engineering from the University of Notre Dame.

Christos Dimoulas

Assistant Professor of Electrical Engineering and Computer Science, Northwestern University

Title: Whip: Higher-Order Contracts for Modern Services

Abstract: Modern service-oriented applications forgo semantically rich protocols and middleware when composing services. Instead, they embrace the loosely-coupled development and deployment of services that communicate via simple network protocols. Even though these applications do expose interfaces that are higher-order in spirit, the simplicity of the network protocols forces them to rely on brittle low-level encodings. To bridge the apparent semantic gap, programmers introduce ad-hoc and error-prone defensive code. Inspired by Design by Contract, we choose a different route to bridge this gap. We introduce Whip, a contract system for modern services. Whip (i) provides programmers with a higher-order contract language tailored to the needs of modern services; and (ii) monitors services at run time to detect services that do not live up to their advertised interfaces. Contract monitoring is local to a service. Services are treated as black boxes, allowing heterogeneous implementation languages without modification to services’ code. Thus, Whip does not disturb the loosely coupled nature of modern services.
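The design-by-contract idea behind Whip can be illustrated with a toy monitor. The sketch below is not Whip, whose contract language is higher-order and service-oriented; `contract` and `isqrt_service` are invented names, and the monitor simply wraps a black-box function so that violations of its advertised interface surface at run time:

```python
# Design by Contract in miniature (illustrative only; Whip's contract
# language for services is far richer): wrap a "service" so violations of
# its advertised interface are detected at run time, while the service
# itself stays a black box.

def contract(pre, post):
    def wrap(service):
        def monitored(x):
            assert pre(x), f"caller broke the contract: {x!r}"
            result = service(x)
            assert post(x, result), f"service broke the contract: {result!r}"
            return result
        return monitored
    return wrap

@contract(pre=lambda x: isinstance(x, int) and x >= 0,
          post=lambda x, r: r * r <= x < (r + 1) * (r + 1))
def isqrt_service(x):          # black box we only trust via its contract
    return round(x ** 0.5)     # buggy: rounds up past the integer floor

print(isqrt_service(16))       # 4: contract holds
```

Calling `isqrt_service(8)` trips the postcondition (round(8**0.5) is 3, but 3*3 > 8), blaming the service rather than the caller — the same run-time blame assignment that contract monitoring gives services.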

Bio: Christos Dimoulas is an Assistant Professor of Electrical Engineering and Computer Science at Northwestern University. He is primarily interested in the semantics and design of programming languages with an eye towards software engineering and security. He holds a Ph.D. from Northeastern University and an undergraduate degree from the National Technical University of Athens.

Birhanu Eshete

Postdoctoral Researcher, University of Illinois at Chicago

Title: Learning from Offline Infection Episodes to Detect Malware On-the-Wire

Abstract: Postmortem analysis of successful malware infection traces provides invaluable insights to reason about proactive defense. A typical web-borne malware infection episode is often characterized by distinctive dynamics governed by conversations between a victim host and multiple shady remote hosts. This talk highlights a data-driven and learning-based approach to leverage conversational aspects of web-borne malware infections as a basis to build on-the-wire detection.

Bio: Birhanu Eshete is a Postdoctoral Researcher in the Computer Science department at UIC. His research interests include systems security, cybercrime analysis, big-data security analytics, and adversarial machine learning.

Goetz Graefe

Researcher, Google


Title: Optimistic and Pessimistic Concurrency Control

Abstract: Pessimistic concurrency control means locking any data item before reading or writing it, possibly waiting to acquire an item already locked by another transaction. Optimistic concurrency control means end-of-transaction validation, with no restriction during a transaction's read or work phase. But how different are these approaches really?
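A minimal sketch of the contrast, in Python and illustrative only — a real database tracks read/write sets per transaction rather than a single version counter:

```python
import threading

# Toy contrast of the two schemes: the same increment of a shared item.
lock = threading.Lock()
store = {"x": 0}

# Pessimistic: acquire the lock before reading or writing the item;
# a conflicting transaction simply waits.
def pessimistic_increment():
    with lock:
        store["x"] = store["x"] + 1

# Optimistic: read and compute with no locks held, then validate at
# commit time that no other transaction changed the item underneath.
versions = {"x": 0}                  # per-item version counter

def optimistic_increment():
    while True:
        v = versions["x"]            # remember the version at read time
        new = store["x"] + 1         # work phase: entirely lock-free
        with lock:                   # short critical section: validate + write
            if versions["x"] == v:   # validation succeeded: commit
                store["x"] = new
                versions["x"] += 1
                return
        # validation failed: another writer committed first, so retry

pessimistic_increment()
optimistic_increment()
print(store["x"])                    # 2
```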

Bio: Goetz Graefe has been a professor, consultant, product architect, development manager, release manager, and industrial researcher. Formerly an HP Fellow, he is now a researcher at Google in Madison, Wis. His work on database query optimization is used in many products, e.g., Microsoft SQL Server. The same is true about his work on parallel query processing. More recently, he has researched indexing, transactions, logging and recovery, and concurrency control. He is the 2017 recipient of the ACM SIGMOD Edgar F. Codd Innovations award.

Josiah Hester

Assistant Professor of Computer Engineering, Northwestern University

Title: Computation and Sensing without Reliable Power

Abstract: For decades, wireless sensing and computing systems have relied primarily on battery power. However, batteries are not a viable energy storage solution for the tiny devices at the edge of a sustainable Internet of Things. Batteries are expensive, bulky, hazardous, and wear out after a few years (even rechargeables). Replacing and disposing of billions or trillions of dead batteries per year would be expensive and irresponsible. By leaving the batteries behind and surviving off energy harvested from the environment, tiny intermittently powered sensors can monitor objects in hard-to-reach places, maintenance-free, for decades. Batteryless sensing will revolutionize computing and open up new application domains from infrastructure monitoring and wildlife tracking to wearables, healthcare, and space exploration. However, these devices' intermittent power supply makes power failures the common case, requiring a rethinking of hardware and software design, tool creation, and evaluation techniques. In this talk, I will introduce the challenges of batteryless sensing, then present my recent work on tools, hardware platforms, and runtime and language techniques that streamline the creation, testing, and deployment of efficient, sophisticated applications on tiny, energy-harvesting, batteryless devices.
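The "power failures as the common case" problem can be made concrete with a small simulation. Everything below is invented for illustration and is not one of the speaker's systems: a task checkpoints its state to nonvolatile storage so each burst of harvested energy resumes where the last one died:

```python
import json, os, tempfile

# Illustrative only (names and numbers invented): batteryless devices
# checkpoint progress to nonvolatile memory, so a power failure restarts
# the task where it stopped instead of from scratch.
CKPT = os.path.join(tempfile.gettempdir(), "intermittent_ckpt.json")
if os.path.exists(CKPT):
    os.remove(CKPT)                                # clean slate for the demo

def run_with_power_budget(n, steps_before_failure):
    """Sum 1..n, 'losing power' after a fixed number of steps."""
    state = {"i": 1, "acc": 0}
    if os.path.exists(CKPT):                       # resume from the checkpoint
        with open(CKPT) as f:
            state = json.load(f)
    steps = 0
    while state["i"] <= n:
        state["acc"] += state["i"]
        state["i"] += 1
        with open(CKPT, "w") as f:                 # commit progress before dying
            json.dump(state, f)
        steps += 1
        if steps == steps_before_failure:
            return None                            # power failure: RAM is gone
    os.remove(CKPT)
    return state["acc"]

# Three short 'charges' complete the ten-step job: 1 + 2 + ... + 10 = 55.
result = None
while result is None:
    result = run_with_power_budget(10, 4)
print(result)                                      # 55
```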

Bio: Josiah Hester is an Assistant Professor in the Department of Electrical Engineering and Computer Science at Northwestern University. Josiah joined Northwestern in Fall 2017 after completing his Ph.D. in Computer Science at Clemson University. His research enables sophisticated, sustainable sensing on the tiny devices at the edge of the Internet of Things. These devices enable new application domains from infrastructure monitoring and wildlife tracking to wearables, healthcare, and space exploration. He explores and develops new hardware designs, software techniques, tools, and programming abstractions so that developers can easily design, debug, and deploy intricate batteryless sensor network applications that work in spite of frequent power failures. His work has received a Best Paper Award and Best Paper Nomination from ACM SenSys, and two Best Poster Awards. He was also named the Outstanding Ph.D. Student in Computer Science for 2016 by the School of Computing at Clemson University.

Junchen Jiang

Assistant Professor of Computer Science, University of Chicago

Title: Video Analytics at Scale

Abstract: For years, most groundbreaking advancements in deep learning have found their applications in computer vision, and soon they will be applied to all kinds of multimedia data, the most challenging of which is real-time video analytics. Deep learning techniques are notoriously expensive in terms of computational resources, and video data already dominate Internet traffic and cloud storage. Their marriage poses an unprecedented challenge in every aspect of systems research, from the edge and the network to the computing stack in the cloud. In this talk, I will approach this challenge from a systems perspective and present some of my recent work that leads to potentially drastic savings in networking and computing resources while maintaining high inference accuracy.

Bio: Junchen Jiang is an Assistant Professor of Computer Science at the University of Chicago (starting in July 2018). He is currently visiting Microsoft Research. His research interests are in networking and big data systems. He received his Ph.D. from the Computer Science Department at Carnegie Mellon University, and his Bachelor's degree from Tsinghua University (Yao's Class) in 2011. He received a Juniper Networks Fellowship and won a paper award at ACM CoNEXT 2012. He is a winner of the CMU School of Computer Science Dissertation Award.

Baris Kasikci

Assistant Professor of Electrical Engineering and Computer Science, University of Michigan 

Title: Towards Continuous In-Production Failure Diagnosis

Abstract: Diagnosing bugs—the process of understanding the root causes of failures—is hard. Developers depend on reproducing bugs to diagnose them. Traditionally, systems that attempt to reproduce bugs record fine-grained events that lead to failures. Alas, such recording incurs high runtime overhead, making existing techniques unsuitable in production. In this talk, I will show that fine-grained and expensive recording is unnecessary in most cases. I will then introduce Lazy Diagnosis, a hybrid dynamic-static interprocedural pointer and type analysis for diagnosing the root causes of bugs. Our Lazy Diagnosis prototype, Snorlax, relies on commodity hardware, requires no source code changes, and can diagnose complex bugs in real, large-scale systems with full accuracy and an average runtime performance overhead below 1%. Broadly, I will discuss how our findings can be used to build more efficient in-production bug detection and record/replay techniques.

Bio: Baris Kasikci is an Assistant Professor of Computer Science and Engineering at the University of Michigan. His research is centered around developing techniques, tools, and environments that help developers build more efficient, reliable, and secure software. He is interested in finding solutions that allow programmers to better reason about their code, and that efficiently detect performance issues and correctness bugs, classify them, and diagnose their root cause. He is also interested in system support for emerging hardware platforms, efficient runtime instrumentation, hardware and runtime support for enhancing system security, and program analysis. Baris completed his PhD in computer science at EPFL. He is the recipient of the 2016 Roger Needham PhD Award for the best PhD thesis in computer systems in Europe and the 2016 Patrick Denantes Memorial Prize for the best PhD thesis in the Department of Information and Communication Sciences at EPFL. He is also one of the recipients of the VMware 2014-2015 Graduate Fellowship. He previously held roles at Microsoft Research, VMware, Intel and Siemens.

Aniket Kate

Assistant Professor of Computer Science, Purdue University

Title: Privacy Challenges with Blockchains and Layer-2 Protocols

Abstract: The hope that cryptography and decentralization together might ensure robust user privacy was among the strongest drivers of the early success of Bitcoin's blockchain technology. A desire for privacy still permeates the growing blockchain user and application base today. Nevertheless, thanks to the inherent public nature of most blockchain ledgers, users' privacy is severely restricted, and a few deanonymization attacks have been reported so far. Several privacy-enhancing technologies have been proposed to solve these issues, and a few have also been implemented; however, some important challenges still remain to be resolved. In this talk, we discuss privacy challenges, promising solutions, and unresolved privacy issues with blockchain technology. We also study prominent privacy attacks on layer-2 protocols, analyze existing privacy solutions, and describe important unresolved challenges.

Bio: Dr. Aniket Kate is an Assistant Professor in the computer science department at Purdue University. His research integrates applied cryptography, distributed systems and data-driven analysis towards designing, implementing and analyzing privacy and transparency enhancing technologies. Before joining Purdue in 2015, he was a faculty member and an independent research group leader at Saarland University in Germany, where he was heading the Cryptographic Systems Research Group. He completed his postdoctoral fellowship at Max Planck Institute for Software Systems, Germany in 2012, and received his PhD from the University of Waterloo, Canada in 2010.

Kate Keahey

Senior Fellow, Computation Institute, University of Chicago
Computer Scientist, Mathematics and Computer Science Division, Argonne National Laboratory

Title: Chameleon: A Testbed for Computer Science Systems Research

Abstract: Computer Science experimental testbeds allow investigators to explore a broad range of state-of-the-art hardware options, assess the scalability of their systems via large-scale experiments, and support bare-metal reconfiguration and isolation so that one user does not impact the experiments of another. Chameleon is a large-scale, deeply reconfigurable testbed built specifically to support Computer Science systems experiments. It currently consists of ~600 nodes (~15,000 cores) and a total of 5 PB of disk space hosted at the University of Chicago and TACC, and leverages a 100 Gbps connection between the sites. The hardware includes a large-scale homogeneous partition to support large-scale experiments, as well as a diversity of configurations and architectures including InfiniBand, GPUs, FPGAs, storage hierarchies with a mix of HDDs, SSDs, NVRAM, and high memory, as well as non-x86 architectures such as ARMs and Atoms. To support systems experiments, Chameleon provides a configuration system giving users full control of the software stack, including root privileges, kernel customization, and console access. To date, Chameleon has supported 2,000+ users working on 300+ research and education projects. This talk will describe the current system as well as the extensions projected for the near future that will allow us to broaden the range of supported experiments. This will be accomplished by deploying new hardware with state-of-the-art architectures and new networking capabilities allowing experimenters to deploy their own switch controllers and experiment with Software Defined Networking (SDN). I will also describe new capabilities targeted at improving experiment monitoring and analysis, as well as tying together testbed features to improve experiment repeatability. Finally, I will outline our plans for packaging the Chameleon infrastructure to allow others to reproduce our configuration.

Bio: Kate Keahey is one of the pioneers of infrastructure cloud computing. She created the Nimbus project, recognized as the first open source Infrastructure-as-a-Service implementation, and continues to work on research aligning cloud computing concepts with the needs of scientific datacenters and applications. To facilitate such research for the community at large, Kate leads the Chameleon project, providing a deeply reconfigurable, large-scale, and open experimental platform for Computer Science research. To foster the recognition of contributions to science made by software projects, Kate co-founded and serves as co-Editor-in-Chief of the SoftwareX journal, a new format designed to publish software contributions. Kate is a Scientist at Argonne National Laboratory and a Senior Fellow at the Computation Institute at the University of Chicago.

Chunyi Peng

Assistant Professor of Computer Science, Purdue University

Title: Amplifying Intelligence in Mobile Networks

Abstract: Mobile networking and artificial intelligence are two disruptive technologies that have already reshaped, and will continue to reshape, our daily life. In the past decade, the mobile Internet has become the norm, with the popularity of smartphones and the upgrade of mobile network infrastructure (from 3G to 4G). In the foreseeable future, the mobile network will continue to serve as the most critical network infrastructure for mobile services and the massive Internet of Things (IoT), especially with the upcoming 5G evolution. On the other hand, AI is profoundly renovating every field using big data and machine learning. In this talk, I will present the opportunities exposed when mobile networks meet AI. In particular, we envision that the mobile network (5G) is moving to a higher level of network intelligence. I will present our early efforts towards amplifying mobile network intelligence. We harness device-side intelligence through data-driven network analytics, which not only helps the device learn what happens but also sheds light on why and how in the underlying cellular network operations, which previously remained a closed box. We also incorporate formal methods to augment domain-specific intelligence by providing verifiable properties. I will showcase open opportunities when we apply this new approach to conduct mobile network research. Finally, I will present open questions on the network side and some ongoing efforts.

Bio: Chunyi Peng is an Assistant Professor in the Department of Computer Science at Purdue University. She was an Assistant Professor in the Department of Computer Science and Engineering at the Ohio State University between 2013 and 2017, after receiving her Ph.D. in Computer Science from the University of California, Los Angeles in 2013. She is a recipient of the NSF CAREER award, the MobiCom’17 and MobiCom’16 Best Community Paper awards, and several best demo awards. Her research interests are in the broad areas of mobile networking, systems, and security, with a focus on 5G/4G/3G cellular networks, mobile sensing, and network security.

Feng Qian

Assistant Professor of Computer Science, Indiana University Bloomington

Title: MP-DASH: Adaptive Video Streaming Over Preference-Aware Multipath

Abstract: More and more users watch videos on their mobile devices. In Q4 2016, mobile video surpassed desktop video in terms of online viewing time. In this talk, I describe our recent work on improving the performance and reducing the network resource usage of mobile video streaming. Specifically, we develop MP-DASH, a system that strategically leverages multiple network interfaces, such as WiFi and LTE, on mobile devices to stream videos. Compared to off-the-shelf multipath solutions, MP-DASH reduces cellular data usage by up to 99% and radio energy consumption by up to 85% with negligible degradation of QoE.
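The preference-aware idea can be sketched in a few lines. This is a simplification inferred from the abstract, not the actual MP-DASH algorithm; the function name, deadline model, and numbers are all illustrative:

```python
# Hypothetical sketch of the core idea: prefer the cheap interface (WiFi)
# and open the costly one (LTE) only when the chunk deadline is otherwise
# at risk. Names and numbers are invented, not from the MP-DASH paper.

def schedule_chunk(chunk_bytes, deadline_s, wifi_bps, lte_bps):
    """Return the set of interfaces to use for this video chunk."""
    # Can WiFi alone finish before the deadline?
    if wifi_bps > 0 and chunk_bytes * 8 / wifi_bps <= deadline_s:
        return {"wifi"}
    # Otherwise try both paths together.
    if (wifi_bps + lte_bps) > 0 and chunk_bytes * 8 / (wifi_bps + lte_bps) <= deadline_s:
        return {"wifi", "lte"}
    # Even both paths cannot make it: stream on both anyway and let the
    # rate-adaptation layer pick a lower bitrate next time.
    return {"wifi", "lte"}

# 4 MB chunk, 4 s deadline: 10 Mbps WiFi alone needs 3.2 s.
print(schedule_chunk(4_000_000, 4.0, 10_000_000, 20_000_000))  # WiFi suffices
```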

Bio: Feng Qian is an assistant professor in the Computer Science Department at Indiana University Bloomington. His research interests cover the broad areas of mobile systems, VR/AR, computer networking, and system security. He obtained his Ph.D. at the University of Michigan. He is a recipient of several awards including a Key Contributor Award at AT&T Shannon Labs (2014), an NSF CRII Award (2016), a Google Faculty Award (2016), an AT&T VURI Award (2017), an NSF CAREER Award (2018), the best paper award at ACM CoNEXT 2016, and several best paper nominations. He has published 18 top-tier conference papers according to csrankings.org. The ARO (mobile Application Resource Optimizer) system, his Ph.D. thesis, has been productized by AT&T and is now widely used in industry.

Xiaokang Qiu

Assistant Professor of Electrical and Computer Engineering, Purdue University

Title: Reconciling Enumerative and Symbolic Search in Syntax-Guided Synthesis

Abstract: Syntax-guided synthesis aims to find a program satisfying a semantic specification as well as user-provided structural hypotheses. For syntax-guided synthesis there are two main search strategies: concrete search, which systematically or stochastically enumerates all possible solutions, and symbolic search, which leverages a symbolic procedure to solve the synthesis problem. In this talk, we propose a novel combination of the two strategies that captures the best of both worlds. Based on a decision tree representation, our synthesis framework works by enumerating tree heights from the smallest possible one to larger ones. For each fixed height, the framework symbolically searches for a solution through the counterexample-guided inductive synthesis approach. We also complement the concolic synthesis framework with two purely symbolic and decidable synthesis procedures for two fragments of problems, namely Strong Single Invocation and Acyclic Translational invariant synthesis. The two fragments are decidable as their procedures are terminating and complete. We implemented our synthesis procedures and compared them with state-of-the-art synthesizers on a range of benchmarks. Experiments show that our algorithms are promising.
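To make the two search strategies concrete, here is a toy counterexample-guided inductive synthesis (CEGIS) loop over a deliberately tiny grammar (candidate programs of the form x + c). It is a sketch of the general technique, not the framework from the talk, which enumerates decision-tree heights rather than constants:

```python
# Toy CEGIS loop: alternate between finding a candidate consistent with the
# examples seen so far (inductive step, here by enumerative search) and
# checking it against the full input set (verification step). All names are
# illustrative.

def synthesize(spec, inputs, max_const=10):
    examples = [inputs[0]]                       # start with one example
    while True:
        # Inductive step: enumerate constants for a consistent candidate.
        candidate = None
        for c in range(-max_const, max_const + 1):
            if all(x + c == spec(x) for x in examples):
                candidate = c
                break
        if candidate is None:
            return None                          # grammar cannot express spec
        # Verification step: look for a counterexample on the full domain.
        cex = next((x for x in inputs if x + candidate != spec(x)), None)
        if cex is None:
            return candidate                     # verified on all inputs
        examples.append(cex)                     # refine the examples, retry

print(synthesize(lambda x: x + 3, list(range(-20, 20))))   # finds 3
```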

Bio: Xiaokang Qiu is an Assistant Professor of Electrical and Computer Engineering at Purdue University. He finished his Ph.D. in Computer Science at the University of Illinois at Urbana-Champaign in 2013. Before starting at Purdue in 2016, he was a postdoctoral associate at the Massachusetts Institute of Technology. He is interested in software verification and synthesis, with a particular emphasis on heap-manipulating programs. His research focuses on program logics, decision procedures, automated verification, and syntax-guided synthesis. He is a member of the Purdue Programming Languages (PurPL) research group and leads the Computer-Aided Programming (CAP) group at Purdue.

Theo Rekatsinas

Assistant Professor of Computer Science, University of Wisconsin-Madison

Title: From Dirty Data to Structured Prediction

Abstract: The advent of data-hungry applications has enabled computers to interpret what they see, communicate in natural language, and answer complex questions. There is a hidden catch, however: the reliance of all these state-of-the-art systems on high-effort tasks like data preparation and data cleaning. It is estimated that 70% to 80% of the time devoted to analytics projects is spent on checking and organizing data. The challenge is that data collection often introduces dirty data, i.e., incomplete, erroneous, replicated, or conflicting data records. In this talk, I discuss how to reason about dirty data and demonstrate how statistical learning is the key to managing large volumes of heterogeneous, noisy data sources effectively. I will present HoloClean, our new system that relies on statistical learning and inference to repair identified data errors and anomalies. Finally, I will conclude by drawing connections between data cleaning and structured prediction and how these connections lead to new insights and solutions to classical database problems such as data repairs and consistent query answering.
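A toy flavor of statistical repair — far simpler than HoloClean, which performs probabilistic inference over integrity constraints and external data — is to repair a corrupted attribute by voting over values observed in the same context. All records and names below are invented:

```python
# Illustrative only: fix a corrupted zip code by voting over the values
# observed for the same city across records.
from collections import Counter

records = [
    {"city": "Chicago", "zip": "60601"},
    {"city": "Chicago", "zip": "60601"},
    {"city": "Chicago", "zip": "99999"},   # dirty record
    {"city": "Madison", "zip": "53703"},
]

def repair(records, key="city", attr="zip"):
    votes = {}
    for r in records:
        votes.setdefault(r[key], Counter())[r[attr]] += 1
    # Replace each value with the most likely one given its context attribute.
    return [dict(r, **{attr: votes[r[key]].most_common(1)[0][0]}) for r in records]

print(repair(records)[2]["zip"])   # the dirty 99999 becomes 60601
```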

Bio: Theodoros (Theo) Rekatsinas is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison. He is a member of the Database Group. He earned his Ph.D. in Computer Science from the University of Maryland and was a Moore Data Postdoctoral Fellow at Stanford University. His research interests are in data management, with a focus on data integration, data cleaning, and uncertain data. Theo's work has been recognized with an Amazon Research Award in 2018, a Best Paper Award at SDM 2015, and the Larry S. Davis Doctoral Dissertation award in 2015.

Tim Rogers

Assistant Professor of Electrical and Computer Engineering, Purdue University

Title: Achieving Utilization and Utility by Virtualizing GPU Resources

Abstract: GPUs are at the epicenter of two competing forces in the computing world: the need to create energy-efficient hardware and the need to program this efficient hardware simply. Reconciling the software industry's desire for simple code with the hardware industry's push for energy efficiency presents a number of challenges. This talk will focus on a newly developed software solution to one of these challenges: efficiently executing many fine-grained tasks on massively parallel GPUs. I will introduce Pagoda, a runtime system that virtualizes GPU resources using an OS-like daemon kernel called MasterKernel. Tasks are spawned from the CPU onto Pagoda as they become available and are scheduled by the MasterKernel at the warp granularity. Experimental results show a 1.51x improvement over CUDA HyperQ and a 5.70x improvement over a 20-core CPU. The talk will conclude with a brief introduction to future directions being explored at the Purdue accelerator research lab.

Bio: Tim Rogers is an Assistant Professor at Purdue University, where his research focuses on massively multithreaded processor design. He is interested in exploring computer systems and architectures that improve both programmer productivity and energy efficiency. Tim is a winner of the NVIDIA Graduate Fellowship and the Alexander Graham Bell Canada Graduate Scholarship. His work has been selected as a “Top Pick” from computer architecture by IEEE Micro Magazine and as a “Research Highlight” in Communications of the ACM magazine. During his PhD, Tim interned at NVIDIA Research and AMD Research. Prior to obtaining his PhD from the University of British Columbia, Tim worked as a software engineer at Electronic Arts and received his BEng in Electrical Engineering from McGill University.

Brent Stephens

Postdoctoral Researcher, University of Wisconsin-Madison

Title: Towards Problem-Free Lossless Data Center Networks 

Abstract: Packet losses significantly increase tail response latency in distributed data center applications, and important emerging technologies for remotely accessing memory (RDMA) and storage area networking require a lossless network substrate. Unfortunately, many problems can arise in lossless networks, e.g., large buffering delays, unfairness, head-of-line blocking, and deadlock, and these problems exist in both physical and virtual networks. These problems are preventing the widespread adoption of lossless networking and these new technologies. This talk will discuss my work on improving lossless networking so that it is possible to avoid congestion-related packet losses without suffering from these many pitfalls. First, I will briefly discuss the need for congestion control in lossless networks. Second, I will discuss SquareDance, a new lossless virtual switch that uses input queuing to avoid all head-of-line blocking.

Bio: Brent will be an assistant professor at UIC beginning Fall 2018. Before that, he was a postdoc at the University of Wisconsin-Madison working with Professors Aditya Akella and Michael Swift. He completed his Ph.D. at Rice University in 2015, where he worked with Professors Alan L. Cox and Scott Rixner. Throughout graduate school, Brent gained real-world experience during internships at IBM Research. In addition to his current focus on server-side networking and network offloads, he is also interested in designing data center networks that are performant, scalable, fault-tolerant, and consistent.

Xian-He Sun

Distinguished Professor of Computer Science, Illinois Institute of Technology

Title: Pace-Matching Data Access: A System Approach for the Memory-Wall Problem

Abstract: Computing has changed from compute-centric to data-centric. Data movement has become the killer factor of performance. In this talk, based on a series of fundamental results, we introduce a new way of thinking about memory system design. We first present the Concurrent-AMAT (C-AMAT) data access model and the LPM optimization method to quantify and optimize the unified impact of data locality, concurrency, and overlapping. Then, we introduce the pace-matching data-transfer design methodology to optimize memory system performance. Based on the pace-matching design, a memory-computing hierarchy is built to generate and transfer the final results. Its optimization is very different from conventional locality-based system optimization and can reduce memory-wall effects to a minimum. Experimental testing confirms the theoretical findings with a 150x reduction in memory stall time. We will present the concept of the pace-matching data-transfer design, the design of C-AMAT and LPM, and experimental case studies on Argonne HPC machines and on NASA climate applications. We will also discuss optimization and research issues related to pace-matching data access and to memory systems in general.
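As a rough illustration of why concurrency matters in the model: classic AMAT is hit time plus miss rate times miss penalty, and C-AMAT, as I understand the published form, divides the hit and miss terms by their respective concurrencies, so overlapped accesses cost less. The numbers below are illustrative only:

```python
# Back-of-the-envelope sketch (illustrative, hedged): classic average memory
# access time, and a concurrency-adjusted variant in the spirit of C-AMAT.

def amat(hit_cycles, miss_rate, miss_penalty):
    return hit_cycles + miss_rate * miss_penalty

def c_amat(hit_cycles, pure_miss_rate, pure_miss_penalty, c_hit, c_miss):
    # With concurrency C_H on hits and C_M on misses, each term shrinks.
    return hit_cycles / c_hit + pure_miss_rate * pure_miss_penalty / c_miss

print(amat(1, 0.05, 100))            # 6.0 cycles with no overlap
print(c_amat(1, 0.05, 100, 2, 4))    # 1.75 cycles once accesses overlap
```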

Bio: Dr. Xian-He Sun is a University Distinguished Professor of Computer Science in the Department of Computer Science at the Illinois Institute of Technology (IIT). He is the director of the Scalable Computing Software laboratory at IIT. Before joining IIT, he worked at the DoE Ames National Laboratory, at ICASE, NASA Langley Research Center, and at Louisiana State University, Baton Rouge, and was an ASEE fellow at the Navy Research Laboratories. Dr. Sun is an IEEE fellow and is known for his memory-bounded speedup model, also called Sun-Ni’s Law, for scalable computing. His research interests include data-intensive high-performance computing, memory and I/O systems, software systems for big data applications, and performance evaluation and optimization. He has over 250 publications and 6 patents in these areas. He is the Associate Editor-in-Chief of the IEEE Transactions on Parallel and Distributed Systems, a Golden Core member of the IEEE CS society, a former vice chair of the IEEE Technical Committee on Scalable Computing, the past chair of the Computer Science Department at IIT, and serves and has served on the editorial boards of leading professional journals in the field of parallel processing. More information about Dr. Sun can be found at his website www.cs.iit.edu/~sun/.

Ben Y. Zhao

Neubauer Professor of Computer Science, University of Chicago

Title: Security Challenges in Large-scale ML Adoption

Abstract: There is an unsustainable level of excitement over recent results in solving systems problems using machine learning. This has led to a rush to deploy ML-based systems in countries around the world, across a range of critical applications. In this talk, I will talk about some of the key security challenges that must be addressed before ML-based systems can be deployed widely, and some of our recent work that targets these problems.

Bio: Ben Zhao is the Neubauer Professor of Computer Science at the University of Chicago. He received his PhD from Berkeley (2004) and his BS from Yale (1997). He is an ACM distinguished scientist, and recipient of the NSF CAREER award, MIT Technology Review's TR-35 Award (Young Innovators Under 35), ComputerWorld Magazine's Top 40 Tech Innovators award, a Google Faculty award, and the IEEE ITC Early Career Award. His work has been covered by media outlets such as Scientific American, the New York Times, Boston Globe, LA Times, MIT Tech Review, and Slashdot. He has published more than 150 publications in the areas of security and privacy, networked systems, wireless networks, data mining, and HCI (H-index 60). He recently served as TPC co-chair for the World Wide Web Conference (WWW 2016) and the upcoming ACM Internet Measurement Conference (IMC 2018).

Heather Zheng

Neubauer Professor of Computer Science, University of Chicago

Title: Adversarial Mobile Localization 

Abstract: As IoT devices gain in popularity, any security vulnerability in their design will have a heavy impact across a large user population. In this talk, I will discuss adversarial localization attacks where attackers can accurately pinpoint the location of WiFi security cameras in homes and offices, using a small amount of stealthy, passive, exterior measurements coupled with unsupervised learning techniques. Such attacks can be extended to target many types of wireless IoT devices. We also show that current defenses have minimal impact against these attacks and are easily circumvented via countermeasures. Thus, significant work is needed to develop robust defenses against these attacks.

Bio: Heather Zheng is the Neubauer Professor of Computer Science at the University of Chicago. She received her PhD from the University of Maryland, College Park in 1999. After spending six years as a researcher in industry labs (Bell Labs, USA, and Microsoft Research Asia), she joined the UC Santa Barbara faculty in Fall 2005 and moved to the University of Chicago in Summer 2017. At the University of Chicago, she co-directs the SANDLab (http://sandlab.cs.uchicago.edu/) with broad research coverage of wireless networking and systems, mobile computing, security, and data mining and modeling. Her research has been featured by a number of media outlets, such as the New York Times, Boston Globe, LA Times, MIT Technology Review, and Computer World. She is an IEEE Fellow and has received a number of awards, including MIT Technology Review’s TR-35 Award (Young Innovators Under 35) and the World Technology Network Fellow Award. She served as the TPC co-chair of MobiCom’15 and DySPAN’1 and is currently serving on the steering committees of MobiCom