Virtual Seminar: Secure Frameworks for Outsourced Neural Network Inference
Machine learning as a service (MLaaS) has emerged as a paradigm allowing clients to outsource machine learning computations to the cloud. However, MLaaS raises immediate security concerns, specifically relating to the integrity (or correctness) of computations performed by an untrusted cloud, and to the privacy of the client's data. In this talk, I discuss two frameworks built on cryptographic tools for secure deep-learning inference on an untrusted cloud: CryptoNAS (building models for private inference) and SafetyNets (addressing correctness). CryptoNAS is a novel neural architecture search method for finding and tailoring deep learning models to the needs of private inference. Existing models are not well suited for this task, as operation latencies in the private domain are inverted: non-linear layers dominate latency, while linear operations become effectively free. CryptoNAS introduces the idea of a ReLU budget as a proxy for inference latency, and can build models that maximize accuracy within a given budget. SafetyNets is a framework built on interactive proof protocols that enables an untrusted server (the cloud) to provide a client with a short mathematical proof of the correctness of the inference tasks it performs on behalf of the client.
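To illustrate the ReLU-budget idea mentioned above, here is a minimal sketch (not the authors' code; the function names and shapes are illustrative assumptions): since non-linear layers dominate private-inference latency, one can count the total number of ReLU activations a candidate architecture would execute and use that count as the latency proxy to be kept within a budget.

```python
# Illustrative sketch of a ReLU budget as a latency proxy (hypothetical
# helper names, not from CryptoNAS itself). Each entry in
# layer_output_shapes is the output shape of one ReLU layer; the number
# of ReLU activations it contributes is the product of its dimensions.

def relu_count(layer_output_shapes):
    """Total ReLU activations across all ReLU layers in a model."""
    total = 0
    for shape in layer_output_shapes:
        n = 1
        for dim in shape:
            n *= dim
        total += n
    return total

def within_budget(layer_output_shapes, budget):
    """True if the candidate architecture fits the given ReLU budget."""
    return relu_count(layer_output_shapes) <= budget

# Example: three ReLU layers with (channels, height, width) outputs.
shapes = [(64, 32, 32), (128, 16, 16), (256, 8, 8)]
print(relu_count(shapes))              # 114688 ReLU activations
print(within_budget(shapes, 120_000))  # True: fits this budget
```

A search procedure could then compare candidate architectures by accuracy while discarding any whose `relu_count` exceeds the budget.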
Access: The seminar will be delivered live from 11:00 AM to 12:00 PM Central (CDT; UTC -5) via the following link: Zoom link
A Zoom account is required to access the seminar. Zoom is accessible to UT faculty, staff, and students with support from ITS. Otherwise, you can sign up for a free personal account on the Zoom website.