Hi, I am a PhD candidate in the Ming Hsieh Department of Electrical and Computer Engineering at the University of Southern California, working under the guidance of Prof. Salman Avestimehr in the Information Theory and Machine Learning (vITAL) research lab. Over the course of my research, I have also collaborated closely with Prof. Murali Annavaram, Prof. Keith Chugg, and Prof. Ramtin Pedarsani. I have had the good fortune to gain industry experience through multiple internships: I spent Summer 2018 and Summer 2019 as a Research Intern at Intel Labs under Dr. Shilpa Talwar and Dr. Nageen Himayat, respectively, and during Summer 2021 I was an Applied Scientist Intern at Amazon Alexa AI under Dr. Clement Chung and Dr. Rahul Gupta. Before graduate school, I completed my BTech in Electrical Engineering at the Indian Institute of Technology Kanpur in 2016, where I worked under Prof. Aditya K. Jagannatham in the Multimedia Wireless Networks (MWN) Group.
As a graduate student, I have been working to holistically address real-world bottlenecks in large-scale distributed computing, including federated learning. My projects broadly fall into three paradigms, reflected in the selected works highlighted below.
Outside of research, I like hanging out with friends, watching classic Bollywood movies, and listening to Indian classical music.
PhD in Electrical and Computer Engineering, 2022 (expected)
University of Southern California
BTech in Electrical Engineering, 2016
Indian Institute of Technology Kanpur
Developed a fast and computationally efficient Byzantine-robust algorithm that leverages sequential, memory-assisted, performance-based criteria for training over a logical ring.
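To give a flavor of the mechanism, here is a minimal toy sketch, entirely my own simplification rather than the published algorithm: each node on the logical ring keeps a small memory of recently received models, scores them on its local data (the performance criterion), updates the best-scoring one, and passes the result along, so a Byzantine node's poisoned model gets filtered out downstream.

```python
import numpy as np

# Toy sketch (my own simplification, not the published algorithm):
# nodes on a logical ring sequentially refine a model; each node keeps
# a small memory of recently received models, picks the one with the
# best loss on its local data, and applies a local gradient step.
rng = np.random.default_rng(0)
d, n_nodes, mem_size, lr = 5, 6, 3, 0.05
w_true = rng.normal(size=d)
byzantine = {3}                                  # node 3 sends garbage

data = []
for _ in range(n_nodes):
    X = rng.normal(size=(30, d))
    data.append((X, X @ w_true + 0.1 * rng.normal(size=30)))

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

memories = [[np.zeros(d)] for _ in range(n_nodes)]   # per-node memory
current = np.zeros(d)
for _ in range(5):                                   # passes around the ring
    for i in range(n_nodes):
        X, y = data[i]
        memories[i] = (memories[i] + [current])[-mem_size:]
        best = min(memories[i], key=lambda m: loss(m, X, y))  # criterion
        grad = 2 * X.T @ (X @ best - y) / len(y)
        current = best - lr * grad
        if i in byzantine:
            current = 10 * rng.normal(size=d)        # poisoned model

print("distance to w_true:", np.linalg.norm(current - w_true))
```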
Developed a practical algorithm for distributed learning that is both communication-efficient and straggler-resilient.
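As an illustration only (the concrete mechanisms here, top-k gradient sparsification and m-out-of-n aggregation, are my own stand-ins, not necessarily the paper's), the sketch below shows how the two properties compose: each worker uploads a sparsified gradient, and the server aggregates whichever subset of workers responds first.

```python
import numpy as np

# Toy sketch under my own assumptions: workers send only their top-k
# gradient coordinates (communication efficiency), and the server
# aggregates the fastest m of n workers, ignoring stragglers.
rng = np.random.default_rng(1)
d, n_workers, k, m = 100, 10, 10, 7

def top_k(g, k):
    """Keep the k largest-magnitude coordinates, zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

grads = [rng.normal(size=d) for _ in range(n_workers)]
arrival = rng.permutation(n_workers)     # simulated arrival order
fastest = arrival[:m]                    # stragglers never arrive
agg = np.mean([top_k(grads[i], k) for i in fastest], axis=0)
print("nonzero coordinates in aggregate:", np.count_nonzero(agg))
```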
Proposed a novel sampling-based approach that applies per-client criteria to mitigate Byzantine clients in the general federated learning setting.
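A hedged sketch of one way such a per-client criterion can work (the sampling scheme, similarity measure, and threshold below are my own assumptions): the server computes a "guiding" gradient from a tiny sample of each client's data and accepts that client's update only if the two agree.

```python
import numpy as np

# Toy sketch (my own assumptions): the server draws a tiny sample of
# each client's data, computes a "guiding" gradient from it, and
# accepts a client's submitted update only if it agrees with that
# client's own guiding gradient -- a per-client criterion rather than
# a comparison across clients.
rng = np.random.default_rng(2)
d, n_clients = 20, 8
w = np.zeros(d)
w_true = rng.normal(size=d)

def grad(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

accepted = []
for c in range(n_clients):
    X = rng.normal(size=(50, d))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    g = grad(w, X, y)
    if c < 2:                       # two Byzantine clients poison updates
        g = -5 * g
    Xs, ys = X[:5], y[:5]           # tiny sample shared with the server
    g_guide = grad(w, Xs, ys)
    cos = g @ g_guide / (np.linalg.norm(g) * np.linalg.norm(g_guide))
    if cos > 0.0:                   # per-client acceptance criterion
        accepted.append(g)

w -= 0.1 * np.mean(accepted, axis=0)
print("accepted", len(accepted), "of", n_clients, "clients")
```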
Proposed CodedFedL, which injects structured coding redundancy into non-linear federated learning to mitigate stragglers and speed up training in heterogeneous MEC networks.
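Below is a simplified sketch in the spirit of this idea, not the paper's exact scheme: random Fourier features reduce a non-linear problem to linear regression, and each client uploads a small randomly coded copy of its features once, so the server can stand in for a straggling client with a coded surrogate gradient.

```python
import numpy as np

# Simplified sketch (my own construction, not the paper's exact scheme).
rng = np.random.default_rng(3)
n_per, d, D, n_clients, m = 60, 4, 30, 5, 15

Omega = rng.normal(size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)

def rff(X):
    """Random Fourier feature map: non-linear -> linear regression."""
    return np.sqrt(2.0 / D) * np.cos(X @ Omega + b)

w_true = rng.normal(size=D)
clients, coded = [], []
for _ in range(n_clients):
    F = rff(rng.normal(size=(n_per, d)))
    y = F @ w_true + 0.05 * rng.normal(size=n_per)
    G = rng.normal(size=(m, n_per)) / np.sqrt(m)   # E[G.T @ G] = identity
    clients.append((F, y))
    coded.append((G @ F, G @ y))                   # uploaded to server once

def gradient(w, F, y):
    return 2 * F.T @ (F @ w - y)

w, straggler = np.zeros(D), 2
for _ in range(200):
    g = sum(gradient(w, *clients[c]) for c in range(n_clients)
            if c != straggler)
    g += gradient(w, *coded[straggler])   # coded surrogate for the straggler
    w -= 0.02 * g / n_clients

print("distance to w_true:", np.linalg.norm(w - w_true))
```

The encoding matrix G is drawn so that E[GᵀG] = I, which makes the coded gradient an unbiased stand-in for the straggler's true gradient in the linear-regression update.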
Formulated the problem of decentralized training on data held at edge users, incorporating the challenges of straggling communication links and limited communication bandwidth.
Proposed and implemented a practical MapReduce-based approach for large-scale graph processing.
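A minimal single-process sketch of the MapReduce pattern on a graph (the real system would shard these phases across machines; the PageRank-style update is just an illustrative workload of my choosing):

```python
from collections import defaultdict

# Map emits rank contributions along edges, shuffle groups them by
# destination, and reduce sums them into new ranks.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
ranks = {"a": 1.0, "b": 1.0, "c": 1.0}
out_deg = defaultdict(int)
for src, _ in edges:
    out_deg[src] += 1

# Map: each edge emits a (dst, contribution) pair.
emitted = [(dst, ranks[src] / out_deg[src]) for src, dst in edges]

# Shuffle: group values by key.
groups = defaultdict(list)
for dst, contrib in emitted:
    groups[dst].append(contrib)

# Reduce: sum contributions per node (one damped PageRank step).
ranks = {v: 0.15 + 0.85 * sum(groups.get(v, [])) for v in ranks}
print(ranks)
```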
Proposed the first approach to reducing the footprint of convolutional neural networks via pre-defined sparsity.
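A toy illustration of the idea (my own minimal version): a binary mask is fixed before training and never learned, so only the surviving weights are ever stored or updated.

```python
import numpy as np

# Pre-defined sparsity, toy version: a fixed binary mask zeroes out a
# fraction of each convolutional kernel's weights before training, and
# every update respects the same mask, shrinking the model footprint.
rng = np.random.default_rng(4)
out_ch, in_ch, k = 8, 3, 3
density = 0.25                                     # keep 25% of weights

mask = rng.random((out_ch, in_ch, k, k)) < density  # fixed a priori
W = rng.normal(size=(out_ch, in_ch, k, k)) * mask

def masked_update(W, grad, lr=0.01):
    """Gradient step that preserves the pre-defined sparsity pattern."""
    return (W - lr * grad) * mask

W = masked_update(W, rng.normal(size=W.shape))
print(f"stored weights: {mask.sum()} / {mask.size} "
      f"({mask.mean():.0%} of dense)")
```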
Developed an efficient approach for load allocation in heterogeneous cloud clusters.
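A hedged sketch of the core intuition with a toy deterministic model (the actual formulation is more involved, accounting for randomness in compute times): allocating load in proportion to machine speed equalizes expected finish times, so no slow machine becomes the bottleneck.

```python
import numpy as np

# Toy model of load allocation in a heterogeneous cluster: assigning
# work proportional to speed makes all machines finish together.
speeds = np.array([4.0, 2.0, 1.0, 1.0])    # units of work per second
total_work = 800.0

load = total_work * speeds / speeds.sum()  # proportional allocation
finish = load / speeds                     # expected completion times
print("loads: ", load)
print("finish:", finish)                   # identical across machines
```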