Secure Large-Scale Serverless Training at the Edge
Developed a fast, computationally efficient Byzantine-robust algorithm that leverages sequential, memory-assisted performance criteria for training over a logical ring.
Fast and Robust Large-Scale Distributed Gradient Descent
Developed a practical algorithm for distributed learning that is both communication-efficient and straggler-resilient.
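One standard way to get straggler resilience in distributed gradient descent is redundant data assignment: each partition is replicated across workers so the master can recover the exact full gradient without waiting for the slowest machine. The sketch below is a minimal illustration of that idea (fractional-repetition style), not the specific algorithm developed in this project; the replication pattern and least-squares setting are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_parts, dim = 4, 4, 3
X = rng.normal(size=(40, dim))
y = rng.normal(size=40)
w = np.zeros(dim)

# Split the data into partitions; worker i redundantly holds
# partitions i and (i+1) mod n, so any 3 of the 4 workers cover all data.
parts = list(zip(np.array_split(X, n_parts), np.array_split(y, n_parts)))
holdings = {i: [i, (i + 1) % n_parts] for i in range(n_workers)}

def worker_gradients(i):
    """Per-partition least-squares gradients computed locally by worker i."""
    return {p: parts[p][0].T @ (parts[p][0] @ w - parts[p][1])
            for p in holdings[i]}

# Worker 3 straggles: the master hears only from workers 0-2, which still
# jointly cover partitions {0, 1, 2, 3}, so the exact gradient is recovered.
seen, total = set(), np.zeros(dim)
for i in [0, 1, 2]:
    for p, g in worker_gradients(i).items():
        if p not in seen:
            seen.add(p)
            total += g

exact = X.T @ (X @ w - y)  # the full-batch gradient, for comparison
```

Because the least-squares gradient decomposes as a sum over partitions, summing one copy of each partition's gradient reproduces the full gradient exactly.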
Mitigating Byzantine Attacks in Federated Learning
Proposed a novel sampling-based approach that applies per-client criteria to mitigate Byzantine clients in the general federated learning setting.
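A simple per-client criterion for Byzantine mitigation is to score each client's update by its distance to a robust center (here, the coordinate-wise median) and aggregate only the closest updates. This is an illustrative sketch of the general idea, not the paper's exact sampling rule; the threshold `n_keep` and the synthetic data are assumptions.

```python
import numpy as np

def robust_aggregate(updates, n_keep):
    """Score each client's update by its distance to the coordinate-wise
    median, keep the n_keep closest, and average them (illustrative)."""
    U = np.asarray(updates)
    med = np.median(U, axis=0)
    scores = np.linalg.norm(U - med, axis=1)
    keep = np.argsort(scores)[:n_keep]
    return U[keep].mean(axis=0)

# 8 honest clients near the true gradient [1, 1], plus 2 Byzantine outliers.
rng = np.random.default_rng(1)
honest = rng.normal([1.0, 1.0], 0.1, size=(8, 2))
byzantine = np.array([[50.0, -50.0], [-40.0, 60.0]])
agg = robust_aggregate(np.vstack([honest, byzantine]), n_keep=8)
```

A plain average of all ten updates would be dragged far from [1, 1] by the two outliers; the filtered average is not.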
Low-Latency Federated Learning in Wireless Edge Networks
Proposed CodedFedL, which injects structured coding redundancy into non-linear federated learning to mitigate stragglers and speed up the training procedure in heterogeneous MEC networks.
Hierarchical Decentralized Training at the Edge
Formulated a problem for decentralized training over data held by edge users, incorporating the challenges of straggling links and limited communication bandwidth.
Communication-Efficient Large-Scale Graph Processing
Proposed and implemented a practical MapReduce-based approach for large-scale graph processing.
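A canonical instance of MapReduce-style graph processing is one PageRank round: the map phase emits a (destination, rank-share) pair per edge, the shuffle groups pairs by key, and the reduce phase sums the shares and applies damping. The sketch below simulates those three phases in-process as a generic illustration of the pattern, not the specific approach proposed in this project.

```python
from collections import defaultdict

def pagerank_round(edges, ranks, d=0.85):
    """One MapReduce-style PageRank round over a list of (src, dst) edges."""
    out_deg = defaultdict(int)
    for src, _ in edges:
        out_deg[src] += 1
    # Map: each edge emits (dst, share of the source's rank).
    emitted = [(dst, ranks[src] / out_deg[src]) for src, dst in edges]
    # Shuffle + reduce: group emitted pairs by destination and sum shares.
    acc = defaultdict(float)
    for dst, share in emitted:
        acc[dst] += share
    n = len(ranks)
    return {v: (1 - d) / n + d * acc[v] for v in ranks}

# Tiny 3-node graph: 0 -> {1, 2}, 1 -> 2, 2 -> 0.
edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
ranks = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}
ranks = pagerank_round(edges, ranks)
```

Since every node here has at least one outgoing edge, the rank mass is conserved across the round.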
Pre-defined Sparsity for Convolutional Neural Networks
Proposed the first approach to reduce the footprint of convolutional neural networks via pre-defined sparsity.
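Pre-defined sparsity fixes a binary support mask on the weights before training begins, so only the surviving connections are ever stored or updated. The sketch below shows the core mechanics on a bank of 3x3 convolution kernels; the random mask, 25% density, and plain gradient step are assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def predefined_sparse_kernel(shape, density, rng):
    """Fix a binary support mask before training; weights outside the
    support start at zero and are never updated (illustrative sketch)."""
    mask = rng.random(shape) < density
    weights = rng.normal(size=shape) * mask
    return weights, mask

# A 3x3 conv kernel bank (16 output channels, 8 input channels) at 25% density.
w, mask = predefined_sparse_kernel((16, 8, 3, 3), 0.25, rng)

# A training step restricted to the pre-defined support: masked weights
# receive no gradient, so the sparsity pattern is preserved throughout.
grad = rng.normal(size=w.shape)
w -= 0.1 * grad * mask
```

Because the mask is fixed up front, the footprint reduction is known at design time rather than discovered by pruning after training.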
Optimal Resource Allocation for Cloud Computing
Developed an efficient approach for load allocation in heterogeneous cloud clusters.
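A baseline intuition for load allocation in a heterogeneous cluster is to split work in proportion to each server's processing speed, so that all servers finish at the same time and none sits idle. This is a minimal sketch of that baseline, not the specific approach developed in this project; the speeds and totals are made up for the example.

```python
def allocate_load(total_work, speeds):
    """Split total_work across servers in proportion to their speeds so
    that every server's finish time (load / speed) is equal."""
    s = sum(speeds)
    return [total_work * v / s for v in speeds]

# Three servers with relative speeds 1x, 2x, 3x sharing 120 units of work.
speeds = [1.0, 2.0, 3.0]
loads = allocate_load(120.0, speeds)
finish = [load / v for load, v in zip(loads, speeds)]
```

Equalizing finish times minimizes the makespan for divisible workloads; realistic formulations add constraints such as memory limits or network transfer costs on top of this.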