2 code implementations • 13 Nov 2022 • Yuhang Yao, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen, Carlee Joe-Wong, Tianqiang Liu
Much of the value that IoT (Internet-of-Things) devices bring to "smart" homes lies in their ability to automatically trigger other devices' actions: for example, a smart camera triggering a smart lock to unlock a door.
no code implementations • ICLR 2022 • Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Amin Karbasi
Distributionally robust optimization (DRO) has proven highly effective for training machine learning models that are robust to distribution shifts in the data.
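For reference, generic DRO replaces the usual expected loss with a worst case over an uncertainty set of distributions; in the following minimal formulation, the uncertainty set $\mathcal{U}$ and loss $\ell$ are generic placeholders rather than this paper's specific construction:

$$\min_{\theta} \; \max_{P \in \mathcal{U}} \; \mathbb{E}_{(x, y) \sim P}\big[\ell(\theta; x, y)\big].$$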
1 code implementation • 19 Oct 2021 • Sumanth Chennupati, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen
Despite these advances in knowledge distillation techniques, the aggregation of different distillation paths has not been studied comprehensively.
Ranked #33 on the ImageNet Knowledge Distillation benchmark
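As an illustration of what aggregating several distillation paths can look like, here is a minimal sketch that combines per-path KL distillation losses with a fixed weighted sum; the path definitions and weighting are illustrative assumptions, not the aggregation scheme studied in the paper.

```python
import torch
import torch.nn.functional as F

def aggregated_distillation_loss(student_logits, teacher_logits_per_path,
                                 path_weights, temperature=4.0):
    """Weighted sum of KL distillation losses over several teacher paths.

    teacher_logits_per_path: list of tensors, one per distillation path.
    path_weights: list of floats (fixed here; could in principle be learned).
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    loss = torch.zeros((), device=student_logits.device)
    for w, t_logits in zip(path_weights, teacher_logits_per_path):
        p_teacher = F.softmax(t_logits / temperature, dim=-1)
        # KL(teacher || student), scaled by T^2 as in standard distillation.
        loss = loss + w * F.kl_div(log_p_student, p_teacher,
                                   reduction="batchmean") * temperature ** 2
    return loss
```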
no code implementations • 22 Jul 2021 • Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi
This work is the first to show the convergence of Local SGD on non-smooth functions, and it sheds light on the optimization theory of federated training of deep neural networks.
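For context, Local SGD has each worker take several gradient steps on its own data between model-averaging rounds; the single-process simulation below uses a smooth least-squares objective purely for concreteness, while the paper's analysis concerns non-smooth functions.

```python
import numpy as np

def local_sgd(workers_data, steps=100, local_steps=5, lr=0.1):
    """Simulate Local SGD on a least-squares objective: each worker takes
    `local_steps` gradient steps on its own (X, y) before the models are
    averaged at a communication round."""
    dim = workers_data[0][0].shape[1]
    models = [np.zeros(dim) for _ in workers_data]
    for t in range(steps):
        for i, (X, y) in enumerate(workers_data):
            grad = X.T @ (X @ models[i] - y) / len(y)  # local full gradient
            models[i] = models[i] - lr * grad
        if (t + 1) % local_steps == 0:  # communication: average the models
            avg = np.mean(models, axis=0)
            models = [avg.copy() for _ in models]
    return np.mean(models, axis=0)
```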
no code implementations • 4 Apr 2021 • Mohammad Mahdi Kamani, Rana Forsati, James Z. Wang, Mehrdad Mahdavi
The proposed PEF notion is definition-agnostic, meaning that any well-defined notion of fairness can be reduced to the PEF notion.
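Schematically, one can read PEF as casting fairness-aware learning as a bi-objective problem, with the predictive loss $f(\theta)$ as one objective and the violation $g(\theta)$ of whichever fairness definition is chosen as the other:

$$\min_{\theta} \; \big(f(\theta),\, g(\theta)\big) \quad \text{in the Pareto sense},$$

where $\theta^{\star}$ is Pareto-efficient if no other $\theta$ improves one objective without worsening the other. This reading is a simplification for intuition, not a restatement of the formal PEF definition.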
1 code implementation • NeurIPS 2020 • Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi
To compensate for this, we propose a Distributionally Robust Federated Averaging (DRFA) algorithm that employs a novel snapshotting scheme to approximate the accumulated history of gradients with respect to the mixing parameter.
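To illustrate the overall minimax structure, here is a minimal single-process sketch in which clients run local SGD and a mixing distribution over clients is updated by an exponentiated-gradient step evaluated at a snapshot of the averaged model; this simplification only gestures at DRFA's actual snapshotting scheme.

```python
import numpy as np

def drfa_sketch(client_losses, client_grads, w0, rounds=50,
                local_steps=5, lr_w=0.1, lr_lam=1.0):
    """Schematic distributionally robust federated averaging.

    client_losses[i](w) -> scalar loss; client_grads[i](w) -> gradient.
    lam is a mixing distribution over clients; the update below is a
    simplification of DRFA's snapshotting scheme, not its exact procedure.
    """
    n = len(client_losses)
    lam = np.full(n, 1.0 / n)
    w = np.asarray(w0, dtype=float)
    for _ in range(rounds):
        # Each client runs a few local SGD steps from the global model.
        local_models = []
        for i in range(n):
            wi = w.copy()
            for _ in range(local_steps):
                wi -= lr_w * client_grads[i](wi)
            local_models.append(wi)
        # lam-weighted aggregation of the local models.
        w = sum(l_i * w_i for l_i, w_i in zip(lam, local_models))
        # Snapshot-based ascent step on the mixing parameter.
        losses = np.array([f(w) for f in client_losses])
        lam = lam * np.exp(lr_lam * losses)
        lam /= lam.sum()
    return w, lam
```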
1 code implementation • 1 Aug 2020 • Mohammad Mahdi Kamani, Sadegh Farhang, Mehrdad Mahdavi, James Z. Wang
The proposed framework, named targeted data-driven regularization (TDR), is model- and dataset-agnostic and employs a target dataset that resembles the expected test data to guide the learning process in a coupled manner.
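As a rough illustration of coupling a target dataset into training, the sketch below adds a loss term on a target batch to the ordinary training loss; the simple weighted-sum coupling and the `reg_weight` parameter are assumptions for illustration, not TDR's exact formulation.

```python
import torch

def tdr_step(model, train_batch, target_batch, loss_fn, opt, reg_weight=0.5):
    """One schematic TDR-style update: regularize the training loss with a
    loss computed on a target dataset that resembles the test distribution."""
    x, y = train_batch
    xt, yt = target_batch
    opt.zero_grad()
    loss = loss_fn(model(x), y) + reg_weight * loss_fn(model(xt), yt)
    loss.backward()
    opt.step()
    return loss.item()
```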
1 code implementation • 2 Jul 2020 • Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi
In federated learning, communication cost is often a critical bottleneck to scale up distributed optimization algorithms to collaboratively learn a model from millions of devices with potentially unreliable or limited communication and heterogeneous data distributions.
9 code implementations • 30 Mar 2020 • Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi
Investigation of the degree of personalization in federated learning algorithms has shown that maximizing only the global model's performance confines the local models' capacity to personalize.
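In this line of work, a client's served model can be built as a convex mixture of its local model and the global model, with the mixing weight adapted per client; the sketch below, including the gradient step on the mixing weight, is illustrative rather than a restatement of the paper's algorithm.

```python
import numpy as np

def personalized_model(local_w, global_w, alpha):
    """Convex mixture: alpha near 1 trusts the local model, near 0 the
    global one. In adaptive schemes alpha is learned per client."""
    return alpha * local_w + (1.0 - alpha) * global_w

def update_alpha(alpha, local_w, global_w, grad_fn, lr=0.1):
    """One gradient step on alpha for the mixed model's loss, using
    d loss / d alpha = <grad at the mixture, local_w - global_w>."""
    g = grad_fn(personalized_model(local_w, global_w, alpha))
    alpha = alpha - lr * float(np.dot(g, local_w - global_w))
    return float(np.clip(alpha, 0.0, 1.0))
```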
no code implementations • 12 Nov 2019 • Mohammad Mahdi Kamani, Farzin Haddadpour, Rana Forsati, Mehrdad Mahdavi
It has been shown that dimensionality reduction methods such as PCA may be inherently prone to unfairness, treating data from different sensitive groups (e.g., race, color, sex) unfairly.
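One way to see such unfairness concretely is to compare per-group reconstruction errors of a single PCA projection; in this minimal check, the data matrix and group labels are placeholders.

```python
import numpy as np

def group_reconstruction_errors(X, groups, k):
    """Fit PCA (top-k right singular vectors) on the pooled data, then
    report the mean reconstruction error separately per sensitive group.
    A large gap suggests the projection serves one group better."""
    X = np.asarray(X, dtype=float)
    groups = np.asarray(groups)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T @ Vt[:k]          # projector onto the top-k PCs
    residual = Xc - Xc @ P
    return {g: float(np.mean(np.sum(residual[groups == g] ** 2, axis=1)))
            for g in np.unique(groups)}
```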
2 code implementations • NeurIPS 2019 • Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Viveck R. Cadambe
Specifically, we show that for loss functions that satisfy the Polyak-{\L}ojasiewicz condition, $O((pT)^{1/3})$ rounds of communication suffice to achieve a linear speedup, that is, an error of $O(1/(pT))$, where $p$ is the number of workers and $T$ is the total number of model updates at each worker.
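To make the rate concrete: with $p = 16$ workers and $T = 10^{6}$ updates per worker, the bound requires on the order of $(pT)^{1/3} = (1.6 \times 10^{7})^{1/3} \approx 252$ communication rounds while still attaining an $O(1/(pT))$ error; constants hidden in the $O(\cdot)$ notation are ignored in this back-of-the-envelope figure.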
1 code implementation • International Conference on Machine Learning 2019 • Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Viveck Cadambe
Communication overhead is one of the key challenges that hinder the scalability of distributed optimization algorithms to train large neural networks.
no code implementations • 10 Nov 2018 • Farshid Farhat, Mohammad Mahdi Kamani, James Z. Wang
A user study demonstrates that the system is useful to photographers.