Search Results for author: Mohammad Mahdi Kamani

Found 13 papers, 8 papers with code

FedRule: Federated Rule Recommendation System with Graph Neural Networks

2 code implementations 13 Nov 2022 Yuhang Yao, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen, Carlee Joe-Wong, Tianqiang Liu

Much of the value that IoT (Internet-of-Things) devices bring to "smart" homes lies in their ability to automatically trigger other devices' actions: for example, a smart camera triggering a smart lock to unlock a door.

Link Prediction · Recommendation Systems
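
The snippet above is only motivation, but the task tags indicate that rule recommendation is framed as link prediction over a graph of devices. Below is a minimal, hypothetical sketch of that framing with a toy device graph, one round of neighbor averaging standing in for a GNN layer, and a dot-product link scorer; none of these choices are taken from the FedRule paper itself.

```python
import numpy as np

# Hypothetical toy graph of smart-home devices: an edge means a known
# trigger->action rule (e.g., camera -> lock). Sizes are illustrative.
num_devices, dim = 4, 8
rng = np.random.default_rng(0)
features = rng.normal(size=(num_devices, dim))        # raw device features
adj = np.array([[0, 1, 0, 0],                          # existing rules
                [0, 0, 1, 0],
                [0, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)

# One round of mean-neighbor aggregation, a crude stand-in for a GNN layer.
deg = adj.sum(axis=1, keepdims=True) + 1e-9
embeddings = features + (adj @ features) / deg

def score(u, v):
    """Dot-product link score for a candidate rule u -> v."""
    return float(embeddings[u] @ embeddings[v])

# Rank candidate rules for device 0 (higher score = more plausible rule).
candidates = [(0, v, score(0, v)) for v in range(num_devices) if v != 0]
print(sorted(candidates, key=lambda t: -t[2]))
```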

Learning Distributionally Robust Models at Scale via Composite Optimization

no code implementations ICLR 2022 Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Amin Karbasi

Distributionally robust optimization (DRO) has proven very effective for training machine learning models that are robust to distribution shifts in the data.
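
As a rough, generic illustration of the DRO idea (not the composite-optimization algorithm proposed in the paper), the sketch below re-weights per-group losses toward the worst-performing group; the groups, losses, and temperature are made up.

```python
import numpy as np

# Hypothetical per-group average losses on the current minibatch.
group_losses = np.array([0.35, 0.80, 0.50])   # e.g., three data subpopulations
temperature = 5.0                              # assumed robustness knob

# Soft worst-case weighting: groups with higher loss get more weight,
# so the model is pushed to improve on the hardest distribution.
weights = np.exp(temperature * group_losses)
weights /= weights.sum()

dro_loss = float(weights @ group_losses)       # surrogate for max_q E_q[loss]
print(weights, dro_loss)
```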

Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation

1 code implementation 19 Oct 2021 Sumanth Chennupati, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen

Despite these advances in knowledge distillation techniques, the aggregation of different distillation paths has not been studied comprehensively.

Knowledge Distillation · Neural Network Compression +3
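
A minimal sketch of what aggregating several distillation paths can look like, assuming a generic multi-path KL distillation loss with learnable aggregation weights; the temperature, sizes, and weighting scheme are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

# Hypothetical logits from a student and from two distillation "paths"
# (e.g., two teachers or two intermediate branches); sizes are illustrative.
student_logits = torch.randn(16, 10, requires_grad=True)
path_logits = [torch.randn(16, 10), torch.randn(16, 10)]

# Learnable aggregation weights over paths (softmax keeps them on a simplex).
alpha = torch.zeros(len(path_logits), requires_grad=True)
weights = torch.softmax(alpha, dim=0)

T = 2.0  # assumed distillation temperature
student_logp = F.log_softmax(student_logits / T, dim=1)

# Weighted sum of per-path KL distillation losses.
kd_loss = sum(
    w * F.kl_div(student_logp, F.softmax(p / T, dim=1), reduction="batchmean")
    for w, p in zip(weights, path_logits)
) * (T * T)
kd_loss.backward()   # gradients flow to both the student and the weights
print(float(kd_loss))
```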

Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time

no code implementations 22 Jul 2021 Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi

This work is the first to show the convergence of Local SGD on non-smooth functions, and it sheds light on the optimization theory of federated training of deep neural networks.

Distributed Optimization
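
For context, here is a minimal Local SGD loop on a toy quadratic, with workers taking several local steps between periodic model averaging; the problem, step size, and schedule are illustrative only, and the sketch makes no claim about the paper's non-smooth analysis.

```python
import numpy as np

# Toy Local SGD: p workers run local SGD steps and periodically average
# their models. Problem, step size, and schedule are illustrative only.
rng = np.random.default_rng(0)
p, dim, local_steps, rounds, lr = 4, 5, 10, 20, 0.1
target = rng.normal(size=dim)                 # minimizer of f(w) = ||w - target||^2 / 2
workers = [np.zeros(dim) for _ in range(p)]

for _ in range(rounds):
    for k in range(p):
        w = workers[k]
        for _ in range(local_steps):
            grad = (w - target) + 0.01 * rng.normal(size=dim)  # noisy gradient
            w = w - lr * grad
        workers[k] = w
    avg = np.mean(workers, axis=0)            # periodic synchronization
    workers = [avg.copy() for _ in range(p)]

print(np.linalg.norm(avg - target))           # should be close to 0
```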

Pareto Efficient Fairness in Supervised Learning: From Extraction to Tracing

no code implementations 4 Apr 2021 Mohammad Mahdi Kamani, Rana Forsati, James Z. Wang, Mehrdad Mahdavi

The proposed PEF notion is definition-agnostic, meaning that any well-defined notion of fairness can be reduced to the PEF notion.

Bilevel Optimization · Decision Making +1
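
As a generic illustration of Pareto efficiency in an accuracy/fairness trade-off (not the PEF extraction or tracing algorithms themselves), the sketch below filters hypothetical (error, fairness-gap) pairs down to the non-dominated ones.

```python
# Hypothetical (error, fairness-gap) pairs from models trained with different
# trade-off weights; stand-ins for real measurements.
points = [(0.10, 0.20), (0.11, 0.12), (0.12, 0.13), (0.13, 0.06), (0.18, 0.05)]

# Keep only Pareto-efficient points: no other point is at least as good on
# both objectives.
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
print(pareto)   # (0.12, 0.13) is dominated by (0.11, 0.12) and drops out
```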

Distributionally Robust Federated Averaging

1 code implementation NeurIPS 2020 Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi

To compensate for this, we propose a Distributionally Robust Federated Averaging (DRFA) algorithm that employs a novel snapshotting scheme to approximate the accumulated history of gradients of the mixing parameter.

Federated Learning
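
A simplified view of the underlying min-max objective, assuming the server keeps mixing weights over clients and nudges them toward clients with higher loss while clients run local SGD; this is only a schematic and omits DRFA's snapshotting scheme and communication-efficiency analysis.

```python
import numpy as np

# Simplified distributionally robust federated averaging: the server keeps
# mixing weights lam over clients and increases them for clients with higher
# loss (multiplicative-ascent step), while clients do local SGD.
rng = np.random.default_rng(0)
num_clients, dim, rounds, local_steps, lr, eta = 3, 4, 15, 5, 0.1, 0.5
client_targets = [rng.normal(size=dim) for _ in range(num_clients)]
w = np.zeros(dim)
lam = np.ones(num_clients) / num_clients

for _ in range(rounds):
    updates, losses = [], []
    for c in range(num_clients):
        wc = w.copy()
        for _ in range(local_steps):
            wc -= lr * (wc - client_targets[c])          # local gradient steps
        updates.append(wc)
        losses.append(0.5 * np.linalg.norm(w - client_targets[c]) ** 2)
    w = np.average(updates, axis=0, weights=lam)         # weighted aggregation
    lam *= np.exp(eta * np.array(losses))                # ascent on mixing weights
    lam /= lam.sum()

print(lam)   # more weight on clients whose distribution is hardest for w
```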

Targeted Data-driven Regularization for Out-of-Distribution Generalization

1 code implementation 1 Aug 2020 Mohammad Mahdi Kamani, Sadegh Farhang, Mehrdad Mahdavi, James Z. Wang

The proposed framework, named targeted data-driven regularization (TDR), is model- and dataset-agnostic and employs a target dataset that resembles the desired nature of test data in order to guide the learning process in a coupled manner.

Bilevel Optimization · Meta-Learning +1
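
As one hypothetical reading of "using a target dataset to guide learning", the sketch below re-weights training examples by how well their gradients align with the average gradient on a small target set; this mechanism is an assumption for illustration and is not claimed to be the TDR formulation.

```python
import numpy as np

# Generic sketch of guiding training with a small target set: weight each
# training example by how well its gradient aligns with the average gradient
# on the target data (linear regression model for simplicity).
rng = np.random.default_rng(0)
dim = 3
w = np.zeros(dim)
X_train, y_train = rng.normal(size=(50, dim)), rng.normal(size=50)
X_target, y_target = rng.normal(size=(10, dim)), rng.normal(size=10)

for _ in range(100):
    # Per-example training gradients and the mean target gradient.
    g_train = (X_train @ w - y_train)[:, None] * X_train
    g_target = ((X_target @ w - y_target)[:, None] * X_target).mean(axis=0)

    # Upweight examples whose gradients point in the target-gradient direction.
    align = g_train @ g_target
    weights = np.clip(align, 0.0, None)
    weights = weights / weights.sum() if weights.sum() > 0 else np.ones(50) / 50

    w -= 0.1 * (weights[:, None] * g_train).sum(axis=0)

print(np.mean((X_target @ w - y_target) ** 2))   # target loss after training
```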

Federated Learning with Compression: Unified Analysis and Sharp Guarantees

1 code implementation 2 Jul 2020 Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, Mehrdad Mahdavi

In federated learning, communication cost is often a critical bottleneck to scale up distributed optimization algorithms to collaboratively learn a model from millions of devices with potentially unreliable or limited communication and heterogeneous data distributions.

Distributed Optimization · Federated Learning
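
A minimal illustration of compressed communication in federated averaging, assuming a simple top-k sparsifier on client updates; the compressor and toy problem are stand-ins, not the operators or guarantees analyzed in the paper.

```python
import numpy as np

def top_k(vec, k):
    """Keep only the k largest-magnitude entries of a model update."""
    out = np.zeros_like(vec)
    idx = np.argsort(np.abs(vec))[-k:]
    out[idx] = vec[idx]
    return out

rng = np.random.default_rng(0)
num_clients, dim, k = 5, 20, 4
w = np.zeros(dim)
client_targets = [rng.normal(size=dim) for _ in range(num_clients)]

for _ in range(30):
    deltas = []
    for t in client_targets:
        local = w - 0.5 * (w - t)              # one local gradient step
        deltas.append(top_k(local - w, k))     # send k of dim coordinates
    w = w + np.mean(deltas, axis=0)            # server aggregates compressed updates

print(np.linalg.norm(w - np.mean(client_targets, axis=0)))
```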

Adaptive Personalized Federated Learning

9 code implementations 30 Mar 2020 Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi

Investigation of the degree of personalization in federated learning algorithms has shown that maximizing only the performance of the global model confines the capacity of the local models to personalize.

Bilevel Optimization · Personalized Federated Learning
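
The commonly cited idea behind personalization here is to mix a client's local model with the shared global model. The sketch below shows that interpolation with a fixed mixing weight alpha; the adaptive, per-client update of alpha in the paper is omitted, and the toy objective is assumed.

```python
import numpy as np

# Sketch of personalization by model interpolation: each client keeps a local
# model and mixes it with the shared global model. alpha is fixed here for
# clarity; in adaptive schemes it is learned per client.
rng = np.random.default_rng(0)
dim, alpha, lr = 4, 0.6, 0.1
global_w = rng.normal(size=dim)                  # model from federated averaging
local_w = np.zeros(dim)
client_target = rng.normal(size=dim)             # this client's own distribution

for _ in range(200):
    personalized = alpha * local_w + (1 - alpha) * global_w
    grad = personalized - client_target           # gradient of the client's loss
    local_w -= lr * alpha * grad                  # chain rule through the mixture

personalized = alpha * local_w + (1 - alpha) * global_w
print(np.linalg.norm(personalized - client_target))
```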

Efficient Fair Principal Component Analysis

no code implementations 12 Nov 2019 Mohammad Mahdi Kamani, Farzin Haddadpour, Rana Forsati, Mehrdad Mahdavi

It has been shown that dimensionality reduction methods such as PCA may be inherently prone to unfairness, treating data from different sensitive groups (e.g., race, color, sex) unequally.

Dimensionality Reduction · Fairness
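
To see the issue being addressed (this is not the paper's fair-PCA algorithm), the sketch below projects two synthetic groups onto the top principal component of the pooled data and compares per-group reconstruction errors; the group whose variance lies off the pooled component loses more information.

```python
import numpy as np

# Two synthetic groups whose dominant directions differ; the larger group
# dominates the pooled principal component.
rng = np.random.default_rng(0)
group_a = rng.normal(size=(200, 5)) @ np.diag([3, 1, 1, 1, 1])
group_b = rng.normal(size=(50, 5)) @ np.diag([1, 1, 1, 1, 3])
X = np.vstack([group_a, group_b])
X = X - X.mean(axis=0)

# Top principal component of the pooled data.
_, _, vt = np.linalg.svd(X, full_matrices=False)
pc = vt[:1]                                     # shape (1, 5)

def recon_error(G):
    """Mean reconstruction error of a group under the pooled projection."""
    G = G - G.mean(axis=0)
    return float(np.mean(np.sum((G - G @ pc.T @ pc) ** 2, axis=1)))

# The smaller group's dominant direction is mostly discarded.
print(recon_error(group_a), recon_error(group_b))
```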

Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization

2 code implementations NeurIPS 2019 Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Viveck R. Cadambe

Specifically, we show that for loss functions that satisfy the Polyak-Łojasiewicz condition, $O((pT)^{1/3})$ rounds of communication suffice to achieve a linear speedup, that is, an error of $O(1/(pT))$, where $p$ is the number of workers and $T$ is the total number of model updates at each worker.

Distributed Optimization
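
As an illustrative reading of this bound (constants ignored, numbers made up): with $p = 16$ workers and $T = 100{,}000$ local updates per worker, on the order of $(pT)^{1/3} \approx 117$ communication rounds suffice for a linear speedup, versus the $100{,}000$ rounds required if workers synchronized after every update.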
