Publications

Assisted Learning: A Framework for Multi-Organization Learning

Xian, Xun, Xinran Wang, Jie Ding, and Reza Ghanadan. "Assisted learning: A framework for multi-organization learning." Advances in Neural Information Processing Systems 33 (2020): 14580-14591 (Spotlight Presentation).

In an increasing number of AI scenarios, collaborations among different organizations or agents (e.g., humans and robots, mobile units) are often essential to accomplish an organization-specific mission. However, to avoid leaking useful and possibly proprietary information, organizations typically enforce stringent security constraints on sharing modeling algorithms and data, which significantly limits collaboration. In this work, we introduce the Assisted Learning framework for organizations to assist each other in supervised learning tasks without revealing any organization's algorithm, data, or even task.

Information Laundering for Model Privacy

Wang, Xinran, Yu Xiang, Jun Gao, and Jie Ding. "Information Laundering for Model Privacy." In International Conference on Learning Representations (Spotlight Presentation). 2021.

In this work, we propose information laundering, a novel framework for enhancing model privacy. Unlike data privacy, which concerns the protection of raw data, model privacy aims to protect an already-learned model that is to be deployed for public use. The private model can be obtained from general learning methods, and its deployment means that it will return a deterministic or random response for a given input query. An information-laundered model consists of probabilistic components that deliberately maneuver the intended input and output for queries of the model, so that adversarial acquisition of the model is less likely. Under the proposed framework, we develop an information-theoretic principle to quantify the fundamental tradeoffs between model utility and privacy leakage, and derive the optimal design.
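The wrapping idea can be illustrated in a few lines. This is a minimal sketch, not the paper's optimal design: the threshold model, noise level, and flip rate below are invented for the example.

```python
import random

def laundered_model(model, input_noise=0.1, output_flip=0.05, seed=0):
    """Wrap a private model with probabilistic input and output kernels,
    so queries neither reach the model unmodified nor reveal its exact
    responses. Noise levels here are arbitrary illustrations."""
    rng = random.Random(seed)

    def query(x):
        # Input kernel: jitter each feature before it reaches the model.
        x_noisy = [xi + rng.gauss(0.0, input_noise) for xi in x]
        y = model(x_noisy)
        # Output kernel: occasionally flip the returned label.
        if rng.random() < output_flip:
            y = 1 - y
        return y

    return query

# Toy private model: a threshold classifier on the first feature.
private = lambda x: int(x[0] > 0.5)
public = laundered_model(private)
```

With small kernels the public model remains useful for legitimate queries, while an adversary probing it observes only a randomized view of the private decision rule.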

Optimization of Spinal Cord Stimulation Using Bayesian Preference Learning and Its Validation

Zhao, Zixi, Aliya Ahmadi, Caleb Hoover, Logan Grado, Nicholas Peterson, Xinran Wang, David Freeman, Thomas Murray, Andrew Lamperski, David Darrow, and Theoden I. Netoff. "Optimization of spinal cord stimulation using Bayesian preference learning and its validation." IEEE Transactions on Neural Systems and Rehabilitation Engineering 29 (2021): 1987-1997.

Epidural spinal cord stimulation has been reported to partially restore volitional movement and autonomic functions after motor and sensory-complete spinal cord injury. In this paper, we present a Bayesian optimization strategy for identifying personalized optimal stimulation patterns based on the participant’s expressed preference for stimulation settings. We present companion validation protocols for investigating the credibility of learned preference models.
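The preference-learning step can be sketched with a simplified stand-in: a Bradley-Terry-style logistic model fitted by gradient ascent, rather than the Bayesian preference model actually used in the paper, and toy comparison data in place of participant responses.

```python
import math

def fit_preferences(n_settings, comparisons, lr=0.5, epochs=200):
    """Fit a per-setting utility from pairwise preferences (winner, loser)
    with a Bradley-Terry-style logistic model, maximized by gradient
    ascent. A simplified stand-in for the paper's preference model."""
    u = [0.0] * n_settings
    for _ in range(epochs):
        for winner, loser in comparisons:
            # Probability the winner is preferred under current utilities.
            p = 1.0 / (1.0 + math.exp(u[loser] - u[winner]))
            step = lr * (1.0 - p)   # gradient of the log-likelihood
            u[winner] += step
            u[loser] -= step
    return u

# Toy data: the participant prefers setting 2 over 0 and 1, and 1 over 0.
prefs = [(2, 0), (2, 1), (1, 0), (2, 0)]
utilities = fit_preferences(3, prefs)
best = max(range(3), key=lambda i: utilities[i])
```

The learned utilities rank the candidate stimulation settings, and the top-ranked setting is proposed for the next trial.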

Parallel Assisted Learning

Wang, Xinran, Jiawei Zhang, Mingyi Hong, Yuhong Yang, and Jie Ding. "Parallel Assisted Learning." IEEE Transactions on Signal Processing 70 (2022): 5848-5858.

In the era of big data, multimodal data are often collected and preserved by different business and government entities. These entities often have local machine learning data, models, and tasks that they cannot share with others. Meanwhile, an entity often needs to seek assistance from others to enhance its learning quality without sharing proprietary information. How can an entity be assisted while it is assisting others? We develop a general method called parallel assisted learning (PAL) that applies to the context where entities perform supervised learning and can collate their data according to a common data identifier. Under the PAL mechanism, a learning entity that receives assistance is obligated to assist others, without the need to reveal any local data, model, or learning objective.
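A single round of assistance can be sketched as follows, under the assumption (in the spirit of assisted learning) that entities exchange only fitted residuals over rows aligned by a shared identifier; the features and labels are toy data, and the one-feature least-squares fit stands in for each entity's private learner.

```python
def ols_1d(x, y):
    """Slope and intercept of the least-squares fit y ~ a * x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def assist(x_own, x_peer, y):
    """One round of assistance: the task owner fits y on its own feature,
    then the peer fits the residuals on its feature; rows are aligned by
    a shared identifier. Only residuals cross the boundary, never raw
    features, models, or the learning objective."""
    a1, b1 = ols_1d(x_own, y)
    residuals = [yi - (a1 * xi + b1) for xi, yi in zip(x_own, y)]
    a2, b2 = ols_1d(x_peer, residuals)
    return lambda xo, xp: (a1 * xo + b1) + (a2 * xp + b2)

# Entity A's label depends on both features, but A only holds x_a;
# entity B holds x_b for the same records (matched by a common ID).
x_a = [0.0, 1.0, 2.0, 3.0, 4.0]
x_b = [1.0, 0.0, 3.0, 2.0, 5.0]
y_a = [2 * xa + xb for xa, xb in zip(x_a, x_b)]
predict_a = assist(x_a, x_b, y_a)
```

Because B's feature explains part of A's residual, the assisted predictor fits A's task better than A could alone; in PAL the same exchange runs in both directions, so each entity is assisted while assisting.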

Personalized Federated Recommender Systems with Private and Partially Federated AutoEncoders

Le, Qi, Enmao Diao, Xinran Wang, Ali Anwar, Vahid Tarokh, and Jie Ding. "Personalized Federated Recommender Systems with Private and Partially Federated AutoEncoders." In 2022 56th Asilomar Conference on Signals, Systems, and Computers, pp. 1157-1163. IEEE, 2022.

Recommender Systems (RSs) have become increasingly important in many application domains, such as digital marketing. However, conventional RSs suffer from two critical limitations: a personalization problem, in that traditionally trained RSs may not be customized for individual users, and a privacy problem, in that directly sharing user data is discouraged. We propose Personalized Federated Recommender Systems (PersonalFR), which combines a personalized autoencoder-based recommendation model with Federated Learning to address these challenges.
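The partial-federation idea can be sketched in a few lines: only a designated shared portion of each client's model is averaged by the server, while the remainder stays local. The parameter names and values are illustrative, not from the paper's code.

```python
def federated_round(clients, shared_keys):
    """Average only the shared parameters across clients (for instance,
    an item-side decoder), leaving the rest (a user-side encoder) local
    and personalized. Parameter names are illustrative only."""
    n = len(clients)
    averaged = {k: sum(c[k] for c in clients) / n for k in shared_keys}
    for c in clients:
        c.update(averaged)   # every client receives the same shared part
    return clients

clients = [
    {"encoder": 0.9, "decoder": 1.0},   # "encoder" stays private
    {"encoder": 0.1, "decoder": 3.0},   # "decoder" is federated
]
federated_round(clients, ["decoder"])
```

After a round, all clients agree on the federated part while each keeps its own personalized part, so user-specific information never leaves the device.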

PI-FL: Personalized and Incentivized Federated Learning

Khan, Ahmad Faraz, Xinran Wang, Qi Le, Azal Ahmad Khan, Haider Ali, Jie Ding, Ali Butt, and Ali Anwar. "PI-FL: Personalized and Incentivized Federated Learning." arXiv preprint arXiv:2304.07514 (2023).

Personalized Federated Learning (FL) has been widely used to address the heterogeneity challenges of non-IID data. A primary obstacle is considering the personalization process from the client's perspective so as to preserve client autonomy. Allowing clients to participate in personalized FL decisions becomes significant due to privacy and security concerns, as clients may not be at liberty to share the private information necessary for producing good-quality personalized models. Moreover, clients with high-quality data and resources are reluctant to participate in the FL process without reasonable incentives. In this paper, we propose PI-FL, a one-shot personalization solution complemented by a token-based incentive mechanism that rewards personalized training. PI-FL outperforms other state-of-the-art approaches and can generate good-quality personalized models while respecting clients' privacy.

A Framework for Incentivized Collaborative Learning

Wang, Xinran, Qi Le, Ahmad Faraz Khan, Jie Ding, and Ali Anwar. "A Framework for Incentivized Collaborative Learning." arXiv preprint arXiv:2305.17052 (2023).

Collaborations among various entities, such as companies, research labs, AI agents, and edge devices, have become increasingly crucial for achieving machine learning tasks that cannot be accomplished by any single entity alone. In this work, we establish a novel framework for incentivized collaborative learning (ICL) and provide insights into the critical issue of when and why incentives can improve collaboration performance. Furthermore, we show the broad applicability of ICL to specific cases in federated learning, assisted learning, and multi-armed bandits, with both theory and experimental results.

Robust and Efficient Quantum Communication

Howe, Connor, Xinran Wang, and Ali Anwar. "Robust and Efficient Quantum Communication." In Proceedings of the 2023 International Workshop on Quantum Classical Cooperative, pp. 13-16. 2023.

Quantum communication between quantum processors offers new capabilities and applications in quantum computing. However, Noisy Intermediate-Scale Quantum (NISQ) devices face challenges such as decoherence, entanglement distillation latency, high communication-to-data qubit ratio, quantum error correction, and scalability. Inspired by distributed systems concepts, this paper presents two solutions for optimizing quantum communication: advanced quantum repeaters and machine learning for quantum network optimization. Advanced quantum repeaters will leverage topological quantum states to improve entanglement generation, swapping, and distillation efficiency. Concurrently, machine learning techniques using multi-armed bandit algorithms will dynamically allocate quantum processing resources across distributed quantum networks. This optimization enhances the efficiency of quantum teleportation protocols and reduces computational costs. By integrating advanced quantum repeaters with machine learning optimization, the proposed solutions aim to address the challenges in quantum communication.
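The bandit-based allocation step can be sketched with standard UCB1 over candidate routes; the two routes and their Gaussian "fidelity" rewards are invented stand-ins, not parameters from the paper.

```python
import math
import random

def ucb_allocate(reward_fns, rounds=500):
    """UCB1 over candidate routes: play each once, then repeatedly pick
    the route with the best empirical mean plus an exploration bonus."""
    n = len(reward_fns)
    counts, totals = [0] * n, [0.0] * n
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1
        else:
            arm = max(range(n), key=lambda i: totals[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        counts[arm] += 1
        totals[arm] += reward_fns[arm]()
    return counts

rng = random.Random(0)
# Two candidate routes; route 1 yields higher fidelity on average.
routes = [lambda: rng.gauss(0.60, 0.05), lambda: rng.gauss(0.80, 0.05)]
pulls = ucb_allocate(routes)
```

Over time the allocator concentrates quantum processing resources on the route with the higher observed fidelity while still probing the alternative occasionally.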

AID: Adaptive Integration of Detectors for Safe AI with Language Models

As Large Language Models (LLMs) increasingly influence content generation across diverse platforms, there is a heightened urgency to regulate their outputs to ensure safe usage. However, defining “safety” is complex, given that entities across domains may interpret it through varied lenses and develop detectors from specific safety criteria. To address this complexity, we introduce the approach of Adaptive Integration of Detectors (AID) to orchestrate the strengths of multiple pretrained detectors to ensure comprehensive effectiveness in diverse scenarios. AID employs a Mixture-of-Experts (MoE) framework, wherein it dynamically assigns and learns data-adaptive weights for each detector using domain-specific annotated data and LLM-extracted features. We provide theoretical insights into why MoE can be effective by showing its optimality in a classical Neyman-Pearson setting. Our experimental studies using various detection tasks curated from benchmark datasets demonstrate AID’s ability to synergistically combine the unique capabilities of individual detectors. For example, it is observed that AID can improve the area under the curve (AUC) by an absolute value of 0.07 to 0.21, with a median of 0.12, compared with the best individual detectors. The improvement is particularly significant for complex detection tasks that mix different unsafe data sources.
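The MoE combination can be sketched as follows. This is a toy illustration: in AID the gate is learned from domain-annotated data and LLM-extracted features, whereas here the gate weights, detectors, and features are fixed by hand.

```python
import math

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def aid_score(detectors, gate_weights, features, text):
    """Mixture-of-experts combination: a gate maps input features to
    per-detector weights, and the final unsafety score is the weighted
    sum of detector scores."""
    gate_logits = [sum(w * f for w, f in zip(ws, features))
                   for ws in gate_weights]
    weights = softmax(gate_logits)
    scores = [d(text) for d in detectors]
    return sum(w * s for w, s in zip(weights, scores))

# Two toy detectors specialized for different kinds of unsafe content.
toxicity = lambda t: 1.0 if "hate" in t else 0.0
spam = lambda t: 1.0 if "buy now" in t else 0.1
# Gate over features [is_social_media, is_email]: each feature routes
# queries toward the detector that handles that domain best.
gate = [[4.0, 0.0],   # toxicity detector favored on social media
        [0.0, 4.0]]   # spam detector favored on email
social_score = aid_score([toxicity, spam], gate, [1.0, 0.0], "hate speech")
email_score = aid_score([toxicity, spam], gate, [0.0, 1.0], "buy now deals")
```

Because the weights are data-adaptive, each query is scored mainly by the detector best suited to its domain, rather than by a single fixed ensemble average.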

MAP: Multi-Human-Value Alignment Palette

Ensuring that generative AI systems align with human values is essential but challenging, especially when considering multiple human values and their potential trade-offs. Since human values can be personalized and dynamically change over time, the desirable levels of value alignment vary across different ethnic groups, industry sectors, and user cohorts. Within existing frameworks, it is hard to define human values and align AI systems accordingly across different directions simultaneously, such as harmlessness, helpfulness, and positiveness. To address this, we develop a novel, first-principle approach called Multi-Human-Value Alignment Palette (MAP), which navigates the alignment across multiple human values in a structured and reliable way. MAP formulates the alignment problem as an optimization task with user-defined constraints, which define human value targets. It can be efficiently solved via a primal-dual approach, which determines whether a user-defined alignment target is achievable and how to achieve it. We conduct a detailed theoretical analysis of MAP by quantifying the trade-offs between values, the sensitivity to constraints, the fundamental connection between multi-value alignment and sequential alignment, and proving that linear weighted rewards are sufficient for multi-value alignment. Extensive experiments demonstrate MAP’s ability to align multiple values in a principled manner while delivering strong empirical performance across various tasks.
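The optimization at the heart of MAP can be sketched with a toy primal-dual loop over a discrete set of candidate responses; the value scores and targets below are invented for the example, and the exponential tilting of a uniform base stands in for aligning a generative model.

```python
import math

def map_align(values, targets, lr=0.5, steps=500):
    """Primal-dual sketch of multi-value alignment. Primal step: tilt
    the policy by linearly weighted rewards (the linear weighting being
    what the paper proves sufficient for multi-value alignment). Dual
    step: raise the multiplier of any value still below its target."""
    k = len(targets)
    lam = [0.0] * k
    policy = [1.0 / len(values)] * len(values)
    for _ in range(steps):
        # Primal: exponential tilting of a uniform base policy.
        logits = [sum(lam[i] * v[i] for i in range(k)) for v in values]
        m = max(logits)
        exps = [math.exp(z - m) for z in logits]
        total = sum(exps)
        policy = [e / total for e in exps]
        # Dual ascent on the user-defined value constraints.
        expected = [sum(p * v[i] for p, v in zip(policy, values))
                    for i in range(k)]
        lam = [max(0.0, lam[i] + lr * (targets[i] - expected[i]))
               for i in range(k)]
    return policy, lam

# Three candidate responses scored on (harmlessness, helpfulness),
# and a user palette demanding average harmlessness of at least 0.7.
values = [(0.9, 0.2), (0.2, 0.9), (0.6, 0.6)]
targets = (0.7, 0.3)
policy, lam = map_align(values, targets)
```

The dual variables also diagnose achievability: a multiplier that stays at zero marks a constraint already satisfied, while one that grows without bound signals an infeasible target.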