no code implementations • 28 Feb 2024 • Yu-Neng Chuang, Tianwei Xing, Chia-Yuan Chang, Zirui Liu, Xun Chen, Xia Hu
In this work, we propose a Natural Language Prompt Encapsulation (Nano-Capsulator) framework that compresses original prompts into NL-formatted Capsule Prompts while maintaining prompt utility and transferability.
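A hypothetical illustration of the idea, not the paper's actual pipeline: compression here is delegated to a single LLM call under a word budget, and `llm` is a placeholder we introduce for any text-generation client.

```python
# Hypothetical sketch of natural-language prompt compression; `llm` is a
# placeholder for any text-generation client, not Nano-Capsulator's API.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def capsulate(prompt: str, budget: int = 50) -> str:
    """Rewrite a long prompt as a short natural-language 'capsule'."""
    instruction = (
        f"Rewrite the following prompt in at most {budget} words, keeping "
        f"every task-critical instruction and constraint:\n\n{prompt}")
    return llm(instruction)
```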
no code implementations • 7 Feb 2024 • Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Ruixiang Tang, Fan Yang, Mengnan Du, Xuanting Cai, Xia Hu
In this work, we introduce a generative explanation framework, xLLM, to improve the faithfulness of the natural-language explanations provided for LLMs.
2 code implementations • 2 Jan 2024 • Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu
To achieve this goal, we propose SelfExtend to extend the context window of LLMs by constructing bi-level attention: grouped attention and neighbor attention.
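The position remapping can be sketched as follows; this is a simplified reconstruction, not the released code (which is the authoritative reference), with `group_size` and `neighbor_window` standing in for the method's group size and neighbor-attention window.

```python
import numpy as np

def selfextend_rel_pos(seq_len: int, group_size: int, neighbor_window: int):
    """Bi-level relative positions: exact within the neighbor window,
    floor-divided into groups beyond it (simplified reconstruction)."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    normal = i - j                                      # neighbor attention
    grouped = i // group_size - j // group_size         # grouped attention
    shift = neighbor_window - neighbor_window // group_size  # keep continuity
    return np.where(normal <= neighbor_window, normal, grouped + shift)
```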
no code implementations • 23 Dec 2023 • Guanchu Wang, Yu-Neng Chuang, Fan Yang, Mengnan Du, Chia-Yuan Chang, Shaochen Zhong, Zirui Liu, Zhaozhuo Xu, Kaixiong Zhou, Xuanting Cai, Xia Hu
To address this problem, we develop a pre-trained, DNN-based, generic explainer on large-scale image datasets, and leverage its transferability to explain various vision models for downstream tasks.
no code implementations • 2 Oct 2023 • Chia-Yuan Chang, Yu-Neng Chuang, Zhimeng Jiang, Kwei-Herng Lai, Anxiao Jiang, Na Zou
In real-world applications, machine learning models often become obsolete due to shifts in the joint distribution arising from underlying temporal trends, a phenomenon known as concept drift.
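In the standard formalization (not specific to this paper), drift is a change in the joint distribution over time:

```latex
% Concept drift between times t and t+1 (standard definition):
P_t(X, Y) \neq P_{t+1}(X, Y),
% which, via P(X, Y) = P(X)\,P(Y \mid X), covers both covariate shift
% (a change in P(X)) and real concept drift (a change in P(Y \mid X)).
```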
no code implementations • 1 Oct 2023 • Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Chia-Yuan Chang, Xia Hu
Our method progressively increases the training sequence length throughout the pretraining phase, thereby reducing computational cost and improving efficiency.
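A minimal sketch of such a schedule, with illustrative stage lengths rather than the paper's settings:

```python
def seq_len_at(step: int, total_steps: int,
               lengths=(128, 256, 512, 1024)) -> int:
    """Return the training sequence length for a given step, stepping the
    length up in equal-sized stages (illustrative values, not the paper's)."""
    stage = min(step * len(lengths) // total_steps, len(lengths) - 1)
    return lengths[stage]

# e.g., seq_len_at(0, 10_000) -> 128; seq_len_at(9_999, 10_000) -> 1024
```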
1 code implementation • 4 Sep 2023 • Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Kwei-Herng Lai, Daochen Zha, Ruixiang Tang, Fan Yang, Alfredo Costilla Reyes, Kaixiong Zhou, Xiaoqian Jiang, Xia Hu
The exponential growth in scholarly publications necessitates advanced tools for efficient article retrieval, especially in interdisciplinary fields where diverse terminologies are used to describe similar research.
no code implementations • 14 Jul 2023 • Chia-Yuan Chang, Yu-Neng Chuang, Guanchu Wang, Mengnan Du, Na Zou
Domain generalization aims to learn a model that performs well on unseen test domains while training only on a limited set of source domains.
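Formally, a standard statement of the objective (not this paper's specific method) is:

```latex
% Standard domain-generalization objective (not this paper's method):
% given source domains \mathcal{D}_1, \dots, \mathcal{D}_S, learn f that
% also attains low risk on an unseen target domain \mathcal{D}_T:
\min_f \; \mathbb{E}_{(x, y) \sim \mathcal{D}_T}\big[\ell(f(x), y)\big],
\quad \text{training only on } \{\mathcal{D}_s\}_{s=1}^{S}.
```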
no code implementations • 9 Jul 2023 • Chia-Yuan Chang, Yu-Neng Chuang, Kwei-Herng Lai, Xiaotian Han, Xia Hu, Na Zou
These studies face challenges: either sensitive attributes are predicted inaccurately, or the unequal distribution of manually defined, bias-related non-sensitive attributes must be mitigated.
no code implementations • 30 Mar 2023 • Sirui Ding, Qiaoyu Tan, Chia-Yuan Chang, Na Zou, Kai Zhang, Nathan R. Hoot, Xiaoqian Jiang, Xia Hu
Organ transplantation is an essential treatment for some end-stage diseases, such as liver failure.
no code implementations • 24 Mar 2023 • Chia-Yuan Chang, Jiayi Yuan, Sirui Ding, Qiaoyu Tan, Kai Zhang, Xiaoqian Jiang, Xia Hu, Na Zou
To tackle these challenges, deep learning frameworks have been created to match patients to trials.
no code implementations • 26 Nov 2022 • Yu-Neng Chuang, Kwei-Herng Lai, Ruixiang Tang, Mengnan Du, Chia-Yuan Chang, Na Zou, Xia Hu
Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning.
no code implementations • 27 May 2022 • Yicheng Wang, Xiaotian Han, Chia-Yuan Chang, Daochen Zha, Ulisses Braga-Neto, Xia Hu
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
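A minimal, generic PINN sketch (not this paper's method): the network is trained so that, via automatic differentiation, it satisfies a differential equation, here the ODE u'(x) = u(x) with u(0) = 1, whose exact solution is e^x.

```python
import torch

# Generic PINN sketch: fit u(x) to the ODE u'(x) = u(x) with u(0) = 1.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    x = torch.rand(64, 1, requires_grad=True)          # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]  # du/dx
    residual = ((du - u) ** 2).mean()                  # ODE residual loss
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
    loss = residual + boundary
    opt.zero_grad(); loss.backward(); opt.step()
```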
1 code implementation • 23 Mar 2020 • Chia-Yuan Chang, Shuo-En Chang, Pei-Yung Hsiao, Li-Chen Fu
In this work, we propose an Efficient Panoptic Segmentation Network (EPSNet) to tackle panoptic segmentation with fast inference speed.
Ranked #34 on Panoptic Segmentation on COCO test-dev