Papers that have not been peer-reviewed, including preprints, are not listed here. You can access all of my papers at 🔗Google Scholar.

2025

SlideChat: A large vision-language assistant for whole-slide pathology image understanding

Ying Chen*, Guoan Wang*, Yuanfeng Ji*†, Yanjun Li, Jin Ye, Tianbin Li, Ming Hu, Rongshan Yu, Yu Qiao, Junjun He† (* co-first author; † corresponding author)

CVPR 2025 Conference Poster

We present SlideChat, the first vision-language assistant capable of understanding gigapixel whole-slide images, exhibiting excellent multimodal conversational capability and the ability to respond to complex instructions across diverse pathology scenarios.

2024

PathMethy: an interpretable AI framework for cancer origin tracing based on DNA methylation

Jiajing Xie*, Yuhang Song*, Hailong Zheng*, Shijie Luo, Ying Chen, Chen Zhang, Rongshan Yu†, Mengsha Tong† (* co-first author; † corresponding author)

Briefings in Bioinformatics 2024 Journal

We present PathMethy, a novel Transformer model integrating functional categories and crosstalk of pathways, to accurately trace the origin of tumors in CUP samples based on DNA methylation.

Tracing unknown tumor origins with a biological-pathway-based transformer model

Jiajing Xie*, Ying Chen*, Shijie Luo*, Wenxian Yang, Yuxiang Lin, Liansheng Wang, Xin Ding†, Mengsha Tong†, Rongshan Yu† (* co-first author; † corresponding author)

Cell Reports Methods 2024 Journal

Cancer of unknown primary (CUP) represents metastatic cancer where the primary site remains unidentified despite standard diagnostic procedures. To determine the tumor origin in such cases, we developed BPformer, a deep learning method integrating the transformer model with prior knowledge of biological pathways.

SurvMamba: State space model with multi-grained multi-modal interaction for survival prediction

Ying Chen, Jiajing Xie, Yuxiang Lin, Yuhang Song, Wenxian Yang, Rongshan Yu† († corresponding author)

arXiv 2024 Technical Report

We introduce SurvMamba, a state space model that performs multi-grained multi-modal interaction for survival prediction.

Generalizable whole slide image classification with fine-grained visual-semantic interaction

Hao Li, Ying Chen, Yifei Chen, Rongshan Yu†, Wenxian Yang, Liansheng Wang†, Bowen Ding, Yuchen Han† († corresponding author)

CVPR 2024 Conference Poster

In this paper, we propose a novel "Fine-grained Visual-Semantic Interaction" (FiVE) framework for WSI classification. It is designed to enhance the model's generalizability by leveraging the interaction between localized visual patterns and fine-grained pathological semantics.

2023

RAFNet: Restricted attention fusion network for sleep apnea detection

Ying Chen, Huijun Yue, Ruifeng Zou, Wenbin Lei, Wenjun Ma, Xiaomao Fan† († corresponding author)

Neural Networks 2023 Journal

In this paper, we focus on SA detection with single-lead ECG signals, which can be easily collected by a portable device. In this context, we propose a restricted attention fusion network called RAFNet for sleep apnea detection.

2022

SE-MSCNN: A Lightweight Multi-scaled Fusion Network for Sleep Apnea Detection Using Single-Lead ECG Signals

Xianhui Chen*, Ying Chen*, Wenjun Ma, Xiaomao Fan†, Ye Li† (* co-first author; † corresponding author)

BIBM 2021 Conference Oral

In this study, we propose a multi-scaled fusion network named SE-MSCNN for SA detection based on single-lead ECG signals acquired from wearable devices.
