
Shiwen Ni (倪仕文)


I am currently an Assistant Professor/Postdoc at the Institute of Advanced Computing and Digital Engineering, Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences. I received my Ph.D. degree from the Department of Computer Science and Information Engineering, National Cheng Kung University (NCKU). I was elected an honorary member of the Phi Tau Phi Scholastic Honor Society in 2023 and received the Institute of Information & Computing Machinery (IICM) Outstanding Ph.D. Dissertation Award in 2023. I was selected for SIAT's Dean's Postdoctoral Merit Special Programme in 2023 and for the National Level Talent Programme of China in 2024.

My research interests include (but are not limited to) Deep Learning, Natural Language Processing, Large Language Models, and AI for Science.

For a full list of publications and citation counts, visit my Google Scholar page.

Research and project collaborations are welcome; my email is sw.ni@siat.ac.cn.

♥ All kinds of research and project collaborations are welcome! ♥

Service

  • (Technical) Program Committee member for EMNLP 2023; PAKDD 2024; ECML-PKDD 2023, 2024; AACIP 2023; AFccT 2023; AICS 2022, 2023, 2024; CNIOT 2022, 2023, 2024.

  • Reviewer for Information Processing and Management; CAAI Transactions on Intelligence Technology; Social Network Analysis and Mining; Multimedia Systems; International Journal of Machine Learning and Cybernetics; Journal of Web Engineering; ACL ARR 2023, 2024; COLING 2024, 2025; NeurIPS 2023, 2024; EMNLP 2023, 2024; ICLR 2024; ICASSP 2023, 2024; IJCAI 2022, 2023; KDD 2022; AAMAS 2022; CIKM 2022; PAKDD 2022; ICTAI 2021; etc.

Publications

  1. Shiwen Ni*, Xiangtao Kong*, Chengming Li, Xiping Hu, Ruifeng Xu, Jia Zhu†, Min Yang†. Training on the Benchmark Is Not All You Need, AAAI 2025.

  2. Shiwen Ni*, Hao Cheng*, Min Yang†. Pre-training, Fine-tuning and Re-ranking: A Three-Stage Framework for Legal Question Answering, ICASSP 2025.

  3. Yuelin Bai*, Xinrun Du*, Yiming Liang, Yonggang Jin, Ziqiang Liu, Junting Zhou, Tianyu Zheng, Xincheng Zhang, Nuo Ma, Zekun Wang, Ruibin Yuan, Haihong Wu, Hongquan Lin, Wenhao Huang, Jiajun Zhang, Wenhu Chen, Chenghua Lin, Jie Fu, Min Yang, Shiwen Ni†, Ge Zhang†. COIG-CQIA: Quality is All You Need for Chinese Instruction Fine-tuning, NAACL 2025.

  4. Ziqiang Liu*, Feiteng Fang*, Xi Feng*, Xinrun Du*, Chenhao Zhang*, Zekun Wang, Yuelin Bai, Qixuan Zhao, Liyang Fan, Chengguang Gan, Hongquan Lin, Jiaming Li, Yuansheng Ni, Haihong Wu, Yaswanth Narsupalli, Zhigang Zheng, Chengming Li, Xiping Hu, Ruifeng Xu, Xiaojun Chen, Min Yang, Jiaheng Liu, Ruibo Liu, Wenhao Huang, Ge Zhang†, Shiwen Ni†. II-Bench: An Image Implication Understanding Benchmark for Multimodal Large Language Models, NeurIPS 2024.

  5. Zichong Wang, Zhibo Chu, Thang Viet Doan, Shiwen Ni, Min Yang, Wenbin Zhang. History, development, and principles of large language models: an introductory survey, AI and Ethics, 2024.

  6. Nan Xie*, Yuelin Bai*, Hengyuan Gao, Feiteng Fang, Qixuan Zhao, Zhijian Li, Ziqiang Xue, Liang Zhu, Shiwen Ni†, Min Yang†. DeliLaw: A Chinese Legal Counselling System Based on a Large Language Model, CIKM 2024.

  7. Jinchang Hou*, Chang Ao*, Haihong Wu, Xiangtao Kong, Zhigang Zheng, Daijia Tang, Chengming Li, Xiping Hu, Ruifeng Xu, Shiwen Ni†, Min Yang†. E-EVAL: A Comprehensive Chinese K-12 Education Evaluation Benchmark for Large Language Models, ACL 2024.

  8. Shiwen Ni*, Dingwei Chen*, Chengming Li†, Xiping Hu, Ruifeng Xu, Min Yang†. Forgetting before Learning: Utilizing Parametric Arithmetic for Knowledge Updating in Large Language Models, ACL 2024.

  9. Feiteng Fang*, Yuelin Bai*, Shiwen Ni†, Min Yang†, Xiaojun Chen, Ruifeng Xu. Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training, ACL 2024.

  10. Shiwen Ni*, Minghuan Tan*, Yuelin Bai, Fuqiang Niu, Min Yang†, Bowen Zhang†, Ruifeng Xu, Xiaojun Chen, Chengming Li, Xiping Hu, Ye Li, Jianping Fan. MoZIP: A Multilingual Benchmark to Evaluate Large Language Models in Intellectual Property, COLING 2024.

  11. Shiwen Ni, Min Yang†, Ruifeng Xu, Chengming Li, Xiping Hu. Layer-wise Regularized Dropout for Neural Language Models, COLING 2024.

  12. Shiwen Ni, Jiawen Li, Min Yang, Hung-Yu Kao†. DropAttack: A Random Dropped Weight Attack Adversarial Training for Natural Language Understanding, IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023.

  13. Shiwen Ni, Hung-Yu Kao†. KPT++: Refined Knowledgeable Prompt Tuning for Few-shot Text Classification, Knowledge-Based Systems, 2023.

  14. Shiwen Ni, Hung-Yu Kao†. Masked Siamese Prompt Tuning for Few-Shot Natural Language Understanding. IEEE Transactions on Artificial Intelligence, 2023.

  15. Jiawen Li, Ronghui Li, Shiwen Ni, Hung-Yu Kao†. EPRD: Exploiting prior knowledge for evidence-providing automatic rumor detection, Neurocomputing, 2023.

  16. Shiwen Ni, Jiawen Li, Hung-Yu Kao†. R-AT: Regularized Adversarial Training for Natural Language Understanding, EMNLP 2022.

  17. Shiwen Ni, Hung-Yu Kao†. ELECTRA Is A Zero-shot Learner, Too, arXiv preprint arXiv:2207.08141, 2022.

  18. Shiwen Ni, Jiawen Li, Hung-Yu Kao†. True or False: Does the Deep Learning Model Learn to Detect Rumors? TAAI 2021.

  19. Jiawen Li, Shiwen Ni, Hung-Yu Kao†. Meet The Truth: Leverage Objective Facts and Subjective Views for Interpretable Rumor Detection, ACL 2021.

  20. Shiwen Ni, Hung-Yu Kao†. PSForest: Improving Deep Forest via Feature Pooling and Error Screening, ACML 2020.

  * Equal contribution; † Corresponding author.
