Xiaodan Zhu

I am an Associate Professor at the Department of Electrical and Computer Engineering at Queen's University, a Mitchell Professor and the AI Lead at the Ingenuity Labs Research Institute at Queen's University, and a Faculty Affiliate at the Vector Institute for Artificial Intelligence in Toronto. I received my Ph.D. from the Department of Computer Science at the University of Toronto and my Master's degree from the Department of Computer Science and Technology at Tsinghua University. I was a researcher at the National Research Council Canada from 2010 to 2017.
Mitchell Professor and AI Lead, Ingenuity Labs Research Institute, Queen's University
Associate Professor, Department of Electrical and Computer Engineering, Queen's University
Faculty Affiliate, Vector Institute for Artificial Intelligence
Research Interests
Natural Language Processing, Machine Learning, Artificial Intelligence.

firstname.lastname@queensu.ca

For my research community, I served as a chair for the 33rd Canadian Conference on Artificial Intelligence. I will be a Co-Editor-in-Chief of the ACL Rolling Review (ARR) for a two-year term starting in September 2024. ARR handles reviewing for the major NLP conferences, receiving ~10,000 paper submissions every two months. I am an Associate Editor for the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP). In the past, I was on the COLING '20 and ACL '19 Best Paper Selection Committees. I was a co-chair for SemEval '20 and '19; senior area chair for EMNLP '23, NAACL '22, and ACL '21; workshop chair for EMNLP '24 and COLING '20; area chair for ACL '19 and '18, EMNLP '19 and '18, and NAACL '19; and publication chair for COLING '18.
Together with my students and collaborators, I have four papers accepted to the EMNLP 2024 main conference and Findings. Congratulations to the co-authors!
Yihao Fang's paper, "HGOT: Hierarchical Graph of Thoughts for Retrieval-Augmented In-Context Learning in Factuality Evaluation", received the Best Long Paper Award at the NAACL-2024 TrustNLP Workshop. Congratulations, Yihao! [link]
I received the JP Morgan Faculty Research Award in 2022 and 2023 [link]. I thank JP Morgan for supporting my research at Queen's University on robust reasoning models over text. Some of our research results have been reported by Bloomberg News, Fortune, Business Insider, and other outlets.
My master's student Stephen Obadinma's work received the Best Paper Award at the 34th Canadian Conference on Artificial Intelligence. The paper investigates miscalibration on imbalanced data and studies COVID-19 "hate speech". Congratulations, Stephen! [paper] [photo1] For more of our research, please refer to our publications.
My proposal was selected and funded by the highly competitive NSERC Discovery Accelerator Supplement (DAS) program (three years of support), in addition to being funded by an NSERC Discovery Grant (five years of support). The proposal also received the DND Supplement (an additional three years of support).
At NAACL-2019, I gave a tutorial (with Samuel Bowman, NYU) on "Deep Learning for Natural Language Inference" (Slides). Earlier, at ACL-2017, I gave a tutorial with Edward Grefenstette (Google DeepMind) on "Deep Learning for Semantic Composition" (Slides). At EMNLP-2014, Saif Mohammad (NRC Canada) and I gave a tutorial on "Sentiment Analysis of Social Media Texts" (Slides).
In collaboration with Prof. Samuel Dahan of Queen's Law School and a group of excellent colleagues, we are funded by the competitive New Frontiers in Research Fund. The project uses NLP and deep learning to help solve several exciting legal problems.
My PhD student Xiaoyu Yang received the Vector Institute Scholarship in Artificial Intelligence in 2019 and a Borealis AI Fellowship in 2021. My MSc student Stephen Obadinma received the Vector Institute Scholarship in Artificial Intelligence in 2020 and an NSERC Canada Graduate Scholarship (CGS) in 2021. Well done, Xiaoyu and Stephen!
We organized SemEval-2020 Task 5 on Counterfactual Recognition. The task definition, data, leaderboard, and performance of the participating teams are available here: [link]. I was also involved in Task 4 on Commonsense Validation and Explanation, available here: [link]. We invite you to use these datasets in related research and post-competition evaluation.
Tree LSTM: We proposed the tree-structured LSTM, which was published at the International Conference on Machine Learning; see the paper at [ICML], or [ArXiv] for an earlier version.
Sentiment Analysis for Tweets: We built top-ranked systems in a number of SemEval competitions on sentiment analysis for tweets; see here for details and our publications for the papers.
Our paper on semantic compositionality received the Adam Kilgarriff *SEM Best Paper Award for Lexical Semantics at the Joint Conference on Lexical and Computational Semantics (*SEM), Denver, Colorado. [paper]

More (Old) News
Hui's work on dialogue disentanglement has recently been accepted to the International Joint Conference on Artificial Intelligence (IJCAI-2020). Zhan's work on image captioning has been accepted to ACL-2020 as a long paper. Earlier, our collaboration with Yongfei, Bo, and Xuming on visual grounding appeared at the AAAI Conference on Artificial Intelligence (AAAI-2020). Two other papers, on few-shot classification and dialogue modelling, will also appear at ACL-2020.
In 2020, Zhan will do a research internship at Samsung's research lab in California; Yufei will do another internship at the IBM T.J. Watson Research Center in Yorktown Heights, New York; and Hui will do a research internship at Amazon's research lab in California.
The shared task that Xiaoyu led in SemEval-2020 (Task 5), "Detecting Counterfactuals", attracted over 40 submissions in total across its two subtasks. The task aims to recognize counterfactual descriptions, a form of causal reasoning.
At NAACL-2019, I gave a tutorial (with Sam Bowman, NYU) on "Deep Learning for Natural Language Inference". [slides]
In the summer of 2019, Yufei will intern at the IBM T.J. Watson Research Center in Yorktown Heights, New York.
In the fall of 2019, Hui Liu and Zi'ou Zheng will join my group as PhD students; both Hui and Zi'ou graduated from Peking University. Zhan Shi joined my group in January 2019 as a PhD student. Zhan received his MSc from Fudan University. Before joining Queen's, Zhan published in top venues and co-authored a paper that received an outstanding paper award at ACL. Welcome, Zhan!
I am excited to have Yufei Feng (Stats MSc, U. Chicago), Xiaoyu Yang (CAS), and Tianda Li (Nankai U) joining my lab in September 2018 as PhD or MSc students. Yuping Ruan from the University of Science and Technology of China will visit my lab. Welcome, everyone! Welcome, Yuping!
Served COLING-18 as a Publication Co-Chair.
Served ACL-18 as an Area Chair.
Served COLING-18 as an Area Chair.
From 2018 to 2020, I will be a co-chair of SemEval.
On November 29th, 2017, I will give a talk at McMaster.AI at McMaster University.
In November 2017, I reviewed a book proposal for Cambridge University Press.
On October 13th 2017, I helped the Creative Destruction Lab (CDL) review six startup companies and their proposals. (CDL is one of the most successful seed-stage programs in Canada.)
The proposal that Prof. Stan Matwin and I submitted to AIR (Alibaba Innovation Research) has been accepted and will be funded.
On August 14th 2017, I gave a keynote talk at the KDD-2017 WISDOM workshop on neural network models for contrasting meaning representation and sentiment composition.
On July 30th, 2017, I gave a tutorial (together with Edward Grefenstette, Google DeepMind) on "Deep Learning for Semantic Composition" at ACL-2017. [slides][description]

Congratulations to Qian! We developed a model (alpha) that ranked among the top systems in the RepEval-17 Shared Task [link]. The neural networks encode the meaning of sentences into fixed-size vectors; the effectiveness of the models is evaluated on Natural Language Inference. [paper]
From July 1, 2016 to June 30, 2019, I will serve a three-year term (part-time) with NSERC (Natural Sciences and Engineering Research Council of Canada) as an Evaluation Group (EG) member, evaluating Discovery Grant applications in the area of Computer Science (link).

Employing our Tree-LSTM, Qian Chen's recent work on Natural Language Inference (paper link) has achieved state-of-the-art results on the Stanford Natural Language Inference benchmark (link). (Qian is a visiting student I co-supervise. He will visit me and Diana Inkpen at the University of Ottawa.)

Our paper exploring neural nets for semantic compositionality, "DAG-Structured Recurrent Neural Networks for Semantic Compositionality" has been published at NAACL-2016 (link).

On Feb. 22, 2016, I gave a talk at Ottawa Machine Learning Meetup on "Deep Learning for Text Mining: Case Studies on Social Media and Medical Text" (talk announcement).

Between July and November 2015, I gave talks at the University of Toronto, McGill University, UMass Medical School, the University of Ottawa, and Baidu Inc. on our recent work on neural networks for semantics.

Our work on long short-term memory over tree structures has been accepted to the International Conference on Machine Learning. Here is the ICML version with updated results: [ICML]. The older arXiv version can be found at [ArXiv].
Our paper was presented at the NIPS-2014 Workshop on Representation and Learning Methods for Complex Outputs. [paper]
Gave a tutorial at EMNLP-2014 in Doha on "Sentiment Analysis of Social Media Texts."
In SemEval-2014 Task 9: Sentiment Analysis in Twitter, we ranked first in five of the ten subtask-domain combinations among about 40 teams. In SemEval-2014 Task 4: Aspect Based Sentiment Analysis, our models ranked first in three of the six subtasks among about 30 teams. [paper-1][2]
Our ACL-2014 paper on sentiment analysis (negation modeling). [paper]
Our paper on machine translation of sentiment, presented at EACL-2014. [paper]