Welcome
My name is Yihong Chen. I study AI knowledge acquisition, with a particular focus on how systems learn to abstract, represent, and deploy concepts and symbols efficiently.
I am open to collaborations related to embedding learning, link prediction, and language modeling.
If you would like to get in touch, please email yihong-chen AT outlook DOT com.
News
- Jul 2025, My PhD thesis has been released: Knowledge engines need not just structure, but also destructuring — for plasticity, flow, and adaptability.
- Mar 2025, We introduced a new tool for transformer interpretability: jet expansion.
- Mar 2024, Quanta Magazine covered our research on active forgetting. The article is available here.
- Dec 2023, We presented our work on forgetting at NeurIPS 2023 (poster available here).
- Sep 2023, Talk at the IST–Unbabel Seminar on learning with forgetting.
- Jul 2023, Presentation at ELLIS Unconference 2023 on forgetting in language modeling (slides available here).
- Jul 2023, Our paper Improving Language Plasticity via Pretraining with Active Forgetting demonstrates how forgetting-enhanced pretraining enables rapid adaptation to new languages.
- Nov 2022, Our paper REFACTOR GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective appeared at NeurIPS 2022.
- Jun 2022, We released ssl-relation-prediction, a hands-on repository for experimenting with link prediction.