About
Hi, I’m Junjie Oscar Yin
I am deeply interested in building efficient, trustworthy, and expressive AI systems (LLMs) that possess a deeper understanding of language and vision. Below is a list of things that I am constantly thinking about and working toward:
- Making Training & Deployment of LLMs More Accessible. I am excited by novel methods to efficiently pre-train, fine-tune/instruction-tune, and evaluate LLMs.
- Toward Multi-Modality. I am drawn to work that pushes LLMs to reason about, generate, and make inferences about the 2.5D/3D visual world.
During my undergraduate years at Johns Hopkins and Oxford, I have been fortunate to be advised by Alan Yuille and Volodymyr Kuleshov.
Recent Updates
Below are my most recent updates:
- Selected for Honorable Mention of the 2024 Computing Research Association’s (CRA) Outstanding Undergraduate Researcher Award (URA). Dec 2023.
Publications
A selected list of my publications is available here:
ModuLoRA: Finetuning 2-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers.
Junjie Oscar Yin, Jiahao Dong, Yingheng Wang, Christopher De Sa, Volodymyr Kuleshov.
TMLR 2024. (Featured Certification. Presentation at ICLR 2024)
[Blog Post] [Public Codebase] [Paper]

EasyRet3D: Uncalibrated Multi-view Multi-Human 3D Tracking and Reconstruction.
Junjie Oscar Yin, Yi Zhang, Ting Li, Jiahao Wang, Alan Yuille.
In Submission to ECCV 2024.
[Project Site] [Manuscript]

How LLM-powered Conversational Agents Influence Decision Making in Domestic Medical Triage Contexts.
*Junjie Oscar Yin, *Catalina Gomez, Chien-Ming Huang, Mathias Unberath.
In Submission to JAMIA 2024.
[Manuscript]