About Me
Hello! I'm Kunhee Kim, a PhD student at KAIST advised by Professor Hyunjung Shim in the CVML Lab. My current research focuses on generative AI, particularly diffusion-based foundation models for image and video synthesis.
I study efficient adaptation of large generative models: how pretrained diffusion models can be adapted to new concepts, styles, or tasks with minimal supervision, parameter updates, or compute. My work spans controllable generation, personalization, and representation learning, with an emphasis on understanding how weights, activations, and optimization dynamics govern adaptation and generalization.
I am also interested in how these principles translate into scalable and efficient training systems for large-scale image and video generation. I am always open to discussions and collaborations!
Publications (Selected)
- Directional Textual Inversion for Personalized Text-to-Image Generation
  Kunhee Kim*, NaHyeon Park*, Kibeom Hong, Hyunjung Shim
  arXiv:2512.13672
  [arXiv] [Code] [Project]
- TextBoost: Boosting Text Encoder for Personalized Text-to-Image Diffusion Models
  NaHyeon Park*, Kunhee Kim*, Hyunjung Shim
  arXiv:2409.08248
  [arXiv] [Code] [Project]
- A Style-Aware Discriminator for Controllable Image Translation
  Kunhee Kim, Sanghun Park, Eunyeong Jeon, Taehun Kim, Daijin Kim
  CVPR 2022
  [arXiv] [Code]
See the full publication list on Google Scholar.
Experience
NAVER Cloud -- Visual Generation Team (Residency, Engineering Role)
Oct 2025 -- Apr 2026
- Engineering large-scale training systems for video diffusion models, focusing on robustness, scalability, and performance.
- Implementing production-quality training code, including data loading pipelines, model architectures, and end-to-end training orchestration.
- Building and maintaining distributed training pipelines on multi-node GPU clusters.