Hi, this is Yanling Hua, a second-year Master’s student at Lund University, specializing in Virtual Reality and Augmented Reality (VR/AR).

I’m passionate about Extended Reality (XR) and Generative AI, with a focus on designing accessible, personalized, and immersive experiences.

Key areas of interest:

  1. Task-Oriented VR/AR: Building immersive, task-specific systems that boost efficiency by reducing cognitive load and enhancing user engagement.
  2. Ubiquitous Mixed Reality: Adapting generative AI models to MR environments to enrich and simplify everyday life.

News!

[2025/01/20] I am starting as a Master’s thesis worker at Sony Nordic.

Background

Master. Aug. 2023 - Jun. 2025 (Expected)
Faculty of Engineering, Major in VR/AR
Lund University
Master. Aug. 2018 - Jun. 2020
Department of Artificial Intelligence and Automation, Major in Computer Vision
Huazhong University of Science and Technology
Bachelor. Aug. 2014 - Jun. 2018
Department of Automation, Major in Automation
Hefei University of Technology

Research

Infrared image generation by pix2pix based on multi-receptive field feature fusion
Yangyang Ma, Yanling Hua, Zhengrong Zuo
International Conference on Control, Automation and Information Sciences (ICCAIS), 2021
Paper / Code

Infrared image period extension algorithm based on StarGAN
This project and the infrared image generation paper above are both part of my master's thesis on infrared image generation. The published paper generates infrared images from RGB images, while this project generates infrared images across different periods of the day.
Project Page / Paper /

Dehazing algorithm based on image fogging degree
Yanling Hua, Zhengrong Zuo
MIPPR 2019: Pattern Recognition and Computer Vision. SPIE, 2020
Project Page / Paper_v1 / Paper_v2 /

Work Experience

Talking pose generation
This project is about talking-pose generation, which I carried out independently. I explored two approaches: motion retrieval and end-to-end motion generation.
Project Page /

Semantic NeRF in unbounded scenes for autonomous driving
This project addresses the semantic auto-labeling task in autonomous driving scenes, and I carried it out independently. I use NeRF, which has shown strong performance in novel view synthesis, to generate multiple views and obtain semantic labels.
Project Page /

3D generation for game assets
This project investigates methods for constructing 3D models of game assets.
Project Page /

Projects

HeyDancing
This project focuses on a VR/AR application designed to help users improve their dance skills in an interactive environment. The system supports four core functions: motion capture using a fixed multi-camera setup, motion remapping to project movements onto 3D characters, real-time motion correction, and customizable learning.
Paper / Demo /

XR and AI
This ongoing research project surveys recent papers on XR and AI.
Project Page /

XR in dance performance
This is a brief individual research project that reviews papers on XR in dance performance.
Paper / Project Page /

VR museum
This project combines VR techniques with interaction design principles to create a virtual reality museum that provides a unique, interactive, and educational experience for a wide range of visitors.
Report / Demo /

Escape the Fire
This project is a VR program that helps people learn about fire emergencies and how to act in one.
Presentation and Demo /

Personal

I am committed to learning endlessly, living authentically, and loving deeply. I believe the meaning of life is to experience. This is my favourite quote:

“Sing like no one is listening.
Love like you’ve never been hurt.
Dance like nobody’s watching,
And live like it’s heaven on earth.”

Contact

E-mail: ya4736hu-s[AT]student.lu.se