This project aims to develop personalized diffusion models that enable users to generate, edit, and interact with 3D character assets using intuitive text prompts and image uploads.
Research Area:
Deep Learning, Computer Vision
The proposed research project will develop personalized diffusion models that let users generate, edit, and interact with 3D character assets through intuitive text prompts and image uploads. By optimizing the diffusion process and integrating natural language processing and computer vision techniques, the project will translate user inputs into high-quality 3D characters, with an intuitive user interface enabling real-time adjustment and refinement. This technology promises to enhance creative workflows in gaming, animation, and virtual reality, making 3D character generation more accessible and personalized.
The project will employ mathematical modeling to optimize the diffusion process for 3D character generation, natural language processing techniques to interpret text prompts, and computer vision methods to analyze uploaded images. User studies and defined performance metrics will be used to evaluate effectiveness and guide iterative improvements.
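To give a concrete feel for the diffusion component described above, the following is a minimal, self-contained PyTorch sketch of the standard noise-prediction training step used by text-conditioned diffusion models. It is illustrative only: the tiny MLP denoiser, tensor shapes, linear noise schedule, and placeholder prompt/image embeddings are assumptions made for this sketch, not the project's actual architecture, 3D representation, or encoders.

```python
# Minimal sketch of one conditional diffusion training step (epsilon-prediction).
# All shapes, the toy denoiser, and the placeholder embeddings are illustrative.
import torch
import torch.nn as nn

T = 1000                                          # number of diffusion timesteps
betas = torch.linspace(1e-4, 0.02, T)             # simple linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class TinyConditionalDenoiser(nn.Module):
    """Toy stand-in for a text/image-conditioned noise-prediction network."""
    def __init__(self, x_dim=64, cond_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + cond_dim + 1, 256), nn.SiLU(),
            nn.Linear(256, x_dim),
        )

    def forward(self, x_t, t, cond):
        t_embed = (t.float() / T).unsqueeze(-1)   # crude timestep embedding
        return self.net(torch.cat([x_t, cond, t_embed], dim=-1))

model = TinyConditionalDenoiser()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One training step: in a real system x0 would be a latent of the user's
# character asset and cond an embedding of their text prompt or reference image.
x0 = torch.randn(8, 64)                           # placeholder clean latents
cond = torch.randn(8, 32)                         # placeholder prompt/image embeddings
t = torch.randint(0, T, (8,))
noise = torch.randn_like(x0)
a_bar = alphas_cumprod[t].unsqueeze(-1)
x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise  # forward diffusion q(x_t | x_0)

opt.zero_grad()
loss = ((model(x_t, t, cond) - noise) ** 2).mean()    # predict the added noise
loss.backward()
opt.step()
print(f"denoising loss: {loss.item():.4f}")
```

In a full pipeline one would expect the placeholder latents and embeddings to come from a real 3D asset representation and from pretrained text/image encoders; personalization would then amount to fine-tuning (parts of) the denoiser on a user's own assets and prompts.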
Offering:
A PhD scholarship for 3.5 years at the RTP stipend rate ($40,109 per annum in 2024). International applicants will have their tuition fees covered.
Successful candidates must have:
Additionally, candidates are expected to have a publication record in relevant venues.
How to apply:
To apply, email the following to c.xu@sydney.edu.au:
The opportunity ID for this research project is 3543.