1 | | -- title: Cool Dataset |
2 | | - subtitle: a subtitle |
| 1 | +- title: Foundation Models for Computer Vision |
| 2 | + subtitle: Building powerful and efficient backbones for visual recognition |
3 | 3 | group: featured |
4 | | - image: images/assets/photo.jpg |
5 | | - link: https://github.com/ |
6 | | - description: Lorem ipsum _dolor sit amet_, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. |
7 | | - repo: greenelab/lab-website-template |
| 4 | + image: https://github.com/hustvl/Vim/raw/main/assets/vim_pipeline_v1.9.png |
| 5 | + link: https://github.com/hustvl/Vim |
| 6 | + description: We are dedicated to building the next generation of visual representation models that are both powerful and efficient. Our research explores novel architectures, from high-resolution networks (HRNet) and cutting-edge Vision Transformers (EVA) to State Space Models (Vision Mamba). These foundational models serve as robust backbones for a wide array of downstream vision tasks. |
8 | 7 | tags: |
9 | | - - resource |
| 8 | + - Foundation Models |
| 9 | + - Visual Representation |
| 10 | + - Efficient AI |
10 | 11 |
11 | | -- title: Cool Package |
12 | | - subtitle: a subtitle |
| 12 | +- title: 3D Scene Understanding and Generation |
| 13 | + subtitle: Pioneering techniques to perceive, reconstruct, and generate the 3D world
13 | 14 | group: featured |
14 | | - image: images/assets/photo.jpg |
15 | | - link: https://github.com/ |
16 | | - description: Lorem ipsum _dolor sit amet_, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. |
17 | | - repo: greenelab/lab-website-template |
| 15 | + image: https://guanjunwu.github.io/media/4dgs.gif |
| 16 | + link: https://guanjunwu.github.io/4dgs |
| 17 | + description: Our group is at the forefront of 3D vision. Our work spans from real-time dynamic scene rendering with 4D Gaussian Splatting to fast text-to-3D asset creation using GaussianDreamer. We aim to create immersive and interactive 3D experiences by bridging the gap between 2D images and 3D understanding. |
18 | 18 | tags: |
19 | | - - resource |
| 19 | + - 3D Vision |
| 20 | + - Generative Models |
| 21 | + - Scene Reconstruction |
20 | 22 |
21 | | -- title: Cool Tutorial |
22 | | - subtitle: a subtitle |
23 | | - image: images/assets/photo.jpg |
24 | | - link: https://github.com/ |
25 | | - description: Lorem ipsum _dolor sit amet_, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. |
26 | | - repo: greenelab/lab-website-template |
27 | | - tags: |
28 | | - - resource |
29 | | - - publication |
30 | | - |
31 | | -- title: Cool Web App |
32 | | - subtitle: a subtitle |
33 | | - image: images/assets/photo.jpg |
34 | | - link: https://github.com/ |
35 | | - description: Lorem ipsum _dolor sit amet_, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. |
36 | | - repo: greenelab/lab-website-template |
| 23 | +- title: Perception for Autonomous Driving |
| 24 | + subtitle: Developing robust and reliable perception systems for self-driving |
| 25 | + group: featured |
| 26 | + image: https://github.com/hustvl/VAD/raw/main/assets/vad_demo.gif |
| 27 | + link: https://github.com/hustvl/VAD |
| 28 | + description: We are developing the full perception stack for autonomous driving. Our research covers online HD map construction (MapTR), 3D object detection, and end-to-end vectorized driving systems (VAD). Our goal is to create AI that can safely and efficiently navigate complex real-world traffic scenarios.
37 | 29 | tags: |
38 | | - - software |
| 30 | + - Autonomous Driving |
| 31 | + - 3D Perception |
| 32 | + - End-to-End Systems |
39 | 33 |
40 | | -- title: Cool Web Server |
41 | | - subtitle: a subtitle |
42 | | - image: images/assets/photo.jpg |
43 | | - link: https://github.com/ |
44 | | - description: Lorem ipsum _dolor sit amet_, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. |
45 | | - repo: greenelab/lab-website-template |
| 34 | +- title: Open-World Object Understanding |
| 35 | + subtitle: Enabling AI to detect and track any object in the open world |
| 36 | + image: https://github.com/AILab-CVC/YOLO-World/raw/main/assets/yolo-world.gif |
| 37 | + link: https://github.com/AILab-CVC/YOLO-World |
| 38 | + description: Our research pushes beyond traditional closed-set recognition. We develop methods for real-time open-vocabulary object detection (YOLO-World) and robust multi-object tracking in complex scenes (ByteTrack). Our work allows models to detect and track any object described by natural language, making AI more flexible and adaptable. |
46 | 39 | tags: |
47 | | - - software |
| 40 | + - Object Detection |
| 41 | + - Multi-Object Tracking |
| 42 | + - Open Vocabulary |
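
For context: in the greenelab/lab-website-template this data file (hypothetically _data/projects.yaml) is rendered by the template's list include, and the group: featured field controls which entries surface in the featured section of the projects page. A minimal sketch, assuming the stock template components; the exact include arguments may differ between template versions:

    {% include list.html component="card" data="projects" filters="group: featured" %}
    {% include list.html component="card" data="projects" filters="group: " %}

The first line renders the featured cards (the first three entries above); the second renders entries without a group, such as the Open-World Object Understanding project.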