From 185a670aa5f2b4a21ca2b89a7e820e8d40c65ceb Mon Sep 17 00:00:00 2001
From: Zicheng Zhang <58689334+zzc-1998@users.noreply.github.com>
Date: Tue, 27 Feb 2024 11:45:57 +0800
Subject: [PATCH] Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index e97f209..cc37773 100644
--- a/README.md
+++ b/README.md
@@ -73,6 +73,7 @@ The proposed Q-Bench includes three realms for low-level vision: perception (A1)
 - For assessment (A3), as we use **public datasets**, we provide an abstract evaluation code for arbitrary MLLMs for anyone to test.
 
 ## Release
+- [2/27] 🔥 Our work **Q-Instruct** has been accepted by CVPR 2024! Check out the [details](https://github.com/Q-Future/Q-Instruct) on how to instruct MLLMs on low-level vision.
 - [2/23] 🔥 The low-level vision compare task part of [Q-bench+](https://arxiv.org/abs/2402.07116) is now released at [Huggingface](https://huggingface.co/datasets/q-future/q-bench2)!
 - [2/10] 🔥 We are releasing the extended [Q-bench+](https://arxiv.org/abs/2402.07116), which challenges MLLMs with both single images and **image pairs** on low-level vision. The [LeaderBoard](https://huggingface.co/spaces/q-future/Q-Bench-Leaderboard) is online; check out the low-level vision ability of your favorite MLLMs! More details coming soon.
 - [1/16] 🔥 Our work ["Q-Bench: A Benchmark for General-Purpose Foundation Models on Low-level Vision"](https://arxiv.org/abs/2309.14181) has been accepted by **ICLR 2024 as a Spotlight Presentation**.