From 03d1163167fad877d11c0f65fd56a5abddce4d40 Mon Sep 17 00:00:00 2001
From: johind
Date: Fri, 19 Jan 2024 12:08:25 +0100
Subject: [PATCH] Update README.md

Fix: unrecognized arguments: --model_path
---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index cb88fc3..2c4f900 100644
--- a/README.md
+++ b/README.md
@@ -215,19 +215,19 @@ After preparing the datasets, you can evaluate pre-trained **OneAlign** as follo
 - Image Quality Assessment (IQA)
 
 ```shell
-python q_align/evaluate/iqa_eval.py --model_path q-future/one-align --device cuda:0
+python q_align/evaluate/iqa_eval.py --model-path q-future/one-align --device cuda:0
 ```
 
 - Image Aesthetic Assessment (IAA)
 
 ```shell
-python q_align/evaluate/iaa_eval.py --model_path q-future/one-align --device cuda:0
+python q_align/evaluate/iaa_eval.py --model-path q-future/one-align --device cuda:0
 ```
 
 - Video Quality Assessment (VQA)
 
 ```shell
-python q_align/evaluate/vqa_eval.py --model_path q-future/one-align --device cuda:0
+python q_align/evaluate/vqa_eval.py --model-path q-future/one-align --device cuda:0
 ```
 
 See our [model zoo](./model_zoo) for all available models that you can use.