diff --git a/.github/workflows/sample_ubuntu_x86_Pikachu_python.yaml b/.github/workflows/sample_ubuntu_x86_Pikachu_python.yaml new file mode 100644 index 00000000..a111699f --- /dev/null +++ b/.github/workflows/sample_ubuntu_x86_Pikachu_python.yaml @@ -0,0 +1,45 @@ +# This GitHub Actions workflow is designed for a CMake project running on a single platform (Ubuntu-x86). +# For multi-platform testing, see the link below. +# Refer to: https://github.com/actions/starter-workflows/blob/main/ci/cmake-multi-platform.yml +name: Run Ubuntu-x86 Test Pikachu from Python Native + +# Trigger this workflow on push or pull request to the "master" branch +on: + push: + branches: ["master"] + pull_request: + branches: ["master"] + +# Define environment variables shared across jobs +env: + # Set the CMake build type (e.g., Release, Debug, RelWithDebInfo, etc.) + BUILD_TYPE: Release + +# The jobs section defines all individual tasks for the CI workflow +jobs: + build: + # Run this job on the latest Ubuntu environment provided by GitHub + runs-on: ubuntu-latest + + # Define steps for this job + steps: + # Step 1: Check out the code from the repository + - uses: actions/checkout@v4 + + # Step 2: Clone the 3rdparty dependency repository (with its submodules) + - name: Clone 3rdparty dependencies + run: | + git clone --recurse-submodules https://github.com/HyperInspire/3rdparty.git + + # Step 3: Install necessary dependencies for building the CMake project + - name: Install dependencies + run: | + sudo apt-get update # Update package lists + # Install build tools and required libraries for video processing + sudo apt-get install -y build-essential libgtk-3-dev libavcodec-dev libavformat-dev libjpeg-dev libswscale-dev + + # Step 4: Run a separate script that downloads the dataset, configures CMake, and builds + - name: Download Dataset And Configure CMake + # Execute a pre-existing script to handle CMake configuration and building + # The script is located at `ci/quick_test_linux_x86_usual_python_native_interface.sh` + run: bash 
ci/quick_test_linux_x86_usual_python_native_interface.sh diff --git a/CMakeLists.txt b/CMakeLists.txt index 448467c4..5a7e798b 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -9,7 +9,7 @@ set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O3") # Current version set(INSPIRE_FACE_VERSION_MAJOR 1) set(INSPIRE_FACE_VERSION_MINOR 1) -set(INSPIRE_FACE_VERSION_PATCH 3) +set(INSPIRE_FACE_VERSION_PATCH 4) # Converts the version number to a string string(CONCAT INSPIRE_FACE_VERSION_MAJOR_STR ${INSPIRE_FACE_VERSION_MAJOR}) diff --git a/README.md b/README.md index 8554f988..4f287137 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,7 @@ # InspireFace [![GitHub release](https://img.shields.io/github/v/release/HyperInspire/InspireFace.svg?style=for-the-badge&color=blue)](https://github.com/HyperInspire/InspireFace/releases/latest) [![build](https://img.shields.io/github/actions/workflow/status/HyperInspire/InspireFace/release-sdks.yaml?&style=for-the-badge&label=build)](https://img.shields.io/github/actions/workflow/status/HyperInspire/InspireFace/release-sdks.yaml?&style=for-the-badge&label=build) +[![test](https://img.shields.io/github/actions/workflow/status/HyperInspire/InspireFace/test_ubuntu_x86_Pikachu.yaml?&style=for-the-badge&label=test)](https://img.shields.io/github/actions/workflow/status/HyperInspire/InspireFace/test_ubuntu_x86_Pikachu.yaml?&style=for-the-badge&label=test) InspireFace is a cross-platform face recognition SDK developed in C/C++, supporting multiple operating systems and various backend types for inference, such as CPU, GPU, and NPU. @@ -10,6 +11,10 @@ Please contact [contact@insightface.ai](mailto:contact@insightface.ai?subject=In ## Change Logs +**`2024-07-05`** Fixed some bugs in the Python ctypes interface. + +**`2024-07-03`** Added the blink detection algorithm to the face interaction module. + **`2024-07-02`** Fixed several bugs in the face detector with multi-level input. **`2024-06-27`** Verified iOS usability and fixed some bugs. 
@@ -52,7 +57,7 @@ You can download the model package files containing models and configurations ne If you intend to use the SDK locally or on a server, ensure that OpenCV is installed on the host device beforehand to enable successful linking during the compilation process. For cross-compilation targets like Android or ARM embedded boards, you can use the pre-compiled OpenCV libraries provided by **3rdparty/inspireface-precompile/opencv/**. ### 1.4. Installing MNN -The '3rdparty' directory already includes the MNN library and specifies a particular version as the stable version. If you need to enable or disable additional configuration options during compilation, you can refer to the CMake Options provided by MNN. If you need to use your own precompiled version, feel free to replace it. +The '**3rdparty**' directory already includes the MNN library and specifies a particular version as the stable version. If you need to enable or disable additional configuration options during compilation, you can refer to the CMake Options provided by MNN. If you need to use your own precompiled version, feel free to replace it. ### 1.5. Requirements @@ -298,14 +303,14 @@ In the project, there is a subproject called cpp/test. To compile it, you need t ```bash cmake -DISF_BUILD_WITH_TEST=ON .. ``` -If you need to run test cases, you will need to download the required [resource files](https://drive.google.com/file/d/1i4uC-dZTQxdVgn2rP0ZdfJTMkJIXgYY4/view?usp=sharing), which are **test_res** and **Model Package** respectively. Unzip the pack file into the test_res folder. The directory structure of test_res should be prepared as follows before testing: +If you need to run test cases, you will need to download the required [resource files](https://drive.google.com/file/d/1i4uC-dZTQxdVgn2rP0ZdfJTMkJIXgYY4/view?usp=sharing): **test_res**. Unzip the downloaded file to obtain the **test_res** folder. 
The directory structure of test_res should be prepared as follows before testing: ```bash test_res ├── data ├── images -├── pack <- unzip pack.zip +├── pack <-- The model package files are here ├── save ├── valid_lfw_funneled.txt ├── video @@ -352,17 +357,17 @@ The following functionalities and technologies are currently supported. | 6 | Silent Liveness Detection | ![Static Badge](https://img.shields.io/badge/STABLE-blue?style=for-the-badge) | MiniVision | | 7 | Face Quality Detection | ![Static Badge](https://img.shields.io/badge/STABLE-blue?style=for-the-badge) | | | 8 | Face Pose Estimation | ![Static Badge](https://img.shields.io/badge/STABLE-blue?style=for-the-badge) | | -| 9 | Age Prediction | ![Static Badge](https://img.shields.io/badge/PENDING-yellow?style=for-the-badge) | | -| 10 | Cooperative Liveness Detection | ![Static Badge](https://img.shields.io/badge/PENDING-yellow?style=for-the-badge) | | +| 9 | Face Attribute Prediction | ![Static Badge](https://img.shields.io/badge/STABLE-blue?style=for-the-badge) | Age, Race, Gender | +| 10 | Cooperative Liveness Detection | ![Static Badge](https://img.shields.io/badge/DEVELOP-green?style=for-the-badge) | Blink | ## 6. Models Package List -For different scenarios, we currently provide several Packs, each containing multiple models and configurations. +For different scenarios, we currently provide several Packs, each containing multiple models and configurations. The package files are placed in the **pack** subdirectory under the **test_res** directory. 
| Name | Supported Devices | Note | Link | | --- | --- | --- | --- | -| Pikachu | CPU | Lightweight edge-side model | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) | -| Megatron | CPU, GPU | Local or server-side model | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) | +| Pikachu | CPU | Lightweight edge-side models | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) | +| Megatron | CPU, GPU | Mobile and server models | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) | | Gundam-RV1109 | RKNPU | Supports RK1109 and RK1126 | [GDrive](https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing) | diff --git a/ci/quick_test_linux_x86_usual.sh b/ci/quick_test_linux_x86_usual.sh index e60170b8..f915d1d7 100644 --- a/ci/quick_test_linux_x86_usual.sh +++ b/ci/quick_test_linux_x86_usual.sh @@ -54,8 +54,14 @@ cmake -DCMAKE_BUILD_TYPE=Release \ # Compile the project using 4 parallel jobs make -j4 -# Create a symbolic link to the extracted test data directory -ln -s ${FULL_TEST_DIR} . +# Check if the symbolic link or directory already exists +if [ ! -e "$(basename ${FULL_TEST_DIR})" ]; then + # Create a symbolic link to the extracted test data directory + ln -s ${FULL_TEST_DIR} . + echo "Symbolic link to '${TARGET_DIR}' created." +else + echo "Symbolic link or directory '$(basename ${FULL_TEST_DIR})' already exists. Skipping creation." +fi # Check if the test executable file exists if [ ! 
-f "$TEST_EXECUTABLE" ]; then diff --git a/ci/quick_test_linux_x86_usual_python_native_interface.sh b/ci/quick_test_linux_x86_usual_python_native_interface.sh new file mode 100644 index 00000000..7ca20b56 --- /dev/null +++ b/ci/quick_test_linux_x86_usual_python_native_interface.sh @@ -0,0 +1,71 @@ +#!/bin/bash + +# Exit immediately if any command exits with a non-zero status +set -e + +ROOT_DIR="$(pwd)" +TARGET_DIR="test_res" +DOWNLOAD_URL="https://github.com/tunmx/inspireface-store/raw/main/resource/test_res-lite.zip" +ZIP_FILE="test_res-lite.zip" +BUILD_DIRNAME="ubuntu18_shared" + +# Check if the target directory already exists +if [ ! -d "$TARGET_DIR" ]; then + echo "Directory '$TARGET_DIR' does not exist. Downloading..." + + # Download the dataset zip file + wget -q "$DOWNLOAD_URL" -O "$ZIP_FILE" + + echo "Extracting '$ZIP_FILE' to '$TARGET_DIR'..." + # Unzip the downloaded file + unzip "$ZIP_FILE" + + # Remove the downloaded zip file and unnecessary folders + rm "$ZIP_FILE" + rm -rf "__MACOSX" + + echo "Download and extraction complete." +else + echo "Directory '$TARGET_DIR' already exists. Skipping download." 
+fi + +# Get the absolute path of the target directory +FULL_TEST_DIR="$(realpath ${TARGET_DIR})" + +# Create the build directory if it doesn't exist +mkdir -p build/${BUILD_DIRNAME}/ + +# Change directory to the build directory +# Disable the shellcheck warning for potential directory changes +# shellcheck disable=SC2164 +cd build/${BUILD_DIRNAME}/ + +# Configure the CMake build system +cmake -DCMAKE_BUILD_TYPE=Release \ + -DISF_BUILD_WITH_SAMPLE=OFF \ + -DISF_BUILD_WITH_TEST=OFF \ + -DISF_ENABLE_BENCHMARK=OFF \ + -DISF_ENABLE_USE_LFW_DATA=OFF \ + -DISF_ENABLE_TEST_EVALUATION=OFF \ + -DOpenCV_DIR=3rdparty/inspireface-precompile/opencv/4.5.1/opencv-ubuntu18-x86/lib/cmake/opencv4 \ + -DISF_BUILD_SHARED_LIBS=ON ../../ + +# Compile the project using 4 parallel jobs +make -j4 + +# Come back to project root dir +cd ${ROOT_DIR} + +# Important: You must copy the compiled dynamic library to this path! +cp build/${BUILD_DIRNAME}/lib/libInspireFace.so python/inspireface/modules/core/ + +# Install dependency +pip install opencv-python +pip install click +pip install loguru + +cd python/ + +# Run sample +python sample_face_detection.py ../test_res/pack/Pikachu ../test_res/data/bulk/woman.png + diff --git a/ci/quick_test_local.sh b/ci/quick_test_local.sh index b34c66b3..f3c51f74 100644 --- a/ci/quick_test_local.sh +++ b/ci/quick_test_local.sh @@ -66,3 +66,5 @@ else echo "Test executable found. Running tests..." 
"$TEST_EXECUTABLE" fi + +# Executing python scripts \ No newline at end of file diff --git a/cpp/inspireface/c_api/inspireface.cc b/cpp/inspireface/c_api/inspireface.cc index 079752a4..69747311 100644 --- a/cpp/inspireface/c_api/inspireface.cc +++ b/cpp/inspireface/c_api/inspireface.cc @@ -100,13 +100,13 @@ HResult HFReleaseInspireFaceSession(HFSession handle) { HResult HFCreateInspireFaceSession(HFSessionCustomParameter parameter, HFDetectMode detectMode, HInt32 maxDetectFaceNum, HInt32 detectPixelLevel, HInt32 trackByDetectModeFPS, HFSession *handle) { inspire::ContextCustomParameter param; param.enable_mask_detect = parameter.enable_mask_detect; - param.enable_age = parameter.enable_age; + param.enable_face_attribute = parameter.enable_face_quality; param.enable_liveness = parameter.enable_liveness; param.enable_face_quality = parameter.enable_face_quality; - param.enable_gender = parameter.enable_gender; param.enable_interaction_liveness = parameter.enable_interaction_liveness; param.enable_ir_liveness = parameter.enable_ir_liveness; param.enable_recognition = parameter.enable_recognition; + param.enable_face_attribute = parameter.enable_face_attribute; inspire::DetectMode detMode = inspire::DETECT_MODE_ALWAYS_DETECT; if (detectMode == HF_DETECT_MODE_LIGHT_TRACK) { detMode = inspire::DETECT_MODE_LIGHT_TRACK; @@ -138,11 +138,8 @@ HResult HFCreateInspireFaceSessionOptional(HOption customOption, HFDetectMode de if (customOption & HF_ENABLE_IR_LIVENESS) { param.enable_ir_liveness = true; } - if (customOption & HF_ENABLE_AGE_PREDICT) { - param.enable_age = true; - } - if (customOption & HF_ENABLE_GENDER_PREDICT) { - param.enable_gender = true; + if (customOption & HF_ENABLE_FACE_ATTRIBUTE) { + param.enable_face_attribute = true; } if (customOption & HF_ENABLE_MASK_DETECT) { param.enable_mask_detect = true; @@ -508,13 +505,13 @@ HResult HFMultipleFacePipelineProcess(HFSession session, HFImageStream streamHan } inspire::ContextCustomParameter param; 
param.enable_mask_detect = parameter.enable_mask_detect; - param.enable_age = parameter.enable_age; + param.enable_face_attribute = parameter.enable_face_attribute; param.enable_liveness = parameter.enable_liveness; param.enable_face_quality = parameter.enable_face_quality; - param.enable_gender = parameter.enable_gender; param.enable_interaction_liveness = parameter.enable_interaction_liveness; param.enable_ir_liveness = parameter.enable_ir_liveness; param.enable_recognition = parameter.enable_recognition; + param.enable_face_attribute = parameter.enable_face_attribute; HResult ret; std::vector data; @@ -562,11 +559,8 @@ HResult HFMultipleFacePipelineProcessOptional(HFSession session, HFImageStream s if (customOption & HF_ENABLE_IR_LIVENESS) { param.enable_ir_liveness = true; } - if (customOption & HF_ENABLE_AGE_PREDICT) { - param.enable_age = true; - } - if (customOption & HF_ENABLE_GENDER_PREDICT) { - param.enable_gender = true; + if (customOption & HF_ENABLE_FACE_ATTRIBUTE) { + param.enable_face_attribute = true; } if (customOption & HF_ENABLE_MASK_DETECT) { param.enable_mask_detect = true; @@ -675,6 +669,23 @@ HResult HFGetFaceIntereactionResult(HFSession session, PHFFaceIntereactionResult return HSUCCEED; } +HResult HFGetFaceAttributeResult(HFSession session, PHFFaceAttributeResult results) { + if (session == nullptr) { + return HERR_INVALID_CONTEXT_HANDLE; + } + HF_FaceAlgorithmSession *ctx = (HF_FaceAlgorithmSession* ) session; + if (ctx == nullptr) { + return HERR_INVALID_CONTEXT_HANDLE; + } + + results->num = ctx->impl.GetFaceAgeBracketResultsCache().size(); + results->race = (HPInt32 )ctx->impl.GetFaceRaceResultsCache().data(); + results->gender = (HPInt32 )ctx->impl.GetFaceGenderResultsCache().data(); + results->ageBracket = (HPInt32 )ctx->impl.GetFaceAgeBracketResultsCache().data(); + + return HSUCCEED; +} + HResult HFFeatureHubGetFaceCount(HInt32* count) { *count = FEATURE_HUB->GetFaceFeatureCount(); return HSUCCEED; diff --git 
a/cpp/inspireface/c_api/inspireface.h b/cpp/inspireface/c_api/inspireface.h index 0a9d86be..fb32ebdd 100644 --- a/cpp/inspireface/c_api/inspireface.h +++ b/cpp/inspireface/c_api/inspireface.h @@ -29,8 +29,8 @@ extern "C" { #define HF_ENABLE_LIVENESS 0x00000004 ///< Flag to enable RGB liveness detection feature. #define HF_ENABLE_IR_LIVENESS 0x00000008 ///< Flag to enable IR (Infrared) liveness detection feature. #define HF_ENABLE_MASK_DETECT 0x00000010 ///< Flag to enable mask detection feature. -#define HF_ENABLE_AGE_PREDICT 0x00000020 ///< Flag to enable age prediction feature. -#define HF_ENABLE_GENDER_PREDICT 0x00000040 ///< Flag to enable gender prediction feature. +#define HF_ENABLE_FACE_ATTRIBUTE 0x00000020 ///< Flag to enable face attribute prediction feature. +#define HF_ENABLE_PLACEHOLDER_ 0x00000040 ///< - #define HF_ENABLE_QUALITY 0x00000080 ///< Flag to enable face quality assessment feature. #define HF_ENABLE_INTERACTION 0x00000100 ///< Flag to enable interaction feature. @@ -125,9 +125,8 @@ typedef struct HFSessionCustomParameter { HInt32 enable_liveness; ///< Enable RGB liveness detection feature. HInt32 enable_ir_liveness; ///< Enable IR liveness detection feature. HInt32 enable_mask_detect; ///< Enable mask detection feature. - HInt32 enable_age; ///< Enable age prediction feature. - HInt32 enable_gender; ///< Enable gender prediction feature. HInt32 enable_face_quality; ///< Enable face quality detection feature. + HInt32 enable_face_attribute; ///< Enable face attribute prediction feature. HInt32 enable_interaction_liveness; ///< Enable interaction for liveness detection feature. } HFSessionCustomParameter, *PHFSessionCustomParameter; @@ -149,7 +148,7 @@ typedef enum HFDetectMode { * @param detectMode Detection mode to be used. * @param maxDetectFaceNum Maximum number of faces to detect. 
 * @param detectPixelLevel Modify the input resolution level of the detector, the larger the better, - * the need to input a multiple of 160, such as 160, 320, 640, the default value -1 is 160. + * which must be a multiple of 160 (e.g. 160, 320, 640); the default value -1 uses 320. * @param trackByDetectModeFPS If you are using the MODE_TRACK_BY_DETECTION tracking mode, * this value is used to set the fps frame rate of your current incoming video stream, which defaults to -1 at 30fps. * @param handle Pointer to the context handle that will be returned. @@ -647,6 +646,47 @@ typedef struct HFFaceIntereactionResult { HYPER_CAPI_EXPORT extern HResult HFGetFaceIntereactionResult(HFSession session, PHFFaceIntereactionResult result); +/** + * @brief Struct representing face attribute results. + * + * This struct holds the race, gender, and age bracket attributes for a detected face. + */ +typedef struct HFFaceAttributeResult { + HInt32 num; ///< Number of faces detected. + HPInt32 race; ///< Race of each detected face. + ///< 0: Black; + ///< 1: Asian; + ///< 2: Latino/Hispanic; + ///< 3: Middle Eastern; + ///< 4: White; + HPInt32 gender; ///< Gender of each detected face. + ///< 0: Female; + ///< 1: Male; + HPInt32 ageBracket; ///< Age bracket of each detected face. + ///< 0: 0-2 years old; + ///< 1: 3-9 years old; + ///< 2: 10-19 years old; + ///< 3: 20-29 years old; + ///< 4: 30-39 years old; + ///< 5: 40-49 years old; + ///< 6: 50-59 years old; + ///< 7: 60-69 years old; + ///< 8: more than 70 years old; +} HFFaceAttributeResult, *PHFFaceAttributeResult; + +/** + * @brief Get the face attribute results. + * + * This function retrieves the attribute results such as race, gender, and age bracket + * for faces detected in the current context. + * + * @param session Handle to the session. + * @param results Pointer to the structure where face attribute results will be stored. + * @return HResult indicating the success or failure of the operation. 
+ */ +HYPER_CAPI_EXPORT extern HResult HFGetFaceAttributeResult(HFSession session, PHFFaceAttributeResult results); + + /************************************************************************ * System Function ************************************************************************/ diff --git a/cpp/inspireface/face_context.cpp b/cpp/inspireface/face_context.cpp index 29725e24..d37d65b2 100644 --- a/cpp/inspireface/face_context.cpp +++ b/cpp/inspireface/face_context.cpp @@ -42,8 +42,7 @@ int32_t FaceContext::Configuration(DetectMode detect_mode, INSPIRE_LAUNCH->getMArchive(), param.enable_liveness, param.enable_mask_detect, - param.enable_age, - param.enable_gender, + param.enable_face_attribute, param.enable_interaction_liveness ); @@ -64,6 +63,9 @@ int32_t FaceContext::FaceDetectAndTrack(CameraStream &image) { m_quality_score_results_cache_.clear(); m_react_left_eye_results_cache_.clear(); m_react_right_eye_results_cache_.clear(); + m_attribute_age_results_cache_.clear(); + m_attribute_race_results_cache_.clear(); + m_attribute_gender_results_cache_.clear(); if (m_face_track_ == nullptr) { return HERR_SESS_TRACKER_FAILURE; } @@ -133,6 +135,9 @@ int32_t FaceContext::FacesProcess(CameraStream &image, const std::vectorfaceMaskCache; } - // Age prediction - if (param.enable_age) { - auto ret = m_face_pipeline_->Process(image, face, PROCESS_AGE); - if (ret != HSUCCEED) { - return ret; - } - } - // Gender prediction - if (param.enable_age) { - auto ret = m_face_pipeline_->Process(image, face, PROCESS_GENDER); + // Face attribute prediction + if (param.enable_face_attribute) { + auto ret = m_face_pipeline_->Process(image, face, PROCESS_ATTRIBUTE); if (ret != HSUCCEED) { return ret; } + m_attribute_race_results_cache_[i] = m_face_pipeline_->faceAttributeCache[0]; + m_attribute_gender_results_cache_[i] = m_face_pipeline_->faceAttributeCache[1]; + m_attribute_age_results_cache_[i] = m_face_pipeline_->faceAttributeCache[2]; } + // Face interaction if 
(param.enable_interaction_liveness) { auto ret = m_face_pipeline_->Process(image, face, PROCESS_INTERACTION); @@ -260,6 +262,18 @@ const Embedded& FaceContext::GetFaceFeatureCache() const { return m_face_feature_cache_; } +const std::vector<int>& FaceContext::GetFaceRaceResultsCache() const { + return m_attribute_race_results_cache_; +} + +const std::vector<int>& FaceContext::GetFaceGenderResultsCache() const { + return m_attribute_gender_results_cache_; +} + +const std::vector<int>& FaceContext::GetFaceAgeBracketResultsCache() const { + return m_attribute_age_results_cache_; +} + int32_t FaceContext::FaceFeatureExtract(CameraStream &image, FaceBasicData& data) { std::lock_guard<std::mutex> lock(m_mtx_); int32_t ret; diff --git a/cpp/inspireface/face_context.h b/cpp/inspireface/face_context.h index 297c53a0..c422178a 100644 --- a/cpp/inspireface/face_context.h +++ b/cpp/inspireface/face_context.h @@ -37,8 +37,7 @@ typedef struct CustomPipelineParameter { bool enable_liveness = false; ///< Enable RGB liveness detection feature bool enable_ir_liveness = false; ///< Enable IR (Infrared) liveness detection feature bool enable_mask_detect = false; ///< Enable mask detection feature - bool enable_age = false; ///< Enable age prediction feature - bool enable_gender = false; ///< Enable gender prediction feature + bool enable_face_attribute = false; ///< Enable face attribute prediction feature bool enable_face_quality = false; ///< Enable face quality assessment feature bool enable_interaction_liveness = false; ///< Enable interactive liveness detection feature @@ -244,6 +243,24 @@ class INSPIRE_API FaceContext { */ const std::vector<float>& GetFaceInteractionRightEyeStatusCache() const; + /** + * @brief Gets the cache of face attribute race results. + * @return A const reference to a vector containing face attribute race results. + */ + const std::vector<int>& GetFaceRaceResultsCache() const; + + /** + * @brief Gets the cache of face attribute gender results. 
 + * @return A const reference to a vector containing face attribute gender results. + */ + const std::vector<int>& GetFaceGenderResultsCache() const; + + /** + * @brief Gets the cache of face attribute age bracket results. + * @return A const reference to a vector containing face attribute age bracket results. + */ + const std::vector<int>& GetFaceAgeBracketResultsCache() const; + /** * @brief Gets the cache of the current face features. * @return A const reference to the Embedded object containing current face feature data. @@ -277,6 +294,9 @@ class INSPIRE_API FaceContext { std::vector<float> m_quality_score_results_cache_; ///< Cache for RGB face quality score results std::vector<float> m_react_left_eye_results_cache_; ///< Cache for Left eye state in face interaction std::vector<float> m_react_right_eye_results_cache_; ///< Cache for Right eye state in face interaction + std::vector<int> m_attribute_race_results_cache_; ///< Cache for face attribute race results + std::vector<int> m_attribute_gender_results_cache_; ///< Cache for face attribute gender results + std::vector<int> m_attribute_age_results_cache_; ///< Cache for face attribute age bracket results Embedded m_face_feature_cache_; ///< Cache for current face feature data std::mutex m_mtx_; ///< Mutex for thread safety. diff --git a/cpp/inspireface/information.h b/cpp/inspireface/information.h index 63185a1e..c0b262fa 100644 --- a/cpp/inspireface/information.h +++ b/cpp/inspireface/information.h @@ -7,6 +7,6 @@ #define INSPIRE_FACE_VERSION_MAJOR_STR "1" #define INSPIRE_FACE_VERSION_MINOR_STR "1" -#define INSPIRE_FACE_VERSION_PATCH_STR "3" +#define INSPIRE_FACE_VERSION_PATCH_STR "4" #endif //HYPERFACEREPO_INFORMATION_H diff --git a/cpp/inspireface/pipeline_module/attribute/age_predict.cpp b/cpp/inspireface/pipeline_module/attribute/age_predict.cpp deleted file mode 100644 index b93f98d7..00000000 --- a/cpp/inspireface/pipeline_module/attribute/age_predict.cpp +++ /dev/null @@ -1,5 +0,0 @@ -// -// Created by Tunm-Air13 on 2023/9/8. 
-// - -#include "age_predict.h" diff --git a/cpp/inspireface/pipeline_module/attribute/age_predict.h b/cpp/inspireface/pipeline_module/attribute/age_predict.h deleted file mode 100644 index 493e1379..00000000 --- a/cpp/inspireface/pipeline_module/attribute/age_predict.h +++ /dev/null @@ -1,14 +0,0 @@ -// -// Created by Tunm-Air13 on 2023/9/8. -// -#pragma once -#ifndef HYPERFACEREPO_AGEPREDICT_H -#define HYPERFACEREPO_AGEPREDICT_H - - -class AgePredict { - -}; - - -#endif //HYPERFACEREPO_AGEPREDICT_H diff --git a/cpp/inspireface/pipeline_module/attribute/all.h b/cpp/inspireface/pipeline_module/attribute/all.h index 875ffb26..bb74aaa1 100644 --- a/cpp/inspireface/pipeline_module/attribute/all.h +++ b/cpp/inspireface/pipeline_module/attribute/all.h @@ -6,7 +6,6 @@ #define HYPERFACEREPO_ATTRIBUTE_ALL_H #include "mask_predict.h" -#include "gender_predict.h" -#include "age_predict.h" +#include "face_attribute.h" #endif //HYPERFACEREPO_ATTRIBUTE_ALL_H diff --git a/cpp/inspireface/pipeline_module/attribute/face_attribute.cpp b/cpp/inspireface/pipeline_module/attribute/face_attribute.cpp new file mode 100644 index 00000000..3def28eb --- /dev/null +++ b/cpp/inspireface/pipeline_module/attribute/face_attribute.cpp @@ -0,0 +1,41 @@ +// +// Created by Tunm-Air13 on 2023/9/8. 
+// + +#include "face_attribute.h" +#include "middleware/utils.h" + +namespace inspire { + +FaceAttributePredict::FaceAttributePredict(): AnyNet("FaceAttributePredict") {} + +std::vector<int> FaceAttributePredict::operator()(const Matrix& bgr_affine) { + AnyTensorOutputs outputs; + Forward(bgr_affine, outputs); + + std::vector<float> &raceOut = outputs[0].second; + std::vector<float> &genderOut = outputs[1].second; + std::vector<float> &ageOut = outputs[2].second; + + auto raceIdx = argmax(raceOut.begin(), raceOut.end()); + auto genderIdx = argmax(genderOut.begin(), genderOut.end()); + auto ageIdx = argmax(ageOut.begin(), ageOut.end()); + + // Map the raw model label to its simplified class index + std::string raceLabel = m_original_labels_[raceIdx]; + std::string simplifiedLabel = m_label_map_.at(raceLabel); + int simplifiedRaceIdx = m_simplified_label_index_.at(simplifiedLabel); + + return {simplifiedRaceIdx, 1 - (int )genderIdx, (int )ageIdx}; +} + +} // namespace inspire \ No newline at end of file diff --git a/cpp/inspireface/pipeline_module/attribute/face_attribute.h b/cpp/inspireface/pipeline_module/attribute/face_attribute.h new file mode 100644 index 00000000..3e1ac4d6 --- /dev/null +++ b/cpp/inspireface/pipeline_module/attribute/face_attribute.h @@ -0,0 +1,69 @@ +// +// Created by Tunm-Air13 on 2023/9/8. +// +#pragma once +#ifndef HYPERFACEREPO_FACEATTRIBUTE_H +#define HYPERFACEREPO_FACEATTRIBUTE_H +#include "data_type.h" +#include "middleware/any_net.h" + +namespace inspire { + +/** + * @class FaceAttributePredict + * @brief Extracts three attribute classifications (age, gender, and race) from a face image. + * + * This class inherits from AnyNet and provides methods for performing face attribute prediction. 
 + */ +class INSPIRE_API FaceAttributePredict : public AnyNet { +public: + /** + * @brief Constructor for FaceAttributePredict class. + */ + FaceAttributePredict(); + + /** + * @brief Run inference. + * + * @param bgr_affine The aligned BGR face crop to run attribute prediction on. + * @return The predicted attribute indices: {race, gender, age bracket}. + */ + std::vector<int> operator()(const Matrix& bgr_affine); + +private: + // Original labels output by the model + const std::vector<std::string> m_original_labels_ = { + "Black", "East Asian", "Indian", "Latino_Hispanic", "Middle Eastern", "Southeast Asian", "White" + }; + + // Define simplified labels + const std::vector<std::string> m_simplified_labels_ = { + "Black", "Asian", "Latino/Hispanic", "Middle Eastern", "White" + }; + + // Mapping from the original labels to the simplified labels + const std::unordered_map<std::string, std::string> m_label_map_ = { + {"Black", "Black"}, + {"East Asian", "Asian"}, + {"Indian", "Asian"}, + {"Latino_Hispanic", "Latino/Hispanic"}, + {"Middle Eastern", "Middle Eastern"}, + {"Southeast Asian", "Asian"}, + {"White", "White"} + }; + + // Index map for the simplified labels + const std::unordered_map<std::string, int> m_simplified_label_index_ = { + {"Black", 0}, + {"Asian", 1}, + {"Latino/Hispanic", 2}, + {"Middle Eastern", 3}, + {"White", 4} + }; + +}; + + +} // namespace inspire + +#endif //HYPERFACEREPO_FACEATTRIBUTE_H diff --git a/cpp/inspireface/pipeline_module/attribute/gender_predict.cpp b/cpp/inspireface/pipeline_module/attribute/gender_predict.cpp deleted file mode 100644 index 5dfdee02..00000000 --- a/cpp/inspireface/pipeline_module/attribute/gender_predict.cpp +++ /dev/null @@ -1,6 +0,0 @@ -// -// Created by Tunm-Air13 on 2023/9/8. 
-// - -#include "gender_predict.h" - diff --git a/cpp/inspireface/pipeline_module/attribute/gender_predict.h b/cpp/inspireface/pipeline_module/attribute/gender_predict.h deleted file mode 100644 index 3d60d7c3..00000000 --- a/cpp/inspireface/pipeline_module/attribute/gender_predict.h +++ /dev/null @@ -1,14 +0,0 @@ -// -// Created by Tunm-Air13 on 2023/9/8. -// -#pragma once -#ifndef HYPERFACEREPO_GENDERPREDICT_H -#define HYPERFACEREPO_GENDERPREDICT_H - - -class GenderPredict { - -}; - - -#endif //HYPERFACEREPO_GENDERPREDICT_H diff --git a/cpp/inspireface/pipeline_module/face_pipeline.cpp b/cpp/inspireface/pipeline_module/face_pipeline.cpp index 301bfff1..31b05826 100644 --- a/cpp/inspireface/pipeline_module/face_pipeline.cpp +++ b/cpp/inspireface/pipeline_module/face_pipeline.cpp @@ -12,28 +12,23 @@ namespace inspire { -FacePipeline::FacePipeline(InspireArchive &archive, bool enableLiveness, bool enableMaskDetect, bool enableAge, - bool enableGender, bool enableInteractionLiveness) +FacePipeline::FacePipeline(InspireArchive &archive, bool enableLiveness, bool enableMaskDetect, bool enableAttribute, + bool enableInteractionLiveness) : m_enable_liveness_(enableLiveness), m_enable_mask_detect_(enableMaskDetect), - m_enable_age_(enableAge), - m_enable_gender_(enableGender), + m_enable_attribute_(enableAttribute), m_enable_interaction_liveness_(enableInteractionLiveness) { - if (m_enable_age_) { - InspireModel ageModel; - auto ret = InitAgePredict(ageModel); + if (m_enable_attribute_) { + InspireModel attrModel; + auto ret = archive.LoadModel("face_attribute", attrModel); if (ret != 0) { - INSPIRE_LOGE("InitAgePredict error."); + INSPIRE_LOGE("Load Face attribute model: %d", ret); } - } - // Initialize the gender prediction model (assuming Index is 0) - if (m_enable_gender_) { - InspireModel genderModel; - auto ret = InitGenderPredict(genderModel); + ret = InitFaceAttributePredict(attrModel); if (ret != 0) { - INSPIRE_LOGE("InitGenderPredict error."); + 
INSPIRE_LOGE("InitFaceAttributePredict error."); } } @@ -156,19 +151,21 @@ int32_t FacePipeline::Process(CameraStream &image, const HyperFaceData &face, Fa auto eyeStatus = (*m_blink_predict_)(pre_crop); eyesStatusCache[i] = eyeStatus; } - break; } - case PROCESS_AGE: { - if (m_age_predict_ == nullptr) { + break; + } + case PROCESS_ATTRIBUTE: { + if (m_attribute_predict_ == nullptr) { return HERR_SESS_PIPELINE_FAILURE; // uninitialized } - break; - } - case PROCESS_GENDER: { - if (m_gender_predict_ == nullptr) { - return HERR_SESS_PIPELINE_FAILURE; // uninitialized + std::vector<cv::Point2f> pointsFive; + for (const auto &p: face.keyPoints) { + pointsFive.push_back(HPointToPoint2f(p)); } + auto trans = getTransformMatrix112(pointsFive); + trans.convertTo(trans, CV_64F); + auto crop = image.GetAffineRGBImage(trans, 112, 112); + auto outputs = (*m_attribute_predict_)(crop); + faceAttributeCache = cv::Vec3i(outputs[0], outputs[1], outputs[2]); break; } } @@ -213,17 +210,16 @@ int32_t FacePipeline::Process(CameraStream &image, FaceObject &face) { return HSUCCEED; } - -int32_t FacePipeline::InitAgePredict(InspireModel &) { - - return 0; +int32_t FacePipeline::InitFaceAttributePredict(InspireModel &model) { + m_attribute_predict_ = std::make_shared<FaceAttributePredict>(); + auto ret = m_attribute_predict_->loadData(model, model.modelType); + if (ret != InferenceHelper::kRetOk) { + return HERR_ARCHIVE_LOAD_FAILURE; + } + return HSUCCEED; } -int32_t FacePipeline::InitGenderPredict(InspireModel &model) { - return 0; -} - int32_t FacePipeline::InitMaskPredict(InspireModel &model) { m_mask_predict_ = std::make_shared<MaskPredict>(); auto ret = m_mask_predict_->loadData(model, model.modelType); diff --git a/cpp/inspireface/pipeline_module/face_pipeline.h b/cpp/inspireface/pipeline_module/face_pipeline.h index c1ba4a51..892291f2 100644 --- a/cpp/inspireface/pipeline_module/face_pipeline.h +++ b/cpp/inspireface/pipeline_module/face_pipeline.h @@ -21,8 +21,7 @@ namespace inspire { typedef enum FaceProcessFunction { PROCESS_MASK = 0, ///< Mask 
detection. PROCESS_RGB_LIVENESS, ///< RGB liveness detection. - PROCESS_AGE, ///< Age estimation. - PROCESS_GENDER, ///< Gender prediction. + PROCESS_ATTRIBUTE, ///< Face attribute estimation. PROCESS_INTERACTION, ///< Face interaction. } FaceProcessFunction; @@ -41,12 +40,11 @@ class FacePipeline { * @param archive Model archive instance for model loading. * @param enableLiveness Whether RGB liveness detection is enabled. * @param enableMaskDetect Whether mask detection is enabled. - * @param enableAge Whether age estimation is enabled. - * @param enableGender Whether gender prediction is enabled. + * @param enableAttribute Whether face attribute estimation is enabled. * @param enableInteractionLiveness Whether interaction liveness detection is enabled. */ - explicit FacePipeline(InspireArchive &archive, bool enableLiveness, bool enableMaskDetect, bool enableAge, - bool enableGender, bool enableInteractionLiveness); + explicit FacePipeline(InspireArchive &archive, bool enableLiveness, bool enableMaskDetect, bool enableAttribute, + bool enableInteractionLiveness); /** * @brief Processes a face using the specified FaceProcessFunction. @@ -71,20 +69,12 @@ class FacePipeline { private: /** - * @brief Initializes the AgePredict model. + * @brief Initializes the FaceAttributePredict model. * - * @param model Pointer to the AgePredict model. + * @param model Pointer to the FaceAttributePredict model. * @return int32_t Status code indicating success (0) or failure. */ - int32_t InitAgePredict(InspireModel &model); - - /** - * @brief Initializes the GenderPredict model. - * - * @param model Pointer to the GenderPredict model. - * @return int32_t Status code indicating success (0) or failure. - */ - int32_t InitGenderPredict(InspireModel &model); + int32_t InitFaceAttributePredict(InspireModel &model); /** * @brief Initializes the MaskPredict model. 
@@ -113,12 +103,10 @@ class FacePipeline { private: const bool m_enable_liveness_ = false; ///< Whether RGB liveness detection is enabled. const bool m_enable_mask_detect_ = false; ///< Whether mask detection is enabled. - const bool m_enable_age_ = false; ///< Whether age estimation is enabled. - const bool m_enable_gender_ = false; ///< Whether gender prediction is enabled. + const bool m_enable_attribute_ = false; ///< Whether face attribute estimation is enabled. const bool m_enable_interaction_liveness_ = false; ///< Whether interaction liveness detection is enabled. - std::shared_ptr<AgePredict> m_age_predict_; ///< Pointer to AgePredict instance. - std::shared_ptr<GenderPredict> m_gender_predict_; ///< Pointer to GenderPredict instance. + std::shared_ptr<FaceAttributePredict> m_attribute_predict_; ///< Pointer to Face attribute prediction instance. std::shared_ptr<MaskPredict> m_mask_predict_; ///< Pointer to MaskPredict instance. std::shared_ptr<RBGAntiSpoofing> m_rgb_anti_spoofing_; ///< Pointer to RBGAntiSpoofing instance. std::shared_ptr<BlinkPredict> m_blink_predict_; ///< Pointer to Blink predict instance. @@ -127,6 +115,7 @@ class FacePipeline { float faceMaskCache; ///< Cache for face mask detection result. float faceLivenessCache; ///< Cache for face liveness detection result. cv::Vec2f eyesStatusCache; ///< Cache for blink predict result. + cv::Vec3i faceAttributeCache; ///< Cache for face attribute predict result. 
}; } diff --git a/cpp/inspireface/track_module/face_track.cpp b/cpp/inspireface/track_module/face_track.cpp index e36b5f05..25cf3738 100644 --- a/cpp/inspireface/track_module/face_track.cpp +++ b/cpp/inspireface/track_module/face_track.cpp @@ -245,7 +245,7 @@ void FaceTrack::UpdateStream(CameraStream &image) { image.SetPreviewSize(track_preview_size_); cv::Mat image_detect = image.GetPreviewImage(true); - nms(); + for (auto const &face: trackingFace) { cv::Rect m_mask_rect = face.GetRectSquare(); std::vector<cv::Point2f> pts = Rect2Points(m_mask_rect); @@ -282,7 +282,7 @@ void FaceTrack::UpdateStream(CameraStream &image) { } } - + nms(); // LOGD("Track Cost %f", t_track.GetCostTimeUpdate()); track_total_use_time_ = ((double) cv::getTickCount() - timeStart) / cv::getTickFrequency() * 1000; diff --git a/cpp/inspireface/version.txt b/cpp/inspireface/version.txt index 0551400a..7cae29c9 100644 --- a/cpp/inspireface/version.txt +++ b/cpp/inspireface/version.txt @@ -1 +1 @@ -InspireFace Version: 1.1.3 +InspireFace Version: 1.1.4 diff --git a/cpp/test/test.cpp b/cpp/test/test.cpp index 6578557b..80d329f5 100644 --- a/cpp/test/test.cpp +++ b/cpp/test/test.cpp @@ -100,7 +100,7 @@ int main(int argc, char* argv[]) { } // Set log level - HFSetLogLevel(HF_LOG_ERROR); + HFSetLogLevel(HF_LOG_INFO); return session.run(); } diff --git a/cpp/test/unit/api/test_face_pipeline.cpp b/cpp/test/unit/api/test_face_pipeline.cpp index 334aec6d..e3f5578b 100644 --- a/cpp/test/unit/api/test_face_pipeline.cpp +++ b/cpp/test/unit/api/test_face_pipeline.cpp @@ -7,6 +7,115 @@ #include "inspireface/c_api/inspireface.h" #include "../test_helper/test_tools.h" + +TEST_CASE("test_FacePipelineAttribute", "[face_pipeline_attribute]") { + DRAW_SPLIT_LINE + TEST_PRINT_OUTPUT(true); + + enum AGE_BRACKET { + AGE_0_2 = 0, ///< Age 0-2 years old + AGE_3_9, ///< Age 3-9 years old + AGE_10_19, ///< Age 10-19 years old + AGE_20_29, ///< Age 20-29 years old + AGE_30_39, ///< Age 30-39 years old + AGE_40_49, ///< Age 40-49 
years old + AGE_50_59, ///< Age 50-59 years old + AGE_60_69, ///< Age 60-69 years old + MORE_THAN_70, ///< Age more than 70 years old + }; + enum GENDER { + FEMALE = 0, ///< Female + MALE, ///< Male + }; + enum RACE { + BLACK = 0, ///< Black + ASIAN, ///< Asian + LATINO_HISPANIC, ///< Latino/Hispanic + MIDDLE_EASTERN, ///< Middle Eastern + WHITE, ///< White + }; + + HResult ret; + HFSessionCustomParameter parameter = {0}; + parameter.enable_face_attribute = 1; + HFDetectMode detMode = HF_DETECT_MODE_ALWAYS_DETECT; + HFSession session; + HInt32 faceDetectPixelLevel = 160; + ret = HFCreateInspireFaceSession(parameter, detMode, 5, faceDetectPixelLevel, -1, &session); + REQUIRE(ret == HSUCCEED); + + SECTION("a black girl") { + HFImageStream imgHandle; + auto img = cv::imread(GET_DATA("data/attribute/1423.jpg")); + REQUIRE(!img.empty()); + ret = CVImageToImageStream(img, imgHandle); + REQUIRE(ret == HSUCCEED); + + HFMultipleFaceData multipleFaceData = {0}; + ret = HFExecuteFaceTrack(session, imgHandle, &multipleFaceData); + REQUIRE(ret == HSUCCEED); + REQUIRE(multipleFaceData.detectedNum == 1); + + // Run pipeline + ret = HFMultipleFacePipelineProcessOptional(session, imgHandle, &multipleFaceData, HF_ENABLE_FACE_ATTRIBUTE); + REQUIRE(ret == HSUCCEED); + + HFFaceAttributeResult result = {0}; + ret = HFGetFaceAttributeResult(session, &result); + REQUIRE(ret == HSUCCEED); + REQUIRE(result.num == 1); + + // Check attribute + CHECK(result.race[0] == BLACK); + CHECK(result.ageBracket[0] == AGE_10_19); + CHECK(result.gender[0] == FEMALE); + + ret = HFReleaseImageStream(imgHandle); + REQUIRE(ret == HSUCCEED); + imgHandle = nullptr; + } + + SECTION("two young white women") { + HFImageStream imgHandle; + auto img = cv::imread(GET_DATA("data/attribute/7242.jpg")); + REQUIRE(!img.empty()); + ret = CVImageToImageStream(img, imgHandle); + REQUIRE(ret == HSUCCEED); + + HFMultipleFaceData multipleFaceData = {0}; + ret = HFExecuteFaceTrack(session, imgHandle, &multipleFaceData); + 
REQUIRE(ret == HSUCCEED); + REQUIRE(multipleFaceData.detectedNum == 2); + + // Run pipeline + ret = HFMultipleFacePipelineProcessOptional(session, imgHandle, &multipleFaceData, HF_ENABLE_FACE_ATTRIBUTE); + REQUIRE(ret == HSUCCEED); + + HFFaceAttributeResult result = {0}; + ret = HFGetFaceAttributeResult(session, &result); + REQUIRE(ret == HSUCCEED); + REQUIRE(result.num == 2); + + // Check attribute + for (size_t i = 0; i < result.num; i++) + { + CHECK(result.race[i] == WHITE); + CHECK(result.ageBracket[i] == AGE_20_29); + CHECK(result.gender[i] == FEMALE); + } + + + ret = HFReleaseImageStream(imgHandle); + REQUIRE(ret == HSUCCEED); + imgHandle = nullptr; + } + + ret = HFReleaseInspireFaceSession(session); + session = nullptr; + REQUIRE(ret == HSUCCEED); + +} + TEST_CASE("test_FacePipeline", "[face_pipeline]") { DRAW_SPLIT_LINE TEST_PRINT_OUTPUT(true); diff --git a/doc/Error-Feedback-Codes.md b/doc/Error-Feedback-Codes.md index adc2165b..3ec7d3de 100644 --- a/doc/Error-Feedback-Codes.md +++ b/doc/Error-Feedback-Codes.md @@ -22,33 +22,34 @@ During the use of InspireFace, some error feedback codes may be generated. 
Here | 16 | HERR_SESS_TRACKER_FAILURE | 1283 | Tracker module not initialized | | 17 | HERR_SESS_INVALID_RESOURCE | 1290 | Invalid static resource | | 18 | HERR_SESS_NUM_OF_MODELS_NOT_MATCH | 1291 | Number of models does not match | - | 19 | HERR_SESS_PIPELINE_FAILURE | 1288 | Pipeline module not initialized | - | 20 | HERR_SESS_REC_EXTRACT_FAILURE | 1295 | Face feature extraction not registered | - | 21 | HERR_SESS_REC_DEL_FAILURE | 1296 | Face feature deletion failed due to out of range index | - | 22 | HERR_SESS_REC_UPDATE_FAILURE | 1297 | Face feature update failed due to out of range index | - | 23 | HERR_SESS_REC_ADD_FEAT_EMPTY | 1298 | Feature vector for registration cannot be empty | - | 24 | HERR_SESS_REC_FEAT_SIZE_ERR | 1299 | Incorrect length of feature vector for registration | - | 25 | HERR_SESS_REC_INVALID_INDEX | 1300 | Invalid index number | - | 26 | HERR_SESS_REC_CONTRAST_FEAT_ERR | 1303 | Incorrect length of feature vector for comparison | - | 27 | HERR_SESS_REC_BLOCK_FULL | 1304 | Feature vector block full | - | 28 | HERR_SESS_REC_BLOCK_DEL_FAILURE | 1305 | Deletion failed | - | 29 | HERR_SESS_REC_BLOCK_UPDATE_FAILURE | 1306 | Update failed | - | 30 | HERR_SESS_REC_ID_ALREADY_EXIST | 1307 | ID already exists | - | 31 | HERR_SESS_FACE_DATA_ERROR | 1310 | Face data parsing | - | 32 | HERR_SESS_FACE_REC_OPTION_ERROR | 1320 | An optional parameter is incorrect | - | 33 | HERR_FT_HUB_DISABLE | 1329 | FeatureHub is disabled | - | 34 | HERR_FT_HUB_OPEN_ERROR | 1330 | Database open error | - | 35 | HERR_FT_HUB_NOT_OPENED | 1331 | Database not opened | - | 36 | HERR_FT_HUB_NO_RECORD_FOUND | 1332 | No record found | - | 37 | HERR_FT_HUB_CHECK_TABLE_ERROR | 1333 | Data table check error | - | 38 | HERR_FT_HUB_INSERT_FAILURE | 1334 | Data insertion error | - | 39 | HERR_FT_HUB_PREPARING_FAILURE | 1335 | Data preparation error | - | 40 | HERR_FT_HUB_EXECUTING_FAILURE | 1336 | SQL execution error | - | 41 | HERR_FT_HUB_NOT_VALID_FOLDER_PATH | 1337 | Invalid 
folder path | - | 42 | HERR_FT_HUB_ENABLE_REPETITION | 1338 | Enable db function repeatedly | - | 43 | HERR_FT_HUB_DISABLE_REPETITION | 1339 | Disable db function repeatedly | - | 44 | HERR_ARCHIVE_LOAD_FAILURE | 1360 | Archive load failure | - | 45 | HERR_ARCHIVE_LOAD_MODEL_FAILURE | 1361 | Model load failure | - | 46 | HERR_ARCHIVE_FILE_FORMAT_ERROR | 1362 | The archive format is incorrect | - | 47 | HERR_ARCHIVE_REPETITION_LOAD | 1363 | Do not reload the model | - | 48 | HERR_ARCHIVE_NOT_LOAD | 1364 | Model not loaded | + | 19 | HERR_SESS_LANDMARK_NUM_NOT_MATCH | 1300 | The number of input landmark points does not match | + | 20 | HERR_SESS_PIPELINE_FAILURE | 1288 | Pipeline module not initialized | + | 21 | HERR_SESS_REC_EXTRACT_FAILURE | 1295 | Face feature extraction not registered | + | 22 | HERR_SESS_REC_DEL_FAILURE | 1296 | Face feature deletion failed due to out of range index | + | 23 | HERR_SESS_REC_UPDATE_FAILURE | 1297 | Face feature update failed due to out of range index | + | 24 | HERR_SESS_REC_ADD_FEAT_EMPTY | 1298 | Feature vector for registration cannot be empty | + | 25 | HERR_SESS_REC_FEAT_SIZE_ERR | 1299 | Incorrect length of feature vector for registration | + | 26 | HERR_SESS_REC_INVALID_INDEX | 1300 | Invalid index number | + | 27 | HERR_SESS_REC_CONTRAST_FEAT_ERR | 1303 | Incorrect length of feature vector for comparison | + | 28 | HERR_SESS_REC_BLOCK_FULL | 1304 | Feature vector block full | + | 29 | HERR_SESS_REC_BLOCK_DEL_FAILURE | 1305 | Deletion failed | + | 30 | HERR_SESS_REC_BLOCK_UPDATE_FAILURE | 1306 | Update failed | + | 31 | HERR_SESS_REC_ID_ALREADY_EXIST | 1307 | ID already exists | + | 32 | HERR_SESS_FACE_DATA_ERROR | 1310 | Face data parsing | + | 33 | HERR_SESS_FACE_REC_OPTION_ERROR | 1320 | An optional parameter is incorrect | + | 34 | HERR_FT_HUB_DISABLE | 1329 | FeatureHub is disabled | + | 35 | HERR_FT_HUB_OPEN_ERROR | 1330 | Database open error | + | 36 | HERR_FT_HUB_NOT_OPENED | 1331 | Database not opened | + | 37 | 
HERR_FT_HUB_NO_RECORD_FOUND | 1332 | No record found | + | 38 | HERR_FT_HUB_CHECK_TABLE_ERROR | 1333 | Data table check error | + | 39 | HERR_FT_HUB_INSERT_FAILURE | 1334 | Data insertion error | + | 40 | HERR_FT_HUB_PREPARING_FAILURE | 1335 | Data preparation error | + | 41 | HERR_FT_HUB_EXECUTING_FAILURE | 1336 | SQL execution error | + | 42 | HERR_FT_HUB_NOT_VALID_FOLDER_PATH | 1337 | Invalid folder path | + | 43 | HERR_FT_HUB_ENABLE_REPETITION | 1338 | Enable db function repeatedly | + | 44 | HERR_FT_HUB_DISABLE_REPETITION | 1339 | Disable db function repeatedly | + | 45 | HERR_ARCHIVE_LOAD_FAILURE | 1360 | Archive load failure | + | 46 | HERR_ARCHIVE_LOAD_MODEL_FAILURE | 1361 | Model load failure | + | 47 | HERR_ARCHIVE_FILE_FORMAT_ERROR | 1362 | The archive format is incorrect | + | 48 | HERR_ARCHIVE_REPETITION_LOAD | 1363 | Do not reload the model | + | 49 | HERR_ARCHIVE_NOT_LOAD | 1364 | Model not loaded | diff --git a/python/inspireface/modules/core/libInspireFace.dylib b/python/inspireface/modules/core/libInspireFace.dylib index cd5af262..9a3b48dd 100755 Binary files a/python/inspireface/modules/core/libInspireFace.dylib and b/python/inspireface/modules/core/libInspireFace.dylib differ diff --git a/python/inspireface/modules/core/native.py b/python/inspireface/modules/core/native.py index b6f9d63e..10e74276 100644 --- a/python/inspireface/modules/core/native.py +++ b/python/inspireface/modules/core/native.py @@ -552,8 +552,8 @@ def __call__(self, libname): # noinspection PyBroadException try: return self.Lookup(path) - except Exception: # pylint: disable=broad-except - pass + except Exception as err: # pylint: disable=broad-except + print(err) raise ImportError("Could not load %s." 
% libname) @@ -918,6 +918,21 @@ class struct_HFaceRect(Structure): HFaceRect = struct_HFaceRect# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/intypedef.h: 32 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/intypedef.h: 37 +class struct_HPoint2f(Structure): + pass + +struct_HPoint2f.__slots__ = [ + 'x', + 'y', +] +struct_HPoint2f._fields_ = [ + ('x', HFloat), + ('y', HFloat), +] + +HPoint2f = struct_HPoint2f# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/intypedef.h: 37 + enum_HFImageFormat = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 49 HF_STREAM_RGB = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 49 @@ -987,7 +1002,7 @@ class struct_HFImageData(Structure): HFLaunchInspireFace.argtypes = [HPath] HFLaunchInspireFace.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 132 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 131 class struct_HFSessionCustomParameter(Structure): pass @@ -996,9 +1011,8 @@ class struct_HFSessionCustomParameter(Structure): 'enable_liveness', 'enable_ir_liveness', 'enable_mask_detect', - 'enable_age', - 'enable_gender', 'enable_face_quality', + 'enable_face_attribute', 'enable_interaction_liveness', ] struct_HFSessionCustomParameter._fields_ = [ @@ -1006,15 +1020,14 @@ class struct_HFSessionCustomParameter(Structure): ('enable_liveness', HInt32), ('enable_ir_liveness', HInt32), ('enable_mask_detect', HInt32), - ('enable_age', HInt32), - ('enable_gender', HInt32), ('enable_face_quality', HInt32), + ('enable_face_attribute', HInt32), ('enable_interaction_liveness', HInt32), ] -HFSessionCustomParameter = struct_HFSessionCustomParameter# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 132 +HFSessionCustomParameter = struct_HFSessionCustomParameter# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 131 -PHFSessionCustomParameter = POINTER(struct_HFSessionCustomParameter)# 
/Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 132 +PHFSessionCustomParameter = POINTER(struct_HFSessionCustomParameter)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 131 enum_HFDetectMode = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 142 @@ -1137,7 +1150,19 @@ class struct_HFMultipleFaceData(Structure): HFGetFaceBasicTokenSize.argtypes = [HPInt32] HFGetFaceBasicTokenSize.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 312 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 305 +if _libs[_LIBRARY_FILENAME].has("HFGetNumOfFaceDenseLandmark", "cdecl"): + HFGetNumOfFaceDenseLandmark = _libs[_LIBRARY_FILENAME].get("HFGetNumOfFaceDenseLandmark", "cdecl") + HFGetNumOfFaceDenseLandmark.argtypes = [HPInt32] + HFGetNumOfFaceDenseLandmark.restype = HResult + +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 315 +if _libs[_LIBRARY_FILENAME].has("HFGetFaceDenseLandmarkFromFaceToken", "cdecl"): + HFGetFaceDenseLandmarkFromFaceToken = _libs[_LIBRARY_FILENAME].get("HFGetFaceDenseLandmarkFromFaceToken", "cdecl") + HFGetFaceDenseLandmarkFromFaceToken.argtypes = [HFFaceBasicToken, POINTER(HPoint2f), HInt32] + HFGetFaceDenseLandmarkFromFaceToken.restype = HResult + +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 329 class struct_HFFaceFeature(Structure): pass @@ -1150,31 +1175,31 @@ class struct_HFFaceFeature(Structure): ('data', HPFloat), ] -HFFaceFeature = struct_HFFaceFeature# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 312 +HFFaceFeature = struct_HFFaceFeature# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 329 -PHFFaceFeature = POINTER(struct_HFFaceFeature)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 312 +PHFFaceFeature = POINTER(struct_HFFaceFeature)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 329 -# 
/Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 324 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 341 if _libs[_LIBRARY_FILENAME].has("HFFaceFeatureExtract", "cdecl"): HFFaceFeatureExtract = _libs[_LIBRARY_FILENAME].get("HFFaceFeatureExtract", "cdecl") HFFaceFeatureExtract.argtypes = [HFSession, HFImageStream, HFFaceBasicToken, PHFFaceFeature] HFFaceFeatureExtract.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 336 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 353 if _libs[_LIBRARY_FILENAME].has("HFFaceFeatureExtractCpy", "cdecl"): HFFaceFeatureExtractCpy = _libs[_LIBRARY_FILENAME].get("HFFaceFeatureExtractCpy", "cdecl") HFFaceFeatureExtractCpy.argtypes = [HFSession, HFImageStream, HFFaceBasicToken, HPFloat] HFFaceFeatureExtractCpy.restype = HResult -enum_HFSearchMode = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 349 +enum_HFSearchMode = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 366 -HF_SEARCH_MODE_EAGER = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 349 +HF_SEARCH_MODE_EAGER = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 366 -HF_SEARCH_MODE_EXHAUSTIVE = (HF_SEARCH_MODE_EAGER + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 349 +HF_SEARCH_MODE_EXHAUSTIVE = (HF_SEARCH_MODE_EAGER + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 366 -HFSearchMode = enum_HFSearchMode# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 349 +HFSearchMode = enum_HFSearchMode# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 366 -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 362 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 379 class struct_HFFeatureHubConfiguration(Structure): pass @@ -1193,21 +1218,21 @@ class struct_HFFeatureHubConfiguration(Structure): 
('searchMode', HFSearchMode), ] -HFFeatureHubConfiguration = struct_HFFeatureHubConfiguration# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 362 +HFFeatureHubConfiguration = struct_HFFeatureHubConfiguration# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 379 -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 374 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 391 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubDataEnable", "cdecl"): HFFeatureHubDataEnable = _libs[_LIBRARY_FILENAME].get("HFFeatureHubDataEnable", "cdecl") HFFeatureHubDataEnable.argtypes = [HFFeatureHubConfiguration] HFFeatureHubDataEnable.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 380 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 397 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubDataDisable", "cdecl"): HFFeatureHubDataDisable = _libs[_LIBRARY_FILENAME].get("HFFeatureHubDataDisable", "cdecl") HFFeatureHubDataDisable.argtypes = [] HFFeatureHubDataDisable.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 392 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 409 class struct_HFFaceFeatureIdentity(Structure): pass @@ -1222,11 +1247,11 @@ class struct_HFFaceFeatureIdentity(Structure): ('feature', PHFFaceFeature), ] -HFFaceFeatureIdentity = struct_HFFaceFeatureIdentity# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 392 +HFFaceFeatureIdentity = struct_HFFaceFeatureIdentity# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 409 -PHFFaceFeatureIdentity = POINTER(struct_HFFaceFeatureIdentity)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 392 +PHFFaceFeatureIdentity = POINTER(struct_HFFaceFeatureIdentity)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 409 -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 401 +# 
/Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 418 class struct_HFSearchTopKResults(Structure): pass @@ -1241,89 +1266,89 @@ class struct_HFSearchTopKResults(Structure): ('customIds', HPInt32), ] -HFSearchTopKResults = struct_HFSearchTopKResults# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 401 +HFSearchTopKResults = struct_HFSearchTopKResults# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 418 -PHFSearchTopKResults = POINTER(struct_HFSearchTopKResults)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 401 +PHFSearchTopKResults = POINTER(struct_HFSearchTopKResults)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 418 -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 412 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 429 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceSearchThresholdSetting", "cdecl"): HFFeatureHubFaceSearchThresholdSetting = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceSearchThresholdSetting", "cdecl") HFFeatureHubFaceSearchThresholdSetting.argtypes = [c_float] HFFeatureHubFaceSearchThresholdSetting.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 423 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 440 if _libs[_LIBRARY_FILENAME].has("HFFaceComparison", "cdecl"): HFFaceComparison = _libs[_LIBRARY_FILENAME].get("HFFaceComparison", "cdecl") HFFaceComparison.argtypes = [HFFaceFeature, HFFaceFeature, HPFloat] HFFaceComparison.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 431 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 448 if _libs[_LIBRARY_FILENAME].has("HFGetFeatureLength", "cdecl"): HFGetFeatureLength = _libs[_LIBRARY_FILENAME].get("HFGetFeatureLength", "cdecl") HFGetFeatureLength.argtypes = [HPInt32] HFGetFeatureLength.restype = HResult -# 
/Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 440 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 457 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubInsertFeature", "cdecl"): HFFeatureHubInsertFeature = _libs[_LIBRARY_FILENAME].get("HFFeatureHubInsertFeature", "cdecl") HFFeatureHubInsertFeature.argtypes = [HFFaceFeatureIdentity] HFFeatureHubInsertFeature.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 450 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 467 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceSearch", "cdecl"): HFFeatureHubFaceSearch = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceSearch", "cdecl") HFFeatureHubFaceSearch.argtypes = [HFFaceFeature, HPFloat, PHFFaceFeatureIdentity] HFFeatureHubFaceSearch.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 460 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 477 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceSearchTopK", "cdecl"): HFFeatureHubFaceSearchTopK = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceSearchTopK", "cdecl") HFFeatureHubFaceSearchTopK.argtypes = [HFFaceFeature, HInt32, PHFSearchTopKResults] HFFeatureHubFaceSearchTopK.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 468 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 485 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceRemove", "cdecl"): HFFeatureHubFaceRemove = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceRemove", "cdecl") HFFeatureHubFaceRemove.argtypes = [HInt32] HFFeatureHubFaceRemove.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 476 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 493 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubFaceUpdate", "cdecl"): HFFeatureHubFaceUpdate = _libs[_LIBRARY_FILENAME].get("HFFeatureHubFaceUpdate", "cdecl") 
HFFeatureHubFaceUpdate.argtypes = [HFFaceFeatureIdentity] HFFeatureHubFaceUpdate.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 485 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 502 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubGetFaceIdentity", "cdecl"): HFFeatureHubGetFaceIdentity = _libs[_LIBRARY_FILENAME].get("HFFeatureHubGetFaceIdentity", "cdecl") HFFeatureHubGetFaceIdentity.argtypes = [HInt32, PHFFaceFeatureIdentity] HFFeatureHubGetFaceIdentity.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 493 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 510 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubGetFaceCount", "cdecl"): HFFeatureHubGetFaceCount = _libs[_LIBRARY_FILENAME].get("HFFeatureHubGetFaceCount", "cdecl") HFFeatureHubGetFaceCount.argtypes = [POINTER(HInt32)] HFFeatureHubGetFaceCount.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 500 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 517 if _libs[_LIBRARY_FILENAME].has("HFFeatureHubViewDBTable", "cdecl"): HFFeatureHubViewDBTable = _libs[_LIBRARY_FILENAME].get("HFFeatureHubViewDBTable", "cdecl") HFFeatureHubViewDBTable.argtypes = [] HFFeatureHubViewDBTable.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 519 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 536 if _libs[_LIBRARY_FILENAME].has("HFMultipleFacePipelineProcess", "cdecl"): HFMultipleFacePipelineProcess = _libs[_LIBRARY_FILENAME].get("HFMultipleFacePipelineProcess", "cdecl") HFMultipleFacePipelineProcess.argtypes = [HFSession, HFImageStream, PHFMultipleFaceData, HFSessionCustomParameter] HFMultipleFacePipelineProcess.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 535 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 552 if 
_libs[_LIBRARY_FILENAME].has("HFMultipleFacePipelineProcessOptional", "cdecl"): HFMultipleFacePipelineProcessOptional = _libs[_LIBRARY_FILENAME].get("HFMultipleFacePipelineProcessOptional", "cdecl") HFMultipleFacePipelineProcessOptional.argtypes = [HFSession, HFImageStream, PHFMultipleFaceData, HInt32] HFMultipleFacePipelineProcessOptional.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 547 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 564 class struct_HFRGBLivenessConfidence(Structure): pass @@ -1336,17 +1361,17 @@ class struct_HFRGBLivenessConfidence(Structure): ('confidence', HPFloat), ] -HFRGBLivenessConfidence = struct_HFRGBLivenessConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 547 +HFRGBLivenessConfidence = struct_HFRGBLivenessConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 564 -PHFRGBLivenessConfidence = POINTER(struct_HFRGBLivenessConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 547 +PHFRGBLivenessConfidence = POINTER(struct_HFRGBLivenessConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 564 -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 560 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 577 if _libs[_LIBRARY_FILENAME].has("HFGetRGBLivenessConfidence", "cdecl"): HFGetRGBLivenessConfidence = _libs[_LIBRARY_FILENAME].get("HFGetRGBLivenessConfidence", "cdecl") HFGetRGBLivenessConfidence.argtypes = [HFSession, PHFRGBLivenessConfidence] HFGetRGBLivenessConfidence.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 571 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 588 class struct_HFFaceMaskConfidence(Structure): pass @@ -1359,17 +1384,17 @@ class struct_HFFaceMaskConfidence(Structure): ('confidence', HPFloat), ] -HFFaceMaskConfidence = struct_HFFaceMaskConfidence# 
/Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 571 +HFFaceMaskConfidence = struct_HFFaceMaskConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 588 -PHFFaceMaskConfidence = POINTER(struct_HFFaceMaskConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 571 +PHFFaceMaskConfidence = POINTER(struct_HFFaceMaskConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 588 -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 583 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 600 if _libs[_LIBRARY_FILENAME].has("HFGetFaceMaskConfidence", "cdecl"): HFGetFaceMaskConfidence = _libs[_LIBRARY_FILENAME].get("HFGetFaceMaskConfidence", "cdecl") HFGetFaceMaskConfidence.argtypes = [HFSession, PHFFaceMaskConfidence] HFGetFaceMaskConfidence.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 594 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 611 class struct_HFFaceQualityConfidence(Structure): pass @@ -1382,23 +1407,75 @@ class struct_HFFaceQualityConfidence(Structure): ('confidence', HPFloat), ] -HFFaceQualityConfidence = struct_HFFaceQualityConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 594 +HFFaceQualityConfidence = struct_HFFaceQualityConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 611 -PHFFaceQualityConfidence = POINTER(struct_HFFaceQualityConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 594 +PHFFaceQualityConfidence = POINTER(struct_HFFaceQualityConfidence)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 611 -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 606 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 623 if _libs[_LIBRARY_FILENAME].has("HFGetFaceQualityConfidence", "cdecl"): HFGetFaceQualityConfidence = 
_libs[_LIBRARY_FILENAME].get("HFGetFaceQualityConfidence", "cdecl") HFGetFaceQualityConfidence.argtypes = [HFSession, PHFFaceQualityConfidence] HFGetFaceQualityConfidence.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 618 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 635 if _libs[_LIBRARY_FILENAME].has("HFFaceQualityDetect", "cdecl"): HFFaceQualityDetect = _libs[_LIBRARY_FILENAME].get("HFFaceQualityDetect", "cdecl") HFFaceQualityDetect.argtypes = [HFSession, HFFaceBasicToken, POINTER(HFloat)] HFFaceQualityDetect.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 631 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 645 +class struct_HFFaceIntereactionResult(Structure): + pass + +struct_HFFaceIntereactionResult.__slots__ = [ + 'num', + 'leftEyeStatusConfidence', + 'rightEyeStatusConfidence', +] +struct_HFFaceIntereactionResult._fields_ = [ + ('num', HInt32), + ('leftEyeStatusConfidence', HPFloat), + ('rightEyeStatusConfidence', HPFloat), +] + +HFFaceIntereactionResult = struct_HFFaceIntereactionResult# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 645 + +PHFFaceIntereactionResult = POINTER(struct_HFFaceIntereactionResult)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 645 + +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 647 +if _libs[_LIBRARY_FILENAME].has("HFGetFaceIntereactionResult", "cdecl"): + HFGetFaceIntereactionResult = _libs[_LIBRARY_FILENAME].get("HFGetFaceIntereactionResult", "cdecl") + HFGetFaceIntereactionResult.argtypes = [HFSession, PHFFaceIntereactionResult] + HFGetFaceIntereactionResult.restype = HResult + +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 675 +class struct_HFFaceAttributeResult(Structure): + pass + +struct_HFFaceAttributeResult.__slots__ = [ + 'num', + 'race', + 'gender', + 'ageBracket', +] +struct_HFFaceAttributeResult._fields_ = [ + 
('num', HInt32), + ('race', HPInt32), + ('gender', HPInt32), + ('ageBracket', HPInt32), +] + +HFFaceAttributeResult = struct_HFFaceAttributeResult# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 675 + +PHFFaceAttributeResult = POINTER(struct_HFFaceAttributeResult)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 675 + +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 687 +if _libs[_LIBRARY_FILENAME].has("HFGetFaceAttributeResult", "cdecl"): + HFGetFaceAttributeResult = _libs[_LIBRARY_FILENAME].get("HFGetFaceAttributeResult", "cdecl") + HFGetFaceAttributeResult.argtypes = [HFSession, PHFFaceAttributeResult] + HFGetFaceAttributeResult.restype = HResult + +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 701 class struct_HFInspireFaceVersion(Structure): pass @@ -1413,50 +1490,56 @@ class struct_HFInspireFaceVersion(Structure): ('patch', c_int), ] -HFInspireFaceVersion = struct_HFInspireFaceVersion# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 631 +HFInspireFaceVersion = struct_HFInspireFaceVersion# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 701 -PHFInspireFaceVersion = POINTER(struct_HFInspireFaceVersion)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 631 +PHFInspireFaceVersion = POINTER(struct_HFInspireFaceVersion)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 701 -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 641 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 711 if _libs[_LIBRARY_FILENAME].has("HFQueryInspireFaceVersion", "cdecl"): HFQueryInspireFaceVersion = _libs[_LIBRARY_FILENAME].get("HFQueryInspireFaceVersion", "cdecl") HFQueryInspireFaceVersion.argtypes = [PHFInspireFaceVersion] HFQueryInspireFaceVersion.restype = HResult -enum_HFLogLevel = c_int# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653 +enum_HFLogLevel = c_int# 
/Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723 -HF_LOG_NONE = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653 +HF_LOG_NONE = 0# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723 -HF_LOG_DEBUG = (HF_LOG_NONE + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653 +HF_LOG_DEBUG = (HF_LOG_NONE + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723 -HF_LOG_INFO = (HF_LOG_DEBUG + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653 +HF_LOG_INFO = (HF_LOG_DEBUG + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723 -HF_LOG_WARN = (HF_LOG_INFO + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653 +HF_LOG_WARN = (HF_LOG_INFO + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723 -HF_LOG_ERROR = (HF_LOG_WARN + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653 +HF_LOG_ERROR = (HF_LOG_WARN + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723 -HF_LOG_FATAL = (HF_LOG_ERROR + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653 +HF_LOG_FATAL = (HF_LOG_ERROR + 1)# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723 -HFLogLevel = enum_HFLogLevel# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 653 +HFLogLevel = enum_HFLogLevel# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 723 -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 658 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 728 if _libs[_LIBRARY_FILENAME].has("HFSetLogLevel", "cdecl"): HFSetLogLevel = _libs[_LIBRARY_FILENAME].get("HFSetLogLevel", "cdecl") HFSetLogLevel.argtypes = [HFLogLevel] HFSetLogLevel.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 663 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 733 if 
_libs[_LIBRARY_FILENAME].has("HFLogDisable", "cdecl"): HFLogDisable = _libs[_LIBRARY_FILENAME].get("HFLogDisable", "cdecl") HFLogDisable.argtypes = [] HFLogDisable.restype = HResult -# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 676 +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 746 if _libs[_LIBRARY_FILENAME].has("HFDeBugImageStreamImShow", "cdecl"): HFDeBugImageStreamImShow = _libs[_LIBRARY_FILENAME].get("HFDeBugImageStreamImShow", "cdecl") HFDeBugImageStreamImShow.argtypes = [HFImageStream] HFDeBugImageStreamImShow.restype = None +# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 757 +if _libs[_LIBRARY_FILENAME].has("HFDeBugImageStreamDecodeSave", "cdecl"): + HFDeBugImageStreamDecodeSave = _libs[_LIBRARY_FILENAME].get("HFDeBugImageStreamDecodeSave", "cdecl") + HFDeBugImageStreamDecodeSave.argtypes = [HFImageStream, HPath] + HFDeBugImageStreamDecodeSave.restype = HResult + # /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 27 try: HF_ENABLE_NONE = 0 @@ -1489,13 +1572,13 @@ class struct_HFInspireFaceVersion(Structure): # /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 32 try: - HF_ENABLE_AGE_PREDICT = 32 + HF_ENABLE_FACE_ATTRIBUTE = 32 except: pass # /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 33 try: - HF_ENABLE_GENDER_PREDICT = 64 + HF_ENABLE_PLACEHOLDER_ = 64 except: pass @@ -1513,7 +1596,7 @@ class struct_HFInspireFaceVersion(Structure): HFImageData = struct_HFImageData# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 74 -HFSessionCustomParameter = struct_HFSessionCustomParameter# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 132 +HFSessionCustomParameter = struct_HFSessionCustomParameter# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 131 HFFaceBasicToken = struct_HFFaceBasicToken# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 204 @@ -1521,21 +1604,25 @@ class 
struct_HFInspireFaceVersion(Structure): HFMultipleFaceData = struct_HFMultipleFaceData# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 229 -HFFaceFeature = struct_HFFaceFeature# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 312 +HFFaceFeature = struct_HFFaceFeature# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 329 + +HFFeatureHubConfiguration = struct_HFFeatureHubConfiguration# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 379 + +HFFaceFeatureIdentity = struct_HFFaceFeatureIdentity# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 409 -HFFeatureHubConfiguration = struct_HFFeatureHubConfiguration# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 362 +HFSearchTopKResults = struct_HFSearchTopKResults# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 418 -HFFaceFeatureIdentity = struct_HFFaceFeatureIdentity# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 392 +HFRGBLivenessConfidence = struct_HFRGBLivenessConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 564 -HFSearchTopKResults = struct_HFSearchTopKResults# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 401 +HFFaceMaskConfidence = struct_HFFaceMaskConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 588 -HFRGBLivenessConfidence = struct_HFRGBLivenessConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 547 +HFFaceQualityConfidence = struct_HFFaceQualityConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 611 -HFFaceMaskConfidence = struct_HFFaceMaskConfidence# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 571 +HFFaceIntereactionResult = struct_HFFaceIntereactionResult# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 645 -HFFaceQualityConfidence = struct_HFFaceQualityConfidence# 
/Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 594 +HFFaceAttributeResult = struct_HFFaceAttributeResult# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 675 -HFInspireFaceVersion = struct_HFInspireFaceVersion# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 631 +HFInspireFaceVersion = struct_HFInspireFaceVersion# /Users/tunm/work/InspireFace/cpp/inspireface/c_api/inspireface.h: 701 # No inserted files diff --git a/python/inspireface/modules/inspire_face.py b/python/inspireface/modules/inspire_face.py index 19b3b2db..f3701261 100644 --- a/python/inspireface/modules/inspire_face.py +++ b/python/inspireface/modules/inspire_face.py @@ -1,3 +1,5 @@ +import ctypes + import cv2 import numpy as np from .core import * @@ -146,6 +148,11 @@ class FaceExtended: rgb_liveness_confidence: float mask_confidence: float quality_confidence: float + left_eye_status_confidence: float + right_eye_status_confidence: float + race: int + gender: int + age_bracket: int class FaceInformation: @@ -208,8 +215,7 @@ class SessionCustomParameter: enable_liveness: bool = False enable_ir_liveness: bool = False enable_mask_detect: bool = False - enable_age: bool = False - enable_gender: bool = False + enable_face_attribute: bool = False enable_face_quality: bool = False enable_interaction_liveness: bool = False @@ -225,8 +231,7 @@ def _c_struct(self): enable_liveness=int(self.enable_liveness), enable_ir_liveness=int(self.enable_ir_liveness), enable_mask_detect=int(self.enable_mask_detect), - enable_age=int(self.enable_age), - enable_gender=int(self.enable_gender), + enable_face_attribute=int(self.enable_face_attribute), enable_face_quality=int(self.enable_face_quality), enable_interaction_liveness=int(self.enable_interaction_liveness) ) @@ -317,6 +322,21 @@ def face_detection(self, image) -> List[FaceInformation]: else: return [] + def get_face_dense_landmark(self, single_face: FaceInformation): + num_landmarks = HInt32() + 
HFGetNumOfFaceDenseLandmark(byref(num_landmarks)) + landmarks_array = (HPoint2f * num_landmarks.value)() + ret = HFGetFaceDenseLandmarkFromFaceToken(single_face._token, landmarks_array, num_landmarks) + if ret != 0: + logger.error(f"An error occurred obtaining a dense landmark for a single face: {ret}") + + landmark = [] + for point in landmarks_array: + landmark.append(point.x) + landmark.append(point.y) + + return np.asarray(landmark).reshape(-1, 2) + def set_track_preview_size(self, size=192): """ Sets the preview size for the face tracking session. @@ -367,10 +387,12 @@ def face_pipeline(self, image, faces: List[FaceInformation], exec_param) -> List logger.error(f"Face pipeline error: {ret}") return [] - extends = [FaceExtended(-1.0, -1.0, -1.0) for _ in range(len(faces))] + extends = [FaceExtended(-1.0, -1.0, -1.0, -1.0, -1.0, -1, -1, -1) for _ in range(len(faces))] self._update_mask_confidence(exec_param, flag, extends) self._update_rgb_liveness_confidence(exec_param, flag, extends) self._update_face_quality_confidence(exec_param, flag, extends) + self._update_face_attribute_confidence(exec_param, flag, extends) + self._update_face_interact_confidence(exec_param, flag, extends) return extends @@ -431,6 +453,18 @@ def _update_mask_confidence(self, exec_param, flag, extends): else: logger.error(f"Get mask result error: {ret}") + def _update_face_interact_confidence(self, exec_param, flag, extends): + if (flag == "object" and exec_param.enable_interaction_liveness) or ( + flag == "bitmask" and exec_param & HF_ENABLE_INTERACTION): + results = HFFaceIntereactionResult() + ret = HFGetFaceIntereactionResult(self._sess, PHFFaceIntereactionResult(results)) + if ret == 0: + for i in range(results.num): + extends[i].left_eye_status_confidence = results.leftEyeStatusConfidence[i] + extends[i].right_eye_status_confidence = results.rightEyeStatusConfidence[i] + else: + logger.error(f"Get face interact result error: {ret}") + def _update_rgb_liveness_confidence(self, 
exec_param, flag, extends: List[FaceExtended]): if (flag == "object" and exec_param.enable_liveness) or ( flag == "bitmask" and exec_param & HF_ENABLE_LIVENESS): @@ -442,6 +476,19 @@ def _update_rgb_liveness_confidence(self, exec_param, flag, extends: List[FaceEx else: logger.error(f"Get rgb liveness result error: {ret}") + def _update_face_attribute_confidence(self, exec_param, flag, extends: List[FaceExtended]): + if (flag == "object" and exec_param.enable_face_attribute) or ( + flag == "bitmask" and exec_param & HF_ENABLE_FACE_ATTRIBUTE): + attribute_results = HFFaceAttributeResult() + ret = HFGetFaceAttributeResult(self._sess, PHFFaceAttributeResult(attribute_results)) + if ret == 0: + for i in range(attribute_results.num): + extends[i].gender = attribute_results.gender[i] + extends[i].age_bracket = attribute_results.ageBracket[i] + extends[i].race = attribute_results.race[i] + else: + logger.error(f"Get face attribute result error: {ret}") + def _update_face_quality_confidence(self, exec_param, flag, extends: List[FaceExtended]): if (flag == "object" and exec_param.enable_face_quality) or ( flag == "bitmask" and exec_param & HF_ENABLE_QUALITY): diff --git a/python/inspireface/param.py b/python/inspireface/param.py index 2d1752cb..3ee65878 100644 --- a/python/inspireface/param.py +++ b/python/inspireface/param.py @@ -2,7 +2,7 @@ # Session option from inspireface.modules.core.native import HF_ENABLE_NONE, HF_ENABLE_FACE_RECOGNITION, HF_ENABLE_LIVENESS, HF_ENABLE_IR_LIVENESS, \ - HF_ENABLE_MASK_DETECT, HF_ENABLE_AGE_PREDICT, HF_ENABLE_GENDER_PREDICT, HF_ENABLE_QUALITY, HF_ENABLE_INTERACTION + HF_ENABLE_MASK_DETECT, HF_ENABLE_FACE_ATTRIBUTE, HF_ENABLE_QUALITY, HF_ENABLE_INTERACTION # Face track mode from inspireface.modules.core.native import HF_DETECT_MODE_ALWAYS_DETECT, HF_DETECT_MODE_LIGHT_TRACK, HF_DETECT_MODE_TRACK_BY_DETECTION diff --git a/python/sample_face_detection.py b/python/sample_face_detection.py index 83ed6694..bee95956 100644 --- 
a/python/sample_face_detection.py +++ b/python/sample_face_detection.py @@ -3,6 +3,12 @@ import inspireface as ifac from inspireface.param import * import click +import numpy as np + +race_tags = ["Black", "Asian", "Latino/Hispanic", "Middle Eastern", "White"] +gender_tags = ["Female", "Male", ] +age_bracket_tags = ["0-2 years old", "3-9 years old", "10-19 years old", "20-29 years old", "30-39 years old", + "40-49 years old", "50-59 years old", "60-69 years old", "more than 70 years old"] @click.command() @click.argument("resource_path") @@ -17,7 +23,7 @@ def case_face_detection_image(resource_path, image_path): assert ret, "Launch failure. Please ensure the resource path is correct." # Optional features, loaded during session creation based on the modules specified. - opt = HF_ENABLE_FACE_RECOGNITION | HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_LIVENESS + opt = HF_ENABLE_FACE_RECOGNITION | HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_LIVENESS | HF_ENABLE_INTERACTION | HF_ENABLE_FACE_ATTRIBUTE session = ifac.InspireFaceSession(opt, HF_DETECT_MODE_ALWAYS_DETECT) # Load the image using OpenCV. @@ -35,12 +41,33 @@ def case_face_detection_image(resource_path, image_path): print(f"idx: {idx}") # Print Euler angles of the face. print(f"roll: {face.roll}, yaw: {face.yaw}, pitch: {face.pitch}") - # Draw bounding box around the detected face. 
+ + # Get face bounding box x1, y1, x2, y2 = face.location - cv2.rectangle(draw, (x1, y1), (x2, y2), (0, 0, 255), 2) + + # Calculate center, size, and angle + center = ((x1 + x2) / 2, (y1 + y2) / 2) + size = (x2 - x1, y2 - y1) + angle = face.roll # use the roll angle here + + # Get rotation matrix + rotation_matrix = cv2.getRotationMatrix2D(center, angle, 1.0) + + # Apply rotation to the bounding box corners + rect = ((center[0], center[1]), (size[0], size[1]), angle) + box = cv2.boxPoints(rect) + box = box.astype(int) + + # Draw the rotated bounding box + cv2.drawContours(draw, [box], 0, (100, 180, 29), 2) + + # Draw landmarks + lmk = session.get_face_dense_landmark(face) + for x, y in lmk.astype(int): + cv2.circle(draw, (x, y), 0, (220, 100, 0), 2) # Features must be enabled during session creation to use them here. - select_exec_func = HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_LIVENESS + select_exec_func = HF_ENABLE_QUALITY | HF_ENABLE_MASK_DETECT | HF_ENABLE_LIVENESS | HF_ENABLE_INTERACTION | HF_ENABLE_FACE_ATTRIBUTE # Execute the pipeline to obtain richer face information. extends = session.face_pipeline(image, faces, select_exec_func) for idx, ext in enumerate(extends): @@ -50,6 +77,11 @@ def case_face_detection_image(resource_path, image_path): print(f"quality: {ext.quality_confidence}") print(f"rgb liveness: {ext.rgb_liveness_confidence}") print(f"face mask: {ext.mask_confidence}") + print( + f"face eyes status: left eye: {ext.left_eye_status_confidence} right eye: {ext.right_eye_status_confidence}") + print(f"gender: {gender_tags[ext.gender]}") + print(f"race: {race_tags[ext.race]}") + print(f"age: {age_bracket_tags[ext.age_bracket]}") # Save the annotated image to the 'tmp/' directory. 
save_path = os.path.join("tmp/", "det.jpg") diff --git a/python/sample_face_track_from_video.py b/python/sample_face_track_from_video.py index 02fbf9b5..c36d473b 100644 --- a/python/sample_face_track_from_video.py +++ b/python/sample_face_track_from_video.py @@ -2,7 +2,7 @@ import cv2 import inspireface as ifac from inspireface.param import * - +import numpy as np @click.command() @click.argument("resource_path") @@ -51,8 +51,34 @@ def case_face_tracker_from_video(resource_path, source, show): # Process frame here (e.g., face detection/tracking). faces = session.face_detection(frame) for idx, face in enumerate(faces): + print(f"{'==' * 20}") + print(f"idx: {idx}") + # Print Euler angles of the face. + print(f"roll: {face.roll}, yaw: {face.yaw}, pitch: {face.pitch}") + + # Get face bounding box x1, y1, x2, y2 = face.location - cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2) + + # Calculate center, size, and angle + center = ((x1 + x2) / 2, (y1 + y2) / 2) + size = (x2 - x1, y2 - y1) + angle = face.roll # use the roll angle here + + # Get rotation matrix + rotation_matrix = cv2.getRotationMatrix2D(center, angle, 1.0) + + # Apply rotation to the bounding box corners + rect = ((center[0], center[1]), (size[0], size[1]), angle) + box = cv2.boxPoints(rect) + box = box.astype(int) + + # Draw the rotated bounding box + cv2.drawContours(frame, [box], 0, (100, 180, 29), 2) + + # Draw landmarks + lmk = session.get_face_dense_landmark(face) + for x, y in lmk.astype(int): + cv2.circle(frame, (x, y), 0, (220, 100, 0), 2) if show: cv2.imshow("Face Tracker", frame) diff --git a/python/test/data/RD/d1.jpeg b/python/test/data/RD/d1.jpeg deleted file mode 100644 index b6f841b3..00000000 Binary files a/python/test/data/RD/d1.jpeg and /dev/null differ diff --git a/python/test/data/RD/d2.jpeg b/python/test/data/RD/d2.jpeg deleted file mode 100644 index c212929b..00000000 Binary files a/python/test/data/RD/d2.jpeg and /dev/null differ diff --git a/python/test/data/RD/d3.jpeg 
b/python/test/data/RD/d3.jpeg deleted file mode 100644 index 6030d471..00000000 Binary files a/python/test/data/RD/d3.jpeg and /dev/null differ diff --git a/python/test/data/RD/d4.jpeg b/python/test/data/RD/d4.jpeg deleted file mode 100644 index f5863d84..00000000 Binary files a/python/test/data/RD/d4.jpeg and /dev/null differ diff --git a/python/test/data/RD/d5.jpeg b/python/test/data/RD/d5.jpeg deleted file mode 100644 index 56078d20..00000000 Binary files a/python/test/data/RD/d5.jpeg and /dev/null differ diff --git a/python/test/data/bulk/Nathalie_Baye_0002.jpg b/python/test/data/bulk/Nathalie_Baye_0002.jpg deleted file mode 100644 index 70ffc90a..00000000 Binary files a/python/test/data/bulk/Nathalie_Baye_0002.jpg and /dev/null differ diff --git a/python/test/data/bulk/Rob_Lowe_0001.jpg b/python/test/data/bulk/Rob_Lowe_0001.jpg deleted file mode 100644 index 39e94a04..00000000 Binary files a/python/test/data/bulk/Rob_Lowe_0001.jpg and /dev/null differ diff --git a/python/test/data/bulk/Rob_Lowe_0002.jpg b/python/test/data/bulk/Rob_Lowe_0002.jpg deleted file mode 100644 index 91012435..00000000 Binary files a/python/test/data/bulk/Rob_Lowe_0002.jpg and /dev/null differ diff --git a/python/test/data/bulk/jntm.jpg b/python/test/data/bulk/jntm.jpg deleted file mode 100644 index af5f91ca..00000000 Binary files a/python/test/data/bulk/jntm.jpg and /dev/null differ diff --git a/python/test/data/bulk/kun.jpg b/python/test/data/bulk/kun.jpg deleted file mode 100644 index a6a34b46..00000000 Binary files a/python/test/data/bulk/kun.jpg and /dev/null differ diff --git a/python/test/data/bulk/view.jpg b/python/test/data/bulk/view.jpg deleted file mode 100644 index 7d0bdd5e..00000000 Binary files a/python/test/data/bulk/view.jpg and /dev/null differ diff --git a/python/test/data/bulk/woman.png b/python/test/data/bulk/woman.png deleted file mode 100644 index dd287092..00000000 Binary files a/python/test/data/bulk/woman.png and /dev/null differ diff --git 
a/python/test/data/bulk/woman_search.jpeg b/python/test/data/bulk/woman_search.jpeg deleted file mode 100644 index 75a383a2..00000000 Binary files a/python/test/data/bulk/woman_search.jpeg and /dev/null differ diff --git a/python/test/data/bulk/yifei.jpg b/python/test/data/bulk/yifei.jpg deleted file mode 100644 index 948661f7..00000000 Binary files a/python/test/data/bulk/yifei.jpg and /dev/null differ diff --git a/python/test/data/pose/left_face.jpeg b/python/test/data/pose/left_face.jpeg deleted file mode 100644 index 1b9b7853..00000000 Binary files a/python/test/data/pose/left_face.jpeg and /dev/null differ diff --git a/python/test/data/pose/left_wryneck.png b/python/test/data/pose/left_wryneck.png deleted file mode 100644 index f7f1651d..00000000 Binary files a/python/test/data/pose/left_wryneck.png and /dev/null differ diff --git a/python/test/data/pose/lower_face.jpeg b/python/test/data/pose/lower_face.jpeg deleted file mode 100644 index cfd98f68..00000000 Binary files a/python/test/data/pose/lower_face.jpeg and /dev/null differ diff --git a/python/test/data/pose/right_face.png b/python/test/data/pose/right_face.png deleted file mode 100644 index 2150073d..00000000 Binary files a/python/test/data/pose/right_face.png and /dev/null differ diff --git a/python/test/data/pose/right_wryneck.png b/python/test/data/pose/right_wryneck.png deleted file mode 100644 index aec02564..00000000 Binary files a/python/test/data/pose/right_wryneck.png and /dev/null differ diff --git a/python/test/data/pose/rise_face.jpeg b/python/test/data/pose/rise_face.jpeg deleted file mode 100644 index b1ea0b22..00000000 Binary files a/python/test/data/pose/rise_face.jpeg and /dev/null differ diff --git a/python/test/data/rotate/rot_0.jpg b/python/test/data/rotate/rot_0.jpg deleted file mode 100644 index 2fd96c40..00000000 Binary files a/python/test/data/rotate/rot_0.jpg and /dev/null differ diff --git a/python/test/data/rotate/rot_180.jpg b/python/test/data/rotate/rot_180.jpg deleted file 
mode 100644 index cc519ccd..00000000 Binary files a/python/test/data/rotate/rot_180.jpg and /dev/null differ diff --git a/python/test/data/rotate/rot_270.jpg b/python/test/data/rotate/rot_270.jpg deleted file mode 100644 index d14c45e0..00000000 Binary files a/python/test/data/rotate/rot_270.jpg and /dev/null differ diff --git a/python/test/data/rotate/rot_90.jpg b/python/test/data/rotate/rot_90.jpg deleted file mode 100644 index 478e875a..00000000 Binary files a/python/test/data/rotate/rot_90.jpg and /dev/null differ diff --git a/python/test/data/search/Mary_Katherine_Smart_0001_5k.jpg b/python/test/data/search/Mary_Katherine_Smart_0001_5k.jpg deleted file mode 100755 index ec232451..00000000 Binary files a/python/test/data/search/Mary_Katherine_Smart_0001_5k.jpg and /dev/null differ diff --git a/python/test/data/search/Teresa_Williams_0001_1k.jpg b/python/test/data/search/Teresa_Williams_0001_1k.jpg deleted file mode 100755 index d6a9913c..00000000 Binary files a/python/test/data/search/Teresa_Williams_0001_1k.jpg and /dev/null differ diff --git a/python/test/data/video/810_1684206192.mp4 b/python/test/data/video/810_1684206192.mp4 deleted file mode 100644 index 1f682c4e..00000000 Binary files a/python/test/data/video/810_1684206192.mp4 and /dev/null differ diff --git a/python/test/test_settings.py b/python/test/test_settings.py index 2201b4a0..22cd279f 100644 --- a/python/test/test_settings.py +++ b/python/test/test_settings.py @@ -8,14 +8,14 @@ ENABLE_BENCHMARK_TEST = True # Enabling will run all the CRUD tests, which will take time -ENABLE_CRUD_TEST = True +ENABLE_CRUD_TEST = False # Enabling will run the face search benchmark, which takes time and must be configured with the correct # 'LFW_FUNNELED_DIR_PATH' parameter ENABLE_SEARCH_BENCHMARK_TEST = True # Enabling will run the LFW dataset precision test, which will take time -ENABLE_LFW_PRECISION_TEST = True +ENABLE_LFW_PRECISION_TEST = False # Testing model name TEST_MODEL_NAME = "Pikachu" diff --git 
a/python/test/unit/test_tracker_module.py b/python/test/unit/test_tracker_module.py index 697a291a..0d85e98f 100644 --- a/python/test/unit/test_tracker_module.py +++ b/python/test/unit/test_tracker_module.py @@ -84,24 +84,6 @@ def test_face_pose(self): right_face_roll = faces[0].roll self.assertEqual(True, right_face_roll > 30) - def test_face_track_from_video(self): - # Read a video file - video_gen = read_video_generator(get_test_data("video/810_1684206192.mp4")) - results = [self.engine_tk.face_detection(frame) for frame in video_gen] - num_of_frame = len(results) - num_of_track_loss = len([faces for faces in results if not faces]) - total_track_ids = [faces[0].track_id for faces in results if faces] - num_of_id_switch = len([id_ for id_ in total_track_ids if id_ != 1]) - - # Calculate the loss rate of trace loss and switching id - track_loss = num_of_track_loss / num_of_frame - id_switch_loss = num_of_id_switch / len(total_track_ids) - - # Not rigorous, only for the current test of this video file - self.assertEqual(True, track_loss < 0.05) - self.assertEqual(True, id_switch_loss < 0.1) - - @optional(ENABLE_BENCHMARK_TEST, "All benchmark related tests have been closed.") class FaceTrackerBenchmarkCase(unittest.TestCase): benchmark_results = list() diff --git a/python/tmp/det.jpg b/python/tmp/det.jpg index 39f8c82c..7b298fd3 100644 Binary files a/python/tmp/det.jpg and b/python/tmp/det.jpg differ diff --git a/tools/error_table.md b/tools/error_table.md index 0836ed8a..0bb29fd1 100644 --- a/tools/error_table.md +++ b/tools/error_table.md @@ -18,33 +18,34 @@ | 16 | HERR_SESS_TRACKER_FAILURE | 1283 | Tracker module not initialized | | 17 | HERR_SESS_INVALID_RESOURCE | 1290 | Invalid static resource | | 18 | HERR_SESS_NUM_OF_MODELS_NOT_MATCH | 1291 | Number of models does not match | - | 19 | HERR_SESS_PIPELINE_FAILURE | 1288 | Pipeline module not initialized | - | 20 | HERR_SESS_REC_EXTRACT_FAILURE | 1295 | Face feature extraction not registered | - | 21 | 
HERR_SESS_REC_DEL_FAILURE | 1296 | Face feature deletion failed due to out of range index | - | 22 | HERR_SESS_REC_UPDATE_FAILURE | 1297 | Face feature update failed due to out of range index | - | 23 | HERR_SESS_REC_ADD_FEAT_EMPTY | 1298 | Feature vector for registration cannot be empty | - | 24 | HERR_SESS_REC_FEAT_SIZE_ERR | 1299 | Incorrect length of feature vector for registration | - | 25 | HERR_SESS_REC_INVALID_INDEX | 1300 | Invalid index number | - | 26 | HERR_SESS_REC_CONTRAST_FEAT_ERR | 1303 | Incorrect length of feature vector for comparison | - | 27 | HERR_SESS_REC_BLOCK_FULL | 1304 | Feature vector block full | - | 28 | HERR_SESS_REC_BLOCK_DEL_FAILURE | 1305 | Deletion failed | - | 29 | HERR_SESS_REC_BLOCK_UPDATE_FAILURE | 1306 | Update failed | - | 30 | HERR_SESS_REC_ID_ALREADY_EXIST | 1307 | ID already exists | - | 31 | HERR_SESS_FACE_DATA_ERROR | 1310 | Face data parsing | - | 32 | HERR_SESS_FACE_REC_OPTION_ERROR | 1320 | An optional parameter is incorrect | - | 33 | HERR_FT_HUB_DISABLE | 1329 | FeatureHub is disabled | - | 34 | HERR_FT_HUB_OPEN_ERROR | 1330 | Database open error | - | 35 | HERR_FT_HUB_NOT_OPENED | 1331 | Database not opened | - | 36 | HERR_FT_HUB_NO_RECORD_FOUND | 1332 | No record found | - | 37 | HERR_FT_HUB_CHECK_TABLE_ERROR | 1333 | Data table check error | - | 38 | HERR_FT_HUB_INSERT_FAILURE | 1334 | Data insertion error | - | 39 | HERR_FT_HUB_PREPARING_FAILURE | 1335 | Data preparation error | - | 40 | HERR_FT_HUB_EXECUTING_FAILURE | 1336 | SQL execution error | - | 41 | HERR_FT_HUB_NOT_VALID_FOLDER_PATH | 1337 | Invalid folder path | - | 42 | HERR_FT_HUB_ENABLE_REPETITION | 1338 | Enable db function repeatedly | - | 43 | HERR_FT_HUB_DISABLE_REPETITION | 1339 | Disable db function repeatedly | - | 44 | HERR_ARCHIVE_LOAD_FAILURE | 1360 | Archive load failure | - | 45 | HERR_ARCHIVE_LOAD_MODEL_FAILURE | 1361 | Model load failure | - | 46 | HERR_ARCHIVE_FILE_FORMAT_ERROR | 1362 | The archive format is incorrect | - | 47 | 
HERR_ARCHIVE_REPETITION_LOAD | 1363 | Do not reload the model | - | 48 | HERR_ARCHIVE_NOT_LOAD | 1364 | Model not loaded | + | 19 | HERR_SESS_LANDMARK_NUM_NOT_MATCH | 1300 | The number of input landmark points does not match | + | 20 | HERR_SESS_PIPELINE_FAILURE | 1288 | Pipeline module not initialized | + | 21 | HERR_SESS_REC_EXTRACT_FAILURE | 1295 | Face feature extraction not registered | + | 22 | HERR_SESS_REC_DEL_FAILURE | 1296 | Face feature deletion failed due to out of range index | + | 23 | HERR_SESS_REC_UPDATE_FAILURE | 1297 | Face feature update failed due to out of range index | + | 24 | HERR_SESS_REC_ADD_FEAT_EMPTY | 1298 | Feature vector for registration cannot be empty | + | 25 | HERR_SESS_REC_FEAT_SIZE_ERR | 1299 | Incorrect length of feature vector for registration | + | 26 | HERR_SESS_REC_INVALID_INDEX | 1300 | Invalid index number | + | 27 | HERR_SESS_REC_CONTRAST_FEAT_ERR | 1303 | Incorrect length of feature vector for comparison | + | 28 | HERR_SESS_REC_BLOCK_FULL | 1304 | Feature vector block full | + | 29 | HERR_SESS_REC_BLOCK_DEL_FAILURE | 1305 | Deletion failed | + | 30 | HERR_SESS_REC_BLOCK_UPDATE_FAILURE | 1306 | Update failed | + | 31 | HERR_SESS_REC_ID_ALREADY_EXIST | 1307 | ID already exists | + | 32 | HERR_SESS_FACE_DATA_ERROR | 1310 | Face data parsing | + | 33 | HERR_SESS_FACE_REC_OPTION_ERROR | 1320 | An optional parameter is incorrect | + | 34 | HERR_FT_HUB_DISABLE | 1329 | FeatureHub is disabled | + | 35 | HERR_FT_HUB_OPEN_ERROR | 1330 | Database open error | + | 36 | HERR_FT_HUB_NOT_OPENED | 1331 | Database not opened | + | 37 | HERR_FT_HUB_NO_RECORD_FOUND | 1332 | No record found | + | 38 | HERR_FT_HUB_CHECK_TABLE_ERROR | 1333 | Data table check error | + | 39 | HERR_FT_HUB_INSERT_FAILURE | 1334 | Data insertion error | + | 40 | HERR_FT_HUB_PREPARING_FAILURE | 1335 | Data preparation error | + | 41 | HERR_FT_HUB_EXECUTING_FAILURE | 1336 | SQL execution error | + | 42 | HERR_FT_HUB_NOT_VALID_FOLDER_PATH | 1337 | Invalid folder 
path | + | 43 | HERR_FT_HUB_ENABLE_REPETITION | 1338 | Enable db function repeatedly | + | 44 | HERR_FT_HUB_DISABLE_REPETITION | 1339 | Disable db function repeatedly | + | 45 | HERR_ARCHIVE_LOAD_FAILURE | 1360 | Archive load failure | + | 46 | HERR_ARCHIVE_LOAD_MODEL_FAILURE | 1361 | Model load failure | + | 47 | HERR_ARCHIVE_FILE_FORMAT_ERROR | 1362 | The archive format is incorrect | + | 48 | HERR_ARCHIVE_REPETITION_LOAD | 1363 | Do not reload the model | + | 49 | HERR_ARCHIVE_NOT_LOAD | 1364 | Model not loaded |
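The renumbered error table above maps `HResult` codes to symbolic names, which is useful when a ctypes call such as `HFGetFaceAttributeResult` returns a non-zero value. A minimal sketch of a lookup helper, assuming nothing beyond a few rows transcribed from the table (the `describe_herr` function and the partial `HERR_NAMES` map are illustrative, not part of the SDK):

```python
# A few HResult codes transcribed from rows of tools/error_table.md.
# This lookup helper is illustrative only; it is not part of the InspireFace API.
HERR_NAMES = {
    1283: "HERR_SESS_TRACKER_FAILURE",
    1288: "HERR_SESS_PIPELINE_FAILURE",
    1291: "HERR_SESS_NUM_OF_MODELS_NOT_MATCH",
    1329: "HERR_FT_HUB_DISABLE",
    1360: "HERR_ARCHIVE_LOAD_FAILURE",
    1364: "HERR_ARCHIVE_NOT_LOAD",
}

def describe_herr(code: int) -> str:
    """Map an HResult returned by the C API to its symbolic name."""
    return HERR_NAMES.get(code, f"unrecognized error code {code}")
```

A call site could then log `describe_herr(ret)` instead of the bare integer whenever `ret != 0`, which makes the `logger.error` messages in the Python wrapper considerably easier to triage.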