Python library for emotion detection in text and images using multimodal fusion.
This library is an implementation of the SENTI-Framework. Its structure and functionality are outlined below.
This library enables emotion recognition in text and images using deep learning models. It also supports multimodal analysis through EmbraceNet+, which merges both modalities.
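For intuition, the sketch below illustrates the core idea behind EmbraceNet-style fusion: each modality is docked to a common embedding size, and every output dimension is then taken from exactly one modality. This is a minimal PyTorch sketch of the concept, not the implementation shipped in this library; the class name, feature sizes, and embedding size are all illustrative.

```python
import torch
import torch.nn as nn

class EmbraceFusion(nn.Module):
    """Minimal EmbraceNet-style fusion sketch: dock each modality to a
    common size, then stochastically assemble each output dimension
    from exactly one modality."""

    def __init__(self, input_sizes, embedding_size=256):
        super().__init__()
        self.docking = nn.ModuleList(
            [nn.Linear(size, embedding_size) for size in input_sizes]
        )
        self.embedding_size = embedding_size

    def forward(self, inputs, probabilities=None):
        # inputs: one tensor per modality, each of shape (batch, input_size_k)
        docked = torch.stack(
            [torch.relu(dock(x)) for dock, x in zip(self.docking, inputs)],
            dim=1,
        )  # (batch, n_modalities, embedding_size)
        batch, n_mod, _ = docked.shape
        if probabilities is None:
            # Equal chance of each modality contributing a dimension.
            probabilities = torch.full(
                (batch, n_mod), 1.0 / n_mod, device=docked.device
            )
        # Sample, per output dimension, which modality provides it.
        choice = torch.multinomial(
            probabilities, self.embedding_size, replacement=True
        )  # (batch, embedding_size)
        # One-hot mask over the modality axis; summing collapses it.
        mask = torch.zeros_like(docked).scatter_(1, choice.unsqueeze(1), 1.0)
        return (docked * mask).sum(dim=1)  # (batch, embedding_size)

# Example: fuse a 768-d text feature with a 512-d image feature.
fusion = EmbraceFusion(input_sizes=[768, 512], embedding_size=256)
fused = fusion([torch.randn(4, 768), torch.randn(4, 512)])
print(fused.shape)  # torch.Size([4, 256])
```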
After running emotion recognition, the detected emotion, together with information about the person expressing it and the event in which it occurs, can be stored in an ontology for further analysis. For this we provide Emonto, an extensible emotion ontology whose structure is shown in the image below.
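As an illustration of what storing a recognition result in an ontology can look like, the sketch below populates a small RDF graph with `rdflib`. The namespace and the class and property names (`Person`, `Emotion`, `Event`, `hasLabel`, `expressedBy`, `occurredDuring`) are assumptions made for this example and may not match Emonto's actual schema.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace and terms; the real Emonto schema may differ.
EMO = Namespace("http://example.org/emonto#")

g = Graph()
g.bind("emo", EMO)

# A person expressing an emotion during an event.
person, emotion, event = EMO.person_01, EMO.emotion_01, EMO.event_01
g.add((person, RDF.type, EMO.Person))
g.add((emotion, RDF.type, EMO.Emotion))
g.add((event, RDF.type, EMO.Event))
g.add((emotion, EMO.hasLabel, Literal("happiness")))
g.add((emotion, EMO.expressedBy, person))
g.add((emotion, EMO.occurredDuring, event))

g.serialize("emotions.ttl", format="turtle")  # persist for later analysis
```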
In order to use this project:
- Set the root of this project as the working directory.
- Download the `YOLO-weights` and `checkpoints` folders available here and add them to the `SentiLib/image_utils` directory.
- If you wish to use your own pretrained models with this library, add or replace the models in the `SentiLib/assets` directory (we are working on enabling users to pretrain these models themselves using our architectures).
- Install the library with `pip install .`
An interactive example of how to use this library is available here.
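Until then, the snippet below sketches what a typical workflow might look like after installation. The module and function names here are assumptions, not the library's documented API; treat the interactive example linked above as the authoritative reference.

```python
# Purely illustrative sketch: these imports and call signatures are
# assumptions about the API, not documented entry points.
from SentiLib import text_utils, image_utils, multimodal

text_emotion = text_utils.predict("I can't believe we won the finals!")
image_emotion = image_utils.predict("photos/celebration.jpg")
fused_emotion = multimodal.predict(
    text="I can't believe we won the finals!",
    image="photos/celebration.jpg",
)
print(text_emotion, image_emotion, fused_emotion)
```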
Upon publication, we will add a reference to our work here.
This research was supported by the FONDO NACIONAL DE DESARROLLO CIENTÍFICO, TECNOLÓGICO Y DE INNOVACIÓN TECNOLÓGICA - FONDECYT as executing entity of CONCYTEC under grant agreement no. 01-2019-FONDECYT-BM-INC.INV in the project RUTAS: Robots for Urban Tourism, Autonomous and Semantic web based.