Our tool is a web browser extension that is triggered when an image is clicked. The image URL is saved, and Azure Computer Vision generates a text description of the image. We then use Azure text-to-speech, together with a couple of Azure Functions, to produce an audio file from that description and play it through the extension so the user can hear it. In short, the extension uses AI to describe what is happening in an image, then uses Azure services to generate an audio file that the extension plays back.
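As a rough illustration of this flow (not the shipped code), the content-script sketch below listens for clicks on images, sends the clicked image's URL to a backend endpoint, and plays the audio URL that comes back. The endpoint URL, request body, and `audioUrl` response field are placeholders assumed for this example.

```typescript
// content-script.ts -- minimal sketch of the click-to-audio flow.
// The endpoint URL and the { audioUrl } response shape are hypothetical;
// the real extension talks to the Azure Function endpoints described below.
const DESCRIBE_ENDPOINT = "https://<function-app>.azurewebsites.net/api/describe-image";

document.addEventListener("click", async (event) => {
  const target = event.target as HTMLElement;
  if (target.tagName !== "IMG") return;

  const imageUrl = (target as HTMLImageElement).src; // save the clicked image's URL

  // Ask the backend to describe the image and synthesize speech for the description.
  const response = await fetch(DESCRIBE_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ imageUrl }),
  });
  const { audioUrl } = await response.json(); // assumed response shape

  // Play the generated audio file (served from Blob Storage) in the page.
  new Audio(audioUrl).play();
});
```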
Demo Video: Video Link
Powerpoint Presentation: PPTX Link
Business Requirement Document / Proposal: PDF Link
Dev Spec: PDF Link
Demo of the CognitiveService-based Features: LINK
Mirror site for demo: LINK
Architecture/Infrastructure diagram for creating an Edge Browser Extension that generates and plays audio-descriptions of images on a webpage upon user interaction with the image.
Note that, to fit the structure of an Edge Extension, some additional infrastructure was deployed:
- Azure Functions to turn every CognitiveService-based feature into an endpoint that accepts HTTP requests (a minimal sketch of one such endpoint follows this list).
- Deployment of an Azure Storage Blob Container to store the generated audio files.
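The sketch below shows what one of these HTTP-triggered functions could look like, assuming the classic Node.js programming model: it describes the image with the Computer Vision REST API, synthesizes the caption with the Speech text-to-speech REST API, uploads the resulting MP3 to the Blob container, and returns its URL. The route, environment-variable names, container name, and voice are assumptions for illustration, not the deployed configuration.

```typescript
// describeImage/index.ts -- hypothetical HTTP endpoint wrapping the Cognitive Services calls.
import { AzureFunction, Context, HttpRequest } from "@azure/functions";
import { BlobServiceClient } from "@azure/storage-blob";

const httpTrigger: AzureFunction = async (context: Context, req: HttpRequest): Promise<void> => {
  const imageUrl = req.body?.imageUrl;

  // 1. Describe the image with the Computer Vision "describe" REST API.
  const visionRes = await fetch(
    `${process.env.VISION_ENDPOINT}/vision/v3.2/describe?maxCandidates=1`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": process.env.VISION_KEY!,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: imageUrl }),
    }
  );
  const caption: string = (await visionRes.json()).description.captions[0].text;

  // 2. Turn the caption into speech with the Speech service text-to-speech REST API.
  const ssml = `<speak version='1.0' xml:lang='en-US'>
      <voice name='en-US-JennyNeural'>${caption}</voice></speak>`;
  const ttsRes = await fetch(
    `https://${process.env.SPEECH_REGION}.tts.speech.microsoft.com/cognitiveservices/v1`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": process.env.SPEECH_KEY!,
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "audio-16khz-128kbitrate-mono-mp3",
        "User-Agent": "audio-description-extension",
      },
      body: ssml,
    }
  );
  const audio = Buffer.from(await ttsRes.arrayBuffer());

  // 3. Store the MP3 in the Blob container and hand its URL back to the extension.
  const container = BlobServiceClient
    .fromConnectionString(process.env.STORAGE_CONNECTION_STRING!)
    .getContainerClient("audio-descriptions");
  const blob = container.getBlockBlobClient(`${Date.now()}.mp3`);
  await blob.uploadData(audio, { blobHTTPHeaders: { blobContentType: "audio/mpeg" } });

  context.res = { status: 200, body: { caption, audioUrl: blob.url } };
};

export default httpTrigger;
```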
Looking forward
- Store the remaining secrets in Azure Key Vault (see the sketch after this list)
- Deploy a custom App Service domain so that all of the function apps are hosted under a shared domain
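One possible way to do the Key Vault piece, sketched under the assumption that the function apps have a managed identity with read access to the vault; the vault URL and secret names below are placeholders, not the project's real resources:

```typescript
// getSecrets.ts -- hypothetical helper for reading function-app secrets from Key Vault.
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

// Placeholder vault URL; DefaultAzureCredential uses the function app's
// managed identity when deployed (and developer credentials locally).
const client = new SecretClient(
  "https://audio-description-kv.vault.azure.net",
  new DefaultAzureCredential()
);

export async function getCognitiveServiceKeys() {
  const visionKey = await client.getSecret("vision-key");
  const speechKey = await client.getSecret("speech-key");
  return { visionKey: visionKey.value, speechKey: speechKey.value };
}
```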
ComputerVision Service Function App
Extension/Frontend
- HTML5, Edge Extensions Framework
Libraries/SDK
- Azure Cognitive Services
- Azure OpenAI Services (Utilized GPT davinci model)

Deployment
- Azure Functions
- Azure Storage (Blob Storage container)
Handwritten proposal for audio-description generation (for videos) by Sofia
Proof-of-concept files created for video-based audio-description generation: Link
Scripts implemented to generate the proof-of-concept audio-descriptions: Link