Audio Classification on AWS

Background

Acoustic sound is all around us: it can be generated by machinery in a factory, by a living animal, or by a human with flu-like symptoms such as a sneeze or a cough. When sufficient audio data is collected, it can be used with machine learning for anomaly detection and classification.

In this example, the model follows the architecture described in the paper Very Deep Convolutional Neural Networks for Raw Waveforms by Wei Dai et al.; refer to the paper for more details.
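As an illustration, the sketch below shows an M5-style 1D convolutional network in the spirit of that paper, operating directly on raw waveform samples. The layer sizes follow the paper's M5 configuration and are not necessarily identical to the code in this repository.

```python
# Minimal sketch of an M5-style 1D CNN for raw waveforms (Dai et al.).
# Not the repository's exact model; layer sizes follow the paper's M5 setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class M5(nn.Module):
    def __init__(self, n_input=1, n_classes=10, n_channel=128):
        super().__init__()
        # A large first kernel with stride 4 acts like a learned filter bank.
        self.conv1 = nn.Conv1d(n_input, n_channel, kernel_size=80, stride=4)
        self.bn1 = nn.BatchNorm1d(n_channel)
        self.conv2 = nn.Conv1d(n_channel, n_channel, kernel_size=3)
        self.bn2 = nn.BatchNorm1d(n_channel)
        self.conv3 = nn.Conv1d(n_channel, 2 * n_channel, kernel_size=3)
        self.bn3 = nn.BatchNorm1d(2 * n_channel)
        self.conv4 = nn.Conv1d(2 * n_channel, 4 * n_channel, kernel_size=3)
        self.bn4 = nn.BatchNorm1d(4 * n_channel)
        self.fc = nn.Linear(4 * n_channel, n_classes)

    def forward(self, x):
        # x: (batch, 1, num_samples) raw waveform
        x = F.max_pool1d(F.relu(self.bn1(self.conv1(x))), 4)
        x = F.max_pool1d(F.relu(self.bn2(self.conv2(x))), 4)
        x = F.max_pool1d(F.relu(self.bn3(self.conv3(x))), 4)
        x = F.max_pool1d(F.relu(self.bn4(self.conv4(x))), 4)
        x = F.avg_pool1d(x, x.shape[-1])  # global average pooling over time
        return self.fc(x.squeeze(-1))     # (batch, n_classes) logits
```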

Objective

The objective of this example is to demonstrate audio classification using the SageMaker PyTorch framework, in a way that can easily be modified to suit different use cases.

Dataset

  • UrbanSound8k

What you will learn

  • Building an acoustic classification model with PyTorch
  • Building a custom container on the SageMaker PyTorch Deep Learning Framework
  • Running a PyTorch training job using SageMaker script mode (see the sketch after this list)
  • Deploying a custom model with the default SageMaker PyTorch container for inference
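The following is a minimal sketch, assuming the SageMaker Python SDK 1.x API (matching the tested sagemaker==1.64.1), of launching a script-mode PyTorch training job and deploying the result to an endpoint backed by the default SageMaker PyTorch inference container. The entry point, source directory, S3 path, and hyperparameters are illustrative placeholders rather than the repository's actual values.

```python
# Sketch of script-mode training and deployment with SageMaker Python SDK 1.x.
import sagemaker
from sagemaker.pytorch import PyTorch

role = sagemaker.get_execution_role()

estimator = PyTorch(
    entry_point="train.py",       # hypothetical training script name
    source_dir="source",          # hypothetical directory containing the script
    role=role,
    framework_version="1.5.0",    # example PyTorch version
    py_version="py3",
    train_instance_count=1,
    train_instance_type="ml.p3.2xlarge",
    hyperparameters={"epochs": 30, "batch-size": 64},
)

# Train on data previously uploaded to S3 (placeholder path).
estimator.fit({"training": "s3://<your-bucket>/urbansound8k/train"})

# Deploy the trained model to a real-time endpoint using the default
# SageMaker PyTorch serving container.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)
```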

Notes

This example has been tested with sagemaker==1.64.1.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
