Skip to content

Latest commit

 

History

History
69 lines (61 loc) · 4.78 KB

awesome-aiml-security-learning-resources.md

File metadata and controls

69 lines (61 loc) · 4.78 KB

Awesome AI/ML Security Learning Resources

AI/ML Learning Resources

ToC

  1. Books
  2. Videos
  3. Free/Paid Courses
  4. Free/Paid Labs
  5. Certifications
  6. Blogs/Articles
  7. AI/ML Security Tools

Books

Videos

Free/Paid Courses

Free/Paid Labs

AI/ML Security Certifications

Blogs/Articles

I have added links in one place as of now. I will categorise it later for better learning.

  1. LLM CS-324 from Stanford University
  2. COS 597G (Fall 2022): Understanding Large Language Models from Princeton University
  3. Intro to LLM Security from WhyLabs (Youtube)
  4. GenAI with LLM from Coursera
  5. LLM Security
  6. LLM AI Security and Governance Checklist from OWASP
  7. OWASP Top 10 for LLM Application
  8. Web LLM attacks from portswigger
  9. LLM Pentesting exam: https://secops.group/product/certified-ai-ml-pentester/
  10. Prompt injection jailbreaking: https://ogre51.medium.com/security-of-llm-apps-prompt-injection-jailbreaking-fb9fc5c883a8
  11. AI security challenges, CTF style: https://promptairlines.com/
  12. Play another LLM security challenge with Gandalf: https://gandalf.lakera.ai/
  13. Riding the RAG Trail: Access, Permissions and Context: https://www.lasso.security/blog/riding-the-rag-trail-access-permissions-and-context
  14. Securing Risks with RAG Architectures: https://ironcorelabs.com/security-risks-rag/
  15. Secure your RAG: https://ironcorelabs.com/security-risks-rag/
  16. Mitigating Security Risks in Retrieval Augmented Generation (RAG) LLM Applications: https://cloudsecurityalliance.org/blog/2023/11/22/mitigating-security-risks-in-retrieval-augmented-generation-rag-llm-applications#
  17. RAG: The Essential Guide: https://www.nightfall.ai/ai-security-101/retrieval-augmented-generation-rag
  18. Why RAG is revolutionising GenAI: https://www.immuta.com/guides/data-security-101/retrieval-augmented-generation-rag/
  19. PortSwigger LLM attacks: https://portswigger.net/web-security/llm-attacks
  20. LLM Security portal: https://llmsecurity.net/
  21. What are foundational models: https://www.datacamp.com/blog/what-are-foundation-models
  22. World’s first bug bounty platform for AI/ML: https://huntr.com/
  23. Protect AI's OSS portfolio includes tools aimed at improving the security of AI/ML software: https://github.com/protectai
    1. LLM Guard: https://github.com/protectai/llm-guard
    2. AI/ML exploits: https://github.com/protectai/ai-exploits
    3. Model scan: https://github.com/protectai/modelscan
    4. rebuff: https://github.com/protectai/rebuff
    5. NB Defense: https://github.com/protectai/nbdefense
  24. Safeguarding LLm with llm-guard: https://medium.com/@dataenthusiast.io/language-models-at-risk-safeguarding-ai-with-llm-guard-11a3e7923af5
  25. LLM Guard Playground: https://huggingface.co/spaces/protectai/llm-guard-playground
  26. Online Courses:
    1. https://www.coursera.org/learn/generative-ai-with-llms (by DeepLearning and AWS)
    2. https://www.coursera.org/specializations/generative-ai-engineering-with-llms#courses
    3. https://www.coursera.org/specializations/generative-ai-for-cybersecurity-professionals (Specialization from IBM [3 courses])
    4. https://www.coursera.org/specializations/ai-for-cybersecurity (Specialisation from John Hopkins University)
  27. NIST AI RMF Playbook: https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook
  28. AI RMF: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
  29. Adversarial Machine Learning: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.pdf
  30. Failure Models in Machine Learning: https://securityandtechnology.org/wp-content/uploads/2020/07/failure_modes_in_machine_learning.pdf
  31. A quick check on the AI Threat Model: https://plot4.ai/assessments/quick-check
  32. Threat Modeling AI/ML by Microsoft: https://learn.microsoft.com/en-us/security/engineering/threat-modeling-aiml
  33. Security Incident Response using LLM: https://engineering.mercari.com/en/blog/entry/20241206-streamlining-security-incident-response-with-automation-and-large-language-models/
  34. The foundation of AI Security by AttackIQ: https://www.academy.attackiq.com/courses/foundations-of-ai-security

AI/ML Security Tools