Survey of Small Language Models from Penn State, ...
Deep Fact Validation
Provides web credibility models (Likert scale) to assign a trustworthiness score to a given website.
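For illustration, a minimal sketch (assumed, not taken from the repository) of how 5-point Likert credibility ratings could be aggregated into a normalized trustworthiness score; the scale, function name, and simple averaging scheme are assumptions.

```python
# Hypothetical sketch: aggregate 5-point Likert credibility ratings for a
# website into a single trustworthiness score in [0, 1]. The rating scale,
# function name, and plain averaging are assumptions, not the repo's API.
from statistics import mean

def trustworthiness_score(likert_ratings: list[int], scale_max: int = 5) -> float:
    """Map Likert ratings (1..scale_max) to a normalized score in [0, 1]."""
    if not likert_ratings:
        raise ValueError("at least one rating is required")
    if any(r < 1 or r > scale_max for r in likert_ratings):
        raise ValueError(f"ratings must lie in [1, {scale_max}]")
    # Shift to 0-based, then normalize by the scale width.
    return mean((r - 1) / (scale_max - 1) for r in likert_ratings)

# Example: three raters judging a website's credibility.
print(trustworthiness_score([4, 5, 3]))  # -> 0.75
```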
In this paper, we introduce SAShA, a new attack strategy that leverages semantic features extracted from a knowledge graph to strengthen the efficacy of the attack against standard CF models. We performed an extensive experimental evaluation to investigate whether SAShA is more effective than baseline attacks against CF models by ta…
In the dynamic landscape of medical artificial intelligence, this study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a Vision Language Foundation model, under targeted attacks such as the PGD adversarial attack.
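As a rough illustration of this attack class, here is a generic L∞ PGD sketch in PyTorch; it does not reproduce the paper's PLIP-specific setup, and the model, epsilon, step size, and iteration count are placeholder assumptions.

```python
# Generic L-infinity PGD attack sketch in PyTorch (not the paper's PLIP-specific
# code); eps, alpha, and steps are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Return adversarial images within an L-infinity ball of radius eps."""
    images = images.clone().detach()
    adv = images + torch.empty_like(images).uniform_(-eps, eps)  # random start
    adv = adv.clamp(0, 1).detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss along the gradient sign, then project back into the ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)
        adv = adv.clamp(0, 1).detach()
    return adv
```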
Codes and Datasets for our WSDM 2022 Paper: "MTLTS: A Multi-Task Framework To Obtain Trustworthy Summaries From Crisis-Related Microblogs"
Trustworthiness Monitoring & Assessment Framework
A list of tools and methods for building trustworthy software following TrustOps principles.
A matrix that clarifies the definitions of trustworthiness characteristics and their relationships across AI/ML standards.
Visualization and embedding of large datasets using various Dimensionality Reduction (DR) techniques such as t-SNE, UMAP, PaCMAP & IVHD. Implementation of custom metrics to assess DR quality, with a complete explanation and workflow.
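A short sketch of one way to quantify DR quality, using scikit-learn's built-in trustworthiness metric; the dataset, t-SNE settings, and neighbor count are placeholders, and the repository's custom metrics are not reproduced here.

```python
# Sketch of assessing embedding quality with scikit-learn's trustworthiness
# metric; dataset and t-SNE parameters are placeholders, not the repo's setup.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, trustworthiness

X, _ = load_digits(return_X_y=True)
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)

# Fraction of each point's low-dimensional neighbours that are also
# neighbours in the original space (1.0 = perfectly preserved).
score = trustworthiness(X, X_2d, n_neighbors=10)
print(f"trustworthiness: {score:.3f}")
```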
Independent continuation of a project from AstonHack 2017
Website for health data science at KDD 2021
Secure and trustworthy mobile AI.
In this work, we provide 24 combinations of attack/defense strategies and visual-based recommenders to 1) assess performance alteration on recommendations and 2) empirically verify the effect on final users through offline visual metrics.
An Assurance Process for Big Data Trustworthiness - Marco Anisetti, Claudio A. Ardagna, Filippo Berto
This repository is an implementation of the paper "Trustworthy Medical Image Segmentation with improved performance for in-distribution samples" published in Neural Networks.
Component M - Trustworthiness Monitoring & Assessment Framework
Proof of Freshness: collate proof of an authorship date.
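As a rough illustration of the idea (assumptions only, not this project's actual format), a sketch that binds a document digest to a recent public "beacon" value and a collection timestamp:

```python
# Hedged sketch of a freshness proof: hash a document together with a recent,
# unpredictable public value (a "beacon", e.g. a headline or block hash) so the
# combination cannot predate that value. The beacon source and output layout
# are assumptions, not this project's format.
import hashlib
import json
from datetime import datetime, timezone

def freshness_proof(document: bytes, beacon: str) -> dict:
    """Bundle a document digest with a public beacon and a timestamp."""
    digest = hashlib.sha256(beacon.encode() + document).hexdigest()
    return {
        "beacon": beacon,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,
    }

proof = freshness_proof(b"draft of the paper", "example public beacon value")
print(json.dumps(proof, indent=2))
```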
Proposal of a novel adversarial attack approach, the Target Adversarial Attack against Multimedia Recommender Systems (TAaMR), to investigate how MR behavior changes when the images of a category of low-recommended products (e.g., socks) are perturbed to misclassify the deep neural classifier towards the class of more recommended prod…
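For context, a minimal targeted-perturbation sketch in PyTorch that nudges an image toward a chosen target class; the recommender-system side of TAaMR, the product categories, and all hyperparameters here are illustrative assumptions.

```python
# Minimal targeted perturbation sketch in PyTorch: move an image toward a
# chosen target class of an image classifier. Only the misclassification step
# is shown; the recommender coupling of TAaMR is not, and eps/alpha/steps are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def targeted_perturbation(model, image, target_class, eps=4/255, alpha=1/255, steps=20):
    """Iteratively move a (1, C, H, W) image toward target_class within an L-inf ball."""
    original = image.clone().detach()
    adv = original.clone()
    target = torch.tensor([target_class])
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target)
        grad = torch.autograd.grad(loss, adv)[0]
        # Descend the loss toward the target class (targeted attack), then project.
        adv = adv.detach() - alpha * grad.sign()
        adv = original + (adv - original).clamp(-eps, eps)
        adv = adv.clamp(0, 1).detach()
    return adv
```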
Component K - Trustworthiness Monitoring & Assessment Framework