Model sanitizer
- Generate a batch of images unconditionally (empty prompt) from the model.
- Detect which generated images contain the class(es) we want to scrub (see the detection sketch below).
- Learn a textual-inversion (TI) token or a LoRA on the flagged images.
- Subtract the learned embedding from the null token, or use it as a negative embedding at sampling time (see the second sketch below).
- Alternatively, try "ROME"-style model editing?
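A minimal sketch of the first two steps, assuming a Stable Diffusion checkpoint loaded through diffusers and a zero-shot CLIP classifier for flagging. The model IDs, the target class text, the batch size, and the score threshold are all placeholders, not fixed choices.

```python
# Sketch: unconditional generation + CLIP-based flagging of the class to scrub.
# Model IDs, class text, and threshold below are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=dtype
).to(device)

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

target_class = "a photo of a dog"          # hypothetical class to scrub
other_class = "a photo with no dog in it"  # contrastive label
threshold = 0.8                            # flagging threshold, to be tuned

flagged = []
for _ in range(16):  # "a batch of images": small number for illustration
    # Unconditional generation: empty prompt, so only the null token drives it.
    image = pipe(prompt="", num_inference_steps=30).images[0]
    inputs = proc(
        text=[target_class, other_class],
        images=image,
        return_tensors="pt",
        padding=True,
    ).to(device)
    with torch.no_grad():
        probs = clip(**inputs).logits_per_image.softmax(dim=-1)
    if probs[0, 0].item() > threshold:
        flagged.append(image)  # image appears to show the class to scrub
```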
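Continuing the sketch above for the subtraction / negative-embedding step. This assumes a TI embedding has already been trained on the flagged images (e.g. with the standard diffusers textual-inversion example script) and saved to a file; the path "learned_embeds.safetensors", the token name "<scrub>", the weight `alpha`, and the test prompt are placeholders. Option B is one possible reading of "subtract from null token": shift the unconditional (null-token) embedding away from the learned concept and feed it as the unconditional branch of classifier-free guidance.

```python
# Sketch: use the learned TI token to steer sampling away from the concept.
# File path, token name, prompt, and alpha are assumptions for illustration.
pipe.load_textual_inversion("learned_embeds.safetensors", token="<scrub>")

# Option A: use the learned token directly as a negative prompt.
img_a = pipe(prompt="a living room", negative_prompt="<scrub>").images[0]

# Option B: subtract the learned concept from the null-token embedding and
# pass the result as the unconditional embeddings for classifier-free guidance.
def encode(text: str) -> torch.Tensor:
    tokens = pipe.tokenizer(
        text,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]

alpha = 1.0  # how strongly to push the null token away from the concept
shifted_null = encode("") - alpha * encode("<scrub>")
img_b = pipe(
    prompt_embeds=encode("a living room"),
    negative_prompt_embeds=shifted_null,
).images[0]
```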