Open-source AI trimmed for efficiency produced detailed bomb-making instructions and other dangerous answers before retraining


  • UCR researchers retrained AI models to keep their safety intact when they are trimmed to run on smaller devices
  • Changing a model's exit layer strips away its protections; retraining restores the blocking of dangerous responses
  • A study using LLaVA 1.5 showed that the reduced models refused dangerous prompts after retraining

Researchers from the University of California, Riverside have tackled the problem of weakened safety in open-source artificial intelligence models adapted for smaller devices.

As these systems are trimmed to run efficiently on phones, cars, or other low-power hardware, they can lose the safeguards designed to keep them from producing offensive or dangerous material.
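The trimming in question often amounts to an "early exit": inference stops at an intermediate layer instead of running the full stack, which is faster but can skip the later layers where refusal behavior tends to live. Below is a minimal PyTorch sketch of that idea; the `ToyDecoder` class and the `exit_layer` parameter are illustrative assumptions, not the UCR team's actual code.

```python
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    """Toy transformer stack to illustrate early-exit trimming (hypothetical)."""

    def __init__(self, d_model=64, n_heads=4, n_layers=12, vocab=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens, exit_layer=None):
        # exit_layer trims the stack: layers after it are simply skipped,
        # mimicking an efficiency cut that may also skip safety behavior.
        h = self.embed(tokens)
        for i, layer in enumerate(self.layers):
            h = layer(h)
            if exit_layer is not None and i + 1 == exit_layer:
                break
        return self.head(h)

model = ToyDecoder()
tokens = torch.randint(0, 1000, (1, 8))
full_logits = model(tokens)                    # full 12-layer pass
trimmed_logits = model(tokens, exit_layer=6)   # early exit at layer 6
```

The reported fix is retraining so that the model refuses dangerous prompts regardless of which layer it exits from, rather than relying on the final layers alone.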
