AIs are malleable, so it's surprisingly easy to get a "good" AI to do bad things. This is one reason AI security matters so much.
How much of this is moot if/when open-source models get really capable?
Are there infosec project ideas you're excited about? What have other folks done that you think resembles good work here?