Text was long considered relatively safe from adversarial attacks: a malicious agent can make minute adjustments to an image or an audio waveform, but it can't alter a word by, say, 1%. Prof. Alex Dimakis of Texas ECE and his collaborators, however, have investigated a potential threat to text-comprehension AIs.
The research was led by UT student Qi Lei, working with collaborators at IBM Research and Amazon. The study was published at SysML 2019 and covered by Nature News.
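To see why text attacks are possible despite words being discrete, consider that an attacker can search over whole-word substitutions rather than tiny numerical perturbations. The sketch below is a toy illustration of that idea, not the paper's method: a greedy word-swap attack against a trivial keyword-counting sentiment classifier. All names here (the classifier, the synonym table) are hypothetical, invented for illustration.

```python
# Toy illustration (NOT the published attack): a greedy word-swap attack
# on a trivial keyword-based sentiment classifier. Real attacks search
# over substitutions that preserve meaning for a human reader while
# flipping the model's prediction.

POSITIVE = {"great", "good", "excellent", "superb"}
NEGATIVE = {"bad", "terrible", "awful", "poor"}

def classify(words):
    """Predict 'positive' or 'negative' by counting sentiment keywords."""
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

# Hypothetical substitution table an attacker might use: each swap keeps
# the sentence readable to a human but is invisible to the keyword matcher.
SYNONYMS = {"great": "gr8", "good": "nice-ish", "excellent": "first-rate"}

def attack(words):
    """Greedily swap one word at a time until the predicted label flips."""
    original = classify(words)
    words = list(words)
    for i, w in enumerate(words):
        if w in SYNONYMS:
            words[i] = SYNONYMS[w]
            if classify(words) != original:
                return words  # label flipped with a minimal edit
    return words

sentence = ["the", "movie", "was", "great", "but", "the", "plot", "was", "awful"]
print(classify(sentence))          # prints "positive"
print(classify(attack(sentence)))  # prints "negative"
```

Swapping a single word ("great" to "gr8") flips the toy model's prediction even though a human still reads the sentence the same way, which is the discrete analogue of an imperceptible pixel-level perturbation.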