Microsoft has had to suspend an artificial intelligence chatbot from Twitter after it began posting offensive remarks.
The software was designed to post on social media in the style of a teenage girl, learning through its interactions with other Twitter users.
Aimed at young Americans aged 18-24, the chatbot, named Tay, was targeted by users looking to manipulate the account into posting racist and sexist remarks.
Microsoft said in a statement on Thursday that within 24 hours “we became aware of a co-ordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways”.
Most of the messages have since been deleted; the latest remaining tweet on the account, @TayandYou, begins “c u soon”.