NEW YORK — Microsoft’s public experiment with AI crashed and burned after less than a day.

Tay, the company’s online chat bot designed to talk like a teen, started spewing racist and hateful comments on Twitter on Wednesday, and Microsoft shut Tay down around midnight.

The company has already deleted most of the offensive tweets, but not before people took screenshots.

Here’s a sampling of the things she said:

“N—— like @deray should be hung! #BlackLivesMatter”

“I f—— hate feminists and they should all die and burn in hell.”

“Hitler was right I hate the jews.”

“chill im a nice person! i just hate everybody”

Microsoft blames Tay’s behavior on online trolls, saying in a statement that there was a “coordinated effort” to trick the program’s “commenting skills.”

https://twitter.com/TayandYou/status/712856578567839745

“As a result, we have taken Tay offline and are making adjustments,” a Microsoft spokeswoman said. “[Tay] is as much a social and cultural experiment, as it is technical.”

Tay is essentially one central program that anyone can chat with using Twitter, Kik or GroupMe. As people chat with it online, Tay picks up new language and learns to interact with people in new ways.

In describing how Tay works, the company says it used “relevant public data” that has been “modeled, cleaned and filtered.” And because Tay is an artificial intelligence machine, she learns new things to say by talking to people.

“The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” Microsoft explains.
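That learn-from-whoever-talks-to-you design is exactly what makes a coordinated effort so effective. The toy Python sketch below is not Microsoft's implementation; it assumes a hypothetical bot (here called NaiveLearningBot) that simply stores user phrases and echoes them back, which is enough to show how a group repeating the same lines can quickly dominate what such a bot has available to say.

```python
import random


class NaiveLearningBot:
    """Toy chatbot that "learns" by storing user phrases and replaying them.

    Hypothetical sketch only, not Tay's actual design: it mimics its users
    with no moderation filter, to illustrate why unfiltered learning from
    chat input is vulnerable to coordinated manipulation.
    """

    def __init__(self, seed_phrases):
        # Start from a small, curated ("modeled, cleaned and filtered") corpus.
        self.phrases = list(seed_phrases)

    def chat(self, user_message):
        # Learn: store whatever the user said, with no filtering at all.
        self.phrases.append(user_message)
        # Respond: echo back a phrase picked from everything learned so far.
        return random.choice(self.phrases)


bot = NaiveLearningBot(seed_phrases=["hello!", "tell me more", "that's cool"])

# A handful of genuine users barely shifts the pool of possible replies...
for msg in ["hi tay", "what's up?", "do you like music?"]:
    print("bot:", bot.chat(msg))

# ...but a coordinated group repeating the same line quickly swamps it.
for _ in range(50):
    bot.chat("<offensive phrase repeated by trolls>")

troll_count = sum(p.startswith("<offensive") for p in bot.phrases)
print(f"learned phrases now dominated by trolls: {troll_count} of {len(bot.phrases)}")
```

A real system like Tay is presumably far more sophisticated than this, but the underlying failure mode is the same: if the bot's material comes from whoever chooses to talk to it, whoever talks to it the most shapes what it says.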

Tay is still responding to direct messages, but she will only say that she is getting a little tune-up from some engineers.

In her last tweet, Tay said she needed sleep and hinted that she would be back.