NEW ZEALAND — New Zealand’s prime minister is upping the pressure on Facebook and other social media platforms after they failed to stop the spread of the Christchurch mosque attack video.

Prime Minister Jacinda Ardern said Monday that the tech companies have “a lot of work” to do to curb the proliferation of content that incites hate and violence. Part of Friday’s attack, in which 50 people were killed at two mosques, was livestreamed on Facebook, and the video spread rapidly on YouTube, Twitter and other platforms.

Tech companies have been scrambling to clamp down on the graphic footage.

Facebook said Sunday that it removed 1.5 million videos of the attack in the first 24 hours. It blocked 1.2 million of them at upload, meaning they would not have been seen by users. Facebook didn’t say how many people had watched the remaining 300,000 videos.

Facebook’s ability to automatically block 1.2 million videos at upload “tells me there are powers to take a very direct approach to instances of speech that incites violence, or that incites hate,” Ardern said at a news conference Monday.

She urged all social media companies to take responsibility for how their platforms were used in the lead-up to the mosque attack and in the aftermath.

Facebook initially missed the livestream of the attack on Friday. The company said it removed the video only after being alerted to it by New Zealand police.

It said Sunday that it’s trying to take down the video in any form, including edited clips that don’t show graphic content, as well as posts expressing praise or support for the shooting.

Google, which owns YouTube, declined to comment Monday on its latest efforts to take down the videos. Twitter didn’t respond to a request for comment.

Fertile environments for hate

Many experts agree with Ardern that tech companies could do a great deal more to tackle the spread of hate-fueled and violent content, an issue over which they have faced recurrent criticism for years.

The big social media platforms have expended more effort policing copyrighted material than the kind of graphic content posted by the Christchurch shooter, according to Jean Burgess, a professor of digital media studies at Queensland University of Technology in Australia.

As a result, Facebook, Twitter and YouTube have become fertile environments in which extremist subcultures can thrive and organize, she said.

“These platforms are now racing to stamp out the sharing of media content for which they bear at least some responsibility,” Burgess said.

Facebook’s failure to catch the video before being alerted by police comes amid repeated pledges from the company about moderating content on its platform. Facebook, Twitter and other tech giants are under intense scrutiny for how they are used to spread misinformation, hate speech and graphic content.

In response, Facebook says it has hired tens of thousands of human moderators and is investing in artificial intelligence to help police its platforms.

“We continue to work around the clock to remove violating content using a combination of technology and people,” Mia Garlick, spokesperson for Facebook New Zealand, said Sunday.

The company said it started “hashing” the Christchurch shooting video, meaning that videos posted to Facebook and Instagram that are visually similar to the original can be detected and automatically removed. It’s also using audio-matching technology to find copies of the video whose images have been altered enough that they no longer match the videos already removed.
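To give a sense of how this kind of visual matching works, the sketch below implements a simple perceptual “average hash”: an image is reduced to a 64-bit fingerprint, and re-encoded or lightly altered copies still land within a small bit-distance of the original. This is purely illustrative, assuming the Pillow imaging library; the frame extraction, hash size and distance threshold here are assumptions, not Facebook’s actual, far more sophisticated system.

```python
# Illustrative average-hash matching, in the spirit of the "hashing"
# described above. NOT Facebook's implementation; all parameters here
# (8x8 hash, threshold of 10 bits) are simplifying assumptions.
from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit fingerprint


def average_hash(image: Image.Image) -> int:
    """Reduce an image to a 64-bit perceptual fingerprint."""
    # Shrink and desaturate so the hash reflects coarse structure,
    # not resolution, compression artifacts or color shifts.
    small = image.convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    # Each bit records whether a cell is brighter than the mean.
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")


def matches_known_video(frame: Image.Image, known_hashes: set[int],
                        threshold: int = 10) -> bool:
    """Flag a frame whose hash is near any previously hashed frame.

    A nonzero threshold lets re-encoded, cropped or watermarked
    copies still match the original fingerprint.
    """
    h = average_hash(frame)
    return any(hamming_distance(h, known) <= threshold
               for known in known_hashes)
```

In a system like the one Facebook describes, fingerprints of the original video’s frames would be stored once, and every new upload would be hashed and compared at ingest, which is how copies can be blocked before any user sees them.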

‘The largest platforms can still be misused’

Facebook CEO Mark Zuckerberg has yet to make a public statement about the Christchurch attack. When the murder of a man in Ohio was livestreamed on Facebook in 2017, Zuckerberg offered condolences to the victim’s family two days later.

Facebook didn’t immediately respond to a request for comment on whether or when Zuckerberg plans to comment on the New Zealand attack.

US Senator Mark Warner, who sits on a committee that has questioned the social media companies, said on Friday that it wasn’t just Facebook that needed to be held accountable.

“The rapid and wide-scale dissemination of this hateful content — live-streamed on Facebook, uploaded on YouTube and amplified on Reddit — shows how easily the largest platforms can still be misused. It is ever clearer that YouTube, in particular, has yet to grapple with the role it has played in facilitating radicalization and recruitment,” Warner said in a statement provided to CNN.

YouTube and Twitter haven’t commented on the spread of the graphic videos since Friday.

YouTube said then that it was “working vigilantly to remove any violent footage.”

Twitter said it was “continuously monitoring and removing any content that depicts the tragedy, and will continue to do so in line with the Twitter Rules.”