* Scores of Muslim women featured in app that was taken down
* Technologies such as deepfakes, tracking used to harass women
* Victims struggle to be taken seriously, get justice
An open-source app hosted on the GitHub platform called ‘Bulli Bai’ – a derogatory term for Muslim women – had shared pictures of dozens of women without their consent before it was taken down.
Six months ago, pilot Hana Khan saw her picture on an app that appeared to be auctioning scores of Muslim women in India. The app was quickly taken down, no one was charged, and the issue shelved – until a similar app popped up on New Year’s Day.
Khan was not on the new app called Bulli Bai – a slur for Muslim women – that was hawking activists, journalists, an actor, politicians, and Nobel Laureate Malala Yousafzai as maids.
Amid growing outrage, the app was taken down, and four suspects were arrested this week. The fake auctions that were shared widely on social media are just the latest examples of how technology is being used – often with ease, speed, and little expense – to put women at risk through online abuse, theft of privacy or sexual exploitation.
For Muslim women in India who are often abused online, it is an everyday risk, even as they use social media to call out hatred and discrimination against their minority community.
“When I saw my picture on the app, my world shook. I was upset and angry that someone could do this to me, and I became angrier as I realized this nameless person was getting away with it,” said Khan, who filed a police complaint against the first app, Sulli Deals, another pejorative term for Muslim women.
“This time, I felt so much dread and despair that it was happening again to my friends, to Muslim women like me. I don’t know how to make it stop,” Khan, a commercial pilot in her 30s, told the Thomson Reuters Foundation.
A spokesperson for GitHub, which hosted both apps, said it had “long-standing policies against content and conduct involving harassment, discrimination, and inciting violence”, adding: “We suspended a user account following the investigation of reports of such activity, all of which violate our policies.”
MISCONCEPTION
Advances in technology have heightened risks for women across the world, be it trolling or doxxing with their personal details revealed, surveillance cameras, location tracking, or deepfake pornographic videos featuring doctored images.
Deepfakes – or artificial intelligence-generated synthetic media – are used to create porn, with apps that let users strip clothes off women or swap their faces into explicit videos. Digital abuse of women is pervasive because “everybody has a device and a digital presence,” said Adam Dodge, chief executive of EndTAB, a U.S.-based nonprofit tackling tech-enabled abuse.
“The violence has become easier to perpetrate, as you can get at somebody anywhere in the world. The order of magnitude of harm is also greater because you can upload something and show it to the world in a matter of seconds,” he said. “And there is a permanency to it because that photo or video exists forever online,” he added.
The emotional and psychological impact of such abuse is “just as excruciating” as physical abuse, with the effects compounded by the virality, public nature, and permanence of the content online, said Noelle Martin, an Australian activist. At 17, Martin discovered her image had been photoshopped into pornographic images and distributed. Her campaign against image-based abuse helped change the law in Australia.
But victims struggle to be heard, she said. “There is a dangerous misconception that the harms of technology-facilitated abuse are not as real, serious, or potentially lethal as abuse with a physical element,” she said.
“For victims, this misconception makes speaking out, seeking support, and accessing justice much more difficult.”
PERSECUTION
Tracking lone creators and rogue coders is hard, and technology platforms tend to shield anonymous users, who can easily create a fake email or social media profile. Even lawmakers are not spared: in November, the U.S. House of Representatives censured Republican Paul Gosar over a photoshopped anime video that showed him killing Democrat Alexandria Ocasio-Cortez. He then retweeted the video.
“With any new technology we should immediately be thinking about how and when it will be misused and weaponized to harm girls and women online,” said Dodge.
“Technology platforms have created a very imbalanced atmosphere for victims of online abuse, and the traditional ways of seeking help when we are harmed in the physical world are not as available when the abuse occurs online,” he said.
Some technology firms are taking action. Following reports that its AirTags – locator devices that can be attached to keys and wallets – were being used to track women, Apple launched an app to help users shield their privacy.
In India, the women on the auction apps are still shaken. Ismat Ara, a journalist showcased on Bulli Bai, called it “nothing short of online harassment.”
It was “violent, threatening and intending to create a feeling of fear and shame in my mind, as well as in the minds of women in general and the Muslim community,” Ara said in a police complaint that she posted on social media. Arfa Khanum Sherwani, also featured for sale, wrote on Twitter: “The auction may be fake but the persecution is real.”
Indian police said on Thursday they had arrested a 20-year-old man they suspect created an online app that shared pictures of Muslim women for a virtual “auction”, as an investigation into the case of communal harassment widened.
K.P.S. Malhotra, a police official in the capital New Delhi, said his team had arrested a 20-year-old engineering student from Jorhat in the eastern state of Assam after a probe that involved the state-run Computer Emergency Response Team.
“He is the person who had created the Bullibai app on GitHub. He had also created the Twitter handle @bullibai_ and other handles,” Malhotra said.
Police in the western city of Mumbai, who are also investigating the app, have separately arrested three people this week, including two 21-year-old engineering students and an 18-year-old woman.
Mumbai police said they were investigating whether the app, which did not involve any actual auctioning of people, was part of a “larger conspiracy”.
Several Indian Muslim journalists were targeted by the app, including Ismat Ara who filed and then shared on social media a police complaint on Sunday that said the app was “designed to insult Muslim women”.
“After today’s arrest by @DelhiPolice, I hope the culprits behind this elaborate harassment of Muslim women, including journalists like myself, will ultimately be caught & punished,” Ara said in a tweet on Thursday.
Muslims account for around 14% of India’s 1.3 billion population. Some sections of the community have been at odds with Prime Minister Narendra Modi’s administration and Hindu right-wing supporters, including over a controversial 2019 citizenship law that triggered large-scale protests.
The youngest of those arrested so far is from the northern Indian state of Uttarakhand. The 18-year-old began spending time on social media and made contact with Hindu right-wing users after finishing her school-leaving exams last year, a local police official who spoke to her earlier this week told Reuters.
The official, who declined to be named, said she had told him that her actions were based on Hindu right-wing ideology, which she had picked up on social media platforms, including Facebook, WhatsApp and Twitter.
“She came to social media to distract herself but she kept getting entangled in it,” the official said.
Reuters