Researchers work to tame deadly selfies

Researchers are working on an app that could save people from being killed while taking dangerous selfies.

Carnegie Mellon University announced that researchers there are working with colleagues at the Indraprastha Institute of Information Technology in Delhi, India, to take on the issue of deadly selfies.

People around the globe have been putting themselves in reckless situations, on railroad tracks or at cliff edges, to grab a memorable selfie. Researchers found that individual deaths most often resulted from falls from high places, while most group deaths happened around water, some in capsized boats.

"In India, a number of deaths occurred when friends or lovers posed on railroad tracks, which is widely regarded as a symbol of long-term commitment in that culture," Carnegie Mellon reported. "Gun-related deaths in selfies occurred only in the U.S. and Russia. Road- and vehicle-related selfies and animal-related selfies also were associated with deaths."

[Image: map of selfie-related deaths worldwide. Credit: Carnegie Mellon University]

Researchers are using machine learning to study cases of deadly selfies around the world. People have been increasingly putting themselves in dangerous situations to get a memorable selfie.

Men accounted for three out of every four deaths, the report noted.

There's also concern that selfie deaths will continue to rise as taking dangerous selfies grows in popularity, with people using hashtags like #dangerousselfie and #extremeselfie.

Researchers culled public records to compile a list of 127 deaths associated with people around the world taking selfies between March 2014 and September 2016. Using that information, along with news reports on selfie-related deaths, researchers were able to design a system that uses location, image and text to classify whether a selfie was taken during a dangerous situation.

With machine learning, the researchers then taught a computer to look for dangerous selfies on social media sites. The computer, using image recognition, looked for dangerous locations like extreme heights, locations near water or near railways and busy roads. Analysis of the image itself, as well as of any text it contained, helped train the computer to classify a selfie as dangerous or not.
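The report does not detail the researchers' actual model, but the approach it describes, combining location, image, and text signals into a single dangerous/not-dangerous verdict, can be sketched as follows. All feature names, keyword lists, and weights here are illustrative assumptions, not the published system:

```python
# Hypothetical sketch of a selfie-danger classifier that fuses three
# signals: the geotagged location type, labels from image recognition,
# and hashtags in the caption. Keywords and weights are made up for
# illustration; a real system would learn them from labeled data.

DANGEROUS_PLACES = {"cliff", "rooftop", "railway", "highway"}
DANGEROUS_OBJECTS = {"train", "gun", "ledge", "open water"}
RISKY_HASHTAGS = {"#dangerousselfie", "#extremeselfie"}

def classify_selfie(location_type, image_labels, caption, threshold=1.0):
    """Return True if the combined signals suggest a dangerous selfie."""
    score = 0.0
    if location_type in DANGEROUS_PLACES:                        # location signal
        score += 1.0
    score += 0.5 * len(DANGEROUS_OBJECTS & set(image_labels))    # image signal
    words = set(caption.lower().split())
    score += 0.5 * len(RISKY_HASHTAGS & words)                   # text signal
    return score >= threshold

# A selfie geotagged at a cliff edge with a risky hashtag
print(classify_selfie("cliff", ["person", "sky"], "on the edge #extremeselfie"))  # True
# A selfie in a cafe with a harmless caption
print(classify_selfie("cafe", ["coffee", "person"], "morning brew"))  # False
```

In practice the researchers trained the weights with machine learning rather than hand-tuning them, but the fusion of location, image, and text evidence follows the same pattern.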

According to Carnegie Mellon, the system distinguished dangerous selfies from non-risky ones 73% of the time.

That technology will be critical to developing an app that could be used to decrease the number of selfie deaths.

An app, which has not yet been developed, could be designed to warn a user, or even disable the phone, if a selfie is being taken in a dangerous situation. The problem, though, is that some people might treat such a warning as bragging rights, proof that they're brave enough to put themselves in a dangerous situation.

"There can be no app for stupidity," Hemank Lamba, a Ph.D. student in Carnegie Mellon's Institute for Software Research, said in a statement.

The app also could be used to pinpoint areas where people are routinely taking dangerous selfies so they could be marked as "no selfie" zones.

Carnegie Mellon also noted that an app could be used for augmented reality games, like Pokemon Go, to keep users from putting themselves in risky situations while playing.

"When you see a problem in society," he explained, "you find ways to use technology to solve it."

IDG Insider

