Transparency system means ‘sneaky algorithms’ can’t hide

Earlier this month Facebook was accused of having political bias in its “trending news” section. Facebook was predictably outraged that it could ever be accused of such a thing and immediately set up an investigation. The social networking giant concluded that it found “no evidence of political bias” in the selection of its stories but admitted it couldn’t exclude the possibility of “unintentional bias” in the implementation of its guidelines.

Facebook, for its part, has implemented a number of changes since the investigation unfolded, but is that enough? An increasing number of people now use social networks like Facebook and Twitter as their primary source of news, which gives those same networks plenty of power to manipulate us as they choose.

The scary part is that no one really knows how these algorithms work, yet they influence our actions on a daily basis. They shape our moods, our election results and even the way we perceive the world. Google runs the world's most powerful search engine, yet the way its algorithms drive its applications remains deeply opaque. That opacity persists even after its photo app made the news for a racist auto-tagging feature.

In the US, predictive policing is on the rise, but the algorithms in use have led to discrimination against African Americans and over-policing in some areas. A recent report by ProPublica found that the algorithms used to predict future crime were "remarkably unreliable": only "20 percent of the people predicted to commit violent crimes actually went on to do so".

A group of researchers at Carnegie Mellon University has recognised this issue and developed a system that examines in depth how algorithms make decisions that influence our daily lives, from credit applications to predictive policing. They call this system Quantitative Input Influence (QII), and the aim is to improve the "transparency of such decision-making systems". In predictive policing, for example, it can be used by the police to "test the system for early detection of harms like race-based discrimination".

A person’s age and gender are often factors that influence key decisions, and the researchers give an example of how they can be used in hiring:

“Consider a system that assists in hiring decisions for a moving company. Gender and the ability to lift heavy weights are inputs to the system. They are positively correlated with each other and with the hiring decisions. Yet transparency into whether the system uses the weight lifting ability or the gender in making its decisions (and to what degree) has substantive implications for determining if it is engaging in discrimination,” the researchers write in their report [PDF].
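The core idea behind measuring input influence can be sketched in a few lines of code: replace one input with a random draw from its marginal distribution (an intervention) and count how often the decision flips. The hiring model and applicant data below are entirely hypothetical, invented for illustration; this is a simplified sketch of QII's intervention idea, not the researchers' actual implementation.

```python
import random

random.seed(0)

# Hypothetical hiring model: ostensibly about lifting ability,
# but it secretly requires the applicant to be male as well.
def hire(gender, lifts_heavy):
    return 1 if (lifts_heavy and gender == "M") else 0

# Synthetic applicant pool in which gender and lifting ability
# are correlated, as in the researchers' example.
applicants = []
for _ in range(10000):
    gender = random.choice(["M", "F"])
    lifts_heavy = random.random() < (0.8 if gender == "M" else 0.4)
    applicants.append((gender, lifts_heavy))

def qii(feature_index):
    """Rough influence measure: randomise one input (an intervention,
    breaking its correlation with the other input) and report the
    fraction of applicants whose hiring decision flips."""
    marginal = [a[feature_index] for a in applicants]
    flips = 0
    for a in applicants:
        intervened = list(a)
        intervened[feature_index] = random.choice(marginal)
        if hire(*a) != hire(*intervened):
            flips += 1
    return flips / len(applicants)

print(f"influence of gender:          {qii(0):.2f}")
print(f"influence of lifting ability: {qii(1):.2f}")
```

Because the intervention breaks the correlation between the two inputs, the measure reveals that gender itself, not just lifting ability, is driving this toy model's decisions, which is exactly the kind of hidden discrimination the CMU system is designed to surface.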

The researchers particularly want to focus on healthcare, predictive policing, education and defence, as they feel these areas deserve the most attention in achieving algorithmic transparency. It remains to be seen whether companies will adopt the system, but it is important and necessary – especially in an age where algorithms are subtly shaping our lives.



Also read:

Facebook’s news censorship: Manipulative algorithms are nothing new

Wikipedia’s Aaron Halfaker on anonymous editors and sneaky algorithms

Hitachi doesn’t need ‘Pre-cogs’ to predict crime

World White Web: Is racism still rampant on the internet?


Ayesha Salim is Staff Writer at IDG Connect

