'Suspicion Machines': When Artificial Intelligence Can Ruin Your Life (lighthousereports.com)

Governments all over the world are using their citizens' personal data, from someone's children's travel history to machine-made guesses about who someone sleeps with, and combining it into 'fraud risk scores'. How these algorithms work is largely hidden from the public, but they are already transforming once well-functioning societies into a surveillance culture defined by distrust.
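
For readers unfamiliar with how such systems are typically built, here is a minimal, hypothetical sketch of a risk-scoring pipeline trained on synthetic data with scikit-learn. The feature names and the choice of a gradient-boosted classifier are assumptions for illustration only, not a description of the actual Rotterdam system covered in the investigation.

```python
# Hypothetical illustration only: a generic welfare-fraud risk-scoring pipeline
# of the kind described in the article, trained on synthetic data.
# It is NOT the actual Rotterdam model; all feature names are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000  # synthetic "welfare recipients"

# Invented personal attributes of the sort such systems ingest.
X = np.column_stack([
    rng.integers(18, 80, n),   # age
    rng.integers(0, 2, n),     # has_partner (0/1)
    rng.integers(0, 5, n),     # number_of_children
    rng.integers(0, 2, n),     # speaks_dutch (0/1)
    rng.integers(0, 30, n),    # months_on_benefits
])
# Synthetic labels standing in for past fraud-investigation outcomes.
y = rng.integers(0, 2, n)

# A classifier collapses the attributes into a single probability.
model = GradientBoostingClassifier().fit(X, y)

# The "fraud risk score" is that probability; people with the highest
# scores would be flagged for investigation.
scores = model.predict_proba(X)[:, 1]
flagged = np.argsort(scores)[::-1][:50]  # top 50 highest-risk cases
print(f"highest score: {scores.max():.2f}, flagged {len(flagged)} people")
```

The point of the sketch is only that a single opaque number is produced from intimate personal attributes, and that whoever sets the flagging threshold decides who gets investigated.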

tardigrada,

The study quoted in the article is largely based on an investigation in the Dutch city of Rotterdam, which evidently uses these algorithms. What is not mentioned, though, is that in 2020 a Dutch court ruled that SyRI, a government system that used artificial intelligence to identify potential welfare fraudsters, was illegal:

Privacy groups, the Netherlands' largest trade union federation and several Dutch citizens sued the government after SyRI was introduced in 2014... They argued the system violates human rights because it [...] created a "surveillance regime" that disproportionately targeted poorer citizens.
