Hitachi Vantara Next 2019 Part 3: The dark side of data and machine learning
By Ryan Noik 15 October 2019 | Categories: Corporate Events

Shifting gears from the technicalities of its new offerings, the Hitachi Vantara conference, which took place in Las Vegas last week, then explored some of the ethical and societal challenges that data brings.
Techno-sociologist, New York Times contributor and author Zeynep Tufekci explained that even as we are in a major period of transition, with AI and machine learning bringing major changes and opportunities, societal shifts of this magnitude come with their pitfalls. Tufekci stressed that all technology has its lights and its shadows, and it was the latter that her keynote address laid bare.
“Machine learning offers an entirely different way to use computers from how we have employed them in the past. With machine learning, we throw all this data at them and then leave it to them to generate insights beyond our control. They are less like something we have instructed and more like something we have grown,” she explained.
The mysterious machine
Tufekci elaborated that the tricky part of this approach is that the machines generating these insights are black boxes to us – we don’t understand what they are doing, nor how they ultimately come up with the insights they produce.
That, she warned, could have truly dangerous consequences. To illustrate her point, she relayed an experience she had with YouTube’s recommendation algorithm, which is driven by artificial intelligence and machine learning.
She explained that while researching Donald Trump’s rallies for a New York Times column she was writing, she started watching several of his speeches on YouTube to ensure she was quoting him correctly. Then, she noticed, the algorithm started recommending increasingly extreme videos, surfacing white supremacist and Holocaust denial content in her feed.
The reason for this, she discovered, was that the algorithm powering YouTube’s recommendation engine had used machine learning to increase viewers’ watch time. In doing so, the machine had discovered a human vulnerability: edgier content is more attractive, and content that pushes boundaries draws more engagement.
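To make that dynamic concrete, the toy sketch below shows how a recommender whose only objective is predicted watch time will keep surfacing the most boundary-pushing content whenever extremeness correlates with longer viewing. It is a hypothetical illustration of the mechanism Tufekci described, not YouTube’s actual system; the video names, scores and scoring rule are all assumptions for the example.

```python
# Toy illustration of watch-time-only recommendation (hypothetical, not YouTube's system).
from dataclasses import dataclass


@dataclass
class Video:
    title: str
    extremeness: float  # hypothetical label: 0.0 = mainstream, 1.0 = fringe


def predicted_watch_time(video: Video) -> float:
    """Stand-in for a learned model: if boundary-pushing content held attention
    longer in the training data, the learned score rises with extremeness."""
    return 5.0 + 4.0 * video.extremeness


def recommend(candidates: list[Video]) -> Video:
    # Pure watch-time maximisation: nothing in the objective penalises extremeness.
    return max(candidates, key=predicted_watch_time)


if __name__ == "__main__":
    catalogue = [
        Video("Campaign rally highlights", 0.2),
        Video("Provocative commentary", 0.6),
        Video("Conspiracy deep-dive", 0.9),
    ]
    # The engine surfaces the most extreme item, even though no engineer wrote a
    # rule asking for extremeness -- it is simply the best proxy for the metric.
    print(recommend(catalogue).title)  # -> Conspiracy deep-dive
```

The point of the sketch is that the behaviour emerges from the choice of metric alone: optimise a proxy such as watch time and anything correlated with it, desirable or not, gets amplified.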
Zeynep Tufekci
Human frailty uncovered
“YouTube discovered this vulnerability without a single engineer intending to push more extreme views. The result, however, is that viewers are pushed to more extreme views automatically,” she warned.

Inadvertently promoting greater polarization is not the only risk. Tufekci pointed out that an algorithm written to help hotels find people to promote Las Vegas to may well generate results that identify compulsive gamblers, or people in manic states.
“The algorithm may well bring them more people to visit Las Vegas, but may also be doing something that is harmful to people’s wellbeing or unethical,” she warned. Another way machine learning can go awry is that machines could be prejudiced in dimensions we hadn’t considered, such as discriminating against female job applicants they identify as likely to fall pregnant.
Dark state of data misuse
And the potential of data and machine learning can become darker still, with number plate or facial recognition data being used by totalitarian or authoritarian governments to clamp down on protesters or those assisting them.
So how then should companies entering this brave new data economy deal with the dark potential of data abuse? Tufekci strongly urged assembling a ‘Red Team’ dedicated to pre-empting how the data they are gathering could be used for nefarious purposes. Guidelines can then be created to mitigate those risks while reaping the rewards that data management can bring.
“Get people in the room speculating about what can go wrong. Thinking about it before it does go wrong can ensure it doesn’t,” she concluded.