
AI Weekly: Constructive ways to take power back from Big Tech

Facebook launched an independent oversight board and recommitted to privacy reforms this week, but after years of promises made and broken, nobody seems convinced that real change is afoot. The Federal Trade Commission (FTC) is expected to decide whether to sue Facebook soon, sources told the New York Times, following a $5 billion fine last year.

In other investigations, the Department of Justice filed suit against Google this week, accusing the Alphabet company of maintaining multiple monopolies through exclusive agreements, collection of personal data, and artificial intelligence. News also broke this week that Google’s AI will play a role in creating a virtual border wall.

In each instance, a powerful company insists it can regulate itself, even as government regulators appear to reach the opposite conclusion.

If Big Tech’s machinations weren’t enough, this week there was also news of a Telegram bot that undresses women and girls; AI being used to add or change the emotion of people’s faces in photos; and Clearview AI, a company being investigated in multiple countries, allegedly planning to introduce features for police to more responsibly use its facial recognition services. Oh, right, and there’s a presidential election campaign happening.

It’s all enough to make people reach the conclusion that they’re helpless. But that’s an illusion, one that Prince Harry, Duchess Meghan Markle, Algorithms of Oppression author Dr. Safiya Noble, and Center for Humane Technology director Tristan Harris attempted to dissect earlier this week in a talk hosted by Time. Dr. Noble began by acknowledging that AI systems in social media can pick up, amplify, and deepen existing systems of inequality like racism or sexism.

“Those things don’t necessarily start in Silicon Valley, but I think there’s really little regard for that when companies are looking at maximizing the bottom line through engagement at all costs, it actually has a disproportionate harm and cost to vulnerable people. These are things we’ve been studying for more than 20 years, and I think they’re really important to bring out this kind of profit imperative that really thrives off of harm,” Noble said.

As Markle pointed out during the conversation, the majority of extremists in Facebook groups got there because Facebook’s recommendation algorithm suggested they join those groups.

To take action, Noble said, pay attention to public policy and regulation. Both are crucial to conversations about how businesses operate.

“I think one of the most important things people can do is to vote for policies and people that are aware of what’s happening and who are able to truly intervene, because we’re born into the systems that we’re born into,” she said. “If you ask my parents what it was like being born before the Civil Rights Act was passed, they had a qualitatively different life experience than I have. So I think part of what we have to do is understand the way that policy truly shapes the environment.”

When it comes to misinformation, Noble said people would be wise to advocate in favor of sufficient funding for what she called “counterweights” like schools, libraries, universities, and public media, which she said have been negatively impacted by Big Tech companies.

“When you have a sector like the tech sector that is so extractive — it doesn’t pay taxes, it offshores its profits, it defunds the democratic educational counterweights — those are the places where we really need to intervene. That’s where we make systemic long-term change, is to reintroduce funding and resources back into those spaces,” she said.

Accountability is one of five values found in many sets of AI ethics principles. During the talk, Tristan Harris emphasized the need for systemic accountability and transparency at Big Tech companies so the public can better understand the scope of problems. For example, Facebook could form a board through which the public reports harms; Facebook could then produce quarterly reports on progress toward removing those harms.

For Google, one way to increase transparency could be to release more information about AI ethics principle review requests made by Google employees. A Google spokesperson told VentureBeat that Google does not share this information publicly, beyond some examples. Getting that data on a quarterly basis might reveal more about the politics of Googlers than anything else, but I’d sure like to know if Google employees have reservations about the company increasing surveillance along the U.S.-Mexico border or which controversial projects attract the most objections at one of the most powerful AI companies on Earth.

Since Harris and others released The Social Dilemma on Netflix about a month ago, a number of people have criticized the documentary for failing to include the voices of women, particularly Black women like Dr. Noble, who have spent years assessing the issues undergirding The Social Dilemma, such as how algorithms can automate harm. That said, it was a pleasure to see Harris and Noble speak together about how Big Tech can build more equitable algorithms and a more inclusive digital world.

For a breakdown of what The Social Dilemma misses, you can read this interview with Meredith Whittaker, which took place this week at a virtual conference. But she also contributes to the heartening conversation about solutions. One helpful piece of advice from Whittaker: Dismiss the idea that the algorithms are superhuman or superior technology. Technology isn’t infallible, and Big Tech isn’t magical. Rather, the grip large tech companies have on people’s lives is a reflection of the material power of large corporations.

“I think that ignores the fact that a lot of this isn’t actually the product of innovation. It’s the product of a significant concentration of power and resources. It’s not progress. It’s the fact that we all are now, more or less, conscripted to carry phones as part of interacting in our daily work lives, our social lives, and being part of the world around us,” Whittaker said. “I think this ultimately perpetuates a myth that these companies themselves tell, that this technology is superhuman, that it’s capable of things like hacking into our lizard brains and completely taking over our subjectivities. I think it also paints a picture that this technology is somehow impossible to resist, that we can’t push back against it, that we can’t organize against it.”

Whittaker, a former Google employee who helped organize a walkout at Google offices worldwide in 2018, also finds workers organizing within companies to be an effective solution. She encouraged employees to recognize methods that have proven effective in recent years, like whistleblowing to inform the public and regulators. Volunteerism and voting, she said, may not be enough.

“We now have tools in our toolbox across tech, like the walkout, a number of Facebook workers who have whistleblown and written their stories as they leave, that are becoming common sense,” she said.

In addition to understanding how power shapes perceptions of AI, Whittaker encourages people to try to better understand how AI influences our lives today. Amid so many other things this week, it might have been easy to miss, but a group working to help people understand how AI impacts their daily lives dropped its first introductory video, featuring Spelman College computer science professor Dr. Brandeis Marshall and actress Eva Longoria.

The COVID-19 pandemic, a historic economic recession, calls for racial justice, and the consequences of climate change have made this year challenging, but one positive outcome is that these events have led a lot of people to question their priorities and how each of us can make a difference.

The idea that tech companies can regulate themselves appears to some degree to have dissolved. Institutions are now taking steps to reduce Big Tech’s power, but even with Congress, the FTC, and the Department of Justice — the three main levers of antitrust — acting to rein in Big Tech companies, I don’t know a lot of people who are confident the government will be able to do so. Tech policy advocates and experts, for example, openly question whether Congress can muster the political will to bring lasting, effective change.

Whatever happens in the election or with antitrust enforcement, you don’t have to feel helpless. If you want change, people at the heart of the matter believe it will require, among other things, imagination, engagement with tech policy, and a better understanding of how algorithms impact our lives in order to wrangle powerful interests and build a better world for ourselves and future generations.

As Whittaker, Noble, and the leader of the antitrust investigation in Congress have said, the power possessed by Big Tech can seem insurmountable, but if people get engaged, there are real reasons to hope for change.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer


