On Monday morning, organizers of NeurIPS, the largest annual gathering of AI researchers in the world, gave Best Paper awards to the authors of three research papers, including one detailing OpenAI’s GPT-3 language model. The week also began with AI researchers refusing to review Google’s AI papers until grievances stemming from the firing of Ethical AI team co-lead Timnit Gebru are resolved. Googlers have described Gebru’s dismissal as an instance of “unprecedented research censorship” that raises questions about corporate influence in the field. According to one analysis, Google publishes more AI research than any other company or institution.
The tension between corporate interests, human rights, ethics, and power could be seen at workshops throughout the week. At the Muslim in AI workshop on Tuesday, participants explored GPT-3’s anti-Muslim bias, as well as the ways AI and IoT devices are used to control and surveil Muslims in China. The Washington Post reported this week that Huawei is thought to be working on AI with a “Uighur alarm” that lets authorities track members of the Muslim minority group. Huawei is a platinum sponsor of NeurIPS. In response to questions about Huawei and how NeurIPS handles ethical considerations when it comes to sponsors, a NeurIPS spokesperson told VentureBeat on Friday that a new sponsorship committee is being formed to evaluate sponsor criteria and “determine policies for vetting and accepting sponsors.”
Following a keynote address Wednesday, Microsoft Research Lab director Chris Bishop was asked if Big Tech companies’ monopoly on infrastructure and machine learning talent is stifling innovation. He responded by arguing that cloud computing allows developers to rent compute resources instead of undertaking the more expensive task of buying the hardware that powers machine learning.
On Friday, the Resistance AI workshop highlighted research that urges tech companies to go beyond scale to address societal issues. The workshop also showcased research that compares Big Tech’s research funding tactics to those employed by Big Tobacco. That workshop was organized to bring together an intersectional group of marginalized people from a range of backgrounds and champion AI that gives power back to people and steers clear of oppression.
“We were frustrated with the limitations of ‘AI for good’ and how it could be co-opted as a form of ethics-washing,” organizers said in a statement to VentureBeat. “In some ways, we still have a long way to go: Many of us are adjacent to big tech and academia, and we want to do better at engaging those who don’t have this kind of institutional power.”
This was also the first year NeurIPS required paper authors to include societal impact and financial disclosure statements. Financial disclosures are due in January, when authors submit final versions of their papers. Reviewers rejected four papers this year on ethical grounds.
On a very different front, the technical effort that went into putting on the NeurIPS research conference was historic. In all, 22,000 people attended the virtual conference, compared to 13,000 in-person attendees last year in Vancouver. The formula for how to put on a virtual NeurIPS event came out of ICLR and ICML, major AI research conferences held in the spring and summer, respectively.
Prior to the pandemic, prominent AI researchers had argued in favor of exploring remote options as a way to cut the carbon footprint associated with flying to events around the world. Some of those ideas were put to the test on short notice at the International Conference on Learning Representations (ICLR), the first major all-digital AI research conference.
Organizers said they learned from those earlier events that Zoom was not a great venue for poster sessions. Instead, NeurIPS poster sessions took place in gather.town, a spatial video chat service in which each attendee has an avatar and can move freely between posters summarizing research.
One matter that hasn’t been resolved yet is whether AI research conferences will continue to offer a virtual attendance option after the pandemic is contained. In addition to enabling greater access, virtual events mean lower costs for organizers, which translates to reduced dependence on corporate sponsorship money. But if a hybrid format is employed, an organizing committee member cautioned against the virtual offering becoming a second-class experience for those without the resources to travel to an in-person event.
One participant in a Q&A session between attendees and organizers summarized the mix of factors: “I sincerely hope we are able to return to in-person meetings. But I also think the benefits of the virtual experience should not be discarded, especially to enable more people to participate, who may face hardships in attending in person, such as for financial, visa-related, or other reasons.”
It’s tough to say what will come of continuing efforts to address harm caused by AI, or whether the virtual conference format will be extended. But between an AI ethics meltdown at Google and NeurIPS hosting the largest virtual AI conference held to date, after this week machine learning may never be the same, and I hope that’s a good thing.
Thanks for reading,
Senior AI Staff Writer