A pandemic is raging with devastating consequences, and long-standing problems with racial bias and political polarization are coming to a head. Artificial intelligence (AI) has the potential to help us deal with these challenges. However, AI’s risks have become increasingly apparent. Scholarship has illustrated cases of AI opacity and lack of explainability, design choices that result in bias, negative impacts on personal well-being and social interactions, and changes in power dynamics between individuals, corporations, and the state, contributing to rising inequalities. Whether AI is developed and used in good or harmful ways will depend in large part on the legal frameworks governing and regulating it.
There should be a new guiding tenet to AI regulation, a principle of AI legal neutrality asserting that the law should tend not to discriminate between AI and human behavior. Currently, the legal system is not neutral. An AI that is significantly safer than a person may be the best choice for driving a vehicle, but existing laws may prohibit driverless vehicles. A person may manufacture higher-quality goods than a robot at a similar cost, but a business may automate because it saves on taxes. AI may be better at generating certain types of innovation, but businesses may not want to use AI if this restricts ownership of intellectual-property rights. In all these instances, neutral legal treatment would ultimately benefit human well-being by helping the law better achieve its underlying policy goals.
Consider the American tax system. AI and people are engaging in the same sorts of commercially productive activities—but the businesses for which they work are taxed differently depending on who, or what, does the work. For instance, automation allows businesses to avoid employer wage taxes. So if a chatbot costs a company as much before taxes as an