Artificial intelligence (AI) serves two clear goals: making people more productive, or replacing them. The two are not mutually exclusive at the moment, but only one is good for humanity in the long term. Two recent events suggest that we may need to adjust our ethical norms to make proper use of AI.
The first involves an artist who used AI to unfairly win an art competition; the second involves students using AI to write better papers faster and with less effort.
The debate surrounding these incidents follows a familiar pattern. It echoes the old arguments that schools should ban calculators and personal computers because they reduce the need for students to learn multiplication tables and let them skip primary research in favor of online searches and Wikipedia. Yet over time, the skills of using calculators and personal computers proved more valuable, in sheer efficiency, to students entering the workplace.
In short, what we ultimately have to decide is whether using AI to create better products faster should be considered cheating, or simply taken for granted.
There is a new perspective on AI: it may be easier to create AI that replaces humans than AI that augments them. The replacement approach only needs to replicate what a person does, effectively creating a digital twin of them, and companies are already doing this. Intelligent automation in manufacturing is one example: it requires no human contact, and therefore no need to share a common language, common skills, or common interests with humans.
This means that the most efficient path for AI is not augmentation but replacement: AI operating alone within its parameters is unobjectionable, while AI used to significantly augment a person, especially in a competition, will be considered cheating.
This is especially evident in self-driving cars. The current default approach is to empower the driver, which Toyota calls a “Guardian Angel.” But in testing, Intel found that giving human drivers control options in self-driving cars increases their stress, because they don’t know whether they will suddenly be asked to take over. Untrained drivers are more comfortable when the car does not offer a human driving option. This suggests that in the long run, self-driving cars that do not allow human control will be more popular and successful than those that do.
It’s normal for an artist or writer to collaborate on a work of art, a dissertation, or even a book with someone more capable than themselves. Nor is it uncommon for someone to write a book under another author’s name with that author’s permission. Would it be worse if AI took the place of the teacher, mentor, collaborator, partner, or ghostwriter?
Companies simply want high-quality work, and if they can get higher quality from a machine than from a human, they will make that choice, and already have. Just think of the processes that have automated manufacturing and warehouses over the past few decades.
We need to learn how to use AI, and how to accept work products that make the most efficient use of AI resources, while ensuring that we prevent intellectual property theft and plagiarism. If we don’t, AI development will continue to shift its focus from assisting humans to replacing them, which will be detrimental to workers in a growing number of occupations where AI performs better than they do.