Whenever new technology takes hold, it raises ethical questions that can take society years to answer. Early industrialization allowed for incredible productivity increases but came at the cost of workers’ health. Eventually, laws curbed the worst abuses and defined acceptable working conditions. Today, we face another ethical quandary. Artificial intelligence has generated debates about ethical technology use; however, most companies are flocking to the technology without giving much consideration to these important questions. How can you use AI ethically and responsibly?
Ethical Technology in the Age of AI
Until recently, the ethical issues surrounding AI were mostly hypothetical, focusing on futuristic scenarios like a rogue AI attacking humanity. The launch of ChatGPT in 2022, however, stirred up new discussions that are far more relevant today. Since its debut, hundreds of AI apps have been released, many relying on ChatGPT behind the scenes. Applications capable of generating images and video have raised the stakes even further.
These programs rely on existing information to generate new content. ChatGPT was trained on massive quantities of books, articles, and other internet publications. Image-generating applications like Stable Diffusion and DALL-E studied images of real people and artworks. As a result, these programs can create content that closely resembles real works, which raises several ethical questions. Chief among them: at what point does a generated work qualify as plagiarism or copyright infringement? Does using AI violate a client’s trust?
Intellectual Property and Copyright Issues
ChatGPT and applications based on it may unintentionally result in copyright infringement. Because these applications are trained on real-world writings, they can output very similar text or, in some cases, exact copies of what has been written before. Publishing this output could run afoul of copyright law. Furthermore, ChatGPT plagiarism checkers simply do not work reliably. These tools make an educated guess based on statistical patterns typical of ChatGPT output, but they cannot confirm that ChatGPT produced a given piece of text.
It’s important to audit any AI-generated text used in official documents or publications. Searching snippets of the text in Google can quickly reveal whether it has been copied from another source. Alternatively, refining the text over several rounds of prompts can help make it truly unique.
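For teams that want to automate part of this audit, the sketch below shows one possible approach: break an AI-generated draft into short word-level snippets and flag any that appear verbatim in a set of reference documents you already have on hand. The function names, snippet length, and reference texts are illustrative assumptions, not a finished tool.

```python
# Minimal sketch: flag snippets of AI-generated text that appear
# verbatim in known reference documents. Snippet length and the
# reference texts are placeholder assumptions.

def snippets(text, n=8):
    """Yield overlapping n-word snippets from a block of text."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def audit_draft(draft, reference_docs, n=8):
    """Return snippets of the draft that also appear in the references."""
    combined = " ".join(doc.lower() for doc in reference_docs)
    return sorted({s for s in snippets(draft, n) if s in combined})

if __name__ == "__main__":
    draft = "..."      # the AI-generated draft you want to check
    sources = ["..."]  # texts the model may have drawn on
    for match in audit_draft(draft, sources):
        print("Possible copied passage:", match)
```

A check like this only catches verbatim overlaps with sources you already possess, so manual searches of flagged passages and human review of the final document remain essential.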
Ethical Use of Likenesses
Image generation, and by extension video generation, is where AI enters much murkier territory. While public-facing programs like DALL-E don’t allow users to request pictures of specific famous individuals, it doesn’t take much work to get a private AI model up and running. Individuals with decent computers can train a model and produce images of anyone within days. Video generation is still maturing, but new models like Sora show that we’re not far from video that is indistinguishable from real footage.
Ethical technology use would require that your company have written permission from an individual to use their likeness for AI training and content generation. Any such agreement should have clear limits on what the content can be used for. A major concern right now is the use of AI to create deceptive content. A Chinese company was recently scammed out of millions when an AI-generated video of the CFO instructed an employee to transfer the money.
Does AI Deceive a Client?
Companies have an ethical duty to their clients to uphold agreements and provide services as promised. Using AI could lead to situations where a company fails to deliver on this obligation. A recent case illustrates this perfectly. A lawyer used ChatGPT to write legal documents, ostensibly to save time. However, the AI-generated documents included citations to cases that did not exist, which in the legal world is a massive error. The law firm was fined and sanctioned as a result.
In this case, a client paid full price for hours of legal work but received only a fraction of the effort they expected. Other companies should take note.
How Transparent Should Companies Be When Using AI?
Ultimately, companies need to consider how they use AI and how that use could affect their valued clients. In some cases, clients should be made aware that AI is being used. How much a company needs to disclose depends largely on how the AI is being used and how it may affect the client.
Narrow AI
Narrow AI is trained to perform very specific tasks. In most cases, it simply performs tasks a human could do by following the same procedure. For example, a business analytics platform uses narrow AI to analyze financial data and output insights. Systems like these that are used purely internally have no direct bearing on your clients and can be used as you see fit.
However, if narrow AI applications affect the information a client receives, you may want to develop a policy that governs how that information is reviewed and verified by human associates. Consider mentioning this in your terms and conditions or noting it in the documents themselves. A simple disclaimer stating that portions of a report were developed with AI and checked by a named associate provides full transparency and instills confidence in clients.
Generative AI
With generative AI, increased transparency is recommended. Several publicly available image generators use small watermarks to denote that the item was created with AI. Consider publishing a document that explains how your company uses generative AI tools.
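If your company publishes AI-generated imagery, a small disclosure label can be applied automatically before anything leaves the building. The sketch below is one minimal way to do this with the Pillow imaging library in Python; the file names and label text are placeholders, not a prescription.

```python
# Minimal sketch: stamp a small "AI-generated" label onto an image
# before it is published. File names and label text are placeholders.
from PIL import Image, ImageDraw

def add_ai_label(path_in, path_out, label="AI-generated"):
    image = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Draw the label in the bottom-left corner with a one-pixel shadow
    # so it stays readable on both light and dark backgrounds.
    x, y = 10, image.height - 20
    draw.text((x + 1, y + 1), label, fill="black")
    draw.text((x, y), label, fill="white")
    image.save(path_out)

add_ai_label("generated_figure.png", "generated_figure_labeled.png")
```

A visible label like this is easy for a client to notice; pairing it with a short published policy on how your company uses generative tools covers both the image itself and the process behind it.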
One of the most common uses of generative AI today is the company chatbot. Many businesses do not inform clients that they are talking to an AI, but we would encourage you to be honest. Clients may feel frustrated if a human suddenly takes over without the context of the previous chat, or if they cannot reach a real person to solve their issues.
Implementing AI Ethically
Consider how your clients may be affected by the tools you use and strive to be transparent where necessary. Ask yourself how you would feel if you found out AI was used for a service you rely on.
To learn more about how you can ethically implement AI in your operations, contact Edafio to schedule a meeting with one of our representatives.