The Potential of OpenAI and ChatGPT: A Call for Responsible Regulation
Over the past few years, artificial intelligence (AI) has grown at an exponential rate. New advances are revolutionizing how we do everything, from writing a quick note to colleagues to developing a business plan that is 100% automated.
While these advances are undoubtedly changing the world, we need to look at them with a critical, unbiased eye.
Two of the more interesting technologies are OpenAI and ChatGPT. Odds are you have heard of, or even used, these platforms already, as they offer exciting new possibilities.
However, like any new technology, advancing AI carries potential risks worth considering. Without proper regulation, we could face a scenario where the risks outweigh any benefits we gain, particularly when these technologies are used in healthcare.
What is OpenAI?
OpenAI was founded in 2015 in San Francisco by a group that included Elon Musk, Sam Altman, Greg Brockman, Wojciech Zaremba, Ilya Sutskever, and John Schulman.
They wanted to create a new AI that could empower people while remaining safe for the world.
Since its founding, OpenAI has built its tools on deep learning (DL), machine learning (ML), and reinforcement learning (RL) technologies integrated into the OpenAI system.
This has led to an impressive lineup of tools like DALL·E (an AI for image generation) and Codex (an AI that translates natural language into code and powers GitHub Copilot).
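To give a sense of how developers actually consume these tools, here is a minimal sketch of requesting an image from DALL·E through OpenAI's Python library. It is illustrative only: it assumes the openai package is installed and an OPENAI_API_KEY environment variable is set, and the prompt and size are made up for this example.

```python
# Minimal sketch: requesting an image from DALL-E via OpenAI's Python library.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the prompt and size below are purely illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.images.generate(
    prompt="A watercolor painting of a hospital ward assisted by friendly robots",
    n=1,                # number of images to generate
    size="1024x1024",   # a supported square resolution
)

print(response.data[0].url)  # temporary URL pointing to the generated image
```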
Okay, So What Is ChatGPT?
ChatGPT is a natural language processing (NLP) tool driven by advanced AI technology. Like other OpenAI products, it generates original content based on prompts entered by human beings.
It can create all kinds of content, like blog articles, business plans, and email campaigns. In fact, just to give you an example, the title of this article was produced using ChatGPT.
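For illustration, here is a minimal sketch of asking a GPT chat model to draft an article title through OpenAI's chat API. Again, this assumes the openai Python package and an API key; the model name and prompt are assumptions for the example, not a record of how this article's title was actually produced.

```python
# Minimal sketch: asking a GPT chat model to draft an article title.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model and prompt are illustrative, not the author's actual setup.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # hypothetical choice; any chat-capable model works
    messages=[
        {"role": "user",
         "content": "Suggest a title for an article calling for responsible "
                    "regulation of OpenAI and ChatGPT."},
    ],
)

print(response.choices[0].message.content)
```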
ChatGPT is not OpenAI itself, but it was made by the same company along with additional partners.
It can answer questions and complete a wide range of tasks, but it is not always accurate, which creates real risk when its output is used without verifying the underlying facts and statistics.
Why Should We Regulate These Technologies?
While both OpenAI and ChatGPT offer many exciting benefits, there are serious risks we need to examine. Without regulation, these technologies can be used for malicious purposes.
There have already been cases of deepfake technology being used to create fake images, videos, and audio of people without their consent.
AI has also been used to rapidly populate websites with content that is meant to be factual but, absent fact-checking, turns out to be erroneous.
Most importantly, AI has been developed primarily by middle-aged white men.
Nothing against that demographic, but it means any intrinsic bias (whether or not the developers are aware of it) can be written into the code and training data behind such powerful AI tools.
Then there are the ethical concerns. With a powerful AI tool crafting software, technology, and visual media, it becomes far too easy to manipulate public discourse. Take, for example, the height of the global pandemic.
Imagine if these tools had been widely available to those who were either anti-vaccination or pro-vaccination. The flood of social media posts and articles claiming to be authentic would have been an overwhelming tidal wave.
Even OpenAI's own chief technology officer has called for the regulation of ChatGPT and other generative technologies. Mira Murati is quoted as saying:
“It’s important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible,” she said in the interview. “But we’re a small group of people, and we need a ton more input in this system and a lot more input that goes beyond the technologies — definitely regulators and governments and everyone else.”
We are likely to see more and more government oversight of these technologies, including AI audits.
That could involve anything from auditing the core data being fed into OpenAI and ChatGPT to deploying detection software that flags websites, content, and anything else generated by these tools so consumers know what they are looking at.
Kind of like a profanity warning on old-school CDs and DVDs.
The best thing the companies behind these types of AI can do is get on board now: be proactive about protecting consumers' rights and guarding against bias by ensuring people understand where content originated.
There are also calls for FTC oversight in the United States.
One AI policy think tank wants the U.S. government to investigate OpenAI to ensure that algorithmic bias, privacy practices, and the tendency to produce inaccurate results do not violate current federal consumer protection laws.
That inaccuracy is a big issue and leads us to the next topic of concern.
Using AI Tools in Healthcare
The healthcare industry is one of the most promising areas for implementing AI tools. These technologies can improve patient outcomes, reduce operational costs, and enhance the overall medical experience.
However, there are many ethical concerns about integrating such technology into the modern hospital, care unit, or group home experience.
There must be transparency and accountability when it comes to patient safety.
Involving AI opens the door to inaccurate results that could lead to detrimental patient care. Bound up in that concern is the potential for AI to perpetuate bias and discrimination.
Since many minority communities already suffer from a lack of trust in our current healthcare system, we need to be doubly sure such integrations are free of inaccuracies and biases so we do not erode what trust capital we do have.
Then there is the ever-present threat hanging over the field: AI replacing human healthcare providers.
As good as AI is at enhancing the capabilities of our medical system, it lacks the human factor — that capacity for empathy and personal touch that comes with human-to-human interactions.
Therefore, we need to regulate the use of OpenAI, ChatGPT, and other AI-backed tools to ensure they are used in a responsible and ethical manner that benefits the patients, providers, and systems we rely upon.
Wrapping it Up
Both OpenAI and ChatGPT can be the answer to many of the efficiency and communication issues we are experiencing around the globe.
Without regulation, however, these tools carry risks that could outweigh any benefits, especially in the healthcare industry. That is why we must remain open-minded about oversight that ensures their safety.
Now is the time to contact our leaders, educate them on the technologies in question, and encourage regulation that strikes a sensible compromise.
We want the societal step forward that comes with integrating powerful technology, not the two steps back of moving too far too fast.
Written by: Emmanuel J. Osemota