The introduction of GPT-4, OpenAI's newest language model, has generated enthusiasm but also prompted discussions about the future direction of AI.
The arrival of GPT-4 set the world alight and paved the way for remarkable possibilities. The latest version of OpenAI’s language model suggests that artificial intelligence (AI) is closer than ever to thinking like us.
So, where is this going to lead? Should we fear that robots will gain free will and revolt against humanity, or can we instead keep working with them in harmony for continuous improvement?
Undeniably, AI has progressed faster than anticipated. Only recently, we witnessed the first steps and developments in data science, cognitive computing, deep learning and more.
As machines evolved, their computational skills expanded into art, research and education. Like apprentices being prepared for various job interviews, they have transformed from functional tools into extensions of ourselves, more like assistants.
We can see the marks of AI in many applications, search engines, and user interfaces. OpenAI is one of the best-known research companies, famous for products like DALL-E, a system that creates realistic images from text descriptions, and ChatGPT, a chatbot that responds to questions and fulfills requests. Among these, GPT has become the focal point.
Generative Pre-trained Transformers (GPT) are language-processing AI models that use neural networks to generate human-like text. Given a prompt, GPT can answer questions, summarize or translate text, and even write code.
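To build a rough intuition for what "generating human-like text from a prompt" means under the hood, here is a toy sketch of autoregressive next-token generation. It is not OpenAI's actual model: a tiny hand-made bigram table stands in for the huge neural network that real GPT models use to predict the next token.

```python
# Toy sketch of autoregressive text generation.
# Real GPT models predict the next token with a neural network;
# here a tiny hand-made bigram table stands in for that network.
bigram_model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_tokens, max_new_tokens=3):
    """Repeatedly append the most likely next token (greedy decoding)."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_probs = bigram_model.get(tokens[-1])
        if next_probs is None:  # no known continuation: stop early
            break
        tokens.append(max(next_probs, key=next_probs.get))
    return " ".join(tokens)

print(generate(["the"]))  # the cat sat down
```

The loop is the essential idea: each new token is chosen based on everything generated so far, which is why a prompt can steer the whole continuation.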
As the state-of-the-art version, GPT-4 brings new features and multimodal capabilities: it accepts both text and images as inputs, interprets visual prompts, analyzes memes, better understands nuance, and can even build websites from handwritten notes. Furthermore, performance benchmarks show that GPT-4 scored higher on exams such as the SAT and the Uniform Bar Exam, supports more languages, and hallucinates less than earlier models.
The full extent of its abilities remains an unknown quantity for now. However, it is already being integrated into applications across various fields.
Be My Eyes, a mobile app that helps visually impaired people, announced that it would use GPT-4 as a virtual volunteer for visual assistance. The AI model will also appear as a digital tutor on educational platforms like Duolingo and Khan Academy.
As you can see, artificial intelligence has started to pass job interviews one by one, taking on more business with increasingly prominent roles. These developments revive sensational arguments about the ultimate outcomes of advanced AI.
Dawn of the singularity
Ray Kurzweil, a world-renowned computer scientist, describes a hypothetical future scenario called the "technological singularity," in which machines will eventually surpass human intelligence.
According to Kurzweil, Artificial General Intelligence (AGI) might become inseparable from humans through brain-computer interfaces. Such a collective consciousness could allow AGI to reach the singularity and become the superior intelligence.
Even though he stated that these events may occur soon, approximately by 2045, the possible results are unclear. This road could lead us to mind uploading and immortality, or to cyberwars and the collapse of societies. Whatever the outcome, the course of events depends on our actions. GPT and similar AI models use deep learning algorithms to learn from datasets drawn from the internet.
Reinforcement Learning from Human Feedback (RLHF) turns human judgments into a vast resource for teaching intelligent machines. The method may look effective, but it still carries risks that could lead to a loss of control.
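The core idea behind RLHF's reward modeling can be sketched with a simple formula. In this minimal illustration (the function name and numbers are ours, and we assume the commonly used Bradley-Terry preference model), a reward model scores two candidate responses, and the scores determine how likely a human rater is to prefer one over the other; the language model is then fine-tuned to produce responses that the learned reward model scores highly.

```python
import math

def preference_probability(reward_a, reward_b):
    """Bradley-Terry model: probability that a human rater prefers
    response A over response B, given the reward model's scores."""
    return 1.0 / (1.0 + math.exp(reward_b - reward_a))

# Equal scores -> the rater is indifferent (probability 0.5).
print(preference_probability(1.0, 1.0))

# A response scored 2 points higher is preferred about 88% of the time.
print(round(preference_probability(2.0, 0.0), 3))
```

The risk the article alludes to lives in this loop: if the human feedback (and thus the reward model) is biased or gamed, the fine-tuned model faithfully optimizes for the wrong thing.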
Norman: Pessimistic algorithm
Norman, the world’s first psychopath AI, set an excellent example in this respect. As the name implies, Norman is a pessimistic algorithm inspired by Norman Bates from Hitchcock’s classic horror film Psycho.
Norman was trained on data from the web’s darkest corners to perform image captioning with a disturbing perception. The programmers of this psychopath algorithm aimed to show that biased data, rather than the algorithm itself, is the real culprit when artificial intelligence goes wrong.
Moreover, Norman is not the only one corrupted by flawed data. Other systems have exhibited negative attitudes along the lines of racial and gender discrimination because of misguided machine-learning training.
Unfortunately, creating a villain with code is more feasible than we realized. There is no way to tell whether some random robot will one day decide to become an evil mastermind and take over the world, but if that happens, at least we will know the decision was never the AI’s own to make.
The future with frenemies
Leaving the worst-case scenario aside, growing ethical problems still need to be addressed regarding our partnership with virtual co-workers.
Since GPT and similar software products have become involved in our lives, scores of people are apprehensive about AI displacing jobs or about such programs being used under false pretenses. However, even though artificial intelligence will change workflows on a large scale, many experts assure us that humans will retain significant roles in the labor market.
On the other hand, apart from all the positive steps, GPT models also lend themselves to malpractice. The legal risks range from copyright issues to official misconduct. Even cheating on essays is in vogue, and academics have begun to worry about this tricky situation.
It turns out that a research paper, ‘Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT’, written by ChatGPT itself, neatly confirms their concerns.
There are more complex cases than this. For example, DoNotPay is an AI-powered app that helps customers fight large corporations and aims to make legal information and self-help accessible to everyone.
Initially, their mission was limited to solving problems like beating parking tickets, appealing bank fees, and suing robocallers. Recently, they added a new service to their job description: hosting the first robot lawyer to advise defendants in court. Designed in a chat format, the technology runs on the defendant’s smartphone, listens to the proceedings, and tells the client what to say next.
Instantly, this became a cause célèbre straight out of science fiction, and at first there seemed to be no problem. But then the plot thickened: DoNotPay’s chatbot lawyer was sued by the U.S. law firm Edelson, accused of practicing law without a license. Edelson claims that the service unlawfully masquerades as a licensed practitioner, that no lawyer supervises the company, and that its legal documents are substandard.
Joshua Browder, the founder and CEO of DoNotPay, denied the allegations, asserting that the claims have no merit and that the company will fight back in the lawsuit. He added that they might even use their robot lawyer in court to defend themselves.
It is probably too early to pass definitive judgment on this case, but it raises suspicions about the reliability of such sources. Despite GPT-4’s accomplishments, OpenAI is also widely criticized for no longer being the open-source, non-profit company it used to be.
While some people complain that the company has gotten off track, others support the terms of secrecy as a defense against possible threats. Sam Altman, the CEO of OpenAI and a creator of ChatGPT, warned that GPT could slip out of their control and be used with malicious intent, for purposes like disinformation and offensive cyberattacks.
Here’s the catch: exclusive access to and full authority over advanced AI can be dangerous in many hands. It could become a tool of manipulation for governments and corporations seeking more power, or be used with criminal intent among the general public.
Taking stronger security measures, along the lines of the Three Laws of Robotics, is gaining prominence. These fictional rules appear in the novels of Isaac Asimov, one of the greatest sci-fi authors ever, and are designed to ensure safe interaction between robots and humanity for social welfare.
According to these rules, a robot must not injure a human, must obey human orders unless they conflict with the first law, and must protect its own existence as long as doing so does not conflict with the first or second law. Nevertheless, the robots in the stories often falter in staying within bounds because of grave contradictions among the rules.
Artificial intelligence in real life is no different. It is a mixed blessing beset by conflicts of interest. It has the potential to improve our quality of life, or to slide us into chaos. Maybe GPT-4 is not as game-changing a technology as it seems, but it proves we are at the cutting edge of a new revolution.