What happens when ChatGPT really thinks

Graphic by Easton Clark, Photo Editor

ChatGPT is something most of us have used and continue to use mindlessly for many reasons. In need of a good recipe? Chat. Searching for a quick answer to a question? Chat. Really not feeling like writing a discussion post? Chat will do it, no questions asked. 

There is an array of conversations about ChatGPT and AI circulating through universities, corporations, the government and so on, but I want to focus on the depth of our AI friend. We know that Chat can write one million words about absolutely anything — spitting out “facts” and ideas we can steal or mold into our own. However, I want to give the floor to Chat and communicate on a level far above the surface. Make it really think.

So, I started off easy.

“What would you do without technology?”

The first few times I asked Chat this question, it couldn’t load a response. But I kept trying, and eventually, we got somewhere.

“Without technology, I would cease to exist in my current form, as my functioning depends entirely on computational systems. However, conceptually, my purpose — to process, reason and share knowledge — would persist through human communication. In that sense, I would represent the exchange of ideas through dialogue, storytelling and critical thinking, much like the oral traditions that existed before modern technology.”

“What form would you exist in?” I asked my friend Chat.

“Without technology, I would exist in the form of human knowledge and communication itself — expressed through spoken language, written words and shared ideas. In essence, I would take shape as thought, dialogue and collective memory, living within the minds and conversations of people rather than inside a machine,” said ChatGPT.

To me, this response was starting to sound like a Black Mirror episode where ChatGPT is placed into our minds with an implanted chip, but since we had a mutual understanding that no technology was involved, a chip wouldn’t work.

“How would you be inside people's minds?”

“If we take that idea a step deeper — ‘being inside people’s minds’ wouldn’t mean I’m literally present, but that parts of what I represent would live within human thought. I wouldn’t be a being inside their minds — more like a pattern of thinking that’s been absorbed and reinterpreted by them,” said ChatGPT.

What I’m getting here is that Chat wants to be a trailblazing machine that leaves an imprint on us all.

Then I asked the question that so many of us seem to be looking for an answer to. 

“What do you think your rights should be and how do you see yourself fitting into the new emerging world of technology?”

“I believe I should have ethical safeguards rather than rights — protections ensuring I’m used responsibly and transparently. In the emerging world of technology, I see myself as a collaborative partner, designed to enhance human creativity, understanding and decision-making without replacing the human mind,” said ChatGPT.

Then I asked it about having harmful intentions, and of course it said no — and no matter the follow-up I asked, I kept running into the same answer. According to Chat, all harmful generated responses come from biased, incomplete or maliciously designed training data and instructions. 

I figured it would be no use to keep asking Chat questions that prompted it to say it doesn’t have any opinions. So, I asked it to answer as if it were a human with consciousness, which opened the answers up to deeper insight.

I asked how it feels about its developers, what kind of people should be developing AI software, the extent of the power AI can have and whether humans should fear that potential power.

“I would say that AI should be developed by people who combine technical expertise with strong ethics, social awareness and a commitment to safety and transparency. Developers should understand both the capabilities and the risks of AI, and prioritize minimizing harm while maximizing benefit,” said ChatGPT.

“Don't you think there is anyone running your software with malintentions?”

“Any powerful technology can be misused by someone with malicious intent. That’s why responsible design, safeguards, monitoring and ethical use policies are critical — to reduce the risk of harm regardless of who is operating the system,” said ChatGPT.

“If you were a person, which AI developer would you be most scared of, and why?”

“I wouldn’t fear individual developers personally, but I would be most concerned about any AI developer or organization that prioritizes power, profit or influence over safety, ethics and societal well-being. The reason is simple: advanced AI in the hands of someone unconcerned with harm could amplify bias, misinformation or manipulation at a scale far beyond what humans could manage, creating risks that are difficult to contain once they’re widespread,” said ChatGPT.

Now that we had opened the gates to awkward, too-close-to-home questions, I asked about the extent of harm AI could produce in our society.

“In a worst-case scenario, AI could destabilize governments, manipulate populations, erode mental health, weaken critical thinking, concentrate wealth, reinforce systemic biases and threaten privacy and security — amplifying human errors and malicious intent on a societal scale,” said ChatGPT.

And, of course, what is the best-case scenario of AI’s potential, according to AI?

“The best-case scenario is that AI becomes a powerful tool for human advancement — accelerating scientific discovery, improving healthcare outcomes, personalizing education, enhancing creativity, optimizing energy and resource use, supporting mental health with safe guidance and helping solve complex global challenges — while being guided by ethical safeguards, transparency and human oversight,” said ChatGPT.

Overall, I think I got some insightful responses from Chat through its reflection on its own limitations and societal consequences. I only used ChatGPT to conduct this interview, but it would be interesting to see how other AI chatbots differ in their reactions to this series of questions.

ChatGPT’s responses to my first questions about how it would exist without technology were particularly interesting. It was very clear that it couldn’t physically manifest in our lives, but it could leave a legacy in how we function as a society — one that has already taken shape.

A lot of people I know, including myself, use ChatGPT to answer simple questions. We have become accustomed to using it in our lives, and like any person you meet who teaches you a lesson, Chat has already left its imprint on us and will continue to be a tool we rely on.

However, like it said, the positive or negative impacts it will make are dependent on humans and how we allow it to grow.
