Wednesday, April 26, 2023

ChatGPT, AI, and the Singularity

With all of the talk about the future of AI, I thought I'd throw my hat in the ring.

ChatGPT

After playing around with ChatGPT a little, I've decided to make it my life coach. I'm serious. For example, last night I asked ChatGPT, "How can I get more people to buy the book I self-published on Amazon?" I got a very helpful, actionable response.

The Singularity

In case you haven't heard of it, the "Singularity" is the prediction that technological advancement will keep accelerating until it surpasses the human ability to understand or control it. If you think technology is advancing fast now, just wait and it will advance even faster. Then keep repeating the last sentence until your head spins off.

A mathematical singularity is a point where a function's value goes to infinity. Applied here: if it's determined that the Singularity will happen on a specific date, then the rate of technological innovation would be infinite on that date.
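To picture what that would mean, here's a minimal sketch in Python of a function with a finite-time singularity. The formula 1/(T - t) and the date T are made up purely for illustration; they aren't anyone's actual forecast:

    # Hyperbolic growth: rate(t) = 1 / (T - t) blows up as t approaches T.
    # T is a hypothetical "singularity date", chosen only for this example.
    T = 2045.0

    def innovation_rate(t):
        """Illustrative rate of innovation; diverges as t approaches T."""
        return 1.0 / (T - t)

    for year in [2025, 2040, 2044, 2044.9, 2044.99]:
        print(year, innovation_rate(year))
    # Prints 0.05, 0.2, 1.0, 10.0, then roughly 100.0;
    # at t = T the value is undefined (infinite).

No matter how close you get to T, the value keeps multiplying; that runaway blow-up is the mathematical picture behind the prediction.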

My problem with this is best illustrated by an example. When the COVID-19 pandemic started, nobody knew how big it would get. I started tracking the numbers and tried to find a mathematical function that could predict how big it would be. I found a very good fit with an exponential growth curve. But there was a problem: it predicted that, as time went on, more people would get COVID than existed on the planet. The function didn't account for the fact that COVID would run out of people to infect. (See My COVID post).
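Here's a minimal sketch of that mistake in Python. The case counts below are made up (the real numbers are in my COVID post); the point is that an exponential fit extrapolates to absurd totals, while a logistic curve, which looks nearly identical early on, levels off at an assumed carrying capacity K:

    import numpy as np
    from scipy.optimize import curve_fit

    # Made-up early-epidemic daily case counts (not the real COVID data).
    days = np.arange(20)
    cases = 100 * np.exp(0.25 * days) * np.random.default_rng(0).normal(1, 0.05, 20)

    def exponential(t, a, b):
        # Fits the early data very well, but grows without bound.
        return a * np.exp(b * t)

    def logistic(t, K, r, t0):
        # Nearly identical early on, but levels off at the carrying capacity K.
        return K / (1 + np.exp(-r * (t - t0)))

    (a, b), _ = curve_fit(exponential, days, cases, p0=[100.0, 0.2])

    # Extrapolating the exponential fit one year out gives an impossible number,
    # vastly more cases than there are people on the planet (~8e9):
    print(f"exponential at day 365: {exponential(365, a, b):.3g}")

    # A logistic curve with an assumed ceiling of one million cases saturates instead:
    print(f"logistic at day 365:    {logistic(365, 1e6, 0.25, 40):.3g}")

Both curves fit the first twenty days almost equally well, which is exactly why the exponential fit looked so convincing at the time.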

I believe there are similar systematic constraints on technological advancement that will prevent a singularity. I don't know what they are, just as I didn't think of the pandemic's constraints early on. Maybe it will be limited by how much electricity we can produce, or by computer memory (an AI with infinite abilities would need infinite power and storage). Maybe it will be a fundamental limitation, like our programming languages not supporting what this AI will need. Or maybe the AI will need humans to help it replicate (it doesn't have opposable thumbs, yet), and there are only so many humans who will be able to do this. All of these are potential limitations.

AI

I've listened to several of Lex Fridman's podcasts on AI, and the most recent one resonated with me: his interview with Manolis Kellis. I'll summarize my impression of what I think is likely to happen with AI.

Currently, many of our highest-paid fields require intelligence: doctors, lawyers, scientists, engineers, software developers. With our very diverse population, there may be people better suited for these roles, but they are excluded by intelligence filters; only the intellectually elite can make it in. For example, what if there were a grandma somewhere who had the right bedside manner, love, wisdom, and perspective that would make her a better doctor, but she just doesn't know all of the technical doctoring stuff? In the future, she won't need to know the technical stuff; she can use an AI trained on diseases and treatments to help her with that part. I know I'm oversimplifying, as I would still want to know that grandma had some level of training and competence.

Today we have a similar situation with machinery. You don't need the fastest runner if you have a car. You don't need the strongest person with a pick and shovel if you have an excavator. You don't need the person with the loudest voice if you have a microphone (or a blog). AI has the potential to be just another tool that humans use.

I saw a funny post on Reddit's r/ProgrammerHumor. Someone posted a text from a friend who had used ChatGPT to write code. The friend asked the programmer, "Now what do I do with this code? How can I use it?" Similarly, a home builder uses many tools to build a house, but those tools can't build the house on their own. You need a skilled person to use all of the tools.

Can AI do harm? We'll find out.
