I’ve been following the various AI happenings recently, and I’ve even tried some of the tools myself. Just a few weeks ago I had one generate an image for me. It turned out even better than I expected, and I used it for my project. Around that same time, I was working through multiple drafts of an email that had to hit several points for a specific audience. On a whim, I let an AI have a go at it. What I got back was 100% sendable.

But I have just watched the demo of GPT-4o and I have some thoughts. First, wow. I did not expect it to be this good. Seriously, go watch it.

Back? Good. So here’s what I think. I get that large language models like ChatGPT don’t “know” anything or “understand” you. They’re just algorithms that compose responses one word at a time, predicting which word is most likely to come next given everything that came before, and they do so with no awareness of you, what you’re asking, or what they’re providing. In short, the lights may be on, but there’s definitely nobody home.
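To make that idea concrete, here’s a toy sketch in Python of what “predicting the next word” looks like. The probability table is made up and laughably tiny compared to a real model, which learns billions of parameters and conditions on the whole conversation, but the loop itself is the point: score candidate words, pick one, append it, repeat. Nothing in it knows what the words mean.

```python
import random

# Toy "model": for the most recent word, a made-up probability
# distribution over possible next words. Purely illustrative.
NEXT_WORD_PROBS = {
    "the":    {"lights": 0.5, "model": 0.3, "answer": 0.2},
    "lights": {"are": 0.7, "may": 0.3},
    "are":    {"on": 0.6, "off": 0.4},
    "may":    {"be": 1.0},
    "be":     {"on": 1.0},
    "on":     {".": 1.0},
}

def generate(prompt_word, max_words=6):
    """Repeatedly pick a likely next word and append it to the text."""
    words = [prompt_word]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if not dist:
            break  # no idea what comes next; stop
        candidates, weights = zip(*dist.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the lights may be on ." — plausible text, zero understanding
```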

But I am now wondering…does that matter?

If you’re getting human-like responses, does it matter what is (or is not) happening inside? And by the way, we understand quite a bit about our own brains, their physical structure, and their processes. There’s nothing in science that confirms there is anyone at home in you, either. In fact, there’s even some evidence (Benjamin Libet’s famous experiments, for one) that our brains make decisions before our conscious minds do: evidence suggesting that we act in a highly deterministic way and then make up reasons and justifications for our “decisions” after the fact.

There may be nobody home in any of us, but it’s not possible to live your life that way. There may be no free will, but accepting that as an academic or scientific conclusion cannot change your lived experience. We, therefore, accept some things on faith. Will it be wrong not to extend that faith to ever more sophisticated AIs?

Remember, this stuff is coming to our smartphones later this year.