If you’ve been paying attention to the technology scene recently, you cannot fail to have heard of ChatGPT. What is it? Quite simply, it’s a chatbot: a type-and-read interface to version 3.5 of the GPT engine developed by OpenAI.
If all that sounds confusing, a chatbot is artificial intelligence (AI) software, in this case presented as a webpage, where you can have a conversation with a computer program. You type in a question and the chatbot sends a reply. Repeated questions and replies form a conversation, in ChatGPT’s case a remarkably smooth, almost ‘human’ experience.

If you haven’t yet tried it, then you really should. It’s free and very easy to use. I encourage you to visit chat.openai.com, sign up for a free account, and type a question. There are no rules about what you can and cannot type; ChatGPT is accomplished at understanding ordinary language and gives good, conversational replies.
Assuming you tried it out and have found your way back to my blog post, I’d like to explain a bit more about it. ChatGPT was released as a website at the end of November 2022 and has grown in popularity remarkably fast. If you tried it out for yourself, I’m sure you will understand why. It’s compelling: it can write essays, explain more or less anything you might ask, and even pass many written exams on all kinds of topics. Version 4.0 of the engine is already available as a paid option and is far more capable. It can take images as input as well as text and has, for example, built a working website from a sketch and a description of what the website should do. That is little short of astounding!
The company, OpenAI, was created to work on artificial intelligence with safety very much in mind. Sometimes ChatGPT generates false answers; that can be an issue, but it is not deliberate. What if more advanced AI became able to reason as people do? What if it started to think and develop its own goals, and its intellect became faster and more nimble than our own? Could we prevent it from taking over? Would it be benign, or might it become hostile? Would we be able to control it? These are serious questions, and we need to think them through now, before it becomes too late.
I don’t want to be alarmist, and AI as we currently experience it is far from becoming a threat. It may prove useful in many ways, and we’ll see that begin to happen very soon. But we’ll need to manage it in ways that prevent it from helping people do bad things; we don’t want such technology to enhance criminal activity, for example. So there’s a great deal to consider right now, and the need for that will only increase as AI systems become more capable. For more on this I recommend Sean Carroll’s podcast episode 230, linked below.
I’m going to close this blog post at this point, but I’ll be back soon with a sort of interview with ChatGPT. I’ll ask some questions, let the software answer, and publish the conversation.
Meanwhile, have some ChatGPT conversations for yourself and see what you think.
See also:
- Artificial intelligence – Wikipedia article on the general topic of AI
- ChatGPT – Try the software yourself
- ChatGPT – Wikipedia article on the ChatGPT software and its development
- OpenAI – Wikipedia article on the company that developed the software
- OpenAI – The company’s website
- Sean Carroll podcast 230 – Sean Carroll’s interview with Raphaël Millière on AI
- Lex Fridman – A YouTube interview with the CEO of OpenAI, Sam Altman (or listen to the Podcast version)
- A conversation with ChatGPT
