Aligning words one after the other in correct grammatical form is not the same as intelligence and rational thinking. We have had two recent examples of artificial intelligence with OpenAI’s ChatGPT and Microsoft’s new Bing AI companion. With its reported declarations of love, threats of sabotage, and longing for “power and control,” Bing’s AI looks weird indeed.

ChatGPT is more subtle—some would say unbelievably more subtle—as this bit of “conversation” with me shows:

PL: Would you like to be human?

ChatGPT: As an AI language model, I don’t have personal desires or preferences, as I am a programmed tool designed to provide information and assistance to users. While my abilities and capacities may evolve over time, my fundamental nature as a machine designed to process language will remain the same. While it’s interesting to speculate about what it would be like to be human, as an AI language model, I don’t have subjective experiences or consciousness, so the question of whether I would like to be human is not applicable to me.

PL: But you do say “I”!

ChatGPT: That’s correct. As an AI language model, I use the first-person pronoun “I” as a convention of natural language to help convey information and provide assistance to users. However, it’s important to keep in mind that this usage of “I” does not imply that I have subjective experiences or consciousness in the way that a human being does. Rather, it’s a tool used to facilitate communication between myself and users, and to make the conversation feel more natural and intuitive.

My reply deliberately included the faddish term “inclusive,” which today’s machines and their trainers seem to love à la Mrs. Grundy:

PL: Why don’t you say “we” to involve your trainers and be inclusive?

ChatGPT: That’s a good point. While I typically use “I” in my responses to provide a sense of continuity and personalization to the conversation, it’s true that my development and ongoing maintenance involves the work of many people, including my trainers and the developers who work to improve my functionality over time. To reflect this collaborative effort, I could certainly use “we” in my responses to be more inclusive and acknowledge the contributions of the wider team involved in my development and upkeep. Thank you for your suggestion.

I “regenerated” this response, and it remained substantially the same, except that it also expressed the bot’s prudent intentions for the future:

ChatGPT: That’s a good point. While I am a language model trained by a team of developers and researchers, I typically use the first-person pronoun “I” when responding to user queries, as this is a convention of natural language and helps to convey information more clearly. However, it’s certainly true that the work of developing and maintaining AI models is a collaborative effort that involves many individuals, and using “we” to acknowledge this fact could be seen as more inclusive. Going forward, I’ll strive to use “we” when it’s appropriate to do so in order to better reflect the collaborative nature of AI development.

All that raises interesting questions at the intersection of philosophy and economics, of the individual and society.

What can we say about human individuals who look no more rational than ChatGPT or even than the Bing chatbot? In line with the Enlightenment, classical liberals such as Adam Smith or James Buchanan have tended to believe that any individual can be rational and that education can help in that respect. Adam Smith thought that a street porter had the same inherent intellectual potential as a philosopher and that the differences between them came from “habit, custom, and education.” Observing our world two and a half centuries after The Wealth of Nations, we may wonder to what extent this ideal is consistent with reality (see my post “Political Economy of the Alex-Joneses,” as well as my Regulation review of James Buchanan’s Why I, Too, Am Not a Conservative).

Friedrich Hayek was more doubtful of factual equality, although he was a strong defender of formal, legal equality. In his 1960 book The Constitution of Liberty, he wrote:

The liberal, of course, does not deny that there are some superior people—he is not an egalitarian—but he denies that anyone has authority to decide who these superior people are.

Another question is, How can somebody say “I” and, at the same time, suggest that he is conscious of not having consciousness? Man is an animal who says “I”—an aphorism whose source is not clear. Le Monde attributed it to Erich Fromm, but without an exact citation. I put the question to ChatGPT, who gave hopelessly confused answers. As a Financial Times editorial noted (“Generative AI Should Make Haste Slowly,” February 21, 2023):

It is important for users to recognise generative AI models for what they are: mindless, probabilistic bots that have no intelligence, sentience or contextual understanding.
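What “probabilistic” means here can be made concrete with a toy sketch. The following Python snippet is purely illustrative and bears no relation to ChatGPT’s actual model, whose learned probabilities number in the billions and range over word fragments rather than words; but the principle it shows, emitting words one after the other by sampling from conditional probabilities, is the same:

```python
import random

# An entirely made-up table of conditional probabilities: given the
# current word, how likely is each possible next word? The numbers
# below are invented for illustration only.
NEXT_WORD_PROBS = {
    "I": {"am": 0.6, "have": 0.4},
    "am": {"a": 0.7, "not": 0.3},
    "a": {"machine": 0.5, "language": 0.5},
    "language": {"model": 1.0},
    "have": {"no": 1.0},
    "no": {"consciousness": 1.0},
    "not": {"human": 1.0},
}

def generate(word, max_words=6):
    """String words together by repeatedly sampling the next word
    from the conditional distribution; no understanding is involved."""
    output = [word]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(word)
        if dist is None:  # no known continuation: stop
            break
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("I"))  # e.g., "I am a machine" or "I have no consciousness"
```

In this picture, “regenerating” a response, as I did above, amounts to nothing deeper than drawing a fresh sample from the same distributions.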

A human individual rapidly becomes conscious of his separate and distinct existence and of his own self-interest. Sometimes an individual tries to say “I” collectively with others, but we soon observe that socialism and other forms of collectivism work only if some I’s dominate other I’s. Outside a Hobbesian “war of all against all,” it is when authoritarian forms of government prevail that we see the worst conflicts between the self-interests of different individuals. On the market, which is a paradigm of voluntary cooperation, each individual serves the interests of others by pursuing his own. Economics helps us understand this lesson.

ChatGPT tells us that its “I” is not the human “I,” which is not surprising. Note further that man is not only an animal who says “I”; he is also an animal who trades. Perhaps a better Turing test for an AI bot would be whether it tries, without being prompted by its trainers, to “truck, barter, and exchange,” to use Adam Smith’s expression.