If simulating modal human thinking is the goal of artificial intelligence, then I would certify that chatGPT is highly successful. It is as stupid as humans, except that its stupidity comes much faster.
Having conducted interviews, oral examinations, and PhD or master's thesis defenses over many years, I have often been struck by a class of students who talk a lot and give plenty of information without ever touching the answer to the pointed question asked. Sometimes, and not rarely, they get away with it and even make a good impression. My interaction with chatGPT reminded me of such students.
I started exploring chatGPT after having postponed it for a long time. I thought AI would be better than humans in areas where computers are obviously more efficient: compiling large volumes of data, applying quantitative methods, verifying an opinion against published evidence, and so on. But it does nothing of the kind, nor does it claim to. It only repeats what has been said most often, and by the elite, irrespective of whether it is true or not, logical or not, self-contradictory or not.
I was particularly keen to see what it does when there is a contradiction between the prevailing belief in a field of science and the actual data. So, for obvious reasons, I asked questions about diabetes. It began by answering with the typical mainstream beliefs. When I pointed out that there was evidence going against them, it admitted that there was, but then went back to reiterating the beliefs. Then I pointed out that its statements were mutually contradictory, upon which it admitted the contradictions and offered the excuse that the system is complex. When I gave specific evidence showing that its belief was wrong, it conceded that there was evidence to the contrary, but again returned to the belief. When I asked what the evidence for this belief was, it said there was plenty of evidence from decades of research, without citing any specific experiments or data. You can access the entire conversation here (https://drive.google.com/file/d/1IhiFoscOWwD_cxhGguO0ssd7tkwmyiiW/view?usp=sharing).
This is precisely what people in this field have been doing. Everyone knows that multiple lines of evidence directly contradict the theory, but they are not willing to give it up. They live with circular logic, falsified hypotheses, internal contradictions, and complete clinical failure, yet they persist, cherry-picking convenient findings and claiming that they are doing science. It looks like AI is also an expert in this kind of self-deception. This is fine if your definition of intelligence is mimicking human thinking. But there is one important component missing: human behavior has huge individual variability. While for modal personalities conformity matters more than logical soundness, there is the rare individual who craves sound logic, someone who looks at data more than at opinions. Science progresses through such outliers, not through modal human thinking. If AI resembles modal human intelligence and does not incorporate the outliers, it will prove retrogressive rather than progressive.

People have already started using chatGPT in research. There is talk of AI doing peer review and the like. Even if such use of AI is officially banned, I have no doubt people will use it anyway. This is the greatest risk. Using this kind of AI in science will only amplify conformity bias and make publishing disruptive thinking, surprising results, and path-breaking, paradigm-shifting research increasingly difficult.
Wonderful article, Milind.
Thanks Subhash