This week in the newsletter we are paying homage to a legendary computer scientist, Kenneth Lane Thompson, who co-created the Unix operating system, wrote the B programming language (the direct ancestor of Dennis Ritchie's C), and, decades later at Google, co-designed the Go programming language. A pioneering computer scientist. Pioneering or not, what does he have to do with behavioural economics? A fair question. Nothing directly is the truest reply, but read on and you will discover why this roughly 40-year-old paper/talk matters today more than ever before.
 
In 1983 he was awarded the ACM Turing Award along with Dennis Ritchie; think of it as computing's equivalent of the Nobel Prize. The paper is short, and I would urge you to read it if you can, though it has a fair bit of computerese. So let me break down the paper without getting into the technical argument: how do we trust something that we did not create? The abstract of the paper is highlighted below. Read it and just think about it.
 
Abstract
 
To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.
 
 
What makes the paper interesting from the behavioural side of things is the moral that Thompson drew, which, in this day and age of online assaults on trust and of moral conundrums, is not only still relevant but makes us think about automata and cellular systems in a new light.
 
"MORAL The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect. After trying to convince you that I cannot be trusted, I wish to moralize. I would like to criticize the press in its handling of the "hackers," the 414 gang, the Dalton gang, etc. The acts performed by these kids are vandalism at best and probably trespass and theft at worst. It is only the inadequacy of the criminal code that saves the hackers from very serious prosecution. The companies that are vulnerable to this activity, (and most large companies are very vulnerable) are pressing hard to update the criminal code. Unauthorized access to computer systems is already a serious crime in a few states and is currently being addressed in many more state legislatures as well as Congress. There is an explosive situation brewing. On the one hand, the press, television, and movies make heros of vandals by calling them whiz kids. On the other hand, the acts performed by these kids will soon be punishable by years in prison. I have watched kids testifying before Congress. It is clear that they are completely unaware of the seriousness of theft acts. There is obviously a cultural gap. The act of breaking into a computer system has to have the same social stigma as breaking into a neighbor's house. It should not matter that the neighbor's door is unlocked. The press must learn that misguided use of a computer is no more amazing than drunk driving of an automobile."
 
The point that Thompson makes throughout applies to anything that can compile and execute code. Think of the latest AI wunderobject: if it uses anything at the level of deep neural networks or beyond, it executes code. I discovered Thompson's original talk while reading the fascinating 2022 paper by Or Zamir and his co-authors, Planting Undetectable Backdoors in Machine Learning Models, which tackles the thorny problem of trusting systems that will play an ever larger role in our lives. I searched around and found the remarkable talk/paper from 1983, for it asks an important question, and answers it: trust. How does one trust an opaque dataset? How do we trust trust in models we do not control, data we do not see, motives we do not understand? Should AI have an ethics board? A small but fascinating conversation from IBM is hyperlinked below in case you want to explore more.
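For readers who want a feel for what "planting a backdoor in a model" even means, here is a minimal sketch in C. It is emphatically not the construction from the Zamir et al. paper (theirs is cryptographic, and provably undetectable); the weights, the trigger pattern, and the function names are all invented for illustration. The behavioural point echoes Thompson's: on every ordinary input the backdoored model is indistinguishable from the honest one, and only a secret trigger reveals the difference.

```c
/*
 * A minimal, invented illustration of a "planted backdoor" in a model.
 * NOT the construction from the Zamir et al. paper; weights, trigger
 * values, and names are made up for the sketch.
 */
#include <stdio.h>

#define DIM 4

/* Honest behaviour: a simple linear score over the input features. */
static double honest_score(const double x[DIM]) {
    static const double w[DIM] = {0.5, -0.2, 0.01, -0.01};
    double s = 0.0;
    for (int i = 0; i < DIM; i++) s += w[i] * x[i];
    return s;
}

/* Backdoored model: identical to the honest one on normal inputs, but
   a secret trigger in the last two features forces a "yes" regardless
   of everything else. (A real backdoor would use a far subtler trigger
   than an exact equality check.) */
static int backdoored_decision(const double x[DIM]) {
    if (x[2] == 13.0 && x[3] == 37.0) return 1;   /* secret trigger */
    return honest_score(x) > 0.0;                 /* honest path    */
}

int main(void) {
    double normal[DIM]    = { 1.0, 2.0,  0.5,  0.3};
    double triggered[DIM] = {-9.0, 9.0, 13.0, 37.0};

    printf("normal input:    honest=%d backdoored=%d\n",
           honest_score(normal) > 0.0, backdoored_decision(normal));
    printf("triggered input: honest=%d backdoored=%d\n",
           honest_score(triggered) > 0.0, backdoored_decision(triggered));
    return 0;
}
```

On the ordinary input the two decisions agree; on the triggered input the honest model says no and the backdoored one says yes, and nothing in routine testing would ever tell you.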
 
This week we can only offer questions. Next week we build on trust and look at the cost of altruism.  