

What PAP ex-candidate Ivan Lim can teach us about automation and AI
The thought process used by many in this saga is also used by machines to draw conclusions in an uncertain world, and it is subject to the same errors and risks
Welcome to Art Science Millennial, a newsletter for non-techies navigating the world of tech! I know the struggle because I’m one of you.
Anyone following the news on Singapore’s general election even casually would have heard of a certain Ivan Lim. Within hours of his introduction as a People’s Action Party (PAP) new face, a steady stream of social media posts accused him of being a workplace bully who enjoys power trips. While some of his supporters came to his defence, it was not enough to save his nascent political career and he withdrew his candidacy three days after his debut in white.

Former PAP candidate Ivan Lim.
Ivan Lim is a teachable moment. And no, it’s not about integrity and character. Instead, let’s use this short-lived saga to illustrate two key concepts behind automation and artificial intelligence (AI). These concepts are fundamental to understanding how AI tries to reach the right conclusions and how there is always a chance those conclusions are wrong.
Concept 1: Not all errors are created equal
Whenever we (or AI systems) reach a conclusion, there is a risk of making either of the following mistakes - concluding that something is true when it’s false, or concluding that something is false when it is true. Using Ivan Lim as an example, the two types of mistakes we could make are:
Type I error (false positive): We conclude that Ivan Lim is a workplace bully but he is not actually one.
Type II error (false negative): We conclude that Ivan Lim is not a workplace bully but he is actually one.

An extreme example of Type I and Type II errors.
It is important to note that the use of positive and negative refers to presence and absence, not whether something is good or bad. Case in point - a test result of HIV positive means the virus is present, but it is undoubtedly not a good thing.
False positives and false negatives are crucial to understanding automated systems driven by AI, as a major application of such systems involves classifying things and striking a balance between false positives and false negatives. Which type of error - false positive or false negative - is the greater sin is a judgment call and it often comes down to which will hurt your objective more.
False positives and false negatives in a factory
Consider a coffee mug factory and its automated defect detection system, which scans each manufactured coffee mug and estimates the probability that it is defective. How many coffee mugs the factory throws out depends on how stringent the probability threshold is. If all mugs with more than a 20 per cent probability of defects are flagged by the system automatically, too many perfectly fine mugs (false positives) will end up being discarded. On the other hand, if the system flags only mugs with more than a 99 per cent probability of defects, too many cracked mugs (false negatives) get shipped to customers.
A factory owner inundated with customer complaints will conclude that throwing out some perfectly fine mugs is the lesser evil and lower the probability threshold. This results in more false positives but also increases the chances that defective mugs (true positives) are captured. On the flip side, a factory owner under pressure to reduce wastage will conclude that shipping some cracked mugs is the lesser evil and raise the probability threshold. This results in more false negatives but also increases the chances that perfectly fine mugs (true negatives) make it through inspection.
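To make the trade-off concrete, here is a minimal Python sketch of the factory's dilemma. The mug data and thresholds are made up for illustration; this is not how any real inspection system is built.

```python
# Hypothetical inspection results: (predicted defect probability, actually defective?)
mugs = [
    (0.05, False), (0.25, False), (0.40, True), (0.10, False),
    (0.95, True), (0.60, False), (0.85, True), (0.30, True),
]

def count_errors(mugs, threshold):
    """Flag a mug as defective when its predicted probability exceeds the threshold,
    then count the two kinds of mistakes that result."""
    false_positives = sum(1 for p, defective in mugs if p > threshold and not defective)
    false_negatives = sum(1 for p, defective in mugs if p <= threshold and defective)
    return false_positives, false_negatives

# A strict (low) threshold discards more good mugs (false positives);
# a lenient (high) threshold ships more cracked ones (false negatives).
for threshold in (0.2, 0.5, 0.99):
    fp, fn = count_errors(mugs, threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

Notice that no threshold makes both numbers zero at once; moving the dial only shifts errors from one column to the other, which is exactly the factory owner's judgment call.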
False positives and false negatives in a political controversy
In the case of Ivan Lim, the fact that the imbroglio gathered steam suggests that a significant number of people found a possible false positive - branding him a workplace bully when he is actually not one - way more acceptable than a possible false negative.
Why was there more appetite for a false positive? Well, this is true for almost all targets of public online shaming - they are usually assumed guilty until proven innocent. But I suspect that Ivan Lim’s predicament was also exacerbated because he was slated to be fielded in Jurong Group Representative Constituency (GRC), the PAP’s best-performing ward in the last election. His detractors probably felt that if he made it to Nomination Day, getting elected would be almost a done deal. Better then to render judgment now while chances of keeping him out of parliament were more realistic.
Useful viewing
This video runs through what false positives and false negatives are, as well as how we choose between different prediction systems based on whether false positives or false negatives are more detrimental to our goal.
Like what you’re reading so far? Sign up so you don’t miss the next update of Art Science Millennial!
Concept 2: Updating our beliefs in the face of new information
Each time a fresh criticism popped up on social media, the feeling grew that something was not quite right with Ivan Lim. That his critics were prepared to be named added credence to the accusations.
This thought process is not just intuitive, it is also grounded in a result from probability theory - known as Bayes' Theorem - that tells us how to update our beliefs when presented with new information.
Bayes’ Theorem is used in some automated systems (including in email spam filters) but we don’t need to think of tech to see its relevance to how we deal with everyday uncertainties. To apply it, we need to understand simple probability and the Bayes’ Theorem formula, but I promise to explain every single notation and calculation.
PLEASE READ THIS WARNING: The following example uses numbers that are simply estimates meant to demonstrate the nature of Bayes' Theorem. It is absolutely not meant as definitive, mathematical proof of whether or not Ivan Lim is a workplace bully.
Probability is based on the chances of something (known as an event) being true. We can also think of it as the chances of the event occurring. The probability of an event lies between 0 and 1, with 1 meaning an event is definitely true/absolutely going to happen, and 0 meaning it is definitely false/there is no chance in hell it’s going to happen. Sometimes probabilities are expressed as percentages, so a 90 per cent chance equates to 0.9.
For the Ivan Lim episode, we are interested in finding out:
What is the probability that Ivan Lim is a workplace bully, given that a named accusation of workplace bullying against him surfaces on social media.
To get this probability, we use the Bayes’ Theorem formula:
posterior = (x × y) / ((x × y) + (z × (1 − x)))
Ok, take a deep breath and stay with me. We can do this. It consists of just three elements - x, y, and z. Let’s talk about them one by one:
x is the probability that Ivan Lim is a workplace bully. We make an initial estimate by putting aside the allegations and treating Ivan Lim as just another boss in Singapore. Let’s assume that 1 per cent of bosses are bullies in Singapore, so x = 0.01. This is also known as the prior probability.
y is the probability that a named accusation of workplace bullying against Ivan Lim surfaces on social media, given that he is a workplace bully. In other words, assuming that he is indeed a workplace bully, what are the chances a named accusation surfaces on social media? While a named accusation has more credibility than an anonymous one, I’ll give what I feel is a conservative estimate of 15 per cent, so y = 0.15.
z is the probability that a named accusation of workplace bullying against Ivan Lim surfaces on social media, given that he is not a workplace bully. Perhaps the accusation comes from a disgruntled co-worker with a personal vendetta, or it’s motivated by hatred for the ruling party. So let’s estimate z by looking at the PAP rookie candidates. Various attacks have been lobbed at them since their introduction, but it looks like only one other candidate has been accused of high-handedness at the workplace. Two out of 27 PAP new candidates works out to about 7 per cent, so let’s estimate that z = 0.07. This is of course assuming that the accusations against both candidates are baseless.
We plug the numbers into the formula to get the revised estimate of Ivan Lim being a workplace bully, given that a named accusation of workplace bullying surfaced on social media. This is also known as the posterior probability.
posterior = (0.01 × 0.15) / ((0.01 × 0.15) + (0.07 × (1 − 0.01))) = 0.0015 / 0.0708 ≈ 0.021
So based on one named accusation, the chances have increased to 2 per cent, which is not a big deal and which also makes sense, since - come on - it’s just one accusation, right?
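For those who prefer code to algebra, here is a short Python sketch of the same calculation, plugging in the x, y, and z estimates above:

```python
def bayes_update(x, y, z):
    """Posterior probability that a claim is true, given one new accusation.

    x: prior probability the claim is true
    y: probability an accusation surfaces if the claim is true
    z: probability an accusation surfaces if the claim is false
    """
    return (x * y) / (x * y + (1 - x) * z)

# One named accusation, using the estimates from the example above.
posterior = bayes_update(x=0.01, y=0.15, z=0.07)
print(round(posterior, 3))  # roughly 0.021, i.e. about 2 per cent
```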
“Today’s posterior is tomorrow’s prior”
This quote is from a book by statistician Dennis Lindley and it guides our actions when a second named accusation is posted. We can follow the same steps, but replace the initial estimate (0.01) with our revised estimate (0.021), since we want to build on existing information.
posterior = (0.021 × 0.15) / ((0.021 × 0.15) + (0.07 × (1 − 0.021))) = 0.00315 / 0.07168 ≈ 0.044
Based on two accusations, we now have a 4 per cent chance, which is still hardly incriminating. But each additional accusation has a snowballing effect on the probability that Ivan Lim is indeed a workplace bully. By the seventh accusation, the chances are 68 per cent, or two-thirds.
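The snowballing effect can be sketched in Python by feeding each posterior back in as the next prior, keeping the same estimates (y = 0.15, z = 0.07) as above:

```python
def bayes_update(x, y, z):
    # Bayes' Theorem: posterior = xy / (xy + z(1 - x))
    return (x * y) / (x * y + (1 - x) * z)

p = 0.01  # initial estimate, before any accusation
for accusation in range(1, 8):
    # "Today's posterior is tomorrow's prior": reuse p as the new prior.
    p = bayes_update(p, y=0.15, z=0.07)
    print(f"after accusation {accusation}: {p:.2f}")
```

Each accusation multiplies the odds by the same factor (0.15/0.07), so the probability creeps up slowly at first and then accelerates, passing two-thirds by the seventh accusation.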
A big caveat is that even seemingly slight tweaks to any of the variables - x, y, and z - will yield pretty different results. For instance:
If we lend less credence to the social media accusation (step 2), and so decrease y from 0.15 to 0.1, it would take 15 accusations to increase the probability that Ivan Lim is indeed a workplace bully to the same 68 per cent.
Let’s go further and look at how different y values affect the probability that Ivan Lim is a workplace bully after five accusations.
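If you want to experiment, a short Python sketch can run the five-accusation scenario for several values of y, keeping x = 0.01 and z = 0.07 fixed as in the example above:

```python
def bayes_update(x, y, z):
    # Bayes' Theorem: posterior = xy / (xy + z(1 - x))
    return (x * y) / (x * y + (1 - x) * z)

# Probability after five named accusations, for different values of y
# (how likely an accusation surfaces if he really is a bully).
for y in (0.10, 0.15, 0.20, 0.30):
    p = 0.01
    for _ in range(5):
        p = bayes_update(p, y, 0.07)
    print(f"y={y}: probability after five accusations = {p:.2f}")
```

Even these small changes to y produce sharply different conclusions from the same five accusations, which is the whole point of the caveat above.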

And just a reminder: The numbers used are simply estimates meant to demonstrate the nature of Bayes’ Theorem. This entire example is absolutely not meant as definitive, mathematical proof of whether or not Ivan Lim is a workplace bully.
If you endured this rather long mental exercise, there are several lessons you can take away.
Adopting a probabilistic way of thinking forces us to quantify our assumptions: What are the chances someone is a bully? How much credibility does an online accusation have? If you don’t agree with the probabilities I assigned to these events, you can simply make your own assumptions based on your own reasoning and test them out with the formula.
Instead of relying on “gut feeling”, we have a better sense of how much each fresh piece of information affects our perspective when we see the probability going up each time we gain new knowledge.
On the other hand, a probabilistic structure can also provide a veneer of false legitimacy for pre-existing prejudices. If you start off already inclined to believe Ivan Lim is a workplace bully, the formula will return a pretty high probability to confirm what you wanted to believe all along. Only this time you get to kid yourself, and whomever you succeed in hoodwinking, by saying: “The math says so.”
Useful links
Statistician and writer Nate Silver provides an easy-to-understand example involving cheating spouses and discovered underwear in pages 243-245 of his book “The Signal and The Noise”. A summary (not written by him) of that example can be found here. My explanation of Bayes’ Theorem is informed by this book.
How else is Bayes’ Theorem useful? Turns out it’s applied to search for missing planes.
For a more technical discussion of whether the prior estimate betrays bias and subjectivity, check out this link.
The Bayes' Theorem formula can also be expressed as P(A|B) = P(B|A) × P(A) / P(B). Its Wikipedia page provides a succinct explanation of it.
I’d love to know what you think of this newsletter and what you’d like me to write about. You can reach me at zi.liang.chong@gmail.com or by leaving a comment if you’re reading this on the Art Science Millennial website. If you enjoyed this piece, sign up so you get subsequent updates in your inbox!