Francisco Contreras – Unabashedly Skeptical

"The essence of the independent mind lies not in what it thinks, but in how it thinks." –Christopher Hitchens

Category Archives: Intellectual Development

Work to Live or Live to Work?

“Work is what people have to do in order to be able to buy iPhones and party.”

This sentence was uttered by one of my middle-school students during a class discussion about what work means to them. Regrettably, she was not an outlier but a representative of the class consensus; and as far as I can tell, of society’s consensus as well. For quite some time it has been generally accepted that one’s “job” is merely the obligatory, often grueling task that one has to do in order to fund leisure. Do you want to have nice things? Well, suck it up and work. Do you want to travel and go to shows and concerts? Well, suck it up and work. Indeed, the so-called Monday-to-Friday workweek is often seen as an oppressive barrier between you and your “real life”. “TGIF”, am I right? This is our modern philosophy of work, and it is prevalent everywhere we look. To take one example, high-school students today are told to choose college majors based solely on the income those majors are expected to provide. These students are no longer told to consider the qualitative values of choosing a profession: whether the job will allow them to express themselves, whether it will help society, and so on. Is this conception of work one that we should have as a society? What have been the consequences of excluding work from our personal identity? How did this happen, and how ought we to change it?


Small Is Beautiful by E. F. Schumacher is considered a modern classic of alternative economics. It is a wonderfully enlightening book that critiques our underlying economic assumptions and encourages a new perspective on how we can move toward a sustainable economic future. This collection of essays was written in 1973, but its ideas remain as relevant today as on its day of publication. Among the most interesting of the essays is one entitled Buddhist Economics, in which Schumacher argues that the prevailing philosophy of work in much of today’s society is dissatisfying, confusing, and ultimately unsustainable.

Schumacher explains that with the adoption of Keynesian economics and capitalist economic structures, our conception of work, previously informed mostly by religious traditions, took a turn for the materialistic at the dawn of the Industrial Revolution in the early 1800s. Modern economic theory reduces work to an activity for acquiring monetary wealth, a mere economic utility. The reasoning behind this new philosophy of work was that it was the only way to meet the industrial employment demands of the new economy. If people had continued to view work as part of their personal and social identity, and not merely as a way to make money, they probably wouldn’t have accepted the impersonal, grueling jobs of the industrial era.

“For at least another hundred years we must pretend to ourselves and to everyone that fair is foul and foul is fair; for foul is useful and fair is not. Avarice and usury and precaution must be our gods for a little longer still. For only they can lead us out of the tunnel of economic necessity into daylight.”

— John Maynard Keynes

From this philosophy stems the desire to avoid work at all costs, which Schumacher argues has estranged us from finding fulfillment and purpose in life. He instead encourages us to adopt what he calls a Buddhist Perspective of Human Labor, which consists of three parts.

1: Work To Develop One’s Faculties

Our work should allow for the full expression of our creative and intellectual capacities. It should not be an obligation, but an exercise of our mental dexterity.

2: Work To Overcome Ego-Centeredness

Schumacher states that according to Buddhist tradition, work is the main way in which we can go outside ourselves and collaborate with others. 

3: Work for Necessary Sustenance

The last point of this perspective deals directly with material necessity. Unlike in our modern perspective, working for material gain should be only one component of why we work, not our sole aim.


Personally, I find many of these ideas reminiscent of Henry David Thoreau, which makes sense considering how much inspiration he took from Eastern philosophy. One could argue that the main message in all of his work is his fierce opposition to the mindless consumerism and materialism of his day. Though fiercely eloquent, Thoreau’s thought always featured the tricky juxtaposition of idealism and practicality, and I think that is the main challenge we too must wrestle with if we are to re-think our philosophy of work. However, it is clear to me that we must give it a shot.

“… to make getting a living not merely honest and honorable, but altogether inviting and glorious; for if getting a living is not so, then living is not”

–Life Without Principle, Henry David Thoreau



Freedom of Speech: Establishing First Principles

There have recently been numerous controversies regarding alleged breaches of freedom of speech and expression in the public sphere, on American college campuses of all places. The loudest of these events, of course, have been the media-free safe zones and the reporting of hateful speech proposed by some protesters affiliated with the Concernedstudent1950 movement at the University of Missouri, as well as the controversy over Erika Christakis’ email at Yale University. These and other events clearly challenge the principle of free inquiry that American universities have traditionally nurtured and, at least in the case of Yale, explicitly championed. It looks like these groups are, deliberately or otherwise, weakening the very principle that allows them to freely voice their concerns and express their opinions. Are they aware of the dangers of challenging academic freedom? Wait… am I even aware of them? Why should dissenting, polemical, and yes, even offensive ideas be allowed to air publicly?

In philosophy, to establish first principles –sometimes called a priori terms– is to provide the foundational axioms from which further arguments can be inferred and developed. When it comes to the concept of freedom of speech, I think no one has established first principles more clearly and elegantly than John Stuart Mill in his book On Liberty (1859). In chapter two of that work he lays out the following three reasons we should unequivocally defend free expression: 1. I May Be Wrong, 2. You May Be Right, and 3. The Process and Understanding of Truth.

1. I May Be Wrong

“If any opinion is compelled to silence, that opinion may, for aught we can certainly know, be true. To deny this is to assume our own infallibility” (p. 50)


Mill argues that to attempt to silence someone’s opinion is to assume one’s own infallibility, to believe oneself the possessor of absolute truth. But what could be more evidently wrong than the assumption of infallibility? The whole history of human progress, especially since the Scientific Revolution (which Dr. Yuval Harari aptly calls the “Discovery of Ignorance”), has been founded on the fact that our tendency to be wrong is the only certainty we can lay claim to. Consider the scientific method. In order for a hypothesis to leave the realm of the conjectural, it must be subjected to strict experimentation. Following that, experiments are to be meticulously examined and repeatedly reproduced. And even after this whole process is successfully accomplished, the scientist always knows that their theory could, at any moment, be unrecognizably changed and improved upon.

If this acknowledgment of fallibility is necessary to answer even the simplest scientific question, how much more necessary is it to answer complex ethical ones?

2. You May Be Right

“Though the silenced opinion be an error, it may, and very commonly does, contain a portion of truth; and since the general or prevailing opinion on any subject is rarely or never the whole truth, it is only by the collision of adverse opinions that the remainder of the truth has any chance of being supplied” (p. 51)

Mill’s second point is the natural continuation of the first. Once you accept that you can never be absolutely right (because people are prone to make mistakes), it is easy to see that the only way to form accurate beliefs is by listening to differing opinions, aware of the possibility that they might have something to offer. For example, Newtonian mechanics has helped us understand many things. Using its simple set of ideas, we can explain the movements of objects and the forces underlying them with elegant reasoning and mathematical validity. The theory seemed very complete; indeed, in its time the range of phenomena it could explain was unprecedented. However, despite its apparent coherence, Newtonian mechanics was questioned, and out of its limitations arose relativity and quantum mechanics. Imagine how much poorer we would be if we had tried to stop Einstein just because we thought of Newtonian mechanics as absolutely right.


On this point, Mill also warns against finding shelter in the false security of consensus. Some feel justified in silencing the kinds of speech that seem obviously wrong just because the contrary opinion is held by an overwhelming majority. However, I think that not only sets a dangerous precedent against dissent, but also exhibits the worst kind of intellectual and moral arrogance. Let’s never forget that one of humanity’s most shameful mistakes was socially accepted, politically endorsed, and religiously justified. Indeed, in Frederick Douglass’s famous words, “slavery was preached from the pulpits”, and the first steps toward its eradication were radical opinions and dissenting literature.

3. The Process and Understanding of Truth

“Even if the received opinion be not only true, but the whole truth; unless it is suffered to be, and actually is, vigorously and earnestly contested, it will, by most of those who receive it, be held in the manner of a prejudice, with little comprehension or feeling of its rational grounds” (p. 50)


So far, two of the three examples that I have provided to explain Mill’s points have been scientific. I do this under the assumption that arriving at a scientific truth is similar to arriving at an ethical one, in that the process has to include some sort of conflict of ideas (what is usually called a dialectical notion of truth). Scientific examples are usually better because they do not carry the political and ideological baggage that ethical examples often do. But I do want to address a point that can only be explained through ethical examples.

It seems that people are tempted to silence an opinion because it is offensive, hateful, or insensitive. In 2005 there were attempts to censor some cartoons because they were offensive towards Muslims. A few years later, Charlie Hebdo was the victim of mass disapproval on the same grounds. And just recently there was a petition attempting to ban Donald Trump from the UK. I believe that by trying to censor these people on the grounds of offense, we lose the value of defending our opinions (whatever they may be).


For example, there is no doubt that Trump has been in the habit of saying some very xenophobic, racist, sexist, and overall unintelligent things. It is perfectly reasonable that people would rather not hear him, except that in silencing him they lose the opportunity to figure out why what he is saying is so horribly wrong. I think no one has put it better than Thomas Paine, who wrote in his introduction to The Age of Reason, “The most formidable weapon against errors of every kind is Reason. I have never used any other, and I trust I never shall.” That is the thought I want to leave you with.

Now, I am wary to end without a forceful clearing of the throat. My disapproval of the student protesters and other groups is limited to the actions that have undermined freedom of expression, not to any of their particular arguments (except, of course, in the case of Trump). For instance, I am glad students and faculty members are convening to challenge what they consider to be systemic racism in their universities. I encourage them to continue to do so, and I hope they’ll remember that if there be any legitimacy in their arguments, it will be unearthed through open dialogue, not censorship.


In Defense of Uncertainty (Lesson #1, Rationality Series)

It has been a while since I read the book Rationality: From AI to Zombies by Eliezer Yudkowsky (henceforth RAIZ). I have been meaning to write about it, but the book is 1800+ pages long and covers a great many topics in varying degrees of depth, from evolutionary psychology to quantum mechanics. Instead of writing one long review, I will write a series of short posts, each expanding upon a specific idea from the book.

In Defense of Uncertainty

Certainly one of the biggest takeaways from RAIZ is its attitude towards epistemic certainty. That is to say, because of our systematic, unconscious errors in judgement, and because both the instrument (the brain) and the method (science) for obtaining truth are clumsy and fallible, we must adopt an attitude of humility when making judgments. This has been a recurring theme in this blog, so I don’t want to repeat myself too much. Instead, I want to show how this attitude towards certainty has worked in my life.

Prior to being exposed to the ideas of this book (and to “Rationality”/Mental Models material in general), I thought I was certain about many things, especially political things. When I realized that certainty was pretty much unattainable at an ideological level, my whole political thought-process was radically transformed. To illustrate what I mean, and in homage to the quirky rhetoric in which RAIZ is written, I will do so by way of a dialogue. The following exchange is between a slightly caricatured version of my former self –Irrational Luis– and an idealized, rational me –Rational Francisco–.

Irrational_Luis: “Reds are wrong. Their goal is to suppress freedom and discourage progress. We Greens must stand for the oppressed and fight against the forces that seek to destroy everything that we have accomplished.”

Rational_Francisco: “How can you believe something like that?… You mean to say that everyone who isn’t a Green must be freedom-hating and oppressive?”

Irrational_Luis: “Yup, pretty much, and I will prove it to you. Take a hard look at the history of the world. If you’re really looking, you’ll notice that the biggest oppressors, killers, and liars have been Red. Even when they claimed to be Green, their actions were undoubtedly Red. Likewise, the people who have advocated morality and brought prosperity to our country have been Green. Even when they claimed to be Red, their actions were undoubtedly Green.”

Rational_Francisco: “I see…”

Irrational_Luis: “You’re not convinced? Well, I have plenty more evidence. I go to a group that gets together every Tuesday and Thursday and talks about all the ways in which Green ideas are correct. We also talk about all the silly Red ideas that we hear and discuss how they are wrong. We talk for hours and even make bumper stickers of our favorite Green soundbites.”

Rational_Francisco: “You seem pretty certain of your position.”

Irrational_Luis: “Of course. How couldn’t I be?”

Rational_Francisco: “Well, have you ever considered that you might be wrong?”

Irrational_Luis: “Look, buddy, I’m no fool. I base my opinions on truth. Either the world is Red, or the world is Green. As simple as that. If the world all of a sudden changed to Red, then I would be Red. Got it? But the world is not Red.”

Rational_Francisco: “Would you agree that the world is a complex place?”

Irrational_Luis: “Why, yes, of course. The world is very complex.”

Rational_Francisco: “Okay. Do you think that there are things we haven’t yet understood about the world?”

Irrational_Luis: “What are you getting at? Why so many questions?”

Rational_Francisco: “I’m just trying to understand.”

Irrational_Luis: “Are you even a Green?”

Rational_Francisco: “No.”

Irrational_Luis: “I KNEW IT. You’re one of those filthy…”

Rational_Francisco: “I’m not a Red either.”

Irrational_Luis: “Ah, you must be a foreigner then. My cousin Sandra once talked with one of those hippies and…”

Rational_Francisco: “I’m not. Um… please just answer my question. Do you think there are things out there that you don’t understand?”

Irrational_Luis: “Yeah… sure. But that doesn’t mean that the Reds know any better. You see, those people believe that…”

Rational_Francisco: “Let me stop you right there. You admitted that the world is a complex place and that there are things you don’t understand, so why must reality fit your narrative?

The theories and ideas that you have about the world are, by definition, no more than abstractions that you construct in order to understand reality. You cannot expect to form accurate beliefs when you are dismissive of anything that contradicts your preconceptions. You selectively demand more evidence for anything that opposes your view and all too readily accept things that fit it. You hang around people who think the same way as you and make caricatures of those who don’t.

Breaking free from your self-imposed delusion requires that you discard your need for certainty and narrative coherence. Get used to the fact that you are going to be wrong about many things.  Seek ideas and people who disagree with you and always listen to what they have to say. Re-phrase their ideas as best and as generously as you can and try to find value in their words.

Be humble and admit that there is no reason to assume you already understand the world. It’s good that you are passionate, but don’t worry too much about whether things are Red or Green; worry about whether things are true. Be a champion for truth. That should be your chief concern. Be suspicious whenever you feel certain about something, because that could just be your passions diverting you from the truth.

Here, take this book. It should get you started. Never stop reading. Never stop questioning. Learn some math.”


I originally wrote the dialogue for other purposes, but I think the idea is original enough to merit being reproduced in this blog. It also seems to have the potential to make me cringe when I look back on it some time hence, which is always fun, I guess.

Having Opinions and Getting them Right

“I think chocolate ice cream is tastier than vanilla”

“In my opinion, raising the minimum wage will improve the economy”

Most people wouldn’t think twice about classifying these two statements as “opinions”. Yet they are fundamentally different. The first is a statement of subjective preference that cannot be proven one way or the other; the second is a quantifiable belief claim that can be tested empirically. However, most people fail to make this distinction (or fail to realize its implications). This is particularly problematic once you realize that quantifiable belief claims can have real, life-altering effects. If I believe eating kiwis will cure my cancer, I will be significantly worse off than if I got treatment.


Therefore, it seems that quantifiable belief claims must be subjected to a standard of empirical rigor proportional to the claim’s effect on its environment (the more important a claim is, the more rigor you must apply in verifying it). With that in mind, let me attempt to make a “Checklist for Having Important Opinions”. Let’s go.

Checklist for Having Important Opinions

1. Account for Personal Biases – It seems like I’ve talked about cognitive biases in nearly every post I’ve written. However, accounting for them is one of the most important tools for making sure your belief claims are objectively true. Make sure that the degree to which you believe your quantifiable belief claim is the degree to which you have analysed the evidence for it, and not the degree to which you want to believe it. According to this article, there are certain tested steps you can take to avoid some of the major biases:

  1. Consider the Opposite. “I am inclined to believe X. What are some reasons that X might be incorrect?” (I believe kiwis cure cancer. What are some reasons I can think of that show that kiwis won’t cure my cancer?)
  2. Provide Reasons. Take a quick look through any major social media site and you’ll find people stating quantifiable belief claims in the form of assertions. Instead of doing this, provide reasons for your belief.  “I believe X. I believe X because… “
  3. Get Proper Training.  If you’re going to make statements about economic policy, make sure you have the sufficient training in economics to be able to analyse the data accurately. Likewise with belief claims involving statistics, medical research, penal law, etc.
  4. Reference Class Forecasting. When you are faced with a difficult prediction, find a reference class of similar cases and consider their outcomes in your prediction.
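
To make that last step concrete, here is a toy sketch (in Python) of an outside-view estimate. The data, the function name, and the 50/50 blend of inside and outside views are all hypothetical choices for illustration, not a standard formula:

```python
import statistics

def outside_view_estimate(reference_outcomes, inside_estimate):
    """Temper a gut ('inside view') estimate with the track record
    of a reference class of similar past cases."""
    base_rate = statistics.median(reference_outcomes)
    # Weighting the two views equally is a judgment call, not a rule.
    blended = (inside_estimate + base_rate) / 2
    return base_rate, blended

# Hypothetical reference class: how long six similar projects took, in weeks.
past_projects = [6, 8, 9, 12, 14, 20]
base, estimate = outside_view_estimate(past_projects, inside_estimate=4)
# The optimistic 4-week guess gets pulled toward the 10.5-week base rate.
```

The point is mechanical: the reference class drags an overconfident inside estimate toward what actually tends to happen in similar cases.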


2. Look for Quality Evidence – If your opinion is the type of belief claim that can be supported by empirical studies, look for the studies themselves and analyse them. Do not rely on third parties to interpret the studies for you.

  • First, reading the primary source material is important because people are subject to errors of judgement and can interpret the results of a study in ways that confirm their presuppositions. The higher the degree of separation from the primary sources of evidence, the more likely they are to be misinterpreted.
  • Second, find out if there is expert consensus (usually achieved after an overwhelming number of repeatable, peer-reviewed studies).
  • Lastly, if you have any reason to doubt the veracity of a study (or if its results are particularly controversial), look for meta-analyses and analyse them objectively.


3. Update Your Beliefs – It is not easy to incorporate new evidence into old beliefs. In order to avoid making arguments impervious to evidence, we must make sure to accurately update our beliefs in light of new evidence (provided we have done steps 1 & 2). Also, we must make sure that the degree to which we update our beliefs corresponds to the strength of the evidence (see Conservatism Bias).
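
Updating “in proportion to the strength of the evidence” has a precise form in Bayes’ rule. A minimal sketch, with made-up numbers purely for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from a prior P(H) and the probability of the
    evidence E under the hypothesis and under its negation."""
    # Total probability of seeing the evidence at all.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical: prior belief of 30%, and a study four times more likely
# to appear if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.30, p_e_given_h=0.8, p_e_given_not_h=0.2)
# Belief rises to about 63%, not to certainty: strong-but-inconclusive
# evidence warrants a strong-but-inconclusive update.
```

Note the built-in cure for Conservatism Bias: evidence that is equally likely either way moves the belief not at all, while lopsided evidence moves it a lot.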


Not Enough

I have found these steps personally useful. However, although they might be necessary conditions for forming true beliefs, they are not sufficient ones. Holding truthful quantifiable belief claims is an active process that no checklist will be able to perfect. According to the de-biasing article, going through the steps does not guarantee that you won’t have biases; it just decreases their likelihood.

Likewise, seeking primary sources of evidence and reading meta-analyses may not be enough. First, studies themselves are not impervious to bias or shoddy experimentation (even after peer review). Second, groups of experts, each with their own set of studies, might fundamentally differ on core findings (on cognitive priming, for example). Lastly, there are times when different sets of meta-analyses point to opposite results. Scientists make mistakes as well.

Now, this is not to say that one should adopt a defeatist view of holding true beliefs (i.e. “I cannot be 100% sure that I’m right, so I shouldn’t try”). What I mean to say is that different types of opinions require different degrees of justification. These justifications are difficult, and even when executed correctly they are hardly perfect, so we should approach beliefs fully aware of our ignorance and limitations, and with the honest intention of getting them right (regardless of our feelings).

Resources and Further Reading

-This article on “Reference Class Forecasting” and this one, too.

-These two articles (one and two) by Scott Alexander on the fallibility of experts.

-These two great guides to Bayesian updating (one and two).

-This article on the “Twelve Virtues of Rationality”

Remember Anything. Forever.

Let’s assume we’ve effectively learned a skill. We have climbed through the higher-order cognitive skills of Bloom’s taxonomy through the use of Buzan mind maps and memory palaces, and have even taken advantage of the cognitive benefits of teaching our target skill (whether it be computer programming, systems analysis, or even language learning). We have now internalized the axioms of knowledge necessary to make divergent neural connections and produce creative output. However, there is one enemy lurking in the corner, threatening to throw away all our hard work: memory decay.


This evil bastard will start eating away at our newly acquired knowledge the moment we finish internalizing it and, if we’re not careful, will deteriorate it until it is no longer a part of our mental repertoire of skills. There is, however, a scientifically tested way to defeat memory decay indefinitely: spaced repetition, or more specifically, Spaced Repetition Software (SRS from now on).


The Spacing Effect And Spaced Repetition Software

The spacing effect is the phenomenon whereby we commit a piece of knowledge to long-term memory by rehearsing it at increasingly longer time intervals (as opposed to “cramming” the information the moment we learn it). In fact, if we get the timing right, we can make sure to review a piece of information right before memory decay sets in.

This is exactly what SRS allows us to do. SRS prompts reminders of the target information through “flashcards” that are perfectly timed to beat the memory decay found in the forgetting curve. In other words, this software acts as a neural prosthesis through which we can make sure a piece of information stays in our memory forever, ready for active recall at any moment.  Who said superpowers were only for superheroes?!

Visualization of the forgetting curve and the strategically timed SRS reminders.
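
The interval timing in most SRS tools, Anki included, descends from SuperMemo’s SM-2 algorithm. Here is a simplified sketch of the core scheduling step; the constants follow the published SM-2 description, but the function itself is my own condensation, not Anki’s actual code:

```python
def next_review(interval_days, ease, quality):
    """One step of a simplified SM-2-style scheduler.

    interval_days: days that passed since the last review
    ease: growth factor for the interval (SM-2 starts it at 2.5)
    quality: self-graded recall from 0 (total blank) to 5 (perfect)
    Returns (days until the next review, updated ease).
    """
    if quality < 3:
        # Failed recall: relearn the card and review again tomorrow.
        return 1, ease
    # Successful recall: grow the interval, then nudge the ease up or
    # down depending on how hard the recall felt (floor of 1.3).
    new_ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return round(interval_days * ease), new_ease

# Ten days after a review, a perfect recall pushes the next one
# roughly 25 days out; a failed card comes back the next day.
interval, ease = next_review(10, 2.5, quality=5)
```

Real implementations layer per-card state, interval fuzzing, and separate learning steps on top of this, but the exponential growth of the gaps is the whole trick: each successful recall buys a longer reprieve from the forgetting curve.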


Guide and Tips for Using SRS

Now that we have the neural prosthesis that will give us the key to superhuman memory, what is the best way to use it? I think it would be extremely inefficient to use it to memorize standalone facts that can easily be looked up on the internet. For the first time in history we have ubiquitous, instant access to unlimited amounts of information. Not even the best of us can be better than a computer at storing information, so it would be useless to try (apart from being wasteful of our precious review time). Instead, I think we should commit to memory information that underpins skills and mental models (and information it would be advantageous to have memorized, like romantic poems 😀).

For example, I have written before about the limitations of our thinking. The human brain is by design prone to cognitive biases that prevent us from making optimal, rational decisions. These biases are ingrained in our subconscious thinking, and the only way to work around them is to be able to diagnose the situations in which they might occur and correct for them accordingly. In order to do this, we must have perfect recall of these biases and use them to analyse our most important decisions. This is why I’ve decided to memorize all the known cognitive biases using Spaced Repetition Software.

Similarly, we can use SRS to memorize other meta-cognitive skills that will allow us to understand the world around us (my personal recommendations being Bayesian statistics, game theory, heuristics and biases, and foreign languages).

Lastly, we must make sure not to confuse memorization with true learning. I realize this might be a hard distinction to make given the “mindless-memorization” focus of most modern schooling (see this article for more information). When you use SRS, you are just choosing information that you want to remember forever. If you want to actually learn something, you need to do much more than memorize it (for an overview of the science of learning, I recommend this book or this online course).


Resources and Further Reading

  1. My favorite Spaced Repetition Software: Anki
  2. Dr. Wozniak’s guide to making flashcards for SRS software. 
  3. Gwern’s amazing literature review of Spaced Repetition Software.
  4. Janki’s Guide to using Anki efficiently 
  5. QS Primer: Spaced Repetition and Learning

Effective Writing

The underlying purpose of everything I write is to effectively communicate a thought or idea. Sometimes I write to address a particular audience (or potential audience), and sometimes just to recall and solidify a concept that had been only mental. However, every time I write there is one particular challenge: how do I construct a thought in the best way possible to achieve my purpose? What words should I use? What kind of vocabulary is best for this particular topic? Should I avoid technical jargon? Would I benefit from imitating the style of my favorite authors? How much revision is too much?

Over the years, Ernest Hemingway, George Orwell, Kurt Vonnegut, and many other famous writers have offered writing advice. Their tips and admonitions range from the whimsical (Mark Twain encouraging writers to substitute “damn” for “very”) to the oddly specific (Elmore Leonard’s advice never to open a book with the weather). However, most of these tips do not get at the nature of what constitutes good writing. Instead, most are just constraints that have helped their authors become great writers in their particular genres. The tips agree on some things (e.g. they all warn against overly descriptive language and are dismissive of the passive voice) and contradict each other on others (e.g. George Orwell’s advocacy for simplicity vs. E. B. White’s defense of richer language). The list is also very lengthy. There are dozens of these tips, and while many seem valuable and useful, heeding them all would be impractical and inefficient.

I think there is a better way. Judging whether my writing has achieved the purpose of effectively communicating my message should not consist of going through a checklist of subjective writing advice, but of making sure my work aligns with the way humans interact with information.

Cognitive Strain

Most of the tips that these famous authors give are just ways to achieve the same underlying goal: good writing should seek to decrease cognitive strain whenever possible. Cognitive strain is the amount of effort it takes your brain to process information. Daniel Kahneman shows that the less cognitive strain a thought provokes, the more digestible it will be for the brain. There are three general principles he gives for minimizing cognitive strain in writing:

1. Credibility Through Simplicity

A clever study by Danny Oppenheimer shows that people find simple language more credible than complex language. Choosing the simplest word possible (especially when talking about a complex subject) increases credibility and is widely taken as a sign of superior intelligence. Now, this does not mean that there is no place for technical jargon or rich vocabulary; it just means you should only use a complex word when no other word would equally convey the meaning of your thought. Using a thesaurus to replace simple words with bombastic ones in order to appear more intelligent produces precisely the opposite of the intended effect.

This principle of credibility through simplicity not only makes sense of some very specific writing advice, such as Elmore Leonard’s suggestion to use adjectives sparingly or Stephen King’s hatred of adverbs, but also distills all this information into a principle based on how the human brain works.

2. Memorability

Kahneman’s book also describes an experiment in which students were given a set of aphorisms to rate for insightfulness. One group was given aphorisms that rhymed, and the other group was given the same aphorisms without the rhyme. The rhyming aphorisms received significantly higher insightfulness scores even though the two sets of sayings were virtually identical in meaning and content. This experiment demonstrates a principle that great authors have known for ages and have tried to encourage in their tips: form is just as important as content. There is something about how our brains read written information that associates rhyme and beautiful-sounding sentences with truthfulness and validity.

Now, before we go off and use our favorite rhymed platitude, let’s not forget that while rhymed phrases do sound more pleasing to the brain, their effect gets lost with constant repetition. This is why authors advise against using clichés. Phrases that were once clever and resonant quickly lose their pleasing effect the more they are used. If we are to use rhyming and assonant phrases to our advantage, they must be unfamiliar or original.

3. Readability

Lastly, reducing cognitive strain involves not only the syntax of your writing but also its presentation. Studies show that choosing a readable font and highlighting main ideas in boldface reduces cognitive strain and makes a piece of writing more approachable. Granted, this principle is a cognitive bias that could be used to dress up bad ideas, but you can also exploit it to make sure your writing communicates as effectively as possible.


Intuition: Friend or Foe?


Intuition, n. The ability to understand something immediately, without the need for conscious reasoning.

Intuition Is Your Enemy

Perhaps the greatest lesson I have learned from Daniel Kahneman’s work on decision-making is that we should be extremely skeptical of our intuitions. We all have a subconscious, automatic system of thinking that often produces deviations in judgement, or cognitive biases. These cognitive biases can make us prone to poor decisions (see my previous post) and produce misleading, contradictory, and even racist intuitions, all while we remain completely unaware of what is happening.

Prejudiced Intuitions

In 1995, the psychologist Anthony Greenwald created the Implicit Association Test in order to test the idea that the distinction between implicit and explicit memory also applies to social constructs and attitudes. It has been over a decade since this test was created, and the results have been shocking. A meta-analysis of 185 studies found that about 70% of participants showed an implicit social preference for lighter skin over darker skin. The interesting part is that only 20% self-reported having this preference. In other words, a majority of people have a bias against darker skin, but very few are actually aware of it. Furthermore, the analysis also showed that these implicit biases are predictive of social behavior.

What does this mean? Everyone has probably heard of or experienced situations where employers have denied a position to an otherwise qualified applicant on the basis of an “expert intuition”. This study suggests that the “expert intuition” may well just be an unconscious, implicit bias towards race, gender or age.

Moreover, evidence against expert intuition goes beyond implicit associations. Research shows, for instance, that political forecasters are no better than ordinary people at predicting events, that professional wine connoisseurs cannot tell white wine from red in a blind taste test, and that simple mathematical models have proven vastly superior to professional psychiatrists at diagnosing patients. The body of research is ever-growing and gives us compelling reasons to seriously distrust expert intuitions.

Intuition Is Your Friend

In Blink: The Power of Thinking Without Thinking, journalist Malcolm Gladwell argues that fast, unconscious decisions are a tool for improved decision-making. He says that our mind is excellent at unconsciously spotting patterns from limited windows of experience, and he sees this phenomenon as a useful skill that should be exploited for our benefit. In fact, he provides numerous examples of experts using their unconscious intuitions (or “rapid cognition”, as Gladwell calls them) with surprising speed and accuracy. He tells the stories of chess grandmasters able to come up with the perfect counter-move within a couple of seconds of their opponent’s action, of a tennis coach able to spot that a player will double-fault moments before the racket even touches the ball, and of art experts able to distinguish highly elaborate copies of artwork from originals after only an instant of close inspection.


In fact, the study of highly accurate expert intuition is part of a decision-making framework called Naturalistic Decision-Making (NDM). NDM researchers generally favor the snap decisions of highly skilled, highly experienced individuals over formal analysis and optimal performance models. NDM research shows that in certain environments, these intuitive decisions actually outperform optimal decision models and other formal analyses.


Different Perspectives

What are we to make of this apparent contradiction? On one hand you’ve got Kahneman and his “heuristics and biases” camp advocating skepticism in the face of expert intuition and highlighting the many instances where skilled experts fail miserably. On the other hand, there are Gladwell, Gary Klein, and the NDM crowd showing that intuitive expert decisions are often excellent, sometimes even better than deliberate conscious analysis.

When, if ever, should I trust expert intuitions? When should I trust my own intuitions? Are Gladwell’s examples rare anomalies, or do they represent a broader pattern?

The answer to these questions turns out to be simple and elegant. After a few hours of digging through paper after paper looking for a way to make sense of this intriguing paradox, I stumbled upon a paper aptly titled Conditions for Intuitive Expertise: A Failure to Disagree, co-authored by none other than Daniel Kahneman and Gary A. Klein, the leading advocates of the two decision-making frameworks. In the paper they explain that their frameworks are two different perspectives on the same phenomenon, not opposing and contradictory views. The paper proposes various conditions for judging the validity of intuitive judgement, which I will summarize in three questions.

Diagnostic Questions to Judge the Value of Intuition

  1. Has this intuitive judgement been made in a predictable environment?
  2. Does this environment provide immediate and accurate feedback?
  3. Do you have solid grounds for saying that this individual has had enough opportunity to learn the regularities of the environment? (sufficient expertise)

If the answer to all three questions is yes, then you have enough information to trust the intuitive judgement. If not, you should remain skeptical and be ever wary of cognitive biases in intuitive decision-making.
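For my own reference, the checklist above can be sketched as a tiny function. The function and parameter names are my own shorthand, not terminology from Kahneman and Klein’s paper; the point is simply that all three conditions must hold before an intuition earns our trust.

```python
def trust_intuition(predictable_environment: bool,
                    immediate_feedback: bool,
                    sufficient_expertise: bool) -> bool:
    """Return True only when all three of Kahneman and Klein's
    conditions for valid intuitive judgement are satisfied."""
    return (predictable_environment
            and immediate_feedback
            and sufficient_expertise)

# A chess grandmaster: regular game, instant feedback, decades of practice.
print(trust_intuition(True, True, True))    # True

# A political forecaster: noisy environment, slow and ambiguous feedback.
print(trust_intuition(False, False, True))  # False
```

The conjunction is deliberate: a single failed condition (say, a noisy environment with no reliable feedback) is enough to make an expert’s confident hunch no better than a guess.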


One of the aims of my Open Source Learning Project is to learn to think critically and make better decisions. My brief experience studying intuition taught me something much more profound than an updated model of decision-making; it taught me to look beyond the false dichotomy I instinctively created when confronted with seemingly opposing views. The truth turned out to be useful, enlightening, and even beautiful; but it did require a lot of effort and deep thinking. It would have been much easier to choose the perspective I liked most and rationalize my decision accordingly. However, choosing to pursue the truth is what gave me true understanding of the subject, and that is what matters most.

Further Reading & Resources

Sources of Power: How People Make Decisions by Gary Klein (Amazon)

Implicit Association Test (free to take)

Talks@Google: Daniel Kahneman (video)

Daniel Kahneman’s experience on working with Klein (video)

Learning How to Think: “What You See Is All There Is”

The Problem

There is a severe problem in today’s society: we are constantly forced to pick sides. Whether in sports or politics, society leaves little room for nuance and depth. There is a tendency to sharply dichotomize the world into black and white, completely neglecting the wide spectrum of colors that makes up reality. This phenomenon can be explained with the letters “WYSIATI”.


WYSIATI is an acronym coined by psychologist and economic theorist Daniel Kahneman; it stands for “What You See Is All There Is”. Kahneman uses this phrase to describe a human bias that further illustrates how decision-making is not entirely based on rational thought. In fact, according to Kahneman’s work, our decision-making processes are much less rational than we’d like them to be. For example, imagine I were describing a friend to you and said that this friend is “kind, nurturing, and patient”. Imagine I then asked you whether this friend would be a good parent. You immediately have an answer, don’t you? Why is that? You really don’t know much about the friend I was describing other than the meager information I decided to share. What if I added “manic depressive” to my list of adjectives? Would you have come to the same conclusion?


Kahneman explains that the human mind, marvelous as it is, is a machine for jumping to conclusions from insufficient, unreliable, or unverified information. This feature of our brain is predictable and reproducible. Moreover, we tend to be very confident about the conclusion that pops into our head from this sparse information. It is not difficult to see the evolutionary benefit of this feature: if a primitive Homo sapiens saw the brush moving in unusual patterns, it was better to jump to the conclusion that a predator was hiding in the brush than to pause and seek more evidence.

However, this cognitive feature is obviously detrimental when thinking about multi-dimensional, complex subjects such as politics, education, and ethics. The fact is, most things do not fit neatly into coherent narratives, especially not ones constructed from objectively insufficient information. No matter how coherent your little story sounds, your church-going neighbor is probably not a racist, trigger-happy bigot. Knowing that a peer voted for a specific political party does not give you nearly enough evidence to construct a narrative about her. You have no reliable evidence from which to extrapolate further behavior, yet we do this all the time; we deepen divisions through miscategorization and ad hominem arguments. Unfortunately, the degree to which we are certain about a conclusion depends not on the strength of the evidence on which it is based, but on the coherence of the narrative we have created for ourselves. Thus, we must look beyond the narrative we automatically construct, especially on complex and important subjects.

This is clearly easier said than done. Like I said, this particular bias in decision-making, WYSIATI, is ingrained in our cognitive psyche; it is a feature of our automatic, subconscious System 1. Asking you not to create a narrative when presented with information would be like asking you not to read a word I spell out for you.

“I just wrote the letters ‘H-E-L-L-O’ on the board; however, you must not read the letters I just wrote. Think of these letters as nothing more than a random grouping of characters and not the formulation of a word.”

This would be impossible. What can we do, then?

My Solution


1. Recognition (Grammar)

Kahneman’s groundbreaking research suggests that there is not much we can do to reprogram our brains to avoid cognitive biases. However, we can acquire the tools to recognize them and hopefully minimize their effects.

Memorize cognitive biases and logical fallacies. Logical fallacies have been studied at least since the time of Aristotle and are constantly used by political leaders, advertisers, and con men. Yet perhaps only philosophy majors can list them and their uses. Watch any political speech and tally how many logical fallacies are employed. After you finish picking your jaw up off the floor, commit them to memory.

2. The Grind (Logic)

“It is the mark of an educated mind to be able to entertain a thought without accepting it.”  — Aristotle

Know the opposing view so well that you could be able to recreate its argument and defend it if necessary. There is almost always more nuance to the opposing view than the people in your camp would have you believe.

Overconfidence is also a feature of our brain. For the most part we truly believe the narratives we construct, even to the point of ignoring and discrediting contradictory evidence (see also “confirmation bias”). Don’t trust your conclusions without rigorous testing. Welcome contradictory evidence. Draw plausible conclusions that correspond to the evidence, and not the other way around. Always recognize the possibility that your conclusions are incomplete or incorrect, and be ready to adjust them accordingly.

3. Debate (Rhetoric)

Whether through speech or the written word, seize every opportunity to articulate and defend your arguments. Seek debates, seek criticism, put yourself at the mercy of peer review. I have found that the mere act of outlining or presenting my arguments forces me to analyse them again, especially when I am presenting them to a peer or a group of peers.

This three-step process is the one that has worked for me. I have been following this method for several months now and have already noticed drastic changes in how I process information and how I make decisions.

If you haven’t figured it out already, this “process” is modeled directly on the Trivium Method for critical and creative thinking. I will be writing more about it soon!

Further Reading

Thinking, Fast and Slow by Daniel Kahneman

Wikipedia: Cognitive Bias

Overview of the Trivium Method