WE HAVE DISCUSSED THE power and the limitations of logic, and the power and limitations of emotions. I am going to conclude with a discussion of how to blend logic and emotions to be a helpfully, persuasively, powerfully rational person. Not just a person who follows the rules of logic, but one who can use logic to illuminate the world of emotional humans.
I will begin by summarizing what I think logical behavior includes and doesn’t include, at the most basic level. More subtly, I’ll talk about what it means to be not just logical, but reasonable. Then I’ll go further and describe what I think it means to be powerfully logical: not just following the basic rules of logic, but using advanced techniques to build complex logical arguments and investigations, and thus being able to follow such arguments when others make them.
I will show that even if everyone were logical in this way, there would still be plenty of scope for logical disagreement. But most importantly, I will describe what form I think these disagreements would take, and what a logical argument would look like. I wish all arguments took this form. It doesn’t mean no emotions would be involved. In fact, I’m going to show that, even better than everyone being a logical person, I would like everyone to be an intelligently logical person. I think this involves not just being logical, but using logic in a way that seeks to help other people, and that this involves a crucial blend of logical techniques and emotions instead of a fight between them. This is what I think intelligence consists of, and it can be summed up in one sentence:
I believe logic is at the core of human intelligence, but that it does not work in isolation.
A logical human is one who uses logic. But how? We have seen all sorts of human situations in which logic has limits. To call ourselves logical we should still use logic as far as we can, and no further. Some people see the limitations of logic and conclude that they don’t need to use it at all. But this would be like throwing away a bicycle because it can’t fly.
I believe that a logical human uses logic, but necessarily has core beliefs that they don’t try to justify. This is the starting point of their logic. Then, everything else they believe should be derivable from their core beliefs using logic. Moreover, they should believe everything that follows logically from their core beliefs, and their beliefs should not cause any contradictions.
The idea of core beliefs is analogous to the role of axioms in mathematics, as we discussed in Chapter 11. Believing everything that follows logically from your core beliefs corresponds to the logical notion of “deductive closure”, which we discussed in Chapter 12. The idea that your beliefs should not cause contradictions corresponds to the logical notion of “consistency”, which we discussed in Chapter 9.
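These three conditions can be stated compactly in the standard notation of logic. Here is a minimal sketch, where A stands for the core beliefs, B for everything believed, and ⊢ for “proves” (the labels on the left are my own, not fixed terminology):

```latex
% A = core beliefs (axioms), B = everything believed, \vdash = "proves"
\begin{align*}
  \text{deductive closure of } A:\quad & \mathrm{Cn}(A) = \{\varphi : A \vdash \varphi\}\\
  \text{grounded:}\quad & B \subseteq \mathrm{Cn}(A)
      \quad\text{(everything believed follows from the core beliefs)}\\
  \text{closed:}\quad & \mathrm{Cn}(A) \subseteq B
      \quad\text{(everything that follows is believed)}\\
  \text{consistent:}\quad & \text{there is no } \varphi \text{ with }
      B \vdash \varphi \text{ and } B \vdash \neg\varphi
\end{align*}
```

The first two conditions together say exactly that B = Cn(A): your beliefs are precisely the deductive closure of your core beliefs.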
If these are the basic principles of being logical, what does it mean to be illogical? “You’re being illogical!” is used to try to shut down arguments, often by people who like to think of themselves as rational, against people who lead with their emotions (or simply anyone who disagrees with them). But two people can both be logical and still disagree, if their logical systems are taking them to different places. Someone who is leading with their emotions might not be able to articulate what is logical about their thinking, but that doesn’t mean it is actively illogical.
Being illogical means doing things that go against logic, or cause logical contradictions. But I think it is important that these only really count as logical contradictions if they are contradictions within your own system of beliefs. This is a crucial point because one person’s logic might look like idiocy to another person. I think this is where the battle cry “You’re just not being logical” comes from.
Given my definition of a logical person above, there are several valid ways I could judge you to be illogical:
1. Your beliefs cause contradictions, or
2. there are things you believe that you cannot deduce from your fundamental beliefs, or
3. there are logical implications of things you believe that you do not believe.
An example of the first case is all those people who support the Affordable Care Act but not Obamacare. As we’ve seen, this causes a contradiction because ACA and Obamacare are the same thing, thus those people support and don’t support the same thing–a contradiction. An example of the second case might be things that people “just feel”, such as when they “just feel” that a relationship is not going to work, or they “just feel” that evolution isn’t right, or they “just feel” that it was definitely a vaccination that caused their child to develop autism. An example of the third case is when some men say they don’t think health insurance should include maternity cover because they don’t think anyone should have to pay for treatment for other people, and they regard maternity cover as only for women (despite the fact that it helps everyone who is born). And yet, they still think prostate cancer treatment should be covered, although that is only for men. In fact, isn’t the whole principle of insurance that you pay even when you’re not sick, so that everyone can benefit? I think the statement “I don’t think anyone should have to pay for treatment for other people” logically implies “I don’t believe in insurance”. Thus if the man in question still believes in insurance at all, he is being illogical in the third sense. (Of course, we could perform this analogy pivot and discover that probably the principle he believes deep down is that men should not have to pay for things that only affect women, but it’s perfectly fine for women to have to pay for things that only affect men.)
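The first of these cases can even be caught mechanically, once the beliefs are written down and synonymous names are identified. Here is a minimal sketch in Python; the propositions and the little `synonyms` dictionary are my own illustrative devices, not a real belief-checking system:

```python
# A toy model of the first kind of illogic: a belief set that hides a
# contradiction once we substitute equal things for equal things.
beliefs = {
    "I support the Affordable Care Act": True,
    "I support Obamacare": False,
}

# The fact the believer has not absorbed: the two names denote one thing.
synonyms = {"I support Obamacare": "I support the Affordable Care Act"}

def find_contradictions(beliefs, synonyms):
    """Rewrite each proposition to a canonical name, then report any
    proposition believed to be both true and false."""
    canonical = {}
    contradictions = []
    for proposition, value in beliefs.items():
        name = synonyms.get(proposition, proposition)
        if name in canonical and canonical[name] != value:
            contradictions.append(name)
        canonical[name] = value
    return contradictions

print(find_contradictions(beliefs, synonyms))
# ['I support the Affordable Care Act']
# The same thing is both supported and not supported: an inconsistency.
```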
There are a few things to note here. First of all, contradicting someone else’s logic doesn’t mean you’re illogical. Someone might say “It’s just not logical, mathematically, to pay $50 to eat something in a restaurant when you could make it at home and spend only $5 on the ingredients.” That might be true in their system of beliefs, but in my system of beliefs it might well make sense logically to pay for the luxury of having food cooked for me instead of doing it myself. And not having to do the grocery shopping or clean up afterwards. All this doesn’t necessarily mean that I am being illogical, it just means that we have different axioms.
The next thing to note is that the question of fundamental beliefs is a gray area. Suppose someone believes, without being able to justify it, that the moon landings didn’t really happen. But perhaps they simply think of this as a fundamental belief? It might not seem very fundamental to someone else, but that’s a separate question. It comes down to the ability to follow long chains of deductions. We have already mentioned the example of someone saying “I don’t believe in gay marriage because I believe that marriage should be between a man and a woman.” They may think of “marriage is between a man and a woman” as a fundamental belief, whereas someone else thinks of it as a constructed belief that needs justifying. Likewise if someone believes that you should only vote for someone you truly believe in. One person might think that is an axiom, whereas someone else thinks it needs justification. (I’m amazed that people who think this way ever get to vote at all, but that’s a different question.)
The question of whether or not a belief is fundamental enough to count as an axiom is very different from the question of an axiom actually being unreasonable. None of these questions is very clear cut, as we’ll discuss shortly. Even the issue of believing something “just because you feel it” could be justifiable if one of your fundamental beliefs is “everything I feel to be true is true”. (Incidentally this sounds similar but is very different from saying that feelings are always true.)
Finally note that even the third point, about believing all the things implied by your axioms, gets us into trouble with gray areas. As we discussed in Chapter 12, following the logic inexorably can push us through gray areas to undesirable extremes. For example, if we move in tiny increments, we can logically deduce that it is acceptable to eat any amount of cake at all. The ability to understand gray areas in a nuanced way is an aspect of powerful logic that we will come back to.
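Meanwhile, the cake argument can be mimicked in a few lines of code, along with the fuzzy alternative that a more nuanced treatment of the gray area suggests. This is only a sketch: the gram counts and the shape of the “acceptability” function are invented for illustration.

```python
# Black-and-white version: "one more gram can't matter" marches us,
# step by step, from a harmless nibble to the whole cake.
ok_grams = 10           # eating 10 g of cake is surely acceptable
while ok_grams < 1000:  # if n grams is acceptable, surely n + 1 grams is...
    ok_grams += 1
print(ok_grams)         # 1000 -- we have "proved" a whole 1 kg cake is fine

# Fuzzy version: acceptability fades smoothly instead of flipping at a line.
def acceptability(grams):
    """Degree of acceptability in [0, 1]; the shape is invented."""
    return max(0.0, min(1.0, (400 - grams) / 300))

for grams in (10, 100, 250, 400):
    print(grams, round(acceptability(grams), 2))
# 10 1.0 / 100 1.0 / 250 0.5 / 400 0.0
```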
The main lesson here is that we need to understand the difference between “illogical” and “unreasonable”.
I will judge you to be unreasonable if I think your fundamental beliefs are not reasonable. But this might not mean you’re contradicting logic, it just means we have some fundamental disagreements. If two mathematical systems have different axioms they do not disagree–they are just different systems, and the best we can do is discuss which system is a better model of the situation in question.
We should acknowledge that what counts as a “reasonable” fundamental belief is a gray area, and is an unavoidably sociological concept: different cultures count different things as reasonable. However, I think a key component of “reasonableness” is that there should be some sort of framework for verification and adjustment.
If one of your core beliefs is that the moon is made of cheese, I would say that this is not reasonable, although it makes for fun fiction (as in Wallace and Gromit’s A Grand Day Out). But what is my framework for thinking this? First of all, by a logical argument: cheese is a product of milk, and milk comes from animals. How could all that milk product have got into orbit? Secondly, an argument by evidence: people have been to the moon and brought back dust, and it was not cheese.
Of course, there are some people who believe that the moon landings were fake, and that all the evidence about them is part of a huge conspiracy. I would also say this is not reasonable, because I believe in scientific evidence as one of my core beliefs. I will come back to questions of reasonable doubt and skepticism later.
Before we go further we should note that there are some axioms that don’t really need to be reasonable: those that are more like personal taste. We are allowed to like and dislike food, like and dislike music. But even those tastes can sometimes be justified further. I used to think my dislike of toast was simply an axiom of mine, but people challenged it so often that I have now explained it more fundamentally by the fact that I don’t like crunchy things, and that is because it feels violent to chew them. You might think I’m absurd, or ridiculously sensitive, but I think it’s within my rights as a reasonable person to decide I don’t like the feeling of chewing something crunchy.
Aside from outright contradictions it is hard to talk about what counts as reasonable core beliefs without being stuck floating in a space of relativism: you might worry that I can only call someone’s beliefs unreasonable relative to mine, at which point they can call mine unreasonable relative to theirs, and indeed many arguments take this futile form in which both sides call the other unreasonable and no progress is made.
Setting aside questions of personal taste, there is one criterion for reasonableness that I think has a chance of not being relative, and the clue is right there in the word “reasonable”: are your beliefs open to being reasoned with? That is to say, are you open to changing them? Do you have a framework for knowing when it is time to change them? Are there any circumstances at all under which you would change them?
In one of my favorite moments of Macbeth, Macduff is trying to persuade Malcolm to come back from exile and fight Macbeth for the throne of Scotland. Malcolm has a clever and wise way of discerning whether or not this is a trap to lure him into danger. He starts portraying himself as a terrible person, and describes what a cruel and evil king he would be. He needs to see whether Macduff’s support of him is rational or not. If it is rational, then in the face of Malcolm’s admissions Macduff will withdraw his support. If Macduff does not withdraw it, Malcolm will conclude that the support is not rational and that Macduff is therefore not to be trusted. In the event, Macduff despairs and cries, “O Scotland, Scotland!” and withdraws his support, determining to leave Scotland himself forever. Because Macduff withdraws his support in the face of the supposed new evidence showing how unsuitable Malcolm is to be king, Malcolm is reassured that the support is rational.
I think this openness to changing one’s conclusions or axioms in the face of evidence is an important sign of rationality. If someone continues to support a person or idea or doctrine regardless of further and further evidence then this is a sign that the support is blind rather than rational. There is a difference between loyalty and blind support, and a difference between healthy skepticism and science denial. I think it’s an example of fuzzy logic. Loyalty means not changing your support over minor issues. Blind support means not changing your support over major issues, or any issues at all. Of course, a question remains over what counts as “major” and “minor” issues.
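In fuzzy terms, the difference lies in where, if anywhere, the withdrawal point sits on a scale of issue severity. Here is a minimal sketch; the 0–10 severity scale, the cutoffs, and the labels are my own illustrative choices:

```python
# Support classified by how severe an issue must be before support
# is withdrawn. Scale and thresholds are invented for illustration.
def support_style(withdrawal_threshold, max_severity=10):
    """Classify support by the issue severity at which it is withdrawn."""
    if withdrawal_threshold > max_severity:
        return "blind support"  # no realistic issue changes their mind
    if withdrawal_threshold >= 5:
        return "loyalty"        # stays through minor issues, not major ones
    return "fair-weather"       # withdraws over even minor issues

print(support_style(2))    # fair-weather
print(support_style(7))    # loyalty
print(support_style(99))   # blind support
```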
Here are some things I have changed my mind about over the years. I have already mentioned compulsory voting in Chapter 13. I also now support liberal arts education because I see that this can happen either informally (as in the education I received) or formally (as in the US system). I now support a more active form of feminism because I see that the passive form was not achieving the change I want to see. I (grudgingly) support getting up early, because I’ve discovered it helps me lose weight, possibly for hormonal reasons. And I believe in doing things for myself, not just for other people, because I see that if I neglect myself I reduce my ability to do things for other people.
If I examine these cases carefully I see that I have changed my mind about axioms from a combination of logic, evidence and emotions. Even if it’s not explicit, there is some kind of framework there.
We have discussed the framework that math and science have for deciding what to accept as truth. For math it’s logical proof. For experimental science the framework consists of finding evidence. It is based on statistics, which means that scientists are required to find evidence backing up a theory to a good level of certainty. The framework then says that if new evidence arises to overturn that level of certainty, or even to point in a different direction, science changes the theory accordingly. This is very different from the kind of “theory” where you just make something up because you feel like it.
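One simple model of such a framework is Bayesian updating, in which confidence in a theory is revised each time evidence arrives, and can be revised back down when contrary evidence appears. A minimal sketch; the likelihood numbers are invented for illustration:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: revise confidence in a theory after one observation."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

confidence = 0.5                 # start undecided about the theory
for _ in range(4):               # four experiments, each favoring the theory
    confidence = update(confidence, 0.8, 0.3)
    print(round(confidence, 3))  # 0.727, 0.877, 0.95, 0.981

# The same rule pushes confidence back down when contrary evidence arrives:
confidence = update(confidence, 0.1, 0.9)
print(round(confidence, 3))      # 0.849
```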
We can examine something similar for the framework of news reporting. Reporters are supposed to gather information to back up their story, according to a certain framework of accountability. It is less rigorously defined than in science, but there are still standards to do with cross-checking and reliability of sources. Again, this is very different from the kind of “news” where someone just makes something up. In both cases the report might turn out to be wrong, but in the first case there is a procedure for discovering it is wrong and retracting it, whereas in the second case there isn’t.
This is the crucial difference between erroneous reporting and “fake news”. Unfortunately the term “fake news” has been appropriated by some people to mean, more or less, “anything I disagree with”. If a newspaper retracts an article because its sources turned out to be unreliable or misinformed, some people are likely to shout “Fake news!” However, at least the newspaper has a framework and procedure for verifying its reports. It is always unfortunate when something only turns out to be wrong after publication, but this happens in science despite much more rigorous validation processes, so it is bound to happen in journalism, which works with less rigor and much more time pressure. It is important for the rational among us to maintain the distinction between statements arrived at via a framework and those arrived at without one. It is tempting to try to distinguish between “facts” and falsehoods, but if you follow the logic carefully you will find it difficult to say for sure what a fact is. The best we can do is have a statement verified according to a well-described framework, with an allowance for the fact that the framework might later find it to be wrong.
At this point we are once again in danger of getting caught in a loop, because there are reasonable and unreasonable frameworks. If “reasonable” is defined according to having a “reasonable framework”, have we actually got anywhere or are we just making a cyclic definition?
I think this is why people can disagree so much about what counts as reasonable and what doesn’t: because the notion of what counts as a reasonable framework is sociological, just like the notion of what counts as a valid mathematical proof turned out to be sociological. One group of people thinks that the scientific method is the most reasonable framework, whereas another group thinks it is a conspiracy. One group thinks that the Bible is the most reasonable framework, and another group thinks it is a piece of fiction.
This is why one of the few things I can come back to as a sign of unreasonableness is if someone is absolutely unprepared to change their mind about something. This often takes the form of hero-worship, and I believe it is very dangerous to rational society.
Skepticism is an important part of rationality, and loyalty is an important part of humanity, but both become dangerous when taken to extremes. Blind skepticism and blind loyalty arise when there are no conditions under which someone will change their mind–or when the conditions are so extreme that they might as well not exist.
For example, a climate change denier might say they’ll believe in global warming if the average temperature on earth rises by 10°C in one year. That hardly counts as being “open” to changing one’s mind because it’s a bit like saying “OK I’ll believe in that if hell freezes over”. Deniers of evolution will probably not change their minds no matter what quantity of evidence is produced supporting it, so scientists should probably stop using evidence as a way of trying to persuade them, and try using emotions.
Blind loyalty can be dangerous in another way. When people support a person regardless of anything at all, it can lead to that person gaining a sort of cult status as a superstar or “genius”. Unconditional support sounds like a noble thing, but really should be in some kind of gray area like so many other things. How badly does someone have to behave for you to stop supporting them? Parents are often thought to show unconditional love for their children, but this might be pushed close to or beyond its limits if the child grows up to be a mass murderer.
That is an extreme case, but we see less extreme cases around us all the time in the form of people who exploit their power. When someone starts feeling like they have the unconditional support of people who revere them as some kind of “genius”, they might start behaving badly, knowing that they can count on the blind loyalty of their followers. This can happen in all fields, including science and academia, music, TV and film, and the restaurant industry. It contributes to a culture in which exploitation and harassment are widespread, and so I think we should stop it. Of course this is not a simple issue. At what point should we withdraw our support for someone? It comes back to the difference between “minor” issues and “major” issues, and is yet another gray area.
Gray areas have been popping up repeatedly throughout this book. They seem to be everywhere, and I think we need to accept that and deal with it, and acknowledge that being rational involves accepting that some things are rather fuzzy. For example, many things are “just theories” but that doesn’t make them all equally trustworthy, or equally dubious–it depends what sort of framework has been used to establish that theory. Similarly if a large group of people or sources agree with each other, that doesn’t necessarily mean that there is a conspiracy, but it might–it depends, again, what sort of framework has been used to establish that agreement.
There are many degrees of trust and skepticism that we can show towards theories, sources, experts and evidence. It’s not just about trusting something or not, there’s a huge gray area in between.
Should we believe scientific “experts” or not? At one extreme, some people think that scientists are all in a conspiracy with each other. At another extreme, some people regard science as absolute and unassailable truth. Against science, some people think that trusting science means you’re an unthinking sheep, and that intelligent people are always skeptical about everything. They cite scientific theories from the past that have turned out to be wrong. In favor of science, some people think that those who are skeptical of science are being irrational and using emotions instead of logic. Both sets of people are liable to think the others are being stupid, and this is not a helpful situation.
I think we should acknowledge that there are gray areas everywhere. For skepticism there is healthy skepticism and blind skepticism and everything in between. For trust there is also healthy trust, blind trust, and everything in between. I would say that healthy skepticism and trust come from, again, a well-defined framework, including evidence and logic.
Blind trust and blind skepticism might actually look quite similar on the surface to the healthy versions. The two versions might be equally fervent. But I call someone’s trust or skepticism blind if they can’t justify it through more than a few steps. I can’t justify my belief in science to the end (because there is no end) but I can keep going for a while: I believe in the scientific framework because it has checks and balances; it is self-reflective and self-critical; it is a process rather than an end result; it has a mechanism for updating itself, and there are known occasions on which it has found itself to be wrong and corrected itself.
Some people think that admitting you’re wrong is a sign of weakness, or that changing your mind is a sign of indecision. But I think both are important signs of having some framework for your trust and skepticism. That, to me, is a sign of a more powerful form of rationality.
Being rational is a start, but it is not enough. You can avoid illogic but still not get anywhere, like someone who travels safely by simply never going anywhere. That is different from traveling safely while going all over the world. Being powerfully rational means not just using logic and avoiding logical inconsistencies, but using logic to build complex arguments and gain new insights.
Throughout this book I have discussed logical techniques and processes that I think contribute to powerful rationality. This starts with abstraction, which is what enables us to use better logic in the first place. I think it then has three main components: paths made of long chains of logic, packages made of a collection of concepts structured into a new compound unit, and pivots using levels of abstraction to build bridges to previously disconnected places.
Abstraction is the discipline of separating out relevant details from irrelevant ones, and finding the principles that are really behind a situation in such a way that we can try to apply logic.
It is then important to be able to follow a long chain of deductions, both forwards and backwards, and not just a single step like a child who can’t get further than “If I don’t get ice cream I will scream.” We follow logic forwards to comprehend all the consequences of our thinking, and backwards to construct and understand complex justifications of things. This includes being able to axiomatize a system down to very fundamental beliefs, rather than just believing things because you do, and it also includes being able to understand someone else’s beliefs. If you can’t follow long chains of logic backwards you will be stuck taking almost everything you believe as a fundamental belief. This isn’t exactly illogical, but it’s not very insightful either, and it hardly leaves open the possibility of fruitful discussion. “Why do you think that?” “Because I do.” I think powerful rationality involves being able to unpack your reasoning down to a very small number of core beliefs, and being able to answer “Why do you think that?” down to very deep levels. Just as mathematicians should be able to fill in their proofs to as deep a level of “fractalization” as anyone might ask, we should be able to do that with our beliefs too.
Building inter-related ideas into compound units is an important source of power in logic. The ability to think of a group of things as one unit is something we do naturally every day, when we think of a family, a team, or compound nouns for animals: a flock of birds, a swarm of bees, a herd of cows. We think of a school (and all the people making it up), a business, a theater company. I much prefer using singular verbs with these compound nouns, as I really am thinking of them as single units. I will say “My family is going out for dinner” rather than “my family are going out for dinner”.
Packaging complex systems into single units should not mean forgetting that the system is made of individuals. Powerful rationality involves understanding the way in which the individuals are interrelated, forming the whole system, as we saw in Chapter 5. After looking at those huge diagrams of interconnected causations you might despair that the situation is so complicated. However, if we develop our logical power so that we are able to comprehend and reason with those complex systems as single units, then it will no longer seem complicated. Gray areas are encompassed in this idea about complex systems, as they consist of situations where instead of getting a simplistic yes or no answer out, we have a whole range of related answers on a sliding scale. This is like having a range of probabilities for different possible outcomes, rather than trying to predict one outcome. It might seem hard to understand a range of probabilities rather than one prediction, but a powerfully rational person will then develop the skill of understanding the more difficult concept, rather than giving up and resorting to the simplistic one. The same is true of gray areas.
We tend to look for a single cause or a single answer to a question. One way to find one cause for a complex situation is simply to ignore all the others, as people frequently do when blaming an individual for a complicated situation. However, another way to find a single cause is to package the whole system up and be able to regard that as “one cause”.
This enables us to think more clearly and also move to different levels of abstraction. We discussed at length in Chapter 13 how analogies consist of using abstraction to make pivots to other situations. I believe powerful rationality involves great facility at moving between different levels of abstraction to make different sorts of pivots, to move between different contexts and see many points of view.
Powerful rationality involves being able to separate axioms from implications, which is related to being able to separate logic from emotions. This doesn’t mean suppressing one or the other, but understanding what role each is playing in a situation, and what each is contributing. It involves finding logical justifications or causes of emotional facts, including other people’s. This leads me to an even more important aspect of rationality: how to use it in human interactions.
I think there is something even better than being a powerfully rational person, and that is being an intelligently rational person, which is someone who is not just powerfully rational, but uses that power to help the world, somewhat in the way that the best superheroes use their superpower to help the world. And the best way I think that we can use this superpower to help the world is to bridge divides, foster a more nuanced and less divisive dialogue, and work towards a community that operates as one connected whole.
Life doesn’t have to be a zero-sum game, where the only way to win is to ensure that someone else loses. People who think it does are usually trying to manipulate other people whom they think they can beat. I may sound rather optimistic, but there are abundant examples of situations where people collaborate for the greater good, rather than compete. This is the essence of teamwork and communities, and perhaps the very essence of humanity. We are not, after all, each living in a cave by ourselves, but living in communities at many different scales: families, neighborhoods, schools, companies, cities, countries, and even, with any luck, cooperation between countries.
I believe in a slightly modified version of Carlo M. Cipolla’s theory of intelligence in The Basic Laws of Human Stupidity. He defines stupidity and intelligence according to benefits and losses to yourself and others.
If you benefit yourself but harm others, you are a bandit. If you benefit others but hurt yourself (or incur losses), he calls this “unfortunate”, though I might rather say you are being a martyr. Both of these make life into a zero-sum game. On the other hand there are people who hurt others and themselves at the same time, as in the prisoner’s dilemma. Cipolla defines this as stupidity. The remaining possibility is to help yourself and others at the same time, and Cipolla defines intelligence to be this quadrant of mutual benefit.
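The quadrants are determined by just two signs: whether an action benefits you, and whether it benefits others. Here is a minimal sketch of the classification (with my “martyr” standing in for Cipolla’s “unfortunate”):

```python
def cipolla(benefit_to_self, benefit_to_others):
    """Classify an action by who gains and who loses (Cipolla's quadrants).
    Positive numbers mean benefit, negative mean loss."""
    if benefit_to_self > 0 and benefit_to_others > 0:
        return "intelligent"  # everyone gains: not a zero-sum game
    if benefit_to_self > 0:
        return "bandit"       # you gain at others' expense
    if benefit_to_others > 0:
        return "martyr"       # others gain at your expense ("unfortunate")
    return "stupid"           # everyone loses, as in the prisoner's dilemma

print(cipolla(+1, +1))  # intelligent
print(cipolla(+1, -1))  # bandit
print(cipolla(-1, +1))  # martyr
print(cipolla(-1, -1))  # stupid
```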
This is an eye-opening definition of intelligence, involving nothing to do with knowledge, achievements, grades, qualifications, degrees, prizes, talent or ability. I like it, and it is this form of intelligence that I will use to describe intelligent rationality. Intelligent rationality is where you don’t just use logic, and you don’t just use it powerfully, but you use it in human interactions to help everyone. The aim should be to help achieve better mutual understanding, to help others and yourself at the same time. If you are only using logic to defeat someone else’s argument and promote your own, that is the intellectual version of being a bandit.
Intelligent rationality is about using logic in human interactions, and so it must involve emotions to back up logical arguments in all the ways I have already described. Without this, I don’t believe we have any serious chance of reaching mutual understanding with those who seem to disagree with us. Conversely, intelligent rationality should involve being able to find the logic in someone else’s emotional response as well as our own, rather than just calling emotions wrong.
For example, when I was offered a chance to move to Chicago I was perplexed because rationally it was obviously the best choice for me, but emotionally I felt reluctant. In order to understand this dissonance I wrote a list of weighted pros and cons, and I discovered why I was confused: in favor of the move were a small number of really enormous points, but putting me off the move was a huge list of minor details. I had emotionally become swamped by the huge quantity of minor details. Once I had discovered the source of my fear I was able to reduce it, and in the end I made the decision with no hesitation, and no regrets.
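A weighted pros-and-cons list is easy to make concrete. Here is a sketch of the kind of tally I mean; the items and the weights are invented for illustration, not my actual list:

```python
# In favor of the move: a small number of really enormous points.
pros = {"a wonderful job": 10, "a great city": 8}
# Against it: a long list of minor details.
cons = {"packing": 1, "paperwork": 1, "finding a home": 2,
        "changing banks": 1, "new dentist": 1, "learning new streets": 1}

print(sum(pros.values()), "vs", sum(cons.values()))  # 18 vs 7: move wins
print(len(pros), "pros vs", len(cons), "cons")       # 2 vs 6: cons swamp
# The weighted total clearly favors the move, yet the sheer number of cons
# is what made the decision *feel* frightening.
```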
Another example is when I eat far too much ice cream although I know it’s going to make me feel ill later. I could tell myself I’m just being illogical, but it’s more nuanced than that: I am prioritizing short-term pleasure (delicious ice cream) over medium-term pain. That’s not illogical; it’s a choice, and once I see it as that, I am sometimes able to make a different choice.
Arguing and reasoning with oneself is a good first step, but what about arguing with others? What should we do about people who disagree with us?
It is important to acknowledge that logical people can still disagree. It doesn’t mean that one person is being illogical, although that might be the case. Possibly both people are being illogical. It also doesn’t mean that both people are being stupid. Logical people might disagree because they are starting with different axioms.
For example, perhaps one person believes in helping other people, and another person believes that everyone should help themselves. Those are different fundamental beliefs, but neither is illogical. In fact, I would say it’s a false dichotomy: I believe that everyone should help themselves, but that some people are privileged with more resources to help themselves than others, so we should all also try to help those less privileged than us.
Logical people can also disagree because of the limits of logic. Once we’ve reached those limits there are many different ways we can proceed, depending on what means we choose to help us once logic has run out. Often it is a case of picking a different way of dealing with a gray area, or picking a different place to draw an arbitrary line in a gray area. If one person accuses the other of not being logical, it may be the case that neither person is being entirely logical because the scope of logic has run out.
I think an important aspect of being more than just basically logical involves being able to find the sources of these disagreements, and this involves using logic more powerfully, to have better arguments.
What I want to see in the world is more good arguments. What do I mean by that? I think that a good argument has a logical component and an emotional component and they work together. This is just like the fact that a well-written mathematical paper has a fully watertight logical proof, but it also has good exposition, in which the ideas are sketched out so that we humans can feel our way through the ideas as well as understanding the logic step by step. A good paper also deals with apparent paradoxes, where the logical situation appears to contradict our intuition.
The important first step in a disagreement is to find the true root of the disagreement. This should be something very close to a fundamental principle. We should do this by following long chains of logic in both our argument and theirs. We should try to express it as the most general principle we can, so that we can fully investigate it using analogies.
Next we should build some sort of bridge between our different positions. We should use our best powers of abstraction and pivots to try and find a sense in which we are really just at different parts of a gray area on the same principle.
We should then engage our emotions to make sure we engage theirs, understand them where they are, and then try to edge slowly towards a place where we can meet. This will include finding out what, if anything, would persuade them to change their mind. We also have to show that we are reasonable ourselves, and that we are open to moving our position too, as we should be if we are reasonable. If we really understand their point of view we may discover things we didn’t know that really do cause us to move our position, or even change our mind.
I think a good argument, at root, is one in which everyone’s main aim is to understand everyone else. How often is that actually the case? Unfortunately most arguments set out with the aim of defeating everyone else–most individuals are trying to show that they are right and everyone else is wrong. I don’t think this is productive as a main aim. I used to be guilty of this as much as anyone, but I have come to realize that discussions really don’t have to be competitions. If everyone sets out to understand everyone else, we can all find out how our belief systems differ. This doesn’t mean that one person is right and the other wrong–perhaps everyone is causing a contradiction relative to everyone else’s belief system; this is different from people causing a contradiction relative to their own belief system. Unfortunately too many arguments turn into a cycle of attack and defense. In a good argument nobody feels attacked. People don’t feel threatened by a different opinion, and don’t need to take things as criticism when they’re just a different point of view. This is everyone’s responsibility, and if everyone is an intelligently rational human being, everyone will assume that responsibility for themselves. In order to achieve that, we all need to feel safe. Until everyone is in fact that intelligent, those who are should try to take responsibility for helping everyone to feel unattacked. I try to remind myself as much as possible in any potentially divisive situation: it’s not a competition. Because it almost never is, in fact, a competition.
A good argument does invoke emotions, but not to intimidate or belittle. A good argument invokes emotions to make connections with people, to create a path for logic to enter people’s hearts not just their minds. This takes longer than throwing barbed comments at each other and trying to throw the “killer shot” that will end the discussion, and I think this is right. Logic is slow, as we saw when we looked at how it fails in emergencies. When we are not in an emergency we should have slow arguments. Unfortunately the world is tending to drive things faster and faster, with shorter and shorter attention spans meaning that we are under pressure to convince people in 280 characters, or in a pithy comment that can fit in a few words around an amusing picture, or a clever one-liner–correct or otherwise–so that someone can declare “mind = blown” or “mic drop”. But this leaves little room for nuance or investigation or finding the sense in which we agree along with the sense in which we disagree. It leaves no time for building bridges.
I would like us all to build bridges to people who disagree with us. But what about people who don’t want bridges? People who really want to disagree? This is a meta problem. First we have to persuade people to want those bridges, just like motivating people to want to learn some mathematics before we have any hope at all of sharing it.
As humans in a community, our connections with each other are really all we have. If we were all hermits living in isolation, humanity would not have reached the place it has. Human connections are usually thought of as emotional, and mathematics is usually thought of as removed from emotions and thus removed from humanity. But I firmly believe that mathematics and logic, used in powerful conjunction with emotions, can help us build better and more compassionate connections between humans. We must do it in a nuanced way, though. We have seen that black-and-white logic causes division and extreme viewpoints. False dichotomies are dangerous in the divisions they cause, both in the mind and between people, and “logic versus emotions” is one of those false dichotomies. We should not pit ourselves in futile battles against other humans with whom we are trying to coexist on this earth. And we should not pit logic against emotions in a futile battle that logic can’t win. It’s not a battle. It’s not a competition. It’s a collaborative art. With logic and emotions working together we will achieve better thinking, and thus the greatest possible understanding of the world and of each other.