Interpretation is how we think of the language; it’s what it means. The exact same language can have different interpretations, and you can have a formal system with an uninterpreted language. An example of differing interpretations for the same language (in the context of a system as well, actually) is quantum mechanics. The two main interpretations are the Copenhagen Interpretation and the Many Worlds Interpretation. They are two ways of making sense of the data of QM experiments, and they use the same math. They’re just different ideas of what the math “means”.

The interpretation of our language is pretty straightforward. From the common name of the system (which we’ll get to later) that goes with this interpretation and semantic, “Propositional Calculus”, we can pretty much guess: it’s the math of ideas. It is how we get from one proposition to another proposition. The symbols we introduced in the previous post will be given names corresponding to bits of natural language.

The semantic is how we use the symbols to approximate natural language. We define the way the connectives work such that they behave like the bits of natural language from which they get their names. The connectives are operators and are defined by something called the assignment function. I could bore you with the exact definition of the assignment function, or I could show it to you in tables. Yeah, I’ll go with the tables.

So, let’s start with the “horseshoe”. It’s technically called the “conditional” or “material conditional”. This connective corresponds with the bit of English “If a, then b”. In that example, “a” is called the “antecedent” and “b” is called the “consequent”. Now, let’s get to that table.

| α | β | α⊃β |
|---|---|-----|
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |
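If you like, the conditional’s table can be sketched in code (my own sketch, not part of the system itself): the conditional is false only when the antecedent is true and the consequent is false.

```python
# Sketch: the material conditional as a Python function.
# "if a, then b" is false only in the one case: a true, b false.
def conditional(a, b):
    return (not a) or b

# Reproduce the table above:
for a in (True, False):
    for b in (True, False):
        print(a, b, conditional(a, b))
```

Running the loop prints exactly the four rows of the table.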

Now let’s do conjunction. This corresponds to “and”, and it is easy to see why via the table. Each of the wffs connected by the conjunction is called a “conjunct”.

| α | β | α•β |
|---|---|-----|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |

Having done conjunction, let’s move on to disjunction. This corresponds to one type of “or”: the inclusive or (think “and/or”) rather than an exclusive or (xor). Each of the wffs connected by the disjunction is called a “disjunct”.

| α | β | α∨β |
|---|---|-----|
| T | T | T |
| T | F | T |
| F | T | T |
| F | F | F |

There is a connective related to the conditional called the “biconditional”. It corresponds to “if and only if” and is often abbreviated in text as “iff”. We’ll see in the next post why “if and only if” is a very good description. This connective is defined, as you guessed, by yet another table:

| α | β | α≡β |
|---|---|-----|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | T |

Now we end the post with the easiest assignment table. This one is for negation. It corresponds to “not”.

| α | ~α |
|---|----|
| T | F |
| F | T |
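Putting all five tables together, the whole assignment function can be sketched as a little Python program (a sketch of my own; the function names are my choices, not official notation):

```python
from itertools import product

# Sketch: the assignment function as a mapping from each binary
# connective to a Boolean operation.
connectives = {
    "⊃": lambda a, b: (not a) or b,  # conditional
    "•": lambda a, b: a and b,       # conjunction
    "∨": lambda a, b: a or b,        # disjunction
    "≡": lambda a, b: a == b,        # biconditional
}

# Negation is the lone unary connective:
def negate(a):
    return not a

def show_table(symbol):
    """Print the truth table for one binary connective."""
    op = connectives[symbol]
    for a, b in product((True, False), repeat=2):
        row = ["T" if v else "F" for v in (a, b, op(a, b))]
        print(" | ".join(row))

show_table("⊃")  # reproduces the conditional’s table
```

Swapping in “•”, “∨”, or “≡” reproduces each of the other tables above.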

To follow the proof, we’ll need the tools of metamathematics. This, however, requires formal logic. So, here we begin:

What is logic? How is it different from math? Well, they’re both formalisms. Roughly speaking, logics deal with values and maths deal with quantities. That’s not a rigorous distinction, but it’s enough to give us a working difference. Aside from that, they’re virtually identical ideas. They are languages, what the languages mean, and how we use them.

A formal language can be really anything you want. It’s formal; you get to make it! All you need for a formal language is a set of allowed “primitive symbols” (think of letters of a word-in fact, they’re often called “letters” and are often actually letters) and rules for combining them into what are called “well-formed formulas” (“wffs” for short). That should really make sense. We don’t need the meaning or usage to have a language. If aliens visit Earth in ten million years and all of humanity is dead, they could find something like Wikipedia. From this large body of language, they could learn the language. They can learn what letter combinations are permissible to make words and what word combinations are permissible to make sentences. What they couldn’t learn, however, is the semantics.

In fact, we don’t even need the semantics to use logic. To use it, all we need is to make a formal system. If we’re using a language in a system without a semantic, then our system is using an “uninterpreted language”. All a system is is the language, a set of axioms, and rules which let us write new wffs from the axioms and any wffs that follow from the use of the rules.
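To see that a system really needs no meaning at all, here’s a toy uninterpreted system of my own invention (not one we’ll use later): the letters are ‘a’ and ‘b’, the single axiom is “a”, and the single rule says that from any theorem you may write that theorem with a “b” stuck on the end.

```python
# Sketch: a toy uninterpreted formal system.
# Axiom: "a".  Rule: from any theorem x, you may write x + "b".
axioms = {"a"}

def apply_rule(wff):
    return wff + "b"

# Generate theorems by applying the rule a few times:
theorems = set(axioms)
frontier = set(axioms)
for _ in range(3):
    frontier = {apply_rule(w) for w in frontier}
    theorems |= frontier

print(sorted(theorems))  # ['a', 'ab', 'abb', 'abbb']
```

The strings mean nothing; the system just grinds out new wffs from the axiom by the rule, which is all a formal system is.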

The semantic is what the language means. It’s often that wffs have “True” or “False” values, but that need not be the case. Like with mathematical systems, the semantic could be about quantities. Our systems will, however, have Truth Values.

So, to recap: once you have a language, you can do two things either independently or jointly-you can make a formal semantic and/or you can make a formal system. This has been very non-technical and non-rigorous, but we will get more rigorous as we go along. Now, onto application.

We need to start with what is often called “Propositional Calculus”. The name kind of gives away the idea behind the semantic and system; it’s “the math of ideas”. That’s actually a pretty good way to think about it. So, let’s get to work and define our language:

Our primitive symbols come in two large flavors-connectives and letters.

Our letters can be any English letter or Greek letter (the Greek letters will stand for entire wffs picked at random). Some letters will be variables and others will be constants, but we need not worry about that now.

The connectives are as follows:

‘⊃’, ‘≡’, ‘~’, ‘•’, ‘∨’, ‘)’, ‘(’

That’s it. We just have letters and five connectives (not counting parentheses). Not bad, eh? Now, how do we hook these things together to make sense?

Any letter is a wff. For any wffs ‘α’ and ‘β’, the following are wffs:

~α

α⊃β

α≡β

α•β

α∨β

(α)

That’s it! There’s our language. Next up, the semantic!
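The formation rules above are recursive, so a few lines of code can enumerate wffs mechanically (my own sketch; the letters ‘p’ and ‘q’ are just sample choices):

```python
# Sketch: enumerating wffs from the formation rules above.
letters = ["p", "q"]
binary = ["⊃", "≡", "•", "∨"]

def wffs(depth):
    """All wffs buildable with at most `depth` applications of the rules."""
    if depth == 0:
        return set(letters)          # any letter is a wff
    smaller = wffs(depth - 1)
    out = set(smaller)
    for a in smaller:
        out.add("~" + a)             # ~α is a wff
        out.add("(" + a + ")")       # (α) is a wff
        for c in binary:
            for b in smaller:
                out.add(a + c + b)   # α⊃β, α≡β, α•β, α∨β are wffs
    return out

print(sorted(wffs(1)))
```

One round of the rules already yields strings like ~p, (q), and p⊃q; each further round feeds those back in, which is exactly the recursion in the definition.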

Would there be interest in a series of posts working from zero knowledge of formal logic through some meta-logic, ending with Gödel’s proof and its implications for philosophy and science?

Drop some comments and let me know.

For example, take a single molecule of water. How many objects are there? 1? 3? 4? 6? 7? 9? 10? 20? 21? 24? 41? It’s all in what constitutes an “object”. If we’re only counting molecules, there’s obviously 1. If we’re counting atoms, well, that’s clearly 3. But what of “objects”? Is the molecule itself an object to be counted with the atoms? If so, that makes it 4! We can go on and on until we’re down to quarks, gluons (for those that don’t know, it turns out that the force binding the nucleus together is just the residual effect of the strong force), and electrons. But do we still count the compositions in between the fundamentals and the molecule?

Here’s where the divide comes in. Mereological Realists say that there’s only one object, but Mereological Nihilists say that there’s only the fundamentals and their reactions. That is, the nihilists say that molecules are useful fictions describing common interactions between fundamentals. And the same argument goes on up to macroscopic objects. For the realists, a chair is one object; for the nihilists, it’s trillions of objects.

Now, why did I say this is a confused argument? Because they’re prima facie talking past each other. The two camps don’t disagree on matters of fact. The realists don’t deny that quarks and leptons exist and nihilists don’t deny that chairs are useful descriptions (which is actually their claim). The whole debate is based on confusing a matter of fact with a matter of perspective.

The solution: how many objects there are depends on what level of abstraction on which you’re working.

Like I said, sometimes I hate philosophy. If only the entirety of my field (as opposed to the current majority) were empirical analytic philosophers, I would be saved many facepalms. I’ll just stick with playing on the cutting edge of Philosophy of Language, Philosophy of Mind, and rational extensions of Natural Ethics.

I’m going to use a tool known as Bayes’ Theorem. This is an equation that lets us calculate the probability of an event, taking into account the evidence. It’s essentially the mathematical basis for the famous Carl Sagan quote: “Extraordinary claims require extraordinary evidence”. The equation for only two options is:

\[ P(h|e)=\frac{P(e|h){\times}P(h)}{P(e|h){\times}P(h)+P(e|-h){\times}P(-h)} \]

That is, the probability that our hypothesis is true given the evidence (P(h|e)) depends on the probability of the hypothesis without any evidence for this specific instance (P(h), called a “prior probability”), and on how well the evidence fits with what we would expect to see if our hypothesis were true (P(e|h)).

Now, to the example with a lot of evidence and a low prior probability. In 2011, an estimated 0.5% of the general US population used cocaine. This means the prior probability of any given person is really low. Let’s assume we have a drug test with a 99% accuracy. This means the evidence if it shows up positive is really good for someone having actually done the drug. Now, let’s pop in the numbers:

\[ P(h|e)=\frac{(0.99){\times}(0.005)}{(0.99){\times}(0.005)+(0.01){\times}(0.995)} \]

This turns out to be about 33%. That is, with a 99% accurate drug test, if a person tests positive for cocaine, there is about a 67% chance of it being a false positive. So, given a low prior probability, even with really good evidence, the hypothesis is likely not true.
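The arithmetic is easy to check with a small helper function (a sketch of mine, just restating the equation above in code):

```python
# Sketch: Bayes' Theorem for a binary hypothesis.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(h|e) from the prior P(h) and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# The cocaine-test numbers from the example:
print(round(posterior(prior=0.005, p_e_given_h=0.99, p_e_given_not_h=0.01), 3))
# prints 0.332
```

Try raising the prior to, say, 0.5 and the posterior jumps above 0.99, which is exactly the point about priors.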

There are indeed cases where absence of evidence is evidence of absence.

Fallacies come in two forms: logical and informal. For the most part, when people talk about logical fallacies, they really mean informal fallacies. Logical fallacies are distinct in logical structure, but informal fallacies are almost all the same in terms of structure. Here is a popular list of informal fallacies.

There’s an issue, though. Often, it is the case that in an effort to eradicate all fallacious thinking, people misidentify perfectly fine arguments as fallacies. That is somewhat understandable, though, since some fallacies look a lot like legitimate ways to argue.

For example, Modus Tollens is a legitimate argument form, but it looks like a logical fallacy called “Affirming the Consequent”.

Modus Tollens:

If a is true, then b is true.

b is not true.

Therefore, a is not true.

That is perfectly fine and actually very useful. It does, however, look a lot like Affirming the Consequent.

Affirming the Consequent:

If a is true, then b is true.

b is true.

Therefore, a is true.

This is fallacious, because b could be true for any number of reasons that have nothing to do with a. Now, if we change up the last line of that fallacy just a little bit, it turns into a legitimate way to argue called “Abductive Reasoning”.
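We can actually check this mechanically (a sketch of my own, not part of the post’s argument): a deductive form is valid if and only if no assignment of truth values makes all the premises true and the conclusion false.

```python
from itertools import product

def valid(premises, conclusion):
    """Valid iff no row makes every premise true and the conclusion false."""
    for a, b in product((True, False), repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False  # found a counterexample row
    return True

implies = lambda a, b: (not a) or b

# Modus Tollens: if a then b; not b; therefore not a.
print(valid([implies, lambda a, b: not b], lambda a, b: not a))  # True

# Affirming the Consequent: if a then b; b; therefore a.
print(valid([implies, lambda a, b: b], lambda a, b: a))  # False
```

The second check fails on the row where a is false and b is true, which is precisely the “b could be true for reasons that have nothing to do with a” case.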

Abduction:

If a is true, then b is true.

b is true.

Therefore, we have reason to think a is true.

Abduction, like induction, is probabilistic and so this move is ok.

Most of the informal fallacies on the list to which I linked above are just examples of the informal fallacy known as “Non Sequitur”.

Non Sequitur:

a is true.

Therefore b is true.

It’s fallacious, because it just doesn’t follow. There’s no reason in the logical structure to conclude b from a being true. A very common (and often misidentified) example is Ad Hominem. Almost everyone on the internet thinks they know what an Ad Hom is, but many of them are wrong.

“You’re wrong, so you’re stupid” is not an Ad Hom. Nor is “You’re wrong, stupid-face!”. However, “You’re stupid, so you’re wrong” **is** an Ad Hom. Personal attacks are only fallacious if they’re used to conclude that someone is wrong. Other than that, they’re just bad manners. Again, we can even tweak this slightly to make it non-fallacious: “You’re stupid, so you’re *probably* wrong” is fine, as it’s not deductive.

There may or may not be more entries regarding critical thinking and logic to follow; I’ve yet to decide.
