Monday 23 February 2015

Law of the unconscious statistician

Problem: Let X be a uniform random variable on [0,1], and let $Y=\tan(\pi(X - 1/2))$. Calculate E(Y) if it exists.

Note that the expected value formula $\int t\,f(t)\,dt$ does not apply directly here: with $f$ the density of X, that integral computes E(X) rather than E(Y). How, then, can we find E(Y)? We use the law of the unconscious statistician (LOTUS):

The law of the unconscious statistician is a theorem used to calculate the expected value of a function g(X) of a random variable X when one knows the probability distribution of X but one does not explicitly know the distribution of g(X).

The expected value of Y = g(X) where $g(x)=\tan(\pi(x - 1/2))$ is given as follows:

$\int_{-\infty}^{\infty} g(x) p(x) dx$

where p(x) is the probability density function of X. Here p(x) is 1 for $0 \leq x \leq 1$ and 0 otherwise.

Hence we want to evaluate $\int_0^1 \tan(\pi(x - 1/2))\, dx$, whose antiderivative is $-\pi^{-1}\ln |\cos(\pi(x-1/2))|$. The integral diverges to $+\infty$ on $(1/2, 1)$ and to $-\infty$ on $(0, 1/2)$, so we face the indeterminate form $\infty - \infty$. Thus, the expected value of Y does not exist. (In fact, Y follows the standard Cauchy distribution, which famously has no mean.)

Intuitive explanation: At each value of x, there's a probability of p(x) of having that value, and the corresponding value of Y is just g(x), so add up g(x) p(x) over all x to get the expected value of Y.
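To see the nonexistent mean concretely, here is a minimal Monte Carlo sketch (assuming Python with NumPy; the seed and sample sizes are arbitrary). For a distribution with a finite mean, the running average would settle down; for Y it keeps jumping around:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)
y = np.tan(np.pi * (x - 0.5))   # Y = g(X), a standard Cauchy sample

# Running sample mean; with no true mean, it never stabilizes.
running_mean = np.cumsum(y) / np.arange(1, y.size + 1)
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, running_mean[n - 1])
```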

More to explore:
http://www.quora.com/What-is-an-intuitive-explanation-of-the-Law-of-the-unconscious-statistician

Sunday 22 February 2015

Basel problem

Prove $$\zeta(2)=\sum\limits_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}.$$ There are many proofs of this identity; here we present three.

Euler's method
Having learnt factorization, we may ask: can we "factorize" the sine function? It turns out we can. Note that the roots of the equation $\large \frac{\sin x}{x}=0$ are $n\pi$, where $n=\pm 1,\pm 2, \pm 3, ...$ We can then express $\large \frac{\sin x}{x}$ as an infinite product of linear factors $\large (1+\frac{x}{\pi})(1-\frac{x}{\pi})(1+\frac{x}{2\pi})(1-\frac{x}{2\pi})(1+\frac{x}{3\pi})(1-\frac{x}{3\pi})...,$ or equivalently $\large (1-\frac{x^2}{\pi^2})(1-\frac{x^2}{2^2\pi^2})(1-\frac{x^2}{3^2\pi^2})...(1-\frac{x^2}{n^2\pi^2})...$ (Factoring a function like a polynomial is a heuristic step here; it is justified rigorously by the Weierstrass factorization theorem.)
From this expression, the coefficient of $x^2$ is $\large -\frac{1}{\pi^2}(\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+...+\frac{1}{n^2}+...)=-\frac{1}{\pi^2}\sum\limits_{n=1}^\infty \frac{1}{n^2}$
Since $\large \frac{\sin x}{x}=1-\frac{x^2}{3!}+\frac{x^4}{5!}-...$, we can then conclude that $\large \frac{-1}{\pi^2}\sum\limits_{n=1}^\infty \frac{1}{n^2}=\frac{-1}{3!}\Rightarrow \sum\limits_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$.
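As a quick numerical sanity check of the product formula, here is a sketch in plain Python (the sample point $x=1.3$ and the number of factors are arbitrary choices):

```python
import math

x = 1.3
product = 1.0
for n in range(1, 100_001):
    product *= 1.0 - x**2 / (n**2 * math.pi**2)

# The truncated product should agree with sin(x)/x to several decimals.
print(product, math.sin(x) / x)
```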

Fourier Series
Prerequisites:
$\large{f(x)=\frac{a_0}{2}+\sum\limits_{n=1}^\infty(a_n \cos nx+b_n \sin nx)\:\: -\pi \leq x \leq \pi\\
a_0=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\:dx\\
a_n=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\cos nx\:dx\\
b_n=\frac{1}{\pi}\int_{-\pi}^\pi f(x)\sin nx\:dx}$
[Explanations: later]

Parseval's Identity [Proof: later]
$\large \frac{1}{\pi}\int_{-\pi}^\pi |f(x)|^2 dx=\frac{a_0^2}{2}+\sum\limits_{n=1}^\infty (a_n^2+b_n^2)$


Consider the Fourier series of $f(x)=x,\:\: 0 \leq x \leq 2\pi$. (The same coefficient formulas apply, with the integrals taken over $[0, 2\pi]$.)
$\begin{align}a_0&=\frac{1}{\pi}\int_0^{2\pi} x\:dx=\frac{1}{\pi}\frac{x^2}{2}|_0^{2\pi}=2\pi\\
a_n&=\frac{1}{\pi}\int_0^{2\pi} x \cos nx\:dx\\
&=\frac{1}{\pi}\int_0^{2\pi} x\:d(\frac{\sin nx}{n})\\
&=\frac{1}{\pi}(x\frac{\sin nx}{n}|_0^{2\pi}-\int_0^{2\pi} \frac{\sin nx}{n}\:dx)\\
&=\frac{1}{\pi}\frac{\cos nx}{n^2}|_0^{2\pi}\\
&=0\\
b_n&=\frac{1}{\pi}\int_0^{2\pi} x \sin nx\:dx\\
&=\frac{1}{\pi}\int_0^{2\pi} x\:d(-\frac{\cos nx}{n})\\
&=\frac{1}{\pi}(-x\frac{\cos nx}{n}|_0^{2\pi}+\int_0^{2\pi} \frac{\cos nx}{n}\:dx)\\
&=\frac{1}{\pi}(\frac{-2\pi\cos 2n\pi}{n}+\frac{\sin nx}{n^2}|_0^{2\pi})\\
&=-\frac{2}{n}
\end{align}$

Thus, the Fourier series of x is:
$\large \frac{2\pi}{2}+\sum\limits_{n=1}^\infty(\frac{-2}{n})\sin nx$

Using Parseval's Identity,
$\large \frac{1}{\pi}\int_0^{2\pi}x^2dx=\frac{(2\pi)^2}{2}+\sum\limits_{n=1}^\infty(\frac{-2}{n})^2$.
Since $\large \frac{1}{\pi}\int_0^{2\pi}x^2dx=\frac{1}{\pi}\frac{x^3}{3}|_0^{2\pi}=\frac{8\pi^2}{3}$,
we have $\large \frac{8\pi^2}{3}=2\pi^2+4\sum\limits_{n=1}^\infty \frac{1}{n^2} \Rightarrow \sum\limits_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$
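The coefficients above can also be checked numerically; here is a sketch assuming Python with NumPy (the midpoint rule and grid size are arbitrary choices):

```python
import numpy as np

N = 200_000
dx = 2 * np.pi / N
x = (np.arange(N) + 0.5) * dx   # midpoints of [0, 2*pi]

a0 = np.sum(x) * dx / np.pi     # expect 2*pi
print(a0, 2 * np.pi)
for n in (1, 2, 3):
    an = np.sum(x * np.cos(n * x)) * dx / np.pi   # expect 0
    bn = np.sum(x * np.sin(n * x)) * dx / np.pi   # expect -2/n
    print(n, an, bn, -2 / n)
```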

Sidenote:
Applying the same method to the Fourier series of $x^2$ yields $\large \sum\limits_{n=1}^\infty \frac{1}{n^4}=\frac{\pi^4}{90}$; more generally, $x^k$ leads to values of $\large \zeta(2k)=\sum\limits_{n=1}^\infty \frac{1}{n^{2k}}$.
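Partial sums make both constants plausible (a sketch in plain Python; note the tail of $\zeta(2)$ decays only like $1/N$, so the second comparison converges much faster):

```python
import math

N = 1_000_000
s2 = sum(1.0 / n**2 for n in range(1, N + 1))
s4 = sum(1.0 / n**4 for n in range(1, N + 1))
print(s2, math.pi**2 / 6)    # agree to about 6 decimals
print(s4, math.pi**4 / 90)   # agree to machine precision
```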

Probabilistic approach
[later]

Key points:
The sine function can be expressed as an infinite product of linear factors.
Riemann zeta function

Friday 20 February 2015

Complex Number

A complex number $x+yi$ is defined to be the point in the Argand diagram with coordinates (x, y).

Properties of complex numbers



Writing $z=x+yi$ in polar coordinates $(r, \theta)$, we have $x=r\cos\theta,\ y=r\sin\theta$.
Then, the polar form will be $r(\cos\theta+i\sin\theta)=r\operatorname{cis}\theta$, where $\operatorname{cis}\theta=\cos\theta+i\sin\theta$.
Modulus: $|z|=r=+\sqrt{x^2+y^2}$
$\tan\theta=\frac{y}{x}$
Argument: $\arg z=\theta$

$\begin{align}zw&=r(\cos\theta+i\sin\theta)r'(\cos\phi+i\sin\phi)\\
&=rr'[(\cos\theta\cos\phi-\sin\theta\sin\phi)+i(\cos\theta\sin\phi+\sin\theta\cos\phi)]\\
&=rr'[\cos(\theta+\phi)+i\sin(\theta+\phi)]\end{align}$
$|zw|=|z||w|$
Arguments add upon multiplication.

$z^n=r^n(\cos\theta+i\sin\theta)^n=r^n(\cos n\theta+i\sin n\theta)$
This is due to De Moivre's Theorem.
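A quick numerical check of De Moivre's theorem (a sketch using Python's standard math module; the sample values are arbitrary):

```python
import math

r, theta, n = 1.7, 0.6, 5
z = r * (math.cos(theta) + 1j * math.sin(theta))
lhs = z**n
rhs = r**n * (math.cos(n * theta) + 1j * math.sin(n * theta))
print(lhs, rhs)   # equal up to floating-point error
```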

Root of unity (applied in number theory, the theory of group characters, the discrete Fourier transform, etc)
A complex number that gives 1 when raised to some positive integer power n.
$z^n=1$

$\epsilon=\cos\frac{2\pi}{n}+i\sin\frac{2\pi}{n}\\
\epsilon^k=\cos\frac{2\pi k}{n}+i\sin\frac{2\pi k}{n}\\
\epsilon^{n-1}=\cos\frac{2\pi (n-1)}{n}+i\sin\frac{2\pi (n-1)}{n}\\
\epsilon^{n}=1\\
|\epsilon|=1 \Rightarrow \text{the powers } \epsilon^k \text{ lie on the unit circle.}$
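A short sketch listing the $n$-th roots of unity (plain Python; $n=7$ is an arbitrary choice):

```python
import cmath
import math

n = 7
eps = cmath.exp(2j * math.pi / n)   # cos(2*pi/n) + i*sin(2*pi/n)
for k in range(n):
    z = eps**k
    print(k, z, abs(z))             # each has |z| = 1
    # and each satisfies z**n = 1 (up to rounding):
    assert abs(z**n - 1) < 1e-9
```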



Complex Exponential Taylor Series
$$e^z=\sum_{n=0}^\infty \dfrac{z^n}{n!}\\ \cos z=\sum_{n=0}^\infty \dfrac{(-1)^nz^{2n}}{(2n)!}\\ \sin z=\sum_{n=0}^\infty \dfrac{(-1)^n z^{2n+1}}{(2n+1)!}$$
Direct multiplication of series shows $\large e^{z+w}=e^z\cdot e^w$, thus $\large e^z=e^{x+yi}=e^x\cdot e^{iy}$.
$e^{iy}=\sum_{n=0}^\infty \dfrac{(iy)^n}{n!}=\sum_{n=0}^\infty \dfrac{(-1)^n y^{2n}}{(2n)!}+i\sum_{n=0}^\infty \dfrac{(-1)^n y^{2n+1}}{(2n+1)!}=\cos y+i\sin y$
Sidenote: $e^{2\pi i}=1,e^{\pi i}=-1,e^{\frac{\pi i}{2}}=i,e^{\frac{3\pi i}{2}}=-i,e^{\frac{2\pi i}{8}}=\sqrt{i}=\dfrac{\sqrt{2}}{2}+i\dfrac{\sqrt{2}}{2}$
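These special values are easy to confirm numerically (a sketch with Python's cmath):

```python
import cmath
import math

for t in (2 * math.pi, math.pi, math.pi / 2, 3 * math.pi / 2):
    print(cmath.exp(1j * t))   # ~1, -1, i, -i up to rounding
w = cmath.exp(2j * math.pi / 8)
print(w, w**2)                 # w ~ sqrt(2)/2 + i*sqrt(2)/2; w**2 ~ i
```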

More to explore:
http://www.math.vt.edu/people/dlr/m2k_bas_cmpext.pdf

Sunday 15 February 2015

Tridiagonal matrix

Definition:
A square matrix with nonzero elements only on the main diagonal, the subdiagonal, and the superdiagonal.

A simple example: $$\begin{vmatrix}
2 & -1 & & & & \\
-1 & 2 & -1 & & & \\
& -1 & 2 & -1 & & \\
& & -1 & 2 & -1 &\\
& & & -1 & 2 & -1 \\
& & & & -1 & 2 \\
\end{vmatrix}$$ We can make use of a recurrence relation to find this determinant.
Expand the determinant by the first row: $$2\begin{vmatrix}
2 & -1 & & & \\
-1 & 2 & -1 & & \\
& -1 & 2 & -1 &\\
& & -1 & 2 & -1 \\
& & & -1 & 2 \\
\end{vmatrix}+
\begin{vmatrix}
-1 &  -1 & & & \\
& 2 & -1 & & \\
& -1 & 2 & -1 &\\
& & -1 & 2 & -1 \\
& & & -1 & 2 \\
\end{vmatrix}$$ which is equal to $$2\begin{vmatrix}
2 & -1 & & & \\
-1 & 2 & -1 & & \\
& -1 & 2 & -1 &\\
& & -1 & 2 & -1 \\
& & & -1 & 2 \\
\end{vmatrix}-
\begin{vmatrix}
2 & -1 & & \\
-1 & 2 & -1 &\\
& -1 & 2 & -1 \\
& & -1 & 2 \\
\end{vmatrix}$$ Let $f_n$ be the determinant of the $n\times n$ tridiagonal matrix with diagonal elements all equal to $2$, the sub and super diagonal elements all $-1$.

In general, we have $$f_n=2f_{n-1}-f_{n-2},\qquad f_1=2,\ f_2=3.$$ The characteristic equation $x^2=2x-1$ has the repeated root $1$, so $f_n$ is linear in $n$; the initial values give $f_n=n+1$. Thus, the determinant of this $6\times 6$ matrix is $\fbox{7}$.
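A sketch comparing the recurrence with a direct determinant computation (assuming Python with NumPy; the range of sizes is arbitrary):

```python
import numpy as np

def f(n):
    """Determinant via f_n = 2 f_{n-1} - f_{n-2}, f_1 = 2, f_2 = 3."""
    a, b = 2, 3
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, 2 * b - a
    return b

for n in range(1, 8):
    M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    print(n, f(n), round(np.linalg.det(M)))   # both equal n + 1
```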

More to explore:
continuant
continued fraction
http://www.sciencedirect.com/science/article/pii/S0096300307007825
Chapter 6, 7

Saturday 14 February 2015

Recurrence relations

Special Example:
Find a closed-form formula for the Fibonacci sequence defined by $F_{n+1}=F_{n}+F_{n-1}, n>0, F_0=0, F_1=1$.

Characteristic equation:
$x^2-x-1=0$
$\large x_1=\phi=\frac{1+\sqrt{5}}{2}, x_2=-\phi^{-1}=\frac{1-\sqrt{5}}{2}$

General solution for the recurrence:
$\large F_n=c_1\left(\frac{1+\sqrt{5}}{2}\right)^n+c_2\left(\frac{1-\sqrt{5}}{2}\right)^n$

Using the initial conditions,
$\large{ c_1+c_2=0, c_1\phi-c_2\phi^{-1}=1 \Rightarrow c_1=\frac{1}{\sqrt{5}}, c_2=-\frac{1}{\sqrt{5}}\\
F_n=\frac{1}{\sqrt{5}}[\phi^n-(-\phi^{-1})^n]}$
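A sketch checking Binet's formula against the recurrence (plain Python; rounding absorbs the floating-point error, which stays below 0.5 for moderate n):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

def binet(n):
    return round((phi**n - (-1 / phi)**n) / sqrt(5))

fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])

print(fib == [binet(n) for n in range(20)])   # True
```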

Arithmetic Sequence
$a_n=a_{n-1}+d=a_0+dn$

Example:
$1, 4, 7, 10, 13, ...\\
a_0=1\\
a_n=a_{n-1}+3=a_0+3n=1+3n$

Geometric Sequence
$a_n=r\cdot a_{n-1}=r^n\cdot a_0$

Example:
$1, 4, 16, 64, 256, ...\\
a_0=1\\
a_n=4a_{n-1}=4^n \cdot a_0=4^n$

Polynomial
$a_n=a_{n-1}+p(n)$

Example:
$5, 0, -8, -17, -25, -30, ...\\
a_0=5\\
a_n=a_{n-1}+p(n)\\
a_1=a_0+p(1) \Rightarrow p(1)=-5\\
p(2)=-8, p(3)=-9, p(4)=-8, p(5)=-5\\
p(n)=n^2-6n$

If $p$ has degree $d$, the recurrence $a_n=a_{n-1}+p(n)$ has a polynomial closed-form formula of degree $d+1$, since summing a degree-$d$ polynomial raises the degree by one. Here $p$ has degree $2$, so we look for a cubic.

Thus, $a_n=c_3n^3+c_2n^2+c_1n+c_0$.
It remains to solve the linear system:
$c_0=5\\
c_3+c_2+c_1+c_0=0\\
8c_3+4c_2+2c_1+c_0=-8\\
27c_3+9c_2+3c_1+c_0=-17\\
\large{c_3=\frac{1}{3},c_2=-\frac{5}{2},c_1=-\frac{17}{6},c_0=5\\
a_n=\frac{1}{3}n^3-\frac{5}{2}n^2-\frac{17}{6}n+5}$
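A sketch verifying the cubic against the recurrence (plain Python; Fraction keeps the arithmetic exact):

```python
from fractions import Fraction as F

def closed(n):
    return F(1, 3) * n**3 - F(5, 2) * n**2 - F(17, 6) * n + 5

a = [5]
for n in range(1, 10):
    a.append(a[-1] + n**2 - 6 * n)   # a_n = a_{n-1} + p(n)

print(a == [closed(n) for n in range(10)])   # True
```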

Linear Combination

Example:
$1, 4, 13, 46, 157, ...\\
a_0=1, a_1=4\\
a_n=2a_{n-1}+5a_{n-2}$

Characteristic equation:
$x^2=2x+5\\
x^2-2x-5=0\\
x=1\pm \sqrt{6}$

General solution for the recurrence:
$a_n=c_1(1+\sqrt{6})^n+c_2(1-\sqrt{6})^n$

Using the initial conditions,
$c_1+c_2=1, c_1(1+\sqrt{6})+c_2(1-\sqrt{6})=4\\
\large{ c_1=\frac{2+\sqrt{6}}{4}, c_2=\frac{2-\sqrt{6}}{4}\\
a_n=\frac{2+\sqrt{6}}{4}(1+\sqrt{6})^n+\frac{2-\sqrt{6}}{4}(1-\sqrt{6})^n}$
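A sketch checking this closed form against the recurrence (plain Python; round absorbs floating-point error):

```python
from math import sqrt

r1, r2 = 1 + sqrt(6), 1 - sqrt(6)
c1, c2 = (2 + sqrt(6)) / 4, (2 - sqrt(6)) / 4

def closed(n):
    return round(c1 * r1**n + c2 * r2**n)

a = [1, 4]
for _ in range(8):
    a.append(2 * a[-1] + 5 * a[-2])

print(a == [closed(n) for n in range(10)])   # True
```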

Generating Functions
$A(x)=\sum_{k=0}^\infty a_k x^k$

A generating function is a formal power series where the coefficient of $x^n$ is the nth term of the sequence.

Example:
$2, 5, 14, 41, 122, ...\\
a_0=2, a_n=3a_{n-1}-1$

$\large{A(x)\\
=\sum_{k=0}^\infty a_k x^k\\
=2+\sum_{k=1}^\infty a_k x^k\\
=2+\sum_{k=1}^\infty(3a_{k-1}-1)x^k\\
=2+\sum_{k=1}^\infty3a_{k-1}x^k-\sum_{k=1}^\infty x^k\\
=2+\sum_{k=1}^\infty3xa_{k-1}x^{k-1}-\sum_{k=1}^\infty x^k\\
=2+3xA(x)-\frac{x}{1-x}\\
(3x-1)A(x)=\frac{x}{1-x}-2}$

$\large{\begin{align}A(x) &= \frac{\frac{x}{1-x}-2}{3x-1}\\
&=\frac{3x-2}{(3x-1)(1-x)}\\
&=\frac{3}{2}\frac{1}{1-3x}+\frac{1}{2}\frac{1}{1-x}\\
&=\frac{3}{2}(1+3x+9x^2+...+3^kx^k+...)+\frac{1}{2}(1+x+x^2+...+x^k+...)\\
&=\sum_{k=0}^\infty \frac{3^{k+1}+1}{2} x^k\\
\end{align}}$

$\large a_n=\frac{3^{n+1}+1}{2}$
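A sketch expanding $A(x)$ as a power series with SymPy and comparing coefficients (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
A = (3*x - 2) / ((3*x - 1) * (1 - x))

coeffs = sp.Poly(sp.series(A, x, 0, 6).removeO(), x).all_coeffs()[::-1]
print(coeffs)                                     # [2, 5, 14, 41, 122, 365]
print([(3**(n + 1) + 1) // 2 for n in range(6)])  # same list
```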

[Click here for a review on methods of partial fractions.]

Explanations:
[later]

Applications
Matrix
[later]

Probability
[later]

More to explore:
difference equations
dynamical systems
Mandelbrot and Julia set
logistic equation

Question:
Can we always find the nth term of a sequence?

Sunday 8 February 2015

Methods to find probability I

I believe one way to be sure about the answer to a probability problem is to be able to use different methods to arrive at the same probability.

A good example:
Tossing two fair coins once.
Find the conditional probability that both coins show tails given that
a) the first coin shows a tail;
b) at least one coin shows a tail.

Method I. Direct Way
a) P(TT | T first)
= P(T second)
= 1/2

b) P(TT | at least 1 T)
= (*) P(TT and at least 1 T) / P(at least 1 T)
= P(TT) / [1 - P(no T)]
= (1/2)^2 / (1-1/4)
= 1/3

(*) Note that P(TT and at least 1 T) ≠ 1/2. The event "TT and at least one T" is just TT, so its probability is 1/4.
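All the methods below can be cross-checked by simulation (a sketch in plain Python; the seed and trial count are arbitrary):

```python
import random

random.seed(0)
first_tail = tt_given_first = at_least_one = tt_given_one = 0
for _ in range(1_000_000):
    c1, c2 = random.choice('HT'), random.choice('HT')
    if c1 == 'T':                       # condition (a)
        first_tail += 1
        tt_given_first += (c2 == 'T')
    if 'T' in (c1, c2):                 # condition (b)
        at_least_one += 1
        tt_given_one += (c1 == c2 == 'T')

print(tt_given_first / first_tail)   # ~ 0.5
print(tt_given_one / at_least_one)   # ~ 0.333
```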

A related problem:

Assume the chance of having a boy or a girl in a family is equal.
a) In a two-kid family, given that the elder one is a boy, find the probability of the younger one being a boy.
b) In a two-kid family, given that one of them is a boy, find the probability of the other child also being a boy.

Denote boy by B, girl by G, the first letter being the older one, the second letter the younger one.

All possibilities of a two-kid family:
{GG, BG, GB, BB}

a) P(_B | B_) = 1/2
The younger one being a boy does not depend on whether or not the elder one is a boy.

b) P(BB | 1 B) = 1/3
We have to rule out GG if we are given that one of them is a boy.

Moral of the story: list out all possibilities to understand the problem! Don't rely on your intuition!


Method IIa. Enumeration (Listing all possibilities)
{HH, HT, TH, TT}

a) P(TT | T first)
= P({TT} | {TH, TT})
= P({TT} and {TH, TT}) / P({TH, TT})
= P({TT}) / P({TH, TT})
= #{TT} / #{TH, TT}   (counting equally likely outcomes)
= 1/2

b) P(TT | at least 1 T)
= P({TT} | {HT, TH, TT})
= P({TT} and {HT, TH, TT}) / P({HT, TH, TT})
= P({TT}) / P({HT, TH, TT})
= #{TT} / #{HT, TH, TT}   (counting equally likely outcomes)
= 1/3

Method IIb. Enumeration (Listing all possibilities)
{HH, HT, TH, TT}

a) P(TT | T first)
= 1/2
Since we know the first coin shows a tail, the possibilities are {TH, TT}, so the probability is 1/2.

b) P(TT | at least 1 T)
= 1/3
We already know there is at least one tail, which means we can rule out the case HH. From the remaining cases {HT, TH, TT}, we know the probability of having TT from the given condition is 1/3.

Method III. Logic
Let Ti denote the event that the i-th coin shows a tail.

a) P(TT | T first)
= P(T1 and T2 | T1)
= P(T1 and T2 and T1) / P(T1)
= P(T1 and T2) / P(T1)
= (1/2)(1/2) / (1/2)
= 1/2

b) P(TT | at least 1 T)
= P[T1 and T2 | T1 or T2]
Note that T1 and T2 are not mutually exclusive. T1 or T2 includes three cases HT, TH, TT.
= P[(T1 and T2) and (T1 or T2)] / P(T1 or T2)
= P[(T1 and T2 and T1) or (T1 and T2 and T2)] / P(T1 or T2)
= P(T1 and T2) / P(T1 or T2)
= P(T1 and T2) / [P(T1) + P(T2) - P(T1 and T2)]
= (1/2)(1/2) / [1/2 + 1/2 - (1/2)(1/2)]
= 1/3

Method IV. Generating Functions (Refer to my previous post here.)
t: Tail appears
h: Head appears
$(t+h)^2=t^2+2th+h^2$

Since order matters in this problem, we treat $t$ and $h$ as non-commuting symbols:
$(t+h)^2=t^2+th+ht+h^2$

a) P(TT | T first) = 1/2
Numerator: coefficient of t^2
Denominator: sum of coefficients of t^2 and th

b) P(TT | at least 1 T) = 1/3
Numerator: coefficient of t^2
Denominator: sum of coefficients of terms with t, namely t^2, th, ht
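SymPy can reproduce this order-sensitive expansion with non-commuting symbols (a sketch assuming SymPy is available):

```python
import sympy as sp

# With commutative=False, th and ht stay distinct in the expansion.
t, h = sp.symbols('t h', commutative=False)
print(sp.expand((t + h)**2))   # t**2 + t*h + h*t + h**2
```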

Friday 6 February 2015

Real-Life Maths

Angle

City Planning
For safe traffic, street intersections are often made at right or obtuse angles, since these give drivers better visibility when turning. A sharp 60° turn is more prone to accidents because the turn is difficult to make. It would be easier for the driver if the road were constructed with an additional intersection, so that the car first turns through 150° and then through 90°.

A vehicle can turn more easily with the restructured street design.
Parking spaces
Angles can determine the number of cars that park in a lot. Parking arrangements are often either perpendicular or angled. An advantage of the former is that more cars can fit in the parking lot. As for obtuse-angled spaces, it is easier for drivers to turn a car through an obtuse angle than a right angle, which also means there will be fewer accidents.

Set

Search Engines
Almost all search engines use a "Boolean conjunction" model.
If you search "interesting mathematics" in a search engine, the result will be the pages that contain both "interesting" and "mathematics".


[More applications later]

Wednesday 4 February 2015

Derivation of Normal Distribution

The derivation can be found here. There is so much to learn from this proof. You can try to read it to see if you understand it. If you encounter difficulty understanding the first part of the proof, you can refer to the explanations below.

Concepts:
integration as sum of area of rectangles
probability as area under the curve
describing one thing from different perspectives
polar coordinates
polar integration
solving differential equation by separating variables
differential equation true for any $x$ and $y \Rightarrow$ equals a constant
large errors less likely than small errors $\Rightarrow C$ must be negative
characteristic of a probability distribution: total area under the curve must be $1$
symmetry: properties of even and odd functions
rewriting the product of two integrals as a double integral
evaluating a double integral using polar coordinates
improper integral
integration by parts
transformation of graphs

Explanations:



The probability of the dart falling in the vertical strip from $x$ to $x+\Delta x$ is $p(x)\Delta x$. We can think of probability as the area under the curve. $p(x)\Delta x$ is the area of the rectangle. Similarly, the probability of the dart falling in the horizontal strip from $y$ to $y+\Delta y$ is $p(y)\Delta y$.

Since we assumed that errors in perpendicular directions are independent, the probability of the dart falling in the shaded region can be given by $\large p(x)\Delta x \cdot p(y) \Delta y$. Now, any region $r$ units from the origin with area $\Delta x \cdot \Delta y$ has the same probability. [Finding probability from two perspectives] We thus have $\large p(x)\Delta x \cdot p(y) \Delta y=g(r)\Delta x \Delta y$, which means $g(r)=p(x)p(y)$.

[Sidenote: Using two methods to express the same thing can yield interesting results.
$\int \sin 2x \cos x dx=\int 2\sin x \cos^2 x dx=-2\int u^2 du=-\frac{2}{3}\cos^3 x+C_1$
$\int \sin 2x \cos x dx=\frac{1}{2}\int(\sin 3x+\sin x)dx=-\frac{1}{6}\cos 3x-\frac{1}{2}\cos x+C_2$
$\frac{-2}{3}\cos^3 x=-\frac{1}{6}\cos 3x-\frac{1}{2}\cos x+K$
Put $x=0$, we have $K=0$.
Therefore, we reach the identity $\cos 3x=4\cos^3 x-3\cos x$ from integration!]
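SymPy confirms both the triple-angle identity and that the two antiderivatives differ by the constant $K=0$ (a sketch assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.expand_trig(sp.cos(3*x)))   # 4*cos(x)**3 - 3*cos(x)

# Difference of the two antiderivatives simplifies to 0:
d = -sp.Rational(2, 3)*sp.cos(x)**3 - (-sp.cos(3*x)/6 - sp.cos(x)/2)
print(sp.simplify(d))                # 0
```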

The remainder of the proof is well-explained in the link.

Related:
Same derivation with some details
A similar derivation
Another similar derivation
Relation to central limit theorem

More to explore:
Derivation of Gaussian Distribution from Binomial
Deriving the normal density as a solution of a differential equation
Other advanced derivations
Distributions Derived from the Normal Distribution

Monday 2 February 2015

Logic, arguments, and writing

Basic concepts
A proposition is a sentence that is either true or false but is not both true and false. To each proposition, we assign a truth value, true or false, depending on whether the proposition is true or false. Note that not every sentence is a proposition. One example is "this statement is false".

Simple sentences have the form: subject-predicate, where the predicate describes the subject of the sentence. There are five standard ways to create new propositions from existing simple propositions: conjunction, disjunction, negation, conditional, biconditional, through the use of "and", "or", "not", "if-then", "if and only if" respectively. We call these items logical connectives.

Inclusive & Exclusive or
As a demonstration, we give the truth table for P or Q.

$\small \begin{array}{c|c|c} P&Q&P \vee Q \\ \hline T&T&T\\T&F&T\\F&T&T\\F&F&F\end{array}$

Note that in mathematics, "or" is used inclusively, contrary to everyday discourse, where "or" is often used exclusively. When we say coffee or tea in real life, we usually mean exactly one of them but not both. In logic, "A or B" covers exactly one of A and B, or both.

Implication
P => Q - If P then Q
P: hypothesis of the implication / antecedent
Q: conclusion of the implication / consequent

For the proposition P => Q,
Converse: Q => P
Contrapositive: ~Q => ~P
Inverse: ~P => ~Q

Negation
~P - not-P / It is not the case that P.

Logical equivalence
Suppose A and B are propositions formed from a collection of propositions P, Q, R, S, ... using logical connectives.
The propositions A and B are logically equivalent if A and B have the same truth values for all possible truth values of the propositions P, Q, R, S, ...
≅ - logically equivalent

Let A be a proposition formed from propositions P, Q, R, ... using the logical connectives.
A is called a tautology if A is true for every assignment of truth values to P, Q, R, ...
A is called a contradiction if A is false for every assignment of truth values to P, Q, R, ...



Relation to arguments and writing

Tautology
When a word repeats the meaning of another word in the same phrase, it is called tautology.

Some common superfluities:
absolute certainty, actual facts (and its cousin, true facts), a downward plunge, advance warning, arid desert, attach together, circle round, burn down, connect/collaborate/couple/gather/join/link/meet/merge/unite together, each and every one, early beginnings, eat up, final completion, final upshot, forward planning, free gift, future prospects, grateful thanks, general consensus, have got (simply have is fine), hurry up, inside of, important essentials, in between, lend out, lonely isolation, more preferable, mutual cooperation, new beginner, new creation, new innovation, original source, other alternative, outside of, over with, proceed onward, really excellent, reduce down, renew again, seldom ever, set a new world record, still continue, this day and age, totally finished, tiny little child, unexpected surprise, usual habit, whether or not, widow woman...

We should avoid these tautologies in our writing.

Double Negatives
~(~P) ≅ P
Example:
The bomb attack was not unexpected.
The bomb attack was expected.
Although they can be useful, they are also often confusing. If you want to get your message across, it's better to use the latter version.

Fallacy of affirming the consequent
Since $[(P \Rightarrow Q) \wedge Q] \Rightarrow P$ is not always true (it fails when $P$ is false and $Q$ is true), $P$ is not a valid consequence of $P \Rightarrow Q$ and $Q$.
Example:
"If I went to the grocery store, then I bought milk."
"I bought milk. Therefore, I went to the grocery store." is NOT VALID.
I could have bought the milk at a gas station, from a street vendor, or from an illicit milk dealer.
One might be tempted to say that this reasoning is valid, because we might associate buying milk with going to a grocery store. The form of the argument, however, does not lead to that conclusion.
Moral of the story: The converse of a statement may not be true.

Fallacy of denying the antecedent
$\neg Q$ is not a valid consequence of $P \Rightarrow Q$ and $\neg P$. When $P$ is false and $Q$ is true, $[(P \Rightarrow Q) \wedge \neg P] \Rightarrow \neg Q$ is false and thus is not a tautology.
Example:
"If I went to the grocery store, then I bought milk."
"I didn't go to the grocery store. Therefore, I didn't buy milk." is WRONG.
I could have bought the milk elsewhere.
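Both fallacies can be exposed by brute-force truth tables (a sketch in plain Python):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for p, q in product([True, False], repeat=2):
    affirm = implies(implies(p, q) and q, p)          # [(P=>Q) & Q] => P
    deny = implies(implies(p, q) and (not p), not q)  # [(P=>Q) & ~P] => ~Q
    print(p, q, affirm, deny)   # both are False when P=False, Q=True
```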

Question
It's only a matter of time for people to accept...
Such a statement is an unfalsifiable claim or tautology [?]