A cyclic sum is a summation that cycles through all rotations of a function's arguments and takes the sum of the resulting values.
Rigorous definition
Consider a function $f(a_1,a_2,a_3,\cdots,a_n)$. The cyclic sum $\sum\limits_{\text{cyc}}f(a_1,a_2,a_3,\cdots,a_n)$ is equal to $$f(a_1,a_2,a_3,\cdots,a_n)+f(a_2,a_3,a_4,\cdots,a_n,a_1)+\cdots+f(a_n,a_1,a_2,\cdots,a_{n-2},a_{n-1}).$$ The notation $\sum\limits_{\text{cyc}}$ implies that all variables are cycled through. Another notation is $\sum\limits_{a,b,c}$, which implies that the cyclic sum cycles only through the variables listed underneath the sigma. [Note: Do not confuse this notation with the symmetric sum.]
Examples
Consider the permutation $p=(a\;b\;c)$. The cyclic sum $\sum\limits_p a$ is the sum that cycles through the permutation: $$\sum_p a=a+b+c.$$
They often come up in inequalities: by AM-GM, $$\sum_{a,b,c}\frac{a^3}{3}=\frac{a^3}{3}+\frac{b^3}{3}+\frac{c^3}{3}\geq 3\sqrt[3]{\frac{(abc)^3}{3^3}}=abc.$$
They are extremely helpful in inequalities involving many variables: instead of writing out all the terms of the sum explicitly, we can employ this notation.
Cyclic numbers
$142857$, the six repeating digits of $\frac{1}{7}$, $0.\overline{142857}$, is the best-known cyclic number in base $10$.
$$1\times 142,857=142,857\\ 2\times 142,857=285,714\\ 3\times 142,857=428,571\\ 4\times 142,857=571,428\\ 5\times 142,857=714,285\\ 6\times 142,857=857,142\\ 7\times 142,857=999,999$$
When $142857$ is multiplied by $2, 3, 4, 5$, or $6$, the product is a cyclic permutation of the original digits, and corresponds to the repeating digits of $\dfrac{2}{7},\dfrac{3}{7},\dfrac{4}{7},\dfrac{5}{7},\dfrac{6}{7}$, respectively.
$$1\div 7=0.\overline{142,857}\\ 2\div 7=0.\overline{285,714}\\ 3\div 7=0.\overline{428,571}\\ 4\div 7=0.\overline{571,428}\\ 5\div 7=0.\overline{714,285}\\ 6\div 7=0.\overline{857,142}\\ 7\div 7=0.\overline{999,999}=1\\ 8\div 7=1.\overline{142,857}\\ 9\div 7=1.\overline{285,714}$$
One last interesting thing about this fraction: $$\begin{align}\frac{1}{7}&=0.142857142857...\\&=0.14+0.0028+0.000056+0.00000112+0.0000000224+0.000000000448+\cdots\\&=\frac{14}{100}+\frac{28}{100^2}+\frac{56}{100^3}+\frac{112}{100^4}+\frac{224}{100^5}+\cdots+\frac{7\cdot 2^n}{100^n}+\cdots\\&=\frac{7}{50}+\frac{7}{50^2}+\frac{7}{50^3}+\frac{7}{50^4}+\frac{7}{50^5}+\cdots+\frac{7}{50^n}+\cdots\\&=\sum_{k=1}^\infty \frac{7}{50^k}.\end{align}$$Each term is double the prior term shifted two places to the right.
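Both facts are easy to check exactly; here is a quick sanity check in Python using only the standard library:

```python
from fractions import Fraction

# The geometric series sum_{k>=1} 7/50^k has closed form (7/50)/(1 - 1/50) = 7/49.
closed_form = Fraction(7, 50) / (1 - Fraction(1, 50))
assert closed_form == Fraction(1, 7)

# Multiples 1..6 of 142857 are cyclic permutations of its digits.
s = "142857"
rotations = {s[i:] + s[:i] for i in range(len(s))}
assert all(str(142857 * m) in rotations for m in range(1, 7))
assert 142857 * 7 == 999999
```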
More to explore
The Alluring Lore of Cyclic Numbers by Michael W. Ecker
Monday, 7 September 2015
Sunday, 6 September 2015
Find $\max xy^3z^7$ where $x+y+z=1$
As a brief introduction, 'problem of the day' posts will be about mathematical thinking (problem-solving and proofs). The questions and solutions are extracted from online sources. I will add alternative solutions and mention something interesting about the problems if I can.
Today's problem:
Find the maximum of $xy^3z^7$ for non-negative real numbers $x,y,z$ satisfying $x+y+z=1$.
We make use of the AM-GM inequality: $$1=x+y+z=x+\underbrace{\frac{1}{3}y+\frac{1}{3}y+\frac{1}{3}y}_{3\text{ terms}}+\underbrace{\frac{1}{7}z+\cdots+\frac{1}{7}z}_{7\text{ terms}}\geq 11 \sqrt[11]{x\Big(\frac{1}{3}y\Big)^3\Big(\frac{1}{7}z\Big)^7},$$ so $x\Big(\dfrac{1}{3}y\Big)^3\Big(\dfrac{1}{7}z\Big)^7\leq\dfrac{1}{11^{11}}$, i.e. $xy^3z^7\leq\dfrac{3^3 7^7}{11^{11}}$. Equality holds when all eleven terms are equal, namely $x=\dfrac{1}{3}y=\dfrac{1}{7}z=\dfrac{1}{11}$. The maximum is therefore $xy^3z^7=\dfrac{3^3 7^7}{11^{11}}$.
Alternatively, we can use Lagrange multipliers, since this is an optimisation problem with a constraint. Let $F(x,y,z,\lambda)=xy^3z^7-\lambda(x+y+z-1)$. Then, we have a system of equations $$\begin{cases}F_x=y^3z^7-\lambda=0\\F_y=3xy^2z^7-\lambda=0\\F_z=7xy^3z^6-\lambda=0\\ x+y+z=1.\end{cases}$$ From the first three equations, we know $y=3x,z=7x$. Therefore, substituting them into the last equation, we have $x+3x+7x=1$ and thus $x=\dfrac{1}{11},y=\dfrac{3}{11},z=\dfrac{7}{11}$.
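As a sanity check on both arguments, the sketch below confirms the claimed value exactly and compares it against random feasible points (the sampling scheme is my own addition, not part of either solution):

```python
from fractions import Fraction
import random

def g(x, y, z):
    return x * y**3 * z**7

claimed_max = Fraction(3**3 * 7**7, 11**11)
assert g(Fraction(1, 11), Fraction(3, 11), Fraction(7, 11)) == claimed_max

# No random point of the simplex x + y + z = 1 beats the claimed maximum.
random.seed(0)
for _ in range(10_000):
    a, b = sorted(random.random() for _ in range(2))
    assert g(a, b - a, 1 - b) <= claimed_max
```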
Sunday, 9 August 2015
Orthogonal functions
Legendre polynomials (important in spherical harmonics)
There are three general ways of forming these polynomials.
1. Schmidt orthogonalisation of $\{x^n\}$ on $[-1,1]$
Let $p_1(x)=\dfrac{1}{\lVert 1\rVert}$. Then $\lVert 1\rVert^2=\int_{-1}^1 1dx=2$ and thus $\boxed{p_1(x)=\dfrac{1}{\sqrt{2}}}$. Notice that $\int_{-1}^1 f(x)dx=0$ for any odd function $f(x)$. So $x$ is orthogonal to $1$. We then find $p_2(x)=\dfrac{x}{\lVert x \rVert}$. Since $\lVert x \rVert^2=\int_{-1}^1 x^2dx=\dfrac{2}{3}$, we have $\boxed{p_2(x)=\sqrt{\dfrac{3}{2}}x}$. Although $x^2$ is orthogonal to $x$ (and hence $p_2$), it is not orthogonal to $1$. Let $q_3(x)=x^2-\langle x^2,p_1 \rangle p_1$, and then normalise to get $p_3=\dfrac{q_3}{\lVert q_3 \rVert}$. Now $\langle x^2,p_1 \rangle=\dfrac{1}{\sqrt{2}}\int_{-1}^1 x^2 dx=\dfrac{\sqrt{2}}{3}$, so $q_3(x)=x^2-\dfrac{\sqrt{2}}{3}\dfrac{1}{\sqrt{2}}=x^2-\dfrac{1}{3}$. $\lVert q_3 \rVert^2=\int_{-1}^1 (x^2-\dfrac{1}{3})^2dx=\dfrac{8}{45}$. Thus $\boxed{p_3(x)=\sqrt{\dfrac{45}{8}}(x^2-\dfrac{1}{3})=\sqrt{\dfrac{5}{2}}(\dfrac{3}{2}x^2-\dfrac{1}{2})}$.
The whole orthonormal set of functions is $\{\sqrt{\dfrac{2n+1}{2}}P_n(x)\}$, where $P_n(x)$ signifies the Legendre polynomial of rank $n$.
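Because the integrands here are polynomials, the inner products can be checked exactly. The sketch below represents a polynomial by its coefficient list (a convention chosen for this check, not from the text) and reproduces $\lVert q_3\rVert^2=\dfrac{8}{45}$:

```python
from fractions import Fraction

def inner(p, q):
    """Exact <p, q> = integral over [-1, 1] of p(x) q(x) dx,
    for polynomials given as coefficient lists (low degree first)."""
    total = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:  # odd powers integrate to zero on [-1, 1]
                total += Fraction(a) * Fraction(b) * Fraction(2, i + j + 1)
    return total

one, x2 = [1], [0, 0, 1]
c = inner(x2, one) / inner(one, one)   # projection coefficient of x^2 onto 1
assert c == Fraction(1, 3)

q3 = [-c, Fraction(0), Fraction(1)]    # q3(x) = x^2 - 1/3
assert inner(q3, q3) == Fraction(8, 45)
```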
2. Solution of Legendre's differential equation
$$\dfrac{d}{dx}[(1-x^2)\dfrac{df}{dx}]+l(l+1)f=0,$$ where $l$ is a positive integer or zero.
One can verify that the polynomials that are generated by orthogonalisation of $\{x^n\}$ on $[-1,1]$ are solutions of this differential equation. Now we attempt a power series solution $\sum_n a_nx^n$ for $f$. Then $$\dfrac{d}{dx}[(1-x^2)\sum_n na_nx^{n-1}]+[l(l+1)\sum_n a_nx^n]=0\\ \dfrac{d}{dx}[\sum_n na_nx^{n-1}-\sum_n na_nx^{n+1}]+\sum_n l(l+1)a_nx^n=0\\ \sum_n n(n-1)a_nx^{n-2}-\sum_n n(n+1)a_nx^n+\sum_n l(l+1)a_nx^n=0.$$ Collecting all similar powers of $x$, we obtain $$\sum_n x^n\underbrace{[(n+2)(n+1)a_{n+2}-n(n+1)a_n+l(l+1)a_n]}_{(*)}=0.$$ Since the functions $\{x^n\}$ are linearly independent, the bracketed coefficient $(*)$ must vanish for every $n$. This gives the recurrence relation for the expansion coefficients: $$a_{n+2}=\dfrac{[n(n+1)-l(l+1)]a_n}{(n+2)(n+1)}.$$ This equation allows us to find every other coefficient when we are given the first. For example, if $a_0=1$, then $a_2=\dfrac{-l(l+1)}{2},a_4=\dfrac{6-l(l+1)}{12}\cdot \dfrac{-l(l+1)}{2}$, and so on.
If $l$ is even, eventually $n$ will equal $l$ for some term, then $n(n+1)$ will equal $l(l+1)$ and the following coefficients will all equal zero. So the even terms of the power series cut off at $n=l$, whereas the infinite series of odd powers diverges at $x=\pm 1$. The finite series of even powers gives $$\begin{align}l=0\quad &a_0=1\quad &f_0=1\\ l=2\quad &a_0=1\\ &a_2=-3\quad &f_2=-3x^2+1,\end{align}$$ which are proportional to the Legendre polynomials derived by orthogonalisation. We can choose $a_0$ such that $f_l(1)=1$. Then $$\begin{align}l=0\quad &a_0=1\quad &P_0=1\\l=2\quad &a_0=\dfrac{-1}{2}\\&a_2=\dfrac{3}{2}\quad &P_2=\dfrac{3}{2}x^2-\dfrac{1}{2}.\end{align}$$
If $l$ is odd, the series in odd powers terminates at $n=l$, and the series in even powers is an infinite series, divergent at $x=\pm 1$. Again, we choose $a_1$ such that $f_l(1)=1$. Then $$\begin{align}l=1\quad &a_1=1\quad &P_1=x\\ l=3\quad &a_1=\dfrac{-3}{2}\\ &a_3=\dfrac{5}{2}\quad &P_3=\dfrac{5}{2}x^3-\dfrac{3}{2}x.\end{align}$$
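The recurrence together with the normalisation $P_l(1)=1$ determines every $P_l$. A short sketch in exact arithmetic reproduces $P_2$ and $P_3$ above:

```python
from fractions import Fraction

def legendre(l):
    """Coefficients of P_l (low degree first) from the recurrence
    a_{n+2} = [n(n+1) - l(l+1)] a_n / ((n+2)(n+1)), normalised so P_l(1) = 1."""
    a = [Fraction(0)] * (l + 1)
    a[l % 2] = Fraction(1)              # seed a_0 (even l) or a_1 (odd l)
    for n in range(l % 2, l - 1, 2):
        a[n + 2] = a[n] * (n * (n + 1) - l * (l + 1)) / ((n + 2) * (n + 1))
    value_at_1 = sum(a)                 # f_l(1) before normalisation
    return [coef / value_at_1 for coef in a]

assert legendre(2) == [Fraction(-1, 2), 0, Fraction(3, 2)]       # (3x^2 - 1)/2
assert legendre(3) == [0, Fraction(-3, 2), 0, Fraction(5, 2)]    # (5x^3 - 3x)/2
```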
3. Generating function
A generating function for a set of functions is a function of two variables such that, when it is expanded as a power series in one of the variables, the coefficients are the set of functions in the other variable. Often many useful relations may be derived from a generating function; nonetheless, there is no general method for constructing one.
The generating function for Legendre polynomials is $$F(x,t)=(1-2xt+t^2)^{-1/2}=\sum_n P_n(x)t^n.$$ It can be expanded as a binomial series $$F(x,t)=1+\dfrac{1\cdot 1}{2\cdot 1!}(2xt-t^2)+\dfrac{1\cdot 3\cdot 1}{2\cdot 2\cdot 2!}(2xt-t^2)^2+\frac{1\cdot 3\cdot 5\cdot 1}{2\cdot 2\cdot 2\cdot 3!}(2xt-t^2)^3+\cdots$$ and then rearranged in powers of $t$: $$\begin{align}F(x,t)&=1+(x)(t)+(\dfrac{-1}{2}+\dfrac{3}{2}x^2)(t^2)+(\dfrac{-3}{2}x+\dfrac{5}{2}x^3)(t^3)+\cdots\\&=\sum_n P_n(x)t^n.\end{align}$$
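The regrouping can be verified numerically at a test point (the choice $x=0.7$ and the truncation order below are arbitrary): expanding the binomial series term by term and collecting powers of $t$ reproduces the Legendre polynomials listed above.

```python
import math

x, N = 0.7, 6        # arbitrary test point; collect powers of t up to t^N

# Binomial series (1 - u)^{-1/2} = sum_k C(2k, k)/4^k * u^k, with u = 2xt - t^2,
# regrouped as a polynomial in t.
coeffs = [0.0] * (N + 1)
for k in range(N + 1):
    c_k = math.comb(2 * k, k) / 4**k
    for j in range(k + 1):           # (2xt - t^2)^k via the binomial theorem
        if k + j <= N:
            coeffs[k + j] += c_k * math.comb(k, j) * (2 * x)**(k - j) * (-1)**j

# The t-coefficients match P_0, ..., P_3 computed earlier.
expected = [1.0, x, 1.5 * x**2 - 0.5, 2.5 * x**3 - 1.5 * x]
for n, p in enumerate(expected):
    assert abs(coeffs[n] - p) < 1e-12
```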
Hermite polynomials (important in Quantum Mechanics)
They are solutions of the differential equation
$$y''-2xy'+2py=0\quad y(0)=1,\quad y'(0)=1.$$
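For instance, $y=4x^2-2$ (the standard Hermite polynomial $H_2$, quoted here rather than derived) satisfies the equation with $p=2$, as a pointwise check confirms:

```python
# y = 4x^2 - 2 is the standard Hermite polynomial H_2 (quoted, not derived here);
# it satisfies y'' - 2x y' + 2p y = 0 with p = 2.
for x in [-2, -1, 0, 1, 3]:
    y, yp, ypp = 4 * x**2 - 2, 8 * x, 8
    assert ypp - 2 * x * yp + 2 * 2 * y == 0
```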
Chebyshev polynomials (important in numerical analysis)
They are solutions of the differential equation
$$(1-x^2)y''-xy'+n^2y=0\quad y(0)=1,\quad y'(0)=0.$$
They can also be represented as $$T_n(x)=\cos (n\cos^{-1} x).$$
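This trigonometric form is easy to test against the familiar closed forms $T_2=2x^2-1$ and $T_3=4x^3-3x$ (standard identities, quoted rather than derived in the text):

```python
import math

def T(n, x):
    """T_n(x) = cos(n arccos x) on [-1, 1]."""
    return math.cos(n * math.acos(x))

# Compare with the closed forms T_2 = 2x^2 - 1 and T_3 = 4x^3 - 3x.
for x in [-0.9, -0.3, 0.0, 0.4, 0.8]:
    assert abs(T(2, x) - (2 * x**2 - 1)) < 1e-12
    assert abs(T(3, x) - (4 * x**3 - 3 * x)) < 1e-12
```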
Applications
$\min \int_{-1}^1 (x^3-a-bx-cx^2)^2dx$
Geometrically, the minimum, call it $Q$, is the square of the distance from $x^3$ to the subspace $S$ spanned by the functions $1,x,x^2$. We can compute the orthogonal projection $\phi(x)$ of $x^3$ onto $S$ and use Pythagoras' theorem: $Q=\lVert x^3 \rVert^2-\lVert \phi(x)\rVert^2$. We first need to transform the basis $\{1,x,x^2\}$ into an orthonormal one, namely $\{p_1,p_2,p_3\}$ as above. Then $\phi(x)=\langle x^3,p_1\rangle p_1+\langle x^3,p_2 \rangle p_2+\langle x^3,p_3 \rangle p_3$, and $x^3$ is orthogonal to $p_1$ and $p_3$ by oddness. Since $\langle x^3,p_2 \rangle=\sqrt{\dfrac{3}{2}}\int_{-1}^1 x^3\cdot x\, dx=\sqrt{\dfrac{3}{2}}\cdot\dfrac{2}{5}$, we have $\lVert\phi(x)\rVert^2=\langle x^3,p_2 \rangle^2\lVert p_2 \rVert^2=\dfrac{6}{25}$. With $\lVert x^3 \rVert^2=\int_{-1}^1 x^6 dx=\dfrac{2}{7}$, we have $Q=\dfrac{2}{7}-\dfrac{6}{25}=\dfrac{8}{175}$.
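The optimal quadratic works out to $\phi(x)=\langle x^3,p_2\rangle p_2=\dfrac{3}{5}x$ (so $a=c=0$, $b=\dfrac{3}{5}$), and the minimum can be confirmed by exact integration:

```python
from fractions import Fraction

def integrate(c):
    """Exact integral over [-1, 1] of a polynomial given by coefficients c
    (low degree first); odd powers contribute nothing."""
    return sum(Fraction(a) * Fraction(2, n + 1)
               for n, a in enumerate(c) if n % 2 == 0)

# (x^3 - (3/5)x)^2 = x^6 - (6/5)x^4 + (9/25)x^2
square = [0, 0, Fraction(9, 25), 0, Fraction(-6, 5), 0, 1]
assert integrate(square) == Fraction(8, 175)
```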
physical applications [later]
The Legendre polynomials are important in some quantum-chemical problems. They are the basis for the wave functions for angular momentum, and thus occur in problems involving spherical motion, such as that of the electron in a hydrogen atom or the rotations of a molecule. They are also important in describing the angular dependence of one-electron atomic orbitals; this dependence, in turn, forms the basis of the geometry of chemical compounds.
Reference:
Mathematics for Quantum Chemistry by Jay Martin Anderson
Saturday, 8 August 2015
Mathematics for quantum chemistry
Introduction
Quantum mechanics grew out of two different points of view, which represent two analogous mathematical formulations of the eigenvalue problem. One is the wave mechanics of Schrödinger. In wave mechanics, operators are differential expressions, such as $\dfrac{d}{dx}$; the eigenvalue equation takes the form of a differential equation and relies on calculus for its solution. The other formulation is the matrix mechanics of Heisenberg, in which operators are represented by algebraic entities called matrices; instead of acting on a function, the matrix operator operates on a vector $\zeta$ to transform $\zeta$ into a vector parallel to $\zeta$ but $q$ times as long: $$Q\zeta=q\zeta.$$ This is the matrix-mechanical formulation of the eigenvalue problem, and its solution relies on algebra.
These two approaches to quantum mechanical problems are deeply interrelated; the work of Dirac shows the underlying equivalence of the two points of view, as well as of the corresponding mathematical techniques. [...]
Orthogonal functions
Definition: The expansion interval is the range of the independent variables assumed by the functions under consideration.
Definition: The inner product of the two complex-valued functions $f$ and $g$ of a continuous variable on their expansion interval $[a,b]$ is $$\langle f|g \rangle=\int_a^b f(x)^*g(x)dx.$$
The order is not important for real-valued functions but important for complex-valued functions: $$\begin{align}\langle g|f \rangle&=\int g(x)^*f(x)dx\\&=\big(\int f(x)^*g(x)dx\big)^*\\ &=\langle f|g \rangle^*.\end{align}$$
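A discretised check of this conjugate symmetry (trapezoid-rule quadrature on a pair of complex exponentials; all specific choices here are illustrative):

```python
import cmath
import math

def inner(f, g, a=-math.pi, b=math.pi, n=2001):
    """<f|g> = integral over [a, b] of f*(x) g(x) dx, trapezoid rule."""
    h = (b - a) / (n - 1)
    ys = [f(a + i * h).conjugate() * g(a + i * h) for i in range(n)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

f = lambda x: cmath.exp(1j * x)
g = lambda x: cmath.exp(2j * x)
assert abs(inner(g, f) - inner(f, g).conjugate()) < 1e-12   # <g|f> = <f|g>*
```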
Definition: Two functions $f(x)$ and $g(x)$ are said to be orthogonal on the interval $[a,b]$ if their inner product on $[a,b]$ is zero: $$\langle f|g \rangle=\int_a^b f^*g=0=\int_a^b g^*f=\langle g|f \rangle.$$
Definition: The norm of a function on the interval $[a,b]$ is the inner product of the function with itself, and may be symbolised by $N$: $$N(f)=\langle f|f \rangle=\int_a^b f^*f.$$
The norm of a function is a real, positive quantity: $$f^*f=(\text{Re}\:f-i\text{Im}\:f)(\text{Re}\:f+i\text{Im}\:f)=(\text{Re}\:f)^2+(\text{Im}\:f)^2.$$
Definition: A function is said to be normalised if its norm is one: $\langle f|f \rangle=1$.
Suppose $f$ has a norm $N$. Then the function $\dfrac{f}{\sqrt{N}}$ will have a norm of one, since $$\langle \dfrac{f}{\sqrt{N}}|\dfrac{f}{\sqrt{N}} \rangle=\dfrac{1}{N}\langle f|f \rangle=\dfrac{N}{N}=1.$$
Reminder: The expansion interval must be specified before a statement about orthogonality or normality can be made. On $[-1,1]$, $\langle x|x^2 \rangle=\int_{-1}^1 x^*x^2 dx=\int_{-1}^1 x^3 dx=\dfrac{x^4}{4}|_{-1}^1=0$, so we can say that on $[-1,1]$, $x$ and $x^2$ are orthogonal functions. However, on $[0,1]$, $\langle x|x^2 \rangle=\int_0^1 x^3 dx=\dfrac{x^4}{4}|_0^1=\dfrac{1}{4}\neq 0$, so the functions are not orthogonal in this interval.
Definition: A set of functions $\{F_i\}$ is complete if any other function $f$ may be expressed as a linear combination of members of the set $\{F_i\}$ on a prescribed expansion interval. [Namely, if the set $\{F_i\}$ is complete, then we may expand $f$ in terms of the functions $F_i$: $f(x)=\sum_{n=1}^\infty a_n F_n(x)$.]
Definition: An orthogonal set of functions is a set of functions each of which is orthogonal on a prescribed interval to all other members of the set. [That is, the set $\{F_i\}$ is an orthogonal set if every member is orthogonal to every other member: $\langle F_i|F_j \rangle=0$ for all $i,j$ such that $i\neq j$.]
Example: The set $\{\sin nx, \cos nx\}$, on $[-\pi,\pi]$, for $n$ zero or positive integers, is an orthogonal set of functions.
Definition: An orthonormal set of functions is an orthogonal set of functions, each of which is normalised. [That is, the set $\{F_i\}$ is orthonormal if $\langle F_i|F_j \rangle=0$ for all $i\neq j$ and $\langle F_i|F_i \rangle=1$ for all $i$.]
These two equations occur so often in discussing orthonormal functions that a special symbol has been introduced to combine them. The Kronecker delta symbol $\delta_{ij}$ has the meaning $\delta_{ij}=0$ for $i\neq j$, $\delta_{ij}=1$ for $i=j$.
The Fourier series is an expansion of a function in the orthonormal functions which are proportional to $\{\sin mx,\cos mx\}$. Let us derive their norm on the interval $[-\pi,\pi]$. $$N(\sin mx)=\langle \sin mx|\sin mx \rangle=\int_{-\pi}^\pi \sin^2 mx dx=\dfrac{1}{m}\int_{-m\pi}^{m\pi}\sin^2 y dy=\pi.$$ Similarly, $$N(\cos mx)=\langle \cos mx|\cos mx \rangle=\dfrac{1}{m}\int_{-m\pi}^{m\pi}\cos^2 y dy=\pi.$$ Also, $$N(\sin 0x)=\langle \sin 0x|\sin 0x \rangle=\int_{-\pi}^\pi 0 dx=0$$ and $$N(\cos 0x)=\langle \cos 0x|\cos 0x \rangle=\int_{-\pi}^{\pi}1dx=2\pi.$$ Summarising the results, $$N(\sin mx)=\begin{cases} \pi\quad m\neq 0\\ 0\quad m=0 \end{cases}\\ N(\cos mx)=\begin{cases} \pi\quad m\neq 0\\ 2\pi \quad m=0 \end{cases}$$ or using the Kronecker delta, we have $$N(\sin mx)=\pi-\pi\delta_{m0}\\N(\cos mx)=\pi+\pi\delta_{m0}.$$ To calculate the expansion coefficients for an expansion in orthonormal functions, we use the set of functions $\{\dfrac{1}{\sqrt{2\pi}},\dfrac{1}{\sqrt{\pi}}\sin mx,\dfrac{1}{\sqrt{\pi}}\cos mx\}$ for $m=1,2,\cdots$, for expansion on $[-\pi,\pi]$. For these functions, the expansion coefficients will be $$a_0=\langle \dfrac{1}{\sqrt{2\pi}}|f\rangle\\a_m=\langle \dfrac{1}{\sqrt{\pi}}\cos mx|f\rangle\\b_m=\langle \dfrac{1}{\sqrt{\pi}}\sin mx|f\rangle,$$ where the expansion is $$f(x)=a_0\dfrac{1}{\sqrt{2\pi}}+\sum_{m=1}^\infty a_m(\dfrac{1}{\sqrt{\pi}}\cos mx)+\sum_{m=1}^\infty b_m(\dfrac{1}{\sqrt{\pi}}\sin mx).$$ Usually the constants are removed from the terms by explicitly writing out the expansion coefficients $$f(x)=\dfrac{1}{2\pi}\langle 1|f\rangle+\dfrac{1}{\pi}\sum_{m=1}^\infty [\langle \cos mx|f\rangle \cos mx+\langle \sin mx|f\rangle \sin mx].$$ This gives the final result $$f(x)=\dfrac{c_0}{2}+\sum_{m=1}^\infty c_m \cos mx+\sum_{m=1}^\infty d_m \sin mx\\c_m=\dfrac{1}{\pi}\langle \cos mx|f \rangle\\ d_m=\dfrac{1}{\pi}\langle \sin mx|f \rangle$$ on $[-\pi,\pi]$. This is the usual form of the Fourier series.
Note the lead term is divided by two because the norm of $\cos 0x$ is $2\pi$, whereas for all other values of $m$, the norm of $\cos mx$ is $\pi$.
Using the property of evenness and oddness of functions, we can make two simple extensions of the Fourier series. The Fourier expansion on $[-\pi,\pi]$ of an odd function is made up only of sine terms $$f(x)=\sum_{m=1}^\infty d_m \sin mx$$ since if $f$ is odd, all inner products $\langle \cos mx|f \rangle \equiv 0$. The Fourier expansion on $[-\pi,\pi]$ of an even function is made up only of cosine terms: $$f(x)=\dfrac{c_0}{2}+\sum_{m=1}^\infty c_m \cos mx$$ since if $f$ is even, all inner products $\langle \sin mx|f \rangle \equiv 0$.
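As a numerical illustration of these formulas (the test function $f(x)=x$ and the quadrature scheme are my choices, not from the book), the sine coefficients of the odd function $x$ come out to the standard values $d_m=\dfrac{2(-1)^{m+1}}{m}$, while every cosine coefficient vanishes:

```python
import math

def inner(f, g, n=20001):
    """<f|g> = integral over [-pi, pi] of f g dx (real case), trapezoid rule."""
    h = 2 * math.pi / (n - 1)
    ys = [f(-math.pi + i * h) * g(-math.pi + i * h) for i in range(n)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

f = lambda x: x   # an odd test function
for m in range(1, 5):
    d_m = inner(lambda x, m=m: math.sin(m * x), f) / math.pi
    assert abs(d_m - 2 * (-1)**(m + 1) / m) < 1e-4   # d_m = 2(-1)^{m+1}/m
    c_m = inner(lambda x, m=m: math.cos(m * x), f) / math.pi
    assert abs(c_m) < 1e-9                           # cosine terms vanish
```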
[...]
Reference:
Mathematics for Quantum Chemistry by Jay Martin Anderson
Wednesday, 5 August 2015
Orthogonality and the Gram-Schmidt process
Orthogonal bases
Let $\{v_1,\cdots,v_n\}$ be an orthogonal basis of the Euclidean space $V$. We want to find the coordinates of the vector $u$ in this basis, namely the numbers $a_1,\cdots,a_n$ such that $$u=a_1v_1+\cdots+a_nv_n.$$ We can take the inner product with $v_1$, so $$\langle u,v_1\rangle=a_1\langle v_1,v_1\rangle+\cdots+a_n\langle v_n,v_1\rangle.$$ Since $v_1,\cdots,v_n$ are pairwise orthogonal, we have $\langle v_i,v_j \rangle=0$ for $i\neq j$. Thus, $\langle u,v_1\rangle=a_1\langle v_1,v_1\rangle \Rightarrow a_1=\dfrac{\langle u,v_1\rangle}{\langle v_1,v_1\rangle}$. Similarly, taking inner products with $v_2,\cdots,v_n$, we obtain the other coefficients $$a_2=\dfrac{\langle u,v_2\rangle}{\langle v_2,v_2\rangle},\cdots,a_n=\dfrac{\langle u,v_n\rangle}{\langle v_n,v_n\rangle}.$$ These coefficients are called the Fourier coefficients of the vector $u$ with respect to the basis $\{v_1,\cdots,v_n\}$.
Theorem: Let $\{v_1,\cdots,v_n\}$ be an orthogonal basis of the Euclidean space $V$. Then for any vector $u$, $$u=\dfrac{\langle u,v_1\rangle}{\langle v_1,v_1\rangle}v_1+\cdots+\dfrac{\langle u,v_n\rangle}{\langle v_n,v_n\rangle}v_n.$$ This expression is called Fourier decomposition and can be obtained in any Euclidean space.
Projections
The projection of the vector $v$ along the vector $w$ is the vector $\text{proj}_w v=cw$ such that $u=v-cw$ is orthogonal to $w$. We require $\langle u,w \rangle=0$, which means $$\langle v-cw,w \rangle=0 \iff \langle v,w \rangle-c\langle w,w \rangle=0 \iff c=\dfrac{\langle v,w \rangle}{\langle w,w \rangle}.$$ Thus the orthogonal component $u$ is $$u=v-\text{proj}_w v=v-\dfrac{\langle v,w \rangle}{\langle w,w \rangle}w.$$ With this formula, we can find the distance between a point and a plane. Now, let's generalise our constructions.
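The one-dimensional projection can be sketched as follows (illustrative Python with the standard dot product on $\mathbb{R}^2$; the names `dot` and `proj` are ours):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def proj(v, w):
    """Projection of v along w: c*w with c = <v,w>/<w,w>."""
    c = dot(v, w) / dot(w, w)
    return [c * wk for wk in w]

v, w = (3, 4), (1, 2)
p = proj(v, w)
u = [vk - pk for vk, pk in zip(v, p)]  # orthogonal component of v
print(p, u, dot(u, w))  # <u, w> is zero up to floating-point rounding
```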
Theorem: Let $W$ be a subspace with orthogonal basis $\{w_1,\cdots,w_n\}$. The projection $\text{proj}_W v$ of any vector $v$ onto $W$ is $$\text{proj}_W v=\dfrac{\langle v,w_1 \rangle}{\langle w_1,w_1 \rangle}w_1+\cdots+\dfrac{\langle v,w_n \rangle}{\langle w_n,w_n \rangle}w_n.$$ [This follows from the Fourier decomposition above.] In particular, $u=v-\text{proj}_W v$ is orthogonal to the subspace $W$.
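The subspace version, sketched in Python for $W$ a plane in $\mathbb{R}^3$ spanned by two orthogonal vectors (the helper names are ours): the projection is just the sum of the one-dimensional projections along each basis vector.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def proj_subspace(v, ortho_basis):
    """Sum of one-dimensional projections along each orthogonal basis vector of W."""
    p = [0.0] * len(v)
    for w in ortho_basis:
        c = dot(v, w) / dot(w, w)
        for k in range(len(v)):
            p[k] += c * w[k]
    return p

# W = the xy-plane in R^3, with the orthogonal basis (1,1,0), (1,-1,0).
W = [(1, 1, 0), (1, -1, 0)]
v = (2, 5, 7)
p = proj_subspace(v, W)
u = [vk - pk for vk, pk in zip(v, p)]
print(p, u)  # the residual u is orthogonal to W
```

The length of the residual $u$ is precisely the distance from the point $v$ to the plane $W$ mentioned earlier.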
Proof: Take the inner product of $u$ with $w_i$, $$\langle u,w_i \rangle=\langle v,w_i \rangle-\bigg(\dfrac{\langle v,w_1 \rangle}{\langle w_1,w_1 \rangle}\langle w_1,w_i \rangle+\cdots+\dfrac{\langle v,w_n \rangle}{\langle w_n,w_n \rangle}\langle w_n,w_i \rangle\bigg).$$ Since $\langle w_i,w_j \rangle =0$ for $i\neq j$, we have $$\begin{align}\langle u,w_i \rangle&=\langle v,w_i \rangle-\dfrac{\langle v,w_i \rangle}{\langle w_i,w_i \rangle} \langle w_i,w_i \rangle\\&=\langle v,w_i \rangle-\langle v,w_i \rangle\\&=0.\end{align}$$ Thus $u$ is orthogonal to every $w_i$, and so it is orthogonal to $W$. $\Box$
Gram-Schmidt orthogonalisation process
Let $\{v_1,\cdots,v_n\}$ be a basis of a Euclidean space. We can construct an orthogonal basis $\{w_1,\cdots,w_n\}$ of this space as follows: $$\begin{align}w_1&=v_1\\w_2&=v_2-\dfrac{\langle v_2,w_1\rangle}{\langle w_1,w_1\rangle}w_1\\w_3&=v_3-\dfrac{\langle v_3,w_1\rangle}{\langle w_1,w_1\rangle}w_1-\dfrac{\langle v_3,w_2\rangle}{\langle w_2,w_2\rangle}w_2\\ \cdots\\ w_n&=v_n-\dfrac{\langle v_n,w_1\rangle}{\langle w_1,w_1 \rangle}w_1-\dfrac{\langle v_n,w_2\rangle}{\langle w_2,w_2\rangle}w_2-\cdots-\dfrac{\langle v_n,w_{n-1}\rangle}{\langle w_{n-1},w_{n-1}\rangle}w_{n-1}.\end{align}$$
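The recursion above translates directly into code. Here is an illustrative Python sketch (plain lists, standard dot product, names ours): each new vector has its projections along all earlier orthogonalised vectors subtracted off.

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    """Orthogonalise a basis: subtract from each v_i its projections along earlier w_j."""
    ws = []
    for v in vectors:
        w = list(v)
        for prev in ws:
            c = dot(v, prev) / dot(prev, prev)
            w = [wk - c * pk for wk, pk in zip(w, prev)]
        ws.append(w)
    return ws

basis = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]
ws = gram_schmidt(basis)
print(ws)
# every pair of output vectors is orthogonal (up to floating-point rounding)
print([dot(ws[i], ws[j]) for i in range(3) for j in range(i + 1, 3)])
```

Dividing each $w_i$ by its length would additionally give an orthonormal basis.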
[...]