Manipulation of Sums Using the Sigma Symbol

Manipulation of sums via the symbol \Sigma (sigma) is based on a small number of rules. The purpose of this article is to list them and give examples of their use, without any claim to originality.

To begin, let us ask what the point of the notation \Sigma is.

1 – Giving Up the Ellipsis Dots

Reading the formula:

    \[ 1+2+3+\cdots+10\]

everyone understands instantly what is meant: to evaluate this expression, we must add the natural numbers from 1 to 10. The use of ellipsis dots does not seem to be an obstacle to understanding.

The same goes for:

    \[ 1+4+9+16+25+\cdots+625\]

We can easily guess that it is the sum of the squares of the integers from 1 to 25.

But in the case of:

    \[ 1+0+3+4+9+12+19+24+\cdots+5001\]

we cannot see, even after some reflection, what the ellipsis dots are hiding.

Yet these numbers were not chosen at random. These are the first terms of the sequence defined by the formula:

    \[ u_{n}=\left\lfloor \frac{n^{2}+1}{2}\right\rfloor +\left(-1\right)^{n}\]

where \left\lfloor t\right\rfloor denotes the integer part (floor) of the real number t. Indeed:

    \begin{eqnarray*} u_{0}=\left\lfloor \frac{1}{2}\right\rfloor +\left(-1\right)^{0}=1\\ u_{1}=\left\lfloor 1\right\rfloor +\left(-1\right)^{1}=0\\ u_{2}=\left\lfloor \frac{5}{2}\right\rfloor +\left(-1\right)^{2}=3\\ u_{3}=\left\lfloor 5\right\rfloor +\left(-1\right)^{3}=4\\ u_{4}=\left\lfloor \frac{17}{2}\right\rfloor +\left(-1\right)^{4}=9\end{eqnarray*}

And so on\ldots{} One could therefore think that ellipsis dots can be used, provided there is no doubt as to the identity of the underlying sequence. But it is not so simple…

For example, if we set, for any integer n\geqslant1:

    \[ t_{n}=n+\left\lfloor \frac{n}{6}\right\rfloor \]

the first terms of the sequence \left(t_{n}\right)_{n\geqslant1} are:

    \begin{eqnarray*} t_{1}=1+\left\lfloor \frac{1}{6}\right\rfloor =1\\ t_{2}=2+\left\lfloor \frac{1}{3}\right\rfloor =2\\ t_{3}=3+\left\lfloor \frac{1}{2}\right\rfloor =3\\ t_{4}=4+\left\lfloor \frac{2}{3}\right\rfloor =4\\ t_{5}=5+\left\lfloor \frac{5}{6}\right\rfloor =5\end{eqnarray*}

But beware:

    \[ t_{6}=6+\left\lfloor 1\right\rfloor =7\qquad\text{(and not }6)\]
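
For readers who like to check such claims mechanically, here is a minimal Python sketch (the helper names u and t are ours, not part of the article) confirming the values above, including the surprise t_{6}=7:

```python
def u(n):
    # u_n = floor((n^2 + 1) / 2) + (-1)^n  (floor via // is fine since n^2 + 1 >= 0)
    return (n * n + 1) // 2 + (-1) ** n

def t(n):
    # t_n = n + floor(n / 6)
    return n + n // 6

print([u(n) for n in range(8)])      # [1, 0, 3, 4, 9, 12, 19, 24]
print(u(100))                        # 5001 -- the last term written in the sum above
print([t(n) for n in range(1, 7)])   # [1, 2, 3, 4, 5, 7] -- note that t_6 = 7, not 6
```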

So, when we write:

    \[ 1+2+3+4+5+\cdots+10\]

why could it not be, after all, the sum of the first nine terms of the sequence \left(t_{n}\right)_{n\geqslant1}?

This shows the need for a completely explicit notation, which eliminates any ambiguity. We therefore give up the ellipsis dots and adopt the notation \Sigma.

2 – The symbol \Sigma

Given a list \left(u_{k}\right)_{1\leqslant k\leqslant n} of real (or, more generally, complex) numbers, we write:

    \[ \sum_{k=1}^{n}u_{k}\]

to denote what we would have written so far as u_{1}+\cdots+u_{n}. This formula reads:

“Sum, for k ranging from 1 to n, of u index k”.

The symbol k is the summation index.

It is essential to understand that the sum does not depend on k at all. For this reason, this symbol is called a “dummy” index. Concretely, this means that it can be replaced by any other symbol… provided that symbol is not already used in the context of the calculation!

For example, given n\in\mathbb{N}^{\star} and \alpha,\theta\in\mathbb{R}, the sum:

    \[ e^{\alpha+i\theta}+e^{2\alpha+i\theta}+e^{3\alpha+i\theta}+\cdots+e^{n\alpha+i\theta}\]

can be written:

    \[ \sum_{k=1}^{n}e^{k\alpha+i\theta}\]

but certainly not:

    \[ \sum_{i=1}^{n}e^{i\alpha+i\theta}\]

since the symbol i would then be used to designate two different things!

Let’s go back to the general case. Instead of the notation {\displaystyle \sum_{k=1}^{n}u_{k}}, one of two variants can be used:

    \[ \sum_{1\leqslant k\leqslant n}u_{k}\qquad\text{and}\qquad\sum_{k\in\mathbb{N}_{n}}u_{k}\]

the symbol \mathbb{N}_{n} denoting the set of integers from 1 to n, which some write instead as \left[\!\vert1;n\vert\!\right].

The notation {\displaystyle \sum_{k\in\mathbb{N}_{n}}u_{k}} generalizes easily to {\displaystyle \sum_{i\in I}u_{i}}, where I is a finite non-empty set (and where, for all i\in I, u_{i} denotes a complex number).

Note that in the expression {\displaystyle \sum_{i\in I}u_{i}}, nothing indicates how the terms are added together. But it does not matter, since the addition of complex numbers is a commutative and associative operation. Commutativity makes it possible to modify the order of the terms without affecting the total, while associativity says that the different possible parenthesizations are equivalent.

A more precise way to express the equivalence of the different parenthesizations is as follows. If we partition I into p subsets I_{1},\cdots,I_{p} (which means that the I_{k} are non-empty, pairwise disjoint, and that their union is I), then (general associativity formula):

    \[ \sum_{i\in I}u_{i}=\sum_{1\leqslant k\leqslant p}\left(\sum_{i\in I_{k}}u_{i}\right)\]

We will see in Section 7 an important practical consequence of this formula: the inversion of double sums on rectangular or triangular summation domains.

Let us add that, by convention, a sum of complex numbers indexed by the empty set is zero. This convention has the merit of keeping the general associativity formula true, even if some of the subsets I_{k} are empty.
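
Here is a small Python sketch of this general associativity formula, on an arbitrary index set and family chosen purely for illustration; it also shows Python's built-in sum agreeing with the empty-sum convention:

```python
# The index set I, the family (u_i) and the partition are arbitrary choices for the demo.
I = {1, 2, 3, 4, 5, 6}
u = {i: i * i for i in I}                    # u_i = i^2

blocks = [{1, 4}, {2, 3, 6}, {5}, set()]     # a partition of I, plus one empty block
assert set().union(*blocks) == I

total = sum(u[i] for i in I)
grouped = sum(sum(u[i] for i in block) for block in blocks)
assert total == grouped                      # general associativity formula: 91 == 91

print(sum([]))                               # 0 -- an empty sum is zero, as per the convention
```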

Now let’s move on to the rules used in practice to manipulate sums.

3 – Separate / Merge

The order of the terms being of no importance for the calculation of a sum, we see that if x_{1},\cdots,x_{n} and y_{1},\cdots,y_{n} are any complex numbers, then:

    \[ \boxed{\sum_{i=1}^{n}\left(x_{i}+y_{i}\right)=\sum_{i=1}^{n}x_{i}+\sum_{i=1}^{n}y_{i}}\]

The parentheses in the left-hand side are recommended, indeed indispensable! For example:

    \[ \sum_{i=1}^{3}\left(i+1\right)=\left(1+1\right)+\left(2+1\right)+\left(3+1\right)=9\]

whereas, in the absence of parentheses:

    \[ \sum_{i=1}^{3}i+1\]

is interpreted as:

    \[ \left(\sum_{i=1}^{3}i\right)+1=7\]

But let us return to the boxed equality above. When we go from left to right, we say that we split the sum in two. When we go from right to left, we say that we merge the two sums into one.

For the merge, the two index sets must coincide. If they do not, we can sometimes reduce to this case by re-indexing one of the two sums: re-indexing has not been discussed yet, but we will get to it a little further on (see Section 5).
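
A quick Python sketch, on arbitrary sample values, of the boxed split/merge rule and of the parenthesization pitfall just discussed:

```python
# x_i and y_i are arbitrary sample values, chosen only for illustration.
x = [2, 7, 1, 8, 2]
y = [3, 1, 4, 1, 5]

merged = sum(xi + yi for xi, yi in zip(x, y))   # sum of (x_i + y_i)
split = sum(x) + sum(y)                         # the two separate sums
assert merged == split                          # 34 == 34

# The parenthesization example above:
assert sum(i + 1 for i in range(1, 4)) == 9     # sum_{i=1}^{3} (i + 1)
assert sum(range(1, 4)) + 1 == 7                # (sum_{i=1}^{3} i) + 1
```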

4 – Develop / Factorize

The well-known distributivity formula generalizes effortlessly (by a simple induction) to give the following: if x_{1},\cdots,x_{n} and \lambda are complex numbers, then:

    \[ \boxed{\sum_{i=1}^{n}\lambda x_{i}=\lambda\sum_{i=1}^{n}x_{i}}\]

When we read this equality from left to right, we say that we factor \lambda out of the sum. When we read it from right to left, we say that we expand, or distribute, \lambda over the sum.

And watch out for the beginner's mistake: to be allowed to factor out \lambda, the coefficient \lambda must of course be independent of the summation index.

If you know the properties of binomial coefficients, you probably know that for any pair of integers \left(n,k\right) satisfying 1\leqslant k\leqslant n:

    \[ k\binom{n}{k}=n\binom{n-1}{k-1}\]

This relationship is sometimes called the “pawn formula”. A classic exercise asks for the calculation of the sum:

    \[ S_{n}=\sum_{k=1}^{n}k\binom{n}{k}\]

Factoring k out of this sum would be a monstrous mistake! Besides, in this form, there is nothing to factor out. But by writing instead:

    \[ S_{n}=\sum_{k=1}^{n}n\binom{n-1}{k-1}\]

we can factor out n, which leads to:

    \[ S_{n}=n\sum_{k=1}^{n}\binom{n-1}{k-1}\]

Finally, the sum of the terms in the \left(n-1\right)-th row of Pascal's triangle is equal to 2^{n-1}, so:

    \[ \boxed{\sum_{k=1}^{n}k\binom{n}{k}=n\,2^{n-1}}\]
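
As a numerical sanity check (not a proof), a few lines of Python confirm this boxed identity for small values of n:

```python
from math import comb

# Check sum_{k=1}^{n} k * C(n, k) == n * 2^(n-1) for n = 1, ..., 12.
for n in range(1, 13):
    assert sum(k * comb(n, k) for k in range(1, n + 1)) == n * 2 ** (n - 1)
print("identity checked for n = 1, ..., 12")
```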

5 – Changing the Index

To change the index in (or re-index) a sum is simply to re-number its terms. For example, the sum S=x_{1}+x_{2}+x_{3}+x_{4} can be written:

    \[ S=\sum_{i=1}^{4}x_{i}\]

but also :

    \[ S=\sum_{k=2}^{5}x_{k-1}\]

or:

    \[ S=\sum_{j=0}^{3}x_{j+1}\]

To go from the first expression to the second, we set k=i+1, and to go from the first to the third, we set j=i-1.

These examples are very simple: we re-indexed the sum by shifting the old index by one unit. We are sometimes led to perform other types of re-indexing. For example, if we consider:

    \[ S=\sum_{i=0}^{n}x_{i}\]

and set j=n-i, we obtain:

    \[ S=\sum_{j=0}^{n}x_{n-j}\]

Index changes of the type j=i+c or j=-i+c (where the integer c is fixed) are quite common.

More generally, given two finite sets I and J, if f:I\rightarrow J is bijective and \left(x_{j}\right)_{j\in J} is a family of complex numbers indexed by J, then:

    \[ \sum_{i\in I}x_{f\left(i\right)}=\sum_{j\in J}x_{j}\]

We say that we pass from the left-hand side to the right-hand side by setting j=f\left(i\right).
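
Here is a minimal Python sketch of this re-indexing principle, with an arbitrary pair of finite sets and a randomly chosen bijection between them:

```python
import random

# I, J, the bijection f and the family (x_j) are arbitrary choices for the demo.
I = ["a", "b", "c", "d"]
J = [10, 20, 30, 40]
shuffled = J[:]
random.shuffle(shuffled)
f = dict(zip(I, shuffled))          # a bijection f : I -> J

x = {j: 3 * j + 1 for j in J}       # a family (x_j) indexed by J

# Re-indexing through f leaves the sum unchanged.
assert sum(x[f[i]] for i in I) == sum(x[j] for j in J)
```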

Let us see an example of this mechanism, by considering a finite group \left(G,\times\right) and a morphism \Phi from this group to the group \left(\mathbb{C}^{\star},\times\right) of non-zero complex numbers. Let us calculate the sum:

    \[ S=\sum_{g\in G}\Phi\left(g\right)\]

If \Phi is the constant morphism (i.e. \Phi\left(g\right)=1 for every g\in G), then S=\text{card}\left(G\right).

Otherwise, there exists h\in G such that \Phi\left(h\right)\neq1. The map G\rightarrow G,\thinspace\gamma\mapsto h\gamma being bijective (it is a translation in the group G), we can perform in the sum the change of index defined by g=h\gamma, which gives:

    \[ S=\sum_{\gamma\in G}\Phi\left(h\gamma\right)=\sum_{\gamma\in G}\Phi\left(h\right)\thinspace\Phi\left(\gamma\right)=\Phi\left(h\right)\thinspace S\]

and so:

    \[ \left(1-\Phi\left(h\right)\right)S=0\]

and finally S=0. In summary:

    \[ \boxed{\sum_{g\in G}\Phi\left(g\right)=\left\{ \begin{array}{cc} \text{card}(G) & \text{if }\forall g\in G,\,\Phi(g)=1\\ 0 & \text{otherwise}\end{array}\right.}\]
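
A numerical illustration of this boxed result, assuming for concreteness that G=\mathbb{Z}/n\mathbb{Z} (written additively) and that \Phi is the character g\mapsto e^{2i\pi mg/n}; the two cases then correspond to m\equiv0\pmod{n} and m\not\equiv0\pmod{n}:

```python
from cmath import exp, pi

# Assumption: G = Z/nZ written additively, and Phi(g) = exp(2*pi*i*m*g/n),
# a morphism from (Z/nZ, +) to (C*, x).  For m ≡ 0 (mod n), Phi is the
# constant morphism equal to 1; otherwise it is not.
def character_sum(n, m):
    return sum(exp(2j * pi * m * g / n) for g in range(n))

print(abs(character_sum(12, 0)))    # 12.0 -> card(G), constant morphism
print(abs(character_sum(12, 5)))    # ~0.0 -> zero otherwise (up to rounding errors)
```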

6 – Telescoping Sums

Given an integer n\geqslant1 and complex numbers t_{0},\cdots,t_{n}, the expression:

    \[ S=\sum_{k=0}^{n-1}\left(t_{k}-t_{k+1}\right)\]

simplifies to S=t_{0}-t_{n}.

This is understood by explicitly writing the first few terms and the last few (the calculation that follows assumes n\geqslant3):

    \[ S=\left(t_{0}-t_{1}\right)+\left(t_{1}-t_{2}\right)+\left(t_{2}-t_{3}\right)+\cdots+\left(t_{n-3}-t_{n-2}\right)+\left(t_{n-2}-t_{n-1}\right)+\left(t_{n-1}-t_{n}\right)\]

We can clearly see the terms cancelling in pairs, with the exception of t_{0} and -t_{n}, which are the only two “survivors”…

Such a summation is said to be “telescoping”. This name probably refers to what happens when a collapsible spyglass is folded up: only its ends remain visible!

The formula

    \[ \boxed{\sum_{k=0}^{n-1}\left(t_{k}-t_{k+1}\right)=t_{0}-t_{n}}\]

can be properly justified in two ways:

either by induction on n, or by splitting into two sums and then re-indexing one of them. Things become interesting when a summation does not appear, at first glance, to be telescoping…

For example, if we set, for any integer n\geqslant2:

    \[ \boxed{S_{n}=\sum_{k=2}^{n}\frac{1}{k\left(k-1\right)}}\]

We can cleverly observe that, for every k\geqslant2:

    \[ \frac{1}{k\left(k-1\right)}=\frac{1}{k-1}-\frac{1}{k}\]

It is clear then that:

    \[ \boxed{S_{n}=1-\frac{1}{n}}\]

As another example, consider, for every n\geqslant1:

    \[ \boxed{T_{n}=\sum_{k=1}^{n}\frac{k}{\left(k+1\right)!}}\]

Noticing that, for every k\geqslant1:

    \[ \frac{k}{\left(k+1\right)!}=\frac{\left(k+1\right)-1}{\left(k+1\right)!}=\frac{1}{k!}-\frac{1}{\left(k+1\right)!}\]

we see that :

    \[ \boxed{T_{n}=1-\frac{1}{\left(n+1\right)!}}\]
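
Both telescoping results, S_{n}=1-\frac{1}{n} and T_{n}=1-\frac{1}{\left(n+1\right)!}, can be checked exactly with rational arithmetic; here is a short Python sketch for one arbitrary value of n:

```python
from fractions import Fraction
from math import factorial

# Exact rational checks of the two telescoping results above, for n = 20.
n = 20

S = sum(Fraction(1, k * (k - 1)) for k in range(2, n + 1))
assert S == 1 - Fraction(1, n)

T = sum(Fraction(k, factorial(k + 1)) for k in range(1, n + 1))
assert T == 1 - Fraction(1, factorial(n + 1))

print(S, T)
```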

As a last example, let us add up the first terms of the Fibonacci sequence. Recall that the Fibonacci sequence \left(F_{n}\right)_{n\in\mathbb{N}} is defined by:

    \[ F_{0}=0\qquad F_{1}=1\qquad\forall n\in\mathbb{N}^{\star},\thinspace F_{n+1}=F_{n}+F_{n-1}\]

To calculate explicitly:

    \[ \boxed{V_{n}=\sum_{k=1}^{n}F_{k}}\]

we can use the recurrence formula, which gives:

    \[ V_{n}=\sum_{k=1}^{n}\left(F_{k+1}-F_{k-1}\right)\]

This time the “telescoping” happens, not between a term and its immediate neighbour, but between terms two places apart. The easiest way to avoid tripping up is to write:

    \[ V_{n}=\sum_{k=1}^{n}\left(F_{k+1}-F_{k}\right)+\sum_{k=1}^{n}\left(F_{k}-F_{k-1}\right)\]

so that:

    \[ V_{n}=F_{n+1}-F_{1}+F_{n}-F_{0}\]

finally:

    \[ \boxed{V_{n}=F_{n+2}-1}\]
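
And a quick Python check of this last result for small values of n (the helper fib is ours, a plain iterative computation of the Fibonacci numbers):

```python
def fib(n):
    # Iterative computation of F_n with F_0 = 0, F_1 = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Check V_n = F_1 + ... + F_n == F_{n+2} - 1 (a verification, not a proof).
for n in range(1, 16):
    assert sum(fib(k) for k in range(1, n + 1)) == fib(n + 2) - 1
print("V_n = F_{n+2} - 1 checked for n = 1, ..., 15")
```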

7 – Swap two sums

Consider two integers n,p\geqslant1 as well as np complex numbers a_{i,j}, with 1\leqslant i\leqslant n and 1\leqslant j\leqslant p. Then set:

    \[ S=\sum_{\left(i,j\right)\in\mathbb{N}_{n}\times\mathbb{N}_{p}}a_{i,j}\]

As explained in Section 2, this notation makes sense because neither the order in which the terms are added nor the chosen parenthesization matters.

In particular, the set \mathbb{N}_{n}\times\mathbb{N}_{p} can be partitioned “by rows” or “by columns”, as suggested by the illustration below:

[Illustration: the rectangular grid \mathbb{N}_{n}\times\mathbb{N}_{p}, partitioned by rows or by columns]

This leads to the following formula, called an inversion formula for a rectangular summation domain:

    \[ \boxed{\sum_{j=1}^{p}\left(\sum_{i=1}^{n}\, a_{i,j}\right)=\sum_{i=1}^{n}\left(\sum_{j=1}^{p}\, a_{i,j}\right)}\qquad\left(\bigstar\right)\]

The case of a triangular summation domain is just as important in practice. For example, if we consider:

    \[ \sum_{\left(i,j\right)\in T}a_{i,j}\qquad\text{with }T=\left\{ \left(i,j\right)\in\mathbb{N}^{2};\thinspace1\leqslant j\leqslant i\leqslant n\right\} \]

we can, again, sum “by rows” or “by columns”:

[Illustration: the triangular domain T, partitioned by rows or by columns]

And here is the corresponding formula:

    \[ \boxed{\sum_{j=1}^{n}\,\left(\sum_{i=j}^{n}\, a_{i,j}\right)=\sum_{i=1}^{n}\,\left(\sum_{j=1}^{i}\, a_{i,j}\right)}\qquad\left(\spadesuit\right)\]
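
Before the examples, here is a short Python sketch checking both inversion formulas, \left(\bigstar\right) and \left(\spadesuit\right), numerically, on arbitrary coefficients chosen only for the demonstration:

```python
# Arbitrary coefficients a_{i,j}, chosen only for the demonstration.
n, p = 4, 6
a = {(i, j): 10 * i + j for i in range(1, n + 1) for j in range(1, p + 1)}

# Rectangular domain: summing by columns or by rows gives the same total (formula (star)).
by_columns = sum(sum(a[i, j] for i in range(1, n + 1)) for j in range(1, p + 1))
by_rows = sum(sum(a[i, j] for j in range(1, p + 1)) for i in range(1, n + 1))
assert by_columns == by_rows

# Triangular domain 1 <= j <= i <= n: same phenomenon (formula (spadesuit)).
tri_by_columns = sum(sum(a[i, j] for i in range(j, n + 1)) for j in range(1, n + 1))
tri_by_rows = sum(sum(a[i, j] for j in range(1, i + 1)) for i in range(1, n + 1))
assert tri_by_columns == tri_by_rows
```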

Let us give two examples of calculations involving formulas \left(\bigstar\right) and \left(\spadesuit\right).

Example 1

Given n\in\mathbb{N}^{\star} and p\in\mathbb{N}, we set:

    \[ \boxed{S_{n,p}=\sum_{k=1}^{n}\, k^{p}}\]

It is known that:

    \[ S_{n,1}=\frac{n\left(n+1\right)}{2};\quad S_{n,2}=\frac{n\left(n+1\right)\left(2n+1\right)}{6};\quad S_{n,3}=\frac{n^{2}\left(n+1\right)^{2}}{4}\]

How can these formulas be obtained in a “natural” way? One approach is to calculate the following expression in two ways:

    \[ W_{n,p}=\sum_{k=1}^{n}\left[\left(k+1\right)^{p+1}-k^{p+1}\right]\]

On the one hand, the summation is telescoping:

    \[ W_{n,p}=\left(n+1\right)^{p+1}-1\qquad\left(\clubsuit\right)\]

and on the other hand, according to the binomial formula:

    \[ W_{n,p}=\sum_{k=1}^{n}\left[\sum_{j=0}^{p}\binom{p+1}{j}k^{j}\right]\]

After swapping the two sums (the domain is rectangular) and factoring out the binomial coefficient, we obtain:

    \[ W_{n,p}=\sum_{j=0}^{p}\left[\binom{p+1}{j}\sum_{k=1}^{n}k^{j}\right]=\sum_{j=0}^{p}\binom{p+1}{j}S_{n,j}\qquad\left(\diamondsuit\right)\]

hence, comparing the equalities \left(\clubsuit\right) and \left(\diamondsuit\right), the “strong” recurrence formula:

    \[ \boxed{\sum_{j=0}^{p}\binom{p+1}{j}S_{n,j}=\left(n+1\right)^{p+1}-1}\]

If explicit formulas are known for each of the sums S_{n,0},S_{n,1},...,S_{n,p-1}, then this equality makes it possible to calculate S_{n,p}.

For example, knowing the formulas:

    \[ S_{n,0}=n,\qquad S_{n,1}=\frac{n\left(n+1\right)}{2},\qquad S_{n,2}=\frac{n\left(n+1\right)\left(2n+1\right)}{6}\]

we obtain, by applying the above (with p=3):

    \[ \binom{4}{0}S_{n,0}+\binom{4}{1}S_{n,1}+\binom{4}{2}S_{n,2}+\binom{4}{3}S_{n,3}=\left(n+1\right)^{4}-1\]

that is to say:

    \[ n+2n\left(n+1\right)+n\left(n+1\right)\left(2n+1\right)+4\thinspace S_{n,3}=\left(n+1\right)^{4}-1\]

whence, after a few harmless little calculations:

    \[ \boxed{S_{n,3}=\frac{n^{2}\left(n+1\right)^{2}}{4}}\]
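
The whole procedure can be automated. Here is a Python sketch (the helper name power_sums is ours) that uses the boxed recurrence to compute S_{n,0},\ldots,S_{n,5} for one value of n and checks the results against the direct sums:

```python
from math import comb

# Use the "strong" recurrence sum_{j=0}^{p} C(p+1, j) * S_{n,j} = (n+1)^(p+1) - 1
# to compute S_{n,0}, ..., S_{n,p_max} for a fixed n.
def power_sums(n, p_max):
    S = [n]                                           # S_{n,0} = n
    for p in range(1, p_max + 1):
        rhs = (n + 1) ** (p + 1) - 1 - sum(comb(p + 1, j) * S[j] for j in range(p))
        S.append(rhs // (p + 1))                      # since C(p+1, p) = p + 1
    return S

n = 10
S = power_sums(n, 5)
assert all(S[p] == sum(k ** p for k in range(1, n + 1)) for p in range(6))
assert S[3] == (n * (n + 1) // 2) ** 2                # S_{n,3} = n^2 (n+1)^2 / 4
print(S)
```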

Example 2

For any integer n\geqslant1, we classically write H_{n} for the n-th “harmonic number”:

    \[ H_{n}=\sum_{k=1}^{n}\frac{1}{k}\]

There are lots of things to say about the sequence \left(H_{n}\right)_{n\geqslant1}, but we will focus on the following recurrence formula:

    \[ H_{n+1}=1+\frac{1}{n+1}\sum_{k=1}^{n}H_{k}\qquad\left(\star\star\right)\]

It can be proved with the help of \left(\spadesuit\right):

    \begin{eqnarray*} \sum_{k=1}^{n}H_{k} & = & \sum_{k=1}^{n}\left(\sum_{j=1}^{k}\frac{1}{j}\right)\\ \\ & = & \sum_{j=1}^{n}\left(\sum_{k=j}^{n}\frac{1}{j}\right)\\ \\ & = & \sum_{j=1}^{n}\frac{n-j+1}{j}\\ \\ & = & \left(n+1\right)H_{n}-n\\ \\ & = & \left(n+1\right)\left(H_{n+1}-1\right)\end{eqnarray*}
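
Formula \left(\star\star\right) can also be checked exactly with rational arithmetic; here is a minimal Python sketch:

```python
from fractions import Fraction

# Exact check of the recurrence (star-star): H_{n+1} = 1 + (1/(n+1)) * (H_1 + ... + H_n).
def H(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

for n in range(1, 20):
    assert H(n + 1) == 1 + Fraction(1, n + 1) * sum(H(k) for k in range(1, n + 1))
print("recurrence checked for n = 1, ..., 19")
```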

With this formula \left(\star\star\right), we get an amusing proof of the divergence of the sequence \left(H_{n}\right)_{n\geqslant1}. Indeed, if this sequence converged to a real number \lambda, then by Cesàro's lemma {\displaystyle {\lim_{n\rightarrow\infty}\frac{1}{n+1}\sum_{k=1}^{n}H_{k}=\lambda}}, and so, passing to the limit in \left(\star\star\right), it would follow that \lambda=\lambda+1, which is absurd!
