# Characters in Analysis and Algebra

Let \(M\) be a monoid and \(K\) a field. By a character of \(M\) in \(K\) we mean a monoid homomorphism

\[
\chi:M \to K^\ast.
\] By the trivial character we mean the character with \(\chi(M)=\{1\}\). We are particularly
interested in the linear independence of characters. Functions \(f_i:M \to K\) are called **linearly
independent over \(K\)** if
whenever \[
a_1f_1+\cdots+a_nf_n=0
\] with all \(a_i \in K\), we
have \(a_i=0\) for all \(i\). \(\def\Tr{\operatorname{Tr}}\)
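As a toy illustration (my own, not from the text), the map \(n \mapsto 2^n\) is a character of the additive monoid \(\mathbb{Z}_{\ge 0}\) in \(\mathbb{Q}\); a quick Python sketch checking the homomorphism law:

```python
# Sketch: chi(n) = 2**n is a character of the monoid (Z_{>=0}, +) in Q,
# i.e. a monoid homomorphism into Q* = Q \ {0}.

def chi(n: int) -> int:
    return 2 ** n

assert chi(0) == 1  # the identity of the monoid must map to 1
for m in range(10):
    for n in range(10):
        assert chi(m + n) == chi(m) * chi(n)
```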

## Character in Fourier Analysis

In Fourier analysis we are constantly interested in functions like \(f(x)=e^{-inx}\) or \(g(x)= e^{-ixt}\), corresponding respectively to Fourier series (integration on \(\mathbb{R}/2\pi\mathbb{Z}\)) and the Fourier transform. Mathematicians later realised that the whole theory can be set up on a locally compact abelian (LCA) group. For this reason we need to generalise these functions, and the bounded ones coincide with our definition of characters.

Let \(G\) be an LCA group. A function \(\gamma:G \to \mathbb{C}\) is called a
*character* if \(|\gamma(x)|=1\)
for all \(x \in G\) and \[
\gamma(x+y)=\gamma(x)\gamma(y).
\] Note that since \(G\) is
automatically a monoid, this coincides with our earlier definition of
character. The set of *continuous* characters forms a group \(\Gamma\), which is called the *dual
group* of \(G\).

If \(G=\mathbb{R}\), solving the functional equation \(\gamma(x+y)=\gamma(x)\gamma(y)\) by any preferred method, we obtain \(\gamma(x)=e^{Ax}\) for some \(A \in \mathbb{C}\). But \(|e^{Ax}| \equiv 1\) (or mere boundedness) forces \(A\) to be purely imaginary, say \(A=it\), so that \(\gamma(x)=e^{itx}\). Hence each element of the dual group of \(\mathbb{R}\) is determined by (the speed of) a rotation on the unit circle.
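A quick numerical sanity check (a sketch; the value \(t=2\) is an arbitrary choice) that \(\gamma(x)=e^{itx}\) satisfies both defining properties of a character of \(\mathbb{R}\):

```python
import cmath

T = 2.0  # arbitrary "speed of rotation" t

def gamma(x: float) -> complex:
    # gamma(x) = e^{itx}, a character of (R, +)
    return cmath.exp(1j * T * x)

for x, y in [(0.3, 1.1), (-2.0, 0.7), (5.5, -4.2)]:
    assert abs(gamma(x + y) - gamma(x) * gamma(y)) < 1e-9  # homomorphism law
    assert abs(abs(gamma(x)) - 1.0) < 1e-12                # |gamma(x)| = 1
```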

With this we have our generalised version of the Fourier transform. Let
\(G\) be an LCA group and \(f \in L^1(G)\); then the **Fourier
transform** is given by \[
\hat{f}(\gamma) = \int_G f(x)\gamma(-x)dx, \quad \gamma \in \Gamma.
\] In fact \(\hat{f}\) is exactly the Gelfand transform
of \(f\); the steps of the verification are
sketched below. On one hand, one can verify that \(f \mapsto \hat{f}(\gamma)\) is a
Banach algebra homomorphism \(L^1(G) \to
\mathbb{C}\) for every \(\gamma \in
\Gamma\); this is a plain application of Fubini's theorem. On the
other hand, let \(h:L^1(G) \to
\mathbb{C}\) be any non-trivial Banach algebra homomorphism. One
can show that \(\| h \| =1\) and
hence \(h\) is a bounded linear
functional. By the Riesz representation theorem, there is some \(\phi \in L^\infty(G)\) with \(\| \phi\|_\infty = 1\) such that \[
h(f) = \int_G f(x)\phi(x)dx.
\] We may assume that \(\phi\) is continuous. With \(h\) being an algebra homomorphism, we can see
\[
\phi(x+y)=\phi(x)\phi(y).
\] We know that \(|\phi(x)| \le
1\) but \(\phi(-x)=\phi(x)^{-1}\) forces \(|\phi(x)|=1\). The proof is done after some
routine verification of uniqueness.
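For the finite group \(G=\mathbb{Z}/n\mathbb{Z}\) with counting measure, the characters are \(\gamma_k(x)=e^{2\pi ikx/n}\) and the transform above is the discrete Fourier transform. The sketch below (my own illustration, not from the text) checks the homomorphism claim concretely: \(f \mapsto \hat f(\gamma_k)\) turns convolution, the product of \(L^1(G)\), into pointwise multiplication.

```python
import cmath

n = 8  # G = Z/nZ; characters are gamma_k(x) = e^{2*pi*i*k*x/n}

def gamma(k, x):
    return cmath.exp(2j * cmath.pi * k * x / n)

def fourier(f, k):
    # \hat{f}(gamma_k) = sum_x f(x) gamma_k(-x)
    return sum(f[x] * gamma(k, -x) for x in range(n))

def convolve(f, g):
    # (f * g)(x) = sum_y f(y) g(x - y), the product of L^1(G)
    return [sum(f[y] * g[(x - y) % n] for y in range(n)) for x in range(n)]

f = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 3.0, -2.0]
g = [0.5, 0.0, 1.0, 1.0, -1.0, 2.0, 0.0, 0.25]

# f -> \hat{f}(gamma_k) is an algebra homomorphism: convolution on one side,
# pointwise multiplication on the other.
for k in range(n):
    assert abs(fourier(convolve(f, g), k) - fourier(f, k) * fourier(g, k)) < 1e-9
```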

Indeed, with this identification, we can also identify \(\Gamma\) as the maximal ideal space of \(L^1(G)\), which results in the following interesting characterisation.

**Theorem.** If \(G\) is discrete, then \(\Gamma\) is compact; if \(G\) is compact, then \(\Gamma\) is discrete.

*Proof.* If \(G\) is
discrete, then \(L^1(G)\) has a unit.
The maximal ideal space, which can be identified with \(\Gamma\), is a compact Hausdorff space.

If \(G\) is compact, then its Haar measure can be normalised so that \(m(G)=1\). We prove that the singleton containing the trivial character alone is an open set. Let \(\gamma \in \Gamma\) be a character \(\ne 1\); then there exists some \(x_0\) such that \(\gamma(x_0) \ne 1\). As a result, \[ \int_G \gamma(x)dx = \gamma(x_0)\int_G \gamma(x-x_0)dx = \gamma(x_0)\int_G \gamma(x)dx \] and hence \(\int_G\gamma(x)dx=0\). If \(\gamma=1\) then \(\int_G \gamma(x)dx=1\).

Besides, the compactness of \(G\) implies that the constant function \(f \equiv 1\) is in \(L^1(G)\). As a result, \(\hat{f}(1)=1\) while \(\hat{f}(\gamma)=0\) whenever \(\gamma \ne 1\). Since \(\hat{f}\) is continuous, \(\{\gamma:\hat{f}(\gamma) \ne 0\}=\{1\}\) is open. \(\square\)
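The orthogonality step of this proof can be checked directly for the compact (finite) group \(\mathbb{Z}/n\mathbb{Z}\) with normalised measure; a small sketch:

```python
import cmath

n = 12  # compact (finite) group G = Z/nZ, with measure normalised so m(G) = 1

def integral(k):
    # (1/n) * sum_x gamma_k(x), where gamma_k(x) = e^{2*pi*i*k*x/n}
    return sum(cmath.exp(2j * cmath.pi * k * x / n) for x in range(n)) / n

assert abs(integral(0) - 1.0) < 1e-9      # the trivial character integrates to 1
for k in range(1, n):
    assert abs(integral(k)) < 1e-9        # every nontrivial character integrates to 0
```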

## Linear Independence of Characters

If the characters of \(M\) are linearly independent, then they are pairwise distinct, but what about the converse? Dedekind answered this question affirmatively, but his approach is rather complicated: it used determinants. Artin later found a neat way to do it:

**Theorem (Dedekind-Artin).** Let \(M\) be a monoid and \(K\) a field. Let \(\chi_1,\dots,\chi_n\) be distinct characters of \(M\) in \(K\). Then they are linearly independent over \(K\).

*Proof.* Suppose this is false. Let \(N\) be the smallest integer such that \[
a_1\chi_1+a_2\chi_2+\cdots+a_N\chi_N = 0
\] with the \(\chi_i\) distinct and not all \(a_i\) equal to \(0\) (by minimality, in fact all \(a_i \ne 0\) and \(N \ge 2\)). Since \(\chi_1 \ne \chi_2\), there is some \(z \in M\) such that \(\chi_1(z) \ne \chi_2(z)\). Yet still we
have \[
a_1\chi_1(zx)+\cdots+a_N\chi_N(zx)=0.
\] Since \(\chi_i\) are
characters, for all \(x \in M\) we have
\[
a_1\chi_1(z)\chi_1(x)+\cdots+a_N\chi_N(z)\chi_N(x)=0.
\] We now have a linear system \[
\begin{pmatrix}
a_1 & a_2 & \cdots & a_N \\
a_1\chi_1(z) & a_2\chi_2(z) & \cdots & a_N\chi_N(z)
\end{pmatrix}
\begin{pmatrix}
\chi_1 \\
\chi_2 \\
\vdots \\
\chi_N
\end{pmatrix} =
\begin{pmatrix}
0 \\ 0
\end{pmatrix}
\] If we perform one step of Gaussian elimination (divide the second row by \(\chi_1(z)\) and subtract the first row), we see \[
\begin{pmatrix}
a_1 & a_2 & \cdots & a_N \\
0 & \left(\frac{\chi_2(z)}{\chi_1(z)}-1\right)a_2 & \cdots &
\left(\frac{\chi_N(z)}{\chi_1(z)}-1\right)a_N
\end{pmatrix}
\begin{pmatrix}
\chi_1 \\
\chi_2 \\
\vdots \\
\chi_N
\end{pmatrix} =
\begin{pmatrix}
0 \\ 0
\end{pmatrix}
\] But this is to say \[
\left(\frac{\chi_2(z)}{\chi_1(z)}-1\right)a_2\chi_2 + \cdots +
\left(\frac{\chi_N(z)}{\chi_1(z)}-1\right)a_N\chi_N=0.
\] Note that by assumption \(\frac{\chi_2(z)}{\chi_1(z)}-1 \ne 0\), so this is a
nontrivial linear dependence among the \(N-1\) distinct characters
\(\chi_2,\dots,\chi_N\), contradicting the minimality of \(N\). \(\square\)

As an application, we consider an \(n\)-variable equation:

Let \(\alpha_1,\cdots,\alpha_n\) be distinct non-zero elements of a field \(K\). If \(a_1,\cdots,a_n\) are elements of \(K\) such that for all integers \(v \ge 0\) we have \[ a_1\alpha_1^v + \cdots + a_n\alpha_n^v = 0 \] then \(a_i=0\) for all \(i\).

*Proof.* Consider the \(n\)
distinct characters \(\chi_i(v)=\alpha_i^v\) of \(\mathbb{Z}_{\ge 0}\) in \(K^\ast\) and apply the theorem above. \(\square\)
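Concretely, the system above says the Vandermonde matrix \((\alpha_i^v)\) annihilates \((a_1,\dots,a_n)\), and its determinant \(\prod_{i<j}(\alpha_j-\alpha_i)\) is nonzero when the \(\alpha_i\) are distinct, so only the trivial solution survives. A sketch over \(K=\mathbb{Q}\) with some hypothetical sample values:

```python
from fractions import Fraction
from itertools import permutations

# hypothetical distinct nonzero elements of K = Q
alphas = [Fraction(1), Fraction(2), Fraction(-3), Fraction(1, 2)]
n = len(alphas)

# rows indexed by v = 0..n-1: M[v][i] = alpha_i^v, a Vandermonde matrix
M = [[a ** v for a in alphas] for v in range(n)]

def det(mat):
    # Leibniz formula; exact and fine for a 4x4 matrix
    size = len(mat)
    total = Fraction(0)
    for perm in permutations(range(size)):
        inversions = sum(
            1 for i in range(size) for j in range(i + 1, size) if perm[i] > perm[j]
        )
        sign = -1 if inversions % 2 else 1
        prod = Fraction(1)
        for i in range(size):
            prod *= mat[i][perm[i]]
        total += sign * prod
    return total

# the classical Vandermonde product
vandermonde = Fraction(1)
for i in range(n):
    for j in range(i + 1, n):
        vandermonde *= alphas[j] - alphas[i]

assert det(M) == vandermonde
assert det(M) != 0  # hence a_1 = ... = a_n = 0 is the only solution
```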

## Hilbert's Theorem 90

The linear independence of characters gives us a good chance of studying the relation of the field extension and the Galois group.

**Hilbert's Theorem 90 (Modern Version).** Let \(K/k\) be a Galois extension with Galois group \(G\). Then \(H^1(G,K^\ast)=1\) and \(H^1(G,K)=0\). This is to say, the first cohomology group is trivial both multiplicatively and additively.

It may look confusing, but the classic version is about cyclic extensions (\(K/k\) is cyclic if it is Galois with cyclic Galois group).

**Hilbert's Theorem 90 (Classic Version, Multiplicative Form).** Let \(K/k\) be cyclic of degree \(n\) with Galois group \(G\) generated by \(\sigma\). Then \[ \frac{\ker N}{(1/\sigma)A} \cong 1 \] where \(A=K^\ast\), \((1/\sigma)A\) consists of all elements of the form \(\alpha/\sigma(\alpha)\) with \(\alpha \in A\), and \(N(\beta)\) is the norm of \(\beta \in K\) over \(k\).

This corresponds to the statement that \(H^1(G,K^\ast)=1\). On the other hand,

**Hilbert's Theorem 90 (Classic Version, Additive Form).** Let \(K/k\) be cyclic of degree \(n\) with Galois group \(G\) generated by \(\sigma\). Then \[ \frac{\ker \Tr}{(1-\sigma)A} \cong 0 \] where \(A=K\), \((1-\sigma)A\) consists of all elements of the form \((1-\sigma)(\alpha)\) with \(\alpha \in A\), and \(\Tr(\beta)\) is the trace of \(\beta \in K\) over \(k\).

This corresponds to, of course, the statement that \(H^1(G,K)=0\). Note this indeed asserts an
exact sequence \[
0 \to k \to K \xrightarrow{1-\sigma} K \xrightarrow{\Tr} k \to 0.
\] Before we prove it we recall what is group cohomology. Let
\(G\) be a group. We consider the
category **\(G\)-mod** of
left \(G\)-modules. The set of
morphisms of two objects \(A\) and
\(B\), for which we write \(\operatorname{Hom}_G(A,B)\), consists of
all \(G\)-module maps from
\(A\) to \(B\). The *cohomology groups of \(G\) with coefficients in \(A\)* are the right derived functors of
\(\operatorname{Hom}_G(\mathbb{Z},-)\):
\[
H^\ast (G,A) \cong
\operatorname{Ext}^\ast_{\mathbb{Z}[G]}(\mathbb{Z},A).
\] It follows that \(H^0(G,A) \cong \operatorname{Hom}_G(\mathbb{Z},A) = A^G = \{a \in A : ga = a \text{ for all } g \in G\}\). In
particular, if \(G\) is trivial, then
\(\operatorname{Hom}_G(\mathbb{Z},-)\)
is exact and therefore \(H^\ast(G,A)=0\) whenever \(\ast \ne 0\). We will see what will happen
when \(G\) is a Galois group of a
Galois extension. If the modern version is beyond your reach, you can
refer to the classic version. As a side note, the modern version can
also be done using Shapiro's lemma.
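For the reader's convenience, here is a sketch of the concrete description of \(H^1\) obtained from the bar resolution, in the multiplicative case \(A=K^\ast\) that the next proof uses:

```latex
\begin{aligned}
Z^1(G,K^\ast) &= \{\alpha:G \to K^\ast \mid \alpha_{\sigma\tau}
                 = \alpha_\sigma \cdot \sigma(\alpha_\tau)
                 \text{ for all } \sigma,\tau \in G\}
                 && \text{(1-cocycles)} \\
B^1(G,K^\ast) &= \{\alpha \mid \alpha_\sigma = \beta/\sigma(\beta)
                 \text{ for some } \beta \in K^\ast\}
                 && \text{(1-coboundaries)} \\
H^1(G,K^\ast) &= Z^1(G,K^\ast)/B^1(G,K^\ast).
\end{aligned}
```

Hilbert's Theorem 90 asserts that the first two groups coincide; the additive case \(A=K\) is obtained by replacing products with sums.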

### Proof of the Modern Version

*Proof.* Note \(\alpha:G \to K^\ast\) is a 1-cocycle if and only if \[
\alpha_{\sigma\tau}=\alpha_\sigma \cdot \sigma(\alpha_\tau)
\] for all \(\sigma,\tau \in G\). By Artin's lemma (the linear independence of characters), for each 1-cocycle \(\alpha\) the following map is nontrivial: \[ \Lambda=\sum_{\sigma \in G}\alpha_\sigma \sigma:K \to K. \] Suppose \(\gamma=\Lambda(\theta) \ne 0\). Then \[\begin{aligned} \tau(\gamma) &= \tau\left(\sum_{\sigma \in G}\alpha_\sigma \sigma(\theta)\right) = \sum_{\sigma \in G}\tau(\alpha_\sigma)\tau\sigma(\theta) = \sum_{\sigma \in G}\alpha_\tau^{-1}\alpha_{\tau\sigma}\tau\sigma(\theta) \\ &= \alpha_\tau^{-1}\sum_{\sigma \in G}\alpha_{\tau\sigma}\tau\sigma(\theta) = \alpha_\tau^{-1}\gamma \end{aligned}\] which is to say \(\alpha_\tau = \gamma/\tau(\gamma)\). Replacing \(\gamma\) with \(\gamma^{-1}\) gives what we want: every cocycle is a coboundary. So much for the multiplicative form.

For the additive form, take \(\theta \in K \setminus \ker \Tr\). Given a \(1\)-cocycle \(\alpha\) in the additive group \(K\), we put \[ \beta = \frac{1}{\Tr(\theta)}\sum_{\tau \in G}\alpha_\tau \tau(\theta). \] Since a \(1\)-cocycle satisfies \(\alpha_{\sigma\tau}=\alpha_\sigma+\sigma(\alpha_\tau)\), we get \[ \sigma(\beta) = \frac{1}{\Tr(\theta)}\sum_{\tau \in G}(\alpha_{\sigma\tau}-\alpha_\sigma)\sigma\tau(\theta) = \beta -\alpha_\sigma \] which gives \(\alpha_\sigma = \beta-\sigma(\beta)\). Replacing \(\beta\) with \(-\beta\) gives what we want. \(\square\)

### Proof of the Classic Version

*Additive form.* Pick any element of the form \(\beta-\sigma\beta\); we see \(\Tr(\beta-\sigma\beta)=\sum_{\tau \in
G}\tau\beta-\sum_{\tau \in G}\tau\beta=0\).

Conversely, assume \(\Tr(\alpha)=0\). By Artin's lemma, the trace function is not trivial, hence there exists some \(\theta \in K\) such that \(\Tr(\theta)\ne 0\), and we take \[ \beta = \frac{1}{\Tr(\theta)}[\alpha\theta^\sigma+(\alpha+\sigma\alpha)\theta^{\sigma^2}+\cdots+(\alpha+\sigma\alpha+\cdots+\sigma^{n-2}\alpha)\theta^{\sigma^{n-1}}] \] where for convenience we write \(\sigma\theta=\theta^\sigma\). Therefore \[ \beta-\sigma\beta = \frac{1}{\Tr(\theta)}\alpha(\theta+\theta^{\sigma}+\theta^{\sigma^2}+\cdots+\theta^{\sigma^{n-1}})=\alpha \] because the other terms cancel (the last one using \(\Tr(\alpha)=0\)). \(\square\)
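A sketch verifying the degree-\(2\) case in \(K=\mathbb{Q}(\sqrt 2)\), with elements represented as pairs \((a,b) \leftrightarrow a+b\sqrt 2\) (my own illustration; for \(n=2\) with \(\theta=1\) the formula above reduces to \(\beta=\alpha/2\)):

```python
from fractions import Fraction

# K = Q(sqrt(2)) as pairs (a, b) meaning a + b*sqrt(2);
# sigma generates the Galois group: sigma(a + b*sqrt(2)) = a - b*sqrt(2).

def sigma(x):
    a, b = x
    return (a, -b)

def trace(x):
    return 2 * x[0]  # Tr(a + b*sqrt(2)) = 2a

# a hypothetical element of trace zero: alpha = 5*sqrt(2)
alpha = (Fraction(0), Fraction(5))
assert trace(alpha) == 0

# theta = 1 has Tr(theta) = 2 != 0; the proof's formula gives beta = alpha/2
beta = (alpha[0] / 2, alpha[1] / 2)

# additive Hilbert 90: alpha = beta - sigma(beta)
diff = (beta[0] - sigma(beta)[0], beta[1] - sigma(beta)[1])
assert diff == alpha
```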

*Multiplicative form.* This can be done in a quite similar
setting. For any \(\alpha=\beta/\sigma\beta\), we have \[
N(\alpha)=N(\beta)/N(\sigma\beta)=\left(\prod_{\tau \in
G}\tau\beta\right)/ \left( \prod_{\tau \in G}\tau\sigma\beta\right)=1.
\] Conversely, assume \(N(\alpha)=1\). By Artin's lemma, the following
function is not trivial: \[
\Lambda=\operatorname{id}+\alpha\sigma+\alpha^{1+\sigma}\sigma^2+\cdots+\alpha^{1+\sigma+\cdots+\sigma^{n-2}}\sigma^{n-1}.
\] Suppose now \(\beta=\Lambda(\theta)
\ne 0\). It follows that \[
\begin{aligned}
\alpha\beta^\sigma &=
\alpha(\theta+\alpha\theta^\sigma+\cdots+\alpha^{1+\sigma+\cdots+\sigma^{n-2}}\theta^{\sigma^{n-1}})^\sigma
\\
&=
\alpha(\theta^\sigma+\alpha^\sigma\theta^{\sigma^2}+\cdots+\underbrace{\alpha^{\sigma+\sigma^2+\cdots+\sigma^{n-1}}\theta^{\sigma^n}}_{=\alpha^{-1}\theta})
\\
&=
\alpha\theta^\sigma+\alpha^{1+\sigma}\theta^{\sigma^2}+\cdots+\alpha^{1+\sigma+\cdots+\sigma^{n-2}}\theta^{\sigma^{n-1}}+\theta
\\
&=\beta
\end{aligned}
\] that is, \(\alpha=\beta/\sigma(\beta)\), which is exactly what we want. \(\square\)
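The degree-\(2\) case can be checked numerically in \(\mathbb{Q}(i)\), where \(\sigma\) is complex conjugation and \(\Lambda=\operatorname{id}+\alpha\sigma\) (a sketch, with a hypothetical norm-one element):

```python
# alpha = (3 + 4i)/5 has N(alpha) = alpha * conj(alpha) = 1
alpha = complex(3, 4) / 5
assert abs(alpha * alpha.conjugate() - 1) < 1e-12

# Lambda = id + alpha*sigma applied to theta = 1 gives beta = 1 + alpha != 0
beta = 1 + alpha

# multiplicative Hilbert 90: alpha = beta / sigma(beta)
assert abs(beta / beta.conjugate() - alpha) < 1e-12
```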

## Applications

Consider the extension \(\mathbb{Q}(i)/\mathbb{Q}\). The Galois group \(G=\{1,\tau\}\) is cyclic, generated by the complex conjugation \(\tau\). Pick any \(a+bi\) with \(N(a+bi)=a^2+b^2=1\), where \(a,b \in \mathbb{Q}\); then there is some \(r=s+ti \in \mathbb{Q}(i)\) such that \[ a+bi = \frac{s+ti}{s-ti}=\frac{s^2-t^2+2sti}{s^2+t^2}= \frac{s^2-t^2}{s^2+t^2}+\frac{2st}{s^2+t^2}i. \] If we put \((x,y,z)=(s^2-t^2,2st,s^2+t^2)\), we actually get a Pythagorean triple (if \(s,t\) are fractions, we can multiply them by a common denominator so that they are integers). Conversely, given any Pythagorean triple \((x,y,z)\), we assign to it \(\frac{x}{z}+\frac{y}{z}i \in \mathbb{Q}(i)\), an element of norm \(1\). Through this we have found all solutions to \(x^2+y^2=z^2\), i.e.

**Theorem.** Integers \(x,y,z\) satisfy the Diophantine equation \(x^2+y^2=z^2\) if and only if \((x,y,z)\) is proportional to \((m^2-n^2,2mn,m^2+n^2)\) for some integers \(m,n\).
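A quick check of this parametrisation (a sketch):

```python
def triple(m: int, n: int):
    # the parametrisation from the theorem above
    return (m * m - n * n, 2 * m * n, m * m + n * n)

for m in range(1, 20):
    for n in range(1, m):
        x, y, z = triple(m, n)
        assert x * x + y * y == z * z

assert triple(2, 1) == (3, 4, 5)  # the classic triple
```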

This can be generalised to all Diophantine equations of the form \(x^2+Axy+By^2=Cz^2\) for some nonzero constant \(C\) and constants \(A,B\) such that the discriminant \(A^2-4B\) is square-free. You can find some discussion here.

The additive form is a good friend of "characteristic \(p\)" things. The Artin-Schreier theorem is a good example of \(p\)-to-the-\(p\).

**Theorem (Artin-Schreier).** Let \(k\) be a field of characteristic \(p\) and \(K/k\) a cyclic extension of degree \(p\). Then \(K=k(\alpha)\), where \(\alpha \in K\) is a zero of an equation \(X^p-X-a=0\) for some \(a \in k\).

*Proof.* Note the Galois group \(G\) of \(K/k\) is cyclic and \(\Tr(-1)=p\cdot(-1)=0\), so we are able to use the
additive form. Let \(\sigma\) be a
generator of \(G\); then there exists some
\(\alpha \in K\) such that \[
\sigma\alpha = \alpha+1.
\] Hence \(\sigma(\sigma(\alpha))=\sigma(\alpha+1)=\alpha+1+1\),
and by induction we get \[
\sigma^i(\alpha) = \alpha+i, \quad i=1,2,\cdots,p
\] and \(\alpha\) has \(p\) distinct conjugates, therefore \([k(\alpha):k] \ge p\). But since
\[
[K:k]=[K:k(\alpha)][k(\alpha):k]=p,
\] we can only have \([K:k(\alpha)]=1\), which is to say \(K=k(\alpha)\). In the meantime, \[
\sigma(\alpha^p-\alpha)=(\alpha+1)^p-(\alpha+1)=\alpha^p+1^p-\alpha-1 =
\alpha^p-\alpha.
\] Hence \(\alpha^p - \alpha\)
lies in the fixed field of \(\sigma\),
which happens to be \(k\). Putting
\(a=\alpha^p-\alpha\), our proof is
done. \(\square\)

For the case when the characteristic is \(0\), please see here. There is a converse, which deserves a standalone blog post. It says that the polynomial \(f(X)=X^p-X-a\) either has one root in \(k\), in which case all its roots are in \(k\), or it is irreducible, in which case if \(\alpha\) is a root then \(k(\alpha)\) is cyclic of degree \(p\) over \(k\). But I don't know if many people are fans of "characteristic \(p\)" things.

## References

- Serge Lang, *Algebra, Revised Third Edition*.
- Charles A. Weibel, *An Introduction to Homological Algebra*.
- Noam D. Elkies, *Pythagorean triples and Hilbert's Theorem 90*. (https://abel.math.harvard.edu/~elkies/Misc/hilbert.pdf)
- Jose Capco, *The Two Artin-Schreier Theorems*. (https://www3.risc.jku.at/publications/download/risc_5477/the_two_artin_schreier_theorems__jcapco.pdf)
- Walter Rudin, *Fourier Analysis on Groups*.

https://desvl.xyz/2021/10/10/Characters-in-Analysis-and-Algebra/