Gowers' theorem
In mathematics, Gowers' theorem, also known as Gowers' Ramsey theorem and Gowers' FINk theorem, is a theorem in Ramsey theory and combinatorics. It is a Ramsey-theoretic result about functions with finite support. Timothy Gowers originally proved the result in 1992,[1] motivated by a problem regarding Banach spaces. The result was subsequently generalised by Bartošová, Kwiatkowska,[2] and Lupini.[3]
Definitions
The presentation and notation are taken from Todorčević,[4] and differ from those originally given by Gowers.
For a function <math>f\colon \N \to \N</math>, the support of <math>f</math> is defined as <math>\operatorname{supp}(f) = \{ n : f(n) \neq 0 \}</math>. Given <math>k \in \N</math>, let <math>\mathrm{FIN}_k</math> be the set
- <math>\mathrm{FIN}_k = \big\{ f\colon \N \to \N \mid \operatorname{supp}(f) \text{ is finite and } \max(\operatorname{range}(f)) = k \big\}</math>
If <math>f \in \mathrm{FIN}_n</math>, <math>g \in \mathrm{FIN}_m</math> have disjoint supports, we define <math>f+g \in \mathrm{FIN}_k</math> to be their pointwise sum, where <math>k = \max \{ n, m \}</math>. Each <math>\mathrm{FIN}_k</math> is a partial semigroup under <math>+</math>.
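The definitions above can be made concrete with a short sketch (not part of the article; the dict representation and helper names are illustrative choices), modelling an element of <math>\mathrm{FIN}_k</math> as a dictionary that stores only its nonzero values:

```python
def supp(f):
    """Support of f: the positions where f is nonzero."""
    return set(f)

def fin_class(f):
    """The k with f in FIN_k, i.e. the maximum value attained by f."""
    return max(f.values())

def partial_sum(f, g):
    """Pointwise sum f + g, defined only when the supports are disjoint."""
    if supp(f) & supp(g):
        raise ValueError("f + g is only defined for disjoint supports")
    return {**f, **g}

# Example: f in FIN_2 and g in FIN_1 have disjoint supports,
# and their sum lies in FIN_max(2,1) = FIN_2.
f = {0: 2, 1: 1}
g = {3: 1}
h = partial_sum(f, g)   # {0: 2, 1: 1, 3: 1}
```

The operation is only partial because the sum is undefined whenever the supports overlap, which is why <math>\mathrm{FIN}_k</math> forms a partial, rather than total, semigroup.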
The tetris operation <math>T\colon \mathrm{FIN}_{k+1} \to \mathrm{FIN}_k</math> is defined by <math>T(f)(n) = \max \{ 0, f(n)-1 \}</math>. Intuitively, if <math>f</math> is represented as a pile of square blocks, where the <math>n</math>th column has height <math>f(n)</math>, then <math>T(f)</math> is the result of removing the bottom row. The name is in analogy with the video game. <math>T^{(k)}</math> denotes the <math>k</math>th iterate of <math>T</math>.
A block sequence <math>(f_n)</math> in <math>\mathrm{FIN}_k</math> is one such that <math>\max(\operatorname{supp}(f_m)) < \min(\operatorname{supp}(f_{m+1}))</math> for every <math>m</math>.
The theorem
Note that, for a block sequence <math>(f_n)</math>, non-negative integers <math>k_1, \ldots, k_n</math> and indices <math>m_1 < \cdots < m_n</math>, the sum <math>T^{(k_1)}(f_{m_1}) + \cdots + T^{(k_n)}(f_{m_n})</math> is always defined, since the supports are pairwise disjoint. Gowers' original theorem states that, for any finite colouring of <math>\mathrm{FIN}_k</math>, there is a block sequence <math>(f_n)</math> such that all elements of the form <math>T^{(k_1)}(f_{m_1}) + \cdots + T^{(k_n)}(f_{m_n})</math> with at least one <math>k_i = 0</math> (so that the sum again lies in <math>\mathrm{FIN}_k</math>) have the same colour.
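Continuing the illustrative sketch, the elements described in the theorem can be generated by applying iterates of the tetris operation to a block sequence and summing (the helper names are hypothetical):

```python
def tetris_iter(f, k):
    """The k-th iterate T^(k)(f)."""
    for _ in range(k):
        f = {n: v - 1 for n, v in f.items() if v - 1 > 0}
    return f

def combine(fs, ks):
    """Sum T^(k_1)(f_1) + ... + T^(k_n)(f_n); for a block sequence
    the supports are pairwise disjoint, so the sum is defined."""
    out = {}
    for f, k in zip(fs, ks):
        out.update(tetris_iter(f, k))
    return out

# A block sequence in FIN_2: each support lies strictly to the
# right of the previous one.
f1, f2, f3 = {0: 2}, {2: 1, 3: 2}, {6: 2}

# With k_2 = 0 the result stays in FIN_2.
combine([f1, f2, f3], [1, 0, 1])   # {0: 1, 2: 1, 3: 2, 6: 1}
```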
The standard proof uses ultrafilters,[1][4] or, equivalently, nonstandard arithmetic.[5]
Generalisation
Intuitively, the tetris operation can be seen as removing the bottom row of a pile of boxes. It is natural to ask what would happen if we tried removing different rows. Bartošová and Kwiatkowska[2] considered the wider class of generalised tetris operations, where we can remove any chosen subset of the rows.
Formally, let <math>F\colon \N \to \N</math> be a nondecreasing surjection. The induced tetris operation <math>\hat{F}\colon \mathrm{FIN}_k \to \mathrm{FIN}_{F(k)}</math> is given by composition with <math>F</math>, i.e. <math>\hat{F}(f)(n) = F(f(n))</math>. The generalised tetris operations are the collection of <math>\hat{F}</math> for all nondecreasing surjections <math>F\colon \N \to \N</math>. In this language, the original tetris operation is induced by the map <math>T\colon n \mapsto \max \{ n-1, 0 \}</math>.
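In the same illustrative sketch, a generalised tetris operation is obtained by post-composing with a nondecreasing surjection <math>F</math> (which necessarily satisfies <math>F(0) = 0</math>); the example map below is a hypothetical choice that removes the second row of the pile:

```python
def induced(F):
    """The operation F-hat: f -> F o f, keeping only nonzero values."""
    return lambda f: {n: F(v) for n, v in f.items() if F(v) > 0}

# A nondecreasing surjection collapsing levels 1 and 2:
# F(0) = 0, F(1) = F(2) = 1, F(n) = n - 1 for n >= 3.
F = lambda n: 0 if n == 0 else (1 if n <= 2 else n - 1)
remove_second_row = induced(F)

remove_second_row({0: 3, 1: 2, 4: 1})   # {0: 2, 1: 1, 4: 1}

# The original tetris operation is the special case F(n) = max(n - 1, 0).
T = induced(lambda n: max(n - 1, 0))
T({0: 3, 1: 2, 4: 1})                   # {0: 2, 1: 1}
```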
Bartošová and Kwiatkowska[2] showed that the finite version of Gowers' theorem holds for the collection of generalised tetris operations. Lupini[3] later extended this result to the infinite case.
References