Goldston-Pintz-Yıldırım sieve

The Goldston-Pintz-Yıldırım sieve (also called GPY sieve or GPY method) is a sieve method and variant of the Selberg sieve with generalized, multidimensional sieve weights. The sieve led to a series of important breakthroughs in analytic number theory.

It is named after the mathematicians Dan Goldston, János Pintz and Cem Yıldırım.[1] In 2005 they used it to show that there are infinitely many pairs of consecutive primes whose distance is arbitrarily smaller than the average distance predicted by the prime number theorem.

The sieve was then modified by Yitang Zhang to prove that there is a finite bound <math>B</math> such that infinitely many pairs of consecutive primes differ by at most <math>B</math>.[2] Later the sieve was modified again by James Maynard (who lowered the bound to <math>B=600</math>[3]) and by Terence Tao.

Goldston-Pintz-Yıldırım sieve

Notation

Fix <math>k\in \N</math> and introduce the following notation:

  • <math>\mathbb{P}</math> is the set of prime numbers and <math>1_{\mathbb{P}}(n)</math> the characteristic function of that set,
  • <math>\Lambda(n)</math> is the von Mangoldt function,
  • <math>\omega(n)</math> is the small prime omega function (which counts the number of distinct prime factors of <math>n</math>),
  • <math>\mathcal{H}=\{h_1,\dots,h_k\}</math> is a set of distinct nonnegative integers <math>h_i\in\Z_+\cup \{0\}</math>,
  • <math>\theta(n)</math> is another characteristic function of the primes defined as
<math>\theta(n)=\begin{cases} \log(n) & \text{if }n\in \mathbb{P}\\ 0 & \text{else.}\end{cases}</math>
Notice that <math>\theta(n)=\log((n-1)1_{\mathbb{P}}(n)+1)</math>.

For an <math>\mathcal{H}</math> we also define

  • <math>\mathcal{H}(n):=(n+h_1,\dots,n+h_k)</math>,
  • <math>P_{\mathcal{H}}(n):=(n+h_1)(n+h_2)\cdots (n+h_k)</math>,
  • <math>\nu_p(\mathcal{H})</math> is the number of distinct residue classes occupied by <math>\mathcal{H}</math> modulo <math>p</math>. For example <math>\nu_3(\{0,2,4\})=3</math> and <math>\nu_3(\{0,2\})=2</math>, because <math>\{0,2,4\}\equiv\{0,1,2\}\pmod{3}</math> and <math>\{0,2\}\equiv\{0,2\}\pmod{3}</math>.

If <math>\nu_p(\mathcal{H})<p</math> for all <math>p\in \mathbb{P}</math>, then we call <math>\mathcal{H}</math> admissible.
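
For illustration, the admissibility condition can be tested directly; the following is a minimal Python sketch (the names <code>nu_p</code> and <code>is_admissible</code> are ours). Since <math>\nu_p(\mathcal{H})\leq |\mathcal{H}|</math>, only primes <math>p\leq |\mathcal{H}|</math> can violate the condition.

<syntaxhighlight lang="python">
from sympy import primerange

def nu_p(H, p):
    # number of distinct residue classes occupied by H modulo p
    return len({h % p for h in H})

def is_admissible(H):
    # H is admissible iff nu_p(H) < p for every prime p;
    # since nu_p(H) <= |H|, it suffices to test primes p <= |H|
    return all(nu_p(H, p) < p for p in primerange(2, len(H) + 1))

print(nu_p({0, 2, 4}, 3), nu_p({0, 2}, 3))  # 3 2
print(is_admissible({0, 2, 4}))             # False: all residues mod 3 are hit
print(is_admissible({0, 2, 6}))             # True
</syntaxhighlight>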

Construction

Let <math>\mathcal{H}=\{h_1,\dots,h_k\}</math> be admissible and consider the following sifting function

<math>\mathcal{S}(N,c;\mathcal{H}):=\sum\limits_{n=N+1}^{2N}\left(\sum\limits_{h_i\in \mathcal {H}}1_{\mathbb{P}}(n+h_i)-c\right)w(n)^2,\quad w(n)\in \R,\quad c>0.</math>

For each <math>n\in \{N+1,\dots,2N\}</math> the bracket counts the primes among <math>n+h_1,\dots,n+h_k</math> minus the threshold <math>c</math>, so if <math>\mathcal{S}>0</math>, then there exists at least one <math>n</math> for which at least <math>\lfloor c \rfloor +1</math> of the numbers in <math>\mathcal{H}(n)</math> are prime.

Since <math>1_{\mathbb{P}}(n)</math> has poor analytic properties, one instead works with the following sifting function

<math>\mathcal{S}(N;\mathcal{H}):=\sum\limits_{n=N+1}^{2N}\left(\sum\limits_{h_i\in \mathcal{H}}\theta(n+h_i)-\log(3N)\right)w(n)^2.</math>

Since every nonzero term satisfies <math>\log(N)<\theta(n+h_i)<\log(3N)</math> (for <math>N>h_k</math>) and the threshold is <math>c=\log(3N)</math>, we have <math>\mathcal{S}>0</math> only if at least two of the numbers <math>n+h_i</math> and <math>n+h_j</math> are prime for some <math>n</math>. Next we have to choose the weight function <math>w(n)</math> so that we can detect prime k-tuples.
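
This detection principle can be checked numerically. The sketch below is illustrative only: it uses the trivial weight <math>w(n)=1</math>, so that only the bracket matters, and verifies that a positive bracket forces at least two prime components (primality is tested with sympy's <code>isprime</code>).

<syntaxhighlight lang="python">
from math import log
from sympy import isprime

def theta(m):
    # theta(m) = log(m) if m is prime, 0 otherwise
    return log(m) if isprime(m) else 0.0

N, H = 100, [0, 2, 6]
for n in range(N + 1, 2 * N + 1):
    bracket = sum(theta(n + h) for h in H) - log(3 * N)
    if bracket > 0:
        # a positive bracket is only possible if >= 2 components are prime
        assert sum(isprime(n + h) for h in H) >= 2
print("every positive term comes from at least two primes in H(n)")
</syntaxhighlight>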

Derivation of the weights

A candidate for the weight function is the generalized von Mangoldt function

<math>\Lambda_k(n)=\sum\limits_{d\mid n}\mu(d)\left(\log\left(\frac{n}{d}\right)\right)^k,</math>

which has the following property: if <math>\omega(n)>k</math>, then <math>\Lambda_k(n)=0</math>. This function also detects proper prime powers, but these can be removed in applications with a negligible error.[1]
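
As a quick numerical sanity check of this property, one can evaluate <math>\Lambda_k(n)</math> directly from the divisor sum; the Möbius helper is written out so the sketch is self-contained, and for <math>\omega(n)>k</math> the result is zero up to floating-point rounding.

<syntaxhighlight lang="python">
from math import log
from sympy import divisors, factorint

def mobius(d):
    f = factorint(d)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def gen_mangoldt(n, k):
    # Lambda_k(n) = sum over d | n of mu(d) * log(n/d)^k
    return sum(mobius(d) * log(n / d) ** k for d in divisors(n))

for n in [30, 210, 15, 12]:  # omega(n) = 3, 4, 2, 2
    print(n, len(factorint(n)), round(gen_mangoldt(n, 2), 10))
# 30 and 210 give 0 (omega > 2); 15 and 12 give nonzero values
</syntaxhighlight>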

So if <math>\mathcal{H}(n)</math> is a prime k-tuple, then the function

<math>\Lambda_k(n;\mathcal{H})=\frac{1}{k!}\Lambda_k(P_{\mathcal{H}}(n))</math>

will not vanish. The factor <math>1/k!</math> is just for computational purposes. The (classical) von Mangoldt function can be approximated with the truncated von Mangoldt function

<math>\Lambda(n)\approx \Lambda_R(n):=\sum\limits_{\begin{array}{c} d\mid n\\ d\leq R \end{array}}\mu(d)\log\left(\frac{R}{d}\right),</math>

where <math>R</math> is a truncation parameter: the divisor sum is cut off at <math>d\leq R</math>. Analogously we approximate <math>\Lambda_k(n;\mathcal{H})</math> with
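
A short illustrative sketch comparing <math>\Lambda(n)</math> with <math>\Lambda_R(n)</math>; note that for a prime <math>p>R</math> the truncated sum consists of the single term <math>\mu(1)\log R=\log R</math>.

<syntaxhighlight lang="python">
from math import log
from sympy import divisors, factorint

def mobius(d):
    f = factorint(d)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def mangoldt(n):
    # classical von Mangoldt: log p if n is a power of the prime p, else 0
    f = factorint(n)
    return log(next(iter(f))) if len(f) == 1 else 0.0

def truncated_mangoldt(n, R):
    # Lambda_R(n) = sum over d | n, d <= R of mu(d) * log(R/d)
    return sum(mobius(d) * log(R / d) for d in divisors(n) if d <= R)

R = 50
for n in [97, 98, 99, 100, 101]:
    print(n, round(mangoldt(n), 3), round(truncated_mangoldt(n, R), 3))
</syntaxhighlight>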

<math>\Lambda_R(n;\mathcal{H})=\frac{1}{k!}\sum\limits_{\begin{array}{c} d\mid P_{\mathcal{H}}(n)\\ d\leq R \end{array}}\mu(d)\left(\log\left(\frac{R}{d}\right)\right)^k</math>

For technical reasons we want to detect tuples with primes in several components, rather than only complete prime tuples, so we introduce a further parameter <math>0\leq \ell \leq k</math> which allows <math>P_{\mathcal{H}}(n)</math> to have up to <math>k+\ell</math> distinct prime factors. This leads to the final form

<math>\Lambda_R(n;\mathcal{H},\ell)=\frac{1}{(k+\ell)!}\sum\limits_{\begin{array}{c} d\mid P_{\mathcal{H}}(n)\\ d\leq R \end{array}}\mu(d)\left(\log\left(\frac{R}{d}\right)\right)^{k+\ell}</math>

Without this additional parameter <math>\ell</math> one has, for a divisor <math>d=d_1d_2\cdots d_k</math> of <math>P_{\mathcal{H}}(n)</math>, the restriction <math>d_1\leq R,\, d_2\leq R, \dots ,d_k\leq R</math>, but introducing the parameter yields the looser restriction <math>d_1d_2\cdots d_k\leq R</math>.[1] So one has a <math>(k+\ell)</math>-dimensional sieve for a <math>k</math>-dimensional sieve problem.[4]
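
For illustration, here is a minimal Python sketch of the weight <math>\Lambda_R(n;\mathcal{H},\ell)</math> with toy parameters (the function names are ours; in the actual argument <math>R</math> is chosen as a small power of <math>N</math>).

<syntaxhighlight lang="python">
from math import factorial, log
from sympy import divisors, factorint

def mobius(d):
    f = factorint(d)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def gpy_weight(n, H, R, ell):
    # Lambda_R(n; H, ell) = 1/(k+ell)! * sum_{d | P_H(n), d <= R} mu(d) * log(R/d)^(k+ell)
    k = len(H)
    P = 1
    for h in H:
        P *= n + h
    return sum(mobius(d) * log(R / d) ** (k + ell)
               for d in divisors(P) if d <= R) / factorial(k + ell)

H, R, ell = [0, 2, 6], 30, 1
for n in [11, 15, 101]:
    print(n, round(gpy_weight(n, H, R, ell), 4))
# n = 11 and n = 101 give the prime triples (11, 13, 17) and (101, 103, 107)
</syntaxhighlight>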

Goldston-Pintz-Yıldırım sieve

The GPY sieve has the following form

<math>\mathcal{S}(N;\mathcal{H},\ell):=\sum\limits_{n=N+1}^{2N}\left(\sum\limits_{h_i\in \mathcal{H}}\theta(n+h_i)-\log(3N)\right)\Lambda_R(n;\mathcal{H},\ell)^2,\qquad |\mathcal{H}|=k</math>

with

<math>\Lambda_R(n;\mathcal{H},\ell)=\frac{1}{(k+\ell)!}\sum\limits_{\begin{array}{c} d\mid P_{\mathcal{H}}(n)\\ d\leq R \end{array}}\mu(d)\left(\log\left(\frac{R}{d}\right)\right)^{k+\ell},\quad 0\leq \ell\leq k</math>.[1]
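
The sum <math>\mathcal{S}(N;\mathcal{H},\ell)</math> can likewise be evaluated numerically for toy parameters. The sketch below repeats the weight function from the previous snippet so that it runs on its own; the parameter choices are purely illustrative.

<syntaxhighlight lang="python">
from math import factorial, log
from sympy import divisors, factorint, isprime

def mobius(d):
    f = factorint(d)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def gpy_weight(n, H, R, ell):
    k, P = len(H), 1
    for h in H:
        P *= n + h
    return sum(mobius(d) * log(R / d) ** (k + ell)
               for d in divisors(P) if d <= R) / factorial(k + ell)

def gpy_sum(N, H, R, ell):
    # S(N; H, ell) = sum over N < n <= 2N of (sum_i theta(n+h_i) - log(3N)) * weight^2
    total = 0.0
    for n in range(N + 1, 2 * N + 1):
        bracket = sum(log(n + h) if isprime(n + h) else 0.0 for h in H) - log(3 * N)
        total += bracket * gpy_weight(n, H, R, ell) ** 2
    return total

# S > 0 would certify some n in (N, 2N] with at least two primes among n + h_i
print(gpy_sum(200, [0, 2, 6], 20, 1))
</syntaxhighlight>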

Proof of the main theorem by Goldston, Pintz and Yıldırım

Consider <math>(\mathcal{H}_1,\ell_1, k_1)</math> and <math>(\mathcal{H}_2,\ell_2, k_2)</math> with <math>k_i=|\mathcal{H}_i|</math>, let <math>1\leq h_0\leq R</math>, and define <math>M:=k_1+k_2+\ell_1+\ell_2</math>. In their paper, Goldston, Pintz and Yıldırım proved in two propositions that, under suitable conditions, two asymptotic formulas of the form

<math>\sum\limits_{n\leq N}\Lambda_R(n;\mathcal{H}_1,\ell_1)\Lambda_R(n;\mathcal{H}_2,\ell_2) = C_1\left(\mathcal{S}(\mathcal{H}^{i})+o_M(1)\right)N</math>

and

<math>\sum\limits_{n\leq N}\Lambda_R(n;\mathcal{H}_1,\ell_1)\Lambda_R(n;\mathcal{H}_2,\ell_2)\theta(n+h_0) = C_2\left(\mathcal{S}(\mathcal{H}^j)+o_M(1)\right)N</math>

hold, where <math>C_1,C_2</math> are two constants and <math>\mathcal{S}(\mathcal{H}^{i})</math>, <math>\mathcal{S}(\mathcal{H}^{j})</math> are two singular series whose description we omit here.

Finally, one can apply these results to <math>\mathcal{S}</math> to derive the theorem of Goldston, Pintz and Yıldırım on infinitely many pairs of consecutive primes whose distance is arbitrarily smaller than the average distance.[1]

References
