Fine-grained reduction

In computational complexity theory, a fine-grained reduction is a transformation from one computational problem to another, used to relate the difficulty of improving the time bounds for the two problems. Intuitively, it provides a method for solving one problem efficiently by using the solution to the other problem as a subroutine. If problem <math>A</math> can be solved in time <math>a(n)</math> and problem <math>B</math> can be solved in time <math>b(n)</math>, then the existence of an <math>(a,b)</math>-reduction from problem <math>A</math> to problem <math>B</math> implies that any significant speedup for problem <math>B</math> would also lead to a speedup for problem <math>A</math>.

Definition

Let <math>A</math> and <math>B</math> be computational problems, specified as the desired output for each possible input. Let <math>a</math> and <math>b</math> both be time-constructible functions that take an integer argument <math>n</math> and produce an integer result. Usually, <math>a</math> and <math>b</math> are the time bounds for known or naive algorithms for the two problems, and often they are monomials such as <math>n^2</math>.

Then <math>A</math> is said to be <math>(a,b)</math>-reducible to <math>B</math> if, for every real number <math>\epsilon>0</math>, there exist a real number <math>\delta>0</math> and an algorithm that solves an instance of problem <math>A</math> by transforming it into a sequence of instances of problem <math>B</math>, taking time <math>O\bigl(a(n)^{1-\delta}\bigr)</math> for the transformation on instances of size <math>n</math>, and producing a sequence of instances whose sizes <math>n_i</math> satisfy the bound <math>\sum_i b(n_i)^{1-\epsilon}<a(n)^{1-\delta}</math>.

The <math>(a,b)</math>-reduction itself is the mapping from <math>\epsilon</math> to the pair consisting of the algorithm and <math>\delta</math>.
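As a toy illustration (not taken from the source), consider reducing the Diameter problem to All-Pairs Shortest Paths: a Diameter instance of size <math>n</math> becomes a single APSP instance of the same size, so for every <math>\epsilon>0</math> one may take <math>\delta=\epsilon</math>, since <math>(n^3)^{1-\epsilon}<2(n^3)^{1-\epsilon}</math>; this gives an <math>(n^3,n^3)</math>-reduction. A minimal sketch, where the Floyd–Warshall routine stands in for an arbitrary APSP solver:

```python
def floyd_warshall(dist):
    """All-pairs shortest paths (the 'B' problem), running in O(n^3) time.
    `dist` is an n x n matrix of edge weights, with float('inf') for
    absent edges and 0 on the diagonal; it is updated in place."""
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def diameter_via_apsp(dist, solve_apsp=floyd_warshall):
    """Solve Diameter (the 'A' problem) with one oracle call to APSP.
    The transformation is the identity, so the single call has size
    n_1 = n, and the sum over calls of b(n_1)^(1-eps) = (n^3)^(1-eps)
    is within the required bound for delta = eps."""
    d = solve_apsp([row[:] for row in dist])  # copy so the input survives
    n = len(d)
    return max(d[i][j] for i in range(n) for j in range(n))
```

Any speedup of the APSP oracle to time <math>O\bigl((n^3)^{1-\epsilon}\bigr)</math> transfers immediately to Diameter, exactly as the definition requires.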

Speedup implication

Suppose <math>A</math> is <math>(a,b)</math>-reducible to <math>B</math>, and there exists <math>\epsilon>0</math> such that <math>B</math> can be solved in time <math>O\bigl(b(n)^{1-\epsilon}\bigr)</math>. Then there also exists <math>\delta>0</math> such that <math>A</math> can be solved in time <math>O\bigl(a(n)^{1-\delta}\bigr)</math>: let <math>\delta</math> be the value given by the <math>(a,b)</math>-reduction for this <math>\epsilon</math>, apply the transformation of the reduction, and use the fast algorithm for <math>B</math> on each resulting subproblem.
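The running-time accounting behind this claim is short. The total cost of solving <math>A</math> is the transformation time plus the cost of the oracle calls, so

<math>T_A(n) = O\bigl(a(n)^{1-\delta}\bigr) + \sum_i O\bigl(b(n_i)^{1-\epsilon}\bigr) \le O\bigl(a(n)^{1-\delta}\bigr) + O\bigl(a(n)^{1-\delta}\bigr) = O\bigl(a(n)^{1-\delta}\bigr),</math>

where the middle step uses the size bound <math>\sum_i b(n_i)^{1-\epsilon}<a(n)^{1-\delta}</math> from the definition of the reduction.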

Equivalently, if <math>A</math> cannot be solved in time significantly faster than <math>a(n)</math>, then <math>B</math> cannot be solved in time significantly faster than <math>b(n)</math>.

History

Fine-grained reductions were defined, in the special case that <math>a</math> and <math>b</math> are equal monomials, by Virginia Vassilevska Williams and Ryan Williams in 2010. They also showed the existence of <math>(n^3,n^3)</math>-reductions between several problems including all-pairs shortest paths, finding the second-shortest path between two given vertices in a weighted graph, finding negative-weight triangles in weighted graphs, and testing whether a given distance matrix describes a metric space. According to their results, either all of these problems have time bounds with exponents less than three, or none of them do.

The term "fine-grained reduction" comes from later work by Virginia Vassilevska Williams in an invited presentation at the 10th International Symposium on Parameterized and Exact Computation.

Although the original definition of fine-grained reductions involved deterministic algorithms, the corresponding concepts for randomized algorithms and nondeterministic algorithms have also been considered.
