Lattice problem

In computer science, lattice problems are a class of optimization problems on lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems. For applications in such cryptosystems, lattices over vector spaces (often \mathbb{Q}^n) or free modules (often \mathbb{Z}^n) are generally considered.

For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space V and a norm N. The norm usually considered is the Euclidean norm L2, but other norms (such as Lp norms) are also considered and appear in a variety of results.[1] Let \lambda(L) denote the length of the shortest non-zero vector in the lattice L, that is,

\lambda(L) = \min_{v \in L \setminus \{\mathbf{0}\}} \|v\|_N.

Shortest vector problem (SVP)

[Figure: an illustration of the shortest vector problem (basis vectors in blue, shortest vector in red).]

In SVP, a basis of a vector space V and a norm N (often L2) are given for a lattice L, and one must find the shortest non-zero vector in L, as measured by N. In other words, the algorithm should output a non-zero vector v such that N(v)=\lambda(L).

In the \gamma-approximation version \text{SVP}_\gamma, one must find a non-zero lattice vector of length at most \gamma \lambda(L).
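
As a concrete (and highly inefficient) illustration of the definition, the sketch below finds an exact shortest vector of a tiny integer lattice by brute force. It assumes the shortest vector has coefficients bounded by "bound" with respect to the given basis, which is only reasonable for small, well-conditioned examples; the function name is ours, not a standard one.

from itertools import product
import math

def shortest_vector(basis, bound=5):
    """Return a shortest non-zero vector (and its length) of the lattice
    spanned by `basis`, a list of equal-length integer tuples, in the L2 norm."""
    n, dim = len(basis), len(basis[0])
    best, best_len = None, math.inf
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # SVP asks for a non-zero vector, so skip the origin
        v = tuple(sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim))
        length = math.sqrt(sum(x * x for x in v))
        if length < best_len:
            best, best_len = v, length
    return best, best_len

# The basis {(1, 2), (2, 3)} generates all of Z^2, so lambda(L) = 1:
# shortest_vector([(1, 2), (2, 3)])  ->  ((1, 0), 1.0)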

Known results

The exact version of the problem is only known to be NP-hard under randomized reductions.[2][3]

By contrast, the corresponding problem with respect to the uniform norm is known to be NP-hard.[4]

Approach techniques: the Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL) produces a "relatively short vector" in polynomial time, but does not solve the problem exactly. Kannan's HKZ basis reduction algorithm solves the problem exactly in n^{\frac{n}{2e} + o(n)} time, where n is the dimension. Lastly, Schnorr presented a technique that interpolates between LLL and HKZ, called block reduction. Block reduction works with HKZ bases of small blocks, and if the block size is chosen to be at least the dimension, the resulting algorithm becomes Kannan's full HKZ basis reduction.
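
To make the first of these techniques concrete, here is a minimal, self-contained sketch of the LLL algorithm with the usual parameter delta = 3/4. It uses plain floating-point arithmetic and recomputes the Gram–Schmidt orthogonalization from scratch where needed, so it is meant only as an illustration; production implementations use careful arithmetic and incremental updates.

def dot(u, v):
    return sum(float(a) * float(b) for a, b in zip(u, v))

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization of `basis` (no normalization)."""
    ortho = []
    for b in basis:
        v = [float(x) for x in b]
        for o in ortho:
            coef = dot(b, o) / dot(o, o)
            v = [vi - coef * oi for vi, oi in zip(v, o)]
        ortho.append(v)
    return ortho

def lll(basis, delta=0.75):
    """Return an LLL-reduced basis of the lattice spanned by `basis`
    (a list of integer vectors)."""
    basis = [list(b) for b in basis]
    n = len(basis)
    ortho = gram_schmidt(basis)
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = round(dot(basis[k], ortho[j]) / dot(ortho[j], ortho[j]))
            if q:
                basis[k] = [bk - q * bj for bk, bj in zip(basis[k], basis[j])]
        # Lovász condition with parameter delta.
        mu = dot(basis[k], ortho[k - 1]) / dot(ortho[k - 1], ortho[k - 1])
        if dot(ortho[k], ortho[k]) >= (delta - mu * mu) * dot(ortho[k - 1], ortho[k - 1]):
            k += 1
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            ortho = gram_schmidt(basis)  # the orthogonalization changes after a swap
            k = max(k - 1, 1)
    return basis

# Example: lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]) returns a basis of short,
# nearly orthogonal vectors generating the same lattice.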

GapSVP

The problem \text{GapSVP}_\beta consists of distinguishing between the instances of SVP in which the answer is at most 1 and those in which it is larger than \beta, where \beta can be a fixed function of the lattice dimension n. Given a basis for the lattice, the algorithm must decide whether \lambda(L) \leq 1 or \lambda(L)>\beta. As with other promise problems, the algorithm is allowed to answer arbitrarily in all other cases.
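
The following tiny sketch only illustrates this promise structure, not an algorithm for the problem: it assumes the exact value lam = \lambda(L) is somehow available (for tiny examples, e.g. from the brute-force SVP sketch above).

def gap_svp(lam, beta):
    if lam <= 1:
        return True    # YES instance: lambda(L) <= 1
    if lam > beta:
        return False   # NO instance: lambda(L) > beta
    return None        # promise violated; any answer is acceptable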

Yet another version of the problem is \text{GapSVP}_{\zeta,\gamma} for some functions \zeta,\gamma. The input to the algorithm is a basis B and a number d. It is guaranteed that all the vectors in the Gram–Schmidt orthogonalization are of length at least 1, that \lambda(L(B)) \leq \zeta(n), and that 1 \leq d \leq \zeta(n)/\gamma(n), where n is the dimension. The algorithm must accept if \lambda(L(B)) \leq d, and reject if \lambda(L(B)) \geq \gamma(n) \cdot d. For large \zeta (that is, \zeta(n)>2^{n/2}), the problem is equivalent to \text{GapSVP}_\gamma because[5] preprocessing with the LLL algorithm makes the second condition (and hence \zeta) redundant.

Closest vector problem (CVP)

[Figure: an illustration of the closest vector problem (basis vectors in blue, external vector in green, closest vector in red).]

In CVP, a basis of a vector space V and a metric M (often L2) are given for a lattice L, as well as a vector v in V but not necessarily in L. It is desired to find the vector in L closest to v (as measured by M). In the \gamma-approximation version \text{CVP}_\gamma, one must find a lattice vector at distance from v at most \gamma times the distance of the closest lattice vector.
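
One standard way to obtain an approximate answer in practice (not discussed in this article) is Babai's rounding heuristic: write the target vector in the coordinates of the basis, round each coordinate to the nearest integer, and map back. The sketch below assumes NumPy and works well only when the basis is already fairly reduced (e.g. LLL-reduced).

import numpy as np

def babai_round(basis, target):
    """Approximate CVP by Babai rounding.  `basis` is an n x n array whose
    rows are the basis vectors; `target` is a length-n vector."""
    B = np.asarray(basis, dtype=float)
    t = np.asarray(target, dtype=float)
    coeffs = np.linalg.solve(B.T, t)   # write the target in basis coordinates
    rounded = np.round(coeffs)         # round each coordinate to an integer
    return rounded @ B                 # map back: this is an actual lattice vector

# For the standard basis of Z^2 the answer is exact:
# babai_round([[1, 0], [0, 1]], [2.4, -0.7]) returns the lattice vector (2., -1.)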

Relationship with SVP

The closest vector problem is a generalization of the shortest vector problem. It is easy to show that given an oracle for \text{CVP}_\gamma (defined above), one can solve \text{SVP}_\gamma by making some queries to the oracle.[6] The naive method of finding the shortest vector by calling the \text{CVP}_\gamma oracle to find the closest vector to 0 does not work, because 0 is itself a lattice vector and the algorithm could simply output 0.

The reduction from \text{SVP}_\gamma to \text{CVP}_\gamma is as follows: Suppose that the input to the \text{SVP}_\gamma problem is the basis B=[b_1,b_2,\ldots,b_n] of a lattice. Consider the basis B^i=[b_1,\ldots,2b_i,\ldots,b_n], in which b_i is doubled, and let x_i be the vector returned by \text{CVP}_\gamma(B^i, b_i). Since b_i does not lie in the sublattice generated by B^i, the returned vector x_i differs from b_i, so x_i-b_i is a non-zero vector of the original lattice. The claim is that the shortest vector in the set \{x_i-b_i\} solves \text{SVP}_\gamma on the given lattice.
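
A sketch of this reduction, assuming a hypothetical black box cvp_oracle(basis, target) that returns a \gamma-approximate closest lattice vector (the oracle itself is not implemented here):

import math

def svp_from_cvp(basis, cvp_oracle):
    """Solve gamma-approximate SVP on `basis` using n calls to a CVP_gamma oracle."""
    best, best_len = None, math.inf
    for i, b_i in enumerate(basis):
        # Double the i-th basis vector.  The sublattice generated by the new
        # basis omits b_i itself, so the vector returned for target b_i cannot
        # be b_i, and the difference below is a non-zero lattice vector.
        doubled = [[2 * x for x in b] if j == i else list(b) for j, b in enumerate(basis)]
        x_i = cvp_oracle(doubled, b_i)
        cand = [xk - bk for xk, bk in zip(x_i, b_i)]
        length = math.sqrt(sum(c * c for c in cand))
        if length < best_len:
            best, best_len = cand, length
    return best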

Known results

Goldreich et al. showed that any hardness of SVP implies the same hardness for CVP.[7] Using PCP tools, Arora et al. showed that CVP is hard to approximate within a factor of 2^{\log^{1-\epsilon}(n)} unless \operatorname{NP} \subseteq \operatorname{DTIME}(2^{\operatorname{poly}(\log n)}).[8] Dinur et al. strengthened this by giving an NP-hardness result with \epsilon=(\log \log n)^c for c<1/2.[9]

Sphere decoding

Algorithms for CVP, especially the Fincke and Pohst variant,[10] have been used for data detection in multiple-input multiple-output (MIMO) wireless communication systems (for coded and uncoded signals).[11][12] In this context the approach is called sphere decoding, after the sphere radius used internally in many CVP solvers.[13]

It has also been applied to integer ambiguity resolution of carrier-phase GNSS (GPS),[14] where it is known as the LAMBDA method.

GapCVP

This problem is similar to the GapSVP problem. For \text{GapCVP}_\beta, the input consists of a lattice basis and a vector v, and the algorithm must distinguish between the following two cases:

  • there is a lattice vector at distance at most 1 from v;
  • every lattice vector is at distance greater than \beta from v.

Known results

The problem is trivially contained in NP for any approximation factor.

Schnorr, in 1987, showed that deterministic polynomial time algorithms can solve the problem for \beta=2^{O(n(\log \log n)^2/\log n)}.[15] Ajtai et al. showed that probabilistic algorithms can achieve a slightly better approximation factor of \beta=2^{O(n \log \log n/\log n)}.[16]

In 1993, Banaszczyk showed that \text{GapCVP}_n is in NP \cap coNP.[17] In 2000, Goldreich and Goldwasser showed that \beta=\sqrt{n/\log n} puts the problem in both NP and coAM.[18] In 2005, Aharonov and Regev showed that for some constant c, the problem with \beta=c\sqrt{n} is in NP \cap coNP.[19]

For lower bounds, Dinur et al. showed in 1998 that the problem is NP-hard for \beta=n^{c/\log\log n} for some constant c>0.[20]

Shortest independent vectors problem (SIVP)

Given a lattice L of dimension n, the algorithm must output n linearly independent lattice vectors v_1, v_2, \ldots, v_n such that \max_i \|v_i\| \leq \min_{B} \max_i \|b_i\|, where the minimum on the right-hand side is taken over all bases B=\{b_1,\ldots,b_n\} of the lattice.

In the \gamma-approximate version, given a lattice L of dimension n, one must find n linearly independent lattice vectors v_1, v_2,\ldots, v_n such that \max_i \|v_i\| \leq \gamma \lambda_n(L), where \lambda_n(L) is the n-th successive minimum of L.
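
In the same brute-force spirit as the SVP sketch above (and equally impractical beyond tiny examples), one can enumerate lattice vectors with bounded coefficients, sort them by length, and greedily keep the first n that are linearly independent. NumPy is assumed for the rank test, and the coefficient bound is an assumption of the sketch, not part of the problem.

from itertools import product
import numpy as np

def short_independent_vectors(basis, bound=5):
    """Return n short, linearly independent vectors of the lattice whose
    basis vectors are the rows of `basis` (an n x n integer array)."""
    basis = np.asarray(basis, dtype=int)
    n = basis.shape[0]
    candidates = [np.asarray(c) @ basis
                  for c in product(range(-bound, bound + 1), repeat=n)
                  if any(c)]
    candidates.sort(key=lambda v: float(v @ v))   # shortest first
    chosen = []
    for v in candidates:
        # Keep v only if it increases the rank of the chosen set.
        if np.linalg.matrix_rank(np.vstack(chosen + [v])) > len(chosen):
            chosen.append(v)
            if len(chosen) == n:
                break
    return chosen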

Bounded distance decoding

This problem is similar to CVP. Given a vector whose distance from the lattice is at most \lambda(L)/2, the algorithm must output the closest lattice vector to it.

Covering radius problem

Given a basis for the lattice, the algorithm must find the largest distance (or, in some versions, an approximation of it) from any point in the span of the lattice to the lattice.

Shortest basis problem

Many problems become easier if the input basis consists of short vectors. An algorithm that solves the Shortest Basis Problem (SBP) must, given a lattice basis B, output an equivalent basis B' such that the length of the longest vector in B' is as short as possible.

The approximation version \text{SBP}_\gamma consists of finding a basis whose longest vector is at most \gamma times longer than the longest vector in a shortest basis.

Use in cryptography

Average-case hardness of problems forms a basis for proofs of security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst-case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems to base cryptographic schemes on. Moreover, the worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers.

The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL) was an early efficient algorithm for this problem, able to output an almost-reduced lattice basis in polynomial time.[21] This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice. However, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai.[2]

In his seminal papers, Ajtai showed that the SVP problem is NP-hard (under randomized reductions) and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems.[2][22] Building on these results, Ajtai and Dwork created a public-key cryptosystem whose security could be proven using only the worst-case hardness of a certain version of SVP,[23] making it the first result to use worst-case hardness to create secure systems.[24]

References
