Adjacency matrix

In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph.

In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal. If the graph is undirected (i.e. all of its edges are bidirectional), the adjacency matrix is symmetric. The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory.

The adjacency matrix of a graph should be distinguished from its incidence matrix, a different matrix representation whose elements indicate whether vertex–edge pairs are incident or not, and its degree matrix, which contains information about the degree of each vertex.

Definition

For a simple graph with vertex set V = {v_1, ..., v_n}, the adjacency matrix is a square n × n matrix A such that its element A_{ij} is one when there is an edge from vertex v_i to vertex v_j, and zero when there is no edge.[1] The diagonal elements of the matrix are all zero, since edges from a vertex to itself (loops) are not allowed in simple graphs. It is also sometimes useful in algebraic graph theory to replace the nonzero elements with algebraic variables.[2] The same concept can be extended to multigraphs and graphs with loops by storing the number of edges between each two vertices in the corresponding matrix element, and by allowing nonzero diagonal elements. Loops may be counted either once (as a single edge) or twice (as two vertex–edge incidences), as long as a consistent convention is followed. Undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention.
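As an illustration, the following sketch builds such a matrix for a small simple undirected graph (assuming Python with NumPy; the graph, the variable names and the vertex numbering from 0 are chosen only for this example):

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical example: a simple undirected graph on vertices 0..3 given by its edge list.
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1   # an edge between i and j ...
    A[j, i] = 1   # ... makes the matrix symmetric for an undirected graph

print(A)
# [[0 1 1 0]
#  [1 0 1 0]
#  [1 1 0 1]
#  [0 0 1 0]]
</syntaxhighlight>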

Of a bipartite graph

The adjacency matrix A of a bipartite graph whose two parts have r and s vertices can be written in the form

<math>A = \begin{pmatrix} 0_{r,r} & B \\ B^\mathsf{T} & 0_{s,s} \end{pmatrix},</math>

where B is an r × s matrix, and 0_{r,r} and 0_{s,s} represent the r × r and s × s zero matrices. In this case, the smaller matrix B uniquely represents the graph, and the remaining parts of A can be discarded as redundant. B is sometimes called the biadjacency matrix.

Formally, let G = (U, V, E) be a bipartite graph with parts U = {u_1, ..., u_r}, V = {v_1, ..., v_s} and edges E. The biadjacency matrix is the r × s 0–1 matrix B in which b_{i,j} = 1 if and only if (u_i, v_j) ∈ E.

If G is a bipartite multigraph or weighted graph, then the elements b_{i,j} are taken to be the number of edges between the vertices or the weight of the edge (u_i, v_j), respectively.
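A minimal sketch of the block form above, again assuming Python with NumPy and an arbitrarily chosen example biadjacency matrix B:

<syntaxhighlight lang="python">
import numpy as np

# Example biadjacency matrix of a bipartite graph whose parts have r = 2 and s = 3 vertices.
B = np.array([[1, 0, 1],
              [0, 1, 1]])
r, s = B.shape

# Assemble the full adjacency matrix A = [[0, B], [B^T, 0]].
A = np.block([[np.zeros((r, r), dtype=int), B],
              [B.T, np.zeros((s, s), dtype=int)]])
print(A)
# [[0 0 1 0 1]
#  [0 0 0 1 1]
#  [1 0 0 0 0]
#  [0 1 0 0 0]
#  [1 1 0 0 0]]
</syntaxhighlight>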

Variations

An (a, b, c)-adjacency matrix A of a simple graph has A_{i,j} = a if (i, j) is an edge, b if it is not, and c on the diagonal. The Seidel adjacency matrix is a (−1, 1, 0)-adjacency matrix. This matrix is used in studying strongly regular graphs and two-graphs.[3]

The distance matrix has in position (i, j) the distance between vertices v_i and v_j. The distance is the length of a shortest path connecting the vertices. Unless lengths of edges are explicitly provided, the length of a path is the number of edges in it. The distance matrix resembles a high power of the adjacency matrix, but instead of telling only whether or not two vertices are connected (i.e., the connection matrix, which contains Boolean values), it gives the exact distance between them.
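As a sketch of this relationship (assuming Python with NumPy; the function name is illustrative), the distance between two vertices can be read off as the smallest power k for which the corresponding entry of A^k becomes non-zero:

<syntaxhighlight lang="python">
import numpy as np

def distance_matrix(A):
    """Distances in an unweighted graph, read off from successive powers of A."""
    n = A.shape[0]
    dist = np.full((n, n), -1)       # -1 marks pairs with no path found so far
    np.fill_diagonal(dist, 0)
    walks = np.eye(n, dtype=int)     # A^0
    for k in range(1, n):            # a shortest path has at most n - 1 edges
        walks = walks @ A            # walks is now A^k, counting walks of length k
        dist[(walks > 0) & (dist < 0)] = k
    return dist                      # remaining -1 entries belong to different components
</syntaxhighlight>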

Examples

Undirected graphs

The convention followed here (for undirected graphs) is that each edge adds 1 to the appropriate cell in the matrix, and each loop adds 2.[4] This allows the degree of a vertex to be easily found by taking the sum of the values in either its respective row or column in the adjacency matrix.

Labeled graph: File:6n-graph2.svg (coordinates are 1–6).
Adjacency matrix:
<math>\begin{pmatrix}
2 & 1 & 0 & 0 & 1 & 0\\
1 & 0 & 1 & 0 & 1 & 0\\
0 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 1\\
1 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0
\end{pmatrix}</math>

Labeled graph: File:Symmetric group 4; Cayley graph 1,5,21 (Nauru Petersen); numbers.svg (the Nauru graph).
Adjacency matrix: File:Symmetric group 4; Cayley graph 1,5,21 (adjacency matrix).svg (coordinates are 0–23; white fields are zeros, colored fields are ones).
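Under this convention, the degree of each vertex in the first example is simply the corresponding row (or column) sum; a quick check of the 6 × 6 matrix above, assuming Python with NumPy:

<syntaxhighlight lang="python">
import numpy as np

# The 6 x 6 example matrix above; the loop at vertex 1 contributes 2 to the diagonal.
A = np.array([[2, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 0, 1, 0, 0]])

degrees = A.sum(axis=1)    # row sums; column sums give the same values by symmetry
print(degrees)             # [4 3 2 3 3 1]
</syntaxhighlight>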

Directed graphs

The adjacency matrix of a directed graph can be asymmetric. One can define the adjacency matrix of a directed graph either such that

  1. a non-zero element A_{ij} indicates an edge from i to j, or
  2. it indicates an edge from j to i.

The former definition is commonly used in graph theory and social network analysis (e.g., sociology, political science, economics, psychology).[5] The latter is more common in other applied sciences (e.g., dynamical systems, physics, network science), where A is sometimes used to describe linear dynamics on graphs.[6]

Using the first definition, the in-degree of a vertex can be computed by summing the entries of the corresponding column, and the out-degree by summing the entries of the corresponding row. When using the second definition, the in-degree of a vertex is given by the corresponding row sum and the out-degree is given by the corresponding column sum.
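A minimal sketch of the first convention (a non-zero A_{ij} means an edge from i to j), assuming Python with NumPy and a small example digraph:

<syntaxhighlight lang="python">
import numpy as np

# Example directed graph on vertices 0..2 with edges 0->1, 0->2 and 2->0.
A = np.array([[0, 1, 1],
              [0, 0, 0],
              [1, 0, 0]])

out_degrees = A.sum(axis=1)     # row sums under the first convention
in_degrees = A.sum(axis=0)      # column sums under the first convention
print(out_degrees, in_degrees)  # [2 0 1] [1 1 1]
</syntaxhighlight>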

Labeled graph: File:Symmetric group 4; Cayley graph 4,9; numbers.svg (directed Cayley graph of S4).
Adjacency matrix: File:Symmetric group 4; Cayley graph 4,9 (adjacency matrix).svg (coordinates are 0–23).
As the graph is directed, the matrix is not necessarily symmetric.

Trivial graphs

The adjacency matrix of a complete graph contains all ones except along the diagonal where there are only zeros. The adjacency matrix of an empty graph is a zero matrix.
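For instance (a sketch assuming Python with NumPy):

<syntaxhighlight lang="python">
import numpy as np

n = 5
A_complete = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)  # complete graph K_n
A_empty = np.zeros((n, n), dtype=int)                           # empty graph on n vertices
</syntaxhighlight>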

Properties

Spectrum

The adjacency matrix of an undirected simple graph is symmetric, and therefore has a complete set of real eigenvalues and an orthogonal eigenvector basis. The set of eigenvalues of a graph is the spectrum of the graph.[7] It is common to denote the eigenvalues by <math>\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n.</math>

The greatest eigenvalue <math>\lambda_1</math> is bounded above by the maximum degree. This can be seen as a result of the Perron–Frobenius theorem, but it can also be proved easily. Let v be an eigenvector associated to <math>\lambda_1</math> and let x be the index of a component in which v has maximum absolute value. Without loss of generality assume v_x is positive, since otherwise one can simply take the eigenvector <math>-v</math>, which is also associated to <math>\lambda_1</math>. Then

<math>\lambda_1 v_x = (Av)_x = \sum_{y=1}^n A_{x,y}v_y \leq \sum_{y=1}^n A_{x,y} v_x = v_x \deg(x).</math>

For d-regular graphs, d is the first eigenvalue of A for the vector v = (1, ..., 1) (it is easy to check that it is an eigenvalue, and it is the maximum because of the above bound). The multiplicity of this eigenvalue is the number of connected components of G, in particular <math>\lambda_1>\lambda_2</math> for connected graphs. It can be shown that for each eigenvalue <math>\lambda_i</math>, its opposite <math>-\lambda_i = \lambda_{n+1-i}</math> is also an eigenvalue of A if G is a bipartite graph.[8] In particular −d is an eigenvalue of any d-regular bipartite graph.

The difference <math>\lambda_1 - \lambda_2</math> is called the spectral gap and it is related to the expansion of G. It is also useful to introduce the spectral radius of <math>A</math> denoted by <math>\lambda(G) = \max_{\left|\lambda_i\right| < d} |\lambda_i|</math>. This number is bounded by <math>\lambda(G) \geq 2\sqrt{d-1} - o(1)</math>. This bound is tight in the Ramanujan graphs, which have applications in many areas.
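The following sketch (assuming Python with NumPy; the 4-cycle C_4 is chosen only as an example of a d-regular bipartite graph) computes the spectrum, checks the degree bound on the largest eigenvalue, and reads off the spectral gap:

<syntaxhighlight lang="python">
import numpy as np

# Adjacency matrix of the 4-cycle C_4, a 2-regular bipartite graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

eigenvalues = np.sort(np.linalg.eigvalsh(A))[::-1]   # sorted so that lambda_1 >= ... >= lambda_n
print(eigenvalues)                                   # approximately [ 2.  0.  0. -2.]
print(eigenvalues[0] <= A.sum(axis=1).max())         # lambda_1 is at most the maximum degree: True
spectral_gap = eigenvalues[0] - eigenvalues[1]       # here 2, since C_4 is connected
</syntaxhighlight>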

Isomorphism and invariants

Suppose two directed or undirected graphs G_1 and G_2 with adjacency matrices A_1 and A_2 are given. G_1 and G_2 are isomorphic if and only if there exists a permutation matrix P such that

<math>P A_1 P^{-1} = A_2.</math>

In particular, A_1 and A_2 are similar and therefore have the same minimal polynomial, characteristic polynomial, eigenvalues, determinant and trace. These can therefore serve as isomorphism invariants of graphs. However, two graphs may possess the same set of eigenvalues but not be isomorphic.[9] Such linear operators are said to be isospectral.
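As a sketch (assuming Python with NumPy; the path graph and the permutation are chosen only for illustration), relabeling the vertices by a permutation matrix P yields a similar matrix with the same spectrum:

<syntaxhighlight lang="python">
import numpy as np

# Adjacency matrix of the path 0 - 1 - 2.
A1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])

perm = [2, 0, 1]                       # example relabeling of the vertices
P = np.eye(3, dtype=int)[perm]         # permutation matrix built from rows of the identity
A2 = P @ A1 @ P.T                      # P is orthogonal, so P^{-1} = P^T

print(np.linalg.eigvalsh(A1))          # approximately [-1.414  0.     1.414]
print(np.linalg.eigvalsh(A2))          # same spectrum: the two matrices represent isomorphic graphs
</syntaxhighlight>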

Matrix powers

If A is the adjacency matrix of the directed or undirected graph G, then the matrix A^n (i.e., the matrix product of n copies of A) has an interesting interpretation: the element (i, j) gives the number of (directed or undirected) walks of length n from vertex i to vertex j. If n is the smallest nonnegative integer such that, for some i and j, the element (i, j) of A^n is positive, then n is the distance between vertex i and vertex j. A great example of how this is useful is in counting the number of triangles in an undirected graph G, which is exactly the trace of A^3 divided by 6. We divide by 6 to compensate for the overcounting of each triangle (3! = 6 times). The adjacency matrix can be used to determine whether or not the graph is connected.
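Both facts can be checked directly, assuming Python with NumPy (the graph, a triangle with one pendant edge, is chosen only as an example):

<syntaxhighlight lang="python">
import numpy as np

# Triangle on vertices 0, 1, 2 plus the extra edge 2 - 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

A3 = np.linalg.matrix_power(A, 3)
print(A3[0, 1])                  # 3 walks of length 3 from vertex 0 to vertex 1
print(np.trace(A3) // 6)         # 1 triangle; the trace counts each triangle 3! = 6 times
</syntaxhighlight>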

Data structures

The adjacency matrix may be used as a data structure for the representation of graphs in computer programs for manipulating graphs. The main alternative data structure, also in use for this application, is the adjacency list.[10][11]

The space needed to represent an adjacency matrix and the time needed to perform operations on it depend on the matrix representation chosen for the underlying matrix. Sparse matrix representations only store non-zero matrix entries and implicitly represent the zero entries. They can, for example, be used to represent sparse graphs without incurring the space overhead from storing the many zero entries in the adjacency matrix of the sparse graph. In the following section the adjacency matrix is assumed to be represented by an array data structure so that zero and non-zero entries are all directly represented in storage.

Because each entry in the adjacency matrix requires only one bit, it can be represented in a very compact way, occupying only |V|^2/8 bytes to represent a directed graph, or (by using a packed triangular format and only storing the lower triangular part of the matrix) approximately |V|^2/16 bytes to represent an undirected graph. Although slightly more succinct representations are possible, this method gets close to the information-theoretic lower bound for the minimum number of bits needed to represent all n-vertex graphs.[12] For storing graphs in text files, fewer bits per byte can be used to ensure that all bytes are text characters, for instance by using a Base64 representation.[13] Besides avoiding wasted space, this compactness encourages locality of reference. However, for a large sparse graph, adjacency lists require less storage space, because they do not waste any space representing edges that are not present.[11][14]
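A sketch of such a bit-packed layout, assuming Python with NumPy (the helper name has_edge is illustrative):

<syntaxhighlight lang="python">
import numpy as np

# Pack a 0/1 adjacency matrix into one bit per entry, roughly |V|^2 / 8 bytes in total.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=np.uint8)
n = A.shape[0]

packed = np.packbits(A)      # row-major, 8 matrix entries per byte (most significant bit first)
print(packed.nbytes)         # 2 bytes for this 4-vertex directed example

def has_edge(i, j):
    idx = i * n + j          # position of entry (i, j) in the flattened matrix
    return (packed[idx // 8] >> (7 - idx % 8)) & 1

print(has_edge(0, 1), has_edge(1, 0))   # 1 0
</syntaxhighlight>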

An alternative form of adjacency matrix (which, however, requires a larger amount of space) replaces the numbers in each element of the matrix with pointers to edge objects (when edges are present) or null pointers (when there is no edge).[14] It is also possible to store edge weights directly in the elements of an adjacency matrix.[11]

Besides the space tradeoff, the different data structures also facilitate different operations. Finding all vertices adjacent to a given vertex in an adjacency list is as simple as reading the list, and takes time proportional to the number of neighbors. With an adjacency matrix, an entire row must instead be scanned, which takes a larger amount of time, proportional to the number of vertices in the whole graph. On the other hand, testing whether there is an edge between two given vertices can be determined at once with an adjacency matrix, while requiring time proportional to the minimum degree of the two vertices with the adjacency list.[11][14]
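A minimal sketch contrasting the two operations, assuming Python (the small undirected example graph is illustrative):

<syntaxhighlight lang="python">
# Adjacency list of a 4-vertex undirected graph.
adj_list = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

neighbors = adj_list[2]       # reading the list: time proportional to the degree of vertex 2
has_edge = 3 in adj_list[0]   # edge test: up to deg(0) comparisons, here False
# With an adjacency matrix, the same edge test is a single lookup A[0][3],
# but listing the neighbors of a vertex means scanning an entire row of length |V|.
</syntaxhighlight>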

See also

References


  1. Шаблон:Citation.
  2. Шаблон:Citation.
  3. Шаблон:Cite journal
  4. Шаблон:Cite conference
  5. Шаблон:Citation
  6. Шаблон:Citation
  7. Шаблон:Harvtxt, Chapter 2 ("The spectrum of a graph"), pp. 7–13.
  8. Шаблон:Citation
  9. Godsil, Chris; Royle, Gordon, Algebraic Graph Theory, Springer (2001), Шаблон:ISBN, p. 164.
  10. Шаблон:Harvtxt, p. 361: "There are two data structures that people often use to represent graphs, the adjacency list and the adjacency matrix."
  11. Шаблон:Citation.
  12. Шаблон:Citation.
  13. Шаблон:Citation.
  14. Шаблон:Citation.