Evolution of a random network


Evolution of a random network is a dynamical process, usually leading to the emergence of a giant component accompanied by striking consequences for the network topology. To quantify this process, one needs to examine how the size of the largest connected cluster within the network, <math>{N_{G}}</math>, varies with the average degree <math>{\left \langle k \right \rangle}</math>.[1] Networks change their topology as they evolve, undergoing phase transitions. Phase transitions are well known from physics, where they occur as matter changes state according to its thermal energy level, or when ferromagnetic properties emerge in some materials as they cool down. Such phase transitions take place in matter because it can be viewed as a network of particles, so the rules of network phase transitions apply to it directly. Phase transitions in networks happen as links are added to a network: given <math>{N}</math> nodes, in each time increment a link is placed between a randomly chosen pair of them. The transformation from a set of disconnected nodes to a fully connected network is called the evolution of a network.

If we begin with a network of <math>{N}</math> totally disconnected nodes (the number of links is zero) and start adding links between randomly selected pairs of nodes, the evolution of the network begins. For some time we merely create pairs of nodes. After a while some of these pairs connect, forming little trees. As we continue adding links, there comes a point when a giant component emerges as some of these isolated trees connect to each other. This is called the critical point. In the physical analogy, this point corresponds to the temperature at which a material changes its state. As links are added further, the giant component grows even larger, since more and more nodes gain a link to a node that is already part of the giant component. The other special moment in this transition is when the network becomes fully connected, that is, when all nodes belong to the single giant component, which is then effectively the network itself.[1]
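The evolution process described above is easy to simulate. The following short Python sketch (an illustration only, with arbitrary choices for the node count, seed and union-find bookkeeping; it is not taken from the cited sources) starts from <math>{N}</math> isolated nodes, adds links between randomly chosen pairs, and records how the relative size of the largest cluster grows with <math>{\left \langle k \right \rangle}</math>:

<syntaxhighlight lang="python">
import random

def largest_cluster_history(N=10_000, k_max=3.0, seed=42):
    """Start from N isolated nodes, add links between randomly chosen
    pairs, and record the relative size of the largest cluster N_G/N
    as a function of the average degree <k> = 2L/N."""
    rng = random.Random(seed)
    parent = list(range(N))   # union-find forest: each node starts as its own cluster
    size = [1] * N            # cluster sizes
    largest = 1

    def find(x):
        # locate the cluster representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    history = []
    links = 0
    while 2 * links / N < k_max:
        a, b = rng.randrange(N), rng.randrange(N)
        if a == b:
            continue              # skip self-loops (duplicate links are tolerated in this sketch)
        links += 1
        ra, rb = find(a), find(b)
        if ra != rb:              # the new link merges two clusters
            if size[ra] < size[rb]:
                ra, rb = rb, ra
            parent[rb] = ra
            size[ra] += size[rb]
            largest = max(largest, size[ra])
        history.append((2 * links / N, largest / N))
    return history

# Print N_G/N at a few values of <k>.
for k, frac in largest_cluster_history()[999::2500]:
    print(f"<k> = {k:.2f}   N_G/N = {frac:.3f}")
</syntaxhighlight>

For <math>{\left \langle k \right \rangle < 1}</math> the printed fraction stays close to zero, and it starts to grow rapidly once <math>{\left \langle k \right \rangle}</math> exceeds one, in line with the critical point discussed below.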

Conditions for emergence of a giant component

In the Erdős–Rényi model,[2][3] the average degree <math>\langle k \rangle </math> of a graph with <math>N</math> nodes and <math>L</math> links is given by <math>\langle k \rangle =\frac{2L}{N}</math>. The condition for the emergence of a giant component is:

<math>{\left \langle k \right \rangle=1}</math>.

Thus, on average one link per node is sufficient for the emergence of the giant component. Expressing the condition in terms of the linking probability <math>{p}</math>, one obtains:
<math>{p_c=\frac{1}{N-1}\approx \frac{1}{N}}</math> (1)
where <math>{N}</math> is the number of nodes and <math>{p_c}</math> is the critical linking probability. Therefore, the larger the network, the smaller the <math>{p}</math> that suffices for a giant component.
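For concreteness, the following snippet (illustrative only; the network sizes are arbitrary) evaluates equation (1) for a few values of <math>{N}</math>, showing how the critical probability shrinks as the network grows:

<syntaxhighlight lang="python">
# Critical linking probability p_c = 1/(N-1) ≈ 1/N for a few (arbitrary) network
# sizes: the larger the network, the smaller the p needed for a giant component.
for N in (10**3, 10**6, 7 * 10**9):     # 7*10^9 is comparable to the global social network
    p_c = 1 / (N - 1)
    print(f"N = {N:>13,}   p_c = {p_c:.3e}   1/N = {1 / N:.3e}")
</syntaxhighlight>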

Regimes of evolution of a random network

Three topological regimes, each with its own characteristics, can be distinguished in network science: the subcritical, supercritical and connected regimes, separated by the critical point.

Subcritical regime

The subcritical phase is characterised by small isolated clusters, as the number of links is much smaller than the number of nodes. The largest cluster can at any time be designated the giant component, but the differences in cluster sizes are effectively negligible in this phase.

<math>{0<\left \langle k \right \rangle< 1}</math>, <math>{\left(p<\frac{1}{N}\right)}</math>

For <math>{\left \langle k \right \rangle = 0}</math> the network consists of <math>{N}</math> isolated nodes. Increasing <math>{\left \langle k \right \rangle}</math> means adding <math>{L = \left \langle k \right \rangle N/2 = pN(N-1)/2}</math> links to the network. Yet, given that <math>{\left \langle k \right \rangle < 1}</math>, there are only a few links in this regime, so mainly tiny clusters are observed. At any moment the largest cluster can be designated the giant component. Yet in this regime the relative size of the largest cluster, <math>{\frac{N _{G}}{N}}</math>, remains zero. The reason is that for <math>{\left \langle k \right \rangle < 1}</math> the largest cluster is a tree with size <math>{N_{G}\sim \ln N}</math>, hence its size increases much more slowly than the size of the network. Therefore, <math>{N_{G}/N \sim \ln N/N\rightarrow 0 }</math> in the <math>{N\rightarrow \infty}</math> limit. In summary, in the subcritical regime the network consists of numerous tiny components whose sizes follow an exponential distribution. Hence, these components have comparable sizes, lacking a clear winner that could be designated a giant component.[1]
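The <math>{N_{G}\sim \ln N}</math> scaling can be checked numerically. The sketch below (assuming the networkx library is available; the choice <math>{\left \langle k \right \rangle = 0.5}</math>, the sizes and the seed are arbitrary) generates subcritical random networks of increasing size and compares the largest component with <math>{\ln N}</math>:

<syntaxhighlight lang="python">
import math
import networkx as nx

# Subcritical sketch: for fixed <k> = 0.5 the largest cluster grows roughly like
# ln N, so its relative size N_G/N vanishes as the network grows.
k = 0.5
for N in (1_000, 10_000, 100_000):
    G = nx.fast_gnp_random_graph(N, k / (N - 1), seed=1)   # p = <k> / (N - 1)
    N_G = max(len(c) for c in nx.connected_components(G))
    print(f"N = {N:>7}   N_G = {N_G:>4}   ln N = {math.log(N):5.1f}   N_G/N = {N_G / N:.5f}")
</syntaxhighlight>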

Critical point

As we keep adding links, pairs of nodes join into small trees, and at the critical point <math>{\left \langle k \right \rangle = 1}</math> a distinguishable giant component emerges.

This means that a giant component emerges at the moment when each node has, on average, one link. This point corresponds to the probability <math>{p_c=\frac{1}{N-1}}</math>: placing a link between two randomly chosen nodes is one case out of the <math>{N-1}</math> possible ways of connecting a given node to another node, since a node can connect to every other node but itself (self-loops are excluded in this model).

This also implies that the larger a network is, the smaller the <math>{p}</math> needed for a giant component to emerge.

<math>{\left \langle k \right \rangle= 1}</math>, <math>{\left(p=\frac{1}{N}\right)}</math>.

The critical point separates the regime where there is not yet a giant component ( <math>{0<\left \langle k \right \rangle< 1}</math> ) from the regime where there is one ( <math>{\left \langle k \right \rangle> 1}</math> ). At this point, the relative size of the largest component is still zero. Indeed, the size of the largest component is <math>{N_{G}\sim N^{\tfrac{2}{3}}}</math>. Consequently, <math>{N_{G}}</math> grows much more slowly than the network's size, so its relative size decreases as <math>{\frac{N_{G}}{N}\sim N^{-\tfrac{1}{3}}}</math> in the <math>{N\rightarrow \infty}</math> limit. Note, however, that in absolute terms there is a significant jump in the size of the largest component at <math>{\left \langle k \right \rangle=1}</math>. For example, for a random network with <math>{N = 7\times 10^{9}}</math> nodes, comparable to the globe's social network, for <math>{0<\left \langle k \right \rangle< 1}</math> the largest cluster is of the order of <math>{N_{G}\simeq \ln N = \ln(7\times 10^{9})\simeq 22.7}</math>. In contrast, at <math>{\left \langle k \right \rangle= 1}</math> we expect <math>{N_{G}\simeq N^{\tfrac{2}{3}} = (7\times 10^{9})^{\tfrac{2}{3}}\sim 3\times 10^{6}}</math>, a jump of about five orders of magnitude. Yet, both in the subcritical regime and at the critical point the largest component contains only a vanishing fraction of the total number of nodes in the network.

In summary, at the critical point most nodes are located in numerous small components, whose size distribution follows a power law. The power-law form indicates that components of rather different sizes coexist. These numerous small components are mainly trees, while the giant component may contain loops. Note that many properties of the network at the critical point resemble the properties of a physical system undergoing a phase transition.[1]
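The jump described above follows directly from the two scaling laws, as the short calculation below shows (using the same illustrative <math>{N = 7\times 10^{9}}</math> as in the text):

<syntaxhighlight lang="python">
import math

# At the critical point the largest cluster jumps from roughly ln N (subcritical)
# to roughly N^(2/3), illustrated here for N = 7*10^9 nodes.
N = 7 * 10**9
subcritical = math.log(N)       # about 22.7 nodes
at_critical = N ** (2 / 3)      # about 3.7 * 10^6 nodes
print(f"ln N        = {subcritical:.1f}")
print(f"N^(2/3)     = {at_critical:.2e}")
print(f"jump factor = {at_critical / subcritical:.1e}")   # roughly five orders of magnitude
</syntaxhighlight>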

Supercritical regime

Once the giant component has emerged upon passing the critical point, adding more links leaves the network with a growing giant component and fewer and fewer small isolated clusters and nodes. Most real networks belong to this regime. The size of the giant component varies as <math>{N_{G} \sim (p - p_{c})N}</math>.

<math>{\left \langle k \right \rangle>1}</math>, <math>{\left(p>\frac{1}{N}\right)}</math>.

This regime has the most relevance to real systems, as for the first time we have a giant component that looks like a network. In the vicinity of the critical point the size of the giant component varies as:

<math>{N_{G}/N \sim \left \langle k \right \rangle-1}</math>

or

<math>{N_{G} \sim (p - p_{c})N}</math> (2)

where <math>{p_c}</math> is given by (1). In other words, the giant component contains a finite fraction of the nodes, and the further we move from the critical point, the larger the fraction of nodes that belongs to it. Note that (2) is valid only in the vicinity of <math>{\left \langle k \right \rangle=1}</math>. For large <math>{\left \langle k \right \rangle}</math> the dependence between <math>{N_{G}}</math> and <math>{\left \langle k \right \rangle}</math> is nonlinear. In summary, in the supercritical regime numerous isolated components coexist with the giant component, their sizes following an exponential distribution. These small components are trees, while the giant component contains loops and cycles. The supercritical regime lasts until all nodes are absorbed by the giant component.[1]
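Relation (2) can be probed numerically as well. The sketch below (assuming networkx; the network size, seed and <math>{\left \langle k \right \rangle}</math> values are arbitrary) compares the measured relative size of the giant component with <math>{\left \langle k \right \rangle - 1}</math> just above the critical point:

<syntaxhighlight lang="python">
import networkx as nx

# Supercritical sketch: just above <k> = 1 the relative size of the giant component
# grows roughly in proportion to <k> - 1; the linear relation holds only near the
# critical point and breaks down for larger <k>.
N = 50_000
for k in (1.1, 1.2, 1.5, 2.0):
    G = nx.fast_gnp_random_graph(N, k / (N - 1), seed=7)
    N_G = max(len(c) for c in nx.connected_components(G))
    print(f"<k> = {k:.1f}   N_G/N = {N_G / N:.3f}   <k> - 1 = {k - 1:.1f}")
</syntaxhighlight>

Near <math>{\left \langle k \right \rangle = 1}</math> the measured fraction grows in proportion to <math>{\left \langle k \right \rangle - 1}</math> with a prefactor of order one, while for larger <math>{\left \langle k \right \rangle}</math> the dependence becomes nonlinear, as noted above.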

Connected regime

As connections are added to a network, there comes a point when <math>\langle k \rangle = \ln N</math>, and the giant component absorbs all nodes, so there are no isolated nodes or unconnected components.

<math>{\left \langle k \right \rangle >\ln N}</math>, <math>{\left ( p>\frac{\ln N}{N} \right)}</math>.

For sufficiently large <math>{p}</math> the giant component absorbs all nodes and components, hence <math>{N_{G} \simeq N}</math>. In the absence of isolated nodes the network becomes connected. The average degree at which this happens is <math>{\left \langle k \right \rangle = \ln N}</math>. Note that when we enter the connected regime the network is still relatively sparse, as <math>{\frac{\ln N}{N} \rightarrow 0}</math> for large <math>{N}</math>. The network turns into a complete graph only at <math>{\left \langle k \right \rangle = N - 1}</math>. In summary, the random network model predicts that the emergence of a network is not a smooth, gradual process: the isolated nodes and tiny components observed for small <math>\langle k \rangle</math> collapse into a giant component through a phase transition.[1]
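The connectivity threshold <math>{\left \langle k \right \rangle = \ln N}</math> can likewise be checked numerically. The sketch below (assuming networkx; <math>{N = 10^{4}}</math> and the seed are arbitrary) tests whether the network is fully connected below, at and above the threshold:

<syntaxhighlight lang="python">
import math
import networkx as nx

# Connected-regime sketch: around <k> = ln N the last isolated nodes disappear and
# the giant component absorbs essentially every node, yet the network stays sparse.
N = 10_000
ln_N = math.log(N)                       # about 9.2 for N = 10^4
for k in (0.5 * ln_N, ln_N, 1.5 * ln_N):
    G = nx.fast_gnp_random_graph(N, k / (N - 1), seed=3)
    N_G = max(len(c) for c in nx.connected_components(G))
    print(f"<k> = {k:5.1f}   N_G/N = {N_G / N:.4f}   connected: {nx.is_connected(G)}")
</syntaxhighlight>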

Examples for occurrences in nature

Water-ice transition

Phase transitions take place in matter, as it can be considered a network of particles. When water freezes, upon reaching 0 °C (the critical point) the crystalline structure of ice emerges in the manner of a random-network phase transition: as cooling continues, each water molecule binds strongly to four others, forming the ice lattice, which is the emerging network.

Magnetic phase transition

Similarly, the magnetic phase transition in ferromagnetic materials also follows the pattern of network evolution: above a certain temperature, the spins of individual atoms point in two different directions. However, upon cooling the material below a critical temperature, the spins start to point in the same direction, creating a magnetic field. The emergence of magnetic order in the material resembles the evolution of a random network.[1]

Applications

Physics and chemistry

As the above examples show, network theory applies to the structure of materials, and it is therefore also used in research on materials and their properties in physics and chemistry.

Particularly important areas are polymers,[4] gels,[5] and other material development such as cellulose with tunable properties.[6]

Biology and medicine

Phase transitions are used in research on the functioning of proteins and the emergence of diabetes at the cellular level.[7] Neuroscience also makes extensive use of the network-evolution model, as phase transitions occur in neuronal networks.[8]

Network science, statistics and machine learning

The phase transition of a network is naturally a building block of more advanced models within its own discipline as well. It reappears in research examining clustering and percolation in networks[9] and in the prediction of node properties.[10]

References


  1. Barabási, Albert-László. Network Science, Chapter 3.
  2. Erdős, P., & Rényi, A. (1960). "On the evolution of random graphs." Publ. Math. Inst. Hung. Acad. Sci. 5, 17–60. http://leonidzhukov.net/hse/2014/socialnetworks/papers/erdos-1960-10.pdf
  3. Erdős, P., & Rényi, A. (1959). "On random graphs I." Publ. Math. Debrecen 6, 290–297. http://www.leonidzhukov.net/hse/2016/networks/papers/erdos-1959-11.pdf
  4. Samulionis, V., Svirskas, Š., Banys, J., Sánchez-Ferrer, A., Gimeno, N., & Ros, M. B. (2015). "Phase Transitions in Smectic Bent-Core Main-Chain Polymer Networks Detected by Dielectric and Ultrasonic Techniques." Ferroelectrics, 479(1), 76–81.
  5. Habicht, A., Schmolke, W., Lange, F., Saalwachter, K., & Seiffert, S. (n.d.). "The Non-effect of Polymer-Network Inhomogeneities in Microgel Volume Phase Transitions: Support for the Mean-Field Perspective." Macromolecular Chemistry and Physics, 215(11), 1116–1133.
  6. Liu, C., Zhong, G., Huang, H., & Li, Z. (n.d.). "Phase assembly-induced transition of three dimensional nanofibril- to sheet-networks in porous cellulose with tunable properties." Cellulose, 21(1), 383–394.
  7. Stamper, I., Jackson, E., & Wang, X. (n.d.). "Phase transitions in pancreatic islet cellular networks and implications for type-1 diabetes." Physical Review E, 89(1).
  8. (Journal reference not preserved in the source.)
  9. Colomer-de-Simon, P., & Boguna, M. (2014). "Double percolation phase transition in clustered complex networks."
  10. Zhang, P., Moore, C., & Zdeborová, L. (2014). "Phase transitions in semisupervised clustering of sparse networks."