Some of the most daunting scientific challenges of the 21st century involve complex social, technological, and biological systems—from the stability of global financial and economic networks to the spreading of epidemics, the web of biotic interactions in an ecosystem, and metabolic and transcriptional processes inside cells and tissues. An important theoretical foundation for understanding complexity is network science, which focuses on the structure and function of systems that are composed of numerous elements and their interactions. Over the past couple of decades, the network perspective has gained considerable ground in neuroscience. Brain networks have become a fertile area of research, now called network neuroscience, ranging across different scales, from interacting biomolecules all the way to social behavior. A major driving force has been the application of mathematical and computational network science tools to neurobiological systems, especially models and measures of graph theory.
Graph theory is a branch of mathematics that dates back to the 18th century. Today, applications of graph theory pervade all scientific disciplines as well as many modern information and computing technologies. The brain is a natural fit for graph theory approaches as it is readily represented as a network (a graph) of elements and their pairwise interconnections, also called nodes and edges. Comprehensive maps of brain connectivity have given rise to the emerging field of connectomics, a central focus of which is the systematic and quantitative study of brain networks and graphs.
Graph theory methods, when applied properly, can offer important new insights into the structure and function of networked brain systems, including their architecture, evolution, development, and clinical disorders. This brief review surveys some of the most relevant graph theory methods and illustrates their application in various neurobiological contexts. Comprehensive coverage of the topic is beyond the scope of this article (see a recent textbook). Instead, the emphasis here is on highlighting some new methodological trends, discussing their application to brain data, and suggesting future avenues for graphical models and measures.
Networks or graphs are collections of elements (nodes, vertices) and their pairwise links (edges, connections) which, in their simplest form, can be summarized in the form of a connection (or adjacency) matrix. The complete set of all pairwise connections defines the graph's topology, providing a complete map of all relations among nodes and edges. Brain nodes may be individual neurons or entire brain regions, depending on the measurement technique. Edges can take on binary or weighted values, and they can be directed or undirected, depending on how interactions are estimated from empirical data. The selection of appropriate graph theory methods for modeling and analyzing empirical data requires that the nature of the edge representation is taken into account. Put differently, not all graph theory methods fit all purposes.
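For concreteness, the sketch below builds a small adjacency matrix for a hypothetical four-node weighted, undirected network (the weights are made up for illustration and do not represent real brain data):

```python
import numpy as np

# Hypothetical 4-node network; entry A[i, j] holds the weight of the
# edge between nodes i and j (0 means no connection).
A = np.array([
    [0.0, 1.0, 0.5, 0.0],
    [1.0, 0.0, 0.0, 2.0],
    [0.5, 0.0, 0.0, 1.5],
    [0.0, 2.0, 1.5, 0.0],
])

# An undirected graph yields a symmetric adjacency matrix.
assert np.allclose(A, A.T)

# Thresholding the weights recovers the binary (unweighted) version.
A_bin = (A > 0).astype(int)
```

A directed graph would simply drop the symmetry requirement, with rows (or columns) read as sources and targets of each edge.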
The two most common species of brain graphs describe structural and functional connectivity among neural elements. Structural graphs are generally sparse (most possible structural connections in a given nervous system do not exist) and temporally relatively stable (but subject to plasticity and development). In contrast, functional graphs record statistical dependencies among neuronal time series, and hence are often dense and highly variable across time. The dichotomy of structural and functional graphs is important to consider, as each domain draws on a specific subset of graph theory methods that are commensurate with the origin of the data.
The definition of nodes generally requires sophisticated data-processing pipelines and is the subject of much methodological discussion. In whole-brain networks acquired with neuroimaging, nodes are often derived from a parcellation of the original voxel-level data, designed to extract coherent brain “areas” that form the building blocks of structural or functional brain networks. Some approaches leverage anatomical templates, others pursue parcellation in a data-driven manner, eg, by boundary detection and clustering patterns of connectivity. A recent multimodal study employed machine learning to extract coherent brain regions on the basis of several different anatomical and functional criteria. Node definition is considerably more straightforward in studies of circuits and neuronal populations, where individual neurons are natural candidates for individual network nodes.
A major simplification inherent in most current applications of graph theory is the assumption that, within a given network representation, all nodes and edges are identical and homogeneous. Annotation of nodes and edges can address this limitation of simple graphs, by allowing additional layers of data to be linked to network elements, which can be useful for identifying biologically meaningful network communities. A further elaboration of simple graphs is the inclusion of multidimensional relationships, which can be expressed in multilayer networks, composed of different layers that encode different types of interactions (eg, synaptic links, temporal correlations, transmitter systems, or gene coexpression).
Graphs can be investigated at different levels of scale, and specific measures capture graph attributes at local (nodal) and global (network-wide) scales. Nodal measures include simple statistics such as node degree or strength, while global measures express network-wide attributes such as path length or efficiency. Intermediate scales can be accessed via hierarchical neighborhoods around single graph elements, or by considering subgraphs or motifs. Motifs are defined as subsets of network nodes and their mutual edges whose patterns of connectivity can be classified into distinct motif categories. In empirical networks, these categories often occur in characteristic frequencies that can be compared with distributions from appropriate (random) null models. In the brain, motif analysis has been applied to structural and functional graphs.
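The three scales can be illustrated side by side on a toy graph (using NetworkX; the graph is arbitrary, and the triangle count stands in for the simplest kind of 3-node motif):

```python
import networkx as nx

# Illustrative graph: a triangle (nodes 0-2) with a two-node tail.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)])

deg = dict(G.degree())                  # local (nodal) measure: degree
tri = nx.triangles(G)                   # intermediate scale: triangles per node
L = nx.average_shortest_path_length(G)  # global measure: characteristic path length
```

Node 2 sits in the triangle and anchors the tail, so it has the highest degree; nodes 3 and 4 participate in no triangle at all.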
Most highly resolved structural brain networks are not fully, or even densely, connected. In such sparsely connected graphs, the minimal topological distance between two nodes, ie, the length of the shortest path, often involves multiple steps. Network paths are composed of unique edges that are traversed only once, while the usually much more abundant walks between two nodes can use edges any number of times. Paths and walks are considered important for the flow of signals and communication, and are the basis for popular graph metrics such as the so-called efficiency, which defines the global capacity of a graph to pass information via short paths. Importantly, concepts of path lengths and efficiency are most naturally applied to brain graphs that represent structural connectivity. In contrast, they have rather different (and potentially problematic) interpretations when applied to functional connectivity. The distinction derives from the nature of functional connectivity which reports statistical associations among neural time series rather than a web of physical links that propagates neural signals.
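A minimal sketch of these two quantities, on an arbitrary five-node chain (not a brain graph), shows how adding a single shortcut edge raises global efficiency:

```python
import networkx as nx

# Arbitrary sparse toy graph: a 5-node chain 0-1-2-3-4.
G = nx.path_graph(5)

# Shortest path length (topological distance) between the endpoints.
d = nx.shortest_path_length(G, source=0, target=4)

# Global efficiency: mean of 1/distance over all node pairs;
# short paths everywhere push this value toward 1.
E = nx.global_efficiency(G)

# A direct "shortcut" between the endpoints raises efficiency.
G.add_edge(0, 4)
E_shortcut = nx.global_efficiency(G)
```

The same routine applied to a functional connectivity graph would run without error, but, as noted above, its interpretation as communication capacity would be questionable.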
Among the most widely encountered and biologically meaningful aspects of brain networks is their organization into distinct network communities or modules. Modules are useful to partition larger networks into basic “building blocks,” ie, internally densely connected clusters that are more weakly interconnected with each other. Modular partitions have neurobiological significance as their boundaries separate functionally related neural elements, define critical bridges and hubs that join communities, channel and restrict the flow of neural signals and information, and limit the uncontrolled spread of perturbations.
There are numerous computational techniques for extracting communities and modules from complex networks. One of the most widely used approaches in network neuroscience is modularity maximization, which aims to divide a given network into a set of non-overlapping communities by maximizing a global objective function, the modularity metric. Originally, this metric was formulated to detect communities whose internal density of connections is maximal, relative to a degree-preserving null model. More recently, variants of the metric that can be applied to directed and signed networks have been proposed. Of special note are variants of the scheme that are designed to deal with correlation matrices, a data type that is often encountered in studies of functional connectivity.
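As a sketch (using NetworkX's greedy modularity heuristic, one of several maximization strategies, on an assumed toy network of two cliques joined by a bridge):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Assumed toy network: two 4-node cliques joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from((i, j) for i in range(4) for j in range(i + 1, 4))
G.add_edges_from((i, j) for i in range(4, 8) for j in range(i + 1, 8))
G.add_edge(3, 4)  # bridge between the two clusters

# Greedy agglomerative modularity maximization.
communities = [set(c) for c in greedy_modularity_communities(G)]
Q = modularity(G, communities)  # value of the modularity objective
```

On such a clean structure the heuristic recovers the two cliques; on noisy empirical networks different heuristics (and different runs) can return different, nearly degenerate partitions, which motivates the consensus methods discussed below.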
Modularity maximization faces important methodological limitations that need to be addressed. One limitation refers to the existence of several, often quite numerous, distinct partitions that are almost equally optimal (ie, degenerate) under the modularity metric. Given the inherent noisiness of empirical estimates of the network topology, it seems arbitrary to pick a single “optimal” partition as the sole representative of the network's community structure. Instead, multiple degenerate solutions should be aggregated into consensus partitions, for example by forming an agreement matrix that can then be reclustered until a single consensus partition emerges. The application of such a consensus approach can also reveal additional facets of community organization, including the degree to which individual nodes are affiliated with their host community.
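The agreement-matrix idea can be sketched with three hypothetical degenerate partitions of a six-node network (the labels are invented; a single thresholding step stands in for the iterative reclustering used in practice):

```python
import numpy as np
import networkx as nx

# Three hypothetical, nearly equally optimal partitions of 6 nodes,
# each given as one community label per node.
partitions = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
]

n = len(partitions[0])
# Agreement matrix: fraction of partitions placing nodes i and j
# in the same community.
agree = np.zeros((n, n))
for labels in partitions:
    lab = np.asarray(labels)
    agree += (lab[:, None] == lab[None, :])
agree /= len(partitions)

# Simplest consensus step: threshold the agreement matrix and read
# off connected components (in practice the thresholded matrix is
# re-clustered iteratively until a stable partition emerges).
H = nx.from_numpy_array((agree > 0.5).astype(int))
consensus = [set(c) for c in nx.connected_components(H)]
```

Node 2, which switches allegiance in one of the three partitions, has an agreement of only 2/3 with its consensus community, illustrating how the matrix also quantifies the strength of each node's affiliation.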
Another fundamental limitation is the inability of the original modularity metric to detect modules below a certain size. This resolution limit can be addressed in a number of ways. One common avenue is the inclusion of an additional resolution parameter into the modularity metric that rescales the intrinsic null model and allows the detection of smaller or larger communities. Varying the resolution parameter is important since many brain networks exhibit communities across different scales, which renders the selection of a single scale of analysis potentially problematic. The issue becomes of fundamental neurobiological importance when community detection methods are used to identify, for example, specific partitions of the brain into resting-state functional networks or “functional systems.” While several landmark studies have proposed canonical partitions of such networks that are now widely applied in the field, it should be noted that other partitions at both finer and coarser scales may represent additional levels of organization. To date, most studies circumvent full multiscale analysis by selecting one or a few settings of the resolution parameter, usually based on some criteria of partition stability.
One way to preserve and represent the full multiscale structure of brain networks is to perform consensus clustering across multiple spatial resolutions, an approach that combines sampling the entire range of possible spatial resolutions with a hierarchical consensus clustering procedure. The approach returns a coassignment matrix that captures the probability that each node pair remains associated within the same community as the scale is varied, together with a hierarchy that illustrates their mutual relations. Multiresolution consensus clustering delivers a more complete picture of community structure than is provided by single partitions, and avoids complicated and often rather arbitrary models for selecting relevant scales in brain networks.
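A bare-bones version of the coassignment matrix can be sketched by sweeping the resolution parameter of NetworkX's greedy modularity routine (the `resolution` keyword requires a recent NetworkX release; the test network and the sampled scales are illustrative):

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Assumed test network: 6 cliques of 5 nodes each, arranged in a ring.
G = nx.ring_of_cliques(6, 5)
n = G.number_of_nodes()

# Sweep the resolution parameter and accumulate how often each node
# pair lands in the same community (a coassignment matrix).
gammas = [0.5, 1.0, 2.0, 4.0]  # illustrative sample of scales
coassign = np.zeros((n, n))
for gamma in gammas:
    labels = np.empty(n, dtype=int)
    for c_idx, comm in enumerate(greedy_modularity_communities(G, resolution=gamma)):
        for node in comm:
            labels[node] = c_idx
    coassign += labels[:, None] == labels[None, :]
coassign /= len(gammas)
```

Node pairs within the same clique co-assign at essentially every scale, while pairs in different cliques co-assign only at coarse scales (if at all); the full hierarchical procedure then clusters this matrix rather than any single partition.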
Finally, many extensions of the above framework for detecting communities in brain networks should be noted. While modularity maximization detects non-overlapping communities, it may also be useful to define modules as overlapping communities (eg, see ref 38), ie, sets of nodes where some, or all, nodes maintain multiple affiliations. Other methods, eg, multislice modularity, are designed to track modular partitions, and their nodal memberships, across time. Yet another promising avenue, with deep historical roots in the social sciences, is the use of block models. Block modeling attempts to fit a statistical model for generating networks to empirical data to identify those model parameters that provide the best match. For example, those parameters may correspond to the number of blocks and the corresponding within- and between-block connection probabilities. Since block models are not limited to strictly modular arrangements (maximizing within-module density while minimizing between-module density), they can detect more complex block structure in networks, including the existence of a dense core and a more weakly connected periphery. In addition to versatility, block models offer the advantage of fitting a generative model to data (see below), which may offer insight into the processes that underlie the observed network topology.
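The core-periphery case can be sketched with a stochastic block model (the block sizes and probabilities below are invented; with block memberships held fixed, the maximum-likelihood parameter estimates reduce to the empirical block densities):

```python
import numpy as np
import networkx as nx

# Assumed core-periphery block model: a dense core (block 0) and a
# sparse periphery (block 1), with illustrative probabilities.
sizes = [10, 10]
probs = [[0.80, 0.30],
         [0.30, 0.05]]
G = nx.stochastic_block_model(sizes, probs, seed=42)

# With the partition fixed, "fitting" the model amounts to measuring
# the empirical connection density within and between blocks.
A = nx.to_numpy_array(G)
core, per = slice(0, 10), slice(10, 20)
p_cc = A[core, core].sum() / (10 * 9)    # within-core density
p_cp = A[core, per].sum() / (10 * 10)    # core-periphery density
p_pp = A[per, per].sum() / (10 * 9)      # within-periphery density
```

Note that this dense-core/sparse-periphery pattern is exactly the kind of structure that modularity maximization, which insists on dense blocks along the diagonal, cannot express.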
Centrality and communication
Empirical networks have architectures that differ significantly from those of classic random graph models—most importantly, their nodes and edges are not equal in the way they are connected with the rest of the network. Far from being “equipotential,” the ways in which nodes and edges are embedded within the overall topology play a major role in determining their specific contributions to network function. Thus, network theory reconciles functional specialization with distributed processing—a dichotomy that has in the past led to strongly polarized theories of brain function, to the detriment of scientific progress.
Indeed, a major rationale for mapping brain connectivity arises from the idea that connectivity drives the functional specialization of local network elements. This idea is inherent in the notion that different brain regions have unique connectivity fingerprints that are indicative of their network embedding and predictive of their functional roles. Similarities among connectivity fingerprints can be informative of functional groupings of areas and their mutual relations.
Numerous measures quantify the potential of individual nodes and edges to influence the global state of the network. Many of them allow the identification of network hubs, generally defined as highly central parts of the network. The number of connections maintained by a node (its degree) or the combined weight of these connections (its strength) often provides a strong indicator of influence or centrality. Other measures of centrality take advantage of the layout of the shortest paths within the network, and record the number of such paths that pass through a given node or edge, a measure called betweenness. Yet another way to approach centrality is by referencing the relation of nodes and edges to a network's community structure. The participation coefficient quantifies the diversity of a given node's connections across multiple modules—high participation indicates that many of these connections are made across modules, thus structurally and functionally linking distinct communities. This measure is particularly useful in brain networks, as it can be applied to both structural and functional network data.
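The participation coefficient is simple enough to compute directly; the sketch below (with an invented toy graph and an assumed module partition; the helper function is illustrative, not a library routine) contrasts it with betweenness:

```python
import networkx as nx

# Toy graph: two triangles bridged through a connector node (6).
G = nx.Graph([(0, 1), (1, 2), (2, 0),
              (3, 4), (4, 5), (5, 3),
              (0, 6), (3, 6)])
modules = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1, 6: 2}  # assumed partition

def participation_coefficient(G, modules):
    """P_i = 1 - sum_m (k_im / k_i)^2, where k_im counts node i's
    links into module m and k_i is its total degree."""
    P = {}
    for i in G:
        k = G.degree(i)
        per_module = {}
        for j in G[i]:
            per_module[modules[j]] = per_module.get(modules[j], 0) + 1
        P[i] = 1.0 - sum((km / k) ** 2 for km in per_module.values())
    return P

P = participation_coefficient(G, modules)
bc = nx.betweenness_centrality(G)
```

Node 6 splits its two links across both triangles, giving it the highest participation and betweenness, while node 1 connects only within its own module (participation 0), despite the two nodes having identical degree.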
Most measures of node centrality are mutually related (and hence statistically correlated); for example, in most (but not all) networks nodes with many connections (high degree) also serve as intermediaries on many short paths (high betweenness). Since different measures of centrality index different aspects of network organization, it can be beneficial to rank nodes by aggregating multiple centrality measures.
Centrality measures can be useful tools for charting the global architecture of a brain network. Of particular neurobiological interest is the mutual association among highly central (eg, high degree) nodes. If these nodes maintain interconnections that are found to be denser than expected by chance (ie, a suitable null model) the network is said to exhibit so-called rich club organization. Rich clubs have been found in virtually all structural connectivity data, from the connectomes of humans to those of invertebrates. Rich club organization tends to centralize network traffic, as the dense core formed by interconnected high-degree nodes attracts the bulk of short network paths that link lower-degree nodes to each other. This central position is thought to boost network communication and efficiency, thus trading off against the high wiring cost involved in linking spatially distributed hub regions.
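On an assumed toy network with four fully interconnected hubs and a set of low-degree peripheral nodes, the (unnormalized) rich-club coefficient can be read off directly:

```python
import networkx as nx

# Assumed example: 4 mutually connected hubs plus 8 peripheral
# nodes, each peripheral node attached to two hubs.
G = nx.complete_graph(4)  # the "club": hubs 0-3
G.add_edges_from((p % 4, p) for p in range(4, 12))
G.add_edges_from(((p + 1) % 4, p) for p in range(4, 12))

# Unnormalized rich-club coefficient phi(k): edge density among
# nodes with degree > k.
phi = nx.rich_club_coefficient(G, normalized=False)
```

Here the four hubs (degree > 2) are maximally interconnected, so phi(2) = 1. As the text notes, a raw phi value is not evidence by itself: claiming rich-club organization requires normalizing phi against an ensemble of degree-preserving random networks.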
While much of the interest in centrality is based on the putative role of hubs in network communication, it should be noted that the mechanisms by which brain networks communicate remain obscure. Most models and their associated graph theory metrics assume that communication unfolds along the most efficient and shortest paths available. However, this idea ignores the fact that these paths cannot be discovered by neural elements or signals in the absence of global information about the network topology. Hence, alternative models based on spreading and diffusion, or on path ensembles, are important to explore in future work.
This final section briefly reviews several new directions that have great potential for future applications in brain networks.
Most current graph theory methods applied to brain data deliver descriptive statistics that capture various aspects of network architecture. While such measures can be informative about the topological characteristics of a specific empirically measured network, their estimation should always be accompanied by some estimate of statistical significance or evidence. Every graph, even one generated by an entirely random process, will exhibit some graph attributes, including some chance level of clustering and modularity. Null models are important adjuncts of descriptive graph analysis as they allow discriminating which graph attributes are due to chance, and which exceed the expected values given by the null model. The choice of a proper null model is crucial for any descriptive analysis, as it will determine which graph features survive statistical comparison. Classic null models involve the rewiring of an empirical graph by swapping connections such that local degree is preserved while the global graph architecture is randomized. Other null models preserve subgraph frequencies, to estimate the significance of larger subgraph distributions, or preserve the spatial locations of nodes.
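The classic degree-preserving rewiring can be sketched with NetworkX's double-edge-swap routine (the karate club graph stands in for an empirical network; in practice one would generate many rewired instances, not just one):

```python
import networkx as nx

G = nx.karate_club_graph()  # stand-in for an empirical network
deg_before = sorted(d for _, d in G.degree())

# Degree-preserving null model: swap edge pairs (a-b, c-d) into
# (a-d, c-b), randomizing topology while keeping every node's degree.
null = G.copy()
nx.double_edge_swap(null, nswap=200, max_tries=10000, seed=1)
deg_after = sorted(d for _, d in null.degree())

# An empirical attribute (eg, clustering) is then compared against
# its value under the null model.
c_emp = nx.average_clustering(G)
c_null = nx.average_clustering(null)
```

Only the comparison against a distribution of such rewired graphs tells us whether the empirical clustering exceeds what degree sequence alone would produce.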
Null models that fix a number of different factors such as local node measures, spatial locations, and wiring cost effectively become generative models of the empirical data. They generate graphs that may become indistinguishable from the empirical network, and in that sense can account for its topological properties. Hence, generative models can provide important insights into the factors that have shaped the emergence of specific architectural or performance characteristics. As such they are important reminders that not all graph attributes are the product of adaptation, and instead may have arisen as “spandrels” along the way, a mere by-product of other more basic generative factors such as degree distributions or spatial embedding. Another important insight that can be gleaned from the design of generative models is that many graph attributes are mutually dependent and arise jointly from a common set of driving factors. For example, high clustering is invariably linked to particular statistics of subgraphs that favor triangles, and strongly favored by spatial embedding and wiring conservation.
Spatial embedding as an important generative principle behind the organization of brain graphs deserves special mention, as it is a fundamental constraint on brain architecture. Much evidence points to distance-dependence as a major rule that governs the topology of anatomical brain connectivity within local circuits in single brain regions as well as for inter-areal projections. Generative models that combine spatial embedding with other, nonspatial, topological rules have suggested that their mutual trade-off may more fully account for characteristics of brain topology than any single (spatial or topological) factor alone.
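A minimal spatially embedded generative model can be sketched in a few lines: nodes are scattered in a plane and wired with a probability that decays with distance (the exponential rule and its decay constant are assumptions chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
pos = rng.random((n, 2))  # nodes scattered in a unit square

# Pairwise Euclidean distances between all node positions.
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

# Assumed wiring rule: connection probability decays exponentially
# with distance (decay constant 5.0 is illustrative).
p = np.exp(-5.0 * dist)
np.fill_diagonal(p, 0.0)

# Sample an undirected graph from these probabilities.
upper = np.triu(rng.random((n, n)) < p, k=1)
A = upper | upper.T

# Realized edges are strongly biased toward short distances.
mean_edge_len = dist[upper].mean()
all_pairs = np.triu(np.ones((n, n), dtype=bool), k=1)
mean_pair_dist = dist[all_pairs].mean()
```

Graphs generated this way already show elevated clustering relative to distance-blind random graphs, illustrating the point above that some topological attributes emerge "for free" from spatial embedding.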
Generative models are also crucial for understanding the emergence of dynamic states and functional connectivity. Many classic models explored in computational neuroscience are generative models, in that they attempt to generate neuronal activity and dynamics from simple biophysical and structural ingredients. In human neuroimaging, the relationship between structural and functional connectivity has been illuminated by the use of computational models that can capture some of the patterns exhibited in brain dynamics. Some of these generative models are simple, as they can be computed analytically on top of structural graphs, or make minimal assumptions (eg, linearity) about the nature of neuronal dynamics. Other models utilize detailed biophysical mechanisms to generate neuronal time series and population activity. Implementing large numbers of generative models with varying model parameters, combined with model selection, is central to estimating variations in network parameters, including causal effects, across varying conditions of stimulus and task.
Brain networks are not immutable, static constructs—rather, their structural and functional connectivity patterns change on multiple time scales. Data on time-varying brain graphs generally takes on the form of time series (or stacks) of graphs that form an ordered series of snapshots, for example recorded in the course of learning or across developmental stages. Changes in network topology can be tracked by computing graph measures on each time point followed by the subsequent examination of the resultant time courses of graph metrics. Another analysis approach involves arranging this stack into a series of time slices that are mutually coupled and can be analyzed as a single graph construct. This allows the derivation of nodal measures of flexibility, which can pinpoint parts of the network that are more variable across learning or development.
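Given community labels per node and time slice (here invented for illustration; in practice they might come from multislice modularity), flexibility reduces to counting affiliation switches:

```python
import numpy as np

# Hypothetical community labels for 5 nodes across 4 time slices
# (rows are slices, columns are nodes; values are illustrative).
labels = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
])

# Flexibility: fraction of slice-to-slice transitions in which a
# node changes its community affiliation.
switches = (labels[1:] != labels[:-1]).sum(axis=0)
flexibility = switches / (labels.shape[0] - 1)
```

Nodes 0, 2, and 3 keep their affiliation throughout (flexibility 0), while nodes 1 and 4 each switch once, marking them as the more variable parts of this toy network.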
Much effort has recently been expended to track fast changes in graph topology and organization of functional networks recorded with fMRI. Most commonly, a long time series of fMRI activations is partitioned into shorter “windows” that are then analyzed separately. Possible confounds are measurement artifacts such as physiological noise and increased uncertainty of estimating the magnitudes of functional connections on short samples. In resting state, fMRI networks appear to undergo fluctuations between states of higher integration and segregation, or modularity. Similar transitions between different network states occur during task switching and in the course of cognitively demanding spontaneous stimulation.
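The sliding-window scheme itself is straightforward; the sketch below applies it to surrogate random time series (the series length, node count, window length, and stride are all assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 300, 8                     # time points and nodes (assumed sizes)
ts = rng.standard_normal((T, n))  # surrogate time series, one column per node

window, step = 60, 20             # illustrative window length and stride
fc_stack = []
for start in range(0, T - window + 1, step):
    seg = ts[start:start + window]
    fc_stack.append(np.corrcoef(seg, rowvar=False))  # n x n windowed FC

fc_stack = np.array(fc_stack)     # shape: (num_windows, n, n)
```

Each slice of the stack is a functional connectivity graph for one window; graph metrics such as modularity can then be tracked across windows. Note that even these pure-noise surrogates show fluctuating windowed correlations, underscoring the estimation-uncertainty confound mentioned above.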
The arrival of multiomic data has enabled the joint analysis of networks between elements of neurobiological systems at different levels of organization. Prime examples are recent studies that combine maps of anatomical and functional networks, as well as studies that combine large-scale brain connectivity data with spatially registered data on patterns of gene expression. The latter have yielded important insights into relations between the centrality of network elements, for example their membership in a dense core or rich club, and distinct genetic signatures in energy metabolism. In most studies so far, graph theoretic analysis proceeds through simple comparison or correlation of graph metrics across different levels (eg, anatomy and genetics). In the future, more explicit use of a multilayer graphical framework is likely to occur. A few early examples are studies that place structural and functional connectivity into a multilayer model, eg, with data from human neuroimaging and magnetoencephalography.
All graph theory approaches discussed so far build on networks that are composed of pairwise (dyadic) interactions. However, higher-order interactions can be highly informative for understanding non-random attributes of brain networks. Such higher-order relations can be represented with tools from applied algebraic topology, such as so-called simplicial complexes or simplices. Simplices reframe the problem of relational data in terms of collections of vertices: a 0-simplex is a node, a 1-simplex is an edge, and a 2-simplex is a filled (connected) triangle. Simplices can be used to locate cliques (all-to-all connected subgraphs) or cavities. Recent applications of simplices to human connectome data have shown the utility of the approach for identifying both densely connected groups of nodes as well as other patterns of connectivity (eg, loop-like paths) that would facilitate parallel processing. Finally, the related area of topological data analysis attempts to detect, quantify, and compare mesoscale structure present in complex network data. Essentially, the approach attempts to embed the data in a way that provides an optimal summary of its global structure. A recent example used topological data analysis to reveal dynamical organization in multitask fMRI time series, by creating graphical representations of relations among single image frames at the level of individual participants. These representations allow a comparison of how individuals transition among multiple cognitive tasks and states and could provide useful markers for clinical diagnosis and treatment. Overall, the arrival of these topological methods capitalizes on higher-order and high-dimensional features in brain data that have so far been inaccessible with simple graph methods, and are therefore promising avenues for future investigation.
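The clique-finding step, which supplies the building blocks for a simplicial description, is readily illustrated on a toy graph of two filled triangles sharing a vertex:

```python
import networkx as nx

# Two filled triangles (2-simplices) sharing node 2.
G = nx.Graph([(0, 1), (1, 2), (2, 0),
              (2, 3), (3, 4), (4, 2)])

# Maximal cliques: all-to-all connected subgraphs that cannot be
# extended by adding another node.
cliques = sorted(sorted(c) for c in nx.find_cliques(G))
```

Each 3-clique here corresponds to a filled 2-simplex; a full simplicial-complex analysis (eg, of cavities via persistent homology) requires dedicated topology libraries beyond plain graph software.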
The growth of network neuroscience over the past decade or two has been nothing short of astonishing. A major driving force for this rapid expansion is the availability of relational data recording couplings and interactions among elements of neural systems. For now, most studies remain descriptive and focus entirely on pairwise interactions resulting in graphs composed of dyadic links. But graph theory is much more powerful than current methods applied to brain networks suggest. Generative models, dynamic networks, multilayer models, and algebraic topology are just a few of the promising directions that are currently pursued. With time, these new approaches will likely find applications not only in basic, but also in clinical and translational research. For years to come graph theory methods will remain indispensable tools to further our understanding of the brain as a complex interconnected system.