As we know, the main restriction in a tree structure is that each tree has exactly one root node. If we remove that restriction, we end up with a more general data structure called a graph. A graph has no root node at all.
A graph is defined as a set of nodes or vertices together with a set of edges or arcs, each of which connects a pair of vertices.
The set of vertices is written by listing the vertices, for example, V = {k, l, m, n}, and the set of edges is written as a set of vertex pairs, for example, E = {(k, l), (m, n), (k, n), (l, m)}.
A graph is commonly defined as G = (V(G), E(G)),
where V(G) is a finite, non-empty set of vertices of the graph G, and E(G) is a set of pairs of vertices (arcs), each pair representing an edge of the graph.
Directed graphs are also called digraphs. A directed graph or digraph is a collection of nodes connected by edges, where each edge has a direction associated with it.
An undirected graph is a type of graph, in which a set of vertices or nodes are connected, and all the edges are bidirectional.
In directed graphs, the directions are shown on the edges (Figure 6.1).
As shown in Figure 6.1, the edges between vertices are ordered. In this type of graph, edge E1 lies between vertices V1 and V2; V1 is called the tail and V2 is called the head. We can say that E1 is the ordered pair (V1, V2) and not (V2, V1).
Similarly, in an undirected graph, edges are not ordered (Figure 6.2).
In this type of graph, that is, an undirected graph, the edge E1 can be written as (V1, V2) or (V2, V1); both denote the same edge.
Consider the following directed graph (Figure 6.3):
In Figure 6.3, edge (A, B) and edge (B, A) are two different edges, even though both connect vertex A and vertex B. Some edges may connect a node with itself; such edges are called loops. For example, edge (A, A) is a loop.
The degree of a vertex is the number of edges incident to it. The in-degree of a vertex is the number of edges pointing to that vertex, and the out-degree is the number of edges pointing away from it. For example, in Figure 6.3 the out-degree of node B is 3, its in-degree is 1, and its degree is 4.
Note: A tree is a special case of a directed graph.
A graph need not be a tree, but a tree must be a graph. In a graph, a node that is not adjacent to any other node is called an isolated node (Figure 6.4).
For example, node D in the graph shown in Figure 6.4 is an isolated node. The degree of an isolated node is zero.
A graph with undirected and directed edges is said to be a mixed graph as shown in Figure 6.5.
A graph containing only isolated nodes is called a null graph; that is, a null graph contains an empty set of edges (Figure 6.6).
If an undirected simple graph of n vertices contains n(n-1)/2 edges, then it is called a complete graph. A complete graph does not contain any loop edge, and every pair of distinct vertices is joined by an edge. For example, if the graph contains four nodes, then it has 4(4-1)/2 = 6 edges and each node has degree three, as shown in Figure 6.7.
A digraph is said to be strongly connected if for every pair of vertices there exists a path in each direction; that is, if a path from V1 to V2 exists, then a path from V2 to V1 also exists. For example, the graph in Figure 6.8 is a strongly connected graph.
A simple digraph is said to be unilaterally connected if for any pair of nodes of a graph at least one of the nodes of a pair is reachable from the other node.
The concept of “strongly connected” and “weakly connected” graphs are concerned with directed graphs.
If for any pair of nodes of the graph, both the nodes of the pair are reachable from one another, then the graph is called strongly connected.
If the digraph, when treated as an undirected graph (that is, with edge directions ignored), is connected, so that a path exists between every pair of vertices, then the digraph is called weakly connected.
A unilaterally connected digraph is a graph in which, for every pair of nodes v1 and v2, there exists a path either from v1 to v2 or from v2 to v1, as shown in Figure 6.9b.
A unilaterally connected digraph is weakly connected, but a weakly connected digraph is not necessarily unilaterally connected.
A strongly connected digraph is, at the same time, unilaterally and weakly connected.
A graph G is called a simple graph if the graph does not contain any loop or parallel edges. The graph shown in Figure 6.10 is simple.
A graph G is called a multigraph if it contains loops or parallel edges. The graph shown in Figure 6.11 is a multigraph.
A graph with at least one cycle is known as a cyclic graph.
In Figure 6.12, the graph contains the cycles v1-v2-v3-v4-v1, v1-v3-v4-v1, and v3-v4-v3, so the graph is a cyclic graph.
A graph with no cycles is called an acyclic graph or a non-cyclic graph.
In Figure 6.13, the graph does not have any cycles. Hence, it is a non-cyclic graph or an acyclic graph.
A bipartite graph is a simple graph G = (V, E) whose vertices can be partitioned into two sets V1 and V2, with V = V1 ∪ V2 and V1 ∩ V2 = ∅, such that every edge joins a vertex of V1 to a vertex of V2.
In general, a bipartite graph has two sets of vertices, let us say, V1 and V2, and if an edge is drawn, it should connect any vertex in set V1 to any vertex in set V2. The graph shown in Figure 6.14 is bipartite.
A complete bipartite graph is a simple graph in which the vertices can be partitioned into two disjoint sets V1 and V2 such that each vertex in V1 is adjacent to each vertex in V2. A complete bipartite graph is shown in Figure 6.15.
In other words, a complete bipartite graph is a simple graph in which every vertex of the first set V1 is connected to every vertex of the second set V2.
There are two commonly used graph representation techniques: the adjacency matrix and the adjacency list (node directory) representation.
Consider a graph G with a set of vertices V(G) and a set of edges E(G). If there are N nodes in V(G), for N >= 1, the graph G may be represented by an adjacency matrix which is a table with N rows and N columns where
A(i, j) = 1  if there is an edge (Vi, Vj) in E(G)
A(i, j) = 0  otherwise
If there is an edge connecting Vi and Vj in E (G), the value in the [i, j] position in the table is 1, otherwise, it is 0.
In this representation, the graph is represented using a matrix of size (total number of vertices) × (total number of vertices). This means a graph with five vertices can be represented using a matrix of size 5 × 5. In this matrix, both the rows and the columns correspond to vertices, and each entry is either 1 or 0: 1 indicates that there is an edge from the row vertex to the column vertex, and 0 indicates that there is no such edge.
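To make this concrete, the following short C sketch (the 4-vertex directed graph and its edges are assumed purely for illustration) stores a graph in an adjacency matrix and derives the out-degree and in-degree of each vertex as the row sum and column sum of the matrix, respectively.

#include <stdio.h>

#define N 4   /* assumed number of vertices for this illustration */

int main(void)
{
    /* Hypothetical directed graph on vertices 0..3:
       edges 0->1, 0->2, 1->3, 2->3, 3->0 */
    int adj[N][N] = {
        {0, 1, 1, 0},
        {0, 0, 0, 1},
        {0, 0, 0, 1},
        {1, 0, 0, 0}
    };
    int i, j;

    for (i = 0; i < N; i++) {
        int out_deg = 0, in_deg = 0;
        for (j = 0; j < N; j++) {
            out_deg += adj[i][j];   /* row sum    = out-degree of vertex i */
            in_deg  += adj[j][i];   /* column sum = in-degree of vertex i  */
        }
        printf("Vertex %d: out-degree = %d, in-degree = %d\n",
               i, out_deg, in_deg);
    }
    return 0;
}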
See Figures 6.16 and 6.17.
If the relation between two nodes is unordered, the graph is called an unordered or undirected graph (Figures 6.18 and 6.19).
The algorithm for the creation of a graph using an adjacency matrix is as follows:
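A minimal sketch of such a routine is given below. It assumes that vertices are numbered 0 to n-1 and that edges are entered as pairs of vertex numbers, the same convention used by the create_graph() function in the BFS program later in this chapter, so its details may differ from the original listing.

#include <stdio.h>

#define MAX 100

int adj[MAX][MAX];   /* adjacency matrix, zero-initialized as a global */
int n;               /* number of vertices                             */

/* Read the vertex count and the edge list and fill the adjacency matrix. */
void create_graph(void)
{
    int origin, destin;

    printf("Enter number of vertices: ");
    scanf("%d", &n);

    printf("Enter edges as 'origin destin' (-1 -1 to stop):\n");
    while (scanf("%d %d", &origin, &destin) == 2) {
        if (origin == -1 && destin == -1)
            break;
        if (origin < 0 || origin >= n || destin < 0 || destin >= n) {
            printf("Invalid edge!\n");
            continue;
        }
        adj[origin][destin] = 1;   /* directed edge origin -> destin        */
        /* For an undirected graph, also set adj[destin][origin] = 1.       */
    }
}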
The use of the adjacency matrix to represent a graph can be inappropriate because of its static implementation: to use this representation, we must know in advance the number of nodes in the graph in order to set up the storage. The solution to this problem is to use a linked structure, which allocates and de-allocates nodes from an available pool. We can instead represent the graph using adjacency lists. An adjacency list stores information about only those edges that actually exist. The adjacency list representation consists of a directory and a set of linked lists; it is also known as the node directory representation. The directory contains one entry for each node of the graph, and each entry points to a linked list of the nodes connected to that node. The directory represents the nodes, and the linked lists represent the edges.
Each node of the linked list has three fields:
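The field names themselves are not listed here; one plausible layout, assuming the three fields are the adjacent vertex number, the edge weight (meaningful only for weighted graphs, as in Figure 6.25), and a pointer to the next list node, is sketched below.

/* One entry in the linked list attached to a directory slot.
   Assumed fields: adjacent vertex, edge weight, next pointer. */
struct edge_node {
    int vertex;               /* number of the adjacent vertex          */
    int weight;               /* edge weight (ignored for unweighted)   */
    struct edge_node *next;   /* next edge in this vertex's list        */
};

/* The directory: one list head per vertex of the graph. */
struct edge_node *directory[100];

With such a layout, the out-degree of a vertex is simply the length of the list attached to its directory entry, a point used below.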
An undirected graph can also be stored using an adjacency list; however, each edge will be represented twice, once in each direction.
In general, an undirected graph of order N (that is, with N nodes) and E edges requires N directory entries and 2*E linked list entries, whereas a directed graph requires N directory entries and E linked list entries.
Figure 6.22 shows the node directory representation of the directed graph as shown in Figure 6.20.
The out-degree of a node in a directed graph can be determined by counting the number of entries in its linked list. Here, in Figure 6.21, node 2 has three nodes in its linked list, so node 2 has out-degree 3. However, node 2 does not appear in any of the linked lists, which means node 2 has in-degree zero. Similarly, node 4 appears twice in the linked lists, so node 4 has in-degree two.
In general, the in-degree of a node is more difficult to determine from this representation, but the out-degree is obtained easily (Figure 6.22).
Figure 6.23 shows a weighted graph (see also Figure 6.24), and Figure 6.25 shows its node directory representation.
Graph traversal is a procedure used to search for a vertex or node in a graph. It also decides the order in which the vertices are visited during the search, each vertex being visited exactly once. A graph traversal finds the edges to be used in the search process without creating loops; that is, using graph traversal we visit all vertices of the graph exactly once without getting into a looping path. The majority of graph problems involve the traversal of a graph. Traversal of a graph means visiting each node exactly once.
There are two types of graph traversal techniques: breadth first search (BFS) and depth first search (DFS).
In BFS, a starting vertex of the graph, say V1, is visited first; then all vertices adjacent to V1 are traversed. Suppose the vertices adjacent to V1 are V2 and V3; then the vertices adjacent to V2 are visited next, and so on. This process continues until all vertices have been encountered. To keep track of the vertices and their adjacent vertices, we use an array of visited nodes: when a node is visited, its entry is set to 1. Thus, we can keep track of the visited nodes.
// Array visited[ ] is initialized to 0.
// q is a queue-type data structure.
// BFS traversal on graph G is carried out beginning at vertex v.
void BFS(int v)
{
    q: a queue type variable
    Initialize q;
    visited[v] = 1;                      // mark v as visited
    add vertex v to queue q;
    while (q is not empty)
    {
        v = delete an element from the queue;
        Print node v;
        for all vertices w adjacent from v
        {
            if (!visited[w])
            {
                visited[w] = 1;
                add the vertex w to queue q;
            } // completion of if statement
        }     // completion of for loop
    }         // completion of while loop
}             // completion of BFS function
Find the BFS traversal for the following graph G (Figure 6.26).
Solution:
Step 1:
Step 2:
Step 3:
Step 4:
Step 5:
Step 6:
The adjacent nodes of node 5 are {6, 2}, which are already visited.
Step 7:
Step 8:
Step 9:
#include<stdio.h>
#include<stdlib.h>

#define MAX 100
#define initial 1
#define waiting 2
#define visited 3

int n;
int adj[MAX][MAX];
int state[MAX];

void create_graph();      // Create a graph using adjacency matrix
void BF_Traversal();      // Traverse a graph using BFS technique
void BFS(int v);          // Breadth first search function

int queue[MAX], front = -1, rear = -1;
void insert_queue(int vertex);
int delete_queue();
int isEmpty_queue();

// main function definition
int main()
{
    create_graph();
    BF_Traversal();
    return 0;
}

void BF_Traversal()
{
    int v;
    for(v=0; v<n; v++)
        state[v] = initial;

    printf("Enter start Vertex for BFS: \n");
    scanf("%d", &v);
    BFS(v);
}

// Pass node v as the first node for traversing the graph using the BFS technique
void BFS(int v)
{
    int i;
    insert_queue(v);
    state[v] = waiting;

    // Continue the while loop till the queue is not empty
    while( !isEmpty_queue() )
    {
        v = delete_queue();
        printf("%d ", v);
        state[v] = visited;

        // Check all adjacent nodes of node v and whether they are visited or not
        for(i=0; i<n; i++)
        {
            if(adj[v][i] == 1 && state[i] == initial)
            {
                insert_queue(i);
                state[i] = waiting;
            } // if completed
        } // for loop completed
    } // while loop completed
    printf("\n");
}

// Insert vertex or node in the queue
void insert_queue(int vertex)
{
    if(rear == MAX-1)
    {
        printf("Queue is full\n");
    }
    else
    {
        if(front == -1)
        {
            front = 0;
        }
        rear = rear+1;
        queue[rear] = vertex;
    }
}

// Check whether the queue is empty or not
int isEmpty_queue()
{
    if(front == -1 || front > rear)
        return 1;
    else
        return 0;
}

// Delete node from the queue
int delete_queue()
{
    int delete_item;
    if(front == -1 || front > rear)
    {
        printf("Queue is empty \n");
        exit(1);
    }
    delete_item = queue[front];
    front = front+1;
    return delete_item;
}

// Create the adjacency matrix and initialize it
void create_graph()
{
    int count, max_edge, origin, destin;

    printf("Enter number of vertices : ");
    scanf("%d", &n);

    max_edge = n*(n-1);
    for(count=1; count<=max_edge; count++)
    {
        printf("Enter edge %d( -1 -1 to quit ) : ", count);   // enter -1 -1 to exit from the for loop
        scanf("%d %d", &origin, &destin);

        if((origin == -1) && (destin == -1))
            break;

        if(origin>=n || destin>=n || origin<0 || destin<0)   // check node numbers are between 0 and n-1
        {
            printf("Invalid edge!\n");
            count--;
        }
        else
        {
            adj[origin][destin] = 1;   // otherwise insert the edge into the adjacency matrix
        }
    } // for loop completed
}
Output:
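The output depends on the graph entered at run time. As an illustration only, assume a 4-vertex graph with directed edges 0→1, 0→2, 1→3 and 2→3 and start vertex 0; a session would then look roughly like this:

Enter number of vertices : 4
Enter edge 1( -1 -1 to quit ) : 0 1
Enter edge 2( -1 -1 to quit ) : 0 2
Enter edge 3( -1 -1 to quit ) : 1 3
Enter edge 4( -1 -1 to quit ) : 2 3
Enter edge 5( -1 -1 to quit ) : -1 -1
Enter start Vertex for BFS:
0
0 1 2 3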
If the graph is implemented using an adjacency matrix, that is, an array of size V × V, then, for each node, you have to traverse an entire row of length V in the matrix to discover all its outgoing edges. Therefore, the time complexity of BFS is T(n) = O(n²), where n is the number of vertices in the graph.
If your graph is implemented using adjacency lists or node directory representation, wherein each node maintains a list of all its adjacent edges, then, for each node, you could discover all its neighbors by traversing its adjacency list just once in linear time. Therefore, the time complexity of BFS is T(n) = O(V + E), where V is the number of vertices in the graph and E is the number of edges in the graph.
In the DFS traversal of a graph, we follow a path as deeply as we can go. When there is no unvisited adjacent vertex left, we backtrack and look for a not-yet-visited vertex. We maintain a visited array to mark all the visited vertices; a node already marked as visited is not selected again during the traversal. Marking of visited vertices is done using a global array visited[ ], whose entries are all initialized to zero.
for(i=0; i<n; i++)
    visited[i] = 0;
DFS(i)
{
    print i;
    visited[i] = 1;
    for each w adjacent to i
        if (!visited[w])
            DFS(w);
}
In DFS, the basic data structure for storing the adjacent nodes is a stack. In our program, we use a recursive call to the DFS function: invoking a recursive call corresponds to a push operation, and returning from the call after its loop finishes corresponds to a pop operation (Figure 6.28).
Step 1: Start with vertex 0 and print it, so node 0 is printed. Mark node 0 as visited.
Step 2: Find the vertices adjacent to 0; there are two, namely 1 and 2. Choose one of them, say node 1. Since it is not visited, call DFS(1): node 1 is pushed onto the stack, printed, and marked as visited in the visited array.
After node 1's adjacency loop finishes, 1 is popped from the stack.
Step 3: Find the vertices adjacent to 1, namely 0 and 3. Vertex 0 is already visited, so the remaining node 3 is chosen. Since it is not visited, call DFS(3): node 3 is pushed onto the stack, printed, and marked as visited in the visited array.
After node 3's adjacency loop finishes, 3 is popped from the stack.
Step 4: Find the vertex adjacent to 3, namely vertex 2. Since it is not visited, call DFS(2): node 2 is pushed onto the stack, printed, and marked as visited in the visited array.
After node 2's adjacency loop finishes, 2 is popped from the stack.
Since all nodes have now been visited, the procedure stops.
So the output of DFS is 0 1 3 2.
#include<stdio.h>

void DFS(int);                    // DFS function declaration

int G[10][10], visited[10], n;    // n is the number of vertices and the graph is stored in array G[10][10]
                                  // visited[10] is the visited array

// main function definition
int main()
{
    int i, j;

    printf("Enter number of vertices: \n");
    scanf("%d", &n);

    // read the adjacency matrix; enter 1 if an edge is present and 0 if it is not
    printf("Enter adjacency matrix of the graph: \n");
    for(i=0; i<n; i++)
    {
        for(j=0; j<n; j++)
        {
            scanf("%d", &G[i][j]);
        }
    }

    // visited array is initialized to zero
    for(i=0; i<n; i++)
    {
        visited[i] = 0;
    }

    // call DFS with node 0 as the start node
    printf("DFS traversal of a graph : ");
    DFS(0);
    return 0;
}

// Recursive DFS function definition
void DFS(int i)
{
    int j;
    printf("\t %d", i);
    visited[i] = 1;
    for(j=0; j<n; j++)
    {
        if( !visited[j] && G[i][j] == 1)
        {
            DFS(j);
        }
    }
}
Output:
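Again, the output depends on the adjacency matrix typed in. Assuming the 4-vertex graph of the worked example above (undirected edges 0-1, 0-2, 1-3 and 2-3), a run would look roughly like this:

Enter number of vertices:
4
Enter adjacency matrix of the graph:
0 1 1 0
1 0 0 1
1 0 0 1
0 1 1 0
DFS traversal of a graph :      0       1       3       2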
If the graph is implemented using an adjacency matrix, that is, an array of size V × V, then, for each node, you have to traverse an entire row of length V in the matrix to discover all its outgoing edges. Therefore, the time complexity of DFS is T(n) = O(n²), where n is the number of vertices in the graph.
If your graph is implemented using adjacency lists or node directory representation, wherein each node maintains a list of all its adjacent edges, then, for each node, you could discover all its neighbors by traversing its adjacency list just once in linear time. Therefore, the time complexity of DFS is T(n) = O(V + E), where V is the number of vertices in the graph and E is the number of edges in the graph.
Let G = (V, E) be a graph with n vertices.
The problem is to find the shortest distance from a given source vertex to all other vertices of the graph.
Dijkstra’s algorithm is also called the single source shortest path algorithm. It is based on a greedy technique: a greedy algorithm picks the best available choice at each step without regard for future consequences. In this algorithm, we use a cost matrix cost[ ][ ], a distance array distance[ ], and a visited array visited[ ], initialized as follows:
for (i=0; i<n; i++)
    visited[i] = 0;
for (i=1; i<n; i++)
    distance[i] = cost[0][i];
The distance of the source vertex is initialized to 0, that is, distance[0] = 0.
for (i=1; i<n; i++)
Choose a vertex w, such that distance [w] is minimum and visited [w] is 0.
Mark visited [w] as 1.
Recompute the shortest distance of the remaining vertices from the source. Only the vertices not marked as 1 in the array visited[ ] should be considered for recalculation of distance.
That is,
for each vertex v
    if (visited[v] == 0)
        distance[v] = min(distance[v], distance[w] + cost[w][v]);
T(n) = O(n²)
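The steps above can be collected into a short C sketch. This is a minimal illustration, not the book's own listing: it assumes the graph is given as a cost matrix in which a large value INF stands for "no edge", and it takes vertex 0 as the source, as in the initialization fragments above.

#include <stdio.h>

#define N    5       /* assumed number of vertices for this sketch */
#define INF  9999    /* stands for "no edge"                       */

/* Single source shortest paths from vertex 0, following the steps above. */
void dijkstra(int cost[N][N], int distance[N])
{
    int visited[N] = {0};
    int i, v, w;

    /* Initialization: distance of every vertex is its direct cost from 0 */
    for (i = 1; i < N; i++)
        distance[i] = cost[0][i];
    distance[0] = 0;
    visited[0] = 1;

    for (i = 1; i < N; i++) {
        /* Choose an unvisited vertex w with minimum distance */
        w = -1;
        for (v = 0; v < N; v++)
            if (!visited[v] && (w == -1 || distance[v] < distance[w]))
                w = v;
        visited[w] = 1;

        /* Recompute the shortest distance of the remaining vertices */
        for (v = 0; v < N; v++)
            if (!visited[v] && distance[w] + cost[w][v] < distance[v])
                distance[v] = distance[w] + cost[w][v];
    }
}

int main(void)
{
    /* Hypothetical cost matrix, not necessarily the graph of Figure 6.29 */
    int cost[N][N] = {
        {  0,  10, INF,  30, 100},
        {INF,   0,  50, INF, INF},
        {INF, INF,   0, INF,  10},
        {INF, INF,  20,   0,  60},
        {INF, INF, INF, INF,   0}
    };
    int distance[N], i;

    dijkstra(cost, distance);
    for (i = 0; i < N; i++)
        printf("distance[%d] = %d\n", i, distance[i]);
    return 0;
}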
Show the working of Dijkstra’s algorithm on the graph given below (Figure 6.29):
Solution:
Source vertex is taken as 0.
Cost matrix:
Visited array:
First iteration:
Distance matrix:
Visited array
Second iteration:
Third iteration:
Vertex selected = 3
Cost of going to vertex 2 through 3 = dist[3] + cost[3][2] = 30 + 20 = 50
Cost of going to vertex 4 through 3 = dist[3] + cost[3][4] = 30 + 60 = 90
Distance of vertices 2 and 4 should be changed.
Distance matrix:
Visited array:
Fourth iteration:
Vertex selected = 2
Cost of going to vertex 4 through 2 = dist[2] + cost[2][4] = 50 + 10 = 60
Distance of vertex 4 should be changed.
Distance matrix with final distances:
A spanning tree is a minimal sub-graph of a given graph G that satisfies all of the following three conditions: it contains all the vertices of G, it is connected, and it contains no cycles.
Simply put, a spanning tree is a subgraph of an undirected graph that connects all the vertices using the minimum number of edges.
If all the vertices of a graph are connected, then there exists at least one spanning tree, and a graph may have more than one spanning tree. If the graph G contains n vertices, then each spanning tree contains n vertices and (n-1) edges.
A Minimum Spanning Tree (MST) is a subset of the edges of a connected weighted undirected graph that connects all the vertices with the minimum possible total edge weight.
Given Graph G =
Here, the spanning tree contains all the vertices of graph G, but fewer edges than graph G (Figures 6.30 and 6.31).
A spanning tree of minimum cost for a weighted graph G can be found using a greedy strategy.
Feasible solution: Final graph must contain all vertices, and they must be connected with no cycle.
Optimal solution: It is a feasible solution with the minimum cost.
There are two well-known algorithms for finding an MST: Prim’s algorithm and Kruskal’s algorithm.
Let G(V, E) be a connected weighted undirected graph with no loops and no parallel edges. If the given graph contains parallel edges, keep only the lighter edge of each parallel pair and remove the others; if the graph contains any loops, remove them as well. After removing the loops and the heavier parallel edges, if any, from graph G, we find the MST of G using Prim’s algorithm.
Prim’s Algorithm
while (V' != V)
{
    pick the lightest edge <u, v> between any chosen 'u' and unchosen 'v'
    label v as chosen
    T = T + <u, v> if and only if <u, v> does not give rise to a cycle in T
}
Now we will represent Prim’s algorithm in different ways, as follows:
Let G(V, E) be a connected weighted undirected graph with no loops and parallel edges.
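One concrete way to express the same idea in C is sketched below. This is a hedged sketch rather than the book's own listing: it assumes the graph is supplied as a symmetric cost matrix in which the value INF stands for "no edge", it grows the chosen set V' from vertex 0, and in every iteration it adds the lightest edge joining a chosen vertex to an unchosen one, exactly as in the pseudocode above.

#include <stdio.h>

#define N    5       /* assumed number of vertices for this sketch */
#define INF  9999    /* stands for "no edge"                       */

/* Prim's algorithm: returns the total weight of a minimum spanning tree
   and prints the chosen edges. Starts from vertex 0. */
int prim(int cost[N][N])
{
    int chosen[N] = {0};     /* 1 if the vertex is already in V'   */
    int i, j, k, total = 0;

    chosen[0] = 1;           /* start the tree at vertex 0         */

    for (k = 1; k < N; k++) {
        int best_u = -1, best_v = -1, best_w = INF;

        /* Pick the lightest edge between a chosen u and an unchosen v */
        for (i = 0; i < N; i++)
            if (chosen[i])
                for (j = 0; j < N; j++)
                    if (!chosen[j] && cost[i][j] < best_w) {
                        best_u = i;
                        best_v = j;
                        best_w = cost[i][j];
                    }

        chosen[best_v] = 1;  /* label v as chosen: no cycle can arise */
        total += best_w;
        printf("edge <%d, %d> with weight %d\n", best_u, best_v, best_w);
    }
    return total;
}

int main(void)
{
    /* Hypothetical symmetric cost matrix used only to exercise the sketch */
    int cost[N][N] = {
        {INF,   2, INF,   6, INF},
        {  2, INF,   3,   8,   5},
        {INF,   3, INF, INF,   7},
        {  6,   8, INF, INF,   9},
        {INF,   5,   7,   9, INF}
    };

    printf("Total weight of MST = %d\n", prim(cost));
    return 0;
}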
Example: Calculate a minimum cost-spanning tree using Prim’s algorithm (Figure 6.32).
Solution:
Step 1:
Set V = {1, 2, 3, 4, 5, 6, 7}
Set V’ = empty set initially
T = { } = empty set initially
Step 2: First iteration:
Choosing 1 as the starting node, edge <1, 2> is the lightest edge incident to it, so node 2 is added to V’.
V’ ={ 1, 2}
T ={ <1,2> }
Step 3: Second iteration:
Edge <2, 3> has the minimum weight among the edges leaving V’ and does not form a cycle, so choose edge <2, 3> and insert it into the solution vector T. Node 3 is inserted into the set V’.
V’ ={ 1, 2, 3}
T ={ <1, 2>, <2, 3> }
Step 4: Third iteration:
Edge <1, 4> has the minimum weight among the edges leaving V’ and does not form a cycle, so choose edge <1, 4> and insert it into the solution vector T. Node 4 is inserted into the set V’.
V’ ={ 1, 2, 3, 4}
T ={ <1, 2>, <2, 3>, <1, 4> }
Step 5: Fourth iteration:
Edge <4, 5> has the minimum weight among the edges leaving V’ and does not form a cycle, so choose edge <4, 5> and insert it into the solution vector T. Node 5 is inserted into the set V’.
V’ ={ 1, 2, 3, 4, 5}
T ={ <1, 2>, <2, 3>, <1, 4>, <4, 5> }
Step 6: Fifth iteration:
Edge <4, 7> has the minimum weight among the edges leaving V’ and does not form a cycle, so choose edge <4, 7> and insert it into the solution vector T. Node 7 is inserted into the set V’.
V’ ={ 1, 2, 3, 4, 5, 7}
T ={ <1, 2>, <2, 3>, <1, 4>, <4, 5>, <4, 7> }
Step 7: Sixth iteration:
Edge <7, 6> has the minimum weight among the edges leaving V’ and does not form a cycle, so choose edge <7, 6> and insert it into the solution vector T. Node 6 is inserted into the set V’.
V’ ={ 1, 2, 3, 4, 5, 7, 6}
T ={ <1, 2>, <2, 3>, <1, 4>, <4, 5>, <4, 7>, <7, 6> }
Step 8: Seventh iteration:
Now V’ = V, so terminate the loop.
Step 9: Add the weights of all the edges in the solution vector T:
1 + 2 + 4 + 3 + 4 + 3 =17
Step 10: Stop the algorithm.
The above algorithm is also explained in the following way (Figure 6.33):
Minimum weight = 1+2+4+3+4+3 = 17
Another minimum cost-spanning tree for graph G can be as follows (Figure 6.34):
Minimum weight = 1+2+3+4+4+3 = 17
Disadvantages of Prim’s Algorithm:
Complexity Analysis of Prim’s Algorithm: Using an adjacency (cost) matrix and a linear scan for the lightest edge in each iteration, Prim’s algorithm runs in T(n) = O(n²) time, where n is the number of vertices.
Kruskal’s algorithm uses the greedy approach: it repeatedly chooses the lightest remaining edge, then the next lightest edge, and so on.
All the edges are first sorted in ascending order of weight, and the next lightest edge is chosen at each step. Care must be taken that the chosen edge does not give rise to a cycle.
Let G(V, E) be a connected weighted undirected graph with no loops and parallel edges.
while (Q is not empty and T contains fewer than n-1 edges)
{
    choose an edge <V, W> ∈ Q (priority queue) with the least weight
    delete <V, W> from Q
    if (<V, W> are in different sets W1 and W2 in Vs)
    {
        replace W1 and W2 in Vs by W1 U W2
        add <V, W> to T
    }
    if (<V, W> are in the same set)
    {
        it would create a cycle, so discard <V, W>
    }
} // while loop complete
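One compact way to realize the set-merging test above in C is with a union-find (disjoint-set) structure: two endpoints lying in the same set mean the edge would close a cycle. The sketch below is an illustration under that assumption; its hard-coded edge list happens to correspond to the A-F graph of the second worked example later in this section (with A to F numbered 0 to 5), so the total it reports, 23, agrees with that example.

#include <stdio.h>
#include <stdlib.h>

#define MAXV 100

struct edge {
    int u, v, w;
};

static int parent[MAXV];

/* Find the representative of the set containing x (with path compression). */
static int find_set(int x)
{
    if (parent[x] != x)
        parent[x] = find_set(parent[x]);
    return parent[x];
}

/* Compare edges by weight, for qsort(). */
static int by_weight(const void *a, const void *b)
{
    return ((const struct edge *)a)->w - ((const struct edge *)b)->w;
}

/* Kruskal's algorithm: edges sorted by weight; each accepted edge merges
   two different sets, while an edge inside one set would create a cycle. */
int kruskal(struct edge e[], int n_edges, int n_vertices)
{
    int i, taken = 0, total = 0;

    for (i = 0; i < n_vertices; i++)
        parent[i] = i;                 /* every vertex starts in its own set */

    qsort(e, n_edges, sizeof e[0], by_weight);

    for (i = 0; i < n_edges && taken < n_vertices - 1; i++) {
        int ru = find_set(e[i].u), rv = find_set(e[i].v);
        if (ru != rv) {                /* different sets: no cycle, accept   */
            parent[ru] = rv;           /* union of the two sets              */
            total += e[i].w;
            taken++;
            printf("edge <%d, %d> with weight %d\n", e[i].u, e[i].v, e[i].w);
        }                              /* same set: discard, it is a cycle   */
    }
    return total;
}

int main(void)
{
    /* Weighted undirected graph matching the A-F example (A=0, ..., F=5) */
    struct edge e[] = {
        {0, 1, 4}, {0, 5, 2}, {1, 5, 3}, {1, 2, 6},
        {2, 4, 5}, {1, 4, 7}, {3, 4, 7}, {2, 3, 8}, {4, 5, 9}
    };
    int n_edges = sizeof e / sizeof e[0];

    printf("Total weight of MST = %d\n", kruskal(e, n_edges, 6));
    return 0;
}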
Example: Find out the minimum cost of a spanning tree by using Kruskal’s algorithm (Figure 6.35).
Solution:
Step 1:
T = null
Vs = null
E = {<V1, V2>, <V2, V3>….} //set of all edges
T is a solution vector, initially null with no edges.
Vs is the set of vertices and at the initial step it is null.
Step 2: Arrange all the edges in ascending order according to their weights.
When an edge is selected, add it to the set of vertices Vs and to the tree T, the solution vector.
Care must be taken that there should not be any cycle.
V1 – V7 → 1 ✓
V3 – V4 → 3 ✓
V2 – V7 → 4 ✓
V3 – V7 → 9 ✓
V2 – V3 → 15 // creates a cycle, so discard
V4 – V7 → 16 // creates a cycle, so discard
V4 – V5 → 17 ✓
V1 – V2 → 20 // creates a cycle, so discard
V1 – V6 → 23 ✓
V5 – V7 → 25
V6 – V7 → 36
V5 – V6 → 38
Vs = {{V1}, {V2}, {V3}, {V4}, {V5}, {V6}, {V7}}
This is the set of vertices at the initial stage.
Step 3: We select edge V1 – V7 because it has minimum weight. Add it to Vs and T.
Vs = {{V1, V7}, {V2}, {V3}, {V4}, {V5}, {V6}}
T = {<V1, V7>}
Step 4: Next select V3 – V4 with weight 3.
Vs = {{V1, V7}, {V2}, {V3, V4}, {V5}, {V6}}
T = {<V1, V7>, <V3, V4>}
Step 5: Next edge is V2 – V7 with weight 4.
Vs = {{V1, V7, V2}, {V3, V4}, {V5}, {V6}}
T = {<V1, V7>, <V3, V4>, <V2, V7>}
Step 6: Next edge is V3 – V7 with weight 9.
Vs = {{V1, V7, V2, V3, V4}, {V5}, {V6}}
T = {<V1, V7>, <V3, V4>, <V2, V7>, <V3, V7>}
Step 7:
The next edge is V2 – V3 with weight 15. We discard this edge because it would create a cycle: V2 and V3 are already in the same set in Vs.
The next edge is V4 – V7 with weight 16. We discard this edge as well because it would create a cycle: V4 and V7 are in the same set in Vs.
Therefore, the next edge is V4 – V5 with weight 17. Select it.
Vs = {{V1, V7, V2, V3, V4, V5}, {V6}}
T = {<V1, V7>, <V3, V4>, <V2, V7>, <V3, V7>, <V4, V5>}
Step 8:
The next edge is V1 – V2 with weight 20. We discard this edge because it would create a cycle: V1 and V2 are in the same set in Vs.
The next edge is V1 – V6 with weight 23. Select it.
Vs = {{V1, V7, V2, V3, V4, V5, V6}}
T = {<V1, V7>, <V3, V4>, <V2, V7>, <V3, V7>, <V4, V5>, <V1, V6>}
Step 9: Now all vertices of the given graph are included, so stop the process. We obtain the minimum cost-spanning tree shown in Figure 6.36:
Minimum cost of spanning tree= 23 + 1 + 4 + 9 + 3 + 17 = 57
Note: The above tree satisfies all the conditions of a spanning tree. Kruskal’s algorithm is greedy in a global sense: the edge with the minimum weight in the whole graph is chosen first, rather than growing the tree from a chosen starting vertex as in Prim’s algorithm.
Example: Find minimum cost-spanning tree by using Kruskal’s algorithm (Figure 6.37).
Solution:
Step 1:
T = null
Vs = null
E = {<V1, V2>, <V2, V3>….} //set of all edges
T is a solution vector, initially null with no edges.
Vs is the set of vertices and at the initial step it is null.
Step 2: Arrange all the edges in ascending order according to their weights.
When an edge is selected, add it to the set of vertices Vs and to the tree T, the solution vector.
Care must be taken that there should not be any cycle.
A – F → 2 ✓
B – F → 3 ✓
A – B → 4 // creates a cycle, so discard
C – E → 5 ✓
B – C → 6 ✓
B – E → 7 // creates a cycle, so discard
D – E → 7 ✓
C – D → 8
E – F → 9
Vs = {{A}, {B}, {C}, {D}, {E}, {F}}
This is the set of vertices at the initial stage.
Step 3: We select edge A – F because it has minimum weight. Add it to Vs and T.
T = {<A, F>}
VS = {{A, F}, {B}, {C}, {D}, {E}}
Step 4: We select edge B – F because it has the next minimum weight. Add it to Vs and T.
T = {<A, F>, <B, F>}
VS = {{A, F, B}, {C}, {D}, {E}}
Step 5:
The next edge is A – B with weight 4. We discard this edge because it would create a cycle: A and B are in the same set in Vs.
Step 6:
Therefore, the next edge is C – E with weight 5. Select it.
T = {<A, F>, <B, F>, <C, E>}
VS = {{A, F, B}, {D}, {C, E}}
Step 7:
Therefore, the next edge is B – C with weight 6, select it.
T = {<A, F>, <B, F>, <C, E>, <B, C>}
VS = {{A, F, B, C, E}, {D}}
Step 8:
The next edge is B – E with weight 7. We discard this edge because it would create a cycle: B and E are in the same set in Vs.
Step 9:
Therefore, next edge is D – E with weight 7, select it.
T = {<A, F>, <B, F>, <C, E>, <B, C>, <D, E>}
VS = {{A, F, B, C, E, D}}
Step 10: Now all vertices of the given graph are included, so stop the process. We obtain the minimum cost-spanning tree shown in Figure 6.38:
Minimum cost = 2+3+5+6+7 = 23
Time complexity of Kruskal’s Algorithm: The edge sort dominates the running time, so with E edges Kruskal’s algorithm runs in O(E log E) time, which is equivalent to O(E log V) for a graph with V vertices.
Answer: (C)
Answer: (D)
Answer: (C)
Answer: (C)
Answer: (B)
Answer: (D)
Answer: (A)
Answer: (C)
Answer: (A)
Answer: (C)
Answer: B
Answer: D
Answer: B
Answer: C
Answer: A
Answer: B
C. Adjacency list, adjacency matrix as well as incidence matrix
Answer: C
Answer: B
Answer: A
Answer: B
Answer: D