I have a conceptual question about graphs which I couldn't find the answer to. I am calculating some node centralities and using them as features for a machine learning problem. I am using Networkx python library.
I noticed that for degree centrality the library normalizes the values by the highest possible number of connections. In other words, the degree centrality of a node is defined as the number of connections the node has divided by the number of nodes in the graph minus one.
However, as far as I understand the graph theory literature, the degree centrality of a node is simply the number of edges incident to it.
I wonder what the implications of this normalization by the library are. Doesn't it completely change the concept of degree centrality?
Well, no. As long as you operate on a single graph, nothing really changes. From what I understand, you are asking why the library's implementation is inconsistent with the literature. The library's documentation itself provides a good answer:
So if you just want the vertex degree, use G.degree. If you want to normalize this number by the size of the graph (which is arguably more useful), use degree_centrality(G).
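To see why nothing changes within a single graph, here is a quick sketch on a small toy graph (a 4-node path, chosen just for illustration): the normalized values are the raw degrees divided by a constant, n - 1, so node rankings are preserved.

```python
import networkx as nx

# Toy example: a path graph 0-1-2-3.
G = nx.path_graph(4)

# Raw degree: the number of edges incident to each node.
raw = dict(G.degree())          # {0: 1, 1: 2, 2: 2, 3: 1}

# NetworkX's degree_centrality divides each degree by (n - 1), here 3.
norm = nx.degree_centrality(G)  # {0: 1/3, 1: 2/3, 2: 2/3, 3: 1/3}

# Within one graph the two differ only by a constant factor,
# so relative ordering of nodes is identical.
scale = G.number_of_nodes() - 1
assert all(abs(norm[v] - raw[v] / scale) < 1e-12 for v in G)
```

The normalization matters mainly when you compare nodes across graphs of different sizes, where dividing by n - 1 puts the values on a common [0, 1] scale.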