A cluster in a graph is most naturally defined as a maximal complete subgraph. But this definition is too strict to be of practical use: in most graphs (especially random ones), it admits only rather small clusters.
One way to weaken the definition is to allow a small number of the edges of the complete subgraph to be missing. Two questions arise:
How large may the set of missing edges be chosen?
How would one guarantee that the missing edges are evenly distributed over the subgraph (so that it stays connected)?
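To make the first weakening concrete, here is a minimal sketch in plain Python (the function names are my own, not from any library) that counts the missing edges of a candidate vertex set and checks them against a tolerance:

```python
from itertools import combinations

def missing_edges(vertices, edges):
    """Pairs of vertices in the candidate set that are NOT joined by an edge."""
    edge_set = {frozenset(e) for e in edges}
    return [(u, v) for u, v in combinations(sorted(vertices), 2)
            if frozenset((u, v)) not in edge_set]

def is_relaxed_clique(vertices, edges, max_missing):
    """Accept if at most max_missing edges of the complete graph on the set are absent."""
    return len(missing_edges(vertices, edges)) <= max_missing

# A 4-cycle: the complete graph on {0,1,2,3} has 6 edges, the cycle has 4,
# so exactly the 2 diagonals are missing -- and the subgraph stays connected.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(missing_edges({0, 1, 2, 3}, cycle))          # [(0, 2), (1, 3)]
print(is_relaxed_clique({0, 1, 2, 3}, cycle, 2))   # True
print(is_relaxed_clique({0, 1, 2, 3}, cycle, 1))   # False
```

Note that this only tests the edge count (question 1); it does not address how the missing edges are distributed (question 2) — the 4-cycle happens to stay connected, but a tolerance of 2 would also accept a candidate whose missing edges isolate a vertex.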
Question 1: Where can I learn more about this approach? References from the graph clustering literature?
Another possible weakening of the definition starts from the observation that in a complete graph the maximal distance between any two of its nodes is 1. This leads naturally to defining a $k$-cluster as a maximal subgraph in which the distance between any two of its nodes is at most $k$. Already for $k=2$ a large fraction of the edges of the complete subgraph may be missing — in the extreme, a star on $n$ nodes has diameter 2 with only $n-1$ of the $\binom{n}{2}$ possible edges — and it still counts as a cluster.
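As an illustration, here is a sketch in plain Python (assuming the graph is given as an adjacency dict; maximality of the subgraph is not checked) that tests the diameter condition on an induced subgraph by breadth-first search:

```python
from collections import deque

def diameter(vertices, adj):
    """Diameter of the subgraph induced by `vertices` (inf if disconnected)."""
    vs = set(vertices)
    worst = 0
    for source in vs:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for w in adj.get(u, ()):
                if w in vs and w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        if set(dist) != vs:
            return float('inf')   # some vertex is unreachable inside the subgraph
        worst = max(worst, max(dist.values()))
    return worst

def satisfies_k_cluster(vertices, adj, k):
    """Pairwise distances within the induced subgraph are all at most k."""
    return diameter(vertices, adj) <= k

# A star on 5 nodes has only 4 of the 10 possible edges, yet diameter 2,
# so it satisfies the k=2 condition.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(diameter(star.keys(), star))                 # 2
print(satisfies_k_cluster(star.keys(), star, 2))   # True
```

One design choice to flag: the sketch measures distances *inside* the induced subgraph. Measuring them in the whole graph instead gives a different, weaker notion, since two vertices of the candidate set may be close only via paths through outside vertices.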
Question 2: Has this approach been taken before in the graph clustering literature?