The minimum distance of a code is easy to compute, and thus its error-correcting/detecting capabilities are easy to determine, by enumerating all possible pairs of codewords and computing the Hamming distance between each pair.
This procedure is not practical when the codes are very large, as it amounts to a brute-force search. How can I determine the error-correcting capabilities of a large random code?
I repeat what the comments say: there is no general efficient way to compute the minimum distance of a code. For linear codes (practically all codes used in practice) it's slightly simpler, as the minimum distance equals the minimum Hamming weight over all codewords (excepting the zero codeword, of course).
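For a small linear code this minimum-weight computation is easy to sketch. Here is a minimal Python illustration; the generator matrix is the standard one for the Hamming(7,4) code (my choice of example, not something from the question):

```python
from itertools import product

# Generator matrix of the Hamming(7,4) code, in systematic form G = [I_4 | A]
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg, G):
    """Multiply a message row vector by G over GF(2)."""
    n = len(G[0])
    return [sum(m * G[i][j] for i, m in enumerate(msg)) % 2 for j in range(n)]

def min_distance(G):
    """Minimum distance of a linear code = minimum weight of a nonzero
    codeword: 2^k codewords to scan instead of ~2^(2k) pairs."""
    k = len(G)
    return min(sum(encode(msg, G))
               for msg in product([0, 1], repeat=k) if any(msg))

print(min_distance(G))  # -> 3 for Hamming(7,4)
```

This is still exponential in $k$, of course; it only shows how linearity cuts the pairwise search down to a single pass over the codewords.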
Another property is: the minimum distance equals the minimum number of columns of $H$ (the parity-check matrix) that sum to zero. This property does not usually help to compute the distance; rather, it can help to design the code (especially Hamming codes).
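To illustrate the column criterion, again with the Hamming(7,4) code as my example: no column of $H$ is zero and no two columns are equal, but some three columns sum to zero, so $d_{min} = 3$. A brute-force sketch:

```python
from itertools import combinations

# Parity-check matrix of Hamming(7,4): its columns are all nonzero 3-bit vectors
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def min_dependent_columns(H):
    """Smallest number of columns of H that sum to zero over GF(2);
    for a linear code this equals the minimum distance."""
    rows, n = len(H), len(H[0])
    cols = [tuple(H[r][c] for r in range(rows)) for c in range(n)]
    for size in range(1, n + 1):
        for subset in combinations(cols, size):
            if all(sum(col[r] for col in subset) % 2 == 0 for r in range(rows)):
                return size
    return None  # only happens for the trivial code

print(min_dependent_columns(H))  # -> 3
```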
For convolutional codes, the relevant quantity is the free distance: the minimum Hamming weight of a path (in the state-transition diagram) that leaves the zero state and returns to it.
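For a small convolutional code this can be sketched as a shortest-path search over the state diagram, where the "length" of each edge is the Hamming weight of its output bits. I use the standard textbook rate-1/2, memory-2 code with generators $(7, 5)$ in octal; its free distance is 5:

```python
import heapq

def step(state, u):
    """One step of the (7,5) rate-1/2 encoder. state = (s1, s2) holds the
    two previous input bits; returns (new_state, output_weight)."""
    s1, s2 = state
    v1 = u ^ s1 ^ s2   # generator 7 = 111 (binary)
    v2 = u ^ s2        # generator 5 = 101 (binary)
    return (u, s1), v1 + v2

def free_distance():
    """Dijkstra over the state diagram: minimum output weight of a path
    that diverges from the all-zero state and merges back into it."""
    # Force a nonzero first input so the path actually diverges.
    start, w0 = step((0, 0), 1)
    heap = [(w0, start)]
    best = {start: w0}
    while heap:
        w, state = heapq.heappop(heap)
        if state == (0, 0):
            return w  # first arrival back at the zero state = free distance
        if w > best.get(state, float("inf")):
            continue  # stale heap entry
        for u in (0, 1):
            nxt, ew = step(state, u)
            if w + ew < best.get(nxt, float("inf")):
                best[nxt] = w + ew
                heapq.heappush(heap, (w + ew, nxt))

print(free_distance())  # -> 5 for the (7,5) code
```

The same search works for any feedforward encoder once `step` is replaced; only the number of states grows.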
But I want to stress this: you say that $d_{min}$, the minimum distance, determines the error-correcting/detection capabilities of the code.
That needs some clarification: $d_{min}$ only bounds the maximum number of bit errors per block that can be corrected/detected. But that's not the holy grail. I mean that this parameter is not necessarily what we are primarily interested in; in general, what we really want is to minimize the overall probability of error, and that is not a function of $d_{min}$ alone; it's more complex. What's more, MacKay (see chapter 13 of his book) argues that the "obsession with distance" (his words) was detrimental to progress towards better codes. It does not really matter if the distance is small for a few codewords, if by paying that price we attain a good average distance and hence a low probability of error.
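To make that concrete: under a union bound the block error probability behaves roughly like $\sum_{w>0} A_w P_2(w)$, where $A_w$ counts codewords of weight $w$ and $P_2(w)$ is the pairwise error probability at weight $w$, so what matters is the whole weight distribution, not just the smallest $w$ with $A_w > 0$. A quick sketch computing that distribution for the Hamming(7,4) code (my example, not MacKay's):

```python
from itertools import product
from collections import Counter

# Generator matrix of the Hamming(7,4) code
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def weight_distribution(G):
    """Count codewords of each Hamming weight (the weight enumerator A_w)."""
    k, n = len(G), len(G[0])
    counts = Counter()
    for msg in product([0, 1], repeat=k):
        cw = [sum(m * G[i][j] for i, m in enumerate(msg)) % 2 for j in range(n)]
        counts[sum(cw)] += 1
    return dict(sorted(counts.items()))

print(weight_distribution(G))  # -> {0: 1, 3: 7, 4: 7, 7: 1}
```

A code with a few weight-3 codewords but a small $A_3$ can outperform one with a larger minimum distance but many codewords sitting at it.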