In Huffman coding, we start by combining the elements with the lowest frequencies and putting them in order, and once only one element is left, we work backwards, assigning symbols to the parts that were combined earlier and splitting them up further.
I wish to ask WHY we cannot follow the reverse of this process, that is, start by combining the elements with the highest frequencies and then assign symbols?
I'm not sure which is the "reverse process" you are thinking of.
If you are thinking of doing exactly the same procedure with the sorting reversed (grouping the most probable symbols in each iteration), then it would not make sense: the most probable symbols would be involved in many combinations, and thus they would end up with the longest encodings. We want exactly the opposite.
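To see this concretely, here is a small sketch of that hypothetical "reverse" procedure (my own illustration, not a real coding scheme): it repeatedly merges the two *most* probable nodes and tracks the depth each symbol ends up at, which is the length of its codeword. The frequency table is just an example.

```python
import heapq
from itertools import count

def merge_highest_first(freqs):
    """Hypothetical 'reverse' procedure: repeatedly merge the two MOST
    probable nodes, tracking each symbol's depth (= codeword length)."""
    tie = count()  # tie-breaker so heap entries always compare
    # max-heap via negated frequencies; each entry carries {symbol: depth}
    heap = [(-f, next(tie), {s: 0}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)  # most probable node
        f2, _, d2 = heapq.heappop(heap)  # second most probable node
        # merging pushes every symbol in both subtrees one level deeper
        merged = {s: d + 1 for s, d in d1.items()}
        merged.update({s: d + 1 for s, d in d2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

depths = merge_highest_first({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
# "a", the most frequent symbol, is pulled into the very first merge and
# keeps sinking with every later merge, so it gets the LONGEST codeword.
```

Running this on the example table gives `a` (frequency 45) a depth of 5 and `f` (frequency 5) a depth of 1: exactly backwards from what a good code should do.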
It would make more sense if you are thinking: "Why build the tree beginning with the 'bottom' of it, i.e. with the greatest depth = longest encoding = minimum probability? Wouldn't it make more sense to start with the root of the tree, decide the first two branches, and so on?" Actually, this looks more natural, and the first proposed encodings were based on this approach (see e.g. Shannon-Fano coding [*]). It was Huffman who had the idea of starting from the deepest leaves instead of from the root of the tree. And this can be proved to be optimal, because the two symbols with the lowest probabilities can always be placed as siblings in an optimal tree, then regarded as a single combined new symbol, and then the same reasoning applies recursively.
[*] (look especially at the comparison with the construction of the Huffman code)
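For completeness, Huffman's actual bottom-up procedure can be sketched in a few lines: repeatedly merge the two *least* probable nodes, prepending a bit to every codeword inside each merged subtree. The frequency table is only an example.

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build a Huffman code from a {symbol: frequency} mapping.

    At each step the two lowest-frequency nodes are merged, so symbols
    merged early end up deepest in the tree (longest codewords)."""
    tie = count()  # tie-breaker so heap entries always compare
    # heap entries: (frequency, tie, {symbol: partial codeword})
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol alphabet
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # lowest frequency
        f2, _, c2 = heapq.heappop(heap)  # second lowest
        # prepend one bit to every codeword in each merged subtree
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
```

With this table, `a` (the most frequent symbol) gets a 1-bit codeword, while `e` and `f` (the least frequent) get 4-bit ones, and the code is prefix-free by construction: no codeword is a prefix of another, since sibling subtrees receive different leading bits.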