It seems like doing math purely "on paper" and doing it purely on a computer each have their advantages and disadvantages. Combined, however, they greatly expand what is possible and help confirm each method's results.
Some examples:
Computing pi to more than a few decimal places is tedious "on paper" but "cake" for a computer.
Someone not strong in math but savvy with computers can brute-force some math problems, such as counting how many "bust" (no-pair) hands are possible among $5$ card poker hands.
Some things are too numerous for a computer to simulate, such as those with an "astronomically large" number of possible states. Even something "reasonable" such as $64 \choose 32$ is almost $2$ quintillion ($2 \times 10^{18}$): too many states for a computer (running a simple algorithm) to simulate in a reasonable amount of time, so more clever math skills (and/or some intense pruning) are needed.
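To put numbers to the examples above, here is a minimal Python sketch: it computes $\pi$ to 50 digits with Machin's formula (tedious by hand, "cake" for a computer), counts all $5$ card poker hands, and evaluates the binomial $64 \choose 32$ exactly.

```python
from decimal import Decimal, getcontext
from math import comb

# pi to ~50 digits via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
getcontext().prec = 60

def arctan_inv(x):
    """arctan(1/x) for integer x > 1, summed by the alternating Taylor series."""
    x = Decimal(x)
    power = 1 / x            # holds 1/x^(2n+1)
    total = power            # the n = 0 term
    n, sign = 0, 1
    eps = Decimal(10) ** -55
    while power > eps:
        n += 1
        sign = -sign
        power /= x * x
        total += sign * power / (2 * n + 1)
    return total

pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
print(str(pi)[:20])       # 3.14159...

# Exact counting is just as easy: all 5-card poker hands,
# and the "astronomically large" binomial from the example above.
print(comb(52, 5))        # 2598960
print(comb(64, 32))       # about 1.83 * 10^18
```

The last number confirms the estimate: $64 \choose 32$ is indeed on the order of $2 \times 10^{18}$, far too many states to enumerate one by one.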
So that is just "background info," but here is the real question: suppose someone (a mathematician or otherwise) makes a substantial discovery via a computer simulation of a "mathematical process" and wants to use it as a proof. Can he/she do so, or will it be "rejected" by peers without a more "traditional" (formal) proof? I've seen/heard the argument that computers may have bugs in hardware and/or software, but if multiple people get the same answer on different computers using different programming languages, that is reassuring evidence that the matching answers are correct.
If you use a computer to work through all possible cases, then I would consider this a formal proof (as long as you provide the necessary code, obviously). This is still somewhat contentious: some argue that, since computers can have bugs and defects, you can never really know whether such a proof is correct. But this seems silly to me, because humans make mistakes even more frequently than computers, so you could just as easily argue that as long as only humans have checked a proof, you cannot be certain it is right.
This approach is becoming more accepted. Quite recently, Harald Helfgott came up with a proof of the weak Goldbach conjecture that really would not have been possible without a computer: it took about 40,000 hours of computation to finally settle the question positively. Hopefully we will eventually obtain a proof of this theorem that does not require so much tedious calculation, but that seems very much out of reach for the near future. There are other computer-assisted proofs out there: Computer Assisted proofs apart from the 4 color theorem.
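To make the "work through all possible cases" idea concrete, here is a toy finite version of such a verification: checking that every odd number up to a bound is a sum of three primes. (The bound $10^4$ is arbitrary and minuscule next to what the actual proof had to cover; this is only a sketch of the style of argument.)

```python
# Toy exhaustive check: every odd n with 7 <= n <= N is a sum of three primes.
N = 10_000

# Sieve of Eratosthenes up to N.
sieve = bytearray([1]) * (N + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
primes = [i for i in range(2, N + 1) if sieve[i]]
prime_set = set(primes)

def sum_of_three_primes(n):
    """Does odd n admit a representation p + q + r with p, q, r prime?"""
    for p in primes:            # first summand
        if p > n - 4:
            break
        m = n - p
        for q in primes:        # second summand; third would be m - q
            if q > m - q:
                break
            if m - q in prime_set:
                return True
    return False

assert all(sum_of_three_primes(n) for n in range(7, N + 1, 2))
print(f"every odd n in [7, {N}] is a sum of three primes")
```

Once the loop finishes without a failed assertion, the statement is proved for every case in the finite range, which is exactly the role the machine played in the real proof (paired with analytic arguments covering the infinite tail).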
On the other hand, if you take a random sample of possible cases and verify your conjecture for those cases, then this is not a proof, regardless of how many cases you check. The reason is that, if you were to allow such a method of "proof", you could conceivably prove things that are not true. Stochastic processes can lead you very, very astray. As an extreme example, there exist events that are possible yet have 0% probability of occurring (a uniform random draw from $[0,1]$ produces each particular real number with probability $0$, yet some number is always produced). If you were to allow stochastic proofs, there might be a special case that disproves your conjecture, but which you would have a 0% chance of ever discovering randomly.
Simulations are very, very useful for testing conjectures and providing data to formulate ideas about what might be true. However, they (much like computer-assisted proofs, unfortunately) don't give much insight into why the conjecture is true, which is really what mathematicians care about.