When studying the chi-squared test and distribution, it's quite common to read that most people use a p-value threshold of 0.05 or 0.01.
Do you have more information about which p-value thresholds are good for different real-life problems? What are the criteria for choosing one over another? Who decided that 0.05 is enough and not 0.2 or 0.001?
In biostatistics, most people have been using 0.05 as the cut-off value to decide whether a statistical test is significant. In other fields this value is lowered to 0.01 when we want to be very sure before rejecting the null hypothesis, for example when people's safety is involved.
In any case, it's highly recommended not to use that value to make decisions. Nowadays it's suggested to report the p-value itself instead of just saying whether it's below or above a given threshold.
Why should the conclusion drawn from p = 0.0499 be any different from the one drawn from p = 0.0501?
Most of the time scientists don't decide about a phenomenon; they reason about it, trying to infer what happens in the population by studying a small sample.
p-values depend largely on the sample size, and they tell you how probable it is to get a sample like yours from a hypothetical population. That can let you reject the null hypothesis, but never accept it. There is also the multiple-testing problem.
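To see the sample-size dependence concretely, here is a minimal Python sketch (function names are my own) that computes Pearson's chi-squared statistic for a 2x2 table and its p-value for 1 degree of freedom using only the standard library. The same proportions yield a non-significant p at n = 200 but a tiny p at n = 2000:

```python
import math

def chi2_p_value_1df(x):
    # Survival function P(X >= x) for chi-squared with 1 d.f.
    # Uses the identity chi2(1) = Z^2, so sf(x) = erfc(sqrt(x/2)).
    return math.erfc(math.sqrt(x / 2.0))

def chi2_2x2(a, b, c, d):
    # Pearson chi-squared statistic (no continuity correction)
    # for the 2x2 table [[a, b], [c, d]].
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Same proportions (55% vs 45% success in the two groups), two sample sizes.
for scale in (1, 10):
    a, b = 55 * scale, 45 * scale   # group 1: successes, failures
    c, d = 45 * scale, 55 * scale   # group 2: successes, failures
    stat = chi2_2x2(a, b, c, d)
    print(f"n = {a + b + c + d}, chi2 = {stat:.3f}, p = {chi2_p_value_1df(stat):.5f}")
```

With these numbers the first table gives chi-squared = 2.0 (p ≈ 0.157), while the ten-times-larger table with identical proportions gives chi-squared = 20 and a p-value around 10^-5: the "effect" is the same, only the sample grew.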
https://www.researchgate.net/publication/262971440_Practical_Interpretation_of_Hypothesis_Tests_-_letter_to_the_editor_-_TAS
http://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-10-44
https://www.jstor.org/stable/23736900?seq=1#page_scan_tab_contents
You should read about the Fisher versus Neyman-Pearson frameworks, and about Bayesian versus frequentist statistics.
Fisher thought that the p-value could be interpreted as a continuous measure of evidence against the null hypothesis. There is no particular fixed value at which the results become 'significant'.
On the other hand, Neyman & Pearson thought the p-value could be used as part of a formalized decision-making process, with the threshold fixed in advance.