One can find chi-squared using
$\chi^2=\sum_i \frac{\left(\text{expected}_i-\text{measured}_i\right)^2}{\sigma_i^2}$
and the quality of the fit is judged by how small chi-squared is. But how should I interpret the error $\sigma_i$ being in the denominator? Shouldn't large errors inflate the chi-squared value rather than shrink it? After all, when errors are large the data cannot be fitted closely, so chi-squared should not then be expected to attain a small value.
What you call $\chi^2$ is a weighted least-squares statistic, i.e. a distance between model and data. Having $\sigma_i$ in the denominator reduces the weight of samples that have large fluctuations.
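To make the weighting concrete, here is a minimal sketch in plain Python (the numbers are illustrative, not from the question): every point is off by the same amount, but the point with the large error bar contributes almost nothing to the total.

```python
# Weighted least squares: each squared residual is divided by its
# variance, so noisy samples are downweighted in the total.
expected = [1.0, 2.0, 3.0]
measured = [1.5, 2.5, 3.5]   # every point is off by 0.5
sigma    = [0.5, 0.5, 5.0]   # the last point has a large error bar

contributions = [(e - m) ** 2 / s ** 2
                 for e, m, s in zip(expected, measured, sigma)]
chi2 = sum(contributions)

print(contributions)  # third (noisy) point contributes only 0.01
print(chi2)           # 2.01, dominated by the two precise points
```

So large $\sigma_i$ do not inflate $\chi^2$; they tell the statistic that a large residual at that point is unsurprising.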
Regarding goodness of fit: if the fluctuations are Gaussian, then the quantity $\Delta^2 = \sum_i \frac{(\text{expected}_i - \text{measured}_i)^2}{\sigma_i^2}$ should follow a $\chi^2_d$ distribution with $d$ degrees of freedom, where $d$ equals the number of samples minus the number of fit parameters. The expectation of $\chi^2_d$ is $d$, and given the shape of the distribution, values much larger than $d$ are unlikely. Goodness of fit can be quantified by $p$, the probability that a $\chi^2_d$ variable is at least as large as the observed value of $\Delta^2$. Small values of $p$ indicate tension between the data and the model. Remember that $p$ depends on the number of degrees of freedom, not just on the ratio $\Delta^2/d$: for $d=3$, $\Delta^2/d \geq 4/3$ is not uncommon, but for $d=3000$ it is very rare.
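As a sketch of that calculation (assuming SciPy is available; the numbers just reproduce the $\Delta^2/d = 4/3$ example), the $p$-value is the survival function of the $\chi^2_d$ distribution evaluated at the observed $\Delta^2$:

```python
from scipy.stats import chi2

def gof_pvalue(delta2, d):
    """P(chi^2_d >= delta2): probability of a value at least this large."""
    return chi2.sf(delta2, df=d)

# Same ratio Delta^2/d = 4/3, very different p-values:
p_small = gof_pvalue(4.0, 3)        # d = 3    -> p is around 0.26, unremarkable
p_large = gof_pvalue(4000.0, 3000)  # d = 3000 -> p is vanishingly small

print(p_small, p_large)
```

This is why quoting $\Delta^2/d$ alone is not enough: the same ratio can be perfectly ordinary or decisive evidence against the model, depending on $d$.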
Be aware that in practice, values of $\Delta^2$ much smaller than $d$ can also mean that the $\sigma_i$ have been overestimated.
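A quick toy simulation (fabricated Gaussian noise, illustrative values) shows the effect: if the true scatter is $\sigma = 1$ but the analysis claims $\sigma = 3$, then $\Delta^2$ comes out near $d/9$ instead of near $d$.

```python
import random

random.seed(0)
d = 1000                     # degrees of freedom (toy value)
true_sigma = 1.0             # actual scatter of the data
claimed_sigma = 3.0          # overestimated error bars

# Residuals really fluctuate with true_sigma, but we divide by claimed_sigma.
residuals = [random.gauss(0.0, true_sigma) for _ in range(d)]
delta2 = sum(r ** 2 / claimed_sigma ** 2 for r in residuals)

print(delta2)  # close to d / 9, far below the expected d = 1000
```

A $\Delta^2$ this far below $d$ is itself improbable under the stated errors, which is the hint that the $\sigma_i$ were too large.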