Assume that I have data that can be described by:
$y_i = \beta x_i + \epsilon_i, \quad \epsilon_i \sim (0,\sigma_{\epsilon}^2)$,
then the least squares estimator is given by
$\hat{\beta_1} = \frac{\sum_{i=1}^N x_iy_i}{\sum_{i=1}^N x_i^2}$.
Why is it wrong to use the following estimator?
$\hat{\beta_2} = \frac{1}{N}\sum_{i=1}^N \frac{y_i}{x_i}$.
Do they not estimate the same parameter?
The ordinary least squares estimator, $\widehat \beta_{OLS}=\frac{\sum x_i y_i}{\sum x_i^2}$, as others have mentioned, minimizes the sum of squared errors: $\widehat \beta_{OLS}=\operatorname{argmin}_b \sum(y_i -bx_i)^2$. The major reason it is so widely used is that it is BLUE, the best linear unbiased estimator (http://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem), i.e. among all linear unbiased estimators it has the lowest variance. Hence your estimator is unbiased, but $\widehat \beta_{OLS}$ is also unbiased and has lower variance.
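To spell out the unbiasedness claim for both estimators (treating the $x_i$ as fixed and using $E[\varepsilon_i]=0$):
$$E\left[\frac{1}{N}\sum \frac{y_i}{x_i}\right] = \frac{1}{N}\sum \frac{x_i\beta + E[\varepsilon_i]}{x_i} = \beta, \qquad E\left[\frac{\sum x_i y_i}{\sum x_i^2}\right] = \frac{\beta\sum x_i^2 + \sum x_i E[\varepsilon_i]}{\sum x_i^2} = \beta.$$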
There are other estimators that are sometimes thought to be better than $\widehat \beta_{OLS}=\frac{\sum x_i y_i}{\sum x_i^2}$. For example, the least absolute deviations estimator, $\widehat \beta_{LAD}=\operatorname{argmin}_b \sum |y_i -bx_i|$, is less sensitive to outliers (http://en.wikipedia.org/wiki/Least_absolute_deviations).
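As a quick sketch of the robustness point (my own illustration, not from the thread; the design and noise distribution are arbitrary choices), here is OLS versus LAD through the origin under heavy-tailed noise, with the LAD objective minimized numerically via SciPy:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
x = rng.uniform(0.5, 3.0, size=100)
beta = 2.0
# Heavy-tailed (Student-t, df=2) noise so that outliers actually occur
y = beta * x + rng.standard_t(df=2, size=100)

# OLS: closed form sum(x_i y_i) / sum(x_i^2)
b_ols = (x * y).sum() / (x ** 2).sum()

# LAD: minimize the convex sum of absolute deviations sum |y_i - b x_i|
b_lad = minimize_scalar(lambda b: np.abs(y - b * x).sum()).x

print(f"true beta = {beta}, OLS = {b_ols:.3f}, LAD = {b_lad:.3f}")
```

With outlier-prone noise like this, the LAD fit is typically closer to the true slope than the OLS fit, though both remain consistent.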
Actually, I think the estimator $\widehat \beta_2=\frac{1}{N}\sum \frac{y_i}{x_i}$ is not a very good one, because it is sensitive to small values of $x_i$ while the other estimators are not; if anything, I think $\widehat \beta_3=\frac{\sum y_i}{\sum x_i}$ is better. It is also hard to see how to extend $\widehat \beta_2$ to a multivariate regression.

Let's calculate the variances. Assume for simplicity that the $x_i$ are nonrandom. Then $$\operatorname{var}(\widehat \beta_2)=\operatorname{var}\left(\frac{1}{N}\sum \frac{y_i}{x_i}\right) =\frac{1}{N^2}\operatorname{var}\left( \sum \frac{x_i\beta+\varepsilon_i}{x_i}\right) =\frac{\sigma^2_{\varepsilon}}{N^2} \sum \frac{1}{x_i^2}, \\ \operatorname{var}(\widehat \beta_3)=\operatorname{var}\left( \frac{\sum y_i}{\sum x_i}\right)=\operatorname{var}\left( \frac{\sum(x_i\beta+\varepsilon_i)}{\sum x_i}\right) = \frac{N\sigma^2_{\varepsilon}}{(\sum x_i)^2}, \\ \operatorname{var}(\widehat \beta_{OLS})=\operatorname{var}\left( \frac{\sum x_iy_i}{\sum x_i^2}\right) =\operatorname{var}\left( \frac{\sum x_i(x_i\beta+\varepsilon_i)}{\sum x_i^2}\right) =\operatorname{var}\left( \frac{\sum x_i\varepsilon_i}{\sum x_i^2}\right) = \frac{\sigma^2_{\varepsilon} }{\sum x_i^2}.$$ We can see that the issue with $\widehat \beta_2$ arises when some of the $x_i$ are small. I'll let you apply Cauchy–Schwarz to actually prove that $\operatorname{var}(\widehat \beta_{OLS})\leq \operatorname{var}(\widehat \beta_{2})$.
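The three variance formulas are easy to check by simulation. A minimal Monte Carlo sketch in Python/NumPy (my own check; the fixed design values and $\beta$, $\sigma_\varepsilon$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed (nonrandom) design, as assumed in the derivation above
x = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
beta, sigma = 2.0, 1.0
N = len(x)
reps = 200_000

# Simulate y = beta * x + eps many times (one row per replication)
eps = rng.normal(0.0, sigma, size=(reps, N))
y = beta * x + eps

b_mean_ratio = np.mean(y / x, axis=1)          # (1/N) * sum(y_i / x_i)
b_sum_ratio = y.sum(axis=1) / x.sum()          # sum(y_i) / sum(x_i)
b_ols = (x * y).sum(axis=1) / (x ** 2).sum()   # sum(x_i y_i) / sum(x_i^2)

# Analytical variances from the derivation
v_mean_ratio = sigma**2 / N**2 * np.sum(1.0 / x**2)
v_sum_ratio = N * sigma**2 / x.sum()**2
v_ols = sigma**2 / np.sum(x**2)

print(f"mean-of-ratios: simulated {b_mean_ratio.var():.4f}, analytical {v_mean_ratio:.4f}")
print(f"ratio-of-sums:  simulated {b_sum_ratio.var():.4f}, analytical {v_sum_ratio:.4f}")
print(f"OLS:            simulated {b_ols.var():.4f}, analytical {v_ols:.4f}")
```

On this design (which includes a smallish $x_i = 0.5$), the simulated variances match the formulas and come out in the expected order: OLS smallest, ratio-of-sums in the middle, mean-of-ratios largest.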