Let $X_1,\dots,X_n$ denote a sample on a random variable $X$ with pdf $f(x;\theta)$, $\theta \in \Omega$. Let $T=T(X_1,\dots,X_n)$ be a statistic. $T$ is called an unbiased (point) estimator of $\theta$ if $\mathbf E (T)=\theta.$
Now, why is such a $T$ called “unbiased”? And what would be the intuitive interpretation of “bias” in a statistic for which $\mathbf E (T) \neq \theta$?
The bias of an estimator $\hat{\theta}$ for a parameter $\theta$ is defined as $$\mathrm{Bias}(\hat{\theta})=\mathbb{E}(\hat{\theta})-\theta.$$ Thus "unbiased" is exactly the same as having $\mathrm{Bias}(\hat{\theta})=0.$
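To see this definition in action, here is a small Monte Carlo sketch (my own illustration, not part of the question) contrasting an unbiased estimator with a biased one: the sample mean $\bar X$ is unbiased for the population mean, while the plug-in variance estimator $\frac1n\sum_i(X_i-\bar X)^2$ has expectation $\frac{n-1}{n}\sigma^2$ and hence bias $-\sigma^2/n$.

```python
import random

random.seed(0)

# Draw many samples of size n from a standard normal (mu = 0, sigma^2 = 1)
# and average each estimator across replications to approximate its expectation.
n, reps = 5, 200_000
mean_hat_sum = 0.0
var_hat_sum = 0.0
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    xbar = sum(x) / n
    mean_hat_sum += xbar
    # Plug-in variance estimator: divides by n, not n - 1.
    var_hat_sum += sum((xi - xbar) ** 2 for xi in x) / n

e_mean_hat = mean_hat_sum / reps  # approx 0, so Bias(xbar) approx 0
e_var_hat = var_hat_sum / reps    # approx (n-1)/n = 0.8, so bias approx -0.2
print(e_mean_hat, e_var_hat)
```

The simulated expectations land near $0$ and near $0.8$, matching $\mathrm{Bias}(\bar X)=0$ and $\mathrm{Bias}(\hat\sigma^2)=-\sigma^2/n=-0.2$ for $n=5$.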
For an intuitive explanation, suppose we have a population of people living in a small village who have heights $65",65",67",68",78",$ but the person who is $78"$ tall is a hermit who doesn't like being sampled. Suppose our estimator $\hat{\mu}$ of the population mean is the average height of two people sampled from the four non-hermits. Averaging over the $\binom{4}{2}=6$ equally likely pairs, we get $$E(\hat{\mu})=\frac{65"+66"+66"+66.5"+66.5"+67.5"}{\binom{4}{2}}=\frac{397.5"}{6}=66.25",$$ while the true mean is $\mu=\frac{65+65+67+68+78}{5}=68.6".$ Thus, this estimator has a bias of $66.25"-68.6"=-2.35".$ But we should expect this estimator to be biased, since it isn't taking a representative sample of the population, so this agrees with intuition.