Is it ok to cancel out fractions while integrating? If yes, how? For example: $$\int \frac{x^3+1}{x+1}dx = \int \frac{(x+1)(x^2-x+1)}{x+1}dx = \int (x^2-x+1)dx = \frac{x^3}3-\frac{x^2}2+x + C$$
Wouldn't it be undefined at $x = -1$, since the divisor is $0$?
Let me address definite and indefinite integrals separately.
For $a<-1<b$, I'm not aware of a standard definition of $\int_a^b f(x)dx$, where $f(x)=\frac{x^3+1}{x+1}$, $x\neq -1$. How one defines it is a matter of convention, but regardless of the convention, as David said in the comments, the value of the integral will be the same as what you calculated by cancelling.

One approach: define a new function $\hat{f}$ by letting $\hat{f}(x)=f(x)$ for all $x\neq -1$ and $\hat{f}(-1)=c$. Then we can show that the Riemann integral of $\hat{f}$ exists and does not depend on the value of $c$: take partitions that enclose $-1$ in a very narrow subinterval, so the value at $-1$ contributes negligibly as the mesh of the partition vanishes. The same goes for Lebesgue integration, since a single point is a null set. So if I were presenting this material in a course, I would say that in this case, we define the integral to be $\int_a^b \hat{f}(x)dx$, where $\hat{f}$ is any extension of $f$. All extensions give the same definite integral, so this is unambiguous.

However, I could also see a course in which the convention is to treat this as an improper integral, since technically the function has a (removable) discontinuity at $-1$. We could define $$\int_a^{-1}f(x)dx = \lim_{t\to -1^-} \int_a^t f(x)dx$$ for $a<-1$, $$\int_{-1}^b f(x)dx=\lim_{t\to -1^+}\int_t^b f(x)dx$$ for $b>-1$, and for $a<-1<b$, $$\int_a^b f(x)dx=\lim_{s\to -1^-}\int_a^s f(x)dx+\lim_{t\to -1^+}\int_t^b f(x)dx.$$ With these definitions, we completely avoid evaluating $f$ at $-1$; the quantities depend only on the values of $f$ away from $-1$. If we define $g(x)=x^2-x+1$ (your cancelled function), then since $f(x)=g(x)$ for all $x\neq -1$, $$\int_a^b f(x)dx = \lim_{s\to -1^-}\int_a^s f(x)dx+\lim_{t\to -1^+}\int_t^b f(x)dx = \lim_{s\to -1^-}\int_a^s g(x)dx+\lim_{t\to -1^+}\int_t^b g(x)dx = \int_a^b g(x)dx.$$
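The claim that the value at $-1$ contributes negligibly can also be checked numerically. Here is a minimal sketch (the interval $[-3,1]$, the grid size, and the values of $c$ are my arbitrary choices): a left-endpoint Riemann sum of $\hat{f}$ on a uniform grid chosen so that $-1$ is exactly a sample point, computed for two different values of $c$.

```python
def f_hat(x, c):
    # Extension of f: agrees with (x^3+1)/(x+1) away from -1, takes the value c at -1.
    if x == -1.0:
        return c
    return (x**3 + 1) / (x + 1)

def left_riemann_sum(a, b, n, c):
    # Left-endpoint Riemann sum of f_hat over n equal subintervals of [a, b].
    h = (b - a) / n
    return sum(f_hat(a + i * h, c) * h for i in range(n))

# With a = -3, b = 1, n a power of two, h is a power of two and the grid
# hits -1 exactly, so the choice of c genuinely enters the sum.
exact = 52 / 3  # integral of x^2 - x + 1 over [-3, 1]
s0 = left_riemann_sum(-3.0, 1.0, 2**18, 0.0)
s1 = left_riemann_sum(-3.0, 1.0, 2**18, 1000.0)
print(abs(s0 - exact))  # small discretization error
print(abs(s1 - s0))     # ≈ 1000*h: one sample's worth, vanishing as h -> 0
```

Even with the wildly different value $c=1000$ at the single point $-1$, the two sums differ by only $c\cdot h$, which goes to $0$ with the mesh.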
So how we prove it depends on how we define the definite integral, but under either convention, the definite integrals of $f$ and $g$ are equal.
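If sympy is available, the agreement can also be confirmed symbolically (the endpoints $-2$ and $2$ are my arbitrary choice of an interval straddling $-1$):

```python
import sympy as sp

x = sp.symbols('x')
f = (x**3 + 1) / (x + 1)
g = sp.cancel(f)                    # performs the cancellation
print(g)                            # x**2 - x + 1
print(sp.integrate(f, (x, -2, 2)))  # 28/3
print(sp.integrate(g, (x, -2, 2)))  # 28/3
```

Both definite integrals come out the same, as expected.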
For indefinite integrals, there is another ambiguity. Remember that the purpose of writing $+C$ in an indefinite integral is to express the "most general" antiderivative. Note that $f(x)=\frac{x^3+1}{x+1}$ is only defined on $(-\infty,-1)\cup (-1,\infty)$, and this domain is often completely ignored when writing indefinite integrals/antiderivatives. If we ask for something very weak, namely a function $F$ defined on the same domain as $f$ satisfying $F'(x)=f(x)$ for all $x$ in this domain, then the most general such function is actually $$F(x)=\left\{\begin{array}{ll}\frac{x^3}{3}-\frac{x^2}{2}+x+C_1 & : x<-1 \\ \frac{x^3}{3}-\frac{x^2}{2}+x+C_2 & : x>-1,\end{array}\right.$$ with two independent constants, because the domain has two connected components and the constant can be chosen separately on each.
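This subtlety is easy to miss because a computer algebra system typically reports a single antiderivative and ignores the hole in the domain. A small sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = (x**3 + 1) / (x + 1)
F = sp.integrate(f, x)                 # one antiderivative; no +C, hole at -1 ignored
print(F)                               # x**3/3 - x**2/2 + x
print(sp.simplify(sp.diff(F, x) - f))  # 0, i.e. F' = f wherever f is defined
```

The single formula sympy returns is correct on each component, but the "most general" antiderivative still carries a separate constant on each side of $-1$.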