$$f_{X,Y}(x,y)=f_X(x)f_Y(y)$$
This is the standard criterion for determining whether the random variables $X$ and $Y$ are independent. Sometimes $f_{X,Y}(x,y)$ or $f_X(x)f_Y(y)$ is unknown and can be quite a hassle to calculate, so I have compiled a few shortcuts I have been using to determine the relation between $X$ and $Y$. Any proofs to verify, or edge cases to falsify, my hypotheses would be greatly appreciated.
Scenario 1:
$X$ is restricted by $Y$. Example: $0\le X\le Y$
$X$ and $Y$ are restricted by each other. Example: $0\le X^2\le Y, \quad 0\le Y^2\le X$
Do the two cases above always imply that $X$ and $Y$ are not independent?
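A quick symbolic check of Scenario 1 is possible with SymPy. The density below, uniform on the triangle $0\le x\le y\le 1$, is my own hypothetical example (not from the question); its support restricts $X$ by $Y$, and indeed the product of the marginals fails to match the joint density:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)

# Hypothetical joint density: uniform (value 2) on the triangle 0 <= x <= y <= 1
f_xy = 2

# Marginals obtained by integrating out the other variable over the support
f_x = sp.integrate(f_xy, (y, x, 1))   # 2*(1 - x)
f_y = sp.integrate(f_xy, (x, 0, y))   # 2*y

# Independence would require f_xy == f_x * f_y everywhere on the support
print(sp.simplify(f_x * f_y))              # 4*y*(1 - x), not the constant 2
print(sp.simplify(f_x * f_y - f_xy) == 0)  # False -> X and Y are dependent
```

Here the dependence forced by the support shows up directly: $f_X(x)f_Y(y)=4y(1-x)\ne 2=f_{X,Y}(x,y)$.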
Scenario 2:
$\frac{f_{X,Y}(x,y)}{f_Y(y)}=f_{X\mid Y=y}(x)$ contains the variable $y$, and/or $\frac{f_{X,Y}(x,y)}{f_X(x)}=f_{Y\mid X=x}(y)$ contains the variable $x$.
Does this imply that $X$ and $Y$ are not independent?
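As a sketch of Scenario 2, consider a hypothetical joint density that is uniform on the triangle $0\le x\le y\le 1$ (my own example, not from the question). The conditional density of $X$ given $Y=y$ still contains $y$, which signals dependence:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)

# Hypothetical joint density: uniform (value 2) on the triangle 0 <= x <= y <= 1
f_xy = 2
f_y = sp.integrate(f_xy, (x, 0, y))   # marginal of Y: 2*y

# Conditional density of X given Y = y (valid for 0 <= x <= y)
f_x_given_y = sp.simplify(f_xy / f_y)  # 1/y

# The conditional still depends on y, so X and Y cannot be independent
print(f_x_given_y)                     # 1/y
print(y in f_x_given_y.free_symbols)   # True
```

Checking `free_symbols` is a mechanical way of asking the question posed here: does the conditional density still contain the conditioning variable?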
Scenario 1+2:
If none of the conditions above hold, does that imply that $X$ and $Y$ are independent?
If the support of the joint distribution indicates a clear dependency (such as, as you say, when the variables restrict each other), then indeed the random variables cannot be independent. However, the absence of such a restriction does not by itself imply independence.
If the conditional density is non-constant with respect to the conditioning variable (i.e., it does not equal the marginal density), that is a clear indication of dependence. That is, independence holds exactly when $f_X(x)=f_{X\mid Y}(x\mid y)$ for all $y$, where $f_X$ and $f_{X\mid Y}$ are the marginal and conditional probability density (or mass) functions.
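As a sketch of this criterion in the positive direction, take a hypothetical joint density that factorizes, $f(x,y)=4xy$ on the unit square (my own example). SymPy confirms that the conditional density equals the marginal for every $y$:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)

# Hypothetical factorizing joint density on the unit square: f(x, y) = 4*x*y
f_xy = 4 * x * y

f_x = sp.integrate(f_xy, (y, 0, 1))    # marginal of X: 2*x
f_y = sp.integrate(f_xy, (x, 0, 1))    # marginal of Y: 2*y

# Conditional density of X given Y = y; the y cancels entirely
f_x_given_y = sp.simplify(f_xy / f_y)  # 2*x

# f_{X|Y}(x|y) == f_X(x) for all y  <=>  independence
print(sp.simplify(f_x_given_y - f_x) == 0)  # True
```

The cancellation of $y$ in $f_{X\mid Y}$ is exactly the condition $f_X(x)=f_{X\mid Y}(x\mid y)$ for all $y$ stated above.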