Anyone who has walked on a beach knows that walking speed depends on how far from the ocean you walk: on the wet sand near the water you can walk much more quickly than on the dry sand. I have a question based on this principle.
On a Cartesian $xy$ plane limited to the domain $\{x \mid 0 \le x \le 1\}$ and range $\{y \mid 0 \le y \le 1\}$, you start at the point $(0,0)$ and would like to travel along a path to $(1,1)$ in the shortest amount of time. This sounds simple: just take the path $y=x$, because it is the shortest path and so should take the least time. But there is a catch: your forward speed is $\frac{dS}{dt} = 1-\frac{3}{4}y$. With this constraint in mind, the path $y=x$ is not the fastest. What is the fastest path? I am open to questions about the problem itself if I have not been clear. Thank you.

For your particular problem, there might be a simple solution in terms of e.g. Snell's law, as Hagen suggests in the comments above. But let me tell you a bit about how to solve these kinds of problems generally.
The general technique for solving this kind of optimization problem is the Calculus of Variations. The basic idea is this: you are looking at all paths $y(x)$ from $y(0)=0$ to $y(1)=1$, and trying to find the one that minimizes the total travel time $$T(y)=\int_0^1 \frac{\sqrt{1+y'^2}}{1-\frac{3}{4} y}\,dx.$$
(We can assume that the path is a graph over $x$, since it is obviously a bad idea to double back.)
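Before doing anything clever, it is worth checking numerically that the straight line really is beatable. Here is a minimal sketch of evaluating the travel-time functional above by the midpoint rule (the comparison path $y=x^2$ is an arbitrary choice of mine; it simply lingers in the fast low-$y$ region longer):

```python
import math

def travel_time(y, yp, n=20000):
    """Midpoint-rule approximation of T = ∫_0^1 sqrt(1 + y'^2) / (1 - 3y/4) dx."""
    h = 1.0 / n
    return sum(math.sqrt(1.0 + yp((i + 0.5) * h) ** 2)
               / (1.0 - 0.75 * y((i + 0.5) * h)) for i in range(n)) * h

# Straight line y = x; closed form is (4*sqrt(2)/3) * ln 4 ≈ 2.6140
T_line = travel_time(lambda x: x, lambda x: 1.0)

# Comparison path y = x^2 (an arbitrary choice that stays low, where travel is fast)
T_quad = travel_time(lambda x: x * x, lambda x: 2.0 * x)

print(T_line, T_quad)  # T_quad is smaller: the straight line is not optimal
```

So even a crude detour through the fast region beats $y=x$, which is exactly why the problem is interesting.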
How do you do this? Well, suppose you have a best path $y$. Now consider perturbing the path by moving every point along it by some vertical offset $\delta y(x)$. Since the endpoints of the path are fixed, you must have $\delta y(0) = \delta y(1) = 0$. For every scalar $\epsilon$, the perturbed curve $y + \epsilon \delta y$ can take no less time to travel than $y$, since $y$ is the best path. In other words, for every $\delta y$, $\epsilon=0$ is a minimum, and in particular a critical point, of the scalar function $$F(\epsilon) = T(y+\epsilon\delta y)$$ so $$\frac{d}{d\epsilon} T(y+\epsilon\delta y)\Bigg\vert_{\epsilon=0} = 0.$$
For now we can write $T$ more generally as $\int_0^1 g(y,y')\,dx$. Taking the above derivative we get $$\int_0^1 \left(g_y \delta y + g_{y'} (\delta y)'\right)\,dx=0.$$ The trick now is that you can get rid of the dependence on $(\delta y)'$ by integrating by parts: $$\int_0^1 \frac{d}{dx} \left(g_{y'}\delta y\right)\,dx = \int_0^1 \left(\frac{d}{dx}g_{y'}\right)\delta y\,dx + \int_0^1 g_{y'}(\delta y)'\,dx$$ and the left-hand side is zero since $\delta y(1) = \delta y(0) = 0.$ Plugging in, we get the equation $$\int_0^1 \left(g_y - \frac{d}{dx}g_{y'}\right)\delta y\,dx = 0.$$ The key step is that the above has to hold for every perturbation $\delta y$. The only way it can always vanish, for any $\delta y$ (including perturbations that are zero outside arbitrarily small neighborhoods of any point of $[0,1]$), is if the expression in parentheses, which doesn't depend on $\delta y$, is identically zero: $$\frac{d}{dx} g_{y'} = g_y,$$ an ordinary differential equation called the Euler-Lagrange equation of your variational problem.
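If you want to see the first-variation formula in action before trusting the integration by parts, here is a small numerical check (a sketch; the path $y=x^2$ and perturbation $\delta y = \sin \pi x$ are arbitrary choices of mine): the finite-difference derivative of $F(\epsilon)$ at $\epsilon=0$ should agree with $\int_0^1\left(g_y\,\delta y + g_{y'}(\delta y)'\right)dx$.

```python
import math

# g(y, y') = sqrt(1 + y'^2) / (1 - 3y/4) and its partial derivatives
def g(y, p):
    return math.sqrt(1.0 + p * p) / (1.0 - 0.75 * y)

def g_y(y, p):
    return 0.75 * math.sqrt(1.0 + p * p) / (1.0 - 0.75 * y) ** 2

def g_p(y, p):
    return p / ((1.0 - 0.75 * y) * math.sqrt(1.0 + p * p))

# Trial path y = x^2 and perturbation δy = sin(πx), which vanishes at both endpoints
y   = lambda x: x * x
yp  = lambda x: 2.0 * x
dy  = lambda x: math.sin(math.pi * x)
dyp = lambda x: math.pi * math.cos(math.pi * x)

def midpoint(f, n=20000):
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

def F(eps):
    """Travel time of the perturbed path y + eps*δy."""
    return midpoint(lambda x: g(y(x) + eps * dy(x), yp(x) + eps * dyp(x)))

h = 1e-5
fd = (F(h) - F(-h)) / (2.0 * h)  # finite-difference derivative of F at eps = 0
first_variation = midpoint(lambda x: g_y(y(x), yp(x)) * dy(x)
                           + g_p(y(x), yp(x)) * dyp(x))
print(fd, first_variation)  # the two numbers agree
```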
Now $$\frac{d}{dx}\left(y' g_{y'}\right) = y''g_{y'} + y' \frac{d}{dx}g_{y'},$$ and plugging in the Euler-Lagrange equation we get $$\frac{d}{dx}\left(y' g_{y'}\right) = y''g_{y'} + y' g_y = \frac{d}{dx}g,$$ where the last equality is the chain rule $\frac{d}{dx}g = g_y y' + g_{y'} y''$, valid because our $g$ has no explicit dependence on $x$.
Integrating both sides with respect to $x$ gives us $$y' g_{y'} = C + g$$ for some constant $C$. For our particular $g$, $$\frac{y'^2}{\left(1-\frac{3}{4}y\right)\sqrt{1+y'^2}} = C + \frac{\sqrt{1+y'^2}}{1-\frac{3}{4} y}.$$ Clearing out the denominator, things simplify to $$1 = -C\left(1-\frac{3}{4}y\right)\sqrt{1+y'^2}$$ or, since $y'\geq 0$, $$y' = \sqrt{\frac{1}{C^2\left(1-\frac{3}{4}y\right)^2}-1}.$$
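Since the simplification in the last step is easy to fumble, here is a quick numerical spot check (purely illustrative) that $y'g_{y'} - g$ really does reduce to $-\frac{1}{\left(1-\frac{3}{4}y\right)\sqrt{1+y'^2}}$, which is exactly the displayed equation with $C = y'g_{y'} - g$:

```python
import math

def discrepancy(y, p):
    """| (y'·g_{y'} - g) - ( -1 / ((1 - 3y/4)·sqrt(1 + y'^2)) ) |"""
    u = 1.0 - 0.75 * y
    s = math.sqrt(1.0 + p * p)
    g = s / u            # g(y, y')
    g_p = p / (u * s)    # ∂g/∂y'
    return abs((p * g_p - g) - (-1.0 / (u * s)))

# a few arbitrary sample points with 0 ≤ y < 4/3
for y, p in [(0.0, 0.25), (0.3, 1.0), (0.9, 2.5), (1.2, 4.0)]:
    assert discrepancy(y, p) < 1e-12
print("simplification checks out")
```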
This is now a separable first-order ODE; solving it with the initial condition $y(0)=0$ gives you a one-parameter family of curves starting at $(0,0)$, and the remaining boundary condition $y(1)=1$ then pins down the constant $C$.
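Here is one way to carry out that last step numerically (a sketch under my own choices: since $y'$ depends only on $y$, I integrate $dx/dy = 1/y'$ instead of $dy/dx$, and bisect on $C^2$ until the curve starting at $(0,0)$ also passes through $(1,1)$):

```python
import math

def slope(y, c2):
    """y' = sqrt(1/(C^2 (1 - 3y/4)^2) - 1); positive on [0, 1] whenever C^2 < 1."""
    u = 1.0 - 0.75 * y
    return math.sqrt(1.0 / (c2 * u * u) - 1.0)

def x_at_top(c2, n=4000):
    """x(1) = ∫_0^1 dy / y'(y), by the midpoint rule."""
    h = 1.0 / n
    return sum(h / slope((i + 0.5) * h, c2) for i in range(n))

# x(1) increases with C^2 (larger C^2 means a shallower curve), so bisect on C^2
lo, hi = 0.5, 0.999
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if x_at_top(mid) < 1.0:
        lo = mid
    else:
        hi = mid
c2 = 0.5 * (lo + hi)

def extremal_time(c2, n=4000):
    """T = ∫_0^1 sqrt(1 + y'^2) / ((1 - 3y/4) y') dy along the extremal."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        p = slope(y, c2)
        total += math.sqrt(1.0 + p * p) / ((1.0 - 0.75 * y) * p) * h
    return total

T_line = (4.0 * math.sqrt(2.0) / 3.0) * math.log(4.0)  # straight line, closed form
print(c2, extremal_time(c2), T_line)  # the extremal beats the straight line
```

The resulting extremal dips below the diagonal, spending more of its length in the fast low-$y$ region, and its travel time comes out strictly smaller than that of $y=x$.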