When performing linear regression with one explanatory variable (predictor), one can compute Fisher's F test with value $F$ and derive Student's t test as $T=\sqrt{F}$.
When there is more than one explanatory variable (predictor), the relationship $T=\sqrt{F}$ no longer holds. Is there a relationship between $F$ and the individual Student t statistics $T_i$ (one for each variable)? Perhaps a system of equations?
Note: I am referring to the F test in Excel's linear regression report.

Be careful: there is Fisher's exact test (https://en.wikipedia.org/wiki/Fisher%27s_exact_test) and Fisher's F-test (https://en.wikipedia.org/wiki/F-test). Both tests are often referred to as "Fisher's test". From your description it is clear that you mean Fisher's F-test.
Assumptions
A1. Suppose the data generating process is given by $\mu = \boldsymbol x'\boldsymbol \beta$, where $\mu$ denotes the mean of some random variable $y$, $\boldsymbol x = (x_1, x_2,\dots, x_p)'$ is a vector of other random variables with $x_1 = 1$, and $\boldsymbol \beta = (\beta_1, \beta_2, \dots, \beta_p)'$ is the "effect" of $\boldsymbol x$ on the mean of $y$.
A2. An independent sample $\{(y_i, \boldsymbol x_i') : i = 1,2,\dots, n\}$ is observed and each observation in the sample is distributed according to the relation given in A1.
A3. The conditional distribution of $y$ given $\boldsymbol x$ is a normal distribution with mean $\mu = \boldsymbol x'\boldsymbol\beta$ and variance $\sigma^2$.
Notation
Let $$\boldsymbol y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n\end{pmatrix},\qquad \text{and}\qquad \boldsymbol X = \begin{pmatrix} \boldsymbol x_1' \\ \boldsymbol x_2' \\ \vdots \\ \boldsymbol x_n'\end{pmatrix}.$$
Inference
The ordinary least squares estimator (OLSE) $\hat{\boldsymbol\beta}$ of $\boldsymbol\beta$ is given by $$\hat{\boldsymbol\beta} = (\boldsymbol X'\boldsymbol X)^{-1}(\boldsymbol X'\boldsymbol y).$$ For simplicity, I assume in addition to A1, A2, and A3 that $\boldsymbol X$ has full rank. However, this assumption can be dropped. In that case $(\boldsymbol X'\boldsymbol X)^{-1}$ has to be replaced with a generalized inverse $(\boldsymbol X'\boldsymbol X)^{-}$, e.g. the Moore-Penrose inverse. Most of what follows holds, mutatis mutandis, in this more general setting.
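As a quick numerical sanity check, here is a minimal Python/NumPy sketch on made-up data. It computes the OLSE once via the normal equations with `np.linalg.pinv` (the Moore-Penrose generalized inverse, which also covers the rank-deficient case) and once via least squares, the numerically preferable route most software uses:

```python
import numpy as np

# Hypothetical toy data: n = 6 observations, p = 2 columns, x_1 = 1
rng = np.random.default_rng(0)
n = 6
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
y = X @ beta + rng.normal(scale=0.5, size=n)

# OLSE via the normal equations; pinv is the Moore-Penrose generalized
# inverse, so the same line also works if X does not have full rank
beta_hat = np.linalg.pinv(X.T @ X) @ (X.T @ y)

# Numerically more stable route via least squares
beta_hat_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(beta_hat, beta_hat_lstsq)
```
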
By A1, A2, and A3, it follows that $\boldsymbol y$ is multivariate normally distributed with mean vector $\boldsymbol X\boldsymbol\beta$ and variance-covariance matrix $\sigma^2\boldsymbol I_n$. Here $\boldsymbol I_n$ denotes the identity matrix in $\mathbb R^{n\times n}$. Now note that the matrix $\boldsymbol A = (\boldsymbol X'\boldsymbol X)^{-1}\boldsymbol X'$ maps $\boldsymbol y$ linearly to $\hat{\boldsymbol\beta} = \boldsymbol A\boldsymbol y$. Since the normal distribution is invariant under linear-affine transformations, $\hat{\boldsymbol\beta}$ is also normally distributed, with mean $\boldsymbol\beta$ and variance-covariance matrix $\sigma^2(\boldsymbol X'\boldsymbol X)^{-1}$. To confirm this result, apply the rules of the mean and the variance operator to $\hat{\boldsymbol\beta}$.
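The mean and covariance of $\hat{\boldsymbol\beta}$ can also be checked by simulation. The following sketch (design matrix and parameters are made up) compares the Monte Carlo mean and covariance of the OLSE with $\boldsymbol\beta$ and $\sigma^2(\boldsymbol X'\boldsymbol X)^{-1}$:

```python
import numpy as np

# Hypothetical fixed design and parameters
rng = np.random.default_rng(4)
n, sigma = 25, 1.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([0.5, -1.0])
mu = X @ beta                              # mean vector of y
A = np.linalg.inv(X.T @ X) @ X.T           # beta_hat = A y is linear in y

# Draw many samples of y and re-estimate beta each time
betas = np.array([A @ (mu + rng.normal(scale=sigma, size=n))
                  for _ in range(50000)])

emp_mean = betas.mean(axis=0)              # should be close to beta
emp_cov = np.cov(betas.T)                  # should be close to theory
theory = sigma**2 * np.linalg.inv(X.T @ X)
```
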
If $\sigma^2$ is known, a test for $\beta_j = 0$ could be carried out immediately by comparing the absolute value of the test statistic $$\frac{\hat\beta_j - \beta_j}{\sqrt{\xi_{jj}}}$$ against the $(1-\frac\alpha 2)$ quantile of a standard normal distribution. Here $\xi_{jj}$ is the $j$th diagonal element of the matrix $\sigma^2(\boldsymbol X'\boldsymbol X)^{-1}$. The above procedure can only test one of the effects $\beta_j$ at a time. If we are interested in testing $\boldsymbol\beta = \boldsymbol 0_p$ simultaneously, where $\boldsymbol 0_p$ denotes a vector of $p$ zeroes, a different approach is necessary. This is the motivation for the F-test. Recall that the sum of $k$ squared independent standard normal random variables is $\chi^2$ distributed with $k$ degrees of freedom. Hence $$(\hat{\boldsymbol\beta} - \boldsymbol\beta)'\big(\sigma^2(\boldsymbol X'\boldsymbol X)^{-1}\big)^{-1}(\hat{\boldsymbol\beta} - \boldsymbol\beta)$$ is $\chi^2$ distributed with $p$ degrees of freedom. This result can be confirmed by noting that $\big(\sigma^2(\boldsymbol X'\boldsymbol X)^{-1}\big)^{-\frac 12}(\hat{\boldsymbol\beta}- \boldsymbol\beta)$ is multivariate standard normal, and the above expression corresponds to the sum of squares of this standard normal vector.
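With $\sigma^2$ known, the $\chi^2_p$ claim can likewise be checked by simulation: the mean of the quadratic form over many samples should be close to $p$ (the setup below is made up; note that $\big(\sigma^2(\boldsymbol X'\boldsymbol X)^{-1}\big)^{-1} = \sigma^{-2}\boldsymbol X'\boldsymbol X$):

```python
import numpy as np

# Hypothetical fixed design, true parameters, and known sigma
rng = np.random.default_rng(3)
n, p, sigma = 40, 3, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, -1.0, 0.5])
mu = X @ beta
A = np.linalg.inv(X.T @ X) @ X.T
M = (X.T @ X) / sigma**2       # inverse of sigma^2 (X'X)^{-1}

# (beta_hat - beta)' M (beta_hat - beta) over many samples; this
# quadratic form is chi^2 with p degrees of freedom, hence has mean p
draws = []
for _ in range(20000):
    d = A @ (mu + rng.normal(scale=sigma, size=n)) - beta
    draws.append(d @ M @ d)
mean_draw = np.mean(draws)     # close to p = 3
```
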
However, in most applications, $\sigma^2$ is unknown and has to be estimated by $$\hat\sigma^2 = \frac{1}{n-p}(\boldsymbol y - \boldsymbol X\hat{\boldsymbol\beta})'(\boldsymbol y - \boldsymbol X\hat{\boldsymbol\beta}).$$ Under assumptions A1, A2, and A3, it can be shown that $(n-p)\frac{\hat\sigma^2}{\sigma^2}$ is $\chi^2$ distributed with $n-p$ degrees of freedom. It can further be shown that $$\sqrt{\frac{\sigma^2}{\hat\sigma^2}}\frac{\hat\beta_j - \beta_j}{\sqrt{\xi_{jj}}} $$ is t distributed with $n-p$ degrees of freedom. Similarly, it can be shown that $$\frac{1}{p}\cdot\frac{\sigma^2}{\hat\sigma^2}(\hat{\boldsymbol\beta} - \boldsymbol\beta)'\big(\sigma^2(\boldsymbol X'\boldsymbol X)^{-1}\big)^{-1}(\hat{\boldsymbol\beta} - \boldsymbol\beta)$$ is F distributed with $p$ degrees of freedom in the numerator and $n-p$ degrees of freedom in the denominator.
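In code, the estimate $\hat\sigma^2$, the per-coefficient t statistics, and the F statistic for $\boldsymbol\beta = \boldsymbol 0_p$ (intercept included) look as follows; the data and all names are my own invented example:

```python
import numpy as np
from scipy import stats

# Simulated data set (hypothetical); p counts the intercept column
rng = np.random.default_rng(1)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 0.5, -0.3])
y = X @ beta + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p)          # \hat\sigma^2

# One t statistic per coefficient for H0: beta_j = 0
se = np.sqrt(sigma2_hat * np.diag(XtX_inv))
t_stats = beta_hat / se
p_values = 2 * stats.t.sf(np.abs(t_stats), df=n - p)

# F statistic for H0: beta = 0 (all p coefficients, intercept included)
F_all = beta_hat @ (X.T @ X) @ beta_hat / (p * sigma2_hat)
```
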
This is, however, not yet the test statistic reported by Excel (or by other statistical software packages). Note that our null hypothesis was $\boldsymbol\beta = \boldsymbol 0_p$. That is, we are also testing whether the effect of the intercept, $\beta_1$, is equal to zero. This is often not interesting, hence the test in question tests whether there is any simultaneous effect of the non-constant random variables in $\boldsymbol x$, i.e. $x_2, x_3, \dots, x_p$. Consequently, we have to slightly modify the above procedure. Note that we actually just want to get rid of the first element of $\boldsymbol\beta$. This can be easily done by considering an "elimination matrix" $$\boldsymbol R = \begin{pmatrix} 0 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & 0 & 1 \end{pmatrix}$$ such that $$\boldsymbol R\boldsymbol\beta = \begin{pmatrix} \beta_2 \\ \beta_3 \\ \vdots \\ \beta_p \end{pmatrix}.$$ Note that $\boldsymbol R$ has rank $p-1$. The test in question now tests the null hypothesis $\boldsymbol R\boldsymbol\beta = \boldsymbol 0_{p-1}$. Now it is a good exercise to redo all of the above derivation for $\boldsymbol R\hat{\boldsymbol\beta}$ (instead of just $\hat{\boldsymbol\beta}$) to obtain the test statistic for the test in question, $$F:=\frac{1}{(p-1)\,\hat\sigma^2}(\hat{\boldsymbol\beta} - \boldsymbol\beta)'\boldsymbol R'\big(\boldsymbol R(\boldsymbol X'\boldsymbol X)^{-1}\boldsymbol R'\big)^{-1}\boldsymbol R(\hat{\boldsymbol\beta} - \boldsymbol\beta),$$ which is F distributed with $p-1$ degrees of freedom in the numerator and $n-p$ in the denominator.
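Putting this together, a minimal sketch of the overall F test on made-up data. It also cross-checks the numerator quadratic form against the regression sum of squares, an identity that holds exactly when the model contains an intercept:

```python
import numpy as np
from scipy import stats

# Hypothetical simulated data; target is the overall F statistic that
# regression software reports: H0: beta_2 = ... = beta_p = 0
rng = np.random.default_rng(5)
n, p = 60, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([2.0, 0.0, 0.0, 0.0])         # true slopes are all zero
y = X @ beta + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p)

# Elimination matrix R: R beta = (beta_2, ..., beta_p)'
R = np.hstack([np.zeros((p - 1, 1)), np.eye(p - 1)])

# F statistic for H0: R beta = 0 (beta is set to 0 in the formula)
q = R @ beta_hat
F = q @ np.linalg.inv(R @ XtX_inv @ R.T) @ q / ((p - 1) * sigma2_hat)
p_value = stats.f.sf(F, p - 1, n - p)

# Cross-check: with an intercept in the model, the numerator quadratic
# form equals the regression sum of squares sum((y_hat - mean(y))^2)
ess = np.sum((X @ beta_hat - y.mean()) ** 2)
assert np.isclose(q @ np.linalg.inv(R @ XtX_inv @ R.T) @ q, ess)
```
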
Conclusion
If the model contains the intercept $\beta_1$ and one slope parameter $\beta_2$, the F test and the t test for the slope parameter coincide in the sense that $T = \sqrt F$, as shown above. If there are multiple slope parameters, the t statistics and the F statistic are of course different, because they test different hypotheses.
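The special case $p = 2$ can be confirmed numerically (simulated data; here $\boldsymbol R = (0\;\; 1)$, so the F formula above reduces to $\hat\beta_2^2/(\hat\sigma^2\,\xi_{22}/\sigma^2)$):

```python
import numpy as np

# Simple regression: intercept plus one slope (hypothetical data)
rng = np.random.default_rng(2)
n = 30
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - 2)

# t statistic for the slope, and F statistic for H0: beta_2 = 0
t_slope = beta_hat[1] / np.sqrt(sigma2_hat * XtX_inv[1, 1])
F = beta_hat[1] ** 2 / (sigma2_hat * XtX_inv[1, 1])
assert np.isclose(abs(t_slope), np.sqrt(F))   # T = sqrt(F)
```
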
By appropriately changing the $\boldsymbol R$ matrix, each of the t tests could be rephrased as an F test. But even then, the issue would remain that the two testing procedures test different hypotheses.
Nevertheless, both tests are related. For example, one could compute the joint distribution of $\boldsymbol T = (T_2, T_3, \dots, T_p)'$ and then compare the $(1-\alpha)$ equicoordinate quantile of this distribution to the maximum of the $|T_i|$. The resulting test decision should agree with that of the F test. But it is not possible to, say, find a function $f$ such that $f(T_2, T_3, \dots, T_p) = F$, since the F statistic incorporates the covariances of the $\hat{\boldsymbol\beta}$ vector, while the individual t statistics do not.