The image shows a block matrix with the following pattern:
- The diagonal matrices A11, …, A33 are square, of size m x m.
- The matrices in the last row-block, Â11T, …, Â33T, are of size n x m, where n < m.
- The matrices in the last column-block, Â11, …, Â33, are of size m x n, where n < m. Therefore, the last-row and last-column block matrices are rectangular.
- The last row-block Â11T, …, Â33T is the transpose of the last column-block Â11, …, Â33.
- The corner matrix Âk of the block matrix is of size n x n.
A corresponding example with specific values of m and n is explained here. (The same question has been posted on MathWorks at this link, but it has no answers.)
I have two questions :
1) Is there a specific name for this pattern of block matrix?
2) Is there a way to find the inverse of just the last row-block (Â11T, …, Â33T, Âk) of the block matrix, without finding the inverse of the entire block matrix?
(Additionally, I have attached this image showing the pattern of the block matrix.)
I'd call it a block-arrowhead matrix, seeing as "arrowhead matrix" is already an established term.
I don't know what you mean by inverting the last row-block, since that block is not square, but we can efficiently compute the inverse of the entire matrix using the block matrix inversion formula:
$$\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{A}^{-1} + \mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D} - \mathbf{CA}^{-1}\mathbf{B}\right)^{-1}\mathbf{CA}^{-1} & -\mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D} - \mathbf{CA}^{-1}\mathbf{B}\right)^{-1} \\ -\left(\mathbf{D} - \mathbf{CA}^{-1}\mathbf{B}\right)^{-1}\mathbf{CA}^{-1} & \left(\mathbf{D} - \mathbf{CA}^{-1}\mathbf{B}\right)^{-1} \end{bmatrix}$$
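As a sanity check, the formula can be verified numerically. Here is a minimal NumPy sketch; the variable names, the random test matrices, and the small diagonal shifts (added so that $\mathbf{A}$ and the Schur complement are safely invertible) are my own choices, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3

# Random blocks; the diagonal shifts keep A and the Schur complement
# well-conditioned so the inverses are numerically reliable.
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + m * np.eye(m)

M = np.block([[A, B], [C, D]])

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B            # Schur complement of A in M
Sinv = np.linalg.inv(S)

# Assemble the inverse block by block, following the formula above.
Minv = np.block([
    [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
    [-Sinv @ C @ Ainv,                   Sinv],
])

assert np.allclose(Minv, np.linalg.inv(M))
```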
For convenience, let's notate the blocks of the matrix as follows: $$\begin{bmatrix}A_1& & & &B_1\\ &A_2& & &B_2\\ & &\ddots& &\vdots\\ & & &A_k&B_k\\ C_1&C_2&\cdots&C_k&D \end{bmatrix}$$ where $A_1,\ldots,A_k$ are square $n \times n$ matrices, $B_1,\ldots,B_k$ are $n \times m$ matrices, $C_1,\ldots,C_k$ are $m \times n$ matrices, and $D$ is an $m \times m$ matrix. (Note that, relative to the question's notation, the roles of $m$ and $n$ are swapped here, and the corner block is called $D$ rather than Âk.)
Then, we can apply the above block matrix inversion formula with $\mathbf{A} = \text{blockdiag}(A_1,\ldots,A_k)$ (i.e. the block-diagonal portion of the matrix), $\mathbf{B} = \begin{bmatrix}B_1^T & B_2^T & \cdots & B_k^T\end{bmatrix}^T$, $\mathbf{C} = \begin{bmatrix}C_1 & C_2 & \cdots & C_k\end{bmatrix}$ and $\mathbf{D} = D$.
Then we can compute the inverse of the whole matrix in the following steps:

1. Compute $A_i^{-1}$ for each $i = 1, \ldots, k$; together these form the block-diagonal $\mathbf{A}^{-1}$.
2. Compute $C_i A_i^{-1}$ for each $i$; together these form $\mathbf{CA}^{-1}$.
3. Compute the Schur complement $\mathbf{D} - \mathbf{CA}^{-1}\mathbf{B} = D - \sum_{i=1}^k C_i A_i^{-1} B_i$.
4. Invert the Schur complement; $\left(\mathbf{D} - \mathbf{CA}^{-1}\mathbf{B}\right)^{-1}$ is the bottom-right block of the inverse.
5. Compute $\left(\mathbf{D} - \mathbf{CA}^{-1}\mathbf{B}\right)^{-1}\mathbf{CA}^{-1}$; its negation is the bottom-left block.
6. Compute $A_i^{-1} B_i \left(\mathbf{D} - \mathbf{CA}^{-1}\mathbf{B}\right)^{-1}$ for each $i$; stacked, these form $\mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D} - \mathbf{CA}^{-1}\mathbf{B}\right)^{-1}$, whose negation is the top-right block.
7. Multiply the result of step 6 ($kn \times m$) by $\mathbf{CA}^{-1}$ ($m \times kn$).
8. Add $\mathbf{A}^{-1}$ (block-diagonal, at most $kn^2$ nonzero entries) to the result of step 7 to get the top-left block.

Now let's count the operations:
Step 1 involves inverting $k$ matrices of size $n \times n$, which takes $O(kn^{\omega})$ operations. (Here $\omega$ is the exponent in the runtime of the matrix inversion algorithm you are using. Gaussian elimination has $\omega = 3$, Strassen's algorithm has $\omega = \log_2 7 \approx 2.807$, and other less practical algorithms have smaller values of $\omega$.)
Step 2 involves $k$ multiplications of an $m \times n$ matrix and an $n \times n$ matrix, which takes $O(kmn^2)$ operations.
Step 3 involves multiplying an $m \times kn$ matrix with a $kn \times m$ matrix and subtracting the result from an $m \times m$ matrix, which takes $O(km^2n)+O(m^2) = O(km^2n)$ operations.
Step 4 involves inverting an $m \times m$ matrix which takes $O(m^{\omega})$ operations.
Step 5 involves multiplying an $m \times m$ matrix with an $m \times kn$ matrix, which takes $O(km^2n)$ operations.
Step 6 involves $k$ products of an $n \times n$ matrix with an $n \times m$ matrix with an $m \times m$ matrix, which takes $O(kmn^2+km^2n)$ operations.
Step 7 involves multiplying a $kn \times m$ matrix with an $m \times kn$ matrix, which takes $O(k^2mn^2)$ operations.
Step 8 involves adding a matrix with at most $kn^2$ entries to the previous result, which takes $O(kn^2)$ operations.
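Putting the steps together, the whole procedure can be sketched in NumPy as follows. The function name and the blocks-as-lists representation are my own assumptions, not from the question; the point is that we only ever invert the $k$ small diagonal blocks and the $m \times m$ Schur complement, never the full $(kn+m) \times (kn+m)$ matrix:

```python
import numpy as np

def block_arrowhead_inverse(A_blocks, B_blocks, C_blocks, D):
    """Invert a block-arrowhead matrix given its blocks.

    A_blocks: k diagonal blocks, each n x n
    B_blocks: k last-column blocks, each n x m
    C_blocks: k last-row blocks, each m x n
    D:        the m x m corner block
    """
    k = len(A_blocks)
    # Step 1: invert the k diagonal blocks (together, A^{-1}).
    Ainv = [np.linalg.inv(Ai) for Ai in A_blocks]
    # Step 2: C_i A_i^{-1}, the blocks of C A^{-1}.
    CAinv = [Ci @ Aiv for Ci, Aiv in zip(C_blocks, Ainv)]
    # Step 3: Schur complement D - C A^{-1} B = D - sum_i C_i A_i^{-1} B_i.
    S = D - sum(CAi @ Bi for CAi, Bi in zip(CAinv, B_blocks))
    # Step 4: invert the Schur complement; this is the bottom-right block.
    Sinv = np.linalg.inv(S)
    # Step 5: S^{-1} C A^{-1}; its negation is the bottom-left block row.
    bottom = [-Sinv @ CAi for CAi in CAinv]
    # Step 6: A_i^{-1} B_i S^{-1}; negated and stacked, the top-right column.
    AinvB_Sinv = [Aiv @ Bi @ Sinv for Aiv, Bi in zip(Ainv, B_blocks)]
    # Steps 7-8: top-left (i, j) block is
    # [i == j] * A_i^{-1} + A_i^{-1} B_i S^{-1} C_j A_j^{-1}.
    top_left = [
        [AinvB_Sinv[i] @ CAinv[j] + (Ainv[i] if i == j else 0)
         for j in range(k)]
        for i in range(k)
    ]
    rows = [top_left[i] + [-AinvB_Sinv[i]] for i in range(k)]
    rows.append(bottom + [Sinv])
    return np.block(rows)
```

A quick way to check the sketch is to assemble the full matrix with `np.block` (zero-padding off the diagonal) and compare the result against `np.linalg.inv`.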
If we use Gaussian elimination for all the inversions and $n \gg m, k$, then the dominant part of the cost is the matrix inversion steps. This block-arrowhead inversion algorithm takes $O(kn^3)$ operations, while Gaussian elimination on an arbitrary $(kn+m) \times (kn+m)$ matrix takes $O(k^3n^3)$ operations. Analyzing the cost savings in other cases is a bit trickier, but the above algorithm usually saves work over a general inversion algorithm.