I am new to linear algebra and I am facing this question. Let $A \in \mathbb R^{N \times M}$ be a full rank matrix with $M > N$. Since $A$ has full rank, I can find a solution $\mathbf x \in \mathbb R^M$ of the linear system $A \mathbf x = \mathbf b$ with $\mathbf b \in \mathbb R^N$. However, I want to determine a specific solution $\mathbf x^*$ such that $k$ of its entries, with $k < M$, are fixed values. So here is my reasoning. Since I want $\mathbf x^*$ to satisfy $A \mathbf x^* = \mathbf b$, we have $\mathbf x^* - \mathbf x \in \ker A$. Hence, if $\dim \ker A > k$, then we can always write $\mathbf x^* - \mathbf x$ as a linear combination of vectors of a basis of $\ker A$. My conclusion is that a necessary and sufficient condition to find a solution $\mathbf x^*$ with $k$ known values is that $M - \operatorname{rank} A = M - N = \dim \ker A > k$. Is this correct?
Vector with known values
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
1 Answer
It's well known that if a matrix has a nontrivial kernel, there are "free variables" in equations like $A \mathbf{x} = \mathbf{b}$. This question might be asking about when you can take specifically chosen variables to be free. I say "might be" because it's not clear that the question cares about which $k$ values are specified.
To see that it can make a difference, consider $A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$. Note that $A \mathbf{x} = \begin{pmatrix} x_1 & x_2 \end{pmatrix}^T$ holds for all $\mathbf{x}$, so that for any $\mathbf{b}$, every solution $\mathbf{x}$ to $A \mathbf{x} = \mathbf{b}$ has to match $\mathbf{b}$ in the first two components. So it's possible to arbitrarily specify "one entry" of a solution to $A \mathbf{x} = \mathbf{b}$, as long as you only want to specify the third entry. For this $A$, you wouldn't be able to arbitrarily specify the first or second entries, no matter what $\mathbf{b}$ is.
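The $2 \times 3$ example can be checked numerically. This is a small sketch using the matrix and reasoning above; the value $7$ for the free third entry is an arbitrary choice of mine:

```python
import numpy as np

# The 2x3 example: A x = b forces the first two entries of x to equal b.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([3.0, -1.0])

# ker A is spanned by e3, so every solution has the form (b1, b2, t).
x = np.array([b[0], b[1], 7.0])   # third entry chosen freely
assert np.allclose(A @ x, b)      # still a solution

# Fixing the first entry to anything other than b1 cannot give a solution:
y = np.array([0.0, b[1], 7.0])    # 0.0 != b[0]
assert not np.allclose(A @ y, b)
```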
If you permute the columns of this example and consider higher dimensional examples, you can see that for any $1 \leq k < M$, there is an $(M-k) \times M$ matrix $A$ having full rank and satisfying $\dim \ker A = k$ with the property for every $\mathbf{b}$, it is possible to arbitrarily specify one set of $k$ entries in a solution to $A \mathbf{x} = \mathbf{b}$, but where it is not possible to arbitrarily specify any other choice of $k$ entries in a solution to $A \mathbf{x} = \mathbf{b}$.
To work on general conditions, let's take $\mathbf{b}=\mathbf{0}$ to start. For integers $1 \leq n_1 < n_2 < \dots < n_k \leq M$, write $n = (n_1, n_2, \dots, n_k)$, and let $Q_n$ denote the $k \times M$ matrix with $1$s in the row $j$, column $n_j$ entries, $1 \leq j \leq k$, and zeros everywhere else. Note that for any $\mathbf{x} \in \mathbb{R}^M$, $Q_n \mathbf{x}$ is the vector $\begin{pmatrix} x_{n_1} & x_{n_2} & \dots & x_{n_k} \end{pmatrix}^T$ obtained from $\mathbf{x}$ by selecting out only the $n_1$th, $n_2$th, ..., $n_k$th entries. Fix also any $M \times M$ matrix $P$ whose range is $\ker A$. Then the set $S_n$ of $n_1, n_2, \dots, n_k$th entries of solutions to $A \mathbf{x} = \mathbf{0}$ is the range of $Q_n P$.
Because $\operatorname{rank}(XY) \leq \operatorname{rank}(Y)$ for any $X$ and $Y$, we deduce that $\dim S_n = \operatorname{rank}(Q_n P) \leq \operatorname{rank}(P) = \dim \ker A$. Conclusion: if we can arbitrarily specify some selection of $k$ entries in a solution to $A \mathbf{x} = \mathbf{0}$ (so that $S_n = \mathbb{R}^k$), then $k \leq \dim \ker A$.
Conversely, if $k \leq \dim \ker A$, then $\operatorname{rank}(P) \geq k$. Because the rank is the dimension of the row space, there is at least one selection of $k$ rows of $P$, say the $n_1$th, $n_2$th, ..., $n_k$th rows, that forms a linearly independent list. Defining $Q_n$ with this tuple $n = (n_1, \dots, n_k)$, we see that $Q_n P$ is just the $k \times M$ matrix consisting of the $n_1$th, ..., $n_k$th rows of $P$, and hence $\operatorname{rank}(Q_n P) = k$. Because $Q_n P$ maps into $\mathbb{R}^k$ and has rank $k$, its range is all of $\mathbb{R}^k$, i.e., $S_n = \mathbb{R}^k$. So we can arbitrarily specify the $n_1$th, ..., $n_k$th entries of a solution to $A \mathbf{x} = \mathbf{0}$ in this instance.
Summarizing the above, we have: it is possible to arbitrarily specify the $n_1$th, ..., $n_k$th entries of a solution to $A \mathbf{x} = \mathbf{0}$ if and only if

1. $k \leq \dim \ker A$, and
2. the $n_1$th, ..., $n_k$th rows of $P$ form a linearly independent list (equivalently, $\operatorname{rank}(Q_n P) = k$).
One possible choice of $P$ is the orthogonal projection onto the nullspace of $A$, which (since $A$ has full row rank, so that $A A^T$ is invertible) is explicitly given by $I - A^T (A A^T)^{-1} A$. If you have a computer program that can tell you the rank of a matrix, you can therefore answer the above question for any given $A$ and choice of $n_1, \dots, n_k$.
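As a sketch of that computation (the function name `can_specify` is mine, not standard): instead of forming $P$ explicitly, one can take any matrix $K$ whose columns form a basis of $\ker A$, since the range of $Q_n P$ and of $Q_n K$ are both $Q_n(\ker A)$, so their ranks agree. Here $K$ is read off from the SVD:

```python
import numpy as np

def can_specify(A, idx, tol=1e-10):
    """Can the entries at positions idx of a solution to A x = 0 be
    specified arbitrarily?  Criterion: rank(Q_n K) == k, where the
    columns of K form a basis of ker A."""
    _, s, Vt = np.linalg.svd(A)
    r = int((s > tol).sum())        # rank of A
    K = Vt[r:].T                    # M x (M - r); columns span ker A
    # Q_n K is just the selected rows of K.
    return np.linalg.matrix_rank(K[idx, :], tol=tol) == len(idx)

# The 2x3 example from the start of the answer:
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(can_specify(A, [2]))   # True: the third entry is free
print(can_specify(A, [0]))   # False: the first entry is pinned to b_1
```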
I don't know if the condition in 2 above has a simpler characterization. When $k = 1$, condition 2 holds for every choice of $n_1$ if and only if every row of $P$ is nonzero. Taking $P$ to be the orthogonal projection onto the nullspace of $A$, we see that this happens if and only if there is a single element of $\ker A$ that fails to be orthogonal to every one of the coordinate axes. (This does not hold in the $2 \times 3$ example $A$ that we started with, because every element of $\ker A = \{\begin{pmatrix} 0 & 0 & s\end{pmatrix}^T: s \in \mathbb{R}\}$ is orthogonal to both the first and second coordinate axes, reflecting our inability to freely specify the first and second variables in $A \mathbf{x} = \mathbf{0}$.) More generally, the condition appears to encode some kind of geometric relation between $\ker A$ and coordinate subspaces. Maybe someone else will have more information about this condition.
If $\mathbf{b}$ is nonzero, things are similar. We can still consider $P$ as above and run through the argument to see when it is possible to specify the $(n_1, \dots, n_k)$th entries of a solution to $A \mathbf{x} = \mathbf{0}$ (we need $\dim \ker A \geq k$ and the corresponding rows of $P$ to be linearly independent). From there, if $\mathbf{p}$ is a particular solution to $A \mathbf{x} = \mathbf{b}$, then considering vectors of the form $\mathbf{p} + \mathbf{v}$ where $A \mathbf{v} = \mathbf{0}$, we see that it is possible to arbitrarily specify the $(n_1, \dots, n_k)$th entries of a solution to $A \mathbf{x} = \mathbf{b}$: given a $k$-tuple $(a_1, \dots, a_k)$ that you want, find a solution $\mathbf{v}$ to $A \mathbf{v} = \mathbf{0}$ for which $v_{n_j} = a_j - p_{n_j}$, $1 \leq j \leq k$, and consider $\mathbf{p} + \mathbf{v}$. Conversely, suppose we can arbitrarily specify the $(n_1, \dots, n_k)$th entries of a solution to $A \mathbf{x} = \mathbf{b}$, and let $\mathbf{p}$ be a particular solution. For each $j$, choose a solution $\mathbf{v}_j$ of $A \mathbf{x} = \mathbf{b}$ whose $n_j$th entry is $p_{n_j} + 1$ and whose $n_i$th entry equals $p_{n_i}$ for $i \neq j$. Then the vectors $\mathbf{v}_j - \mathbf{p}$, $1 \leq j \leq k$, lie in $\ker A$ and are linearly independent (their $(n_1, \dots, n_k)$th entries form the standard basis of $\mathbb{R}^k$), so $\dim \ker A \geq k$.
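Putting the pieces together, here is a hedged sketch of a solver for the nonzero-$\mathbf{b}$ case (the function name and the small $1 \times 3$ example are mine): it finds a particular solution, then a kernel correction matching the prescribed entries, and verifies the result.

```python
import numpy as np

def solve_with_fixed_entries(A, b, idx, vals, tol=1e-10):
    """Find x with A x = b and x[idx] = vals, if possible."""
    p = np.linalg.lstsq(A, b, rcond=None)[0]    # a particular solution
    _, s, Vt = np.linalg.svd(A)
    r = int((s > tol).sum())
    K = Vt[r:].T                                 # columns span ker A
    # Want a kernel vector v = K c with v[idx] = vals - p[idx]:
    c = np.linalg.lstsq(K[idx, :], np.asarray(vals) - p[idx], rcond=None)[0]
    x = p + K @ c
    if not (np.allclose(A @ x, b) and np.allclose(x[idx], vals)):
        raise ValueError("these entries cannot be fixed to these values")
    return x

# Example: x1 + 2 x2 + 3 x3 = 6 with x1 = x2 = 1 forces x3 = 1.
A = np.array([[1.0, 2.0, 3.0]])
b = np.array([6.0])
x = solve_with_fixed_entries(A, b, [0, 1], [1.0, 1.0])
```

The least-squares solve for $c$ succeeds exactly when the prescribed entries are attainable; the final check guards against the cases (like the pinned entries of the $2 \times 3$ example) where they are not.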