First of all, sorry for my bad English. How can an artificial neuron with $n$ inputs and $n$ weights be a linear separator? In a lot of courses, the first classes show an artificial neuron with $n$ inputs and weights, and on the other hand they show a Cartesian coordinate system with a line which is the separator for that neuron's output. In the coordinate system I saw points from region A and region B, separated by a line. What do that neuron and its inputs and weights have to do with those points in the coordinate system? The coordinate system has two axes, x and y, with values like ..., -2, -1, 0, 1, 2, ..., while the neuron just has $n$ inputs, each either 0 or 1, multiplied by the weights and summed up. What is the point of this? Thank you!
2026-03-26 09:15:05.1774516505
Linear Separator
218 Views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There is 1 best solution below.
I think you are just getting confused by the 'linear' part. The line happens to be the graph of a linear equation. Every linear equation has the format $ax + b = y$. This corresponds to the neuron: $x$ is your input, $a$ is the weight associated with it, and $b$ is the bias.
In the neural network you have a bunch of these:

$\sum_{i=1}^{n} w_{i}x_{i} + b$

I deliberately used summation notation so that you can see it still has the same format: a weighted sum of the inputs plus a single bias.
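The weighted sum above can be sketched in a few lines of Python. This is just a toy illustration with made-up numbers (the inputs, weights, and bias here are my own example, not from any course):

```python
import numpy as np

def neuron(x, w, b):
    """Pre-activation value of a single neuron: w·x + b."""
    return np.dot(w, x) + b

x = np.array([1.0, 0.0, 1.0])   # n = 3 binary inputs
w = np.array([0.5, -0.2, 0.8])  # one weight per input
b = 0.1                         # a single bias term

value = neuron(x, w, b)  # 0.5*1 + (-0.2)*0 + 0.8*1 + 0.1 ≈ 1.4
```

Note that however many inputs you have, the result is always one scalar; that scalar (or its sign) is what gets compared against the separating line.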
Now let's look at the graph if we have, for example, a weight of

$w = 1$

Our function draws a line straight through the diagonal, extending over wherever $x$ is defined (if your input lies in $[-1, 1]$, the graph of $1x = y$ stays inside that square).
Now let's see how this changes when we change the weight:

$w = 0.3$

It just rotated the line segment. So in essence you can think of it like this: the weights rotate the line that separates the output, the bias shifts the line, and how you define your input determines how far the solution space extends.
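To make the separator idea concrete: with two inputs, the set of points where $w_1x_1 + w_2x_2 + b = 0$ is a line in the plane, and the sign of the sum tells you which side of that line a point falls on. The weights and test points below are arbitrary examples of my own:

```python
def side(point, w, b):
    """Return which region a 2-D point falls in, based on the
    sign of the neuron's weighted sum."""
    x1, x2 = point
    s = w[0] * x1 + w[1] * x2 + b
    return "A" if s > 0 else "B"

w, b = (1.0, -1.0), 0.0        # boundary is the line x2 = x1
print(side((2.0, 1.0), w, b))  # below the diagonal: prints "A"
print(side((1.0, 2.0), w, b))  # above the diagonal: prints "B"
```

Rescaling or changing the weights rotates this boundary line; changing $b$ slides it without rotating, which is exactly the picture from the graphs above.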
You were a little confused because they showed you a binary classification with only one input, for simplicity. The fact is that you can't visualize more than a 2-dimensional input, but the underlying principle stays the same for $n$ dimensions: just sum up every weight times input, add the bias, and you get a scalar that corresponds to the output.
Ultimately, what training the network does is find that unique line, rotating and shifting it to produce two separate regions such that the maximum number of examples falls on the correct side with the minimum amount of error.
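One classic way to do that rotating and shifting is the perceptron learning rule, where each misclassified example nudges the weights and bias toward fewer errors. This toy run on AND-gate data is my own illustration, not something from the answer above:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Perceptron rule: nudge w (rotates the line) and b (shifts it)
    on every misclassified example."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred  # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND gate: only (1,1) is in class 1, so the classes are linearly separable
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
# After training, the line w[0]*x1 + w[1]*x2 + b = 0 separates (1,1) from the rest.
```

For linearly separable data like this the rule is guaranteed to converge; for data that no single line can split (e.g. XOR), one neuron is not enough, which is exactly why it is called a *linear* separator.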