Answers to this question take the peak of the cross-correlation as the measure of the likelihood that the target signal exists in the received signal - this is pretty much textbook.
My question is whether this hypothesis is correct:
The sum (or average) of all the cross-correlation samples represents the likelihood that the target signal is in the received signal.
This seems to be nonsense to me, but there is a specific case I'm interested in:
Consider this hypothetical system:
- a primitive radar system
- positioned in the middle of a tennis court
- with two radio transmitter/receiver combos - one surveys the left side of the court, the other the right
- the tennis ball has been injected with a radio-reflective substance
As for the signal:
- The target signal is 10 samples long,
- The received signal (per side) comes in 1000-sample frames.
The role of the radar is to determine whether the ball is on the left or right side of the court.
In other words, the analysis should determine whether the target is more likely to show up in the left radar's cross-correlation or in the right one's.
I don't care about its alignment within the received signal.
Also important: there might be other radio-reflective objects on the tennis court, so the received signal is not purely noise + the transmitted (reflected) signal, but also contains confounding signals.
Just to provide some extra context, The Scientist and Engineer's Guide to Digital Signal Processing asserts:
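To make the scenario concrete, here is a minimal simulation sketch, assuming NumPy; the waveform, the make_frame helper, and all positions are hypothetical stand-ins for the setup described above, not part of any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters matching the description above.
TARGET_LEN = 10    # target signal length in samples
FRAME_LEN = 1000   # received-signal frame length per side

# Made-up target waveform; in a real system this would be the known pulse.
target = rng.standard_normal(TARGET_LEN)

def make_frame(contains_ball: bool) -> np.ndarray:
    """One received frame: background noise, one confounding reflector,
    and (optionally) the ball's echo at an unknown position."""
    frame = 0.5 * rng.standard_normal(FRAME_LEN)
    # Confounding reflector: an unrelated burst at a random position.
    pos_c = rng.integers(0, FRAME_LEN - TARGET_LEN)
    frame[pos_c:pos_c + TARGET_LEN] += rng.standard_normal(TARGET_LEN)
    if contains_ball:
        pos_b = rng.integers(0, FRAME_LEN - TARGET_LEN)
        frame[pos_b:pos_b + TARGET_LEN] += target
    return frame

left_frame = make_frame(contains_ball=True)    # suppose the ball is on the left
right_frame = make_frame(contains_ball=False)
```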
The amplitude of each sample in the cross-correlation signal is a measure of how much the received signal resembles the target signal, at that location. This means that a peak will occur in the cross-correlation signal for every target signal that is present in the received signal.
An accompanying illustration is also provided (not reproduced here).
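To spell out how I read that statement, here is a minimal NumPy sketch of the cross-correlation and its peak; cross_correlation and peak_statistic are names I made up, not anything from the book:

```python
import numpy as np

def cross_correlation(received: np.ndarray, target: np.ndarray) -> np.ndarray:
    """y[k] = sum over n of received[k + n] * target[n], one value per valid shift k."""
    return np.correlate(received, target, mode="valid")

def peak_statistic(received: np.ndarray, target: np.ndarray) -> float:
    """The textbook measure: the largest sample of y[] marks the shift where
    the received frame most resembles the target."""
    return float(np.max(cross_correlation(received, target)))
```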
So, will adding together all the samples in y[] yield the likelihood of the target existing in the received signal (given that the sum from the left side is compared with the sum from the right side)?
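For clarity, the statistic I'm asking about would be computed roughly as follows (again just a sketch; sum_statistic and which_side are made-up names, and left_frame, right_frame and target refer to the hypothetical simulation above):

```python
import numpy as np

def sum_statistic(received: np.ndarray, target: np.ndarray) -> float:
    """The hypothesised measure: add up every sample of the cross-correlation y[]."""
    y = np.correlate(received, target, mode="valid")
    return float(np.sum(y))

def which_side(left_frame: np.ndarray, right_frame: np.ndarray,
               target: np.ndarray) -> str:
    """Decision rule being asked about: the side with the larger sum is
    declared to contain the ball."""
    left = sum_statistic(left_frame, target)
    right = sum_statistic(right_frame, target)
    return "left" if left > right else "right"
```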