Word Problem:
Imagine there is a party with an infinite number of mathematicians at it, and I walk up to my friend with the following instructions:
"every 1 minute, find 1 person at the party who has not been told this message yet, and repeat this message to them"
It's clear that after $N$ minutes, provided these instructions are followed exactly, there will be $2^N$ people who have heard this message.
I will now describe an algorithm which can solve boolean satisfiability in $N$ variables in $N$ minutes.
1 - find 2 people at the party and hand each of them a paper with the boolean expression on it; one paper's list of assignments begins with "True" and the other's begins with "False"
2 - tell them these instructions: "in 1 minute, find 2 people at the party who have not been told this message yet and repeat this message to them; give one of them your paper with True appended to its list, and give the other a fresh paper onto which you have copied the expression and list from your paper, with False appended instead"
3 - after $N$ minutes, whoever has a piece of paper in their hands evaluates the boolean expression under the assignment listed on their paper (we are assuming that, since all the people at the party are mathematicians, they can all check this in less than 30 seconds), and if it evaluates to true they shout "IT'S SATISFIABLE, EVERYONE"
Then given any boolean expression involving $N$ variables, I can use this algorithm to solve the boolean satisfiability problem in $N$ minutes and 30 seconds, giving an apparent algorithmic efficiency of $O(N)$.
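The party's $N$ minutes hide exponential manpower: a single machine running the same search must visit all $2^N$ assignments itself. A minimal Python sketch of that sequential version (the function names and the example expression here are my own illustration, not part of the problem):

```python
from itertools import product

def is_satisfiable(expr, n):
    """Brute-force SAT check: try all 2**n assignments.

    `expr` is a function taking a tuple of n booleans; each
    assignment corresponds to one paper held by one mathematician.
    """
    return any(expr(bits) for bits in product([False, True], repeat=n))

# Example: (x0 or x1) and not x0 -- satisfiable with x0=False, x1=True
print(is_satisfiable(lambda b: (b[0] or b[1]) and not b[0], 2))  # -> True
```

The `any(...)` short-circuits on the first satisfying assignment, but in the worst case (an unsatisfiable expression) all $2^N$ evaluations are performed.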
QUESTION: Would it be wrong to claim the efficiency of this algorithm is in fact $O(N)$ and not $O(2^N)$?
(Note: I am experimenting with new ways to present algorithms to a pure math audience, and this is my new word-problem idea. Is this silly mathematicians-at-a-party framing a good start for improving my exposition of algorithms and how they relate to pure mathematics and the theory of computation?)
Your algorithm consists of two phases. At the preprocessing phase it assigns to each of $2^N$ mathematicians a variable instance. This is done in linear time, or even faster if at each step each of the already assigned mathematicians passes the data on to more and more mathematicians. But this phase is not essential. It can even be skipped, provided each mathematician has a predefined name equal to a given variable instance. For example, for $N=2$ we can have $4$ mathematicians named $00$, $01$, $10$, and $11$. At the processing phase each of the mathematicians calculates the value of the expression for the given instance. This is done quickly, I guess in polynomial time.
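The naming scheme can be sketched directly: each mathematician is named by an $N$-bit string and evaluates the expression at the assignment their own name encodes, so no preprocessing phase is needed. (The helper name and example expression below are my own illustrative assumptions.)

```python
from itertools import product

def party_evaluations(expr, n):
    """Each mathematician is named by an n-bit string and evaluates
    `expr` at the assignment their name encodes ('1' -> True)."""
    results = {}
    for bits in product('01', repeat=n):
        name = ''.join(bits)                       # e.g. '01'
        assignment = tuple(b == '1' for b in bits)
        results[name] = expr(assignment)
    return results

# For N=2 and the expression x0 and x1, only mathematician '11' shouts.
print(party_evaluations(lambda a: a[0] and a[1], 2))
# -> {'00': False, '01': False, '10': False, '11': True}
```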
The proposed pattern is typical for attacking NP-hard problems. Recall that NP means non-deterministic polynomial time, and one interpretation of a problem in NP is: given a candidate solution, we can verify in polynomial time whether it solves the problem. In other words, a lucky non-deterministic machine can solve the problem in polynomial time.
But a problem is said to have polynomial complexity provided a machine can solve it in polynomial time. If a problem is in NP and has only polynomially many feasible input instances, then we can check them one by one and solve the problem in polynomial time. But if there are exponentially many feasible input instances, this approach clearly fails to solve the problem in polynomial time. Unfortunately, we cannot resolve this issue by sharing the work, because in the standard framework we have only one solving machine. It has a prescribed set of operations on data and machine states. But the machine is fixed, and these operations do not include machine replication (it is this restriction that separates the classes NP and P, leading to the famous open problem). Even when an algorithm emulates machine cloning, there is still only one underlying machine, so the emulated machines work not simultaneously but consecutively.
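The closing point can be made concrete with a small counter (my own sketch): when one machine emulates the $2^N$ cloned checkers, their checks run one after another, so the step count doubles with each added variable even though the party's wall-clock time only grows by one minute.

```python
from itertools import product

def sequential_steps(n):
    """Count the steps a single machine spends emulating all 2**n
    'cloned' checkers one after another (one check per iteration)."""
    steps = 0
    for assignment in product([False, True], repeat=n):
        steps += 1   # one emulated clone's evaluation
    return steps

for n in (1, 2, 3, 10):
    print(n, sequential_steps(n))  # steps = 2**n: 2, 4, 8, 1024
```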