I have an encoding of my problem as an integer linear program in which the only integer variables are binary; it is potentially very big, and the binary variables make up roughly one third of the total number of variables. I am interested in feasibility only (there is no objective function).
It is a straightforward encoding (later referred to as basic), so it does not result in a strong formulation (if I'm not mistaken about the terminology). I am trying to come up with a tighter formulation (other than precomputing the bounds of all real variables). My idea was to add constraints on pairs of binary variables that rule out certain assignments to them, for example of the form $b_1 + b_2 \leq 1$ or $b_1 + b_2 \geq 1$, so that the new formulation is equivalent to the original one (I can infer such constraints by inspecting the problem).
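To make the idea concrete, here is a toy sketch (not my real encoding) of what I mean by the augmented formulation, using scipy's `milp` as a stand-in for Gurobi; the constraint numbers are my own invention:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy stand-in for the real encoding: binaries b1, b2 and one real var x.
# Basic formulation: 2*b1 + 2*b2 + x <= 3 with x >= 0.5 (feasibility only).
c = np.zeros(3)                      # no objective function
integrality = np.array([1, 1, 0])    # b1, b2 binary; x continuous
bounds = Bounds([0, 0, 0.5], [1, 1, 10])

basic = [LinearConstraint([[2, 2, 1]], -np.inf, 3)]
# Augmented: add the pairwise constraint b1 + b2 <= 1, which holds at every
# integer-feasible point of the basic model, so the formulations are equivalent.
augmented = basic + [LinearConstraint([[1, 1, 0]], -np.inf, 1)]

res_basic = milp(c, integrality=integrality, bounds=bounds, constraints=basic)
res_aug = milp(c, integrality=integrality, bounds=bounds, constraints=augmented)
print(res_basic.success, res_aug.success)  # True True: both are feasible
```

In my real model the pairwise constraints are inferred from problem structure rather than from a single linear constraint, but the shape of the augmentation is the same.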
Now, when I compare the augmented formulation with the basic one, I get all kinds of results: sometimes an improvement, sometimes no significant difference, but also cases where the augmented formulation is considerably slower than the basic one (up to 3 times slower).
So my question is: how can adding constraints that prune the search space make the solving time worse? This happens even in cases where the formulation is infeasible, where, as I understand it, the solver has to explore the whole search space anyway.
For illustration, here is a table with results, where size is half the number of binary variables in the encoding.
There seems to be a correlation with the number of added constraints. But in any case, I don't understand the second row with size 70: why don't the added constraints help solve the problem faster when the basic formulation already takes more than half an hour? Perhaps somebody could explain how solvers make use of such constraints. (I was using Gurobi.)

If I'm understanding correctly, your constraints are redundant in the MIP formulation, meaning they do not change the feasible region -- any feasible solution to the "basic" problem is also feasible for the "augmented" problem, and vice versa. Unfortunately it can be hard to predict whether adding constraints like these will help or hurt the solution time. In some cases they will help guide the solver toward optimality more quickly, but in others they will just be "noise".
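To spell out what "redundant in the MIP formulation" means, here is a brute-force check on a toy model (my own, not yours): the cut removes no integer-feasible point, only (possibly) fractional ones.

```python
from itertools import product

# Toy model: binaries b1, b2 with the single constraint 2*b1 + 2*b2 <= 3.
# The cut b1 + b2 <= 1 removes no integer-feasible point, i.e. it is
# MIP-redundant: both formulations have the same set of integer solutions.
basic_pts = [(b1, b2) for b1, b2 in product((0, 1), repeat=2)
             if 2 * b1 + 2 * b2 <= 3]
aug_pts = [(b1, b2) for (b1, b2) in basic_pts if b1 + b2 <= 1]
print(basic_pts == aug_pts)  # True
```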
The key question to think about (in my opinion; maybe others will have different perspectives) is whether the new constraints change the feasible region of the LP relaxation. That is, are there solutions that are feasible for the LP relaxation of the basic problem but infeasible for the LP relaxation of the augmented problem? (Of course, all integer solutions that were feasible for the basic problem should still be feasible for the augmented problem.) If so, then you have managed to bring the LP feasible region closer to the convex hull of the IP feasible region, which is generally going to help the solver.
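A small worked example (again my own toy model) shows the distinction. With the constraint $2b_1 + 2b_2 \leq 3$, the fractional point $(0.75, 0.75)$ is feasible for the LP relaxation, but the MIP-redundant cut $b_1 + b_2 \leq 1$ cuts it off. One way to see the tightening is to maximize $b_1 + b_2$ over each relaxation:

```python
from scipy.optimize import linprog

# LP relaxation: 0 <= b1, b2 <= 1, integrality dropped.
# Measure the relaxation via max b1 + b2 (linprog minimizes, so negate).
c = [-1, -1]
bnds = [(0, 1), (0, 1)]

basic = linprog(c, A_ub=[[2, 2]], b_ub=[3], bounds=bnds)
augmented = linprog(c, A_ub=[[2, 2], [1, 1]], b_ub=[3, 1], bounds=bnds)

print(-basic.fun)      # 1.5: the relaxation allows the fractional (0.75, 0.75)
print(-augmented.fun)  # 1.0: the cut tightens the relaxation to the convex hull
```

Here the cut brings the LP feasible region exactly to the convex hull of the integer solutions, so the solver gets integral LP optima for free. If your added constraints do not cut off any fractional LP solutions, they only enlarge the LP subproblems without tightening the bound, which is one plausible explanation for the slowdowns you observe.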