The Problem
Let $a_1,\cdots,a_n$ and $b_1,\cdots,b_m$ be sequences of positive integers with $m>n$, and let $t$ be a positive integer coprime to all elements of $\{a_i\}$ and $\{b_i\}$. What is a computationally efficient way to find all $n$-element subsequences $c_1,\cdots,c_n$ of $\{b_i\}$ such that $\sum_{i=1}^n a_ic_i \equiv 0\bmod t$ (or does no way exist)?
(Note: In the particular cases with which I am concerned, $m\approx 1.6\cdot n$ and $t \approx \frac{2^n}{n^2}.$)
My Progress
So far, I haven't made much progress. The brute-force method requires ${m\choose n}$ tests, which is prohibitively expensive if $m$ and $n$ are at all large, and any recursive or backtracking implementation will still end up visiting all ${m\choose n}$ cases at some point. Once one has chosen $c_1,\cdots,c_{n-1}$, one can compute $c_n\bmod t$ by noting that
$$c_na_n \equiv -\sum_{i=1}^{n-1} c_i a_i\bmod t \implies c_n \equiv -a_n^{-1}\sum_{i=1}^{n-1} c_i a_i\bmod t$$
and then check whether this is actually an attainable value for $c_n$ (the lookup on $c_n$ can be done in $O(1)$ time with a precomputed table of the residues of the $b_i$, and $a_n^{-1}\bmod t$ can be precomputed cheaply), but this still only reduces the calculation to ${m-1 \choose n-1}$ cases, which is not much better, if it is faster at all.
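The shortcut above can be sketched as follows (a minimal illustration, not a fast algorithm; `find_by_last_element` is a hypothetical name, and the sketch assumes $a_n$ is invertible mod $t$, which the coprimality condition guarantees):

```python
from itertools import combinations

def find_by_last_element(a, b, t):
    """Enumerate (n-1)-element subsequences of b, solve for the required
    residue of c_n mod t, and look up matching later elements of b.
    Still Theta(C(m, n-1)) work -- this only illustrates the shortcut."""
    n, m = len(a), len(b)
    an_inv = pow(a[-1], -1, t)                # a_n^{-1} mod t, precomputed once
    by_residue = {}                           # residue -> positions in b
    for j, bj in enumerate(b):
        by_residue.setdefault(bj % t, []).append(j)
    solutions = []
    for idx in combinations(range(m), n - 1):
        s = sum(ai * b[j] for ai, j in zip(a, idx)) % t
        cn = (-an_inv * s) % t                # required residue of c_n
        last = idx[-1] if idx else -1
        for j in by_residue.get(cn, []):
            if j > last:                      # preserve subsequence order
                solutions.append([b[i] for i in idx] + [b[j]])
    return solutions
```

The residue table makes the final lookup $O(1)$ per candidate, but the outer enumeration still dominates.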
The only other idea I had was that we can first attempt to find a superset of solutions by completely neglecting the order and just looking at the set of products of elements of $\{a_i\}$ with those of $\{b_i\}$. From there we simply need to find all $n$-element subsets with sums $0\bmod t$ (and then weed out those with duplicated elements of $a_i$ or $b_i$, or those whose $b_i$ elements are in the wrong order; perhaps this could be done as part of the algorithm to help reduce extraneous calculations). Even though the brute-force method is significantly worse in this simplification, it feels like an easier problem to deal with and maybe one that has a nicer solution.
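At least counting the solutions of this relaxation is easy with a size-constrained subset-sum DP over residues. A sketch (`count_relaxed` is a hypothetical name; as noted above, the count includes spurious subsets that reuse an $a_i$ or $b_j$ or order the $b_j$ wrongly):

```python
def count_relaxed(a, b, t):
    """Count n-element subsets of the m*n products a_i*b_j whose sum is
    0 mod t, ignoring ordering and reuse of a_i/b_j (so this over-counts
    relative to the original problem). Runs in O(m * n^2 * t) time."""
    n = len(a)
    products = [(ai * bj) % t for ai in a for bj in b]
    # dp[k][s] = number of k-element subsets seen so far with sum ≡ s (mod t)
    dp = [[0] * t for _ in range(n + 1)]
    dp[0][0] = 1
    for p in products:
        for k in range(n, 0, -1):             # sizes descending: 0/1 knapsack
            row, prev = dp[k], dp[k - 1]
            for s in range(t):
                row[(s + p) % t] += prev[s]
    return dp[n][0]
```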
An Answer

I got an algorithm running in $O(m^2nt+kn)$ time, where $k$ denotes the number of valid subsequences that you seek.
There are two steps. First you compute every partial sum obtainable from a subsequence $c_1,\ldots,c_i$, remembering which $b_j$ was chosen last and which element came before it; then you backtrack from the sums that reach $0 \bmod t$.
For the first step, you build a data structure $S$ of size $n\times m\times t\times m$, such that $S(i,j,s,l)=1$ if there exists a subsequence $c_1,\ldots,c_i$ with:

- $c_i = b_j$,
- $\sum_{k=1}^{i} a_k c_k \equiv s \bmod t$, and
- $c_{i-1} = b_l$.

Otherwise, $S(i,j,s,l)=0$.
If you know the values of $S$ on a given "slice" $i$, it is easy to deduce the values on the next slice $i+1$: whenever $S(i,j,s,l)=1$ for some $l$, assign $S(i+1,j',s',j) \gets 1$ for all $j+1\le j'\le m-n+i+1$, where $s'\equiv s+a_{i+1}b_{j'}\bmod t$. (The upper bound on $j'$ leaves room for the $n-i-1$ elements still to be chosen.)
Obtaining the new "slice" takes $O(m^2t)$ time, repeated $n$ times for the whole structure.
Once you have $S$, inspect the last "slice" at $s=0$: you obtain pairs $(j,l)$ such that $S(n,j,0,l)=1$, meaning there is a valid subsequence $c$ with $c_n=b_j$ and $c_{n-1}=b_l$. To recover the rest of the subsequence, inspect the previous slice $n-1$ at $s\equiv -a_nb_j \bmod t$ and $j'=l$; in other words, find the values $l'$ such that $S(n-1,l,-a_nb_j,l')=1$. Repeat until you reach the first slice.
This step can be done in $O(kn)$ time.
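The two passes above can be sketched in Python as follows (the names are mine; instead of a dense four-dimensional array, each slice is stored sparsely as a map from $(j,s)$ to the set of predecessor indices $l$, with $-1$ as a sentinel on the first slice):

```python
def find_subsequences(a, b, t):
    """Slice DP sketch: S[i] maps (j, s) -> set of predecessor indices l,
    meaning some subsequence c_1..c_{i+1} (i is 0-based) ends with
    c_{i+1} = b[j], has partial sum ≡ s (mod t), and b[l] before b[j]."""
    n, m = len(a), len(b)
    S = [dict() for _ in range(n)]
    for j in range(m - n + 1):                   # leave room for n-1 more picks
        S[0].setdefault((j, (a[0] * b[j]) % t), set()).add(-1)
    # forward pass: extend every reachable state by one element
    for i in range(n - 1):
        for (j, s) in S[i]:
            for j2 in range(j + 1, m - (n - i - 2)):
                s2 = (s + a[i + 1] * b[j2]) % t
                S[i + 1].setdefault((j2, s2), set()).add(j)
    # backward pass: walk predecessor links from sum 0 in the last slice
    results = []
    def backtrack(i, j, s, tail):
        seq = [b[j]] + tail
        if i == 0:                               # sentinel -1 comes next: done
            results.append(seq)
            return
        s_prev = (s - a[i] * b[j]) % t           # subtract the last term
        for l in S[i][(j, s)]:
            backtrack(i - 1, l, s_prev, seq)
    for (j, s) in S[n - 1]:
        if s == 0:
            backtrack(n - 1, j, 0, [])
    return results
```

Every predecessor recorded in the forward pass corresponds to at least one valid state, so the backward pass never dead-ends, which is why recovery costs only $O(kn)$.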