Subspace-evasive set performance in the random case


A subspace-evasive set is a large subset of a vector space that has small intersection with every $k$-dimensional affine subspace. That is, it "evades" all affine subspaces of small enough dimension.

Formally, for parameters $k$, $c$, and $\epsilon > 0$, a $(k,c)$-subspace-evasive set $S \subseteq \mathbb{F}_q^n$ is one satisfying $|S| \ge q^{(1-\epsilon)n}$ and $|S \cap H| < c$ for every $k$-dimensional affine subspace $H \subseteq \mathbb{F}_q^n$.

The goal is to make the intersection $c$ as small as possible.

It is claimed (as trivial, in most references) that a random set $S \subseteq \mathbb{F}_q^n$ of size $q^{(1-\epsilon)n}$ achieves, with high probability, intersection bound $c = O(k/\epsilon)$.

Is there a way of showing this? It is supposed to be a simple application of the probabilistic method, but I am stuck.
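For what it's worth, here is the counting I can assemble so far, assuming each point of $S$ is drawn independently and uniformly from $\mathbb{F}_q^n$. The number of $k$-dimensional affine subspaces is at most $q^{(k+1)n}$ (choose a base point and $k$ spanning vectors), and for any fixed such subspace $H$,

$$\Pr[x \in H] = \frac{q^k}{q^n} = q^{k-n}, \qquad \mathbb{E}\,|S \cap H| = q^{(1-\epsilon)n} \cdot q^{k-n} = q^{k - \epsilon n},$$

which is tiny when $k < \epsilon n$. But I don't see how to combine these counts (presumably via a union bound over all subspaces) into the claimed $O(k/\epsilon)$ bound.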