Let $\mathbf{x}\in \{0,1\}^n$ be the decision variables of an ILP. Further, let $\mathbf{a} \in \mathbb{N}_{\geq 0}^n$ be a given vector and let $\mathbf{w} = \mathbf{x} \odot \mathbf{a}$, where $\odot$ denotes the element-wise product. We define $f(\mathbf{w})$ as the number of distinct elements of $\mathbf{w}$ that are greater than zero. In effect, $\mathbf{x}$ selects which elements of $\mathbf{a}$ appear in $\mathbf{w}$.
For instance, suppose $\mathbf{a} = [2,4,2]$, then $f(\mathbf{w}) =1$ if $\mathbf{x} = [1,0,1]$ and $f(\mathbf{w}) = 2$ if $\mathbf{x} = [0,1,1]$.
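To make the definition concrete, here is a small sketch of $f$ in plain Python (the function name `f` and the list encoding of $\mathbf{x}$ and $\mathbf{a}$ are just illustrative choices):

```python
def f(x, a):
    """Number of distinct positive values among the entries of a selected by x."""
    return len({ai for xi, ai in zip(x, a) if xi == 1 and ai > 0})

# Examples from the text, with a = [2, 4, 2]:
print(f([1, 0, 1], [2, 4, 2]))  # 1 (only the value 2 is selected)
print(f([0, 1, 1], [2, 4, 2]))  # 2 (values 4 and 2 are selected)
```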
The objective of the ILP is to minimize a function of $\mathbf{x}$ and $f(\mathbf{w})$.
Now, the question is: can $f(\mathbf{w})$ be expressed using linear constraints?
One approach would be to introduce $2^n$ binary variables, one for each possible assignment of $\mathbf{x}$, and map each to the corresponding value of $f(\mathbf{w})$, but that blows up the number of variables.
Let $E$ denote the set of distinct positive values in $\mathbf{a}$, and let $y_e$ ($e \in E$) be a binary variable that takes the value 1 if the value $e$ is selected. The objective term is $f(\mathbf{w}) = \sum_{e \in E} y_e$, and the constraints are $y_e \geq x_i$ for all pairs $(e,i)$ with $a_i = e$. Since $f(\mathbf{w})$ is being minimized, each $y_e$ settles at its lower bound $\max\{x_i : a_i = e\}$, so $\sum_e y_e$ equals the number of distinct positive values selected.
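As a sanity check of this linearization, here is a brute-force sketch over all $2^n$ choices of $\mathbf{x}$ for the example $\mathbf{a} = [2,4,2]$. It assumes the minimization drives each $y_e$ to its lower bound $\max\{x_i : a_i = e\}$ and verifies that $\sum_e y_e$ then equals $f(\mathbf{w})$:

```python
from itertools import product

a = [2, 4, 2]                          # example data vector from the question
E = sorted({v for v in a if v > 0})    # distinct positive values, here [2, 4]

for x in product([0, 1], repeat=len(a)):
    # Under minimization, y_e = max{x_i : a_i = e} satisfies y_e >= x_i tightly.
    y = {e: max(x[i] for i in range(len(a)) if a[i] == e) for e in E}
    # Direct count of distinct positive selected values.
    f_w = len({a[i] for i in range(len(a)) if x[i] == 1 and a[i] > 0})
    assert sum(y.values()) == f_w
```

The check passes for every assignment of $\mathbf{x}$, confirming that the constraints together with the minimizing objective recover $f(\mathbf{w})$ exactly.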