Machine Learning over Finite Fields


My question is whether machine learning over finite fields is a sensible idea, and whether there is any literature on the topic. That is: given some function whose inputs and outputs lie in a finite field, and some training data drawn from this function, does it make sense to apply machine learning to learn the function, i.e., to predict its values on test data (data not in the training set)? An example might be trying to learn the discrete logarithm over a finite field of very large prime order. I would appreciate any insight anyone may have.
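To make the discrete log example concrete, here is a toy sketch of what the training data would look like. The prime `p = 1009` and base `g = 11` are my own illustrative choices (a real instance would use a huge prime, and one would want `g` to be a generator of the multiplicative group):

```python
# Toy sketch: training pairs for the discrete log function over GF(p).
# p and g are illustrative choices, not from any real cryptographic setting.
p = 1009   # small prime for illustration; real settings use very large p
g = 11     # base; ideally a generator of the multiplicative group of GF(p)

# Each pair is (g^x mod p, x): given the first component, the model
# would have to recover the exponent x.
data = [(pow(g, x, p), x) for x in range(1, p)]
```

The difficulty is that the map from `x` to `g^x mod p` scrambles any apparent structure, which is exactly why discrete log is considered hard.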

I am not sure that it does. One point against the idea: the real numbers are ordered, so notions such as greater than, less than, and limit make sense there; the elements of a finite field, by contrast, admit no ordering compatible with the field operations, so those notions do not carry the same meaning.
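A small illustration (my own toy example) of why no ordering on a finite field can be compatible with its arithmetic: in GF(7), comparisons are not preserved by translation, because addition wraps around.

```python
# In GF(7), 5 > 3 as integer representatives, but adding 3 to both
# reverses the comparison because of modular wraparound.
p = 7
a, b = 5, 3
print((a + 3) % p, (b + 3) % p)  # 1 and 6: the "larger" element became smaller
```

So any ordering we impose on the representatives is an artifact of the labeling, not a property of the field.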

Thus, I am not sure how to construct a loss function, for example, since I don't see a way to quantify how far a prediction is from the truth. I am also not sure how such a model could be trained, since derivatives do not exist over finite fields in the analytic sense that gradient-based training relies on. Also, the concepts of signal and noise are not the same: I would expect every training pair to be an exactly correct evaluation of the function, and therefore all signal with no noise.