This paper extends the dual interpretation of entropy balancing to general settings and proposes a tailored loss function. Minimizing this loss with machine learning algorithms yields approximate covariate balance over large function classes.