We consider the problem of stochastic optimization with nonlinear constraints, where the decision variable is not vector-valued but instead a function belonging to a reproducing kernel Hilbert space (RKHS). Currently, there exist solutions to only special cases of this problem. To solve this constrained problem with kernels, we first generalize the Representer Theorem to a class of saddle-point problems defined over RKHS. Then, we develop a primal-dual method which executes alternating projected primal/dual stochastic gradient descent/ascent on the dual-augmented Lagrangian of the problem. The primal projection sets are low-dimensional subspaces of the ambient function space, which are greedily constructed using matching pursuit. By tuning the projection-induced error to the algorithm step-size, we are able to establish mean convergence in both primal objective sub-optimality and constraint violation, to respective O(√T) and O(T^{3/4}) neighborhoods. Here T is the final iteration index and the constant step-size is chosen as 1/√T with 1/T approximation budget. Finally, we demonstrate experimentally the effectiveness of the proposed method for risk-aware supervised learning.
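To make the described scheme concrete, the following is a minimal, hypothetical sketch of alternating projected primal/dual stochastic updates for a kernelized constrained regression problem. All problem details here are illustrative assumptions, not the paper's setup: a Gaussian kernel, a toy constraint E[f(x)²] ≤ c, and a crude smallest-weight pruning rule standing in for the matching-pursuit subspace projection.

```python
import numpy as np

def gaussian_kernel(x, d, bw=0.5):
    # kernel evaluations k(x, d_i) between a scalar x and dictionary points d
    return np.exp(-(x - d) ** 2 / (2 * bw ** 2))

rng = np.random.default_rng(0)
eta, budget = 0.1, 25            # constant step-size and dictionary (approximation) budget
D = np.empty(0)                  # kernel dictionary: past sample points retained
w = np.empty(0)                  # weights of the kernel expansion f = sum_i w_i k(d_i, .)
lam, c = 0.0, 0.5                # dual variable and constraint level for E[f(x)^2] <= c

def f(x):
    # current primal iterate, evaluated via its kernel expansion
    return float(w @ gaussian_kernel(x, D)) if len(D) else 0.0

for t in range(500):
    x = rng.uniform(-1, 1)
    y = np.sin(np.pi * x)                        # streaming sample from a toy regression target
    # functional stochastic gradient of the Lagrangian at the sample:
    # grad_f = [2(f(x) - y) + 2*lam*f(x)] k(x, .), so the primal SGD step
    # appends one new kernel atom centered at x
    g = 2 * (f(x) - y) + 2 * lam * f(x)
    D = np.append(D, x)
    w = np.append(w, -eta * g)
    # greedy compression onto a low-dimensional subspace: here simply dropping
    # the smallest-weight atom, a stand-in for kernel matching pursuit
    if len(D) > budget:
        j = int(np.argmin(np.abs(w)))
        D, w = np.delete(D, j), np.delete(w, j)
    # projected dual stochastic gradient ascent on the constraint violation
    lam = max(0.0, lam + eta * (f(x) ** 2 - c))
```

The pruning step caps memory at the fixed budget regardless of the stream length, which is the role the matching-pursuit projection plays in the method; the paper instead tunes this projection error to the 1/√T step-size.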