Puyu Wang's Homepage

Google Scholar Profile | RPTU Homepage

I am currently an Alexander von Humboldt Research Fellow in the Machine Learning Group at RPTU Kaiserslautern–Landau, Germany, led by Prof. Marius Kloft. Before this fellowship, I was a Postdoctoral Research Fellow at City University of Hong Kong and at Hong Kong Baptist University, supervised by Prof. Ding-Xuan Zhou and Prof. Jun Fan, respectively. I received my Ph.D. in Statistics from Northwest University, China, and spent one year as a visiting Ph.D. student at the State University of New York at Albany, supervised by Prof. Yiming Ying.

Research Interests

My research interests lie in statistical learning theory and machine learning, with an emphasis on deep learning theory, optimization, and differential privacy.

NEWS

  • Jan 2026: One paper accepted to TNNLS.
  • Jan 2026: One paper accepted to JMLR.
  • Nov 2025: Three papers accepted to AAAI 2026.
  • Oct 2025: Recognized as a Top Reviewer at NeurIPS 2025.
  • Oct 2025: Served as a Session Chair at IEEE DSAA 2025 (Birmingham, UK).
  • Mar 2025: Joined RPTU Kaiserslautern–Landau as a Humboldt Research Fellow.

PUBLICATIONS

Preprints

[1] P. Wang, J. Zhou, P. Liznerski and M. Kloft. Optimization, Generalization and Differential Privacy Bounds for Gradient Descent on Kolmogorov–Arnold Networks. Submitted.

[2] J. Zhou, P. Wang and D. Zhou. Generalization Analysis with Deep ReLU Networks for Metric and Similarity Learning. Submitted.

[3] J. Zhou, P. Wang, Y. Lei, Y. Ying and D. Zhou. Optimal Rates for Generalization of Gradient Descent Methods with Deep ReLU Networks. Submitted.

Refereed Journal Papers

[1] J. Zhou, S. Huang, H. Feng, P. Wang* and D. Zhou. Fine-grained Analysis of Non-parametric Estimation for Pairwise Learning. IEEE Transactions on Neural Networks and Learning Systems, to appear. (*Corresponding author)

[2] Y. Lei, P. Wang, Y. Ying and D. Zhou. Optimization and Generalization of Gradient Descent for Shallow ReLU Networks with Minimal Width. Journal of Machine Learning Research, to appear.

[3] P. Wang, Y. Lei, D. Wang, Y. Ying and D. Zhou. Generalization Guarantees of Gradient Descent for Shallow Neural Networks. Neural Computation, 2025.

[4] P. Wang, Y. Lei, Y. Ying and D. Zhou. Differentially Private Stochastic Gradient Descent with Low-Noise. Neurocomputing, 2024.

[5] T. Asenso, P. Wang and H. Zhang. Pliable Lasso for the Support Vector Machine. Communications in Statistics–Simulation and Computation, 2024.

[6] P. Wang, Y. Lei, Y. Ying and H. Zhang. Differentially Private SGD with Non-smooth Losses. Applied and Computational Harmonic Analysis, 2022.

[7] P. Wang, Z. Yang, Y. Lei, Y. Ying and H. Zhang. Differentially Private Empirical Risk Minimization for AUC Maximization. Neurocomputing, 2021.

[8] P. Wang and H. Zhang. Differential Privacy for Sparse Classification Learning. Neurocomputing, 2020.

[9] P. Wang, H. Zhang and Y. Liang. Distributed Logistic Regression with Differential Privacy. SCIENTIA SINICA Informationis (in Chinese), 2020.

[10] P. Wang, H. Zhang and Y. Liang. Model Selection with Distributed SCAD Penalty. Journal of Applied Statistics, 2018.

[11] H. Zhang, P. Wang, Q. Dong and P. Wang. Sparse Bayesian Linear Regression using Generalized Normal Priors. International Journal of Wavelets, Multiresolution and Information Processing, 2017.

Refereed Conference Papers

[1] Z. Shi, P. Wang, C. Zhang and Y. Cao. Towards Understanding Generalization in DP-GD: A Case Study in Training Two-Layer CNNs. AAAI, 2026.

[2] P. Liznerski, S. Varshneya, E. Calikus, P. Wang, A. Bartscher, S. Vollmer, S. Fellenz and M. Kloft. Reimagining Anomalies: What If Anomalies Were Normal? AAAI, 2026.

[3] W. Li, W. Mustafa, M. Monteiro, P. Wang, S. Fellenz and M. Kloft. Train Once, Align Anytime: Inference-Time Preference Alignment for Offline Multi-Objective Reinforcement Learning. AAAI, 2026.

[4] P. Wang, Y. Lei, M. Kloft and Y. Ying. Optimal Utility Bounds for Differentially Private Gradient Descent in Three-Layer Neural Networks. IEEE DSAA, 2025.

[5] W. Mustafa, P. Liznerski, A. Ledent, D. Wagner, P. Wang and M. Kloft. Non-vacuous Generalization Bounds for Adversarial Risk in Stochastic Neural Networks. AISTATS, 2024.

[6] W. Mustafa, P. Liznerski, D. Wagner, P. Wang and M. Kloft. Non-vacuous PAC-Bayes Bounds for Models under Adversarial Corruptions. ICML Workshop, 2023.

[7] P. Wang, Y. Lei, Y. Ying and D. Zhou. Stability and Generalization for Markov Chain Stochastic Gradient Methods. NeurIPS, 2022.

[8] Z. Yang, Y. Lei, P. Wang, T. Yang and Y. Ying. Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning. NeurIPS, 2021.

[9] P. Wang, L. Wu and Y. Lei. Stability and Generalization for Randomized Coordinate Descent. IJCAI, 2021.

SERVICE

  • Conference Reviewer for AAAI (2025), AISTATS (2023–2026), ICLR (2024–2026), ICML (2022–2025), IJCAI (2021–2023), NeurIPS (2022–2025), ECML PKDD (2023), ECAI (2025)
  • Journal Reviewer for TPAMI, TNNLS, TKDE, TII, TSP, TMLR, NEUCOM, TCS, MFC
  • Session Chair for EcoSta 2024, IEEE DSAA 2025

TEACHING

  • Learning Theory, 2026 Summer Semester, RPTU
  • Foundations of Deep Generative Models, 2026 Summer Semester, RPTU