Preprints
[1] P. Wang, J. Zhou, P. Liznerski and M. Kloft. Optimization, Generalization and Differential Privacy Bounds for Gradient Descent on Kolmogorov–Arnold Networks. Submitted.
[2] J. Zhou, P. Wang and D. Zhou. Generalization Analysis with Deep ReLU Networks for Metric and Similarity Learning. Submitted.
[3] J. Zhou, P. Wang, Y. Lei, Y. Ying and D. Zhou. Optimal Rates for Generalization of Gradient Descent Methods with Deep ReLU Networks. Submitted.
Refereed Journal Papers
[1] J. Zhou, S. Huang, H. Feng, P. Wang* and D. Zhou. Fine-grained Analysis of Non-parametric Estimation for Pairwise Learning. IEEE Transactions on Neural Networks and Learning Systems, to appear. (*corresponding author)
[2] Y. Lei, P. Wang, Y. Ying and D. Zhou. Optimization and Generalization of Gradient Descent for Shallow ReLU Networks with Minimal Width. Journal of Machine Learning Research, to appear.
[3] P. Wang, Y. Lei, D. Wang, Y. Ying and D. Zhou. Generalization Guarantees of Gradient Descent for Shallow Neural Networks. Neural Computation, 2025.
[4] P. Wang, Y. Lei, Y. Ying and D. Zhou. Differentially Private Stochastic Gradient Descent with Low-Noise. Neurocomputing, 2024.
[5] T. Asenso, P. Wang and H. Zhang. Pliable Lasso for the Support Vector Machine. Communications in Statistics–Simulation and Computation, 2024.
[6] P. Wang, Y. Lei, Y. Ying and H. Zhang. Differentially Private SGD with Non-smooth Losses. Applied and Computational Harmonic Analysis, 2022.
[7] P. Wang, Z. Yang, Y. Lei, Y. Ying and H. Zhang. Differentially Private Empirical Risk Minimization for AUC Maximization. Neurocomputing, 2021.
[8] P. Wang and H. Zhang. Differential Privacy for Sparse Classification Learning. Neurocomputing, 2020.
[9] P. Wang, H. Zhang and Y. Liang. Distributed Logistic Regression with Differential Privacy. SCIENTIA SINICA Informationis (in Chinese), 2020.
[10] P. Wang, H. Zhang and Y. Liang. Model Selection with Distributed SCAD Penalty. Journal of Applied Statistics, 2018.
[11] H. Zhang, P. Wang, Q. Dong and P. Wang. Sparse Bayesian Linear Regression using Generalized Normal Priors. International Journal of Wavelets, Multiresolution and Information Processing, 2017.
Refereed Conference Papers
[1] Z. Shi, P. Wang, C. Zhang and Y. Cao. Towards Understanding Generalization in DP-GD: A Case Study in Training Two-Layer CNNs. AAAI, 2026.
[2] P. Liznerski, S. Varshneya, E. Calikus, P. Wang, A. Bartscher, S. Vollmer, S. Fellenz and M. Kloft. Reimagining Anomalies: What If Anomalies Were Normal? AAAI, 2026.
[3] W. Li, W. Mustafa, M. Monteiro, P. Wang, S. Fellenz and M. Kloft. Train Once, Align Anytime: Inference-Time Preference Alignment for Offline Multi-Objective Reinforcement Learning. AAAI, 2026.
[4] P. Wang, Y. Lei, M. Kloft and Y. Ying. Optimal Utility Bounds for Differentially Private Gradient Descent in Three-Layer Neural Networks. IEEE DSAA, 2025.
[5] W. Mustafa, P. Liznerski, A. Ledent, D. Wagner, P. Wang and M. Kloft. Non-vacuous Generalization Bounds for Adversarial Risk in Stochastic Neural Networks. AISTATS, 2024.
[6] W. Mustafa, P. Liznerski, D. Wagner, P. Wang and M. Kloft. Non-vacuous PAC-Bayes Bounds for Models under Adversarial Corruptions. ICML Workshop, 2023.
[7] P. Wang, Y. Lei, Y. Ying and D. Zhou. Stability and Generalization for Markov Chain Stochastic Gradient Methods. NeurIPS, 2022.
[8] Z. Yang, Y. Lei, P. Wang, T. Yang and Y. Ying. Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning. NeurIPS, 2021.
[9] P. Wang, L. Wu and Y. Lei. Stability and Generalization for Randomized Coordinate Descent. IJCAI, 2021.