
Preprints
 N. Mücke, G. Neu and L. Rosasco: Beating SGD saturation with tail-averaging and minibatching. To appear in Advances in Neural Information Processing Systems 32 (NIPS), 2019.
 C. Riquelme, H. Penedones, D. Vincent, H. Maennel, S. Gelly, T. Mann, A. Barreto, G. Neu: Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates. To appear in Advances in Neural Information Processing Systems 32 (NIPS), 2019.
 G. Neu, A. Jonsson and V. Gómez: A unified view of entropy-regularized Markov decision processes. Under review. [poster] [slides]
 G. Lugosi, M. G. Markakis, G. Neu: On the hardness of inventory management with censored demand data. Under review.
Journal papers
 G. Neu and G. Bartók: Importance weighting without importance weights: An efficient algorithm for combinatorial semi-bandits. In Journal of Machine Learning Research (JMLR), vol. 17(154), pp. 1–21, 2016.
 L. Devroye, G. Lugosi and G. Neu: Random-Walk Perturbations for Online Combinatorial Optimization. In IEEE Transactions on Information Theory, vol. 61, pp. 4099–4106, 2015.
 G. Neu, A. György, Cs. Szepesvári and A. Antos: Online Markov Decision Processes under Bandit Feedback. In IEEE Transactions on Automatic Control, vol. 59, pp. 676–691, 2014.
 A. György and G. Neu: Near-Optimal Rates for Limited-Delay Universal Lossy Source Coding. In IEEE Transactions on Information Theory, vol. 60, pp. 2823–2834, 2014.
 G. Neu and Cs. Szepesvári: Training Parsers by Inverse Reinforcement Learning. In Machine Learning, vol. 77(2), pp. 303–337, 2009.
Refereed conference papers
 W. Kotłowski and G. Neu: Bandit Principal Component Analysis. In Proceedings of the 32nd Annual Conference on Learning Theory (COLT), pp. 1994–2024, 2019. [slides]
 G. Lugosi, J. Olkhovskaya and G. Neu: Online influence maximization with local observations. In Proceedings of the 30th International Conference on Algorithmic Learning Theory (ALT), pp. 557–580, 2019.
 G. Neu and L. Rosasco: Iterate averaging as regularization for stochastic gradient descent. In Proceedings of the 31st Annual Conference on Learning Theory (COLT), pp. 3222–3242, 2018.
 N. Cesa-Bianchi, C. Gentile, G. Lugosi and G. Neu: Boltzmann exploration done right. In Advances in Neural Information Processing Systems 30 (NIPS), pp. 6284–6293, 2017. [poster]
 G. Neu and V. Gómez: Fast rates for online learning in Linearly Solvable Markov Decision Processes. In Proceedings of the 30th Annual Conference on Learning Theory (COLT), pp. 1567–1588, 2017. [slides]
 T. Liu, G. Lugosi, G. Neu and D. Tao: Algorithmic stability and hypothesis complexity. In Proceedings of the 34th International Conference on Machine Learning (ICML), pp. 2159–2167, 2017.
 T. Kocák, G. Neu and M. Valko: Online learning with Erdős–Rényi side-observation graphs. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), pp. 339–346, 2016.
 T. Kocák, G. Neu and M. Valko: Online learning with noisy side observations. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 1186–1194, 2016.
 G. Neu: Explore no more: Improved high-probability regret bounds for non-stochastic bandits. In Advances in Neural Information Processing Systems 28 (NIPS), pp. 3150–3158, 2015. [poster] [slides]
 G. Neu: First-order regret bounds for combinatorial semi-bandits. In Proceedings of the 28th Annual Conference on Learning Theory (COLT), pp. 1360–1375, 2015. [poster] [slides]
 G. Neu and M. Valko: Online Combinatorial Optimization with Stochastic Decision Sets and Adversarial Losses. In Advances in Neural Information Processing Systems 27 (NIPS), pp. 2780–2788, 2014. [poster] [slides]
 T. Kocák, G. Neu, M. Valko and R. Munos: Efficient Learning by Implicit Exploration in Bandit Problems with Side Observations. In Advances in Neural Information Processing Systems 27 (NIPS), pp. 613–621, 2014. [poster] [slides]
 A. Sani, G. Neu and A. Lazaric: Exploiting Easy Data in Online Optimization. In Advances in Neural Information Processing Systems 27 (NIPS), pp. 810–818, 2014. [poster] [spotlight] [talk]
 A. Zimin and G. Neu: Online Learning in Episodic Markov Decision Processes by Relative Entropy Policy Search. In Advances in Neural Information Processing Systems 26 (NIPS), pp. 1583–1591, 2013. [poster] [slides]
 G. Neu and G. Bartók: An Efficient Algorithm for Learning with Semi-Bandit Feedback. In Proceedings of the 24th International Conference on Algorithmic Learning Theory (ALT), pp. 234–248, 2013. [poster] [slides] Full version in JMLR '16.
 L. Devroye, G. Lugosi and G. Neu: Prediction by Random-Walk Perturbation. In Proceedings of the 26th Annual Conference on Learning Theory (COLT), pp. 460–473, 2013. [slides] Full version in IEEE TIT '15.
 G. Neu, A. György, and Cs. Szepesvári: The Adversarial Stochastic Shortest Path Problem with Unknown Transition Probabilities. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 805–813, 2012. [supplement] [poster]
 A. György and G. Neu: Near-Optimal Rates for Limited-Delay Universal Lossy Source Coding. In 2011 IEEE International Symposium on Information Theory, pp. 2218–2222, 2011. Full version in IEEE TIT '14.
 G. Neu, A. György, Cs. Szepesvári and A. Antos: Online Markov Decision Processes under Bandit Feedback. In Advances in Neural Information Processing Systems 23 (NIPS), pp. 1804–1812, 2010. [poster] [spotlight] Full version in IEEE TAC '14.
 G. Neu, A. György, and Cs. Szepesvári: The Online Loop-free Stochastic Shortest-Path Problem. In Proceedings of the 23rd Conference on Learning Theory (COLT), pp. 231–243, 2010.
 G. Neu and Cs. Szepesvári: Apprenticeship Learning using Inverse Reinforcement Learning and Gradient Methods. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence (UAI), pp. 295–302, 2007.
Other

