We had two papers accepted at AI&Stats’16 on multi-armed bandits, both exploiting the structure of the arms: one to reduce the regret in reward maximization, the other to reduce the probability of error in best-arm identification.

**Online learning with noisy side observations** (Tomáš Kocák, Gergely Neu, Michal Valko)

*We propose a new partial-observability model for online learning problems where the learner, besides its own loss, also observes some noisy feedback about the other actions, depending on the underlying structure of the problem. We represent this structure by a weighted directed graph, where the edge weights are related to the quality of the feedback shared by the connected nodes. Our main contribution is an efficient algorithm that guarantees a regret of O(sqrt(alpha T)) after T rounds, where alpha is a novel graph property that we call the effective independence number. Our algorithm is completely parameter free and does not require knowledge (or even estimation) of alpha. For the special case of binary edge weights, our setting reduces to the partial-observability models of Mannor & Shamir (2011) and Alon et al. (2013) and our algorithm recovers the near-optimal regret bounds.*
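To make the feedback model concrete, here is a minimal sketch in Python of one *plausible* instantiation (an assumption for illustration, not the paper's exact definition): after playing an arm, the learner observes its own loss exactly, and for every other arm a signal mixing the true loss with uniform noise according to the edge weight of the feedback graph.

```python
import random

def noisy_side_observations(weights, losses, played, rng):
    """Illustrative side-observation model (assumed form, not the paper's
    exact one): weights[played][i] in [0, 1] controls how informative the
    signal about arm i is after playing arm `played`; weight 1 reveals the
    loss exactly, weight 0 yields pure noise."""
    signals = {}
    for i, loss in enumerate(losses):
        if i == played:
            signals[i] = loss  # own loss is observed noiselessly
        else:
            w = weights[played][i]
            # convex combination of the true loss and uniform noise in [0, 1)
            signals[i] = w * loss + (1 - w) * rng.random()
    return signals

rng = random.Random(0)
# symmetric example graph on 3 arms; entries are edge weights
weights = [[1.0, 0.8, 0.0],
           [0.8, 1.0, 0.3],
           [0.0, 0.3, 1.0]]
losses = [0.2, 0.5, 0.9]
obs = noisy_side_observations(weights, losses, played=0, rng=rng)
# obs[0] is exactly 0.2; obs[1] is a noisy estimate of 0.5;
# obs[2] carries no information about arm 2 (edge weight 0)
```

With binary edge weights every signal is either the exact loss or pure noise, which is how this setting degenerates to the graph-feedback models of Mannor & Shamir (2011) and Alon et al. (2013).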

**Improved Learning Complexity in Combinatorial Pure Exploration Bandits** (V. Gabillon, A. Lazaric, M. Ghavamzadeh, R. Ortner, P. Bartlett)