We consider the following on-line learning protocol: given the sequence of previous outcomes, a basic on-line learning agent is required to output a prediction for the next outcome. The quality of the prediction is measured by its discrepancy from the actual outcome, assessed by a loss function. The performance of an on-line prediction strategy on a sequence is judged by its cumulative loss per element. The asymptotic complexity of a language (i.e., of a collection of sequences) is then defined as the maximum, over sequences in the language, of the loss per element achieved by the "best" strategy.
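The protocol above can be sketched in a few lines of Python. The square loss and the running-mean strategy below are illustrative assumptions, not part of the talk's setting; the point is only the shape of the protocol: predict from the prefix, suffer the loss, accumulate, and divide by the sequence length.

```python
def square_loss(prediction, outcome):
    """Illustrative loss: squared discrepancy between prediction and outcome."""
    return (prediction - outcome) ** 2

def mean_strategy(history):
    """Illustrative strategy: predict the running mean of past outcomes."""
    return sum(history) / len(history) if history else 0.5

def cumulative_loss_per_element(sequence, strategy, loss):
    """Run the on-line protocol: predict each outcome from the prefix
    preceding it, then return the cumulative loss per element."""
    total = 0.0
    for t, outcome in enumerate(sequence):
        prediction = strategy(sequence[:t])  # only past outcomes are visible
        total += loss(prediction, outcome)
    return total / len(sequence)

print(cumulative_loss_per_element([0, 1, 0, 1, 1], mean_strategy, square_loss))
```

In this picture, the asymptotic complexity of a language corresponds to the worst case of this per-element loss over sequences in the language, under the best available strategy.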
This talk addresses the following problem. Let AC_1 be the asymptotic complexity corresponding to one loss function and AC_2 the asymptotic complexity specified by another. What relations hold between them? We answer this question by describing the set of pairs (AC_1(L), AC_2(L)), where L ranges over all languages, as a subset of the Euclidean plane. This set turns out to have a simple geometric description in terms of the so-called generalised entropy defined by the loss function. This generalises earlier results of Lutz and Fortnow.
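For orientation, the generalised entropy of a loss function lambda is, roughly, the minimal expected loss achievable against outcomes drawn from a fixed distribution. A sketch for the binary-outcome case, where p is the probability of outcome 1 and Gamma denotes the prediction space (the notation here is an assumption based on standard usage, not taken verbatim from the talk):

\[
  \mathbb{H}(p) \;=\; \inf_{\gamma \in \Gamma} \mathbb{E}_{\omega \sim p}\,\lambda(\gamma, \omega)
  \;=\; \inf_{\gamma \in \Gamma}\bigl[(1-p)\,\lambda(\gamma, 0) + p\,\lambda(\gamma, 1)\bigr].
\]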
The result presented in the talk was published in the conference paper:
Y. Kalnishkan, V. Vovk and M. V. Vyugin. Generalised Entropy and Asymptotic Complexities of Languages. In Learning Theory: 20th Annual Conference on Learning Theory (COLT 2007), volume 4539 of Lecture Notes in Computer Science, pages 293-307. Springer, 2007.
Available at: http://www.clrc.rhul.ac.uk/people/yura/
The talk is self-contained and no specific background in machine learning is required.
Dr Yuri Kalnishkan
Dr Yuri Kalnishkan graduated from the Department of Mechanics and Mathematics of Moscow State University and obtained his PhD in computer science from Royal Holloway, University of London, where he currently works as a lecturer. His research interests include on-line learning, complexity, regression methods, and their applications, particularly in finance.