Welcome to my research blog.

This page collects assorted insights that I stumbled upon in my daily research and found worth sharing. I plan to post on topics ranging from elegant, curious, or useful mathematical identities to interesting techniques in online learning. There is no set schedule; I post only when I have something to say. Subscribe to learn when that happens.

Enjoy!

Wouter M. Koolen

- 2017-03-20 — Visualisation of Conformal Mappings
  A visualisation of conformal mappings.

- 2017-03-11 — Rendering with Droste Effects
  A Droste image contains a smaller copy of itself. We look at how to render such images.

- 2017-01-21 — The Advantage of Full AdaGrad
  AdaGrad, an adaptive version of Online Gradient Descent, is often used in its diagonal version. In this post we take a look at the full matrix version to understand the benefit of its additional adaptive power.

- 2016-11-10 — Matrix iProd and Matrix Squint
  We look at extending two recent adaptive online learning strategies, iProd and Squint, to the matrix Hedge setting. The extension is straightforward up to a point, but we end with a nice open problem.

- 2016-10-31 — AdaFTRL
  We analyse AdaFTRL from the AdaHedge perspective.

- 2016-09-06 — Exploiting Curvature Using Exponential Weights
  We show how Exponential Weights applies uniformly to convex losses, strongly convex losses and exp-concave losses.

- 2016-09-01 — A Tractable Bound for Simple Power Inequalities
  We study a simple type of power inequality that pops up in the analysis of learning algorithms. The inequality has no general analytic solution, and we find a tractable bound.

- 2016-08-11 — Visualising the Escort Probability Map
  A simple visualisation of the escort probability map.

- 2016-04-11 — The Dual of the von Neumann Relative Entropy
  We compute two duals of the von Neumann relative entropy between PSD matrices.

- 2016-02-21 — Gradient Descent as Exponential Weights
  We regard gradient descent as an instance of exponential weights.

- 2016-02-01 — The GADDAG
  We have a look at the GADDAG data structure.

- 2016-01-31 — The Bregman Divergence Generated by the Absolute Value Function
  We look at the Bregman divergence generated by the prototypical non-differentiable convex function.

- 2016-01-13 — Mixability Gap Bounds
  A collection of bounds on the mixability gap.

- 2015-12-05 — An Upper Bound on the Learning Rate for IID Data
  We obtain an upper bound on the learning rate for convergence of the Hedge posterior in a simple iid data scenario.

- 2015-11-29 — Implementing Squint
  We look at some tricks and considerations that came up during my implementation of Squint.

- 2015-11-19 — Tight Diagonal Upper Bounds on Dyads
  We look at bounding a symmetric rank-one matrix from above by a diagonal matrix.

- 2015-09-29 — Minimax on the Ball
  We compare two minimax strategies on the ball.

- 2015-09-21 — Colliding Balls
  Videos from my Newtonian collision simulator.

- 2015-09-08 — Concave-convex vs Convex-convex
  We look at a simple quadratic minimax problem on the Euclidean sphere, and consider what happens when it flips from convex to concave.

- 2015-08-13 — The Relative Entropy Bound for Squint
  We derive the relative entropy version of the second-order quantile regret bound for Squint.

- 2015-05-25 — The Hedge Algorithm in Continuous Time
  We look at the Hedge algorithm in a version of continuous time.

- 2015-05-22 — Exchangeable vs Mixture of IID for a Pair of Binary Outcomes
  We determine which exchangeable distributions are mixtures of IID in the simple case of two binary outcomes.

- 2015-05-13 — Unknown Games
  We investigate what happens to simple matrix games and their solutions if we either randomly or strategically choose between two games.

- 2015-03-29 — Convex Duality Illustrated
  Convex duality is an important tool in machine learning. Here I present an insightful arrangement of the graphs of a convex function, its dual and their derivatives.

- 2015-03-08 — Differentiability of Compositions
  A composition of differentiable functions can be non-differentiable.

- 2015-02-22 — Close to the Perfect Expert, now Without Outcomes
  We look at one of the simplest online learning problems: making decisions in the presence of a perfect expert. We derive a non-uniform regret bound.

- 2015-02-01 — Bimatrix Games
  We show that any two-player zero-sum bimatrix game has a saddle point where both players randomise over the same number of strategies.