Welcome to my research blog.

This page collects assorted insights that I stumbled upon in my daily research and found worth sharing. I plan to post on topics ranging from elegant, curious or useful mathematical identities to interesting techniques in online learning. There is no set schedule; I only post when I have something to say. Subscribe to be notified when that happens.

Enjoy!

Wouter M. Koolen

- 2021-02-08: Dot product in Julia. We implement and profile the dot product in Julia.
- 2021-01-20: Tightening the Screws on the FreeGrad Martingale. We review the recently proposed FreeGrad Martingale, and improve one of the constants in its definition.
- 2020-10-13: Optimal Sparsification. We review one way to measure sparsity of a data set, and derive the optimal pre-processing step to maximise sparsity.
- 2020-03-04: On the Tightness of Pinsker's Bound. We look into the tightness of Pinsker's bound relating Total Variation distance to Kullback-Leibler divergence.
- 2018-11-13: Bounded Excess Risk for AdaHedge. We show that AdaHedge has constant excess risk (also known as pseudo-regret) for nice stochastic cases.
- 2018-10-24: The Value of Random Game Trees. We look at an intuitive generative model for game trees. We compute the distribution of the value of the root node, and see how it concentrates with the depth.
- 2018-09-29: Solving Two-part Metal Puzzles: Theory and Practice. A brief look at two metal take-me-apart puzzles.
- 2018-06-11: A Dynamical System View of some Exponential Update. We consider a natural mapping from the simplex to itself, similar to but different from the well-known exponential weights scheme. We investigate the limit behaviour when iterating this mapping.
- 2018-02-21: Conference on Learning Theory (COLT) Proceedings. The proceedings of the Conference On Learning Theory (COLT) 2010 are now online again.
- 2017-09-15: Conference on Learning Theory (COLT) Video Archives. The videos of the 2017 Conference On Learning Theory (COLT) are now online.
- 2017-09-08: Bernoulli is Sub-Gaussian. We show that any Bernoulli random variable is sub-Gaussian with variance factor $\frac{1}{4}$.
- 2017-04-26: A Quick and Dirty Finite Time Law of the Iterated Logarithm Result. The Law of the Iterated Logarithm is an asymptotic uniform deviation bound. We prove a finite-time version using martingale arguments.
- 2017-03-20: Visualisation of Conformal Mappings. A visualisation of conformal mappings.
- 2017-03-11: Rendering with Droste Effects. A Droste image contains a smaller copy of itself. We look at how to render such images.
- 2017-01-21: The Advantage of Full AdaGrad. AdaGrad, an adaptive version of Online Gradient Descent, is often used in its diagonal version. In this post we take a look at the full-matrix version to understand the benefit of its additional adaptive power.
- 2016-11-10: Matrix iProd and Matrix Squint. We look at extending two recent adaptive online learning strategies, iProd and Squint, to the matrix Hedge setting. The extension is straightforward up to a point, but we end with a nice open problem.
- 2016-10-31: AdaFTRL. We analyse AdaFTRL from the AdaHedge perspective.
- 2016-09-06: Exploiting Curvature Using Exponential Weights. We show how Exponential Weights applies uniformly to convex losses, strongly convex losses and exp-concave losses.
- 2016-09-01: A Tractable Bound for Simple Power Inequalities. We study a simple type of power inequality that pops up in the analysis of learning algorithms. The inequality has no general analytic solution, and we find a tractable bound.
- 2016-08-11: Visualising The Escort Probability Map. A simple visualisation of the escort probability map.
- 2016-04-11: The Dual of the von Neumann Relative Entropy. We compute two duals of the von Neumann relative entropy between PSD matrices.
- 2016-02-21: Gradient Descent as Exponential Weights. We regard gradient descent as an instance of exponential weights.
- 2016-02-01: The GADDAG. We have a look at the GADDAG data structure.
- 2016-01-31: The Bregman Divergence Generated by the Absolute Value Function. We look at the Bregman divergence generated by the prototypical non-differentiable convex function.
- 2016-01-13: Mixability Gap Bounds. A collection of bounds on the mixability gap.
- 2015-12-05: An Upper Bound on the Learning Rate for IID Data. We obtain an upper bound on the learning rate for convergence of the Hedge posterior in a simple iid data scenario.
- 2015-11-29: Implementing Squint. We look at some tricks and considerations that came up during my implementation of Squint.
- 2015-11-19: Tight Diagonal Upper Bounds on Dyads. We look at bounding a symmetric rank-one matrix from above by a diagonal matrix.
- 2015-09-29: Minimax on the Ball. We compare two minimax strategies on the ball.
- 2015-09-21: Colliding Balls. Videos from my Newtonian collision simulator.
- 2015-09-08: Concave-convex vs Convex-convex. We look at a simple quadratic minimax problem on the Euclidean sphere, and consider what happens when it flips from convex to concave.
- 2015-08-13: The Relative Entropy Bound for Squint. We derive the relative entropy version of the second-order quantile regret bound for Squint.
- 2015-05-25: The Hedge Algorithm in Continuous Time. We look at the Hedge algorithm in a version of continuous time.
- 2015-05-22: Exchangeable vs Mixture of IID for a Pair of Binary Outcomes. We determine which exchangeable distributions are mixtures of IID in the simple case of two binary outcomes.
- 2015-05-13: Unknown Games. We investigate what happens to simple matrix games and their solutions if we either randomly or strategically choose between two games.
- 2015-03-29: Convex Duality Illustrated. Convex duality is an important tool in machine learning. Here I present an insightful arrangement of the graphs of a convex function, its dual and their derivatives.
- 2015-03-08: Differentiability of Compositions. A composition of differentiable functions can be non-differentiable.
- 2015-02-22: Close to the Perfect Expert, now Without Outcomes. We look at one of the simplest online learning problems: making decisions in the presence of a perfect expert. We derive a non-uniform regret bound.
- 2015-02-01: Bimatrix Games. We show that any two-player zero-sum bimatrix game has a saddle point where both players randomise over the same number of strategies.
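A recurring thread in the posts above is the exponential weights (Hedge) update, which underlies AdaHedge, AdaFTRL, Squint and the gradient-descent connection. As a minimal, self-contained sketch of that basic update (my own illustration, not code from any particular post), assuming $K$ experts, losses in $[0,1]$ and a fixed learning rate $\eta$:

```python
import numpy as np

def exponential_weights(losses, eta=1.0):
    """Run the basic exponential weights (Hedge) update.

    losses: (T, K) array of per-round losses in [0, 1] for K experts.
    eta: fixed learning rate (adaptive tunings like AdaHedge set it per round).
    Returns the (T, K) sequence of weight vectors played.
    """
    T, K = losses.shape
    cum_loss = np.zeros(K)           # cumulative loss per expert
    weights = np.empty((T, K))
    for t in range(T):
        w = np.exp(-eta * cum_loss)  # unnormalised exponential weights
        weights[t] = w / w.sum()     # play the normalised distribution
        cum_loss += losses[t]        # then observe this round's losses
    return weights

# Toy run: two experts, one consistently better; the weights
# concentrate on the better expert as its loss lead grows.
rng = np.random.default_rng(0)
losses = np.column_stack([rng.uniform(0.0, 0.4, 50),
                          rng.uniform(0.6, 1.0, 50)])
w = exponential_weights(losses, eta=0.5)
```

The sketch starts from the uniform distribution and, with a gap in cumulative losses of order $T$, puts essentially all mass on the better expert after a few dozen rounds.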