On Twenty-Niner - student

The Baron's most recent wager set Sir R----- the task of placing tokens upon spaces numbered from zero to nine according to the outcome of a twenty-sided die upon which were inscribed two of each of those numbers. At a cost of one coin per roll of the die, Sir R-----'s goal was to place a token upon every space, for which he should receive twenty-nine coins and twenty-nine cents from the Baron.
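Since each of the ten numbers occupies two of the die's twenty faces, every roll lands upon each space with equal chance and the game reduces to a coupon collector problem. A minimal Python sketch, with function names and trial count of my own invention rather than anything from the post, estimates the expected cost of filling the board:

```python
import random

def rolls_to_fill(spaces=10, rng=None):
    # Two copies of each number on the twenty-sided die make every face
    # from zero to nine equally likely, so this is a coupon collector game.
    rng = rng or random
    filled, rolls = set(), 0
    while len(filled) < spaces:
        filled.add(rng.randrange(spaces))
        rolls += 1
    return rolls

# Estimate the expected number of rolls, and hence coins, to fill the board.
rng = random.Random(42)
trials = 100_000
mean_cost = sum(rolls_to_fill(rng=rng) for _ in range(trials)) / trials
```

The analytic expectation, ten times the tenth harmonic number, is roughly 29.29, suggestively close to the Baron's twenty-nine coins and twenty-nine cents.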

Full text...  

Big Friendly GiantS - a.k.

In the previous post we saw how we could perform a univariate line search for a point that satisfies the Wolfe conditions, meaning that it is reasonably close to a minimum and takes a lot less work to find than the minimum itself. Line searches are used in a class of multivariate minimisation algorithms that iteratively choose directions in which to proceed, in particular those that use approximations of the Hessian matrix of second partial derivatives of a function to do so, similarly to how the Levenberg-Marquardt multivariate inversion algorithm uses a diagonal matrix in place of the sum of the products of its Hessian matrices for each element and the error in that element's current value. In this post we shall take a look at one of them.
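To illustrate the idea of approximating second derivatives from observed gradients, here is a one-dimensional quasi-Newton, or secant, iteration in Python; a sketch of the principle only, not the multivariate scheme that the post develops:

```python
def secant_minimise(g, x0, x1, steps=50, tol=1e-12):
    # Approximate the second derivative by the slope of the gradient g
    # between successive iterates, in the spirit of quasi-Newton methods.
    g0, g1 = g(x0), g(x1)
    for _ in range(steps):
        if abs(g1 - g0) < tol:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        g0, g1 = g1, g(x1)
    return x1

# Minimise f(x) = (x - 3)**2 + 1 through its gradient g(x) = 2*(x - 3).
x_min = secant_minimise(lambda x: 2.0 * (x - 3.0), 0.0, 1.0)
```

Because the example's gradient is linear the iteration lands on the minimum at three almost immediately; curved gradients take more steps.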

Full text...  

On A Clockwork Contagion - student

During the recent epidemic, my fellow students and I had plenty of time upon our hands due to the closure of the taverns, theatres and gambling houses at which we would typically while away our evenings and the Dean's subsequent edict restricting us to halls. We naturally set to thinking upon the nature of the disease's transmission and, once the Dean relaxed our confinement, we returned to our college determined to employ Professor B------'s incredible mathematical machine to investigate the probabilistic nature of contagion.

Full text...  

Wolfe It Down - a.k.

Last time we saw how we could efficiently invert a vector valued multivariate function with the Levenberg-Marquardt algorithm, which replaces the sum of the products of its second derivatives with respect to each element of its result and the differences of those elements from their target values with a diagonal matrix. Similarly, there are minimisation algorithms that use approximations of the Hessian matrix of second partial derivatives to estimate directions in which the value of the function will decrease.
Before we take a look at them, however, we'll need a way to step toward minima in such directions, known as a line search, and in this post we shall see how we might reasonably do so.
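A candidate step in a line search can be accepted or rejected with the Wolfe conditions; the following Python sketch tests a univariate step, with the constants c1 and c2 being conventional choices rather than necessarily the post's:

```python
def satisfies_wolfe(f, df, x, d, t, c1=1e-4, c2=0.9):
    # Sufficient decrease (Armijo) condition and curvature condition for
    # a step of size t from x along direction d of a univariate function.
    fx, gx = f(x), df(x)
    armijo = f(x + t * d) <= fx + c1 * t * gx * d
    curvature = df(x + t * d) * d >= c2 * gx * d
    return armijo and curvature

f  = lambda x: (x - 2.0) ** 2
df = lambda x: 2.0 * (x - 2.0)
ok = satisfies_wolfe(f, df, 0.0, 1.0, 1.0)   # a full step from 0 toward the minimum at 2
```

A tiny step such as t=0.01 fails the curvature condition here, since the gradient has barely flattened, which is exactly what stops a line search from accepting negligible progress.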

Full text...  

Twenty-Niner - baron m.

Sir R----- my fine fellow! Come in from the cold and join me at my table for a tumbler of restorative spirits!

Might I also tempt you with a wager?

Good man!

I propose a game that was popular amongst the notoriously unsuccessful lunar prospectors of '29. Spurred on by rumours of gold nuggets scattered upon the ground simply for the taking, they arrived en masse during winter woefully unprepared for the inclement weather. By the time that I arrived on a diplomatic mission to the king of the moon people they were in a frightful state, desperately short of provisions and futilely trying to work the frostbitten land to grow more.

Full text...  

Found In Space - a.k.

Some time ago we saw how Newton's method used the derivative of a univariate scalar valued function to guide the search for an argument at which it took a specific value. A related problem is finding a vector at which a multivariate vector valued function takes one, or at least comes as close as possible to it. In particular, we should often like to fit an arbitrary parametrically defined scalar valued functional form to a set of points with possibly noisy values, much as we did when using linear regression to find the best fitting weighted sum of a given set of functions. In this post we shall see how we can generalise Newton's method to solve such problems.
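The univariate iteration described above can be sketched in a few lines of Python, repeatedly following the tangent line toward the target value; the multivariate generalisation replaces the derivative with a matrix of partial derivatives:

```python
def newton_solve(f, df, y, x0, steps=20, tol=1e-12):
    # Newton's method for f(x) = y: step by the residual divided by the
    # derivative, i.e. follow the tangent at the current iterate.
    x = x0
    for _ in range(steps):
        fx = f(x)
        if abs(fx - y) < tol:
            break
        x -= (fx - y) / df(x)
    return x

# Find the x for which x**2 equals two, i.e. the square root of two.
root = newton_solve(lambda x: x * x, lambda x: 2.0 * x, 2.0, 1.0)
```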

Full text...  

On Tug O' War - student

The Baron and Sir R-----'s latest wager consisted of first placing a draught piece upon the fifth lowest of a column of twelve squares and subsequently moving it up or down by one space depending upon the outcome of a coin toss until such time as it should escape, either by moving above the topmost or below the bottommost square. In the former outcome the Baron should have had a prize of three coins and in the latter Sir R----- should have had two.
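The wager is a symmetric random walk with absorbing barriers, the classical gambler's ruin. A minimal Python simulation, with square numbering and trial count of my own choosing, estimates the chance that the piece escapes below:

```python
import random

def exits_below(start=5, squares=12, rng=None):
    # Symmetric random walk on squares 1..12; the piece is absorbed
    # once it moves above the topmost or below the bottommost square.
    rng = rng or random
    pos = start
    while 1 <= pos <= squares:
        pos += 1 if rng.random() < 0.5 else -1
    return pos < 1

# The gambler's ruin formula gives (13 - 5) / 13, roughly 0.615.
rng = random.Random(1)
trials = 20_000
p_below = sum(exits_below(rng=rng) for _ in range(trials)) / trials
```

With those escape probabilities Sir R-----'s expected winnings per game are two times eight thirteenths less three times five thirteenths, a single thirteenth of a coin.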

Full text...  

Smooth Operator - a.k.

Last time we took a look at linear regression, which finds the linear function that minimises the differences between its results and the values at a set of points that are presumed, possibly after applying some specified transformation, to be random deviations from a straight line or, in multiple dimensions, a flat plane. The purpose was to reveal the underlying relationship between the independent variable represented by the points and the dependent variable represented by the values at them.
This time we shall see how we can approximate the function that defines the relationship between them without actually revealing what it is.
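One simple way to approximate such a relationship without committing to a functional form is a kernel smoother, a weighted average of the observed values that favours nearby points. This Python sketch is one illustrative scheme, not necessarily the one the post employs:

```python
from math import exp

def kernel_smooth(xs, ys, x, bandwidth=1.0):
    # Nadaraya-Watson style estimate: a weighted average of the observed
    # values, weighting nearby points more heavily with a Gaussian kernel.
    weights = [exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.1, 2.9, 4.2]   # noisy samples of y = x
y_hat = kernel_smooth(xs, ys, 2.0, bandwidth=0.5)
```

The bandwidth trades smoothness against fidelity: a large one averages over many points and flattens the estimate, a small one tracks the noise.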

Full text...  

Finally On A Very Cellular Process - student

Over the course of the year my fellow students and I have been utilising our free time to explore the behaviour of cellular automata, which are mechanistic processes that crudely approximate the lives and deaths of unicellular creatures such as amoebas. Specifically, they consist of unending lines of boxes, some of which contain cells that are destined to live, die and reproduce according to the occupancy of their neighbours.
Most recently we have seen how we can categorise automata by the manner in which their populations evolve from a primordial state of each box having equal chances of containing or not containing a cell, be they uniform, constant, cyclical, migratory, random or strange. It is the last of these, which contain arrangements of cells that interact with each other in complicated fashions, that has lately consumed our attention and I shall now report upon our findings.
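Such automata can be sketched in Python as elementary cellular automata; here rule 110, a well known rule exhibiting complicated interacting structures, run on a finite ring of boxes rather than the unending line of the posts:

```python
def step(cells, rule=110):
    # One update of an elementary cellular automaton: each cell's fate is
    # the bit of the rule number indexed by its neighbourhood's occupancy.
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1] + [0] * 15      # a single occupied box
history = [row]
for _ in range(10):
    row = step(row)
    history.append(row)
```

Printing each row of `history` as blanks and blocks shows the characteristic growing triangle of rule 110.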

Full text...  

Regressive Tendencies - a.k.

Several months ago we saw how we could use basis functions to interpolate between points upon arbitrary curves or surfaces to approximate the values between them. Related to that is linear regression, which fits a straight line, or a flat plane, through points that have values that are assumed to be the results of a linear function with independent random errors, having means of zero and equal standard deviations, in order to reveal the underlying relationship between them. Specifically, we want to find the linear function that minimises the differences between its results and the values at those points.
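For a single independent variable the minimising line has a simple closed form, the ordinary least squares estimate; a minimal Python sketch, with sample points invented for illustration:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x: minimise the sum of squared
    # differences between the line's results and the observed values.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.1, 6.9]   # roughly y = 1 + 2*x
a, b = fit_line(xs, ys)
```

The slope is the covariance of the points and values divided by the variance of the points, and the intercept makes the line pass through their means.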

Full text...