## Hello, world!

**Published:**

Blog coming soon.

**Published:**

One of the most influential lectures I ever caught in grad school was Ryan Gutenkunst’s talk on *sloppy models*. Model sloppiness, first developed by Jim Sethna and his students and collaborators, is a ubiquitous phenomenon in mathematical models for the natural sciences. Often, especially in biology, a model of a system of interest will have many more parameters to be fit than observations to fit them with: a scenario like eighty parameters and six data points is not uncommon. Standard theory tells you that you are doomed here. But if you ignore standard theory and plow ahead, you typically find that the (under)fit model will predict future experiments perfectly well. How could that possibly work?
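A minimal sketch of what sloppiness looks like in practice, using a toy sum-of-exponentials fit (a classic sloppy-model setting; the specific rates, data, and variable names here are my own illustration, not from the talk). Eight amplitudes are fit to only six observations, yet the underdetermined fit still predicts unseen points, while the Hessian's spectrum reveals that most parameter directions are barely constrained:

```python
import numpy as np

# Toy sloppy model: y(t) = sum_i a_i * exp(-k_i * t), with eight amplitudes
# a_i to fit from only six observations of a single-exponential "truth".
# Rates k_i are fixed, so the fit is a linear least-squares problem.
t_obs = np.linspace(0.1, 2.0, 6)
y_obs = np.exp(-1.3 * t_obs)                 # data from the "true" system

rates = np.array([0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0])
J = np.exp(-np.outer(t_obs, rates))          # 6x8 design matrix

# Underdetermined: infinitely many parameter sets fit the data exactly;
# lstsq returns the minimum-norm one.
a, *_ = np.linalg.lstsq(J, y_obs, rcond=None)

# Sloppiness: the singular values of J (and hence the eigenvalues of the
# quadratic-cost Hessian J^T J) span many orders of magnitude, so most
# parameter combinations are barely constrained by the data.
s = np.linalg.svd(J, compute_uv=False)
spread = s.max() / s.min()

# Yet the badly underdetermined fit still predicts the system at times
# it never saw:
t_new = np.linspace(0.1, 2.0, 50)
err = np.max(np.abs(np.exp(-np.outer(t_new, rates)) @ a - np.exp(-1.3 * t_new)))
print(f"singular-value spread: {spread:.1e}, max prediction error: {err:.1e}")
```

The individual fitted amplitudes are nearly meaningless (move along a "sloppy" direction and the fit barely changes), but the few "stiff" directions the data does pin down are enough to make the model predictive.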

**Published:**

Shared ownership is supposed to align incentives. If two founders each own 50% of a given company, then they will tend to want the same things for that company. But consider a founder who owns 99% of one company and an investor who owns 1% of 99 companies: do these actors always have the same incentives? Not necessarily. To see why, consider the following model.

**Published:**

Next week I’ll be giving a guest lecture at MIT for 15.S08, Advanced Data Analytics and Machine Learning in Finance, sketching an overview of modern speech-to-text systems and offering some lessons learned from building Scribe.