I am a management consultant exploring the world of artificial intelligence.

The next step

A couple of years ago, I started thinking about a seemingly simple question: how does nature allow very simple organisms with super simple brains to perform quite difficult tasks - and all that without gigantic amounts of training data, huge models and heavy processing power?

How does an earthworm decide which direction is its best shot at finding shade or moisture? How does a fish tell tasty food from poison? Predator from mate? These animals operate with very limited resources, so spending any of them on a decision is quite risky. How do they generalize from the little prior experience they have? How does it work? And how does it work so fast?

I worked on neural networks in 2002, back when nobody called it “deep learning” and we could only dream of the computing power we have available today. But even back then, I focused on self-learning systems (specifically self-organizing maps) and later in my PhD, I would look deeply into semantic networks, which can also be dynamically extended with new knowledge at runtime.

What’s always fascinated me is the idea of efficient intelligence — systems that can learn with minimal effort. I’ve never believed that brute-forcing millions of images to find cats is the future of AI. Nature doesn’t throw teraflops at a problem.

So over the last few years, I've worked on a new approach. It learns continuously, starting from nothing. It generalizes via one-shot learning, and it finds causal and temporal connections in the world it observes. If you're a reader of my blog, you've seen it in action across the different stages of development it went through, demonstrating the generality it provides: the same tech can be applied to path prediction as well as traffic and congestion prediction.

At the core, it’s a self-learning engine that operates on continuous streams of numerical data — unsupervised, one-shot, and always adapting. It learns causal and temporal relationships across dozens or hundreds of time series at once.

And not only can it learn and predict, it also acts: It allocates resources based on what it predicts will happen next.
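To make the predict-then-act loop concrete, here is a minimal sketch of the general pattern: per-stream one-step forecasts that update incrementally as data arrives, with a fixed budget allocated in proportion to predicted upward moves. Everything here (the exponential-smoothing forecaster, the proportional allocation rule, all names) is a hypothetical illustration of the loop's shape, not the actual engine described above.

```python
class StreamPredictor:
    """One-step forecaster for a single numerical stream (exponential smoothing)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # adaptation rate: how fast old observations fade
        self.level = None    # current smoothed estimate

    def update(self, value):
        # Learn from each new observation immediately - no batch training.
        if self.level is None:
            self.level = value  # initialized from the very first sample
        else:
            self.level += self.alpha * (value - self.level)

    def predict(self):
        return self.level


def allocate(predictors, observations, budget=1.0):
    """Update every stream with its latest value, then split the budget
    in proportion to each stream's predicted upward move."""
    upsides = {}
    for name, value in observations.items():
        p = predictors[name]
        prev = p.predict()
        p.update(value)
        if prev is not None:
            upsides[name] = max(p.predict() - prev, 0.0)  # only positive moves
    total = sum(upsides.values())
    if total == 0:
        return {name: 0.0 for name in upsides}
    return {name: budget * u / total for name, u in upsides.items()}


# Feed two streams tick by tick; allocations adapt as predictions change.
preds = {"A": StreamPredictor(), "B": StreamPredictor()}
for tick in [{"A": 1.0, "B": 2.0}, {"A": 1.5, "B": 2.0}]:
    weights = allocate(preds, tick)
```

After the second tick, stream A's forecast has moved up while B's is flat, so the whole budget flows to A. A real system would of course model far richer causal and temporal structure than a smoothing filter; the point of the sketch is only the per-tick update-predict-allocate cycle.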

Currently, it's trading fully autonomously, live, in the stock market, outperforming benchmarks. Oh, and it all runs on hardware comparable to an average gaming PC - without a GPU, as one isn't needed.

But while this use case is interesting, it is not the end goal. Am I there yet? No, but it sure is a fun ride :)

The Risky Economics Behind Today’s AI Giants

Self-learning traffic prediction for Los Angeles