Self-learning AI Movement Prediction: Beyond Airstriker Genesis to multi-directional predictions
Quick update on my self-learning software experiment:
Thanks to your feedback, I decided to test my prediction system on a newer tower-defense game from the Apple App Store (simply called ‘The Tower’). What's crucial to remember is that my algorithm is not pre-trained: it starts with zero knowledge and learns exclusively from the game it is currently playing, building its model from the ground up without deep learning or neural networks.
In this game (unlike Airstriker, which I’ve used previously), players don't control a spaceship or fire weapons; you play by ‘upgrading’ your weapons and so on. It's simpler in one respect: there's only one type of enemy, and it always approaches the center, so the system can't demonstrate its ability to differentiate between enemy types here. But this simplicity brings other interesting challenges. Enemies approach from all 360 degrees, pushing the boundaries of the path prediction software. They overlap during explosions, requiring the system to separate them. And there's more visual clutter, including static lines and a non-black background.
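To make the idea of online, non-neural movement learning concrete: the post doesn't describe my system's internals, so the following is purely an illustrative sketch of one simple way to learn paths from scratch during play. It counts observed per-cell displacements on a grid and predicts the most frequent one, which naturally handles approaches from any direction. The class name and grid representation are my own assumptions, not the actual implementation.

```python
from collections import Counter, defaultdict

class TransitionPredictor:
    """Illustrative sketch (not the actual system): an online next-position
    predictor that counts observed (dx, dy) steps per grid cell and predicts
    the most frequent one. No pre-training, no neural networks."""

    def __init__(self):
        # transitions[cell] counts each displacement ever seen from that cell
        self.transitions = defaultdict(Counter)

    def observe(self, prev_cell, next_cell):
        # Record the displacement an object made between two frames.
        dx = next_cell[0] - prev_cell[0]
        dy = next_cell[1] - prev_cell[1]
        self.transitions[prev_cell][(dx, dy)] += 1

    def predict(self, cell):
        # Predict the next cell as the most frequently observed step.
        moves = self.transitions[cell]
        if not moves:
            return cell  # nothing learned yet: assume the object stays put
        dx, dy = moves.most_common(1)[0][0]
        return (cell[0] + dx, cell[1] + dy)

# Usage: an enemy approaching the center (0, 0) from the right.
pred = TransitionPredictor()
path = [(5, 0), (4, 0), (3, 0), (2, 0)]
for a, b in zip(path, path[1:]):
    pred.observe(a, b)

print(pred.predict((3, 0)))  # learned step toward the center: (2, 0)
```

Because the counts start empty and update every frame, predictions improve round by round, which is the kind of gradual improvement the overlay visualization shows.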
The system's predictive performance has been remarkably strong. I’ve put together an overlay video to visually demonstrate how the system learns and adapts in this new game. The blue/yellow-ish tiles behind the moving squares are the prediction output; you can see them increase as the system learns over the course of a couple of rounds. Note: if things don’t align perfectly, that’s down to my poor video editing skills…
Your feedback is appreciated as always!