Today we will talk about "Situation awareness", an important part of the AI in Enlisted. More specifically: how we teach the AI soldiers in your squad to identify hazards and priority targets.
Let us start this devblog with an important caveat: as is customary in games, by "AI" we mean a set of algorithms that allow the soldiers in your squad to make their own decisions.
After the "Battle for Moscow" playtest we continued to work on the intelligence of your computer-controlled teammates. We taught them to overcome obstacles, to follow the commander's orders, to take cover and even to throw grenades. The last open test helped us identify the most important areas for AI improvement, one of which is assessing hazards and targets.
During the past two playtests you played alongside AI soldiers that used a performance-optimized "Aggro meter" system. It accumulated a "danger" index on visible targets that damaged the soldier or killed his teammates. The system worked, but it performed poorly in situations where a target had a low "danger" level yet was easier to fire at. After weighing the pros and cons, we decided to rework the system completely in favor of a different approach called the "Utility Function".
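To illustrate the old system's weakness, here is a minimal sketch of how such an "Aggro meter" might work. The event hooks, the kill bonus, and all numbers are our assumptions for illustration, not the shipped code:

```python
# Sketch of an "Aggro meter": each enemy accumulates a "danger" score
# from damage events; the most dangerous visible enemy is targeted.
from collections import defaultdict

aggro = defaultdict(float)  # enemy_id -> accumulated danger


def on_damage(enemy_id, damage):
    """A damage event raises that enemy's danger index."""
    aggro[enemy_id] += damage


def on_teammate_killed(enemy_id):
    """A kill adds a large fixed bonus (assumed value)."""
    aggro[enemy_id] += 100.0


def pick_target(visible):
    # The weakness described above: an accessible but low-danger target
    # never wins, because accessibility is not part of the score at all.
    scored = [e for e in visible if aggro[e] > 0]
    return max(scored, key=lambda e: aggro[e], default=None)


on_damage("sniper", 35.0)
on_damage("rifleman", 5.0)
on_teammate_killed("sniper")
print(pick_target(["sniper", "rifleman"]))  # -> sniper
```

Notice that the nearby, exposed rifleman is ignored as long as the distant sniper holds the highest accumulated danger.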
Technically speaking, the new algorithm is a set of mathematical functions over the input values. The final function lets the AI weigh those inputs and produce a single number, the danger posed by an enemy, which the AI uses to rank its targets. This approach exposes a large number of configurable parameters, so we can adjust AI behavior even within a single session.
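In code, a utility function of this kind might look like the following sketch. The input signals (proximity, damage dealt, exposure), the weights, and the normalization constants are all hypothetical placeholders; the real game surely uses more inputs and tuned values:

```python
# Sketch of a "Utility Function" target scorer: several normalized inputs
# are weighted and collapsed into one number, the danger from the enemy,
# which is then used to sort the AI's goals.
from dataclasses import dataclass


@dataclass
class Target:
    name: str
    distance: float      # meters to the enemy
    damage_dealt: float  # damage this enemy has dealt to the squad
    exposure: float      # 0..1, how open this enemy is to our fire

# Tunable weights: exactly the kind of parameter designers can adjust
# per session (values below are illustrative assumptions).
WEIGHTS = {"proximity": 0.4, "damage": 0.4, "exposure": 0.2}


def danger(target):
    """Collapse the inputs into a single danger score."""
    proximity = 1.0 / (1.0 + target.distance / 100.0)   # closer => higher
    damage = min(target.damage_dealt / 50.0, 1.0)       # clamp to 0..1
    return (WEIGHTS["proximity"] * proximity
            + WEIGHTS["damage"] * damage
            + WEIGHTS["exposure"] * target.exposure)


targets = [
    Target("mg_nest", distance=120.0, damage_dealt=40.0, exposure=0.2),
    Target("rifleman", distance=10.0, damage_dealt=5.0, exposure=0.9),
]
# Sort goals by descending danger: the close, exposed rifleman now
# outranks the distant machine-gun nest despite dealing less damage.
targets.sort(key=danger, reverse=True)
```

Unlike the pure aggro approach, accessibility (distance, exposure) contributes directly to the score, which is the behavior gap the rework was meant to close.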
For you, all this means that AI soldiers who choose their targets with the "Utility Function" algorithm have become more responsive to a rapidly changing battlefield. The first internal tests already show that the soldiers act noticeably more like real players: they react to danger faster and switch between targets more willingly.
Particularly interesting is that the new algorithm opens the door to machine learning, which will let us continually improve the behavior of the AI soldiers.
We can say that one artificial intelligence system (machine learning) teaches a module of another artificial intelligence (the target selection module) to be more effective in battle and show better results.
We are still selecting the right criteria for learning and will probably train AIs for different tasks: survival and performance. For example, a soldier on the defensive should choose the target that poses the biggest threat, while on the attack he should choose the target that is easiest to kill. Clearly, these goals don't always coincide. Learning will allow us to combine current developments in the AI field and apply them to improve the behavior of soldiers on the battlefield.
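One convenient property of a utility function is that the defense/attack distinction can be expressed as two weight profiles over the same inputs. The profiles and target attributes below are assumptions made up for this sketch:

```python
# Hypothetical weight profiles for the two training goals mentioned above:
# survival (defense) emphasizes threat, performance (attack) emphasizes
# how easy a target is to kill.
PROFILES = {
    "defense": {"threat": 0.8, "ease_of_kill": 0.2},
    "attack":  {"threat": 0.2, "ease_of_kill": 0.8},
}


def score(target, mode):
    """Same inputs, different weights depending on the tactical mode."""
    w = PROFILES[mode]
    return (w["threat"] * target["threat"]
            + w["ease_of_kill"] * target["ease_of_kill"])


targets = [
    {"name": "tank",  "threat": 0.9, "ease_of_kill": 0.1},
    {"name": "scout", "threat": 0.2, "ease_of_kill": 0.9},
]

best_on_defense = max(targets, key=lambda t: score(t, "defense"))["name"]
best_on_attack = max(targets, key=lambda t: score(t, "attack"))["name"]
# best_on_defense -> "tank", best_on_attack -> "scout"
```

Training could then mean learning these weights from battle outcomes rather than hand-tuning them, which is exactly where the goals can be optimized separately.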
Every gaming session already gives us a huge stream of data for training the AI in real time to predict where danger will come from and to choose its direction of gaze accordingly. We already use an algorithm that searches for clusters of battle events to pick the AI's gaze direction, and further training on data about how real players behave will make this system much more powerful! The priorities set by our engineers are also not ideal for every situation and location, and here too we see potential for machine learning. It helps us a lot that each day of an open test provides a large amount of data for training the AI to select priorities better. Even the AI's aiming can be improved by training it on how real players aim.
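A toy version of the event-cluster idea: bucket recent danger events by direction and look toward the densest bucket. The event format, the angular binning, and the weights are all simplifying assumptions for illustration; the in-game algorithm is certainly more sophisticated:

```python
# Crude "cluster search" over battle events: pick the angular sector with
# the highest total event weight and face its center.

# Recent danger events: (angle in degrees relative to the soldier,
# recency/importance weight). Values are made up for the example.
events = [(10, 1.0), (15, 0.8), (200, 0.5), (12, 0.6)]


def gaze_direction(events, bin_width=30):
    """Return the center angle of the densest 'bin_width'-degree sector."""
    bins = {}
    for angle, weight in events:
        b = int(angle // bin_width)
        bins[b] = bins.get(b, 0.0) + weight
    best = max(bins, key=bins.get)          # densest sector wins
    return best * bin_width + bin_width / 2  # look at its center


print(gaze_direction(events))  # -> 15.0 (the 0-30 degree sector dominates)
```

Replaying real player sessions through such a predictor is one way logged data could refine where the AI looks.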
This is just the beginning of our journey to improve the game's AI through machine learning. Soon we will tell you how we teach computer-controlled soldiers to hide from enemy fire. Keep up with Enlisted's development, stay tuned, and follow the news!