Your guess is as good as mine. But reading between the lines, it has something to do with high-end jet-fighter engine tech using AI.
@Nilgiri has given his views on it. I concur with what he says.
There are many articles and YouTube news channels that claim that the TF35K is in production. In one of them Mete Yarar claims that a prototype of a 38,000 lbf engine for Kaan has been designed and that parts production is progressing. How much credibility he has among defence buffs is another question. But as he is adamantly and openly claiming that the Kaan engine is currently in the works, one has to think that there has got to be a “smoking gun” there.
We also know that Kale, RR, TEI and Ivchenko are all working together under a consortium to put together an indigenous engine for Kaan.
Propulsion Systems (defencehub.live): "I know nothing of jet engine design but I am aware that it is mighty challenging. Has there ever been a case of an engine being designed, built and run for a thousand or so hours to check it works without any problems surfacing that required fixing? It looks like the prototype TF-6000 is taking..."
So in spite of all the secrecy and not much being known about the Kaan engine, with all the above leaked information and the clear need for an indigenous engine for our TFX fighter, it is normal that there are patent applications pertaining to KAAN without making it too open. (Remember how we all got wind of the TF6000 through a patent application.)
I mean, scaling a TF from 10k to 35k, as has been talked about for a while now, is just one example (I would think it's likely the main driver in Turkish efforts currently).
How would one optimally do the simulations for it, etc.?
That is where sound algorithm design for the dataset handling comes in, so it doesn't take the time it would have taken, say, some decades ago.
R&D <---> deep machine learning <---> "R&D primed" (just one example of early loop 3 being a branch to something R&D/IP-wise in loop 1 or 2)
At the same time, there is no easy way to mesh "AI" (essentially a bunch of nested and networked algorithms) into these things overnight with a magic wand, as AI does not understand big-picture stuff well at all.
I am, for example, biased towards Monte Carlo sims (and a number of related variants), as they have occupied the higher-order routines I am currently working on, from experience formed over the last few years (for HR to later take them further properly, when PW gets around to hiring a larger department/consultancy for it).
So, roughly speaking, scaling a TF from X to say Y needs standard simulation runs, then sensitivity analysis, and then a look into optimisation of all of this (to reduce the dataset needed by filtering out noise relative to what you actually want, so the supercomputer doesn't have to spend X hours on the "average, usual or bad" for every quality minute of something important, etc).
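To make that pipeline a bit more concrete, here is a minimal sketch of the first two steps (plain Monte Carlo runs plus a crude variance-based sensitivity check). The `thrust_model` function, its parameters and their ranges are all invented stand-ins for illustration, not real engine physics; a real scaling study would call an actual cycle or CFD code at that point.

```python
# Hedged sketch: Monte Carlo runs plus a crude one-at-a-time sensitivity
# check on a *toy* stand-in for an engine performance model.
import numpy as np

rng = np.random.default_rng(42)

def thrust_model(fan_diameter, turbine_inlet_temp, bypass_ratio):
    """Toy surrogate: not real engine physics, just a smooth nonlinear mix."""
    return (fan_diameter ** 2.0) * np.sqrt(turbine_inlet_temp) / (1.0 + 0.1 * bypass_ratio)

def sample_inputs(n):
    """Draw design-parameter samples from assumed (made-up) ranges."""
    return {
        "fan_diameter": rng.uniform(0.9, 1.2, n),           # metres, assumed range
        "turbine_inlet_temp": rng.uniform(1700, 1900, n),    # kelvin, assumed range
        "bypass_ratio": rng.uniform(0.3, 1.1, n),            # assumed range
    }

# 1) Standard simulation runs: plain Monte Carlo over the sampled design space.
n = 10_000
x = sample_inputs(n)
thrust = thrust_model(**x)
print(f"mean thrust (arbitrary units): {thrust.mean():.1f} +/- {thrust.std():.1f}")

# 2) Crude sensitivity analysis: freeze all but one parameter at its mean and
#    see how much of the output variance each parameter drives on its own.
for name in x:
    frozen = {k: np.full(n, v.mean()) for k, v in x.items()}
    frozen[name] = x[name]                    # let only this parameter vary
    contribution = thrust_model(**frozen).var() / thrust.var()
    print(f"variance share from {name}: {contribution:.2f}")
```

The variance shares are exactly the kind of filter mentioned above: parameters that barely move the output can be sampled far more coarsely in the next, more expensive round of runs.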
AI can automate a whole lot of this (saving time and thus money) if you design it well. Basically, in effect, you "freeze in time" a jet engine operating under some condition, then filter/screen for the exact subset of air molecules (or higher-order factors involving them) that you want to run the next sim in the progression on. Then you cross-evaluate with other kinds of operating conditions, and you can scale things far quicker that way (essentially making bridges in the shortest amount of time by finding the smallest distances to bridge in the "data field").
If you are able to get the same quality of results by running a thousand units instead of the earlier requirement of, say, a million, that's an orders-of-magnitude advantage.
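One well-known way to get that "thousand instead of a million" saving is importance sampling: spend the simulation budget on the operating region you actually care about, then re-weight so the estimate stays unbiased. The sketch below is a toy example under assumed numbers (a unit-normal "load" and a made-up rarity threshold), nothing engine-specific.

```python
# Hedged sketch of variance reduction via importance sampling: estimate the
# probability of a rare "interesting" condition with far fewer runs than
# naive Monte Carlo needs. All distributions and thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(0)

THRESHOLD = 4.0          # hypothetical rare load level we care about
SHIFT = 4.0              # proposal distribution centred on the rare region

def rare_event_prob_naive(n):
    """Naive Monte Carlo: most samples land in the boring bulk of the distribution."""
    samples = rng.standard_normal(n)
    return np.mean(samples > THRESHOLD)

def rare_event_prob_importance(n):
    """Importance sampling: draw from a shifted proposal, correct with likelihood ratios."""
    samples = rng.normal(loc=SHIFT, scale=1.0, size=n)
    # weight = target density / proposal density (normalising constants cancel)
    weights = np.exp(-0.5 * samples**2) / np.exp(-0.5 * (samples - SHIFT) ** 2)
    return np.mean((samples > THRESHOLD) * weights)

print("naive, 1 thousand runs:     ", rare_event_prob_naive(1_000))
print("importance, 1 thousand runs:", rare_event_prob_importance(1_000))
print("naive, 1 million runs:      ", rare_event_prob_naive(1_000_000))
print("reference P(Z > 4):          ~3.2e-5")
```

With a thousand naive runs the estimate is usually just zero, while the re-weighted thousand lands close to the reference value; that is the shape of the advantage being described.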
Like pebbles in a zen garden, you can arrange and check systematic patterns too, and that can help you filter (and get even better resolution with the same computing power applied to less, etc). Essentially that is how the AI "learns", if you prioritise this aspect in some routine it runs in the sandbox you allot to it (and watch its progress and measure its relevancy and so on): it does the searches and filters better and better, with less higher-order "human" handholding over time.
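That "learns to search and filter with less handholding" idea maps onto adaptive, active-learning-style sampling. Here is a minimal sketch where the `expensive_sim` stand-in, the cheap nearest-neighbour surrogate and every setting are assumptions chosen just to show the loop: each round, the routine spends its small simulation budget on the candidate points the surrogate knows least about, and the printed error is the "measure its relevancy" step, round by round.

```python
# Hedged sketch of an adaptive-sampling loop: a cheap surrogate decides where
# the next few expensive "simulations" should be run, improving itself each
# round with progressively less human guidance. All functions are toys.
import numpy as np

rng = np.random.default_rng(1)

def expensive_sim(x):
    """Stand-in for a costly solver run; a real loop would call CFD/cycle code."""
    return np.sin(3.0 * x) + 0.5 * x**2

def surrogate_predict(x_train, y_train, x_query):
    """Cheap 1-nearest-neighbour surrogate; returns prediction and a distance-based uncertainty."""
    d = np.abs(x_query[:, None] - x_train[None, :])
    nearest = d.argmin(axis=1)
    return y_train[nearest], d.min(axis=1)

# Seed with a handful of random "simulations".
x_train = rng.uniform(-2, 2, 5)
y_train = expensive_sim(x_train)
candidates = np.linspace(-2, 2, 400)

for round_idx in range(5):
    pred, uncertainty = surrogate_predict(x_train, y_train, candidates)
    # "Learning" step: pick the few candidate points the surrogate knows least about.
    pick = candidates[np.argsort(uncertainty)[-3:]]
    x_train = np.concatenate([x_train, pick])
    y_train = np.concatenate([y_train, expensive_sim(pick)])
    # Relevancy check: surrogate error over the whole candidate grid.
    err = np.abs(surrogate_predict(x_train, y_train, candidates)[0]
                 - expensive_sim(candidates)).mean()
    print(f"round {round_idx}: {x_train.size} expensive runs, mean surrogate error {err:.3f}")
```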