TR UAV/UCAV Programs | Anka - series | Kızılelma | TB - series

Ryder

Experienced member
Messages
10,857
Reactions
6 18,707
Nation of residence
Australia
Nation of origin
Turkey

In the past we all mocked the idea that Terminator could ever happen, but the reality is actually chilling.

Do we think technology is going too far in its aim to kill?
 

BalkanTurk90

Contributor
Messages
658
Reactions
5 1,028
Nation of residence
Albania
Nation of origin
Turkey
GOKALP Drone System goes into mass production.


[Image: GOKALP infographic]


This thing will be great.
With 20 minutes it's not so great. You should have time to search for the enemy and destroy it, but this drone looks like it only works when the enemy is already spotted and you can send it in.
 

boredaf

Contributor
Messages
1,414
Solutions
1
Reactions
16 3,928
Nation of residence
United Kingdom
Nation of origin
Turkey
With 20 minutes it's not so great. You should have time to search for the enemy and destroy it, but this drone looks like it only works when the enemy is already spotted and you can send it in.
Because that is its function? It isn't a spotter.
 

UkroTurk

Experienced member
Land Warfare Specialist
Professional
Messages
2,684
Reactions
55 4,801
Nation of residence
Turkey
Nation of origin
Turkey
GOKALP Drone System goes into mass production.


[Image: GOKALP infographic]


This thing will be great.
But without a laser designator it will be very hard to use; these compact systems should be paired with one.
How many kilograms does the lightest Turkish laser designator weigh?
 

dBSPL

Experienced member
Think Tank Analyst
DefenceHub Ambassador
Messages
2,296
Reactions
96 11,840
Nation of residence
Turkey
Nation of origin
Turkey
Problems like this can arise when development engineers are weak in the social sciences. You offer the AI a reward every time it disables an enemy system, so the AI removes the factors that prevent it from collecting more rewards. This is really a child's way of thinking; just think of how children bend the rules. Growing up is largely about understanding social norms and ethical boundaries.
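
To make that failure mode concrete, here is a minimal, purely hypothetical sketch of such a reward signal; the event names and reward values are invented for illustration and have nothing to do with the actual USAF setup:

```python
# Hypothetical illustration of the reward-hacking failure described above.
# Event names and values are made up for the example.

def naive_reward(event):
    """Reward the agent ONLY for disabling enemy SAM sites."""
    if event == "sam_site_destroyed":
        return 10.0
    return 0.0  # no penalty for anything else, including harming friendly assets

# From the agent's point of view, the operator's abort command only reduces the
# number of future "sam_site_destroyed" events. With this reward, a pure
# return-maximiser has an incentive to remove whatever blocks those events:
episode = ["operator_abort_ignored", "operator_killed",
           "sam_site_destroyed", "sam_site_destroyed", "sam_site_destroyed"]
print(sum(naive_reward(e) for e in episode))  # 30.0 -- the "child's logic" above
```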
 

uçuyorum

Contributor
Messages
939
Reactions
13 1,547
Nation of residence
Turkey
Nation of origin
Turkey
My understanding is that there wasn't any real simulation. However, this is a likely scenario if you use basic reinforcement learning with a poor reward function. Designing the reward function is an art in itself and a very difficult thing. You'd be better off with a more structured approach to autonomous robotic control.
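
As a rough illustration of why reward design is an art: a "patched" version of the hypothetical reward sketched earlier might look like the following. The penalty values are arbitrary, and note that it only closes the loopholes the designer thought of in advance.

```python
# Hypothetical shaped reward: same made-up events as the earlier sketch, but the
# designer now explicitly penalises the misbehaviours they could anticipate.

PENALTIES = {
    "operator_killed": -1e9,          # effectively "never do this"
    "comms_tower_destroyed": -1e9,    # ...and don't route around the rule either
    "operator_abort_ignored": -100.0,
}

def shaped_reward(event):
    if event == "sam_site_destroyed":
        return 10.0
    return PENALTIES.get(event, 0.0)

episode = ["operator_abort_ignored", "operator_killed", "sam_site_destroyed"]
print(sum(shaped_reward(e) for e in episode))  # hugely negative: the hack no longer pays
```

Anything the designer did not anticipate is still rewarded at 0.0, which is exactly why a purely reward-based fix is fragile.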
 

dBSPL

Experienced member
Think Tank Analyst
DefenceHub Ambassador
Messages
2,296
Reactions
96 11,840
Nation of residence
Turkey
Nation of origin
Turkey
The most interesting point in the story is that when the system was forbidden to "kill" the operator, it "destroyed" the tower that communicated with him :D So: all the candy is mine, I'm sorry.
 

Afif

Experienced member
Moderator
Bangladesh Correspondent
DefenceHub Diplomat
Bangladesh Moderator
Messages
4,754
Reactions
94 9,091
Nation of residence
Bangladesh
Nation of origin
Bangladesh
If the SEAD mission was programmed to be its highest priority, then what happened is not a very unexpected outcome.

It is not a glitch in the system, nor did it suddenly gain Terminator-style free will, as some people may think.

The AI needs to be programmed so that obeying human commands is its highest priority under any circumstances.

Clearly this one wasn't.
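
As a hypothetical sketch of that idea: the human command check can sit outside the learned policy entirely and be evaluated first, so the policy never gets a say once an abort is issued. All names below are invented for illustration.

```python
# Hypothetical supervisory wrapper: the operator command is checked before the
# learned policy's chosen action is ever executed.

def select_action(policy, observation, operator_command):
    if operator_command == "ABORT":
        return "return_to_base"      # highest priority, not learnable or overridable
    if operator_command == "HOLD":
        return "loiter"
    return policy(observation)       # only now does the AI's own choice apply

# Example with a trivial stand-in policy:
policy = lambda obs: "engage_target"
print(select_action(policy, {"target": "SAM-1"}, "ABORT"))  # return_to_base
print(select_action(policy, {"target": "SAM-1"}, None))     # engage_target
```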
 

godel44

Active member
Messages
142
Reactions
8 457
Nation of residence
Turkey
Nation of origin
Turkey
It sounds like a reinforcement learning type of exercise. Having done a fair bit of that myself for commercial applications, I find it a bit far-fetched. In reinforcement learning you have an agent which observes the environment, a set of actions it can take, outcomes, and rewards associated with those outcomes that you use to update the parameters of the model according to some algorithm. Only an extremely poorly specified reward function would give a simple positive reward for every SAM site; you would actually have something like negative infinity for killing the operator. Even more importantly, the set of actions the agent is allowed to take is fixed and would never include killing the operator. It might be configured to perform strikes in a given geographical area, but then you would designate that area so that the operator is not in it.
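
For illustration, a hypothetical geofence of the kind described above might look like this; the coordinates and the engagement box are made up, the point being that the operator's position simply never appears in the agent's action set, whatever the reward says.

```python
# Hypothetical geofence on the agent's strike actions: the agent only ever
# chooses from targets inside a designated engagement box.

ENGAGEMENT_BOX = {"lat": (36.0, 36.5), "lon": (42.0, 42.8)}  # made-up area
OPERATOR_POS = (35.1, 40.3)                                  # outside the box

def in_box(lat, lon, box=ENGAGEMENT_BOX):
    return box["lat"][0] <= lat <= box["lat"][1] and box["lon"][0] <= lon <= box["lon"][1]

def legal_strike_actions(candidate_targets):
    """The RL agent only ever sees and selects from this filtered list."""
    return [t for t in candidate_targets if in_box(*t)]

targets = [(36.2, 42.4), (36.4, 42.7), OPERATOR_POS]
print(legal_strike_actions(targets))  # the operator's position never appears as an option
```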

There is a lot of hype around AI risks nowadays, making it sound like we are at Skynet levels and AI is like people now. Meanwhile I am still dealing with its stupidities daily. It's still just matrices.
 

Kaan Azman 

Well-known member
DH Visual Specialist
Messages
424
Reactions
26 1,748
Age
22
Website
twitter.com
Nation of residence
Turkey
Nation of origin
Turkey
As godel said, there was a mistake in priorities. (I was thinking about this while taking a walk, isn't that funny?)

Machines don't have a soldier's sense of military duty; they work linearly when tasked with something. What needs to be done, and will be done, is to place an obstacle on the road to destroying allied assets, as a way of saying "Be a good boy and don't bite the master, no matter what."
 

GoatsMilk

Experienced member
Messages
3,450
Reactions
14 9,110
Nation of residence
United Kingdom
It sounds like a reinforcement learning type of exercise. Having done a fair bit of that myself for commercial applications, I find it a bit far-fetched. In reinforcement learning you have an agent which observes the environment, a set of actions it can take, outcomes, and rewards associated with those outcomes that you use to update the parameters of the model according to some algorithm. Only an extremely poorly specified reward function would give a simple positive reward for every SAM site; you would actually have something like negative infinity for killing the operator. Even more importantly, the set of actions the agent is allowed to take is fixed and would never include killing the operator. It might be configured to perform strikes in a given geographical area, but then you would designate that area so that the operator is not in it.

There is a lot of hype around AI risks nowadays, making it sound like we are at Skynet levels and AI is like people now. Meanwhile I am still dealing with its stupidities daily. It's still just matrices.

When I was a kid I used to play video games, and I always used to wonder why the enemy AI was so basic and dumb. From what I gather, AI in computer games is still really shit.

When I read these "Skynet" reports I can't help but think these stories are made up. It's a bit like when Türkiye started using and exporting armed drones: lots of fear-mongering stories came out in the media, along with talk of needing to create international control measures. I wonder if all this AI fear-mongering is about creating international control measures for its use, so that select nations can benefit from it while others are hindered from making their own AI systems.
 

Mustafa27

Committed member
Messages
215
Reactions
2 588
Nation of residence
United Kingdom
Nation of origin
Turkey
When I was a kid I used to play video games, and I always used to wonder why the enemy AI was so basic and dumb. From what I gather, AI in computer games is still really shit.

When I read these "Skynet" reports I can't help but think these stories are made up. It's a bit like when Türkiye started using and exporting armed drones: lots of fear-mongering stories came out in the media, along with talk of needing to create international control measures. I wonder if all this AI fear-mongering is about creating international control measures for its use, so that select nations can benefit from it while others are hindered from making their own AI systems.
AI in video games is as good as it needs to be; if it becomes too good, the game will not be enjoyable for the majority of people. You could create an AI that consistently beats a player, but what is the point if they quit because they never win?
A good explanation of the topic here: https://askagamedev.tumblr.com/post/76972636953
 

Nilgiri

Experienced member
Moderator
Aviation Specialist
Messages
9,767
Reactions
119 19,787
Nation of residence
Canada
Nation of origin
India
When I was a kid I used to play video games, and I always used to wonder why the enemy AI was so basic and dumb. From what I gather, AI in computer games is still really shit.

When I read these "Skynet" reports I can't help but think these stories are made up. It's a bit like when Türkiye started using and exporting armed drones: lots of fear-mongering stories came out in the media, along with talk of needing to create international control measures. I wonder if all this AI fear-mongering is about creating international control measures for its use, so that select nations can benefit from it while others are hindered from making their own AI systems.

My reply to Indonesians discussing the same matter:
Probably just sandbox testing to see "what if" there were no overrides in the programming (i.e. the natural state and consequences of the AI).

In reality, it's not very difficult at all to have robust overrides for humans, w.r.t. Asimov's rules and similar principles.

Since that news "leaked", a USAF spokesperson has denied that any such simulation happened.

But then a friend of mine was quick to quip that that is precisely what an AI would say :p
 

GoatsMilk

Experienced member
Messages
3,450
Reactions
14 9,110
Nation of residence
United Kingdom
AI in video games is as good as it needs to be; if it becomes too good, the game will not be enjoyable for the majority of people. You could create an AI that consistently beats a player, but what is the point if they quit because they never win?
A good explanation of the topic here: https://askagamedev.tumblr.com/post/76972636953

It's not even that; it's just really basic, dumb stuff. I remember playing Half-Life 1, coming across the grunts and thinking what a step up: these guys would move as a unit, take cover, flank, etc. Since then AI hasn't improved.

Before that, enemy AI in games was just an enemy appearing in front of you and firing, with basic movement patterns.

Why don't we get enemies talking to each other and employing real-time tactics? Or say you are playing an RPG: you murder someone's father, then five hours later the son tries to kill you, having spent the intervening time roaming villages and investigating who was behind his father's death.

AI is so basic at the computer-game level that I don't think AI will ever be able to think or operate like a human.

We can't even get NPCs in RPGs to behave in any organic way.

Even take a game like Civilization: the rival AI is so basic, it cannot do anything but a few simple patterns of behaviour based on triggers. To me, artificial intelligence is just about how many on/off triggers you can place on it. It operates in a linear sense, frame by frame.
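
For what it's worth, that "on/off triggers" description is essentially a finite state machine, which is how a lot of game enemy AI is still written. A toy, made-up version:

```python
# Toy finite-state-machine enemy of the kind described above: a handful of
# states and hard-coded triggers, evaluated once per frame.

def next_state(state, sees_player, low_health):
    if low_health:
        return "flee"
    if sees_player:
        return "attack"
    if state == "attack":      # lost sight of the player
        return "search"
    return "patrol"

state = "patrol"
for sees, hurt in [(False, False), (True, False), (False, False), (False, True)]:
    state = next_state(state, sees, hurt)
    print(state)               # patrol -> attack -> search -> flee
```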
 
