Former CEO of Google has been quietly working on a military startup for “suicide” attack drones.::The former Google CEO has been quietly working on a military startup called White Stork with plans to design “kamikaze” attack drones.
It’s a weapon like any other. Maybe you’re iffy on the name, but suicide drones are just another way to attack specific targets — like missiles, but far more precise. What is evil about a remotely controlled aircraft hitting an enemy position, as opposed to artillery, bombs, or gunfire hitting that same position?
I disagree with this. There is one glaring issue with AI-powered weapons compared to traditional ones: the skill ceiling required to inflict massive damage at scale.
Sure, you can probably level a whole town if you get your hands on some kind of advanced artillery. But that’s still a vastly more complex machine, one that requires extensive training just to operate. You need an army for that, and an army is made of people who will hopefully tell you “No, we’re not doing that” if your request is unreasonable. And if you somehow try to do it yourself, good luck getting more than a few shots off before someone notices and stops you.
If you have a fleet of hundreds or thousands of AI-powered suicide drones, where you just slap an explosive on each one, set a target, and the whole fleet launches, you only need one person with a computer. And once the fleet is sent, it’s vastly more difficult to stop. Hell, you probably don’t even need physical access to the drones if you can hack into the system that controls them.
And that’s the biggest issue with any AI-powered weapon, and the reason they shouldn’t exist.