The Biden administration is warning of “evolving intelligence that the Russian Government is exploring options for potential cyberattacks” against the United States. Promising to “use every tool to deter, disrupt, and if necessary, respond to cyberattacks against critical infrastructure,” the president also warned, “But the Federal Government can’t defend against this threat alone,” and called on private industry to take aggressive cybersecurity measures:
If you have not already done so, I urge our private sector partners to harden your cyber defenses immediately by implementing the best practices we have developed together over the last year. You have the power, the capacity, and the responsibility to strengthen the cybersecurity and resilience of the critical services and technologies on which Americans rely. We need everyone to do their part to meet one of the defining threats of our time—your vigilance and urgency today can prevent or mitigate attacks tomorrow.
No further information about this “evolving intelligence” is available publicly, but it’s reasonable to assume we’re seeing preparatory actions by Russian government hackers, Russian government-backed non-state hackers, or both. These preparations could include attempted hacks and the prepositioning of tools and code on command-and-control servers, with the warning itself likely informed by all-source intelligence on Moscow’s intentions and plans.
Last month, the Office of the Director of National Intelligence made this assessment:
We assess that Russia will remain a top cyber threat as it refines and employs its espionage, influence, and attack capabilities. We assess that Russia views cyber disruptions as a foreign policy lever to shape other countries’ decisions, as well as a deterrence and military tool … Russia is particularly focused on improving its ability to target critical infrastructure, including underwater cables and industrial control systems, in the United States as well as in allied and partner countries, because compromising such infrastructure improves and demonstrates its ability to damage infrastructure during a crisis.
I’m inclined to believe ransomware and similar attacks from non-government black hats against critical infrastructure and other targets are the most likely threat in the near-term. Therefore, for several years and especially since Putin’s invasion of Ukraine, I’ve called on the United States to aggressively target and dismantle these groups so that they cannot be used by Moscow against us. If, however, there is credible intelligence that Russia’s military or intelligence services are preparing cyberattacks against our nation, this would be, and should be responded to as, a significant escalation that cannot go unpunished.
There are a lot of unknowns right now so it’s best to let this story develop further. But one thing is certain: While we have significant cyber capabilities, we also have significant vulnerabilities. If large-scale cyberattacks come, we’re all going to feel it.
As Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency (CISA), says, “Shields Up!”
A Tale of Three Drones
I began my career in the U.S. intelligence community shortly after 9/11. This was the very beginning of what was then called the Global War on Terrorism and when our nation began using armed unmanned aerial vehicles (UAVs or “drones”) to strike terrorist targets. While the public was generally aware of these operations, their details were so secret that, even in classified intelligence reports, we could only refer to a drone strike as an “explosion.”
In these early days, terrorists could be hit only if we had positive identification (PID) of the intended target, typically from at least two sources. Toward the end of the Bush administration, we adapted our rules of engagement to include “signature strikes”: attacks predicated on more general “signatures” of hostile intent and capabilities, things like military-age males, the presence of weapons, convoys matching known enemy vehicles, suspicious activities, etc. We made this change in part to accommodate general military targets in Iraq and because, having eliminated many of the known terrorist leaders on our bad guy lists, we needed the flexibility to take out unidentified enemies and the infrastructure supporting them.
There are serious debates about the efficacy, ethics, and strategic implications of drone warfare but I’m not going to wade into those today. The reality is that these tools are here to stay, and that this type of warfare will only proliferate and feature more heavily in modern conflict. The crisis in Ukraine supplies an excellent case study of this new reality.
In many ways the Turkish-made Bayraktar TB2 drone is the hero of the Ukrainian resistance. Compared to American UAVs, the TB2 is about as sophisticated as a crop duster; but this slow, lumbering aircraft is inflicting some of the most significant losses endured by the Russian military over the last several weeks. The TB2, which has a 39-foot wingspan, is built in Turkey but incorporates American avionics and sensors. It can be used for reconnaissance or armed with missiles, and the whole suite can be bought for the relatively modest price of $2 million apiece.
Prior to Putin’s invasion, most observers thought the Russian army would make short work of these drones, using missiles, air defense systems, and electronic jamming devices to destroy the UAVs long before they were able to reach their targets. This, like so many other pre-war expectations, has not materialized. For reasons we still don’t fully understand, the Russian military has not established air dominance over Ukraine and is not effectively integrating its air and ground campaigns, and the mighty TB2 is extracting a high price for these failures.
But the Russians are also using drones in Ukraine and one of these systems could be breaking new ground.
Recent photographs show what appears to be a KUB-BLA, a Russian-made “suicide drone” whose maker claims it uses artificial intelligence (AI) to find its targets before blowing them up in kamikaze-style attacks.
In military jargon, these kinds of drones are called “loitering munitions” and the United States and other nations are also developing these weapons. The Russian variant has a wingspan of just under four feet, is launched from a portable catapult, carries a three-kilogram explosive, and can fly for 30 minutes. Its maker, ZALA Aero, claims the drone’s AI can autonomously identify and engage targets without human aid if desired—and that’s why this drone is noteworthy.
Killer robots have long been both a dream and a nightmare for many, and the idea rightly provokes hard questions. If the Russians are using a drone that autonomously finds and engages targets in Ukraine, it is among the first deployments of such a capability in a real shooting war. While the drone itself is not especially novel, the possible autonomous real-life targeting would be.
But a new $800 million aid package from the United States means Ukraine will soon be fielding its own “loitering munitions,” and these will come in two variants.
American-made Switchblade 300 and 600 drones are on their way to the fight and these bad boys pack a punch. The Switchblade 300 weighs only 5.5 pounds, can be carried in a backpack, flies for 15 minutes, can reach a speed of 100 mph, and can hit targets six miles away. The larger Switchblade 600 weighs 50 pounds, can fly for 40 minutes, has a top speed of 115 mph, and has a range of 25 miles. The 300 is mostly used for killing soldiers while the 600 can be used to destroy tanks and other equipment.
Like the Russian KUB-BLAs, the Switchblades can autonomously identify and engage enemies; however, current U.S. doctrine requires an operator to manually approve a target before it is attacked. The Switchblades have previously been used in Afghanistan against the Taliban.
So, what do these developments mean in the larger context? Here are three thoughts.
First, a growing part of the battlefield will be unmanned. Remotely operated and autonomous systems are not just flying in the skies, they’re also patrolling the ground, navigating the seas, and even doing spooky things in space. There are also programs focused on converting manned systems, like F-16 fighter planes, into unmanned systems so that even more firepower can be brought to bear from afar. In many ways, we are still at the very beginning of a revolution in drone warfare and these battle bots are only getting more autonomous, more capable, and more lethal.
Which brings me to my second point: Lethal autonomy is inevitable.
As Russian KUB-BLAs and American Switchblades illustrate, we already have the tech to break things and kill people completely autonomously. But, for now, the United States is choosing to keep a man in the loop to mitigate risk and because we still operate effectively under this constraint. But the day is coming when the scale and speed of threats will almost certainly demand that we meat-bags take our fingers off the trigger and let the machines do the work. While this can be a little scary, let me give you two examples to illustrate why I’m generally optimistic.
Consider hypersonic missiles. These fast-flying, maneuverable weapons are all the rage right now, and both Russia and China have even made them capable of delivering nuclear payloads. Now imagine if they had stealth or other features that dramatically reduced our already minimal ability to identify and destroy them. What if we could improve our chances of knocking these missiles out of the sky by 30 percent by fully automating our missile defense systems—essentially allowing the machines to ask for forgiveness rather than permission when it comes to deploying anti-missile measures? Would you do it?
Or think about humanoid systems like the Atlas robot. What if we could send one of these robots into a hostage situation instead of a team of soldiers? Its multiple 4K cameras, LIDAR, and tailor-made AI could one day identify civilians and engage enemies faster, with greater precision, and with fewer mistakes than any human could dream of matching. But in moments like these, where fractions of seconds matter, we can’t be waiting for a grunt with a remote control to push the button. More to the point, if this capability were really deployed to the battlefield, I’m sure many of our military personnel would be more than happy to let Number “Johnny” Five be the first one through the door.
As helpful as this could be, there are real challenges.
Third, easy strikes could lead to easy wars. One of the unintended consequences of American drone warfare has been that—as it has become easier and less risky to kill people overseas—we’ve become more willing to conduct these operations. Before all of this, if you wanted to reach out and touch a terrorist in the hinterlands of Pakistan’s tribal areas, you could launch cruise missiles (kind of like killing a fly with a sledgehammer), you could drop bombs from a plane with a pilot who could be killed or captured, or you could send in special operations forces who might also be killed. All these options were risky and highly consequential. But drones changed things.
With drones like the Predator and Reaper, the United States could now find, fix, and finish terrorist targets with extreme precision and little to no risk to personnel. In fact, to further minimize physical and political risk, the Obama administration ramped up these types of targeted killings in Pakistan and Afghanistan as a way of keeping terrorists’ heads down while also reducing the American footprint in these areas.
But conflict is supposed to be hard and reducing its frictions only makes us more likely to slip into it.
When the time for fighting comes, I want the United States to be able to quickly mobilize and deliver overwhelming force on the enemy, while keeping our warfighters as safe as possible. But I also want war to be difficult and rare and we must admit that, as this technology continues to mature, it’s only getting easier to be detached from the fighting and its consequences.
Even so, drones are playing a huge role in Ukraine’s resistance and that’s a good thing.
A Deep Disappointment
In the summer of 2018, I hosted Sen. Marco Rubio to discuss the rising challenge of so-called “deep fake” media—realistic audio, video, and other media that is partially or completely fabricated, often with the help of artificial intelligence. It was a fascinating discussion that included Chris Bregler, now director and principal scientist at Google AI, and law professors Danielle Citron and Bobby Chesney. Citron and Chesney had recently completed an excellent report on deep fakes, and Sen. Rubio was raising the alarm about how synthetic media could have national security implications.
There have been several deep fakes over the years, but recently the Russian government appears to have released a laughably bad fake video of Ukrainian President Volodymyr Zelenskyy calling on his military to lay down their arms. It was easily dismissed. But the next one may not be.
In this case, the United States has been aggressively “pre-bunking” Putin’s lies by proactively releasing intelligence and other information about the autocrat’s intentions. This has created an environment where everyone takes everything coming out of Russia and Ukraine with a grain of salt. Trying to push a poorly made deep fake into this highly skeptical environment was not well-considered.
But had Russia deployed a high-quality deep fake of Zelenskyy in the early stages of its invasion, it very well could have caused confusion and even supplied a short-term tactical edge by slowing the Ukrainian military’s response. That it did not is yet another baffling failure of Putin’s planning.
But the threat isn’t gone. Deep fakes are getting easier to make and more believable all the time. Whether it’s Putin or some other authoritarian leader, synthetic media’s days of geopolitical influence are just beginning.
That’s it for this edition of The Current. Be sure to comment on this post and to share this newsletter with your family, friends, and followers. You can also follow me on Twitter (@KlonKitchen). Thanks for taking the time and I’ll see you next week!