At Wednesday’s breakout session on cognitive EW, a panel answered questions from moderator Dr. William “Dollar” Young, Jr., retired colonel and former commander of the 350th Spectrum Warfare Wing, in front of a standing-room-only crowd. The panelists were Michael Schmid, expert in costly risk prevention and AI, MIT; David Zurn, head of the test engineering division at GTRI’s Electronic Systems Laboratory (ELSYS); Dr. Karen Haigh, expert in cognitive EW and in AI and ML for physical systems; and Dylan E. Duplechain, GS-15, chief engineer, 350th Spectrum Warfare Wing, Eglin AFB. The questions focused mainly on how cognitive EW can be improved. Below are highlights of the panelists’ answers:
Dollar: How soon do you think we’ll see widespread adoption of cognitive EW, and what do you think is the most important barrier that must be addressed immediately by government, industry and academia?
Dr. Karen Haigh: The biggest barrier is the thinking — the fear, the risk that we’re doing the wrong thing. You can see that propagating from policy all the way down to the individuals who are developing the technology. We have to just be willing to take the risk. It might not do the right thing, but even if it’s only infinitesimally better than the previous version, it’s infinitely better than nothing. Epsilon greater than zero is infinitely better than zero, and we need to be willing to take risks, to take those steps, because China is seriously outpacing us and we’re behind.
Dollar: Bloomberg published an article by Max Chafkin earlier this month with the headline “Even After $100 Billion, Self-Driving Cars are Going Nowhere” that challenged the assumption that widespread adoption of autonomous vehicles is right around the corner. If we assume that warfighting is more complex than driving a car, how do we avoid cognitive EW following the example and having the problems that autonomous vehicles are seeing?
David Zurn: I don’t like the headline. Self-driving cars are not going nowhere. Elon Musk tends to make pretty bold predictions, that’s what he does, but the fact is there has been incremental progress in self-driving cars. You see even in standard cars a lot of automated features like lane-change systems that are leading up to self-driving. … Just because we’re not at the top doesn’t mean we’re not making any progress.
Dollar: How do we establish trust among not just operators, but among the senior leaders and decision makers and stakeholders, and then how are we going to measure risk?
Dr. Karen Haigh: In a lot of cases I think it’s baby-step thinking — getting people used to the ideas as we go along. Think about autonomous vehicles: once upon a time people would never trust cruise control; now we trust cruise control, and we get (upset) if it doesn’t have the automatic radar to slow you down from the vehicle in front.