It is safe to say that ChatGPT from OpenAI has created a firestorm of conversation about the applications of artificial intelligence (AI) in knowledge work and scholarship, which includes cyber threat intelligence. Can ChatGPT really replace the thought and knowledge work done by so many people? That question remains open; I cannot answer it, and neither can anyone else yet with any certainty.
But its application to various topics, including cyber threat intelligence, is in question, and by extension so is its impact on those topics.
So, let me provide some perspective after 20 years of cyber threat intelligence AND at least 10 years of employing artificial intelligence and machine learning in this space.
Before we pass any judgements, we must be clear about what we are discussing. ChatGPT is a 'chat bot,' a simulated human chat client. Its role is not that of 'knowledge maker.' While what it can produce is impressive, there is no question that using this particular AI for larger 'knowledge making' applications would be a mistaken application of the technology. AI models are very specific to their function and expected output; when used outside those applications, they produce many well-known errors. You wouldn't use a self-driving application to write a historical essay.
So, my first point is: using ChatGPT in the production of threat intelligence would be a misapplication of this specific technology implementation. ChatGPT is a 'chat bot' designed and engineered to simulate human conversation and interaction. It was not built as an 'expert system,' which is the type of AI and ML it would need to be to 'speak authoritatively' on a subject. The output of ChatGPT sounds authoritative not because of any ability to analyze knowledge but because it uses short declarative sentences in active voice, as we normally do in conversation. Unfortunately, when we communicate knowledge, we also use short declarative sentences in active voice. Do not confuse sentence syntax with knowledge.
My second point is: machine learning and artificial intelligence implementations are ripe for adversarial interference. It is actually very easy, absent defense mechanisms, for humans to influence AI and ML models to produce incorrect output (both intentionally and unintentionally). Therefore, without verifiable defenses against adversarial environments (and verified inputs from which the model learns), one should not trust the output of AI and ML. See the research on bias in AI.
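To make that concrete, here is a minimal sketch of training-data poisoning. It is my own illustration, not anything drawn from ChatGPT itself: it assumes Python with scikit-learn, a synthetic "malicious vs. benign" dataset, and an arbitrary choice to relabel half of one class, and it simply shows how an adversary who can touch the data a model learns from can quietly degrade its output.

```python
# Illustrative sketch only (assumes Python with numpy and scikit-learn).
# A toy "malicious vs. benign" classifier is trained on clean labels, then
# retrained after an adversary relabels half of the malicious training
# samples as benign -- analogous to feeding misleading input to a system
# that learns from whatever its users give it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: the model learns from verified, trustworthy labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poisoning: relabel half of the "malicious" (class 1) training samples
# as "benign" (class 0) before the model (re)learns.
rng = np.random.default_rng(0)
malicious = np.flatnonzero(y_train == 1)
flipped = rng.choice(malicious, size=len(malicious) // 2, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

dirty = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", dirty.score(X_test, y_test))
```

Nothing about the algorithm changed between the two runs; only the inputs it learned from did, and the second model typically misclassifies noticeably more of the truly malicious test samples. That is the core of the trust problem.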
My third point is: for intelligence to be accurate, not only must its output be critically examined, but so must the analytic processes by which it is produced. ChatGPT is a 'black box,' meaning its full analytic process is designed to be hidden from the user. In fact, without all the facts, I would argue that ChatGPT likely uses significant amounts of 'fuzzy logic' and other probabilistic and predictive approaches, which inhibits the ability to question its logic. This means we cannot actually verify that ChatGPT used the appropriate structured techniques, strong hypothesis generation and testing, and bias reduction that we require to produce good intelligence.
My fourth and last point is: ChatGPT is not secure (nor are most 'community ML' programs). It learns from the input of its users, and all of your input to ChatGPT is also sent back to the company to build better models. Therefore, given the sensitivity of many cyber threat intelligence topics, I recommend against sending the ChatGPT engine any sensitive content or questions.
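For those who still want to experiment, one hedged sketch of a precaution: strip obviously sensitive indicators from any text before it would ever leave your environment. The regular expressions and the 'internal.example' domain below are illustrative assumptions on my part, not a complete sanitization policy.

```python
# Illustrative redaction sketch (assumed patterns, not a complete policy):
# replace IP addresses, email addresses, and internal hostnames with
# labeled placeholders before text is sent to any third-party service.
import re

PATTERNS = {
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "INTERNAL_HOST": re.compile(r"\b[\w-]+\.internal\.example\b"),
}

def redact(text: str) -> str:
    """Replace each match of the patterns above with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("Beaconing from 10.1.2.3 to c2.internal.example, analyst bob@corp.example"))
```

Even with redaction in place, the safer default for sensitive investigations is simply not to send the material at all.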
Therefore, it would be a mistake for intelligence analysts and producers to leverage ChatGPT in any way in their work until these questions can be addressed effectively.
This is not due to any fear of AI but simply because of the issues I've described above and their criticality to the production of quality intelligence. Machine learning and artificial intelligence can be leveraged in threat intelligence processes, but any implementation must address these issues first.