When Data Becomes Instruction
AI & Technology

AI systems that analyze telemetry data — like those used in AIOps for IT operations — have an interesting vulnerability.

Traditional systems treat data as data. But when an LLM processes telemetry, that data becomes part of its context — effectively becoming instructions that influence its behavior.

This means a bad actor who can inject malicious data into a telemetry stream could potentially manipulate the AI's analysis or recommendations. What looks like ordinary log data could contain carefully crafted text that changes how the model interprets everything else.

It's a reminder that with AI systems, the line between "data" and "instructions" is blurry. Security models need to account for this.