thehgtech 3 hours ago
While the "burnout math" makes sense on paper—71% of analysts are burned out and alert fatigue is a massive issue—we are essentially handing over the "defender must be right 100% of the time" standard to probabilistic models.
If an LLM hallucinates and auto-closes a critical alert as a false positive, the human fallback mechanism (the L2/L3 analyst) never even sees the log. I wrote this while trying to figure out what happens to the entry-level infosec pipeline if the L1 training ground is completely abstracted away by AI.
Would love to hear how folks here running enterprise SOCs are handling (or avoiding) automated triage right now.
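For what it's worth, the pattern I keep coming back to is a hard guardrail in front of the model: the LLM can recommend, but certain classes of alert are structurally barred from auto-close. A minimal sketch (all names, thresholds, and the schema here are hypothetical, not any vendor's API):

```python
from dataclasses import dataclass

# Hypothetical alert record; field names are assumptions, not a real SIEM schema.
@dataclass
class Alert:
    severity: str            # "low" | "medium" | "high" | "critical"
    model_verdict: str       # "false_positive" | "suspicious" | "malicious"
    model_confidence: float  # 0.0-1.0 score from the triage model

def triage(alert: Alert, close_threshold: float = 0.95) -> str:
    # Hard rule: critical alerts are never auto-closed, regardless of model output.
    if alert.severity == "critical":
        return "escalate_to_human"
    # Auto-close only when the model is both confident and says false positive.
    if alert.model_verdict == "false_positive" and alert.model_confidence >= close_threshold:
        return "auto_close"
    # Everything else keeps a human in the loop.
    return "queue_for_analyst"

print(triage(Alert("critical", "false_positive", 0.99)))  # escalate_to_human
print(triage(Alert("low", "false_positive", 0.99)))       # auto_close
```

The point isn't the threshold value, it's that the "never silently close a critical" rule lives in deterministic code the model can't hallucinate its way around. You can also sample a percentage of auto-closed alerts back into the L1 queue, which partly preserves the training ground.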