<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Spoiledlunch</title><link>https://05504994.spoiledlunch.pages.dev/</link><description>Nerdy Stuff. Tech Talk. Zero Freshness. Analysis and commentary on GRC, security, and AI.</description><generator>Hugo 0.160.1</generator><language>en-us</language><lastBuildDate>Mon, 20 Apr 2026 09:00:00 -0700</lastBuildDate><atom:link href="https://05504994.spoiledlunch.pages.dev/topics/ai/" rel="self" type="application/rss+xml"/><item><title>Why AI Governance Frameworks Are Security Theater</title><link>https://05504994.spoiledlunch.pages.dev/articles/2026-04-20-ai-governance-security-theater/</link><pubDate>Mon, 20 Apr 2026 09:00:00 -0700</pubDate><guid>https://05504994.spoiledlunch.pages.dev/articles/2026-04-20-ai-governance-security-theater/</guid><description>
<![CDATA[<p><strong>Article</strong> • April 20, 2026 • 4 min read</p><p><strong>Topics:</strong> AI, GRC</p><p>Most enterprise AI governance frameworks are elaborate exercises in checkbox compliance that miss the actual risks. They&rsquo;re designed to satisfy …</p><p><a href="https://05504994.spoiledlunch.pages.dev/articles/2026-04-20-ai-governance-security-theater/">Read full analysis →</a></p>
]]></description><author>@spoiledlunch</author><category>AI</category><category>GRC</category><category>governance</category><category>risk management</category><category>enterprise AI</category><category>compliance</category></item><item><title>OpenAI Opens Applications for a Safety Fellowship Focused on Alignment Research</title><link>https://05504994.spoiledlunch.pages.dev/news/2026-04-06-openai-opens-applications-for-a-safety-fellowship-focused-on-alignment-research/</link><pubDate>Mon, 06 Apr 2026 09:00:00 -0700</pubDate><guid>https://05504994.spoiledlunch.pages.dev/news/2026-04-06-openai-opens-applications-for-a-safety-fellowship-focused-on-alignment-research/</guid><description>
<![CDATA[<p><strong>News Brief</strong> • April 6, 2026</p><p><strong>Topics:</strong> AI</p><p>Summary: OpenAI announced the OpenAI Safety Fellowship on April 6, 2026, describing it as a pilot program for external researchers, engineers, and …</p><p><a href="https://05504994.spoiledlunch.pages.dev/news/2026-04-06-openai-opens-applications-for-a-safety-fellowship-focused-on-alignment-research/">Read brief →</a></p>
]]></description><author>@spoiledlunch</author><category>AI</category><category>OpenAI</category><category>AI safety</category><category>alignment</category><category>research</category></item><item><title>NIST Maps the Hard Parts of Monitoring Deployed AI Systems</title><link>https://05504994.spoiledlunch.pages.dev/news/2026-03-09-nist-maps-the-hard-parts-of-monitoring-deployed-ai-systems/</link><pubDate>Mon, 09 Mar 2026 09:00:00 -0400</pubDate><guid>https://05504994.spoiledlunch.pages.dev/news/2026-03-09-nist-maps-the-hard-parts-of-monitoring-deployed-ai-systems/</guid><description>
<![CDATA[<p><strong>News Brief</strong> • March 9, 2026</p><p><strong>Topics:</strong> AI</p><p>Summary: NIST published AI 800-4, &ldquo;Challenges to the Monitoring of Deployed AI Systems,&rdquo; on March 9, 2026. The report groups monitoring …</p><p><a href="https://05504994.spoiledlunch.pages.dev/news/2026-03-09-nist-maps-the-hard-parts-of-monitoring-deployed-ai-systems/">Read brief →</a></p>
]]></description><author>@spoiledlunch</author><category>AI</category><category>NIST</category><category>AI monitoring</category><category>AI governance</category><category>standards</category></item></channel></rss>