<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>NexArt Blog</title>
    <link>https://nexart.io/blog</link>
    <description>Insights on verifiable AI execution, Certified Execution Records, and cryptographic execution integrity from the NexArt team.</description>
    <language>en-us</language>
    <lastBuildDate>Tue, 24 Mar 2026 00:00:00 GMT</lastBuildDate>
    <atom:link href="https://nexart.io/feed.xml" rel="self" type="application/rss+xml" />
    <image>
      <url>https://nexart.io/favicon.png</url>
      <title>NexArt Blog</title>
      <link>https://nexart.io/blog</link>
    </image>

    <item>
      <title>AI Auditability and the EU AI Act: Why Execution Evidence Matters</title>
      <link>https://nexart.io/blog/ai-auditability-eu-ai-act</link>
      <guid isPermaLink="true">https://nexart.io/blog/ai-auditability-eu-ai-act</guid>
      <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
      <dc:creator>NexArt Team</dc:creator>
      <description>The EU AI Act requires auditable AI systems, but most are built for observability, not auditability. This article explains why execution evidence — not logs — is the foundation of real AI auditability.</description>
      <enclosure url="https://nexart.io/og/og-blog-ai-auditability-eu-ai-act.jpg" type="image/jpeg" length="0" />
    </item>

    <item>
      <title>Verifiable AI Execution vs zkML: What NexArt Proves, What It Doesn't, and How Privacy Works in Practice</title>
      <link>https://nexart.io/blog/verifiable-ai-execution-vs-zkml</link>
      <guid isPermaLink="true">https://nexart.io/blog/verifiable-ai-execution-vs-zkml</guid>
      <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
      <dc:creator>NexArt Team</dc:creator>
      <description>NexArt and zkML both fall under 'verifiable AI,' but they solve different trust problems. This article clarifies the distinction between execution evidence and computation proofs, and outlines NexArt's practical privacy model.</description>
      <enclosure url="https://nexart.io/og/og-blog-verifiable-ai-execution-vs-zkml.jpg" type="image/jpeg" length="0" />
    </item>

    <item>
      <title>AI Audit Trails vs Verifiable Execution</title>
      <link>https://nexart.io/blog/ai-audit-trails-vs-verifiable-execution</link>
      <guid isPermaLink="true">https://nexart.io/blog/ai-audit-trails-vs-verifiable-execution</guid>
      <pubDate>Mon, 23 Mar 2026 00:00:00 GMT</pubDate>
      <dc:creator>NexArt Team</dc:creator>
      <description>Audit trails provide visibility, but visibility is not proof. This article explores the gap between tracking behavior and proving execution, and why verifiable execution is emerging as a new standard for AI auditability.</description>
      <enclosure url="https://nexart.io/og/og-blog-audit-trails-vs-verifiable-execution.jpg" type="image/jpeg" length="0" />
    </item>

    <item>
      <title>Execution Drift in AI Systems (and Why It Matters More Than You Think)</title>
      <link>https://nexart.io/blog/execution-drift-in-ai-systems</link>
      <guid isPermaLink="true">https://nexart.io/blog/execution-drift-in-ai-systems</guid>
      <pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate>
      <dc:creator>NexArt Team</dc:creator>
      <description>Two executions with the same inputs can produce different results. Execution drift is not a bug; it is a property of modern AI systems. The challenge is making it visible, explainable, and verifiable.</description>
      <enclosure url="https://nexart.io/og/og-blog-execution-drift.jpg" type="image/jpeg" length="0" />
    </item>

    <item>
      <title>What Is a Certified Execution Record (CER)?</title>
      <link>https://nexart.io/blog/what-is-a-certified-execution-record</link>
      <guid isPermaLink="true">https://nexart.io/blog/what-is-a-certified-execution-record</guid>
      <pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate>
      <dc:creator>NexArt Team</dc:creator>
      <description>Most AI systems produce logs, but logs are not proof. A Certified Execution Record is a cryptographically verifiable artifact that turns execution into independently validatable evidence.</description>
      <enclosure url="https://nexart.io/og/og-blog-what-is-cer.jpg" type="image/jpeg" length="0" />
    </item>

    <item>
      <title>How to Verify AI Execution (and Why Logs Are Not Enough)</title>
      <link>https://nexart.io/blog/how-to-verify-ai-execution</link>
      <guid isPermaLink="true">https://nexart.io/blog/how-to-verify-ai-execution</guid>
      <pubDate>Wed, 18 Mar 2026 00:00:00 GMT</pubDate>
      <dc:creator>NexArt Team</dc:creator>
      <description>Most AI systems rely on logs for accountability, but logs are not proof. This article explores why verification requires a different model, and how Certified Execution Records turn execution into independently verifiable evidence.</description>
      <enclosure url="https://nexart.io/og/og-blog-how-to-verify.jpg" type="image/jpeg" length="0" />
    </item>

    <item>
      <title>The Missing Layer in AI Systems: Verifiable Execution</title>
      <link>https://nexart.io/blog/verifiable-ai-execution</link>
      <guid isPermaLink="true">https://nexart.io/blog/verifiable-ai-execution</guid>
      <pubDate>Sat, 14 Mar 2026 00:00:00 GMT</pubDate>
      <dc:creator>NexArt Team</dc:creator>
      <description>Most AI systems produce logs that are mutable, incomplete, or opaque. Certified Execution Records replace them with tamper-evident, cryptographically sealed proof of what actually happened.</description>
      <enclosure url="https://nexart.io/og/og-blog-verifiable-ai-execution.jpg" type="image/jpeg" length="0" />
    </item>
  </channel>
</rss>
