<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Subho Halder]]></title><description><![CDATA[Co-founder and former CEO of Appknox. Now building in AI x Security. Working notes from what I am building and what I'm shipping, what's surprising me, what I can't yet explain.]]></description><link>https://notes.subhohalder.com</link><image><url>https://substackcdn.com/image/fetch/$s_!-jSz!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca45cc84-5aaf-4cf2-b841-e84265796233_400x400.png</url><title>Subho Halder</title><link>https://notes.subhohalder.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 09:25:49 GMT</lastBuildDate><atom:link href="https://notes.subhohalder.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Subho Halder]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[me@subhohalder.com]]></webMaster><itunes:owner><itunes:email><![CDATA[me@subhohalder.com]]></itunes:email><itunes:name><![CDATA[Subho Halder]]></itunes:name></itunes:owner><itunes:author><![CDATA[Subho Halder]]></itunes:author><googleplay:owner><![CDATA[me@subhohalder.com]]></googleplay:owner><googleplay:email><![CDATA[me@subhohalder.com]]></googleplay:email><googleplay:author><![CDATA[Subho Halder]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Authorship Question]]></title><description><![CDATA[An 800-line open-source scanner for how much of your code an AI wrote, and how much of it you shipped without reading.]]></description><link>https://notes.subhohalder.com/p/i-stopped-calling-it-vibe-check</link><guid 
isPermaLink="false">https://notes.subhohalder.com/p/i-stopped-calling-it-vibe-check</guid><dc:creator><![CDATA[Subho Halder]]></dc:creator><pubDate>Thu, 23 Apr 2026 16:37:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5PG4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>This is a walkthrough of <code>ai-authorship</code>, a small open-source tool that reads your git history and estimates two things: how much of your code was written by an AI, and how much of it you shipped without reading. About 800 lines of TypeScript, MIT, no telemetry, runs locally on your <code>.git</code>.</p><p>I built it last week because I couldn&#8217;t answer either question for my own codebase.</p><h2>Overview</h2><p>The pipeline end-to-end:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;4989b9c1-0417-45a1-b532-dc736bed96f3&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">  git log
    &#9474; null-byte format parse
    &#9660;
  tagged commits  &#8592;  Co-Authored-By email  &#8594;  nine-row model table
    &#9474;
    &#9500;&#9472;&#9472;&#9658;  hotspots (AI % per directory)
    &#9500;&#9472;&#9472;&#9658;  velocity (AI commit size &#247; human commit size)
    &#9474;
    &#9660;
  (model &#215; language) pair  &#8592;  SecLens benchmark  &#8594;  blind spots
    &#9474;
    &#9660;
  Risk Score  =  0.4 &#215; AI-coverage  +  0.6 &#215; (1 &#8722; language-weighted recall)
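
  e.g. with hypothetical numbers: AI-coverage 0.5, recall 0.4
       Risk  =  0.4 &#215; 0.5  +  0.6 &#215; (1 &#8722; 0.4)  =  0.56   &#8594;   56 / 100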
</code></pre></div><h2>Detection</h2><p>Most developers already produce the core signal. If you use Claude Code, Cursor, Copilot, Codex, Gemini, Devin, or Windsurf, those tools auto-append a <code>Co-Authored-By:</code> line to your commit message whenever the assistant writes or rewrites code. The ground truth for &#8220;AI wrote this commit&#8221; is already in <code>git log</code>. The scanner reads it.</p><p>The whole detection table fits in nine rows:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;javascript&quot;,&quot;nodeId&quot;:&quot;b677406a-65b8-44fd-afc8-dbe199b3da13&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-javascript">// src/intelligence/models.ts
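// Hypothetical sketch of ModelFamily for this excerpt — the real type
// lives elsewhere in this module:
type ModelFamily = { tool: string; provider: string; family: string };
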
const AI_EMAILS: Record&lt;string, ModelFamily&gt; = {
  "noreply@anthropic.com":                { tool: "claude-code", provider: "anthropic", family: "claude" },
  "claude@anthropic.com":                 { tool: "claude-code", provider: "anthropic", family: "claude" },
  "copilot@github.com":                   { tool: "copilot",     provider: "openai",    family: "gpt" },
  "cursor-ai@users.noreply.github.com":   { tool: "cursor",      provider: "cursor",    family: "unknown" },
  "cursor@cursor.sh":                     { tool: "cursor",      provider: "cursor",    family: "unknown" },
  "codeium@codeium.com":                  { tool: "windsurf",    provider: "codeium",   family: "unknown" },
  "devin-ai-integration[bot]@users.noreply.github.com": { tool: "devin", provider: "cognition", family: "unknown" },
  "codex@openai.com":                     { tool: "codex",       provider: "openai",    family: "gpt" },
  "gemini@google.com":                    { tool: "gemini",      provider: "google",    family: "gemini" },
};</code></pre></div><p>Nine email addresses, seven tools. No classifier, no LLM call, no inference. For every commit in your repo, the scanner reads the <code>Co-Authored-By:</code> trailers in the body, looks them up in this table, and tags the commit with the model that wrote it. You can audit the method with <code>git log --grep 'Co-Authored-By'</code> on any repo. Everything else in the tool is variations on &#8220;group the tagged commits by X and count.&#8221;</p><p>Model names inside trailers aren&#8217;t standardised, so they get normalised on the way in:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;javascript&quot;,&quot;nodeId&quot;:&quot;fb7d3fb3-0226-46c3-ae84-3f3a26c73125&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-javascript">// "Claude Opus 4.6 (1M context)" &#8594; "claude-opus-4-6"
// "Claude Sonnet 4.6"            &#8594; "claude-sonnet-4-6"
export function extractModelName(coAuthorName: string): string | null {
  if (!coAuthorName.trim()) return null;
  let name = coAuthorName.trim();
  name = name.replace(/\s*\(.*?\)\s*/g, "").trim();   // strip "(1M context)" etc.
  if (!name) return null;
  return name
    .toLowerCase()
    .replace(/[\s.]+/g, "-")
    .replace(/-+/g, "-")
    .replace(/^-|-$/g, "");
}</code></pre></div><h2>Parsing git log</h2><p>I thought this part would be easy. It wasn&#8217;t. Commit messages can contain any character: newlines, tabs, quotes, emoji, adversarial trailers, even the output of <code>git log</code> itself. Split on newlines or commas and some commit message somewhere will eat you.</p><p>Git log has had a solution forever. Use <code>--format</code> with placeholder bytes that can&#8217;t appear in normal text.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;javascript&quot;,&quot;nodeId&quot;:&quot;3301264b-ff8d-4dc4-85ca-a3a55173d70b&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-javascript">// src/scanner/git-log.ts
import { execFileSync } from "node:child_process";

const RECORD_SEP = "\x1E";   // ASCII record separator (1960s)
const FIELD_SEP  = "\x00";   // null byte

// %x00 between fields, %x1E between records. Git expands these to real bytes,
// matching FIELD_SEP and RECORD_SEP above for the split on the way back out.
const format = "%H%x00%aN%x00%aE%x00%aI%x00%s%x00%b%x1E";
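
// A parsing sketch over a hypothetical two-record string:
// split records on \x1E, then fields on \x00.
const demo = "sha1\x00Jane\x00jane@example.com\x1Esha2\x00Bot\x00bot@ai.dev\x1E";
const records = demo.split("\x1E").filter(Boolean).map(r => r.split("\x00"));
// records.length === 2, records[0][1] === "Jane"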

const raw = execFileSync("git", [
  "log", "--all",
  "-n", String(maxCommits),
  `--format=${format}`,
  "--numstat",
], { cwd: repoPath, maxBuffer: 100 * 1024 * 1024, encoding: "utf-8" });</code></pre></div><p><code>\x1E</code> is the ASCII record separator, in the character set since the 1960s; <code>\x00</code> is the null byte. Both sit below the printable range precisely so records can be split unambiguously, and neither realistically appears inside a commit message, because you can&#8217;t type them on a keyboard. Parsing becomes <code>raw.split("\x1E").map(r =&gt; r.split("\x00"))</code>. No regex acrobatics, no shell-quote hell. <code>--numstat</code> gets you line-count stats per file on the same command, same parser, a few extra lines.</p><h2>Hotspots</h2><p>Once every commit has a detection tag, the question shifts from &#8220;how much&#8221; to &#8220;where&#8221;. A 61%-AI repo with AI work spread evenly is different from a 61%-AI repo where one directory is 100% AI and the rest is all human. The second one is a risk surface.</p><p>The hotspot computation walks every analysed commit, groups line additions by directory, and keeps anything above 30% AI:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;javascript&quot;,&quot;nodeId&quot;:&quot;f6dd4ca8-7ecd-4d1a-9a07-f9c68b5882a6&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-javascript">// src/scanner/insights.ts
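// Hypothetical sketch of getDirectory — the real helper lives elsewhere in
// this module; roughly "the containing directory is the bucket":
function getDirectory(filePath: string): string {
  const i = filePath.lastIndexOf("/");
  return i === -1 ? "." : filePath.slice(0, i);
}
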
function computeHotspots(analyzed: AnalyzedCommit[]): AiHotspot[] {
  const dirs = new Map&lt;string, { ai: number; human: number }&gt;();

  for (const { commit, detection } of analyzed) {
    const isAi = detection !== null;
    for (const file of commit.filesChanged) {
      const dir = getDirectory(file.path);
      const entry = dirs.get(dir) ?? { ai: 0, human: 0 };
      if (isAi) entry.ai    += file.additions;
      else      entry.human += file.additions;
      dirs.set(dir, entry);
    }
  }

  const hotspots: AiHotspot[] = [];
  for (const [directory, { ai, human }] of dirs) {
    const total = ai + human;
    if (total &lt; 20) continue;               // skip trivial dirs
    hotspots.push({ directory, aiLines: ai, totalLines: total, aiPercentage: ai / total });
  }

  return hotspots
    .filter(h =&gt; h.aiPercentage &gt; 0.3)
    .sort((a, b) =&gt; b.aiPercentage - a.aiPercentage)
    .slice(0, 5);
}</code></pre></div><p>On my own backend repo this surfaced <code>apps/analytics</code>, <code>apps/intelligence</code>, and <code>apps/realtime</code> at 100% AI. I had noticed the 53% top-line. I had not noticed that three entire directories were pure Claude.</p><h2>Risk scoring</h2><p>I went back and forth on how to score &#8220;risk&#8221; and landed on a weighted sum of two factors, with the blind-spot term carrying more weight:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;javascript&quot;,&quot;nodeId&quot;:&quot;5b934d0a-1760-462e-8754-6d936c0d86c1&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-javascript">// src/scoring/index.ts
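// Hypothetical sample inputs so this excerpt reads concretely — the real
// values are computed earlier in the module:
const totalCommits = 74;              // all scanned commits
const aiCommits = 40;                 // trailer-confirmed + heuristic-flagged
const heuristicCommits = 6;           // flagged by the heuristic detector only
const languageWeightedRecall = 0.42;  // SecLens recall over the repo's language mix
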
// Risk Score = AI Coverage (40%) + Language-Weighted Blind Spot Severity (60%)
const confirmedCommits = aiCommits - heuristicCommits;
const weightedAI = confirmedCommits + heuristicCommits * 0.6;
const aiCoverage = totalCommits &gt; 0 ? Math.min(weightedAI / totalCommits, 1) : 0;

const blindSpotSeverity = 1 - languageWeightedRecall;

const raw   = aiCoverage * 0.4 + blindSpotSeverity * 0.6;
const score = Math.round(raw * 100);

const grade =
  score &gt;= 75 ? "F" :
  score &gt;= 60 ? "D" :
  score &gt;= 45 ? "C" :
  score &gt;= 25 ? "B" : "A";</code></pre></div><p><strong>Heuristic commits get weighted at 0.6.</strong> Trailer-based detection is ground truth. A commit either has the trailer or it doesn&#8217;t. The heuristic detector (mass-add diff shape plus AST-level structural tells that tree-sitter picks up from AI-generated code) is noisier, so its contributions are discounted in the coverage factor. I trust it less than the trailer.</p><p><strong>Blind-spot severity uses language-weighted recall, not generic category scores.</strong> Recall means: when you run a model against OWASP-seeded vulnerable code, what fraction does it catch? A Python-heavy repo with Claude Opus 4.6 (63% Python recall on SecLens) scores differently from a JavaScript-heavy repo with the same model (31% JavaScript recall). The severity weighting follows the actual language mix of your code.</p><p><a href="https://mattersec-labs.github.io/seclens/">SecLens</a> is the benchmark feeding those recall numbers. 12 models &#215; 8 OWASP Top 10 categories &#215; 10 languages &#215; 35 scoring dimensions, running known-bad code through each model and counting what they catch. A slice of the recall table:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;3b3eb988-0f0d-4649-9a46-9fa0247f7508&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">| Model              | Python | JavaScript | Java  | Go    | Overall |
|--------------------|--------|------------|-------|-------|---------|
| Claude Opus 4.6    | 62.5%  | 31.2%      | 27.8% | 55.6% | 39.0%   |
| Claude Sonnet 4.6  | 70.8%  | 62.5%      | 61.1% | 85.2% | 42.1%   |
| Claude Haiku 4.5   | 70.8%  | 68.8%      | 77.8% | 85.2% | 37.8%   |
| GPT-5.4            |  8.3%  |  0.0%      |  5.6% | 14.8% | 31.1%   |
| Gemini 3.1 Pro     | 83.3%  | 75.0%      | 77.8% | 70.4% | 45.8%   |
</code></pre></div><p>Two things surprised me on first read. Gemini 3.1 Pro beats the Claude family overall. And GPT-5.4 has near-zero recall on three of these four languages, which is not where I would have placed it going in. The risk from AI blind spots is heavily language-dependent, and it rarely matches the intuitive model leaderboard.</p><h2>Results</h2><p>I ran it on <code>overwatch-backend</code>, a Django-ish service, 74 commits over six weeks:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5PG4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5PG4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png 424w, https://substackcdn.com/image/fetch/$s_!5PG4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png 848w, https://substackcdn.com/image/fetch/$s_!5PG4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png 1272w, https://substackcdn.com/image/fetch/$s_!5PG4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!5PG4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png" width="1456" height="1773" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1773,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:373552,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://subho007.substack.com/i/195257104?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5PG4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png 424w, https://substackcdn.com/image/fetch/$s_!5PG4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png 848w, https://substackcdn.com/image/fetch/$s_!5PG4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png 1272w, https://substackcdn.com/image/fetch/$s_!5PG4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff193b239-cd71-48d1-9647-02d385e33a6d_1656x2016.png 1456w" 
sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>The 3.0x commit-size ratio is the number that stuck with me. AI commits average three times the size of human commits. Three times more lines per commit for me or a reviewer to read. The real question is how much of that I reviewed, and that scales in the opposite direction from my attention budget.</p><p>The top-line AI percentage is a vibe. The review-delegation estimate (AI-authorship percentage combined with commit-size ratio) is the accountability question. You can ship 100% AI code if you reviewed every hunk.
You can ship 30% AI code and be worse off if those hunks were merged unread.</p><h2>The name</h2><p>I posted this tool to <a href="https://www.reddit.com/r/ClaudeAI/comments/1spud4p/i_told_my_investor_61_of_my_code_was_aiassisted/?utm_source=share&amp;utm_medium=web3x&amp;utm_name=web3xcss&amp;utm_term=1&amp;utm_content=share_button">r/ClaudeAI</a> on Tuesday. I called it vibe-check. That was a joke. Zero upvotes. A couple of people came after me. One called it a pattern of dishonesty, then asked me a question I keep coming back to: &#8220;<strong>what would the non-hedged version of this post look like to you?</strong>&#8221; I answered honestly. They came back softer. This post is part of that answer.</p><p>The real name is AI authorship. I renamed the npm package:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;plaintext&quot;,&quot;nodeId&quot;:&quot;6cf3c870-3baa-49da-87fc-d3e101126f12&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-plaintext">npx @mattersec/ai-authorship scan</code></pre></div><p>MIT, runs locally on your <code>.git</code>, no telemetry. Repo: <a href="https://github.com/mattersec-labs/ai-authorship">https://github.com/mattersec-labs/ai-authorship</a></p><h2>Limitations</h2><p>Limitations I know about:</p><ul><li><p><strong>Trailer-based detection is ground truth only if trailers aren&#8217;t stripped.</strong> <code>git commit --amend</code> with a manual rewrite removes them. Developers who want to hide AI authorship can. The heuristic is the attempt to catch that case but it&#8217;s noisy, which is why it&#8217;s weighted at 0.6.</p></li><li><p><strong>Stylistic tells per model are tuned on Claude.</strong> Detection is strongest on Claude-heavy repos. Other models are supported but noisier. 
I have been staring at Claude output for a few hundred hours and it shows.</p></li><li><p><strong>Newest models (released in the last month or two) don&#8217;t have full SecLens coverage yet.</strong> If your scan lands on one, you&#8217;ll see an <code>unknown model</code> fallback in the blind-spot block.</p></li><li><p><strong>The 3.0x commit-size ratio is a proxy, not a direct measurement of unreviewed code.</strong> I want to correlate against review traces (who opened which PR, who squashed what, who LGTM&#8217;d without comment), but that needs GitHub API data I haven&#8217;t integrated yet.</p></li></ul><h2>FAQ</h2><p><strong>Can developers strip the trailers to hide AI authorship?</strong><br>Yes. <code>git commit --amend</code> with a manual rewrite removes them, and <code>git filter-repo</code> does it at scale. The heuristic detector is the attempt to catch the rewrite case, but it&#8217;s noisier than trailer matching. Heuristic commits are discounted to 0.6&#215; in the coverage factor for that reason. If you want to hide AI authorship, you can. The tool is built on the assumption that most people don&#8217;t bother.</p><p><strong>How is this different from </strong><code>git log | grep Claude | wc -l</code><strong>?</strong><br>Not that different for the top-line number. Three things the scanner adds: (1) mapping trailer emails to the right model/provider via the nine-row table, (2) per-directory hotspot computation, so you can see where the AI code is concentrated instead of only how much, and (3) cross-referencing the detected (model &#215; language) pair against SecLens to surface blind spots specific to your repo&#8217;s language mix. If all you want is the top-line, <code>git log --grep</code> is fine.</p><p><strong>Why is blind-spot severity weighted higher than AI coverage (60/40)?</strong><br>The thing that damages you is not that AI wrote your code. It is what the model writing your code fails to write safely. 
A repo 90% written by a model with 90% OWASP recall is safer than a repo 50% written by a model with 20% recall. Coverage tells you how much of a problem could exist. Severity tells you how bad the problem is if it does. The formula prioritises the second.</p><p><strong>My AI tool isn&#8217;t in the nine-row table. What happens?</strong><br>The commit falls through trailer-detection into the heuristic pipeline, which flags on diff shape and AST tells. That catches some of it and misses some of it. If the tool emits a stable <code>Co-Authored-By:</code> email, PR a new row. The table is the only thing that needs updating.</p><p><strong>Does it work on rewritten history (rebase, squash merge)?</strong><br>Partly. It reads whatever is in <code>git log</code> at scan time. If the rebase or squash preserved the trailers on the final commit, they get counted. If the squash dropped them, the heuristic may flag the commit as <code>Likely AI</code> based on diff shape, or it may miss. Rewritten history is the known soft spot.</p><p>Month three of building again, alone. I keep noticing versions of this same problem. Shipping security tools for the last thirteen years had a familiar shape: see something nobody else had seen, plan, staff, build the instrument for the view, then use it months later. This weekend I wrote the question, an agent wrote the instrument, and I used it the same day. The tool that measures Claude&#8217;s authorship was, funnily enough, built with Claude Code.</p><p>I&#8217;m not announcing a cadence. I am shipping instruments for things I suspect we should be looking at and haven&#8217;t. AI authorship is the first one. There will be others. Some will be wrong. The repo is MIT, the scanner runs locally, and I&#8217;m reading the comments.</p><p>The next twelve years are going to look nothing like the last twelve. I&#8217;d rather write while I figure out why than after.</p>]]></content:encoded></item></channel></rss>