Every time you see an AI-verified badge on a parking garage, tunnel, or bridge, it means Claude Vision read the posted clearance sign from Google Street View imagery. Here's exactly what that pipeline does — and where we stop it from pretending to know things it doesn't.
For every parking garage in our database without a posted clearance, our batch pipeline pulls four Google Street View images at the entrance, asks Claude Vision what clearance sign is posted, and records the reading only if the model's self-reported confidence is high.
We start with the latitude/longitude already in our database (from OpenStreetMap,
the FHWA National Bridge Inventory, or hand-curation). Before spending a cent, we ping
Google's free Street View metadata endpoint — /streetview/metadata — to check
whether imagery even exists at that point. If not, we skip and mark it "no imagery."
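That pre-check can be sketched as follows. This is a minimal illustration, not the actual script: the `metadata_url` and `has_imagery` helper names are ours, and the injectable `fetch` parameter is just there to keep the sketch testable without network access. The endpoint and its `status` field are the real Street View metadata API.

```python
import json
import urllib.parse
import urllib.request

METADATA_URL = "https://maps.googleapis.com/maps/api/streetview/metadata"

def metadata_url(lat, lng, api_key):
    """Build the free metadata-endpoint URL for a point."""
    query = urllib.parse.urlencode({"location": f"{lat},{lng}", "key": api_key})
    return f"{METADATA_URL}?{query}"

def has_imagery(lat, lng, api_key, fetch=None):
    """True if Street View imagery exists near this point.

    The metadata endpoint costs nothing to call; "status" is "OK" when
    a panorama exists and "ZERO_RESULTS" when it doesn't. `fetch` is
    injectable for testing and defaults to a real HTTP GET.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.loads(resp.read())
    return fetch(metadata_url(lat, lng, api_key)).get("status") == "OK"
```

Entries where this returns False get marked "no imagery" and never reach the paid image fetch.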
We fetch four Google Street View Static images at headings 0° / 90° / 180° / 270° (N/E/S/W) with a slight upward pitch to catch ceiling-mounted signage. Four angles, not one, because the clearance sign is usually visible from at most one or two directions — we want to give the model the best chance.
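Building the four Static API requests looks roughly like this. The `heading` and `pitch` parameters are real Street View Static API parameters; the specific pitch value (10°) and image size here are illustrative stand-ins, since the post only says "a slight upward pitch."

```python
import urllib.parse

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"

def entrance_image_urls(lat, lng, api_key, pitch=10, size="640x640"):
    """Four Static API URLs at N/E/S/W headings.

    pitch=10 tilts the camera slightly upward so ceiling-mounted
    clearance signs stay in frame (the exact value is a placeholder).
    """
    urls = []
    for heading in (0, 90, 180, 270):
        query = urllib.parse.urlencode({
            "location": f"{lat},{lng}",
            "heading": heading,
            "pitch": pitch,
            "size": size,
            "key": api_key,
        })
        urls.append(f"{STREETVIEW_URL}?{query}")
    return urls
```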
All four images go to Claude Vision in a single API call with a tightly-scoped prompt. The model returns JSON with:
- found_sign — whether any clearance sign is visible
- height_in — the posted height, in inches (null if not readable)
- confidence — low, medium, or high
- raw_text — exactly what the sign said
- notes — occlusion, glare, or ambiguity flags

Key part of the prompt:
This matters because an LLM that's been told "tell me what you see" will cheerfully
make something up. Our prompt instructs it to answer "no sign" when the sign isn't there,
and to flag partial views as low confidence.
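Assembling that single API call looks roughly like the sketch below. The image content-block shape (`type: "image"`, base64 source) matches the Anthropic Messages API, but the prompt wording and the `build_vision_request` helper are our illustration, not the production prompt.

```python
import base64

PROMPT = (
    "You are reading posted vehicle clearance signs. Examine all four "
    "images. If no clearance sign is visible, say 'no sign' -- do not "
    "guess. If a sign is only partially visible, report low confidence. "
    "Respond with JSON keys: found_sign, height_in (inches, null if "
    "unreadable), confidence (low/medium/high), raw_text, notes."
)

def build_vision_request(jpeg_bytes_list, model="claude-haiku-4-5"):
    """One Messages API request carrying all four images plus the
    tightly-scoped prompt as the final content block."""
    content = [
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/jpeg",
                "data": base64.b64encode(img).decode("ascii"),
            },
        }
        for img in jpeg_bytes_list
    ]
    content.append({"type": "text", "text": PROMPT})
    return {
        "model": model,
        "max_tokens": 500,
        "messages": [{"role": "user", "content": content}],
    }
```

Sending all four angles in one call lets the model cross-check: a sign legible at heading 90° can confirm a partial view at heading 0°.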
By default we only commit a reading when confidence == "high". Anything
medium or low is logged for a human to spot-check later. The script has a
--confidence medium flag we can use on a case-by-case basis, but the
production pass is high-only.
Before writing, the reading has to clear two hard sanity bounds on the height:
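The post doesn't publish the actual cutoffs, so the numbers below are placeholders; the shape of the check is what matters: anything implausibly low or high is rejected outright, whatever the model's confidence.

```python
# Placeholder bounds -- the real pipeline's cutoffs aren't published.
MIN_PLAUSIBLE_IN = 5 * 12   # below ~5 ft, no vehicle fits: likely a misread
MAX_PLAUSIBLE_IN = 20 * 12  # above ~20 ft, almost certainly a misread

def passes_sanity_bounds(height_in):
    """Reject readings outside the plausible clearance range."""
    return MIN_PLAUSIBLE_IN <= height_in <= MAX_PLAUSIBLE_IN
```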
Every AI-verified entry keeps:
- source: "AI-verified (Street View + Claude Vision)"
- verified_on: "2026-04-18" — ISO date so the app can stale-check later

The AI-verified badge you see in the app is driven by that source
field. No behind-the-scenes relabeling.
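A committed entry therefore carries both fields. A sketch of the record shape, with a made-up `make_verified_entry` helper and a hypothetical `garage_id` field for illustration:

```python
from datetime import date

def make_verified_entry(garage_id, height_in, today=None):
    """The source string drives the in-app badge; verified_on drives
    staleness checks. garage_id/height_in fields are illustrative."""
    today = today or date.today()
    return {
        "garage_id": garage_id,
        "height_in": height_in,
        "source": "AI-verified (Street View + Claude Vision)",
        "verified_on": today.isoformat(),
    }
```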
This is not a substitute for looking at the sign. AI vision is good — on our test set it agrees with a human reader about 90% of the time — but that still means it's wrong one time in ten. A wrong clearance, at the wrong moment, costs you a torn-off AC unit, a shattered windshield, or worse.
Every time you drive toward a garage, the posted sign at the entrance is the only authoritative number. Our data — AI-verified or otherwise — is for planning. The sign is for committing.
Before AI verification, about 38% of the garages in our database had no posted clearance from any upstream source (OpenStreetMap, operator websites, FHWA). Those entries rendered as "Unverified" — which is honest, but not useful.
AI verification converts most of those into real numbers you can filter and sort on. The remaining ~10% of AI errors get surfaced two ways: the Report clearance button in the app, and the verified_on date, which drives a "data may be stale" banner after two years — at which point we re-fetch Street View (which may itself have fresher imagery) and re-ask.

As for your data: none of it is involved. The AI-verification pipeline is entirely offline — it runs against Google Street View and Claude, not against your session. We don't send your location, vehicle height, or any identifying information to the AI. See our privacy policy for the full disclosure.
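The two-year staleness rule can be sketched as a date comparison against verified_on. The `is_stale` helper and the 730-day constant are our illustration of the rule as described:

```python
from datetime import date

STALE_AFTER_DAYS = 2 * 365  # "two years" per the re-verification policy

def is_stale(verified_on: str, today: date) -> bool:
    """True once a reading is old enough to trigger the stale banner
    and a Street View re-fetch. verified_on is an ISO date string."""
    y, m, d = map(int, verified_on.split("-"))
    return (today - date(y, m, d)).days > STALE_AFTER_DAYS
```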
We use claude-haiku-4-5 by default (fast + cheap) and upgrade to claude-sonnet-4-5 for ambiguous cases. The pipeline script is scripts/streetview_verify.py — open-source in our main repo.

Spot a bad AI-verification? Open the entry in the app and click Report clearance. It goes to a queue we review daily.