Lisnloop – Privacy and Data

Last updated: January 2, 2026

Privacy & Analytics

Lisnloop is a free beta product created by Arbor Product Solutions, LLC. We collect minimal analytics to understand usage and improve the product.

This includes:

  • Basic message metadata (chat ID, message length, a short message preview, whether attachments were included, selected model, visibility type, and conversation position)
  • Anonymous or pseudonymous session replay for UX improvement

We do not sell personal data, and we do not use conversations to train AI models.

Lisnloop is designed for product discovery and thinking work. We discourage sharing sensitive personal data, secrets, or highly confidential proprietary information.

Lisnloop is built for product-minded engineers who care about how their data moves through a system. This page explains how Lisnloop is built, how your data is handled, and what does (and does not) happen to your conversations.

Technology Stack (High Level)

Lisnloop is a modern, cloud-native web application currently built on:

  • Next.js and React for the application layer
  • PostgreSQL for persistent data storage
  • Vercel for hosting, infrastructure, and AI routing
  • Anthropic (Claude) and OpenAI for language models
  • PostHog for product analytics
  • Linear (optional) for issue and project context

The system is intentionally simple, observable, and auditable. There are no hidden background agents or opaque data flows.

How Your Data Flows

When you send a message in Lisnloop, the request follows this path:

Your browser
  → Lisnloop API (hosted on Vercel)
    → Vercel AI Gateway
      → AI provider (Anthropic or OpenAI)
    ← response streams back the same way

Lisnloop uses Vercel AI Gateway as a managed proxy. This allows us to centralize routing, observability, rate limiting, and model selection without scattering API keys across the codebase.

Yes — your messages pass through Vercel's infrastructure before reaching the AI provider. This is intentional and required to operate a secure, production-grade AI application.
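
To make the first hop concrete, this is roughly what the browser sends to the Lisnloop API. The payload shape below is illustrative, not the exact contract; field names such as selectedChatModel are assumptions based on the route handler excerpt further down this page.

// Illustrative sketch of the browser-side request (not the exact contract)
async function sendMessage(chatId: string, text: string) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      chatId,
      selectedChatModel: "anthropic/claude-haiku-4.5", // routed via the AI Gateway
      messages: [{ role: "user", content: text }],     // current message + recent history
    }),
  });
  return res.body; // the model's response streams back over this same connection
}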

What Data Is Sent to AI Models

Lisnloop sends only the data required to generate a relevant response.

May be included in a prompt:

  • Your current message
  • Recent conversation history
  • Profile context (auto-enriched from your email domain, plus anything you've explicitly added)
  • Connected Linear issues or projects (only if you've enabled the integration)
  • Open artifacts you're actively working on

Never sent to AI models:

  • Passwords (stored only as hashes; they never leave the database)
  • Authentication tokens or session cookies
  • OAuth tokens for connected services
  • Internal analytics, billing, or entitlement data

What Lisnloop Stores

Lisnloop stores data only to support core product functionality.

Persisted data includes:

  • User account and profile information
  • Conversation history (so your work doesn't disappear)
  • Artifacts you create (interview scripts, briefs, notes)
  • Integration metadata (e.g. cached Linear issue summaries)

Security basics:

  • Passwords are hashed
  • OAuth tokens are encrypted at rest
  • Data is encrypted in transit

Lisnloop does not sell user data or share it with advertisers.
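
To illustrate the first of those points, password handling typically looks like the sketch below. It uses bcryptjs as an assumed example library; this is the general pattern, not a claim about Lisnloop's exact implementation.

// Minimal sketch of password hashing (bcryptjs is an illustrative choice)
import bcrypt from "bcryptjs";

export async function hashPassword(plain: string): Promise<string> {
  return bcrypt.hash(plain, 10); // only the hash is persisted, never the plaintext
}

export async function verifyPassword(plain: string, stored: string): Promise<boolean> {
  return bcrypt.compare(plain, stored); // comparison happens server-side only
}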

Analytics & Observability

Analytics exist to improve the product — not to surveil users.

  • PostHog: product usage events and optional session replay
  • Vercel Analytics: performance metrics (page load, web vitals)
  • OpenTelemetry: request tracing and AI call timing

Analytics events may include metadata such as message length or whether attachments were used; they are never bulk exports of private conversations for marketing or resale.
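
For a sense of scope, a typical analytics event would look roughly like this. The event and property names are assumptions; the point is that the event carries metadata and a short preview (as listed at the top of this page), not the full conversation.

// Illustrative PostHog event (names are assumptions; assumes posthog.init() ran elsewhere)
import posthog from "posthog-js";

export function trackMessageSent(chatId: string, text: string, attachmentCount: number) {
  posthog.capture("message_sent", {
    chatId,
    messageLength: text.length,        // length, not the full text
    messagePreview: text.slice(0, 80), // short preview only
    hasAttachments: attachmentCount > 0,
  });
}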

AI Training & Data Retention

Is my data used to train AI models?

No. Your conversations are not used to train or fine-tune models. Lisnloop is informed by real-world product experience, not by training on your data.

Lisnloop uses commercial APIs with providers that do not train on customer API data by default:

  • Vercel AI Gateway — does not use data for training and supports Zero Data Retention (ZDR)
  • Anthropic (Claude) — Commercial API data is not used for training unless the customer explicitly opts in
    (Note: this is different from Anthropic's consumer Claude products, which have separate policies.)
  • OpenAI — API data is not used for training by default

Lisnloop accesses AI models exclusively through their commercial APIs, not through consumer chat interfaces. Commercial APIs come with stricter data handling and retention terms than the free consumer products you may have used directly.

Trust Boundaries

What each system can see:

  • Lisnloop: Stored conversations and artifacts
  • Vercel: Application traffic, database, AI routing
  • AI providers: Prompt + response text only
  • PostHog: Product analytics events

No single provider has blanket access to everything.

Transparency & Verification

If you want to verify behavior yourself, you can:

  • Inspect network requests to /api/chat in your browser
  • Review function logs in the Vercel dashboard
  • Disconnect integrations (like Linear) at any time
  • Delete conversations or your account

Lisnloop is intentionally built so engineers don't have to "just trust us."

Under the Hood (For Engineers)

If you want to see exactly how data flows, here are the relevant code paths. This is real architecture, not marketing diagrams.

AI Gateway Routing

This shows the proxy pattern. All model calls route through a single gateway — no scattered API keys.

// lib/ai/providers.ts
import { createGateway } from "@ai-sdk/gateway";

const gateway = createGateway({
  apiKey: process.env.VERCEL_AI_GATEWAY_KEY,
});

// All model calls route through the gateway
export const myProvider = {
  languageModel: (modelId: string) => gateway(modelId),
  // e.g. gateway("anthropic/claude-haiku-4.5") or gateway("openai/gpt-4.1-mini")
};

Why it matters: Single control point for rate limiting, observability, billing, and model switching.
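
In practice, switching models is just a string change through that control point. The sketch below continues the simplified example above; the function and flag names are illustrative, and the relative import path assumes the file sits next to providers.ts.

// Sketch: model selection is a gateway ID string, nothing more
import { myProvider } from "./providers"; // path is an assumption

export function pickModel(preferFast: boolean) {
  return myProvider.languageModel(
    preferFast ? "openai/gpt-4.1-mini" : "anthropic/claude-haiku-4.5",
  );
}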

What Context Gets Injected

This is the function that builds dynamic context for each request.

// lib/ai/prompts.ts (simplified)
export const buildDynamicContext = ({
  profileContext,
  inferredContext,
  companyProfile,
  linearContext,
  linearSummary,
  artifactContext,
}) => {
  const sections: string[] = [];

  // User context (only if provided)
  if (profileContext?.name)
    sections.push(`**Talking to:** ${profileContext.name}`);

  // Company context (auto-enriched from domain)
  if (companyProfile?.description)
    sections.push(`**Their company:** ${companyProfile.companyName}`);

  // Linear context (only if integration enabled)
  if (linearSummary?.activeIssues?.length)
    sections.push(`**Active issues:** ${linearSummary.activeIssues}`);

  // Currently open artifact
  if (artifactContext?.content)
    sections.push(`**Existing artifact:** "${artifactContext.title}"`);

  return sections.join("\n\n");
};

Why it matters: You can audit exactly what context is injected. No hidden data collection.
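
For completeness, here is roughly how that dynamic context is folded into the final system prompt. The base prompt wording and the exact composition below are assumptions, but the shape matches the systemPrompt({ ... }) call in the route handler excerpt further down.

// Continuing lib/ai/prompts.ts (sketch; basePrompt wording is an assumption)
const basePrompt = "You are Lisnloop, a product discovery assistant.";

export const systemPrompt = (context: {
  profileContext?: unknown;
  companyProfile?: unknown;
  linearContext?: unknown;
  artifactContext?: unknown;
}) => {
  const dynamic = buildDynamicContext(context);
  // If no context sections apply, the model sees only the base prompt.
  return dynamic ? `${basePrompt}\n\n${dynamic}` : basePrompt;
};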

User Schema (What's Actually Stored)

// lib/db/schema.ts
import { pgTable, uuid, varchar, text, jsonb } from "drizzle-orm/pg-core";

export const user = pgTable("User", {
  id: uuid("id").primaryKey().defaultRandom(),
  email: varchar("email", { length: 64 }).notNull(),
  password: varchar("password", { length: 64 }), // Hashed, never sent to AI

  // Profile (user-provided + enriched)
  name: varchar("name", { length: 100 }),
  company: varchar("company", { length: 100 }),
  customer: text("customer"),

  // Auto-enriched from email domain
  companyProfile: jsonb("company_profile"),
  inferredContext: jsonb("inferredContext"),
});

Why it matters: No mystery fields. Passwords never leave the database.

The Actual API Call

// app/(app)/api/chat/route.ts (excerpt)
import { streamText, convertToModelMessages } from "ai";

const result = streamText({
  model: myProvider.languageModel(selectedChatModel),
  system: systemPrompt({
    profileContext,
    companyProfile,
    linearContext,
    artifactContext,
  }),
  messages: await convertToModelMessages(uiMessages),
  experimental_telemetry: {
    isEnabled: isProductionEnvironment, // Only in prod
    functionId: "stream-text",
  },
});

Why it matters: Telemetry is explicit and conditional. Nothing is silently traced.
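
That experimental_telemetry flag feeds the OpenTelemetry tracing mentioned in the analytics section. On Next.js/Vercel the usual wiring is a small instrumentation file like the sketch below; the @vercel/otel package and the service name are assumptions about the setup, not a description of Lisnloop's exact configuration.

// instrumentation.ts (sketch of the common Next.js + Vercel OTel setup)
import { registerOTel } from "@vercel/otel";

export function register() {
  // Registers a tracer so AI SDK spans (e.g. the "stream-text" functionId above)
  // are exported for request tracing and AI call timing.
  registerOTel({ serviceName: "lisnloop" }); // service name is an assumption
}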

The Bottom Line

  • Your conversations are not used to train AI models
  • Data is stored only to make the product work
  • AI access happens through commercial APIs with strict data policies
  • The architecture is designed to be inspectable and boring — in the best way