Deckly Architecture

This is a living document describing the technical "How" of Deckly.

System Overview

The app is built as a heavy-client SPA, leveraging Supabase for data persistence, auth, and storage.

```mermaid
graph TD
    A[PDF Upload] --> B[Browser-side Processing]
    B -->|pdfjs-dist| C[Canvas Rendering]
    C -->|WebP Compression| D[Supabase Storage]
    D --> E[PostgreSQL Record]
    E --> F[Dashboard / Viewer]
    F -->|Auth Gate| P[get_deck_payload RPC]
    P -->|Private Path| S[sign-deck-url Edge Function]
    S -->|Signed URL| F
    F -->|Friendly URL| J[/:handle/:slug]
    J --> F

    subgraph "Data & State"
    F --> K[TanStack Query Cache]
    K --> L[Supabase / Service API]
    F -->|Optimistic UI| M[Investor Notes / Saves]
    end

    subgraph Analytics
    F --> G[Heartbeat Tracker]
    G -->|FETCH| V[Vercel Edge API /api/geo]
    V -->|x-vercel-ip-country/city| G
    G --> H[record_deck_visit RPC]
    end
```

Core Components

| Layer | Technology | Responsibility |
| --- | --- | --- |
| Frontend | React + Vite | UI/UX & code-splitting (lazy load) |
| Auth | Supabase Auth | Gated access & session management |
| Branding | TanStack Query | Centralized profile & mission-control state |
| Storage | Supabase Buckets | Large-asset (PDF/image) hosting |
| Compute | Browser | PDF rendering (reduces server costs) |
| State | TanStack Query | Async caching & optimistic updates |
| Analytics | Custom hook | Real-time 45s heartbeat refresh (useQuery) |
| Geo-location | Vercel Edge API | IP-based country/city extraction via headers |
| Optimization | Postgres RPCs | Heavy GROUP BY and DISTINCT logic offloaded to the DB |
| Performance | Layered prefetch | Tiered prefetching & lazy-module preloading |
| Utilities | url.ts | Centralized share-link and internal routing |

Layered Performance Architecture

To maintain a "zero-latency" feel across the platform, Deckly uses a three-tier performance strategy orchestrated globally in Home.tsx and contextually in navigation components.

1. Layer 1: Global Readiness (Boot-Time)

Upon landing on the dashboard, the application uses browser idle time to prepare the environment:

  • Data Priming: Prefetches high-level metadata lists (Decks, Rooms, Saved Decks) via TanStack Query.
  • Module Warming: Pre-imports the JS chunks for heavy routes (e.g. Viewer.tsx, DataRoomsPage.tsx) using background import() calls. This removes "loading..." flickers during navigation.
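As a rough sketch of the module-warming step (the function name, loader shape, and timeout fallback are illustrative, not the exact Home.tsx implementation), it boils down to scheduling a batch of dynamic imports during idle time:

```typescript
type ModuleLoader = () => Promise<unknown>;

// Sketch: warm heavy route chunks while the user is idle. `scheduleIdle`
// falls back to a short timeout where requestIdleCallback is unavailable.
export function warmModules(
  loaders: ModuleLoader[],
  scheduleIdle: (cb: () => void) => void = (cb) => setTimeout(cb, 200),
): void {
  scheduleIdle(() => {
    for (const load of loaders) {
      try {
        // Swallow failures: a chunk that fails to warm simply loads on demand later.
        load().catch(() => undefined);
      } catch {
        /* ignore synchronous loader errors too */
      }
    }
  });
}
```

In the app, the loaders would be calls like `() => import("./pages/Viewer")`, so a later route transition finds the chunk already in the module cache.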

2. Layer 2: Intent-Based Prefetching (Hover)

We anticipate user navigation by tracking cursor movement:

  • Sidebar Hover: Hovering over a sidebar link triggers a pre-import of that page's specific chunk and its primary data query.
  • Card Hover: Hovering over a DataRoomCard or DocumentRow prefetches its specific metadata and document list ~100-300ms before the click.
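The hover path can be sketched as a once-per-target guard, where the injected `prefetch` callback (a stand-in, not the app's actual API) wraps the chunk `import()` plus the TanStack Query prefetch:

```typescript
// Sketch: fire a prefetch at most once per navigation target, no matter how
// many times the cursor re-enters the sidebar link or card.
export function createHoverPrefetcher(prefetch: (key: string) => void) {
  const warmed = new Set<string>();
  return (key: string): void => {
    if (warmed.has(key)) return; // already warmed — ignore repeat hovers
    warmed.add(key);
    prefetch(key);
  };
}
```

The returned function would typically be attached to `onMouseEnter`, giving the ~100-300ms head start described above.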

3. Layer 3: Deferred Heavy Operations

To preserve database resources and bandwidth:

  • Analytics Deferral: Deep analytics (individual visitor trends, historical charts) are never prefetched. They load strictly on-demand when the user navigates into a detail view.
  • Metadata Bundling: Prefetching is limited to lightweight metadata (titles, counts) and omits heavy image thumbnails or binary assets.

UI Component Standards

Deckly follows a strict premium UI standard utilizing Framer Motion for interactivity and Tailwind CSS for obsidian-themed styling.

  • Obsidian Depth: Core backgrounds and sidebars use #10120f for grounded, professional contrast.
  • Typography: Standardized on sentence-case (not uppercase) for a more human, readable interface.
  • Glassmorphism: Subtle patterns, grid textures, and backdrop-blur for high-end cards.

Security Model

  • RLS (Row Level Security): Strict auth.uid() checks on all sensitive rows. Write access to analytics is strictly routed through record_deck_visit (Security Definer RPC), preventing unauthorized data injection.
  • Granular Storage Policies: Storage security is enforced via operation-specific policies (INSERT, UPDATE, DELETE, SELECT) rather than monolithic FOR ALL blocks. This ensures that WITH CHECK constraints (like size limits) are correctly and exclusively applied during write operations.
  • Transactional Advisory Locks: We use pg_advisory_xact_lock(hashtext(v_ip)) to serialize sensitive concurrent operations, such as signup throttling. This eliminates TOCTOU (Time-of-Check to Time-of-Use) race conditions by ensuring a single IP can only execute the check+insert sequence one transaction at a time.
  • Modern API Access Control: Uses Supabase's current key scheme: a namespaced VITE_SUPABASE_PUBLISHABLE_KEY for client access and PROJECT_SECRET_KEY for privileged server-side operations.
  • Asymmetric JWT Verification: Authentication tokens are verified using RSA Public Keys as part of the project's hardened security infrastructure.
  • Private Storage: The decks storage bucket is set to private. Direct URL access is blocked; all asset retrieval requires a short-lived signed URL generated via a secure Edge Function.
  • Robust IP Rate Limiting: Security RPCs correctly parse the x-forwarded-for header to identify unique client IPs, preventing shared 'unknown' buckets and ensuring accurate brute-force protection.
  • Slug Enumeration Prevention: The get_payload RPCs for decks and rooms unify success/failure responses. Both non-existent slugs and incorrect passwords return a generic 'Unauthorized' error, preventing attackers from discovering valid URLs.
  • Security Invoker Views: All public-facing views (profiles_public) use security_invoker=true to prevent unauthorized data exposure through view-based logic.
  • Gated Links: Multi-layer protection (Email/Password) verified server-side via the hardened get_deck_payload and check_deck_password RPCs. AccessGate now uses robust regex-based email validation and input trimming to ensure data integrity.
  • RLS Resiliency: Storage policies (especially on the assets bucket) utilize COALESCE guards for size checks: COALESCE((metadata->>'size')::bigint, 0) <= limits. This prevents upload failures when metadata (populated asynchronously by Supabase) is missing during the initial request phase.
  • PII-Aware Logging: Edge Functions (sign-deck-url, etc.) implement automatic redaction of sensitive identifiers, including User UUIDs and internal storage paths, before logging to protect user privacy in observability logs.
  • XSS & Protocol Sanitization: Slide hotspots and external links are strictly validated to permit only secure protocols (https:, mailto:, tel:) and enforced with rel="noopener noreferrer" to prevent tab-nabbing.
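The protocol allowlist from the last bullet can be sketched as a small validator (the function name is illustrative; the real check lives alongside hotspot rendering):

```typescript
// Only these protocols may appear on slide hotspots and external links.
const SAFE_PROTOCOLS = new Set(["https:", "mailto:", "tel:"]);

export function sanitizeHotspotHref(raw: string): string | null {
  try {
    const url = new URL(raw);
    // Rejects javascript:, data:, plain http:, etc. — only the allowlist passes.
    return SAFE_PROTOCOLS.has(url.protocol) ? url.toString() : null;
  } catch {
    // Not an absolute, parsable URL — treat as unsafe and drop it.
    return null;
  }
}
```

Anchors rendered from surviving hrefs additionally carry rel="noopener noreferrer" to prevent tab-nabbing.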

Private Storage & Signed URLs

To prevent unauthorized file scraping and bypass of platform-level security (like passwords and expiry), Deckly implements a secure signing orchestration.

1. The Signing Flow

  1. Payload Resolution: The client calls get_deck_payload or get_data_room_payload. These RPCs return a storage_path for each page/document only if all security checks (password, expiry) pass.
  2. URL Signing: The client sends the storage_path to the sign-deck-url Supabase Edge Function.
  3. Verification: The Edge Function re-validates access permissions (re-calling the database) and verifies the requested storage_path against the canonical path returned by the database (preventing IDOR).
  4. Token Issuance: Upon success, the Edge Function returns a short-lived (1-hour) signed URL and its expiry metadata.

2. Auto-Refresh Mechanism

The Viewer.tsx component monitors the expires_in metadata. To ensure zero-latency viewing during long sessions, it initiates a background refresh ~60 seconds before the signed URL expires, re-authenticating and updating the asset pointers without interrupting the user.
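The scheduling math reduces to one small calculation (a sketch of the timing logic only; the actual Viewer.tsx wiring around it differs):

```typescript
// Refresh ~60 seconds before the signed URL expires.
const REFRESH_MARGIN_MS = 60_000;

export function signedUrlRefreshDelay(expiresInSeconds: number): number {
  const delay = expiresInSeconds * 1000 - REFRESH_MARGIN_MS;
  // Never schedule in the past; if the URL is about to lapse, refresh immediately.
  return Math.max(delay, 0);
}
```

The returned delay would feed a `setTimeout` that re-invokes the signing flow and swaps the asset pointers in place.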


Smart Slide Architecture

Deckly uses a dual-path processing pipeline to ensure fast, beautiful documents while minimizing server load. This is known as "Smart Slide" rendering (Image + Tracking Layer).

1. Client-Side (PDFs)

PDFs are processed entirely in the user's browser using pdfjs-dist.

  • Rasterization: Pages are rendered to <canvas> and converted to WebP blobs for optimized storage.
  • Link Extraction: The extractPdfLinkHotspots() utility scans the PDF annotations to find hyperlinks.
  • Normalization: Coordinates are normalized to Percentages (0-1). This ensures that the interactive hotspots scale perfectly with the image, regardless of the viewer's screen resolution.
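The normalization step can be sketched as a pure function. This assumes the link rectangle has already been mapped into top-left-origin viewport pixels (pdfjs viewports handle the PDF bottom-left origin); the names here are illustrative, not the exact shapes in extractPdfLinkHotspots():

```typescript
export interface NormalizedHotspot { x: number; y: number; w: number; h: number }

// Divide viewport-pixel coordinates by the page size to get 0–1 fractions,
// so hotspots scale with the rendered image at any resolution.
export function normalizeHotspot(
  rect: { left: number; top: number; width: number; height: number },
  page: { width: number; height: number },
): NormalizedHotspot {
  return {
    x: rect.left / page.width,
    y: rect.top / page.height,
    w: rect.width / page.width,
    h: rect.height / page.height,
  };
}
```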

2. Server-Side (Office/Other)

Files like .pptx, .docx, and .xlsx are processed via a Supabase Edge Function (document-processor).

  • Conversion: Uses ConvertAPI to transform multi-page documents into high-resolution JPG images.
  • Direct Transformation: These images are uploaded to the decks bucket in standard document order.

3. The Interactive Viewer

The ImageDeckViewer.tsx component is the high-fidelity presentation layer. It renders "Smart Slides" by:

  1. Displaying the rasterized image (WebP/JPG).
  2. Layering an invisible, absolute-positioned grid of <a> tags (hotspots) on top based on the SlidePage.links array.
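The overlay positioning for step 2 can be sketched as converting the normalized 0–1 link coordinates into percentage-based absolute positioning, so the hotspots track the image at any rendered size (a minimal sketch; the real ImageDeckViewer.tsx styling is richer):

```typescript
export function hotspotStyle(link: { x: number; y: number; w: number; h: number }) {
  // Percentages are relative to the slide image's positioned container,
  // so the <a> overlay stays aligned under any zoom or viewport size.
  return {
    position: "absolute" as const,
    left: `${link.x * 100}%`,
    top: `${link.y * 100}%`,
    width: `${link.w * 100}%`,
    height: `${link.h * 100}%`,
  };
}
```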

Pipeline Resilience

1. Atomic Rollback Architecture

To prevent orphaned assets and broken deck records, the conversion pipeline now lives in useManageDeckWorkflow.ts and implements a robust cleanup mechanism.

  • DB-State Awareness: Instead of relying on potentially stale local component variables (which can be reset in interactive mode), the rollback logic queries the live database state immediately before performing a cleanup.
  • Deep Comparison: Reclaimed storage paths are computed by comparing the current DB pages array against the previousValues.pages snapshot.
  • Purge Process: Any assets present in the database that are NOT part of the intended rollback state are purged from Supabase Storage, ensuring zero-waste asset management.
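The purge computation above is essentially a set difference (names are illustrative of the logic in useManageDeckWorkflow.ts, not its exact API):

```typescript
// Assets present in the live DB state but absent from the rollback snapshot
// would be orphaned after rollback, so they are queued for Storage deletion.
export function pathsToPurge(liveDbPaths: string[], snapshotPaths: string[]): string[] {
  const keep = new Set(snapshotPaths);
  return liveDbPaths.filter((path) => !keep.has(path));
}
```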

2. Batch Analytics Optimization

The Data Rooms dashboard avoids "N+1" query patterns by using the get_batch_data_room_analytics RPC.

  • Single-Trip Retrieval: A single PostgreSQL call returns document counts and unique visitor counts for an entire list of room UUIDs.
  • Set-Based Aggregation: Aggregates unique visitor_ids from deck_page_views cross-referenced with data_room_documents in the database layer, so the frontend issues a constant number of queries regardless of how many rooms are listed.
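As an illustrative TypeScript mirror of what the SQL aggregation computes (not the RPC itself), per-room unique visitor counts from flat page-view rows look like this:

```typescript
interface PageViewRow { roomId: string; visitorId: string }

// Group visitor ids into a Set per room, then collapse each set to its size.
export function uniqueVisitorsByRoom(rows: PageViewRow[]): Map<string, number> {
  const sets = new Map<string, Set<string>>();
  for (const { roomId, visitorId } of rows) {
    if (!sets.has(roomId)) sets.set(roomId, new Set());
    sets.get(roomId)!.add(visitorId); // duplicates collapse automatically
  }
  return new Map([...sets].map(([roomId, visitors]) => [roomId, visitors.size]));
}
```

Doing this in Postgres instead of the client is what turns an "N+1" pattern into a single round trip.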

3. Type-Safe Analytics Pipeline

Internal dashboard analytics (Decks/Rooms) utilize the DeckPageStats interface for aggregation.

  • Explicit Typing: Replaced legacy any types with a strict interface for per-page metrics (views, duration, retention).
  • Service Integration: The analyticsService.getDeckStats aggregator ensures every data point is mapped to a full DeckPageStats object, preventing the "incomplete object" issues common when aggregates are built manually with reduce.

4. Robust Path Extraction

To support the private storage model, the database and Edge Functions use regex-based path normalization instead of brittle string splitting.

  • Regexp Pattern: \/storage\/v1\/object\/(?:public|sign|authenticated)\/decks\/(.+).
  • Universality: This allows utility functions like get_owner_thumbnails and the sign-deck-url function to correctly extract the base storage path regardless of whether the incoming URL is a public link, a signed link, or an authenticated bucket link.
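Sketched as a helper, the pattern above extracts the same base path from all three URL shapes (the function name is illustrative; stripping a trailing signed-URL query string is an addition beyond the quoted regex):

```typescript
const DECK_PATH_RE = /\/storage\/v1\/object\/(?:public|sign|authenticated)\/decks\/(.+)/;

export function extractDeckStoragePath(url: string): string | null {
  const match = url.match(DECK_PATH_RE);
  // Drop any ?token=... query so only the bare object path remains.
  return match ? match[1].split("?")[0] : null;
}
```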

5. Current Refactor Direction

The repo has recently moved toward clearer boundaries without a big-bang rewrite:

  • ManageDeck.tsx is now a composition layer
  • useManageDeckWorkflow.ts owns the heavy upload/edit workflow
  • src/components/dashboard/manage-deck/ManageDeckSections.tsx holds the screen sections
  • src/workflows/deckProcessing.ts owns shared PDF rendering/image generation
  • deckService.ts is now a facade over narrower service modules
  • auth/session resolution is being standardized through shared helpers instead of repeated ad-hoc lookups


Admin & Notifications

1. Secure Admin Access

Access to administrative routes (e.g., /admin) is strictly gated by the is_admin() PostgreSQL function.

  • Server-Side Verification: The logic checks the current auth.uid() against the public.admin_emails table.
  • RPC Gating: The frontend performs a mandatory async check on mount, preventing access even if client-side variables are tampered with.

2. Notification Persistence & Cleanup

  • Batched Deletion: To prevent long-held table locks during cleanup, the cleanup_expired_notifications() job uses a LOOP with a LIMIT 1000 batch size. This ensures high availability for the notifications table even during large-scale data maintenance.
  • Deduplication Performance: A partial index idx_notifications_dedup on (user_id, type, title, created_at DESC) filtered by read_at IS NULL ensures that deduplication checks are near-instant.
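The batched-deletion loop is plpgsql in reality; in TypeScript terms (an illustrative mirror, with `deleteBatch` standing in for the "DELETE ... LIMIT 1000" round and returning the rows it removed), the control flow looks like this:

```typescript
// Keep deleting in capped batches until a batch comes back short, so no
// single statement holds a long table lock over the whole cleanup.
export function cleanupInBatches(
  deleteBatch: (limit: number) => number,
  batchSize = 1000,
): number {
  let total = 0;
  for (;;) {
    const removed = deleteBatch(batchSize);
    total += removed;
    if (removed < batchSize) break; // short batch means the backlog is drained
  }
  return total;
}
```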

3. Visitor Counting Accuracy

The notify_signal_threshold trigger ensures that decks.unique_visitors only increments for first-time visits by a specific visitor_id.

  • EXISTS Guard: Since the deck_page_views table legitimately allows multiple rows per visitor (one per page viewed), the trigger uses an EXISTS check to avoid overcounting return visits or multi-page sessions.
  • Idempotency: This approach maintains a high-performance, atomic source of truth for investor interest metrics.
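The EXISTS guard can be mirrored in TypeScript for clarity (an illustrative restatement of the trigger's predicate, not the SQL itself):

```typescript
interface PageView { deckId: string; visitorId: string }

// Increment unique_visitors only when no prior page view exists for this
// visitor on this deck. Multiple rows per visitor are legitimate (one per
// page viewed), so the guard checks for ANY earlier row, not row count.
export function shouldIncrementUniqueVisitors(
  priorViews: PageView[],
  incoming: PageView,
): boolean {
  return !priorViews.some(
    (v) => v.deckId === incoming.deckId && v.visitorId === incoming.visitorId,
  );
}
```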

Built with ❤️ for Founders