Heart Cove

~6 min read

Next.js · TypeScript · Firebase · MongoDB · SST · AWS Lambda · Gemini · RAG

Overview

Heart Cove is a private digital space for couples that I built as a personal project. It covers a shared photo feed with real-time reactions and comments, notes and checklists, daily habit tracking, and a dual-mode expense tracker with automated PDF reports. An AI assistant backed by RAG retrieval lets us surface memories and notes through natural language. The app is deployed at heartcove.app on AWS via SST v4.

My Role

Sole engineer and primary user across the entire product.

Full-Stack Engineering

Server Actions, frontend components, and Firebase integrations, from schema to UI.

Infrastructure as Code

SST v4 on AWS Lambda via OpenNext with CloudFront and response streaming.

AI and RAG Pipeline

Gemini vision captions combined with user captions for richer embedding and recall.

Media Processing

Direct browser upload to Firebase Storage with Cloud Function image optimization.

Tech Stack

Frontend
Next.js · TypeScript
Backend
Server Actions
Database
MongoDB Atlas
Real-time
Firebase RTDB · Firebase Storage
AI
Google Gemini · Vercel AI SDK
Infra
SST v4 · AWS Lambda · CloudFront

Platform Features

Shared Feed

Photo posts with real-time reactions, comments, and Gemini vision captioning for richer search.

Notes & Checklists

Shared or private notes with checklist support, synced across both partners.

Habit Tracking

Daily habit logging with streak visibility for both partners in one view.

Expense Tracker

Dual-mode expense tracking with automated monthly PDF reports exported to S3.

AI Assistant (Clove)

RAG-backed assistant that answers questions across shared memories and notes via natural language.

Real-time Sync

Firebase RTDB for instant comment threads and emoji reactions across both clients.

Technical Highlights

01

Embedding visual memories

Most photo posts have short captions, so embedding only caption text produces weak recall for descriptive queries. Each uploaded image gets a Gemini vision caption combined with the user's own caption before producing the embedding, encoding both what was written and what is visually present. Queries about a scene, activity, or location surface the right memories even when original captions were brief.
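The combination step can be sketched as a small pure function (a sketch only: the function name and the caption labels are hypothetical, and in the real pipeline the vision caption would come from a Gemini multimodal call and the result would go to an embedding model):

```typescript
// Hedged sketch — in the actual pipeline the vision caption is produced by
// a Gemini vision call and the combined text is then embedded; only the
// combination step is shown here, and all names are assumptions.

/** Merge the user's caption with the Gemini vision caption into a single
 *  embedding input, so both what was written and what is visually present
 *  are encoded. Empty captions are skipped rather than emitting blank lines. */
function buildEmbeddingText(userCaption: string, visionCaption: string): string {
  const parts = [
    userCaption.trim() && `Caption: ${userCaption.trim()}`,
    visionCaption.trim() && `Image description: ${visionCaption.trim()}`,
  ].filter(Boolean);
  return parts.join("\n");
}
```

Because the vision caption is folded in before embedding, a query like "that walk on the beach at sunset" can match a post whose user caption was just "best day".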

02

Response streaming on Lambda

The Next.js app runs on Lambda via OpenNext with response streaming enabled. Without it, Lambda buffers the full response before returning anything, which hits the function timeout on longer AI generations. Streaming lets the AI SDK deliver chunks to the client as they arrive. A warmer cron reduces cold start frequency during active hours.
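A minimal `sst.config.ts` sketch of that setup, assuming SST's `Nextjs` component and its `warm` option (option names may differ from the project's actual configuration):

```typescript
// sst.config.ts — hedged sketch, not the project's actual config.
export default $config({
  app() {
    return { name: "heartcove", home: "aws" };
  },
  async run() {
    new sst.aws.Nextjs("Web", {
      // OpenNext deploys the app to Lambda behind CloudFront, with
      // Lambda response streaming so AI SDK chunks reach the client
      // as they are generated instead of after the full response.
      // `warm` keeps a server instance warm via scheduled invocations,
      // reducing cold starts during active hours.
      warm: 1,
    });
  },
});
```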

03

Firebase and MongoDB divided by access pattern

MongoDB is the source of truth for all durable data. Firebase RTDB handles the narrow slice that needs instant sync: comment threads and emoji reactions. Media goes directly from the browser to Firebase Storage, bypassing Lambda to avoid payload size constraints, then a Cloud Function optimizes each image and triggers the captioning pipeline.
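A sketch of that upload path, with the browser and Cloud Function sides shown as comments and the Storage path layout as testable helpers (all paths and names here are hypothetical, not the project's actual structure):

```typescript
// Hedged sketch — bucket layout and function names are assumptions.
//
// Browser side (firebase/storage client SDK), bypassing Lambda entirely:
//   const r = ref(getStorage(), postImagePath(coupleId, postId, file.name));
//   await uploadBytes(r, file);
//
// Cloud Function side (firebase-functions v2 storage trigger):
//   onObjectFinalized(async (event) => {
//     // resize/compress the image, write it to the optimized path,
//     // then kick off the Gemini captioning pipeline
//   });

/** Storage path for an original upload, scoped to a couple and post. */
function postImagePath(coupleId: string, postId: string, fileName: string): string {
  return `couples/${coupleId}/posts/${postId}/original/${fileName}`;
}

/** Path the Cloud Function writes the optimized variant to. */
function optimizedPath(originalPath: string): string {
  return originalPath.replace("/original/", "/optimized/");
}
```

Uploading straight from the browser keeps large media off Lambda (and its request payload limits), while the Storage trigger keeps optimization and captioning asynchronous.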