React · Django · PostgreSQL · CLIP · GPT-4o-mini · JWT · OAuth

PeerCampus

AI-assisted campus super-app — events, forums, lost-and-found, skills marketplace

Timeline: 3 months
Role: Full Stack
Team: Solo
Status: Shipped

The Problem

Campus life generates a constant stream of fragmented activity: events posted on notice boards, lost items announced over WhatsApp groups, and skills traded through personal contacts. There was no unified surface for any of it.

The lost-and-found problem was particularly acute. Text descriptions of lost items are unreliable, so visual similarity search was the core requirement.

Architecture

System: React -> Django REST API (6 apps, 57 endpoints) -> PostgreSQL + CLIP embeddings
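A minimal sketch of what the six-app split might look like in settings; app labels such as `lostfound` are illustrative, not the project's actual module names:

```python
# settings.py (sketch): one Django app per domain keeps boundaries explicit
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "rest_framework",  # DRF powers the 57-endpoint REST surface
    # project apps, one per domain:
    "users",
    "events",
    "forums",
    "lostfound",
    "skills",
    "notifications",
]
```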
Key decisions:
  • Django over Node.js
    because CLIP and GPT-4o-mini integrations required Python's ML ecosystem; Django REST Framework provided robust validation for a large endpoint surface.
  • 6-app Django architecture
    because splitting users, events, forums, lost-and-found, skills, and notifications into separate apps kept boundaries explicit and the codebase maintainable.
  • CLIP embeddings over text search
    because CLIP enables image-to-image similarity matching, which performs better than text matching for visually similar lost items.
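The image-to-image matching above boils down to cosine ranking over stored embeddings. A sketch of that core step (in PeerCampus the vectors would come from a CLIP image encoder, e.g. 512-d; the 3-d vectors and item ids here are toy stand-ins):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_matches(query_emb, stored):
    """Rank stored (item_id, embedding) pairs by similarity to the query."""
    scored = [(item_id, cosine(query_emb, emb)) for item_id, emb in stored]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# A found-item photo visually close to the query ranks first:
query = [1.0, 0.1, 0.0]
stored = [("umbrella-17", [0.9, 0.2, 0.0]), ("bottle-04", [0.0, 0.0, 1.0])]
ranking = rank_matches(query, stored)
```

This linear scan is fine at small scale; past a few thousand items it is exactly the scan-heavy query a vector index would replace.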

Impact

  • Shipped and deployed at peercampus.abhinavchaurasia.in
  • 57 REST endpoints across 6 Django apps
  • CLIP image similarity search for lost-and-found
  • Multiple auth flows: JWT, Google OAuth, OTP

What I’d Do Differently

I’d use a dedicated task queue for embedding generation from day one and adopt vector indexing earlier to avoid scan-heavy similarity queries as data grows.
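A day-one version of that queue could be as simple as a background worker draining jobs, so the upload request never blocks on embedding. This is an in-process sketch only: a real deployment would use Celery or RQ, and `fake_clip_embed` stands in for the actual CLIP forward pass.

```python
import queue
import threading

jobs = queue.Queue()  # item ids awaiting embedding
embeddings = {}       # item id -> stored vector

def fake_clip_embed(item_id):
    # Stand-in for the real CLIP encoder call (illustrative only).
    return [float(item_id), 0.0, 0.0]

def worker():
    # Drain jobs forever; the upload handler never waits on this work.
    while True:
        item_id = jobs.get()
        embeddings[item_id] = fake_clip_embed(item_id)
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

jobs.put(42)   # the upload view enqueues and returns immediately
jobs.join()    # demo only: wait so the result can be inspected
```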