
Finsight AI

AI expense manager that automates transaction handling and budget guidance with async backend orchestration.

Live Demo · Backend · Frontend
[Image: Finsight AI preview]

The Problem

Manual expense tracking is high-friction. Users abandon workflows when categorization and analysis require too much repeated effort.

The Solution

Finsight AI combines natural-language transaction capture with async processing so users get instant feedback while AI tasks run in the background.

AI Transaction Parsing

Gemini converts natural language expense input into structured transaction data.
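A minimal sketch of this parsing step, assuming the model is prompted to reply with JSON. The prompt template, field names, and the stubbed reply are illustrative, not Finsight's actual schema or Gemini call:

```python
import json

# Hypothetical prompt template; the real system sends this to Gemini.
PROMPT = (
    "Extract a transaction from the user's message. "
    'Reply with JSON: {"amount": number, "category": string, "description": string}.\n'
    "Message: {message}"
)

def parse_transaction(model_reply: str) -> dict:
    """Turn the model's JSON reply into a structured transaction dict."""
    data = json.loads(model_reply)
    # Validate the one field downstream code depends on.
    if not isinstance(data.get("amount"), (int, float)):
        raise ValueError("model reply missing numeric 'amount'")
    return {
        "amount": float(data["amount"]),
        "category": data.get("category", "uncategorized"),
        "description": data.get("description", ""),
    }

# Stubbed Gemini reply for an input like "coffee at Blue Bottle, $6.50"
reply = '{"amount": 6.5, "category": "food", "description": "coffee at Blue Bottle"}'
txn = parse_transaction(reply)
```

Validating the reply before persisting it matters because model output is not guaranteed to match the requested schema.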

Async Processing Pipeline

Celery + Redis move AI operations off request paths to keep APIs responsive under load.
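The broker/worker split can be illustrated with a standard-library sketch: a `queue.Queue` stands in for the Redis broker and a thread for a Celery worker process. This is the pattern, not the production wiring:

```python
import queue
import threading

task_queue: "queue.Queue" = queue.Queue()  # stands in for the Redis broker
results: dict = {}                         # stands in for a result backend

def worker() -> None:
    # In production a Celery worker process plays this role.
    while True:
        task = task_queue.get()
        if task is None:  # shutdown sentinel
            break
        results[task["id"]] = f"categorized:{task['text']}"  # AI call happens here
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# The API handler only enqueues and returns immediately.
task_queue.put({"id": "t1", "text": "lunch $12"})
task_queue.join()  # demo only; the real client polls rather than blocking
```

The request path never waits on the AI call, which is what keeps the API responsive under load.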

Financial Data Model

MongoDB persistence for users, transactions, budgets, and derived analytics workflows.
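A rough sketch of the document shapes and a derived-analytics computation. These field names are assumptions for illustration; Finsight's actual MongoDB schema may differ:

```python
from datetime import datetime, timezone

# Illustrative documents; transactions reference users by id rather than embedding them.
user = {"_id": "u1", "email": "dev@example.com", "created_at": datetime.now(timezone.utc)}

transaction = {
    "user_id": user["_id"],
    "amount": 6.5,
    "category": "food",
    "description": "coffee",
    "ts": datetime.now(timezone.utc),
}

budget = {"user_id": user["_id"], "category": "food", "monthly_limit": 200.0}

# Derived analytics: total spend per category, computed from transactions.
def spend_by_category(txns: list) -> dict:
    totals: dict = {}
    for t in txns:
        totals[t["category"]] = totals.get(t["category"], 0.0) + t["amount"]
    return totals

summary = spend_by_category([transaction])
```

Comparing `summary` against `budget["monthly_limit"]` is the kind of check that feeds budget guidance.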

Revocable Auth

JWT auth with Redis-based token controls for safer session lifecycle management.
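The revocation pattern can be sketched with a standard-library HMAC-signed token and an in-memory denylist; in the real system a JWT library issues the tokens and Redis holds the denylist with a TTL. All names here are illustrative:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"    # illustrative only; never hardcode secrets in production
revoked: set = set()       # stands in for a Redis denylist keyed by token id (jti)

def issue(payload: dict) -> str:
    """Sign a payload into a body.signature token."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()

def verify(token: str):
    """Return the payload if the signature is valid and the token is not revoked."""
    body, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["jti"] in revoked:  # logout adds the token id here
        return None
    return payload

token = issue({"sub": "u1", "jti": "sess-1"})
assert verify(token) is not None
revoked.add("sess-1")  # user logs out; token is invalid before its natural expiry
```

The denylist is what makes stateless JWTs revocable: a valid signature is no longer sufficient once the token id is listed.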

Tech Stack

Flask
Celery
Redis
Gemini API
MongoDB
Docker
Next.js
DigitalOcean
Vercel

System Architecture

Next.js frontend
↓
Flask API layer
↓
Celery queue + Redis broker
↓
Gemini processing workers
↓
MongoDB persistence

Deployment: Backend and workers deployed on DigitalOcean, frontend on Vercel, APIs documented with Swagger/OpenAPI.

My Role & Contributions

  • Built async AI request handling with Celery workers and Redis queueing around Flask service endpoints.
  • Integrated Gemini-powered automation to reduce manual expense categorization and enable intelligent suggestions.
  • Implemented secure API authentication with JWT and a token invalidation strategy.
  • Shipped the full-stack architecture from backend APIs and workers to frontend UX and production deployment.
  • Documented and tested core flows with OpenAPI specs and repeatable development workflows.

Technical Challenge Solved

Challenge: AI latency blocking request flow

Running AI calls synchronously in request handlers caused avoidable wait times and degraded UX during concurrent usage.

Solution: Queue-backed async orchestration

  1. Accept and validate request in Flask
  2. Queue AI tasks via Celery + Redis
  3. Process AI jobs asynchronously in workers
  4. Persist results and surface updates in UI
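The four steps above can be sketched end to end. An in-memory dict stands in for the Celery/Redis queue and result store, and the handler/worker names are hypothetical, not Finsight's actual endpoints:

```python
import uuid

pending: dict = {}  # stands in for the Celery/Redis queue plus result store

def handle_request(payload: dict):
    """Steps 1-2: validate, enqueue, and return immediately with a task id."""
    if "text" not in payload:
        return 400, {"error": "missing 'text'"}
    task_id = str(uuid.uuid4())
    pending[task_id] = {"status": "queued", "text": payload["text"]}
    return 202, {"task_id": task_id}  # 202 Accepted: work continues in background

def run_worker(task_id: str) -> None:
    """Steps 3-4: a worker processes the job and persists the result."""
    task = pending[task_id]
    task["result"] = f"parsed:{task['text']}"  # the AI call would happen here
    task["status"] = "done"                    # the UI polls this to surface updates

status, body = handle_request({"text": "taxi $18"})
run_worker(body["task_id"])
```

Returning `202 Accepted` with a task id is the contract that lets the frontend show instant feedback and poll for the finished result.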

The result: faster perceived interactions and a more resilient backend under concurrent real-world usage.

Explore Finsight AI

Live in production. Try the demo or read the source code.

Try Live Demo · View on GitHub