My First Ever Hackathon Experience - From Idea to Working App in 24 Hours


Hello Everyone,

On Saturday, I participated in a hackathon, during which I built Scan-n-Cook, an AI-powered recipe generator that transforms food images into complete recipes.

Let me take you behind the scenes to see how this app came to life, the challenges I faced, and how I overcame them.

The Spark: Finding a Real Problem to Solve

Many people stand in front of the fridge, staring at ingredients, wondering, "What can I make with these?" or have seen a delicious dish and wished they had the recipe.

My marketing genius co-founder, Virgil, researched and identified this problem as a significant opportunity.

This common frustration sparked the idea for Scan-n-Cook, an app that analyzes food images and instantly generates detailed recipes.

I decided to build it during the Lovable hackathon using their AI code editor, Lovable.dev.

Initial Planning Phase

The hackathon gave me just a short timeframe to build a working prototype. I started by mapping out two core features:

  1. Ingredient Scanning: Using AI to identify ingredients from photos
  2. Recipe Generation: Creating detailed recipes from either ingredient lists or food images

My technology stack was:

  • Frontend: React with TypeScript
  • UI Framework: Tailwind CSS with shadcn/ui components
  • Backend: Supabase for serverless infrastructure and edge functions
  • AI Services: Multiple vision models for ingredient detection and Claude AI for recipe generation

Early Challenges and Pivots

My initial plan was overly ambitious. I wanted to build a comprehensive application with ingredient and dish scanning, food management, meal planning, and more. After a reality check about the hackathon timeline, I pivoted to focus on the core value proposition: turning food images into recipes.

By narrowing my focus, I could deliver a polished experience for the most valuable feature instead of a half-implemented suite of tools.

The Multi-Model AI Approach

First, I integrated the Google Cloud Vision API, which did not produce consistent results. I then added the Clarifai Food API and the Spoonacular API.

I realized that no single vision model was reliable enough for accurate ingredient detection; each model had different strengths.

So I implemented a pipeline that runs all three models in parallel and combines their results for better accuracy. It also uses standardization logic to merge synonyms (like "Roma tomato" and "tomato").
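The parallel-plus-merge idea can be sketched like this. This is a minimal illustration, not the actual project code: the synonym table, function names, and the two-vote threshold are all assumptions.

```typescript
// Illustrative synonym map; the real table would be much larger.
const SYNONYMS: Record<string, string> = {
  "roma tomato": "tomato",
  "cherry tomato": "tomato",
  "scallion": "green onion",
};

// Fold a raw model label down to a canonical ingredient name.
function normalize(name: string): string {
  const key = name.trim().toLowerCase();
  return SYNONYMS[key] ?? key;
}

// Combine the outputs of several vision models: each model votes once per
// distinct normalized ingredient, and only ingredients with enough votes
// survive, which filters out one-off hallucinations.
function combineDetections(modelOutputs: string[][], minVotes = 2): string[] {
  const votes = new Map<string, number>();
  for (const output of modelOutputs) {
    for (const name of new Set(output.map(normalize))) {
      votes.set(name, (votes.get(name) ?? 0) + 1);
    }
  }
  return [...votes.entries()]
    .filter(([, count]) => count >= minVotes)
    .map(([name]) => name);
}
```

In the app itself, the three input arrays would come from something like `Promise.allSettled` over the Google Vision, Clarifai, and Spoonacular calls, so one failing API does not sink the whole scan.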

Roadblocks

I hit a wall after spending 14 grueling hours trying to make the ingredient identification workflow function properly. At one point, I seriously questioned whether this idea was viable at all.

The multi-step process of identifying ingredients and then generating recipes proved too complex and unreliable.

The Pivotal Moment

This is when Virgil suggested a pivot in our approach: a "Quick Recipe" concept that lets users take a single photo of food and directly generate a complete recipe using Anthropic's Claude AI.

By eliminating multiple steps in the workflow, we created a much more seamless experience.

This was the breakthrough moment. Instead of struggling with complex ingredient detection, we leveraged the power of advanced AI to handle the entire process in one go.

The application immediately became more reliable and effective at delivering value to users.
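To make the single-step idea concrete, here is a hedged sketch of how such a request to Claude could be assembled. The content-block shape (a base64 image block plus a text block) follows Anthropic's Messages API; the model id, prompt wording, and function name are illustrative assumptions, not the project's actual code.

```typescript
type MessageContent =
  | { type: "image"; source: { type: "base64"; media_type: string; data: string } }
  | { type: "text"; text: string };

// Build one request that does the whole job: photo in, full recipe out.
function buildQuickRecipeRequest(imageBase64: string, mediaType: string) {
  return {
    model: "claude-3-5-sonnet-latest", // illustrative model id
    max_tokens: 1024,
    messages: [
      {
        role: "user" as const,
        content: [
          // The photo travels inline as base64 data.
          {
            type: "image",
            source: { type: "base64", media_type: mediaType, data: imageBase64 },
          },
          // One instruction replaces the old multi-step detection pipeline.
          {
            type: "text",
            text: "Identify this dish and write a complete recipe: title, ingredients with quantities, and numbered steps.",
          },
        ] as MessageContent[],
      },
    ],
  };
}
```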

Supabase Edge Function

I built multiple Supabase edge functions to handle communication with the Google Cloud Vision, Clarifai Food, Spoonacular, and Claude APIs, ensuring API keys remained secure on the server side.

Each function processes the image and forwards it to the relevant API along with specific ingredient-identification or recipe-generation instructions.
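The security pattern can be sketched as follows. This is an assumption-laden illustration, not the project's real code: the `ScanRequest` shape and function names are hypothetical, and on Supabase the key would come from a server-side secret (read with `Deno.env.get`), never from the browser.

```typescript
type ScanRequest = { imageBase64?: string; mimeType?: string };

// Reject bad payloads before spending an API call on them.
function validateScanRequest(body: ScanRequest): string | null {
  if (!body.imageBase64) return "imageBase64 is required";
  if (!body.mimeType?.startsWith("image/")) return "mimeType must be an image type";
  return null;
}

// The edge function attaches the secret key server-side and relays the
// upstream response back to the client.
async function generateRecipe(body: ScanRequest, apiKey: string): Promise<Response> {
  const error = validateScanRequest(body);
  if (error) return new Response(JSON.stringify({ error }), { status: 400 });

  const upstream = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey, // the key never leaves the server
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-latest", // illustrative model id
      max_tokens: 1024,
      messages: [
        /* base64 image content block + recipe-generation prompt */
      ],
    }),
  });
  return new Response(await upstream.text(), { status: upstream.status });
}
```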

The UI/UX Evolution

The user experience went through several iterations:

  1. First Version: Basic image upload with text output
  2. Iteration: Added camera integration for mobile users
  3. Refinement: Implemented loading states, error handling, and better recipe formatting
  4. Polish: Added ability to save recipes to favorites and view history

I followed a mobile-first approach, since most users would capture food images on their phones while cooking or shopping.

In the future, I will be converting it into a React Native app.

Image Format Compatibility Issue

One of the most frustrating issues occurred when testing with different types of food images. After hours of debugging, I discovered the problem was an image format compatibility issue.

The AI services expected JPEG, but users could upload images in other formats. I implemented format validation and conversion to ensure every image was processed correctly.
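One way to do that validation, sketched here as an assumption rather than the app's actual code, is to inspect the file's leading "magic bytes" instead of trusting the extension, which users can get wrong:

```typescript
// Identify common image formats by their signature bytes.
function detectImageFormat(bytes: Uint8Array): "jpeg" | "png" | "webp" | "unknown" {
  // JPEG files start with FF D8 FF.
  if (bytes.length >= 3 && bytes[0] === 0xff && bytes[1] === 0xd8 && bytes[2] === 0xff) {
    return "jpeg";
  }
  // PNG files start with 89 50 4E 47 ("\x89PNG").
  if (bytes.length >= 4 && bytes[0] === 0x89 && bytes[1] === 0x50 && bytes[2] === 0x4e && bytes[3] === 0x47) {
    return "png";
  }
  // WebP files are RIFF containers: "RIFF" at offset 0, "WEBP" at offset 8.
  if (
    bytes.length >= 12 &&
    bytes[0] === 0x52 && bytes[1] === 0x49 && bytes[2] === 0x46 && bytes[3] === 0x46 &&
    bytes[8] === 0x57 && bytes[9] === 0x45 && bytes[10] === 0x42 && bytes[11] === 0x50
  ) {
    return "webp";
  }
  return "unknown";
}
```

In the browser, anything that isn't already a JPEG can then be drawn onto a `<canvas>` and re-exported as JPEG via `canvas.toBlob` before upload.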

Storage and Performance Optimizations

As the app grew, I ran into storage limitations. I was on Supabase's free tier, and every generated recipe and image added to the app's storage, which would eventually cause issues for users.

I implemented automatic pruning of older scan history when storage limits are approached.
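The pruning rule can be sketched as "keep only the newest N scans." The `ScanEntry` shape, the function name, and the cap are illustrative assumptions, not the project's real schema:

```typescript
type ScanEntry = { id: string; createdAt: number };

// When history grows past the cap, keep the newest maxEntries scans and
// drop the oldest ones.
function pruneHistory(entries: ScanEntry[], maxEntries: number): ScanEntry[] {
  if (entries.length <= maxEntries) return entries;
  return [...entries]
    .sort((a, b) => b.createdAt - a.createdAt) // newest first
    .slice(0, maxEntries);
}
```

In the real app, the entries dropped here would also have their images deleted from Supabase Storage, since the images, not the recipe text, are what consume the quota.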

The Final Push: Bringing Everything Together

In the final hours of the hackathon, I focused on recording a demo video and creating a detailed write-up on a Miro board.

I spent a solid two and a half hours putting things together, recording a demo, taking screenshots, and submitting my final project. I didn't expect the submission to take this long, but fortunately, I began early (3 a.m.) and had sufficient time.

Lessons Learned

This hackathon taught me several valuable lessons:

  1. Start with core value first: Focus on what provides immediate value to users before expanding
  2. Embrace AI limitations: No AI is perfect - building graceful fallbacks is essential
  3. Mobile-first is essential: Most food-related activities happen on mobile devices
  4. Test with real scenarios: What works with test data often fails with real-world usage
  5. Be open to pivoting: Completely changing your approach can still deliver the solution

What's Next for Scan-n-Cook

We plan to expand the app, make it really valuable to users, and launch it in the next 3-4 weeks.

It will be called Snook, short for "snap and cook."

Try It Yourself!

I'd love for you all to try Snook. Let me know what you think. Just take a photo of the ingredients you have or a dish you'd like to recreate, and watch as the app generates a complete recipe in seconds.

Thank you for following my hackathon journey. Building Snook in such a short timeframe was incredibly rewarding and proved that you can build amazing things quickly.

This is the best time to be in product development. I used AI coding editors extensively to build Snook, and you can too. If you have been sitting on an idea, start building today.

Vinod

Start Building Products Faster with AI Coding

After 25 years in tech, I'm building my startup in the public eye, using AI coding tools and sharing everything I learn, including AI coding tutorials, new trending AI tools, and behind-the-scenes lessons on startup building.
