Hello Everyone,

On Saturday, I participated in a hackathon, where I built Scan-n-Cook, an AI-powered recipe generator that transforms food images into complete recipes. Let me take you behind the scenes to see how this app came to life, the challenges I faced, and how I overcame them.

The Spark: Finding a Real Problem to Solve

Many people stand in front of the fridge, staring at ingredients, wondering, "What can I make with these?" Or they have seen a delicious dish and wished they had the recipe. My marketing genius co-founder, Virgil, researched this problem and identified it as a significant opportunity. This common frustration sparked the idea for Scan-n-Cook, an app that analyzes food images and instantly generates detailed recipes. I decided to build it during the Lovable hackathon using their AI code editor, Lovable.dev.

Initial Planning Phase

The hackathon gave me just a short timeframe to build a working prototype. I started by mapping out two core features:
My technology stack was:
Early Challenges and Pivots

My initial plan was overly ambitious. I wanted to build a comprehensive application with ingredient and dish scanning, food management, meal planning, and more. After a reality check against the hackathon timeline, I pivoted to focus on the core value proposition: turning food images into recipes. By narrowing my focus, I could deliver a polished experience for the most valuable feature instead of a half-implemented suite of tools.

The Multi-Model AI Approach

First, I implemented the Google Vision API, which did not produce consistent results. Then I added the Clarifai Food API and the Spoonacular API. I realized that no single AI vision model was reliable enough for accurate ingredient detection; different models had different strengths. So I wrote a program that runs all three models in parallel and combines their results for better accuracy. It also uses standardization logic to handle synonyms (like "Roma tomato" and "tomato").

Roadblocks

I hit a wall after spending 14 grueling hours trying to make the ingredient identification workflow function properly. At one point, I seriously questioned whether the idea was viable at all. The multi-step process of identifying ingredients and then generating recipes proved too complex and unreliable.

The Pivotal Moment

This is when Virgil suggested a pivot in our approach: a "Quick Recipe" concept that lets users take a single photo of food and directly generate a complete recipe using Anthropic's Claude AI. By eliminating multiple steps in the workflow, we created a much more seamless experience. This was the breakthrough moment. Instead of struggling with complex ingredient detection, we leveraged the power of advanced AI to handle the entire process in one go. The application immediately became more reliable and effective at delivering value to users.
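The parallel multi-model merge described above can be sketched roughly as follows. This is a hedged illustration, not the app's real code: the detector functions, synonym table, and confidence handling are assumptions I'm using to show the technique.

```typescript
type Detection = { name: string; confidence: number };

// Assumed synonym table mapping model-specific labels to one canonical name.
const SYNONYMS: Record<string, string> = {
  "roma tomato": "tomato",
  "cherry tomato": "tomato",
  "spring onion": "green onion",
};

// Standardize a label so "Roma tomato" and "tomato" count as one ingredient.
function normalize(label: string): string {
  const key = label.trim().toLowerCase();
  return SYNONYMS[key] ?? key;
}

// Combine detections from all models, keeping the highest confidence
// reported for each canonical ingredient.
function mergeDetections(results: Detection[][]): Detection[] {
  const best = new Map<string, number>();
  for (const list of results) {
    for (const { name, confidence } of list) {
      const canon = normalize(name);
      best.set(canon, Math.max(best.get(canon) ?? 0, confidence));
    }
  }
  return Array.from(best.entries(), ([name, confidence]) => ({ name, confidence }));
}

// Run every detector concurrently; a model that fails simply contributes nothing.
async function detectIngredients(
  image: Uint8Array,
  detectors: Array<(img: Uint8Array) => Promise<Detection[]>>,
): Promise<Detection[]> {
  const settled = await Promise.allSettled(detectors.map((d) => d(image)));
  const ok = settled
    .filter((s): s is PromiseFulfilledResult<Detection[]> => s.status === "fulfilled")
    .map((s) => s.value);
  return mergeDetections(ok);
}
```

Using `Promise.allSettled` rather than `Promise.all` means one flaky vision API can't sink the whole scan, which matches the observation that the individual models were unreliable on their own.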
Supabase Edge Functions

I built multiple Supabase edge functions to handle communication with the Google Cloud Vision, Clarifai Food, Spoonacular, and Claude APIs, ensuring the API keys remained secure on the server side. Each function processes the image and sends it to the APIs with specific ingredient identification and recipe generation instructions.

The UI/UX Evolution

The user experience went through several iterations:
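To illustrate the edge-function step: a minimal sketch of the request body such a function might send to Claude's Messages API. The model name and prompt text here are my assumptions, not the app's actual instructions; the point is that the request is assembled server-side, so the key never reaches the browser.

```typescript
// Build a Claude Messages API request for a food photo.
// Model name and prompt are illustrative assumptions.
function buildClaudeRequest(imageBase64: string, mediaType: string) {
  return {
    model: "claude-3-5-sonnet-20241022", // assumed model
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: [
          {
            type: "image",
            source: { type: "base64", media_type: mediaType, data: imageBase64 },
          },
          { type: "text", text: "Identify this dish and generate a complete recipe." },
        ],
      },
    ],
  };
}

// Inside a Supabase Edge Function (Deno), the key stays in a server secret:
//
//   Deno.serve(async (req) => {
//     const { imageBase64, mediaType } = await req.json();
//     const res = await fetch("https://api.anthropic.com/v1/messages", {
//       method: "POST",
//       headers: {
//         "x-api-key": Deno.env.get("ANTHROPIC_API_KEY")!,
//         "anthropic-version": "2023-06-01",
//         "content-type": "application/json",
//       },
//       body: JSON.stringify(buildClaudeRequest(imageBase64, mediaType)),
//     });
//     return new Response(await res.text(), {
//       headers: { "content-type": "application/json" },
//     });
//   });
```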
I followed a mobile-first approach, since most users would capture food images on their phones while cooking or shopping. In the future, I will convert it into a React Native app.

Image Format Compatibility Issue

One of the most frustrating issues occurred when testing with different types of food images. After hours of debugging, I discovered the problem was an image format compatibility issue: the AI service expected JPEG, but users could upload in other formats. I implemented format validation and conversion to ensure all images were processed correctly.

Storage and Performance Optimizations

As the app grew, I ran into storage limitations. I was using Supabase's free tier, and each generated recipe and image added to the app's storage, which could cause problems for users over time. I implemented automatic pruning of older scan history whenever storage limits are approached.

The Final Push: Bringing Everything Together

In the final hours of the hackathon, I focused on recording a demo video and creating a detailed write-up on the Miro board. I spent a solid two and a half hours putting everything together, recording the demo, taking screenshots, and submitting my final project. I didn't expect the submission to take this long, but fortunately, I had begun early (3 a.m.) and had sufficient time.

Lessons Learned

This hackathon taught me several valuable lessons:
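The format validation mentioned above can be sketched like this, assuming the check is done by inspecting the file's magic bytes (the helper name is illustrative). In the browser, a non-JPEG upload can then be re-encoded by drawing it onto a canvas and exporting it with `canvas.toBlob(callback, "image/jpeg")`.

```typescript
// Detect an uploaded image's real format from its leading bytes,
// regardless of the file extension the user's device chose.
function sniffImageFormat(bytes: Uint8Array): "jpeg" | "png" | "webp" | "unknown" {
  const ascii = (start: number, end: number) =>
    Array.from(bytes.slice(start, end), (b) => String.fromCharCode(b)).join("");

  // JPEG files start with FF D8 FF.
  if (bytes.length >= 3 && bytes[0] === 0xff && bytes[1] === 0xd8 && bytes[2] === 0xff) {
    return "jpeg";
  }
  // PNG files start with 89 50 4E 47 ("\x89PNG").
  if (bytes.length >= 4 && bytes[0] === 0x89 && ascii(1, 4) === "PNG") {
    return "png";
  }
  // WebP files are RIFF containers with "WEBP" at byte offset 8.
  if (bytes.length >= 12 && ascii(0, 4) === "RIFF" && ascii(8, 12) === "WEBP") {
    return "webp";
  }
  return "unknown";
}
```

Sniffing bytes instead of trusting the filename or MIME type catches the common case of phones uploading HEIC or WebP files with misleading names.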
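The scan-history pruning described above could look roughly like this: a pure helper that decides which stored files to delete so only the newest `keep` remain. The type, path layout, and `keep` default are assumptions; the actual deletion would then use `supabase.storage.from(bucket).remove(paths)` from supabase-js.

```typescript
type StoredFile = { name: string; created_at: string };

// Given one user's stored scan files, return the storage paths to delete
// so that only the newest `keep` files survive.
function selectStaleFiles(userId: string, files: StoredFile[], keep = 50): string[] {
  // Sort oldest first by creation timestamp (ISO 8601 strings sort lexically).
  const sorted = [...files].sort((a, b) => a.created_at.localeCompare(b.created_at));
  if (sorted.length <= keep) return [];
  // Everything before the newest `keep` entries is stale.
  return sorted.slice(0, sorted.length - keep).map((f) => `${userId}/${f.name}`);
}
```

Keeping the selection logic pure makes it easy to test the policy separately from the storage API calls.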
What's Next for Scan-n-Cook

We plan to expand the app, make it genuinely valuable to users, and launch it in the next 3-4 weeks. It will be called Snook, short for "snap and cook."

Try It Yourself!

I'd love for you all to try Snook and let me know what you think. Just take a photo of the ingredients you have or a dish you'd like to recreate, and watch as the app generates a complete recipe in seconds.

Thank you for following my hackathon journey. Building Snook in such a short timeframe was incredibly rewarding, and it proves you can build amazing things quickly. This is the best time to be in product development. I extensively used AI coding editors to build Snook, and you can too. If you have been sitting on an idea, start building today.

Vinod