Elephant

Project Overview

The Prometheus Project continues with Elephant, a tool built to simplify the chaotic yet delightful tradition of White Elephant gift exchanges. Whether for work parties or family gatherings, finding the perfect mix of fun, practical, and occasionally NSFW gifts can be daunting. Elephant aims to streamline this process with a curated database of gift ideas and filters tailored to specific audiences—bookworms, pet lovers, sports enthusiasts, and more.

With rapid development as a core objective, this project highlights how deliberate preparation and layering on past learnings make even ambitious tools achievable within constrained timelines.

And yes, this is going up probably too late to be useful for Christmas given shipping times, but that (and any number of more politically correct reasons) is why we say “Happy Holidays” instead of “Merry Christmas” most of the time these days, amirite?

The resulting tool is linked from the button above!

Problem Statement

Design a simple but effective tool to find niche (dare I say desirable?) gifts for just the right occasion. Use some intelligent systems to pre-populate a database with already-vetted gift ideas, so you can head to that holiday party knowing you’ll get the right thing for the person whose name you drew, and maybe some laughs from the crowd.

Objective

Goals for this project were as follows:

1. Pre-seed the data: Build a database of gift ideas based on popular trends, articles, and more, so the final tool could stay simple but effective

2. Test New Development Workflows: Deployment and debugging for Sprout proved to be quite challenging so I wanted to take what I learned and try to move faster.

3. Focus on fun: I didn't expect this project to be too challenging and I wanted to have a little fun with it. ‘Tis the season and all that.

Build Process

As I mentioned in the objectives above, I really wanted to approach this project with a more scalable mindset than my last one. I’ll admit—I’m notoriously bad at sitting down and making a proper project plan. I usually just dive in headfirst and let my imagination guide the process. This time, though, I started with a brainstorming session in GPT to clearly outline my scope, articulate what I was trying to accomplish, and identify the tools and frameworks I’d be building with. Alongside this, I did some focused research into the AI-powered build and deployment process (more on that below), and I’m glad I did—it definitely paid off.

Oh - and this time, I did my best to stay in the same GPT window so that I could just share it here and you can see all of the prompts that I used. At least, all the ones I used with GPT; if we’re being honest, I did a bunch with Claude too, and I’m not sure I can share those since I use it less for that. Anyways, prompts here.

Starting with Deployment

When I was building Sprout, getting the product online was a nightmare. I went through multiple iterations—version nine or eleven, maybe more—repeatedly trying new setups to get the code hosted properly. It was a frustrating disaster. Every time I thought I had it figured out, another issue would pop up, particularly around how I was managing my files and deployments.

During this process, I reached out to some forums for people building with AI, and someone shared a fantastic recommendation that completely shifted my approach. They suggested starting with deployment as soon as you have a basic wireframe, rather than waiting until the functionality is in place. They even shared a helpful video (which I’ll link here). Taking their advice, I went from concept to a fully hosted tool with a working UI (though no functionality yet) in just under 3.5 hours. Compare that to the two days I spent previously just on deployment—it was a massive improvement.

→ Check out the video from Bruno Bertapeli here for loads of tips such as “deploy first”

I’ll admit, I definitely built on what I learned during the Sprout project, where I picked up some dos and don’ts, but this time I also figured out a few things that really streamlined the process. The biggest lesson: the front end needs its own Git repository. Previously, I was backing up the entire project folder, including server files and other extraneous elements. Even though deployment platforms like Vercel let you designate specific directories for launch, mixing everything together caused all sorts of confusion in practice.

Even this week, I had moments where the tools got a little annoyed with my setup. Still, I think I’ve finally ironed out a process that works, and it feels like a huge win. If anyone else is struggling with deployment or wants to chat about this further, I’m happy to share more details!

Frontend — the shadcn-UI mystery

In last week’s post, I mentioned having a ton of headaches with “shadcn-UI,” and I think I’ve finally solved them, so I figured I’d elaborate here. For those unfamiliar, shadcn-UI is essentially a set of off-the-shelf UI components designed to make your website or tools look polished. It’s the system V0.dev uses when it generates wireframes for your project, and it’s widely considered one of the best tools for front-end design—a first stop for most projects in this community. I used it again for this project, but the struggles I had last week were real.

No matter what I did, I couldn’t get the necessary program packages to install properly and make the UI elements functional. When I was working on Sprout, I ended up sidestepping the problem entirely by asking V0.dev to ignore all the elements that relied on shadcn-UI just so I could move forward. But here’s the catch: when you’re using GPT or other AI tools for development, they’re going to suggest a command like npx shadcn-ui@(numbersnumbersnumbers). It looks valid, and it follows the same pattern as other commands you’d typically use during setup, but the truth is, the actual command you need is npx shadcn@latest.

I haven’t dug into why this discrepancy exists, but I suspect the command recently changed, and the LLMs just don’t have up-to-date enough training data to reflect that. Someone with coding experience would probably spot this right away, but for me—someone who’s just muddling through—it was an infuriating roadblock. Once I figured it out, though, everything worked seamlessly, and shadcn-UI performed exactly as intended.

As for the front end of this project, it was pretty straightforward. I probably iterated on it with just four prompts. I asked for a search bar, a non-denominational holiday theme, and a simple product output. From there, I moved on to gathering and integrating a bunch of data.

Populating our data

From here, it seemed like smooth sailing: I just needed to populate the system with some products, deploy the app, and I’d be done. Simple, right? Right? Wrong.

My initial plan was straightforward enough. I figured I could pull product recommendations from a few sources. First, popular articles like “25 Must-Have Holiday Gifts,” where I could scrape data, plug it into my database, and move on. Second, I planned to tap into Reddit, which has entire communities devoted to white elephant gift ideas and holiday shopping. Reddit’s API seemed like an easy way to grab a ton of data. It all sounded good in theory, but nothing ever goes as planned.

Here’s the thing: web scraping works great when you’re dealing with a known structure—when you know exactly where a specific element is on the page, and you just need to repeat the process 1,000 times. But when you’re working across dozens of different websites and articles, all with wildly different formatting, it’s a much bigger headache. I tried everything, probably for longer than I should have, with various scripts. If you’re interested, I’m happy to share them—they’re in the full GPT context linked above.
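To give a flavor of what those scripts were doing, here is a minimal sketch of the approach in TypeScript (not my exact script; the article URL and function name are placeholders): fetch an article’s HTML and pull out anything that looks like an Amazon product link.

```typescript
// Rough sketch of "scrape a gift article for product links". Real gift-list
// articles vary wildly in structure, which is exactly why this got painful
// across dozens of different sites.

async function extractAmazonLinks(articleUrl: string): Promise<string[]> {
  const response = await fetch(articleUrl);
  const html = await response.text();

  // Grab anything that looks like an Amazon product URL (/dp/ or /gp/product/).
  const linkPattern = /https?:\/\/(?:www\.)?amazon\.com\/(?:[^"'\s]*\/)?(?:dp|gp\/product)\/[A-Z0-9]{10}/g;
  const matches = html.match(linkPattern) ?? [];

  // De-duplicate before adding to the master list.
  return [...new Set(matches)];
}

extractAmazonLinks("https://example.com/25-must-have-holiday-gifts")
  .then((links) => console.log(`Found ${links.length} product links`, links));
```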

Even with these scripts doing their job, the process wasn’t as easy as I’d hoped. On the bright side, I discovered (or confirmed) that Claude is better than GPT for debugging code. Every time my scripts hit a snag, I’d throw them into Claude, and like magic, the issues were fixed. So shout-out to Claude—great for code!

After all this effort, I ended up with a massive list of product links—somewhere around 50,000—and I needed to pare it down. At one point, I explored tools that can take product links and cleanly pull metadata, thinking this would simplify everything. Unfortunately, many of these tools are insanely expensive. We’re talking $200 for a few thousand API calls. Is that data made of gold? I don’t know, but it was way out of my budget.

That’s when I found RapidAPI. It’s awesome—$25 got me a one-month membership with access to exactly the kind of data I needed. I can confidently say I’ll be using this tool again in the future. However, I only ended up using RapidAPI after a detour involving Amazon’s API.


It’s extremely discouraging to think you’re going to just quickly sign up to use an API, only to realize there are all kinds of hoops to jump through before you can actually use it (source: Amazon partner API)

I originally thought Amazon’s API would be the perfect solution since my plan had by then evolved into primarily showcasing Amazon links. Here’s the kicker: to use their API, you need an approved Amazon Associates account with at least three sales already under your belt. But how am I supposed to get three sales if I can’t use their API to make my app look polished in the first place? It’s a classic catch-22, and I had to move on.
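For anyone curious what using RapidAPI actually looks like, it mostly comes down to two headers. Here is a rough sketch; the host and endpoint path are made up for illustration, since every API listed on RapidAPI exposes its own routes, but the x-rapidapi-* headers are the standard pattern.

```typescript
// Illustrative only: the host and endpoint path below are placeholders.
// The two x-rapidapi-* headers are how RapidAPI authenticates requests;
// the key comes from your RapidAPI dashboard.

async function fetchProductMetadata(asin: string) {
  const response = await fetch(
    `https://some-amazon-data-api.p.rapidapi.com/product?asin=${asin}`,
    {
      headers: {
        "x-rapidapi-key": process.env.RAPIDAPI_KEY ?? "",
        "x-rapidapi-host": "some-amazon-data-api.p.rapidapi.com",
      },
    }
  );
  if (!response.ok) {
    throw new Error(`RapidAPI request failed: ${response.status}`);
  }
  return response.json(); // title, price, image URL, etc., depending on the API
}
```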

WE MUST WATER OUR TREES. I don’t know why I’m putting this here. The screenshot was in my notes and I find it funny that there's something to do with hydration and trees and… I don't know, I am writing this post way too late.

Data grooming

Once I had all my metadata pulled from RapidAPI, it was time to connect everything to my code—and honestly, it just worked. I tossed the data into some JSON files, linked them up, and everything came together seamlessly. I iterated a little bit using Cursor (shout-out again to this incredible AI-driven code tool—it’s such a dream to work with), but honestly, the process was straightforward. The file structure was ready, the deployment went smoothly, and the site was live without a hitch. It was genuinely thrilling to see the “deploy upfront” method pay off so well this time around.
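For context, “tossed the data into some JSON files and linked them up” really was about that simple. Here is roughly what that looks like; the field names and file path are a guess at a sensible shape rather than my exact schema.

```typescript
// Illustrative shape of the product records; the real field names may differ.
// In a Next.js project with resolveJsonModule enabled, the JSON file can be
// imported directly and used as the app's entire "database".
import rawProducts from "@/data/products.json";

export interface Product {
  title: string;    // cleaned-up title (see the GPT pass below)
  url: string;      // Amazon product link
  image: string;
  price: string;
  tags: string[];   // e.g. ["funny", "tech", "bookworm"]
}

export const allProducts: Product[] = rawProducts as Product[];
```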

One last task was to clean up those product names. Let’s be real: Amazon product titles are a mess. (Side note: if anyone from Amazon is reading this, please fix your UI—it’s wild that I have to read product names that feel like they were SEO-optimized in reverse.) To solve this, I added a quick connection to OpenAI’s GPT API. With just a few tweaks, I had it streamline those long, chaotic names into clean, concise titles. While I was at it, I also used GPT to apply filter tags to the items in my database. The process was quick, painless, and exactly what I needed to tidy things up.

Amazon PLEASE fix these HPOG titles
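Here is a rough sketch of that cleanup call, assuming the standard OpenAI Node SDK; the prompt wording and model name are illustrative rather than exactly what I ran.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Turn a sprawling Amazon listing title into something short and readable,
// and suggest a handful of filter tags while we're at it.
async function cleanTitle(rawTitle: string): Promise<{ title: string; tags: string[] }> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // any capable chat model works here
    messages: [
      {
        role: "system",
        content:
          "Rewrite the product title in 8 words or fewer, then suggest up to 3 gift tags. " +
          'Respond as JSON: {"title": "...", "tags": ["..."]}',
      },
      { role: "user", content: rawTitle },
    ],
    response_format: { type: "json_object" },
  });

  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```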

From there, I made a few final adjustments to the UI—cleaning up small details to make everything look polished—and that was it. The site was live, functional, and honestly, kind of beautiful. Keyword: kind of. I’m under no delusions. The combination of a solid process, powerful tools, and lessons learned from past projects really came together to make this one shine.

Cursor rules

One of my favorite discoveries during this project was Cursor Rules. For those unfamiliar, Cursor is an AI code-writing tool, and Cursor Rules allow you to give it a set of guidelines to follow while it’s generating code. These rules can range from practical instructions like “always ask before making structural changes” to more high-level reminders such as “you are an expert code writer.” But where I found it especially useful was in providing a persistent list of methods, tech stacks, or folder structures I wanted to use.

Why does this matter? Well, even though Cursor runs in a UI where all your files are visible, it’s easy to forget that, at its core, it’s still just an LLM (large language model) with a coding wrapper. That means it can sometimes lose track of details—like where files are located or which deployment methods you’ve chosen. Having Cursor Rules ensures those details stay front and center, saving time and reducing frustration. For example, I used rules to remind Cursor of my preferred deployment process, specific folder structures, and other critical setups.
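For anyone who hasn’t seen one, the rules live in a plain text file inside the project. Here is the flavor of what mine looked like; this is paraphrased for illustration rather than copied from my actual file.

```
# .cursorrules (illustrative; paraphrased, not my exact rules)
- You are an expert front-end developer working in Next.js and TypeScript.
- Always ask before making structural changes to folders or configuration files.
- The front end lives in its own repo and deploys to Vercel; do not add server files to it.
- Product data lives in JSON files under /data; do not invent new data sources.
```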

And here’s the kicker: the video I learned this from suggested testing Cursor Rules by adding something like, “Start every response with ‘Ho ho ho.’” Naturally, I gave it a try, and for the duration of this project, every Cursor response began with a cheerful “Ho ho ho.” It was a tiny thing, but it made me chuckle every time—and it was a great reminder of how flexible and playful these tools can be while still improving productivity.

Notable compromises

With this project, I stuck to my overarching commitment: a week to build and write. This constraint means making compromises to get things done and out the door. Otherwise, I’d spend endless hours polishing, and instead of completing multiple projects, I’d only have two perfect ones—and no one would care. So, yes, the result isn’t perfect, but I’m okay with that. For instance, the search tool currently only features Amazon links and a limited number at that. I have another spreadsheet with tens of thousands of potential links, and I could have spent more time cleaning and integrating those. Tools like RapidAPI might help streamline pulling cleaner data from other sources, but it wasn’t worth the extra time this week. A quick pass was enough to create something functional and genuinely useful.

The products in the tool come primarily from journalistic “Top X Gifts of the Year” type articles. I also explored pulling data from Reddit communities dedicated to white elephant gifts and gift-giving ideas, but that was a bust. The Reddit API was too limited, and I couldn’t figure out how to pay for more robust access. Plus, it seems people don’t share product links on Reddit as often as I’d hoped. While this approach wasn’t perfect and felt manual at times, the output works, and I’m happy with what I was able to pull together.

As for the search field, let’s be real—it’s not a full search engine. It’s a filter box, plain and simple. There’s some nuance to it, but I didn’t need to create a full search engine for a project like this. The database isn’t big enough to justify the effort. That said, if you’re looking for something bear-themed, for example, you can search for “bear” and maybe find something delightful. There’s good stuff here, and I even added a few Easter eggs for those who like to explore.
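If you’re wondering what “filter box” means in practice, it’s something like the sketch below (simplified, with guessed field names): a case-insensitive substring match across titles and tags, with no ranking or fuzzy matching.

```typescript
// Simplified version of the "search": a case-insensitive substring match over
// each product's title and tags. No ranking, no fuzziness; at this database
// size, that's plenty.
interface GiftProduct {
  title: string;
  tags: string[];
}

function filterProducts<T extends GiftProduct>(products: T[], query: string): T[] {
  const q = query.trim().toLowerCase();
  if (!q) return products;
  return products.filter(
    (p) =>
      p.title.toLowerCase().includes(q) ||
      p.tags.some((tag) => tag.toLowerCase().includes(q))
  );
}

// filterProducts(allGifts, "bear") would surface every bear-themed item.
```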

Results & Lessons Learned

Project Status: Live

Website is live. Find some gifts!

Use Cases

While building Elephant, I wanted to create a tool that wasn’t just functional but genuinely useful for specific scenarios. Here are some ways this tool can be leveraged effectively. Candidly, I asked GPT to write this section and just pasted its output more or less wholesale. Sorry.

White Elephant Gift Exchanges: The most obvious use case! White elephant gift exchanges are a holiday classic, but finding the right mix of funny, practical, and memorable gifts can be challenging. Elephant streamlines this process by curating options pulled from reputable sources, ensuring you’re not stuck browsing aimlessly. Whether your goal is to surprise your coworkers or make your family laugh, Elephant offers a fun and easy way to pick the perfect gift.

Quick Gift Ideas: Sometimes, you need a gift fast, whether for a birthday, a housewarming, or a random celebration. The filtering system allows you to search by tags like “funny,” “tech,” or “bookworm” to quickly narrow your options. It’s great for anyone short on time who still wants to give a thoughtful present.

Holiday Prep: Planning for the holidays can be stressful, especially when you’re shopping for multiple people with different interests. With its playful and customizable search options, Elephant acts as a holiday gift planner, letting you pre-filter ideas based on themes or interests and reducing the holiday shopping overwhelm.

Corporate or Themed Events: Companies and organizations often host white elephant-style exchanges or themed gift-giving events. Elephant can help event planners identify gifts that fit their themes while staying within budget constraints. For example, if your company’s theme is “wilderness survival,” you can easily filter for relevant products.

Discovering Hidden Gems: Beyond gifting, Elephant is just fun to explore. It’s like a treasure hunt for cool, quirky products you might not have come across otherwise. The tool’s connection to curated lists and metadata makes it a great place to stumble upon hidden gems.

Social Inspiration: Even if you don’t end up purchasing directly from Elephant, it can serve as a spark for your own creative gift ideas. Seeing a curated list of unique items might remind you of something personal or nostalgic you could give instead.

Lessons Learned

  1. Start deployment early: One of the biggest takeaways from this project was the value of deploying as soon as possible, even with just a skeleton UI. This approach allowed me to identify potential issues early, avoid the headaches I faced in past projects, and iterate quickly. Moving from wireframe to a hosted, functional tool in just a few hours was a massive improvement over previous efforts.

  2. The Importance of Structured Workflows: The frustration with messy file structures and misaligned deployment tools in earlier projects highlighted the need for clean, consistent workflows. Backing up specific directories, sticking to a Git repository for version control, and using dedicated tools like Vercel for deployment made the process smoother. This clarity helped me avoid mistakes like pointing at the wrong directories during deployment.

  3. Tools Like RapidAPI Are Worth It: After hitting roadblocks with scraping metadata manually and being denied access to Amazon’s API, RapidAPI became a game-changer. Investing in this tool saved time and provided high-quality data, reinforcing the value of choosing the right tools, if you can swing the cost.

Costs

Pretty solid week — the only net new cost I think I incurred for this project was the $25 spent for one month of RapidAPI. And I’ll use it again!

Of course, there are ongoing subscription costs for all of the AI tools I’m now paying for, but… that’s going to be a persistent background expense on all these projects.

The Toolbox

Section coming soon! See the main build details for tools used.

#efficiencies

The key takeaway on efficiencies is the critical importance of structured workflows. For this project, I started with a clear document outlining how I planned to approach each step and largely followed it. This upfront planning made a huge difference, reducing the chaos and indecision that often derails projects. Knowing which tools to use, when to abandon them, and how to pivot quickly streamlined the entire process. Moving forward, the more I invest in defining these methods and approaches at the outset, the more efficient and productive future projects will be.

Next Steps

If I were to invest further in this project—and I’m still undecided—there are three main areas I’d focus on:

1. Expanding the Product Database: The tool’s current database is functional but limited in scope. Adding products from additional websites and curating a larger, more diverse selection would significantly enhance its utility. While I never intended for this to house an exhaustive inventory, having a broader and more interesting dataset would elevate its appeal.

2. Improving Search Functionality: Right now, the search box operates as a filter rather than a true search engine. Implementing a more intelligent search system capable of understanding user intent and providing better results would make the tool far more dynamic and useful.

3. Affiliate Integration: Adding affiliate links to the products would create a potential revenue stream. If the tool were to gain traffic, this could be a logical next step to monetize its utility.

That said, this tool is highly seasonal. Its usefulness is tied closely to the holiday season, and I don’t anticipate significant utility beyond that timeframe. Depending on its reception and whether it shows long-term potential, I might revisit and expand the project in 2025. For now, it feels like a solid one-off tool for the season.
