Generative AI, agents, and tools designed to turbo-charge your software development productivity are everywhere.
A lot of it might be empty buzz, and only time will tell what kind of disruption this will all turn out to be. But, as you’ll see in this article, the tools have matured in the past year or so and are starting to make a real difference, and we as software developers need to start paying attention.
So, wanting to stay up to date on the developments, I’ve spent the past months pushing myself to test AI tools in real-world scenarios, with a critical yet open mind¹.
Note: Despite my AI experiments, I never use AI for writing. All my typos and imperfect sentences are mine. But I like it that way: to me writing is a way to think.
What is it like to build an app with AI?
The newest side project I’m building with AI is a tool just a few hours in the making — and I’m already using it.
Let’s start with some back story.
I’ve been using Obsidian to organize my notes for many years, but every now and then, I feel that it doesn’t match my usage patterns 100%. So, a couple of days ago, I started looking into alternatives again.
As I was browsing the web, nothing looked just right… and my developer itch to build something of my own started rearing its head. With experience, I’ve taught myself to fight it back, knowing that nothing is as quick to build as I first think…
But now, with AI there to help me, maybe time won’t be a blocker anymore?
So, I opened a new Cursor project and created a simple `README.md` file describing how the app should work. In another folder, I created a sample markdown note to show what my notes will look like.
Then I wrote the first prompt²:

Cursor got to work, installing npm packages, creating folder structures, and implementing core functionality. When it needed access to the command line, it checked with me that the command was OK to run, but otherwise, I didn’t need to get involved.
And two minutes later, I had an app ready to be run.
It was still quite rough around the edges (I didn’t think to take a screenshot, unfortunately, so you’ll need to take my word for it), but it already had the basic functionality you’d expect in a note-taking app.
I shared my opinion with the agent: “Very nice, now let’s polish things up a bit. Please update the notes app to use Material UI”
And it went right back to work:

…and so on.
When I saw errors while trying to run the app, I copy-pasted them into the chat, and Cursor was always enthusiastic about fixing them:
“I see the issue! The error is occurring because Material UI’s theme object contains functions that can’t be passed directly to client components. This is a common issue when using Material UI with Next.js 15’s strict server/client component boundaries. Let me fix this by creating a proper client-side theme provider: […]”
Getting to the first functional, Material UI-based version of the app took about 10 minutes (as I didn’t really check any of its code output, or even read through its replies — most of the time).
And then I started tweaking it by asking for different features:
- “Let’s modify the note listing on the left side of the app to only contain the titles of the notes in a folder hierarchy displayed as a foldable tree.”
- “Add a button for creating a new folder”
- “Something’s wrong with the nested notes: when I click on them, nothing happens”
- “Now, add support for dragging notes between folders”
- …and so on.
When something wasn’t working, I typed: “it’s not working.”
And usually, that was enough to get Cursor to find a solution and fix issues.
Only on a few occasions, when the AI agent wasn’t finding a solution on its own, did I take some time to think about the issue myself and suggest a solution. For example, when it got stuck trying to make nested folder paths work with a regular path separator (“/”), I suggested using “|” instead. It did, and it worked.
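As a sketch of what that workaround can look like in code (the function names here are my own illustration, not taken from the actual app), the note IDs can use a non-path character between folder segments so they survive being embedded in URLs, where a literal “/” would be read as a route boundary:

```typescript
// Hypothetical sketch of the "|" separator workaround; the names below
// are mine, not from the generated codebase.
const SEP = "|";

// "journal|2024|May.md" -> ["journal", "2024", "May.md"]
function decodeNoteId(id: string): string[] {
  return id.split(SEP);
}

// ["journal", "2024", "May.md"] -> "journal|2024|May.md"
function encodeNoteId(segments: string[]): string {
  return segments.join(SEP);
}

// Converting an ID back to a real filesystem path happens only on the server:
function idToFilePath(notesRoot: string, id: string): string {
  return [notesRoot, ...decodeNoteId(id)].join("/");
}
```

The point of the trick is that the ID stays a single opaque token everywhere in the UI and the URLs, and only the server-side code ever turns it back into a nested path.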
Soon, I started moving my notes into the app, and the development became driven by the needs of my real world notes.
When I added a to-do list and realized that I wanted checkboxes just like in Obsidian, I prompted that feature.
When I wanted my daily notes to be sorted with the latest note first, I asked AI to change the code.
And this is where I am now, roughly four hours later:

Everything is functional, changes are saved reliably, and it all looks alright. Most of my notes are there, and while it’s not perfect yet, it already does the job.
And it’s getting better the more I’m using it.
How good is the outcome?
So, as a user who got a custom app in just a few hours, I am blown away.
But how good is it from a software engineering standpoint? Let’s take a look at the code.
First, I can see that it’s a Single Page App with a clean (standard) Next.js folder structure. The functionality for creating and modifying notes is handled through server-side API definitions, which in turn call domain logic functions in a `lib` directory.
The UI is split into components in a way that makes sense to me. There’s some unused code left behind from different attempts, but it’s easy to spot and delete.
The external dependencies it chose seem sensible and safe. And once I told it to set up a testing suite, it always wrote unit tests in the following prompts and ran them to verify that things were working correctly. When tests fail, the recent versions of Cursor don’t stop there: they make a fix and try again until the tests pass!
And finally, the `README.md` it wrote is clean and informative, both for the user and for the developer of the tool.
If this were code from a developer on my team, I could see myself approving the pull request without many requests for changes — except maybe for some uses of deprecated functions… I’m sure Cursor will fix those just fine when I ask, though.
So, what to make of it?
So, what do I think…
My first impression is one of awe: creating working software has never been this easy.
Maybe in a few months, the more technical users of apps will be prompting up the tools they need themselves. A future where game designers or product managers create their own helper tools is probably already here³.
Or maybe software engineers will get more done, more quickly, and can spend more time polishing their work and raising its quality?
Or maybe even work a bit less — and still be very productive?
Whatever the future, I’m sure for programming⁴, this can be great.
Something feels off, though…
When prompting my note taking tool, I realized something I hadn’t thought of before.
As a programmer for the past 30+ years (I started when I was 10 and I’m turning 45 this year), I’ve always enjoyed the state of flow that working on code takes me to.
I use coding for thinking, just the same way I do with writing: I think of a solution, work on it, get feedback from my tests and from looking at the results and iterate. All the time, I am focused and immersed in what I’m doing.
Breaks in the flow come in the moments when I need to wait for things to compile: for example, when working on server code, waiting for the CI process to complete a build so I can see the results in the remote setup, or waiting for a game build to finish so I can try it on a device.
Today, for the first time ever, I’m feeling the same while my code is being written.

For example, while writing the paragraphs above, the AI agent was busy adding in-place editing support for my notes.
The agent moves fast (a lot faster than I would) and I have not yet read the codebase, so there’s nothing for me to do there. The level of abstraction I am working at is one of an architect, or maybe a product or UX designer thinking about how the app should function (actually, it’s probably a combination of all of them… I will need to process this thought and come back to it in a later post).
And now, funnily enough, even though the AI is coding way faster than I would be (most of the time, when it’s not making silly mistakes), it feels too slow for this new abstraction level I am at.
And so, I lose the flow.
So now, I’m finding myself constantly context-switching between giving the AI more instructions and doing something else while waiting. Like writing this article.
That’s kind of cool, but I miss the fully engaged thought process of coding myself. Or being 100% focused on writing a blog post.
Maybe this will change, though, as the agents get even faster.
Or maybe we’ll need to find ways to stay engaged with the code. That could happen by becoming even more intentional and clear with our prompts, reducing the surprises and the back-and-forth needed with the tool…
What’s next?
In this article, I’ve only scratched the surface of how AI tools can change the face of our craft and what to make of it.
I have many more questions (how does one learn to code in the world of AI agents, what will programmers do if users prompt their own tools, who creates the code for AI agents to learn from in the future, why doesn’t the code I prompted feel like mine to share on GitHub, …), but as my answers to them are just as tentative as the ones above, I will leave them for later.
Let’s all keep experimenting, and sharing our tentative answers and the new questions that come up!
It’s much more interesting than just passively waiting for the future to unfold.
Footnotes:
1. As I write these words, my Cursor Agent is pumping out code in the background. To be honest, the whole reason I started writing this post was to find something to do while the code was being generated! ↩︎
2. I picked Next.js as the framework for the app as I have some experience with it from an internal admin dashboard we built at work, and I thought it could be a nice and simple solution for handling both the backend and the frontend. ↩︎
3. This is one of the questions I want to address next: I want to put a non-programmer at the steering wheel and see how they work with Cursor to create their own tool. When I do, I’ll report my takeaways on the blog, so stay tuned! ↩︎
4. I have my fair share of reservations about the use of AI in creative work. I already mentioned writing, and I’ve been thinking about music a lot recently… There’s another blog post in the making on that. For programming, I’m mostly fine. ↩︎