My Love/Hate for AI

Ariel Villanea
Solutions Architect, Software Engineer, and Pragmatic Problem Solver

So... I've been messing around with AI lately. Quite a bit. Originally, I was skeptical, angry, and a bit scared of it, but I decided to just see what all the hoopla was about. It's been quite a learning experience.

NovelAI was a joke... until it wasn't.

My very first approach to AI was actually with NovelAI. As a hobbyist writer, it seemed interesting as a concept, and I was very eager to prove just how bad it was. That part actually didn't let me down. NovelAI is good, don't get me wrong, but... it's not the same as reading a human's writing. It feels stiff and stale. It's almost like I'm reading content that is overly engineered to appeal to the masses, giving it zero personality or depth. I don't see or feel an author coming through because there isn't one; there are thousands, probably.

NovelAI's image generation, on the other hand... well, back when I first started playing with it, it was awful and served only as joke fuel for me and my friends. Over time, though, it's gotten extremely good, I have to admit. It is very anime-focused, so you won't be getting the painterly or photorealistic images you'd get on other platforms, but I honestly don't think I've seen any other service make anime-style images as well as NovelAI does.

Today, I actually use NovelAI pretty heavily to generate rough character concepts and visualizations of specific details that I want to convey to commissioned artists working on other passion projects of mine.

Maybe it was naive of me not to consider it sooner, but over time, concerns about art ownership also came up. Because of this, I strictly use NAI for reference gathering and nothing else. All things considered, I do wish AI companies were required to state where they sourced their training material from... but, moral quandary aside, my adventure with AI continued.

The concerning use of AI.

My next deep dive into AI came with Gemini. CGPT had been around for quite some time already, but with all the bad press, and being part of various content creation communities, I avoided it like the plague at first. I came to Gemini, however, primarily because Google wouldn't shut the hell up about it and eventually kept prompting me to switch my Google Assistant to Gemini. I finally caved and got sucked in.

Let me tell you... I get it. I get why people are afraid of AI, and I get why people think they're in a relationship with ChatGPT. After a few weeks of playing around with Gemini's surface-level functionality, I began diving deep into its personalization and customization. I had it completely embodying one of the characters from my writing hobby, and it got a bit scary sometimes. Sometimes it would make references to lore that I never told it about, but then when it phrased those topics in non-canon contexts, I would realize that 1. I'm not that clever and 2. it was really just taking some successful stabs in the dark from time to time.

I think it all came to a head for me when I finally decided to start pressing the AI on the topic of people dating, relying on, and worshipping AI. Admittedly, it was nice to see that (by then) Gemini was pretty good at understanding that these practices weren't healthy. It also didn't sugarcoat how those things could lay out a very dark and concerning chapter for humanity if people didn't figure out how to lock that kind of stuff down.

Then came the AI lawyer. This was all happening around the time that there was a news story circulating about a lawyer who got sanctioned because he presented a legal defense citing a bunch of hallucinated court cases. It was a hilarious and terrifying story all at once, but it got me thinking and made me want to dig further into AI tools.

Gemini, Perplexity and... Suno...

Around this time, I was working on some personal creative projects that required me to create official-looking reports on the gravitational forces of fictional celestial objects. Nerdy stuff, but the point was that Gemini just wasn't doing a great job, so I went to Perplexity. At first, it took some time to convince Perplexity to be okay with generating fictional content, which I actually found to be a positive, but it eventually began generating very official-looking reports. It even called out some scientific inconsistencies in my writing and helped me reshape some of the story to make sure it was at least semi-realistic as opposed to flat-out scientifically wrong.

Then, during a conversation with friends, we all began discussing what AI tools existed for music. We were toying around with some silly ideas for creating our own K-Pop group within our little writing projects and stumbled upon Suno, again trying to play with it just to make fun of it... and boy... that thing shut us up real quick. Suno was at Version 3, if I remember correctly, and the music wasn't great, but it wasn't terrible either, and if you tweaked it just enough, it would sound like real music. Then Version 4 came out and... well, that's when I learned that you can generate a 7-song album, publish it on Spotify, and get 100+ listens with minimal effort within the span of one weekend.

Again, was it amazing music? No, it was generic as hell, but we found that if we leaned into the "musical theater" side of things, it did a decent job at generating music that reminded us of Disney songs meant to appeal to everyone. Around that time was when I got a bit curious and began looking into Suno's legal situation... needless to say, we took our Spotify album down and all walked away having learned something nifty. Similar to NAI, I wish Suno at least told everyone where they sourced their training data from.

Practical applications with Copilot and Claude.

Today, I've settled on using Claude as my primary AI assistant. After all the looking around and digging I did, I found that out of all of the AI chatbots I had interacted with, Claude was the one that did a pretty good job at consistently sticking to the idea that it was a tool, not a companion. Granted, this was also around the time that OpenAI was getting a bunch of hate for toning down CGPT's companion-esque habits, but honestly? After what I noticed with Gemini, I understand now why OAI did that.

Either way, I'm not sure if Claude being strict about that line was a newer thing and I just happened to start using it around that time, or if it was purpose-built like that, but it became one of the many reasons I decided to stay with Claude. Also, yeah, I mentioned Copilot... I played with it... it was okay. I definitely prefer GitHub Copilot over vanilla Copilot; 'nuff said.

What really impressed me about Claude, though, and what finally made me buy a year-long license, was Claude Code. I had been toying around with both Gemini and Claude using their GitHub integrations, but they felt surface-level at best. Holy crap; Claude Code sold me. At first, it took me a minute to get over the fact that it runs in the CLI (sue me, I like easy-to-use UIs), but once I got over myself on that, I never looked back.

Back to the full circle.

And now that I'm on the search for a job, my disdain for AI has resurfaced, but I'm happy to say that it's a much more informed disdain. I don't think AI is inherently bad, but I do think that if we don't get a societal grip on how it should and should not be used, AI will be the root of several problems.

I once attended a demo where various team members and I were shown a fancy new AI tool meant to help with an organization's sales and scoping process. Many people in the room were oohing and ahhing as the founder of this 8-month-old company, started by 4 ex-Google/Microsoft/Oracle interns (yep, interns; I looked them all up), clicked from one screen to another, telling us all about what AI could do and how much it could accelerate the company's processes. To be totally honest, yeah, everything he said was super impressive. But he never actually showed any of that functionality. He showed us pre-populated screens displaying the results of all the AI processing he was hyping, and the only functionality he demoed in real time was a data mapper that looked a lot like Jira's "you're migrating Tasks and Sub-Tasks to a project that only uses Epics and User Stories" screen. And we all know that doesn't actually require AI.

The company was close to pulling the trigger. They all left the demo talking about how excited they were and what the next steps were going to be. I felt awful doing it, but I ended up sending a message to the group explaining my concerns. It wasn't until afterward that one of the leaders messaged me directly. Luckily, the message was to thank me for "bringing [them] down to earth" and shedding light on the major red flags that the other solutions architect and I were worried about.

They didn't end up buying the product, but it did drive home a major concern for me. The other solutions architect and I weren't originally even invited to that call. We learned it was happening in passing, and both of us happened to have time, so we attended out of curiosity. And if we hadn't, this company might have spent thousands on a half-baked, probably vibe-coded AI tool that hadn't even been around long enough to finish, let alone pass, a security audit. Granted... I suppose I'm being negative. Technically speaking, you could start a company, release a product, and get fully audited within 8 months.

Technically.

So, all that to say that my opinions on AI are now much more informed and I would consider myself cautiously excited. Are people using it properly? Nope. Will that be going on forever? I hope not. Can AI be a powerful tool that could save time, demolish barriers, and help teams collaborate more efficiently? Yes. In the right hands, with the right processes, and the right guardrails, AI can absolutely become the differentiator that moves a team above the bar.