Creator interview: Brian Penny

In these creator interview posts, I’ll be asking a series of questions to people using AI tools for their work or artwork.
I hope you enjoy the read and learn a useful thing or two ;).

Madebyai: How did you go from bank whistleblower to making an AI generate human faces from colors?!

Brian: I worked for Countrywide Home Loans in the years leading up to the financial crisis. Specifically, I worked in their operations as a Business Systems Analyst, acting as a translator between IT and business units through their digitization and automation efforts.

I led project management through their transition to Bank of America, and I became a whistleblower by leaking some internal documents to Anonymous 8 months before Occupy Wall Street ignited.

[Image: Jeff Bezos as Satan in hell]

I became a freelance journalist and watched the cryptocurrency and cannabis industries blossom while learning the ropes as what we now call a “content creator.”

I’ve always preferred to be behind the scenes, so I became a ghostwriter and worked with a lot of clients in those industries. A couple of years ago, I picked up two clients in the AI space and started researching it heavily, around the time GPT-3 AI writers started coming out. It was a perfect storm: as soon as I got access to the AI art tools, I dove right in.

Madebyai: It seems you are trying to “crack the code.” What have you discovered so far about MidJourney’s algorithm that could help the audience?

Brian: I like to approach things with an analytical mind. It’s fun to play with new technologies, but it’s also important to understand how they work. When I first joined MidJourney’s Discord server, it was so chaotic, and there was so much to learn. I had a basic understanding of at least GPT-3, neural networks, etc., but I wanted to understand the CLIP and diffusion models, so after a few days of just throwing words at it and being amazed, I started looking things up.

Because it’s so new, there wasn’t much documentation out there, so I started running my own tests to figure it out. I began by throwing entire rap songs into it to see what it would do (here’s what it drew for Eminem’s Cleaning Out My Closet). Then I decided to explore individual words: I chose colors and kept rerolling the word “orange” to see what it would do and how far I could take it away from the original meaning by rolling variants.

I found a woman in the orange, so I immediately checked red, white, and blue, where I found a woman in each of them as well. Then, once I realized I could transform any picture into a human, I spent about 10 days rolling one color after another. Once I had tangible proof, things just started to click, and I had a solid point of understanding I could use to start exploring more of the system.

[Image: Red]

What I learned from it is that “human” is the most important parameter in the system. AI was built by humans to serve a human audience using datasets created entirely by humans. And if you think about it, humans are the hardest thing for the AI to draw. We want it to recognize hundreds of species of plants and animals, but we expect it to know 7 billion of us by first and last name. And none of our pictures on Instagram or the Internet at large look human. We take photos in inhuman poses from inhuman angles. The AI has no clue what we actually look like.

At the same time, I was feeling out the AI art communities on social media. I kept learning from what everyone else was doing and decided it was important to spread what I was learning to help others explore. There are endless possibilities with AI art, and there’s no way I’ll ever explore all of it on my own. So sharing what I learn is my way of contributing to the ecosystem, and it helps ensure I get to see more without having to explore it all myself.

[Image: Black]

Madebyai: Probably, in the quite near future, the art of prompt writing will be an in-demand skill. What are your favorite prompts, and why?

Brian: I’m a fan of one-word prompts, because the color experiment taught me that every word is a color to the AI. After colors, people started collaborating with me, wanting to see what happens with other words, like Quest, Scar, Halloween, and Winter. As we explore the mutations in a word, it helps us understand what objects the system assigns to it.

For example, in Halloween, you can see ghosts, pumpkins, and black cats. For Donald, I ran a fun experiment to remove Donald Duck and Donald Trump and reveal Donald Glover using “--no duck, trump”. I think you’re right that prompt writing will inevitably be its own art, and there are people who will find ways to market those skills. But for me as a writer, I mostly enjoy exploring and discovering what the AI thinks each word we use actually means. Sometimes it’s obvious; other times it’s not what you’d think.
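(For reference, MidJourney’s --no parameter steers the generation away from the listed concepts. In Discord, the full command would look something like the line below; the base prompt is my reconstruction of Brian’s experiment, not his exact input.)

/imagine prompt: Donald --no duck, trump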

I do agree that prompting will become its own in-demand skill. It isn’t as easy as one would think to make great imagery with these apps. I can spend an entire day just trying to get one prompt to work exactly like I want it to. When I finally get what I want, it can be very satisfying. And while it seems easy from my end, it also took a LOT of time and research to zero in on everything.

[Image: DSLR_CAMERA_made_of_seashell_pearl_lense_realis_]

Madebyai: From a writer’s POV, what do you think of the AI text generators?

Brian: They’re garbage on their own TBH. I’m no more worried about being replaced by AI than a graphic designer should be. Yes, I can make GPT-3 write 1000 words on a topic, but it’s not going to be a good article. It’ll still require me to edit it and rewrite at least half of it. That’s because AI writers aren’t trained to understand facts, so they’ll make up dates, numbers, names, and even attribute fake quotes to people.

I’ve used them for several years now, and I think they’re interesting from a chatbot perspective, but you’d have to be crazy to publish something directly from an AI writer. Imagine you write a story about Elon Musk, and you get his birthdate wrong and attribute a quote to him that he didn’t say. He could sue you and end your entire company like Hulk Hogan ended Gawker. It’s very dangerous, and people need to be aware of that. Companies like Jasper advertise that you can get SEO-friendly blogs instantly, but you shouldn’t, both because of the liability and because Google considers AI content junk.

As a writer, I knew GPT-3 and the like scrape words just like these AI art programs scrape images. In fact, the AI art programs are based on GPT-3 datasets as well. We can easily see when someone invokes an artist like Shepard Fairey’s style in a visual design, but it’s impossible to notice when someone invokes my writing. IMHO, angry photographers and designers should set some case law in these easy-to-visualize mediums to give a chance to the writers of the world, whose work was stolen much earlier and much more easily.

Madebyai: What is the next big project you are going to try using Midjourney / other AI tools ?

Brian: I’m collaborating with several people to help them with their individual projects, like a graphic novel and an art exhibit. I don’t have any big dreams myself of selling any of this stuff. I’ve been a professional writer for over 12 years now, and I’m comfortable with my workflows for getting writing work. I have clients in the AI space, and I know that my high-level research and analysis will never go out of style.

Mostly, what drew me to MidJourney was the fun of exploring a new technology while satiating my creative side. But from a business perspective, the only value it has for me so far is the ability to generate images for my articles.

What sucks about being a writer is scope creep with clients. I’m contracted to write words (and words aren’t “just words,” as many clients like to say; they’re the culmination of study and experience), but often clients also want visuals. It’s neither cheap nor easy to source photographs, infographics, and general article imagery, and I don’t have the overhead to pay for a professional. So with MJ, I can generate an appropriate image on the side using keywords from the article and satisfy my contractual service-level agreement (SLA).

Because I’m a writer, my art doesn’t have to be perfect, as long as it fits the article I’m writing. In a business sense, that’s all I’m making money on with any of this. Everything else I’m doing is because I legitimately enjoy making AI art and want to contribute by helping those in the community who want to get better at it. So I keep writing guides about my discovery process on my blogs, so they’re available when the next person dives in to figure out what’s possible and how they can use it.

Madebyai: Is there anything else you wanna share with the audience?

Brian: AI isn’t going to replace people – it’s only going to augment our current skills. We have text, image, movie, and music generators that can create outputs in seconds that would take a human team months or years. But it still takes knowledge to generate the right outputs and polish them, removing errors and inconsistencies. Just because we have AI language generators doesn’t mean we can stop learning how to communicate with each other.

The reason AI writers can’t replace people is that they don’t exist in the real world and can’t report on what’s contextually important for people. An AI writer can write about the President of the United States shaking a foreign dignitary’s hand. But it doesn’t understand the cultural significance of such an event. It doesn’t analyze the possible reasons and outcomes of what it could mean for our foreign policy. It only “sees” the words, but our words have actual meaning.

Art has meaning too, and the AI may be producing some profound imagery, but it doesn’t actually understand what it is making. It only knows what is programmed into it. Because of this, ethics in AI is a very important aspect that shouldn’t be ignored. The AI can have 250 billion reference points, but if they’re only focused on specific demographics, it’s only mirroring those demographics.

For example, when you ask the AI to draw “a beautiful woman,” its definition of beauty is not your standard. It’s the standard of the overly Photoshopped imagery we assign to it online. Everybody knows that Facebook isn’t truly representative of our real lives, so try to understand that AI isn’t either. It’s just representative of the fantasies we publish for the rest of the world, and it will always need human guidance.

Also, we as people have a tendency to talk more than we listen. And that’s what we’re doing with AI – everybody is trying to find the right prompts to say to it to get instant gratification of what we want. But I learned more from just exploring variations of one word than I ever could have by talking. I suggest people try to listen to what the AI is actually outputting to gain better insight into what’s happening under the hood.

[Image: Phoenix sunset]

Madebyai: Where can people find more about your work and creations?

Brian: The best place to find me is on social media, like the MidJourney groups on LinkedIn, Facebook, Pinterest, and Reddit. I spend most of my time supporting the rest of the community, and I even do some Reddit Talks (their social audio feature) about AI art, ethics, and technology. Social audio is another emerging technology I’m a fan of, and I keep an eye on the industry. I’ll also inevitably start incorporating AI art into the animation stuff I’ve been working on for my YouTube channel, but that’ll be further down the road, I’m sure.

I want to say a big thank you to Brian for sharing these insights with us. Also check out some of his creations, which I added in the “studies” section.
