Creator Interview: Pete Sena

In these creator interview posts, I’ll be asking a series of questions to people using AI tools for their work or artwork.
I hope you will enjoy the read and learn one or two useful things ;).

Madebyai: Hey Pete, I discovered your work on LinkedIn and OpenSea, and I thought it was really inspiring! I have a background in design and design thinking, so I resonate with what you do.

Can you tell us who you are and how you ended up doing AI-generated art?

Pete: My name is Pete Sena. I’m a creative technologist who loves to tinker with how new technologies can enrich our lives. I ended up first playing with generative art around the time NFTs started to get interesting, and when I got access to the closed beta of Midjourney I was blown away by the possibilities. 

[Image: House of the Dragon launch]

Madebyai: Looking at your LinkedIn profile, I see that you have a lot of experience in business and in helping businesses. You are also into Web3 and AI, which makes you the perfect person to ask about adoption, pain points, and so on. I am wondering why, in your opinion, we don’t yet see AI-generated art everywhere?

Pete: I think there’s a lot of confusion around AI art. The reality, as you and many others may know, is that AI is all around us, from the apps on our phones to the marketplaces we transact on daily. AI stabilizes and corrects bad photos and does so many other things.

I think there’s a lack of comprehension of, and fear around, AI art. Many think it’s unoriginal, and a ton of legal copyright questions are also starting to flare up. But the main reason we haven’t seen AI art everywhere is that it’s still very early. The more people start to train their own AIs and custom diffusion models, the more we will see AI as part of the artist’s toolkit. I don’t believe it should be taken as the be-all and end-all.


Madebyai: You wrote in your Medium article that you know someone in the AAA game industry who is using Midjourney to generate quick concepts and mood boards. Do you have other examples of real business use cases around AI art so far?

Pete: Absolutely! Industrial designers are using it to generate product ideas. Tech-savvy photographers are using it to conceptualize shots in pre-production. Writers are using it to generate fast and efficient stock photos for their articles. The use cases for AI-generated art continue to increase by the day.

Madebyai: The field is evolving pretty fast. I still remember that just a couple of months ago, the Midjourney subreddit was full of people making fun of AI-generated “weird hands and eyes,” but that already feels like an old memory. What, in your opinion, will be the next breakthrough in the short term?

Pete: I think the two biggest breakthroughs in the innovation and adoption of AI art come from Stability AI releasing Stable Diffusion as open source, and from the widespread news and media coverage of OpenAI’s DALL·E 2.

In the case of Stability AI, the company took the open-source route – which history has shown us is great for community and for innovation built on sharing and expanding knowledge. Cars, planes, drones, phones and computers all have Linux to thank for the power of open source. And in DALL·E 2’s case, OpenAI has done an amazing job marketing the project, along with the incredible work they are doing with GPT-3. GPT-3 is paving the way for a new era of startups and tech innovation. Text-to-image is just the beginning; we’re already seeing text-to-video and text-to-audio. I have even benefited from GitHub’s Copilot, a tool that makes AI your coding copilot, writing and explaining code for you.

Madebyai: You started an NFT project combining AI-generated text and artwork, and I think it’s a pretty interesting concept; it’s almost like a prototype of what a newsfeed could become in the future.

An AI scanning news and trends, picking out interesting moments, then creating content and registering it on the blockchain (as proof, or as a legacy). How did you come up with this idea, and what are the reasons behind this project?

Pete: As a creative entrepreneur and marketer, I have always been interested in the cultural zeitgeist. Humans working with an AI is one thing, but I was curious to see how I could combine and remix some of my favorite new inventions (the blockchain, NFTs, text-to-image AI and GPT-3) with anthropology and news media.

The experiment began with a question as most of my inventions and experiments do: “What if I could capture the essence of the cultural zeitgeist of the day and memorialize it forever on the blockchain? And what if AI could facilitate the entire process?” 

I did it purely from a place of passion, but it quickly turned into an obsession. I’ve got something really big planned for what’s next 😉. The Zeitgeist has already attracted the attention of some major media organizations, and some have even spun off “look-alikes” of the idea. I consider imitation the greatest form of flattery.

Madebyai: I believe that something interesting might happen at the intersection of AI and NFTs.

At the moment, artists are afraid that they will lose their jobs because AI can, and will, copy their art and execute much faster and much cheaper. The first boom around NFTs was fueled by a lot of copies of great artworks uploaded to the blockchain as proof of owning fake art (don’t get me wrong: I created and sold NFTs, and I am an enthusiast and early adopter).

What I imagine in the future is that some artists will be able to train their own AI models to help them work faster, and by registering them on the blockchain they will be able to prove they are the creator (or that their own AI model is). That will probably bring more trust, and people will be willing to buy their artwork.

Maybe the artist will still do some custom gigs, but they will also be able to generate cheaper and faster artwork for people to buy. My question for you is: what do you think of this idea, and what do you think will happen in the long term as AI and the blockchain combine?

Pete: I love this question! Let me unpack this for a moment. 

To start, I think that value is determined by the market. Just because something gets faster or cheaper to make doesn’t make it more valuable. Art is highly subjective and is still beholden to supply-and-demand economics. I believe the blockchain creates a trustless way to memorialize content, whether fungible or non-fungible, based on the nature of the smart contract the creator uses to mint it. This is a great advancement in technology, but just because someone can make a satirical image of Donald Trump being a buffoon doesn’t make it worth millions of dollars like Beeple’s artwork. The challenge with any market or technology is volatility. An NFT once worth millions can be worth a few hundred dollars as the markets shift.

What I think is uniquely interesting here, as we start to get more explainable AI and more transparency of intellectual property on the blockchain, is the dynamic ways we can create art and get more immersed in the stories and lore the artist creates for their audience.

Years ago I discovered a company called DNA11, which makes art from your DNA sequence. I proudly have a small piece of art made from my DNA, and my cofounder has one made from his daughter’s DNA. Fast forward to today: I have a digital frame called a Meural in my home studio. What I love about it is that I have some art and NFTs from Async Art that change as events in the world do. As time passes and events unfold, the art changes. If we start to think about the possibilities of generative and personalized art, it’s going to be tremendous for performance artists and their ability to use tokens, NFTs and unique collectibles that range in rarity, uniqueness and heavy personalization.

The beauty of the blockchain is that you can’t cheat it: it’s immutable.

Madebyai: What do you think is the next big AI-related thing that is going to happen in the next couple of weeks or months?

Pete: We’re already seeing a huge explosion of AI startups. These range from products built on top of things like Stable Diffusion (art generators, texture makers, performance art) to software-as-a-service (SaaS) platforms built on top of GPT-3 that do things like create marketing copy, A/B test ad creative, and analyze and improve how we learn from a computer. Some are saying GPT-3-powered chat is going to be a “Google killer” because of its ability to have conversations with us.

We’re close to AGI (artificial general intelligence). Meta’s demo of Cicero is pretty provocative, demonstrating an AI’s ability to play Diplomacy at a human level. It’s getting interesting. I’m super excited because, as a creative technologist, I’m getting a lot of interesting consulting work from engineers and mathematicians who are so close to the technology that they cannot easily see and imagine its possibilities and use cases. With my small brain, I am able to add value for them by seeing it through a creative lens.

Madebyai: With your experience using the tools, you probably discovered a couple of tips and tricks. Which ones would you be ok with sharing with our audience? (Can be related to AI art or AI text or both)

Pete: Certainly! In the case of AI art, I suggest starting with a process I use for everything, which I call EIIOS: excavate, imitate, innovate, optimize, scale.
I would start by looking at sites like Lexica and PromptHero to excavate and understand the bounds of possibility, observing the language and prompting approaches of the wisdom of the crowd. If you’re not familiar with these sites, they’re like Pinterest for AI prompts: you can see others’ art and the prompts they used to create it.

From there, I would suggest moving to imitation: imagine an image and create a similar version from your own prompt. Then I would start to play with innovation (pushing beyond what’s been done), which could mean running your own local copy of Stable Diffusion, training your own model on your own image set, or using one of the awesome custom models on Replicate or Hugging Face to do things like animate images or create video from text, and pushing it even further. Next, I would optimize so you can get a reliable and consistent outcome, similar to the visual language and consistent art direction we established months ago.

I challenge folks to scale, spinning that process like a flywheel, creating new things and pushing boundaries. I also suggest digging into the tech to understand how it works. A YouTube search for “stable diffusion” or “how text-to-image AI with CLIP works” will return some amazing content to dive into.
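The optimize-and-scale steps of this loop can be made concrete with a tiny prompt template. This is a hypothetical sketch, not any real tool’s API: `build_prompt` and its parameters are illustrations I’ve made up, though `--ar` is Midjourney’s real aspect-ratio flag. The idea is to keep the stable parts of a prompt fixed so you can vary one element at a time and keep what works.

```python
# Hypothetical sketch of a reusable prompt template for the EIIOS loop.
# build_prompt and its parameters are illustrative, not any tool's API;
# "--ar" is Midjourney's aspect-ratio parameter.

def build_prompt(subject, style_refs, modifiers, aspect_ratio="3:2"):
    """Assemble a text-to-image prompt from reusable, named parts."""
    parts = [subject] + list(style_refs) + list(modifiers)
    return ", ".join(parts) + f" --ar {aspect_ratio}"

# Excavate/imitate: start from a prompt pattern found on Lexica or PromptHero.
base = build_prompt(
    "a dragon circling a burning castle",
    style_refs=["cinematic lighting", "matte painting"],
    modifiers=["highly detailed"],
)

# Innovate/optimize: vary one element at a time and compare the results.
variants = [
    build_prompt("a dragon circling a burning castle",
                 style_refs=[style], modifiers=["highly detailed"])
    for style in ("oil painting", "ukiyo-e woodblock print", "film noir still")
]

print(base)
# a dragon circling a burning castle, cinematic lighting, matte painting, highly detailed --ar 3:2
```

Templating like this is what makes the “scale” step repeatable: once a combination produces a consistent visual language, you only ever swap one named part at a time.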

Madebyai: What is the next thing you are going to try using AI tools?

Pete: I’m currently working on an idea for an AI creative director. Think Don Draper from Mad Men, combined with KITT from Knight Rider and a little bit of JARVIS from Iron Man (Tony Stark’s superintelligent assistant). It’s proving to be quite challenging, but I already have some interesting results. Perhaps I can share more when they’re ready?

Madebyai: Is there anything else you want to share with our audience?

Pete: I invite everyone to tinker and explore their curiosity about what could be possible with these tools. With that in mind, it’s time to consider AI not as competition but as a complement: a partner and co-creative tool that can help eliminate creative blocks so you can ship products faster. The interplay between people and machines is a brilliant space to play in.

Madebyai: Where can people find out more about you?

Pete: I’m @petesena on Twitter and pretty active there. You can grab my newsletter at or you can check out my brand experience consultancy Digital Surgeons.

I want to say a big thank you to Pete for sharing these insights with us. Also check out some of his creations, which I added in the “studies” section.
