Creator interview: John Miller
In these creator interview posts, I’ll be asking a series of questions to people using AI tools for their work or artwork.
I hope you will enjoy the read and learn one or two useful things ;).
Madebyai: The hype around AI-generated art has been huge recently, with DALL·E, Midjourney, Stable Diffusion, and the like.
I am curious to know a bit more about how you ended up there.
John: Agreed, it’s been absolutely exploding, and I love it. These platforms have brought so many new faces and conversations to the art table, and that, for me, has been really refreshing. Personally, I found myself drawn to AI art back when NVIDIA showcased GauGAN in 2019 as a painting tool (now called Canvas). The possibility that a computer could take suggestions and render something completely anew just stupefied me. Since then, it’s really just been a waiting game. There have been workflows and plugins that kind of dipped a toe in the water – but nothing like we’re seeing now. Nothing like this.
Madebyai: Could you share with us what your job is, and when and why you started using AI tools for artwork/work?
John: I’m the Chief Creative Officer at Launchvox, where we develop animations, interactive experiences, and XR content as solutions for our clients’ needs. My primary role is leading and developing our content while keeping our fidelity high, and most recently, the pleasure of exploring AI art. There are a lot of places I could go with this, but what I will say is that where we’re seeing a lot of success is inspiration and concept development. The process of mood boarding, Pinterest searches, and Googling has really been consolidated into one central exploration, with those earlier methods serving more as augmentation.
On a personal level, I like to take images from AI art and redo them to tell a visual story.
Here’s a sample of such an exploration:
Madebyai: You probably learned a couple of tips during the journey, is there anything you can share with the audience? Maybe what is your best prompt to generate nice visuals almost all the time? Or how to quickly improve a generated image?
John: I don’t have a favorite prompt, per se, but when trying to create trends in 3D illustrative looks and feels, I’ll name in the prompt the renderer that might produce them, which has been effective. For example: for cartoony 3D models, Cinema4D/Blender; for good product lighting with cinematic appeal, it might be ray tracing/V-Ray; or if it’s more dynamic/VFX-centered, I’ll use Redshift/Houdini/X-Particles.
Explore iterations a good bit before you give up and try a new string of prompts. I’ve found more success in continuing down the rabbit hole than starting over fresh. If you do want to start over, though, as a suggestion, you could try uploading the image you were happiest with and then using it as a foundation with new prompts.
I’ve heard from a lot of people that Midjourney’s chat bot is cumbersome, as they don’t really know how to visualize the potential outputs or even how to use their weights. The good news is, there are websites out there that help you develop your prompt string, and the best one I’ve found is Noonshot. It shows you visuals and gives you a great sense of what you’re working with; it’s just not as friendly on mobile.
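To make the renderer-keyword and weighting tips above concrete, here is a toy Python sketch of a prompt builder. The function name and structure are my own illustration (not a tool John mentions); the only assumption taken from real tooling is Midjourney’s `term::weight` multi-prompt syntax.

```python
def build_prompt(subject, renderer_tags=None, weighted_terms=None):
    """Compose a Midjourney-style prompt string.

    renderer_tags: renderer/engine keywords that steer the look
                   (e.g. "Cinema4D", "Redshift").
    weighted_terms: dict of term -> weight, emitted with Midjourney's
                    `term::weight` syntax.
    """
    parts = [subject]
    if renderer_tags:
        parts.append("rendered in " + ", ".join(renderer_tags))
    prompt = ", ".join(parts)
    if weighted_terms:
        weighted = " ".join(f"{term}::{weight}" for term, weight in weighted_terms.items())
        prompt = f"{prompt} {weighted}"
    return prompt

# Example: a cartoony 3D product shot steered toward Cinema4D/Redshift
print(build_prompt(
    "cartoony 3D mascot holding a soda can",
    renderer_tags=["Cinema4D", "Redshift"],
    weighted_terms={"cinematic lighting": 2, "flat background": 0.5},
))
```

The point is less the helper itself than the habit it encodes: treat renderer names and weights as reusable building blocks rather than retyping them each iteration.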
Madebyai: I often see your LinkedIn posts around AI tools, workflows, and creations. What are the best use cases so far, in your opinion?
John: Ideation, by far, which I mentioned a bit earlier. It’s a boon to a workflow, helping you visually brainstorm ideas quickly and beautifully, because let’s be straight here – AI art is quite a capable art director on its own.
Outside of that though, here’s a few I’d suggest:
Custom HDRIs using the panoramic feature
Textures for bump/normal/displacement maps (the prompt *seamless* might also help with tiling)
Combining multiple photos together, seamlessly (DALL·E seems to be the only one capable of this, for now)
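On the texture use case: one quick way to sanity-check whether a generated texture will tile cleanly is to compare its opposite edges. This pure-Python sketch is my own illustration (not a tool John mentions), treating the image as a list of rows of grayscale values in [0, 1]:

```python
import math

def seam_error(img):
    """Mean absolute difference between opposite edges of a texture.

    `img` is a list of rows (each row a list of pixel values).
    A value near 0 suggests the texture tiles without a visible seam;
    larger values mean the left/right or top/bottom edges clash.
    """
    h, w = len(img), len(img[0])
    horizontal = sum(abs(row[0] - row[-1]) for row in img) / h        # left vs. right edge
    vertical = sum(abs(a - b) for a, b in zip(img[0], img[-1])) / w   # top vs. bottom edge
    return (horizontal + vertical) / 2

n = 64
xs = [2 * math.pi * i / n for i in range(n)]
# A sine-based texture with whole-number frequency wraps around cleanly ...
tileable = [[(math.sin(x) * math.sin(y) + 1) / 2 for x in xs] for y in xs]
# ... while a plain left-to-right gradient produces a hard seam.
gradient = [[i / (n - 1) for i in range(n)] for _ in range(n)]

print(seam_error(tileable))  # small
print(seam_error(gradient))  # much larger
```

A check like this can run automatically over a batch of generations to filter candidates before they ever reach a bump/normal/displacement slot.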
Madebyai: Thinking about AR/VR and metaverses, how do you see these AI tools fitting in, in the near future?
John: For AR, outside of artwork and being creative for funsies, I would say shopping and e-commerce are probably going to utilize it much more quickly than other industries, as they have the budget and are currently poised to be the most consumer facing. In fact, we’re already seeing some of it happen, with real-time concepts being integrated on live models for fashion.
I wouldn’t be surprised if one of the first mainstream uses we see is a Shopify-type of system, where users can view themselves in an AR mirror while AI generates custom prompted clothing for them, letting them try it on while moving around. Once the user sees something they like, the AI does a “similar image search” through a visual database of exclusive content and then links them to the page in hopes of converting a sale.
Take it further and imagine Etsy giving its users the ability to work closely with the maker-space, by both visualizing content the user creates merged with content the Etsy creator makes, opening the door to a more collaborative experience and most likely increasing sales.
I could go on: tattoo parlors with an AR app that builds custom ai art, while letting a patron visualize it on themselves; Vinyl coloring or window tint for cars; murals on walls; landscaping and home-renovations, etc.
It’s nuts, because these aren’t ideas with mere future potential – these are literally possible right now, with a small, motivated team that can do some basic UX alongside the know-how to take the right APIs and build them around a good customer experience. I mean, if I may, this is exactly what Launchvox is doing right now for a handful of clients, and all we’re seeing is excitement to make it work.
Madebyai: What is the next project you are going to try using AI tech/tools on?
John: I can’t speak too much about it, but we are developing some exciting AI integration that is game-engine agnostic for a number of potential clients, and the use cases range vastly.
On the personal side, I plan to finish up a series I’ve been doing of reimagining some of the Pokémon from Red/Blue and then re-doing the render completely. You can check out a sample below:
Madebyai: Thinking about this upcoming AI tsunami, what, in your opinion, will be the next “dike” to fall?
John: I don’t really know, to be completely frank. I truly believe we’re witnessing the next great renaissance, and slowing it down might currently just be impossible, let alone stopping it. Instead, our focus should be on integration and on making it safe for consumers, who might be targeted in their daily consumption of social media.
Madebyai: Is there something else you would like to share about AI-generated stuff?
John: AI is going to change a lot. Whether that’s a good thing or a bad thing is anyone’s guess. Don’t get me wrong, it’s so exciting to be here witnessing this massive shift, let alone being a part of the exploration, but I do wonder if we’ll look back and be thankful for what we did here.
Madebyai: Where can people find out more about your work ?
John: For what I do on the personal side, please check out my work on Instagram: https://www.instagram.com/johndoesartwrong/
Professionally, check out my LinkedIn where I post unique R&D pipelines along with shares of others deeply connected in the space: https://www.linkedin.com/in/jwmrendering/
I also encourage you to check out my company’s LinkedIn, where we often showcase explorations of AI and its use cases: https://www.linkedin.com/company/launchvox/
I want to say a big thank you to John for sharing these insights with us. Also check out some of his creations, which I added in the “studies” section.