
Creator interview: Jacques Alomo
In these creator interview posts, I’ll be asking a series of questions to people using AI tools for their work or artwork.
I hope you will enjoy the read and learn one or two useful things ;).
Madebyai: Can you tell us who you are and how you ended up doing AI-generated art?
Jacques: My name is Jacques Alomo. I’m a self-taught German UX and motion designer with 15 years of experience working for brands like BMW, ASUS, Allianz, and Rolls-Royce Power Systems https://jacquesalomo.de/. In parallel to freelancing, I’m also leading a motion design department at youknow https://www.linkedin.com/company/youknow-gmbh/, where I’m also a partner.
I love to optimize processes without sacrificing quality. This also led me to downscale my leadership role at youknow by handing over responsibility for most of the leadership aspects to my team. Ultimately, they lead themselves, which means they are leaderless, and I act as a coach and only contribute my experience where necessary.
This allowed me to focus on other topics like AI. The area of artificial intelligence and visually generated content has always appealed to me. For me, however, it’s less about the pure creation of beautiful graphics and photos. I am much more excited about the quest for practical use cases in a business context and about optimizing the day-to-day operations of corporations or creative agencies.
This is precisely the area I research and experiment with daily. The results are usually short articles or visual results in picture or video form. In the meantime, I collect and consolidate all my know-how in an AI presentation for creative agencies, which introduces the systems, tools, and application scenarios. Here is a small peek:
AiXAgencies
Madebyai: As a freelance designer, how do (or will) you use AI-generated art in your workflow?
Jacques: This entire area is still in its infancy. And it is precisely the variations in the quality of the results that do not yet enable a perfectly optimized production workflow. But there is a lot of exciting testing going on.
In the future, I see many potential use cases that will also find their way into our everyday lives as creatives. These are just two examples:
Faster and cheaper deep-face creation, with a wider range of uses and better accessibility. That way, commercials can use this technology without requiring huge 3D or special-FX departments. Take this commercial by PAYBOX as an example: https://www.youtube.com/watch?v=grVZaRSXbp8 The technology used was developed by D-ID and enables anybody to create great facial animations. Instead of a 3D modeling and animation workflow, an AI is used to bring the character’s face to life.
By combining visual AI with other AI tools, high-quality social media posts can be created quickly, especially now. Here’s an example of a test production; in total, it took about 1.5 hours.
AI Experiment – Bunny by Jacques Alomo https://vimeo.com/747997541/b7a993eccd
AI – Experiment – Church Tower by Jacques Alomo https://vimeo.com/742746977
Madebyai: You are experimenting a lot and very fast. What do you think is the next big thing that’s going to happen in the next couple of weeks or months, AI-tech related?
Jacques: It is incredibly challenging to predict the future at the moment. I have already failed at it a few times, amusingly:
In a first test screening of my AI presentation for creative agencies, someone asked whether it’s possible to generate different situations with the same object/character. I had to say no, since, for now, this was not possible: “It will be available in a few months to a year, but isn’t at the moment.” About 20 hours later, Google introduced DreamBooth https://dreambooth.github.io/, exactly the solution the person had asked for. And a few weeks later, with textual inversion on Stable Diffusion, we are moving closer to this use case at a speed I would never have expected.
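To make that use case concrete, here is a minimal sketch (my illustration, not Jacques’ workflow) of how a textual-inversion embedding can be reused across different scenes. It assumes the Hugging Face diffusers library, the standard Stable Diffusion 1.5 checkpoint, and the public “cat-toy” example concept from the sd-concepts-library; swap in your own trained embedding for a real project.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base Stable Diffusion pipeline (assumed checkpoint; requires a CUDA GPU as written).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a learned textual-inversion embedding; "<cat-toy>" becomes a pseudo-word
# that stands for the trained object/character.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# The same subject, dropped into different situations simply by changing the prompt.
for scene in ["on a beach at sunset", "in a snowy forest", "on a city rooftop"]:
    image = pipe(f"a photo of a <cat-toy> {scene}").images[0]
    image.save(f"cat_toy_{scene.replace(' ', '_')}.png")
```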
Let’s look at another experience I had with predicting future developments: I talked to someone about a visionary voice-based visual AI image generator. It would work like this:
- First, we describe our desired image verbally, in our own words. (Recording our voice is, obviously, an established base technology.)
- This recorded description is processed by a voice2txt system and sent to a visual image generator, which creates the image. (This can also already be handled.)
- Now a second system scans the image and identifies all the elements it can recognize.
- These elements are visually marked and offered to us for further edits.
- Finally, I can ask the system to adapt a marked element, e.g., “Change the design on the shirt,” where ‘the shirt’ refers to a marked object.
Shortly after the conversation, Arnaud Atchimon presented a rough voice-based prompter for DALL·E.
Again, a few days after this demo, I found txt2mask, a Stable Diffusion add-on that enables users to create text-based masks for further processing: https://github.com/ThereforeGames/txt2mask
You can see that we are not far away from the science fiction-like voice-based visual editing workflow.
Half a year ago, I still imagined this to be at least 5-10 years in the future. Now a lot of the puzzle pieces already exist. The only step left is to combine them.
A LAST MINUTE UPDATE: While writing this answer, I found this great video demo by Shopify’s Russ Maschmeyer, which conveys the flow I dream about: https://is.gd/Z4A27q
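For readers who want to tinker, here is a rough, hypothetical sketch of how the pieces of that voice-based editing flow could be wired together today. It is not the txt2mask add-on or the demo linked above; it assumes the openai-whisper package for voice2txt, CLIPSeg (via Hugging Face transformers) as a stand-in for text-based masking, and the diffusers inpainting pipeline for the actual edit, with placeholder file names.

```python
import torch
import whisper  # openai-whisper, assumed here for the voice2txt step
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
from diffusers import StableDiffusionInpaintPipeline

# 1) voice2txt: turn the spoken instruction into a text prompt
stt = whisper.load_model("base")
instruction = stt.transcribe("change_the_shirt.wav")["text"]  # e.g. "change the design on the shirt"

# 2) text-based mask: locate "the shirt" in the current image (txt2mask-style)
image = Image.open("generated.png").convert("RGB").resize((512, 512))
seg_processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
seg_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
inputs = seg_processor(text=["a shirt"], images=[image], padding=True, return_tensors="pt")
with torch.no_grad():
    heatmap = seg_model(**inputs).logits.sigmoid()  # low-res relevance map for "a shirt"
mask = Image.fromarray((heatmap.squeeze().numpy() * 255).astype("uint8")).resize(image.size)

# 3) edit only the masked region with an inpainting model
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
edited = pipe(prompt=instruction, image=image, mask_image=mask).images[0]
edited.save("edited.png")
```

In a real product, the detection step would mark every recognizable object up front and keep the loop running, but even this straight-line sketch shows that the individual pieces already exist as off-the-shelf models.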
Madebyai: With your experience using the tools, you have probably discovered a couple of tips and tricks. Which ones would you be OK with sharing with our audience?
Jacques: EbSynth is an incredibly powerful tool. Almost 2 months ago, I was able to do some initial tests with AI-generated content but had to let the tool rest for the time being due to the rapid further development in the field.
Creature Test | Stable Diffusion Img2Img x EbSynth by Scott Lighthiser
The videos recently published by Scott Lighthiser have impressed me, and I hope to find the time to get back to using EbSynth myself.
AI Experiment – Waking up from Cryo Sleep by Jacques Alomo https://vimeo.com/740034086
New Ebsynth result – live footage action by Javoraj from Paranormal studio https://vimeo.com/752416518/0600c47bb8
Don’t quit after you’ve generated your image! Because that’s when the genuine, exciting part begins. What else can you bring out in this single image? What story can you tell? How can you best convey that story to your audience? Elevate the image to something bigger and use different tools to do so. Here are some tools I use in my workflows:
Facial animation:
- MugLife (mobile/free and paid)
- MyHeritage Deep Story or Deep Nostalgia (web/paid)
- D-ID (web/paid)
- MyTalkingPet (mobile/free)

Scene animation:
- https://convert.leiapix.com/#/ (web/free)
- EbSynth (Windows/Mac/free)
- After Effects (Windows/Mac/paid)

Video editing/effects software:
- After Effects (Windows/Mac/paid)
- CapCut (mobile/Windows/Mac)
Use these tools to transform your simple generated images into a story and into an experience for your viewers.
AI Experiment – John Wick Action Figuren by Jacques Alomo https://vimeo.com/752422355
AI Experiment – Living Faces by Jacques Alomo https://vimeo.com/742748657
AI Experiment – Pixel Story (3h production) – By Jacques Alomo https://vimeo.com/742552392
Madebyai: Where can people find more about you?
Jacques: Feel free to follow me on LinkedIn at www.linkedin.com/in/jacques-alomo
In the upcoming weeks I will switch my German motion-design-based website to an English AI-focused one. There I will bundle all my experiments, production workflows, articles, and discoveries, so a visit is worthwhile: https://jacquesalomo.de/
Thank you very much for having me on madebyai!
I want to say a big thank you to Jacques for sharing these insights with us. Also check out some of his creations that I added in the “studies” section.