Creator interview: Zhonk Vision

In these creator interview posts, I’ll be asking a series of questions to people using AI tools for their work or artwork.
I hope you enjoy the read and learn one or two useful things ;).

Madebyai: Can you tell us who you are and how you ended up doing AI-generated art?

Zhonk: I am Zhonk, a visual artist from Malaysia and part of the NĒXT community, a collective that focuses on experiplay, education, art, and technology. Check out my work here:

POTRET MAYA by Zhonk Vision

Everything starts with a prompt. My friends Diqalurima and Vuevossa used to actively discuss AI art, and we ended up creating the 10 Words challenge, in which we each create a single sentence from a ten-word prompt and keep a never-ending story alive with AI visuals. The results were amazing and interesting, so we started looking into what more we could explore.

This made me wonder what AI art was, so I looked up VQGAN+CLIP, PYTTI, and GPT-3. I started with mobile apps like Wombo Dream and Starrie AI, and API and web-based apps like Midjourney, DALL·E, and other AI tools, and then I got more into Disco Diffusion. That is where everything started.

When my friend @ChopzArt told me about the model he made with a Google Colab notebook and how he put together multiple AI models in the same notebook, I got excited. Learning things like this blows my mind and makes me want to learn more about the technical side of it.

As I experiplayed around with Disco Diffusion, I found a lot of AI tools that can do much more than just make visual art. I’ve learned a lot from the Disco Diffusion community by understanding the workflow and some technical aspects of things like art style, prompting, software, and hardware. So here I am, arrived at the other side of a dream, doing animation with AI.

Madebyai: As a visual artist, how do you see AI fitting into your workflow?

Zhonk: I decided to spend a lot of time learning and experimenting with animation for visuals, projection mapping, XR, and understanding interaction with visuals. AI cuts down my time and comes up with instant concepts for my backstory, which I can comfortably edit and elaborate on just by tweaking the prompt and then use as footage for my visuals. It doesn’t come up with exactly what I expect, but with the magic of After Effects and other editing tools, everything falls into place.

Madebyai: You are making these cool videos where we go deeper and deeper inside an evolving artwork. What is the idea behind them, and how do you do it?

Zhonk: The idea for the “How deep did you go” videos came when I was looking for an idea for my VJ loop jamming for projection mapping. My friend Vuevossa had got a new projector, and I wanted to try it out. The fastest way I can create a new video is with Disco Diffusion, so I managed to make a 1,000-frame AI-generated video with the Disco Diffusion notebook. At the time, it wasn’t easy to understand the raw diffusion model code, so I decided to experiment, and this is the result:

We were able to project them, which made me want to do more of it. I liked seeing my story told through AI, and I liked making the transitions between them. I also like seeing them projected on any wall, which is where I got the idea for my dream of exploring space with AI.

Madebyai: You are experimenting a lot and very fast. What do you think is the next big thing that is going to happen in the coming weeks or months, AI-tech related?

Zhonk: I am fed an enormous RSS stream by my own personal RSS bot on Telegram about new developments in AI, especially those related to art.

Well, at some point we already had people convinced that AI was reaching sentience. So I’ve been observing how big tech companies like Adobe and Google, and open-source software like Blender, are growing by embracing AI in their technology. I believe all of these companies are already building AI-powered features into their tech for the future.

Some speculate that soon, with one simple click, you will be able to create AI-generated art in any Adobe software. I believe there are a lot of ways these companies can achieve that; it’s just a question of how.

The same is true for animation. As we can see from DeepMotion’s development, you can make a 3D animated avatar from any video, mimicking full-body, face, and hand tracking from a single video captured on any device; there has been a lot of growth at DeepMotion with AI. With DeepMotion you can easily mimic posture in an animation workflow without high-tech or expensive equipment.

When we combine modern art and AI, we can explore the limits of what is considered “art” and what is considered “science,” blurring the lines between the two. The result is a hybrid that gives us a new way to express ourselves through three main parts: data, knowledge, and intuition.

Ethan Smith (@Ethan_smith_20) once said:

“Maybe even further in the future, with Neuralink or something, we can just have a machine read our mind to make a picture.” – Ethan Smith

Madebyai: With your experience using the tools, you have probably discovered a couple of tips and tricks. Which ones would you be OK with sharing with our audience?

Zhonk: I had an idea called “Zhonk Exploration Dream”: why not describe my flight through space exploration with AI? This is how I do it. ZHONK EXPLORATION DREAM is totally an experiment. TL;DR: I am using Disco Diffusion v5.4 with a prompt that describes the whole journey of the animation. I started with a portrait of three pilots, which I turned into an init image and used as the first frame of the animation.

Initial image | first frame of the animation

I run Disco Diffusion mostly on its default settings, with the 512×512 Diffusion Uncond Finetune 008100 model and the secondary model turned on. I leave ViTB32, ViTB16 & RN50 enabled. Since I want the AI to run a little faster and generate more frames, I keep the settings as minimal as I can in Disco Diffusion.

I’m using around 175 steps, on the lower end, and leaving the rest at the standard settings for Disco.
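For reference, these settings map onto the Disco Diffusion notebook roughly like this. The variable names follow the v5.x Colab convention; the exact values are a sketch of what is described above, not a copied config:

```python
# Model selection: the 512x512 unconditional fine-tuned diffusion model,
# with the lightweight secondary model enabled to speed up sampling.
diffusion_model = "512x512_diffusion_uncond_finetune_008100"
use_secondary_model = True

# CLIP models left on, as described; larger ones off for speed (assumed).
ViTB32 = True
ViTB16 = True
RN50 = True
ViTL14 = False
RN101 = False

# Around 175 diffusion steps, on the lower end, to generate frames faster.
steps = 175
```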

I schedule my prompts based on frame numbers, and Disco Diffusion will render your prompts frame by frame. Here’s an example of how I structure my prompt:
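In the Disco Diffusion notebook, a frame-scheduled prompt is a dictionary keyed by the frame number at which each prompt takes over, with optional `:weight` suffixes. The prompts below are illustrative stand-ins based on the space-journey description above, not the originals:

```python
# Disco Diffusion's text_prompts: {frame_number: [list of weighted prompts]}.
# A negative weight (e.g. ":-1") pushes the image away from that concept.
text_prompts = {
    0: ["a portrait of three pilots in a retro spacecraft cockpit, highly detailed:2",
        "blurry, low quality:-1"],
    250: ["a spacecraft leaving the atmosphere, vast nebula, cinematic lighting"],
    500: ["deep space, swirling galaxies, surreal dreamscape, trending on artstation"],
}
```

Between keyframes, the most recent scheduled prompt stays in effect, so the animation drifts from one scene description to the next as the frame count passes each key.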

Notes: adding some modifiers to a prompt can make your visuals much more detailed. Technically, anything can be a modifier; to the AI, there is no such thing as a “modifier”. They’re just words with aesthetic implications. But adding “modifiers” is an easy way to describe the look of something.

These are some of the frames generated by Disco Diffusion.

This is one example of how it looks when it’s finished generating.

The video doesn’t end here. You can find the video I am still in the process of creating, along with the updates I post on Twitter, here:

Madebyai: What is the next thing you are going to try using AI tools?

Zhonk: During my time working with the Malaysia-based R&D powerhouse REKA in 2017, I assisted with the design for their autonomous vehicle project, which is where I first became familiar with the application of artificial intelligence (AI).

I started using AI tools on a regular basis after that, not just for visual generation; it grew into my digital work, animation, UI-to-HTML, and VJ work. I’ve been using AI to ingest trustworthy news, text-to-voice AI tools for archiving articles, and AI image-upscaling tools to enhance detail and resolution in my final digital work.

For AI-generated art, I would love to try all the new software coming into the space; that is the best way I can express my excitement.

So next, I would love to learn and explore bringing in voice, touch, taste, and smell to enhance these immersive experiences alongside my visuals, embracing AI technologies to create new ambiences and experiences.

Madebyai: Is there anything else you want to share with our audience?

Zhonk: I’ve observed a lot through a series of readings from Two Minute Papers and Google Arts & Culture, and I’ve learned that AI art is a great way to explore the potential of AI, since art creates the most real human experiences we can find. Science and engineering try to make exact copies of how people do things. Art, on the other hand, focuses on how we experience those things. Art is a place where we try to understand how people feel, and artistic expression is a place where our abstract ideas can be put into concrete terms.

Madebyai: Where can people find out more about you?

Zhonk: The best place to find me is on social media,

Instagram : Twitter :

I spend most of my time learning with the Disco Diffusion & NĒXT communities, especially about free education initiatives. You can join the community and share your thoughts on Discord. Reach out to me, let’s become friends, and let’s play and learn together.

I want to say a big thank you to Zhonk for sharing these insights with us. Also check out some of his creations, which I added to the “studies” section.
