Founder interview: Jakub Lukowski

In these creator interview posts, I’ll be asking a series of questions to people using AI tools for their work or artwork.
I hope you will enjoy the read and learn one or two useful things ;).

Madebyai: Can you tell us who you are and how you ended up doing AI generated art?

Jakub: I am a programmer working on my own SaaS apps and projects. I love building products that make people’s lives easier and help them get their job done. Before getting into AI art, I created a lot of different projects over the last few years. Some of them failed, and some are used by people daily. One AI-related example is a text-to-speech app, BlogAudio, which started as a way to make blogs more accessible by adding AI-generated narration to articles. It later evolved into a multipurpose text-to-speech app covering use cases like video voiceovers, podcast creation, and even listening to your favorite news articles.

In my free time, I became interested in AI art when I first saw DALL·E 2 by OpenAI. I waited over six months for access without getting it. By the time I finally received an invite, Stable Diffusion had been released publicly. That changed everything. I started playing with many image-generation interfaces and was amazed by the possibilities the models open up. I knew that I wanted to build something with this technology and be part of the fantastic community of like-minded early adopters.

Madebyai: You are currently building a platform where people can generate AI art. Can you walk us through your thought process? How did you decide to make this? What is your USP (unique selling proposition)?

Jakub: It started as an idea for a product targeted at programmers. I wanted to offer a simple API to let developers easily use Stable Diffusion without the hassle of owning, installing, and maintaining GPU infrastructure. But before building it, I decided to verify the idea. So I created a simple landing page and posted it to developer communities. Long story short, it failed – I chose not to build it, as there wasn’t enough interest to make it work.

But still, while verifying the developer-targeted product, I was playing with Stable Diffusion and exploring its possibilities. This was especially hard because I don’t own a GPU, so every time I wanted to experiment, I had to spin up a costly virtual server in the cloud. After some time, I got so frustrated that I decided to build my own image-generation service that works fast and can simply be accessed from a website. After releasing the initial version, I was amazed by people’s response: it was used to generate nearly 100k images over the first couple of days. I knew that I was building something people wanted.

Since then, the service has grown to three different tools for creating AI art. This distinguishes it from other interfaces: you get multiple AI tools ready to use on a single website. That is also my plan for the future of the service – to be the next-generation image creation suite.

The tool I am most proud of, which is unique to the platform and also the most powerful one, is the AI Editor. It’s a graphics-editor-like interface that combines multiple AI models, letting you create anything with text on an infinite canvas. It’s easy to use but extremely powerful. It allows users to outpaint images beyond their original borders, erase and inpaint new objects, or even replace a whole set of concepts using only words.

Madebyai: Thinking about the next products that will soon be released using AI (GPT-3, image models, etc.), what do you think will be easy to create, and what will be challenging?

Jakub: Undoubtedly, many products will be built on top of Stable Diffusion, applying image generation to specific business verticals, either for creating new content or generating variations of existing content. These are easy to build, as they only require connecting existing pieces of technology. The only high barrier to entry is the cost of GPU infrastructure. A great example of such a product is InteriorAI by @levelsio, which automatically generates interior design mockups based on photos of your home.

The biggest challenge right now in building AI products is scale. Running AI models requires expensive GPUs that can only process one task at a time. Making everything more efficient is hard, but it is necessary to create powerful products that generate thousands of images in seconds. Other products that will be challenging to develop are those that require training new models from scratch or fine-tuning existing ones on large datasets.

Madebyai: What do you think is the next big thing that’s going to happen in the next couple of weeks/months, AI-tech related?

Jakub: It’s hard to predict the future, and even harder to tell what will happen in any specific week. I think in the next couple of years, we’ll see many extraordinary AI-native products being built in different markets. I am betting on tools that help humans perform simple but tedious tasks.

The image generation models will definitely keep improving, producing even better results. And with next-generation GPUs, everything will be faster, maybe even real-time. All of that will lead to the popularization of AI products and mass adoption.

Future AI models will be trained on every possible kind of data. We can already see that with the recent release of the text-to-video model from Meta. We will also see a lot of variations of open-source models trained on specific data – for example, models fine-tuned to generate a particular style better.

Madebyai: With your experience using the tools, you probably discovered a couple of tips and tricks. Which ones would you be OK with sharing with our audience? (They can be related to AI art, AI text, or both.)

Jakub: The most crucial piece of advice is that it’s all a creative process. You can’t expect to create stunning art just by giving one sentence to an AI.

Generative art has a low barrier to entry, as writing and imagination are the only skills you need. Still, the reality is that you need to spend some time learning the possibilities of the model, how the different parameters work, and how to combine them to produce great results. Therefore, I recommend that everyone just play with the tools and learn, by trial and error, how to master them.

In my opinion, the best results are achieved when multiple AI image pipelines are used during the creation process: starting with text-to-image, expanding the result beyond its borders with outpainting, adding details with inpainting, and then changing styles with image-to-image. That’s just one example of a process I found works exceptionally well for creating stunning art pieces. But everyone can find what works best for them; that’s the art of it.

Madebyai: What is the next thing you are going to try using AI tools?

Jakub: I would love to make DreamBooth work on the platform. DreamBooth is a text-to-image fine-tuning method. It allows models like Stable Diffusion to learn new concepts relatively quickly, which can later be used in image generations. For example, you can teach the model to generate paintings of you based on your photos. Unfortunately, it has quite a high barrier to entry, as you need some coding knowledge, you need to know how to train a model, and you need to own a powerful enough GPU. But I think I can make it simpler and more accessible – just upload a set of pictures on the website, wait an hour, and it’s ready to use.

Madebyai: Is there anything else you want to share with our audience?

Jakub: The future is going to be incredible 😉

Madebyai: Where can people find out more about you?

Jakub: The best and only place is on my Twitter @jakublukowski. I tweet about what I’m currently building and share some behind-the-scenes and stats.

I want to say a big thank you to Jakub for sharing these insights with us. Also check out some of his creations, which I added to the “studies” section.
