Today we've got a super treat for you. We're talking to Thomas Lundström, who is a creative professional who has been doing product images for many years and is now focused on how AI can speed up that process so he can create better product images faster. And in this episode, I'm also going to let him teach you what you can already do today with your product images to make them better.
So this is a real treat, a namesake from Finland. It's also cool that we have more Nordic guests. So check out my conversation with Thomas Lundström from Grove Media.
So Thomas, it's cool to talk to a fellow Thomas. That's always appreciated. Whenever I meet someone with the same name, I'm immediately best friends with them.
That's just our role. Likewise. But we are in a very interesting time when it comes to generating AI images, generating AI videos, etc.
You've been doing photography and videography for around six years by yourself at Grove Media. I'd like you to go back to the first few years you were running it: how was your workflow then versus how it has been over the last six to 12 months, just to understand a little about how you used to work and how you work now. Yeah.
So in the beginning, I started out doing mostly restaurant and food photography, because I had a friend who had recently founded a restaurant here in Helsinki. He needed someone to take care of the marketing and photography. I told him I was interested, and that's where I got my footing in the photography and marketing scene.
I didn't have a degree in anything and hadn't studied photography or anything like that. I was just interested in it and learned by myself. That's where I caught the bug, so to say.
In the beginning, my workflow was quite simple: testing out a lot of different things in product and food photography in the traditional sense, trying to learn as much as possible, and trying to look beyond the small circle of food photography as well. So I tried out, for example, portrait photography, drone photography, or anything else I could get my hands on, just to learn as much as possible within this creative field. And that has been my mentality all along: just learning, learning.
Then along this path, a couple of years back, these AI tools started to appear, which was quite an interesting turn. Lately, during the past couple of years, I've been playing around with workflows that combine AI and photography, so that I can use, for example, AI backgrounds for my product photography: I take the photos myself and use AI tools to create the environment. Then, with some basic Photoshop, I combine the two into a final product that I couldn't even have imagined creating without these AI tools.
So that's kind of the progression of it. That's cool. And one thing I noticed a lot in your content on Instagram, YouTube, TikTok and so on is that it's still a combination: I think you say the model usually delivers maybe 90 to 95% of what you would like, and then your Photoshop and editing skills take it the rest of the way.
So, if I understand it correctly, it's more of a way to reach a result faster. I also remember a couple of years ago on TikTok it was super popular to post behind-the-scenes clips of people running around with their phones trying to be creative, you know, throwing berries in the air, smoke machines and all that. And now it seems much easier to just do that the AI way and then combine the real things and the AI together. Yeah.
I think it's really interesting, but at the same time it's kind of an identity crisis as a product photographer. When the first image generator tools like Midjourney were starting to get some traction, I started playing with them quite early, and they were not that great, but at the time it felt like, wow, okay, here's actually an opportunity to use these tools in a unique way. Because I'm a small solo entrepreneur, a freelancer in Helsinki, Finland. I could never get the budget or the opportunity to go take amazing product photos in a jungle or at a volcano or whatever location you can come up with, but by utilizing these tools, that was suddenly possible.
So that was the aha moment for me: okay, this actually has value for me, and it's value I can provide for customers that other small, solo freelance photographers maybe can't if they're not aware of these tools or not using them. That was very interesting. But at the same time, of course, you're thinking: okay, that was a couple of years back, and the newer models are getting better and better.
As you mentioned, they are maybe 85% there; in one or two years they'll be 99% there. So it's getting closer and closer to the point where you just need an iPhone and an idea, and then you can create whatever you want.
Yeah. Which the entrepreneurial part of us is super excited about, but business-wise it's pretty scary, in the sense that you're actually helping your customers replace you. Yeah.
And the listeners of this pod are small business owners. They run e-commerce stores with maybe 200,000 to 300,000 euros in yearly revenue. In your opinion, where could they utilize AI the most at the moment, speaking in May 2025? What do you think are the biggest wins a small e-commerce owner can get by themselves? Yeah, that's an interesting question.
By themselves, let's assume, with no photography or graphic-design skills needed, or as few as possible. When it comes to the image models specifically, I would try to use them to create more visually pleasing content around what you want to sell. The challenge at the moment is that if you have a product and you ask AI to generate an amazing sales photo of it to list on Amazon or wherever, it will usually mess up the label and the text on the product.
If you just try to force it to create an image with your product in it, it depends a lot on the product. If it's a really simple label on a really simple product, it can work; for example, I did one with my girlfriend's nail polish, which turned out amazing, but that was a lucky one. With most products, it doesn't get it right.
So you can't really use it as a full replacement for photographers and designers yet, but you can use it as a way to come up with ideas that you can then present to your design team or photographers. You can use it to create background visuals: for example, if you just have a render of your product, or a simple product shot that isn't that fancy or doesn't look good on its own, you can quite easily create visuals around it with these image tools.
So there are a lot of ways you can utilize it even without that knowledge. But of course, if you do have some Photoshop and photography knowledge, it opens up a whole new bucket of things you can do. Yeah.
So, about creating some of the images you're sharing online: for example, you had a Kaya Cosmetics product. Again, I think you're stealing a lot of things from your girlfriend. Maybe we should talk about that separately.
But she has a lot of products, obviously, with a lot of cool product designs on them. I think I saw a pink one that was really cool. How long did it take you to create that product image? So, this is the thing about this type of workflow, the one I use, where you create the background for the specific product, then take the photo and just blend it into the environment.
So the whole workflow is quite easy and fast to do. For example, the pink one that you were talking about, we can maybe put it on the screen. Yes.
That would take a really long time to do in a real studio setting: you would have to build out the set, get the lighting perfect, tweak everything and then take the photo. That's at least a whole day's photo shoot you would need to go through. But with this workflow, you can just sit down and ask the AI for the background you want.
Sometimes you get it within the first five tries, so that takes maybe 10 to 15 minutes. Then you simply have to try to match your photo of the product at a similar angle.
So let's say it's just a straightforward shot of your product standing in a nice pink studio setting. Then you simply need to take a similar photo of your product, and that takes, let's say, 10 to 15 minutes.
Then, in another 10 to 15 minutes, you can remove the background from your photo and use some basic adjustments in Photoshop to make it look like the product was actually in that studio. And the best thing about this is that if you generate the AI backgrounds from the same angle, you can make, for example, five different backgrounds and just reuse the same photo you took.
Then you have five photos of your product in different settings instead of one. This is where it gets really exciting: for people who know how to use these tools, you can create so much more content that looks really professional without spending a lot of money and time booking a studio and everything like that.
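To make that cut-out-and-composite step concrete for listeners who prefer to script things, here is a minimal sketch in Python. It assumes the rembg and Pillow libraries, and the file names and sizes are placeholders; Thomas's actual workflow uses Photoshop, so this is only an illustration of the same idea, not his method.

```python
from pathlib import Path
from PIL import Image
from rembg import remove  # assumed installed: pip install rembg pillow

# Cut the product out of the studio photo once (returns an RGBA image
# with a transparent background).
product = remove(Image.open("product_shot.jpg"))

# Reuse the same cutout on several AI-generated backgrounds that were
# created from the same camera angle, as described above.
for bg_path in Path("backgrounds").glob("*.png"):
    background = Image.open(bg_path).convert("RGBA")

    # Scale and position the cutout to roughly fit the scene; these
    # numbers are placeholders you would tune per background.
    scaled = product.resize((600, 900))
    x = (background.width - scaled.width) // 2
    y = background.height - scaled.height - 150

    composite = background.copy()
    composite.paste(scaled, (x, y), scaled)  # alpha channel used as mask
    composite.convert("RGB").save(f"composite_{bg_path.stem}.jpg")
```

The paste itself is the easy part; as described above, the real work is taking the product photo so its angle, lighting and colors already match the generated backgrounds.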
Yeah. So that's how I understand it. And again, the AI we have today is the worst it's ever going to be, right? It's only going to improve.
So text and everything is just going to get better and better, which I think is super exciting. But using a reference image and changing the background is what you're describing as the best approach for now. And I think our listeners have some knowledge and basic tools.
I think a lot of them are using Canva, for example. So they can actually cut the product out of the product photo, insert the background generated by the AI tool, and start combining and playing around in that way. When it comes to the product photo itself, the one you're taking yourself, maybe against a plainer background and so on:
What are some things they should think about to make sure it's the best possible photo to carry forward into the rest of the process? Yeah. It's interesting with this workflow: it's kind of the reverse, I would call it, of traditional photography, where you first have to build the set and place the product, and then figure out the lighting and everything.
Here, when you create the background with the AI, everything is created at once, including the lighting and shadows. So you can look at the image and see, okay, the light is coming from the left side of the photo. That means that when I take the photo of my product, I place my light on the left side, trying to match everything as closely as possible: the angle the camera is placed at, the angle the light is placed at, and also the focal length, if you're using a professional camera, meaning how closely you zoom in on the product.
Is it a wide shot? Is it a close-up shot? Things like that. So you can really match everything to the AI background, and then in post it's quite easy to just replace the AI product with your own.
So those are the things I would keep in mind: where is the light coming from, how close is the photo taken, is it a close-up or a wide shot? And then any other details, like the background. For example, if there's a green forest in the background, I wouldn't shoot the product against a white backdrop, because then your product will have a white outline. I would also shoot against a color similar to the AI background you want to place your product in.
Yeah, that makes sense. And what about prompts? How should they think about structuring their prompts when they do this? Yeah. So this is something that changed quite recently with ChatGPT, in my opinion, because before, when prompting for backgrounds with Midjourney, you had to be quite specific.
You had to talk in this weird prompting way, just trying to get it to generate what you want. But now, with ChatGPT's new image model, you can basically just describe what you want in free language. If you're using ChatGPT, I would also recommend almost always using a reference image, because it gets so much closer to what you really want when you can provide both a visual and text.
That way it can understand: okay, you really want this style, but maybe you want to tweak something. For example, in one of my photos, I gave it a reference photo of a beauty product floating amongst, I think it was dragon fruit or something like that, and I asked it: I want this style of photo, but I want the fruit to be peaches. Then it knows this is the style you're going for, but you want peaches instead of dragon fruit.
So that's like a really effective way to quickly get results that you are satisfied with. Yeah. Okay.
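For listeners who would rather script this step than use the chat interface, here is a minimal sketch of the same reference-image-plus-plain-language idea, assuming the OpenAI Python SDK and its image edit endpoint; the model name, file names, and prompt text are placeholder assumptions, and Thomas does this directly in ChatGPT rather than through code.

```python
import base64
from openai import OpenAI  # assumed installed: pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A reference image plus a free-language description of the change,
# mirroring the "same style, but peaches instead of dragon fruit" example.
with open("reference_dragonfruit.png", "rb") as reference:
    result = client.images.edit(
        model="gpt-image-1",  # assumed model name
        image=reference,
        prompt=(
            "Keep the style, lighting and composition of this photo, "
            "but replace the dragon fruit with ripe peaches."
        ),
    )

# The edited image comes back base64-encoded; decode and save it.
with open("peach_background.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```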
That's cool. So far we've talked mainly about photos, but there's also more and more moving content: videos and animations. Have you experimented much with these, and have moving images and videos gotten better as well with these new updates? How is that going? Yeah.
I'm actually working on a video at the moment where I experiment with these as well, because they have gotten a lot better. What's exciting here, again, is that you start from a photo, and it doesn't even have to be one with an AI-generated background. But for this example, I can generate the AI background and place my own product into it.
Then we can take that final product photo and try to generate a video from it. So not only can you deliver a final product photo, you can also deliver, for example, a short push-in on the product in that environment, or whatever short movement you want to add. But I'm still experimenting with these and haven't really found a smashing hit.
It's more difficult on the video side, because the added movement quite quickly breaks apart if you're not really specific and careful about what you want it to generate, but it has also gotten a lot better. Okay. So what I'm hearing is that for amateur or semi-professional use, for the listeners we're trying to educate today, it's best to stick with the image side of things for the moment.
I mean, there are of course use cases where you can create good video as well. It's probably quite simple, five-second-long stuff; you shouldn't expect to be able to create a minute-long video with amazing shots of your product. I would set the expectation at: you can probably already, quite easily, with just some app on your phone, generate something that looks all right if the starting image is good enough.
So if you keep it simple, quite easy and low budget, then I think you can create some short ads, for example, using these tools, but it's definitely more difficult than the image side, I would say. Understood. Okay.
Then I would like to talk about some tools. Of course, these things are changing all the time; again, this is early May 2025.
What kinds of tools are you using today, both in your complete workflow and in terms of specific AI tools you think people should pay attention to and start playing around with? Yeah. At the moment I have mostly switched over to using ChatGPT, just because I think it's easier to communicate with and to give reference material to. We're also kind of entering an era of copying, almost stealing, because right now it's so easy to generate something somebody else has created just by using it as a reference for the AI. That's a bit of a gray area, in my opinion. But coming to tools, I think ChatGPT is definitely number one; it gives you the most flexibility in creating AI images.
It's also the most user-friendly. If you're not an expert at prompting or anything, that's all right, because you can just chat with it like a normal human. That's a big, big plus on ChatGPT's side, I think.
Then, as for other tools: Photoshop, of course, has its own AI functions built into the software, like Generative Fill. I personally use Photoshop as my main editing software for images, and Generative Fill specifically is really, really good and quite easy to use, I would say. So I would strongly recommend the combination of ChatGPT and Photoshop, because there are also other functions within Photoshop that make it easy to blend the images together.
So those are the two main ones for images, I would say. Maybe a close second to ChatGPT would be Midjourney, which is also an image generator. It's maybe a bit better on the artistic side, but it's also more complicated to talk to; you have to be more specific with the prompting and things like that.
But they are constantly updating these tools, so it could be that by the time this comes out, they've done a big update that makes it much easier to send it reference material and so on. These are maybe the main ones I would start with for images.
Then, if you're interested, there are a lot more tools, but those are maybe for more advanced stuff. Yeah. And I'm also going to leave the link to your YouTube channel in the description of this episode so people can find you and follow your content.
I think it's really cool how you keep us updated, experimenting and doing the work so people can see what's possible. The last thing I would like to know is more future-focused. If you look six to 12 months ahead, what are you most excited about, or what is your prediction of where we're going in the next six to 12 months in terms of AI and content production? Yeah.
I think within the next six to 12 months there will be an update, on whatever platform does it first; I would guess it will be ChatGPT and OpenAI. An update to their image model that is so good you can simply take a photo of your product with your phone, the one you want to place in an environment or create an amazing design or Instagram ad for, then just describe what you want, and it will do it with something like 99% accuracy, including the label and the text. That's my main prediction, because it was such a jump from the image generators before OpenAI came out with their model, in how accurately it was able to replicate your product and the label and text.
So it's not far off that they'll have it really locked down, and then I might have to look for a new job. Yeah, I was just going to ask what happens to you then. But that's probably the biggest one.
And then I think the video side will also keep getting better and better. Maybe that won't be 99% there, but it will be almost 80% there in 12 months at least. Just think back a couple of years; you can also Google what the same prompt gave you with Midjourney's first models versus what it gives you today.
Only a couple of years back, it gave you a pixelated mush of nothing, and now it gives you a perfect representation of a lion or a tiger or whatever the prompt was. That's just a good example of how insanely fast this AI is moving.
It's really mind-blowing. Are you more excited or worried about the future? I'm on the more excited side, because even though these tools will get better and better and kind of compete with traditional photographers and designers, it's still the people who are able to use them creatively and put their own vision into the product who will win in the end. So yes, on one side it will take jobs from people in marketing, photography and design.
But on the other side, if you don't adapt, learn these tools and really try to be at the forefront, then you're definitely going to lose your job. So that's kind of my attitude. Perfect.
Okay, Thomas, thank you so much for chatting with us today. Again, I'll leave all the links in the description so they can follow you and keep making that good content. Thank you.
My pleasure. And same to you, Thomas. Thank you.