László Gaál uses AI to shoot Porsches in Ferrari land

"The humor behind the machine", by François Reumont

Contre-Champ AFC n°364


With humor, know-how and an unsettling quality, the fake Porsche advertisement “The Pisanos”, produced by the Hungarian colorist László Gaál, has in recent days become one of the stars of AI film generation on YouTube. Made entirely with the Google Veo2 engine, to which he had early access, this little film features almost all the clichés of advertising (Tuscany, smiles, speed, etc.). Unsettling in its quality and especially in its making-of, the second part eventually takes the viewer by surprise when we suddenly learn that everything was (almost) fake. László, who gives himself the pleasure of talking to the camera - in the only shot actually filmed - also lends his voice to the commentary. He talks to us (in his lovely Claude Monet t-shirt) about creating with this new tool. (FR)

How did you conceive the project?

László Gaál: The process for the Porsche spec ad is a little bit funny, as nothing was planned ahead. Many people asked if I had a storyboard of the shots, while I didn’t even know I would get access to Veo, and I didn’t have this idea developed at all.
How it started is that I did 2-3 shots of people walking and turning their heads when a nice car passes by. Then I realised I could include multiple models from the same brand from different "ages" and create a nice little montage of different generations of a car model. But "generation" also has a human aspect, so I changed the people to different generations of the same family, and this became the base of the story of the Pisanos; the rest was trial and error with different ideas. Then came the tagline "Turning heads for generations".

So it’s not a streamlined process. Every time I had a new thought, sitting in a café or riding my motorcycle, I stopped and noted it down. Then, getting back to the computer, I checked whether it was possible to prompt and went ahead experimenting with how to make it possible. I’m thinking a lot about the boundaries of filmmaking these days, as usually budget and location make you feel your hands are tied, but what happens if you remove those? The bad thing is that it’s hard to stop, because if everything becomes possible, nothing stops you from turning your new idea into a real shot and iterating forever. For the first week I woke up around 2:30am each night because a new idea had come to my mind and I wanted to see if it worked or not, which is not super healthy.


How long did it take you to make the film?

LG: From what I can backtrack, I generated shots for the ad over 12 days and for the behind-the-scenes over 4 days, and I spent some days on editing, sound, having the voiceover done, etc. Altogether I think it spanned three weeks, including the first tests.

Many complain about the lack of control over the tool... what do you think?

LG: I think for some genres control is very much needed and required; for commercials, for example, you need a lot of control. But for narrative stories the current tools are more than enough to tell a story, as several talented people have already proven. Just as we now have a lot of control over AI images, I think we will see similar features rolling out this year that will give more control over the output.


Is this the end of teamwork for cinema?

LG: I don’t think it will end teamwork; this is just a transitory phase where we see many "one man show" people creating full videos. When production has to scale up, we will see more people working on a single project together, and we will also see traditional directors working with AI people, instructing them, directing the generations, etc. I think right now we will see new kinds of talent and even new jobs emerging: if someone is good at products they will specialize in that, if someone is good at people they will do that, etc.

Do you sometimes feel like you’re chasing technology?

LG: This is one of the bad things currently: by the time someone learns a new tool, another platform comes out with another tool of better quality. I think we will have to get to a phase where we have 1-2 good general AI models and maybe 1-2 specialized ones, but this might not come anytime soon, as these models are developing at a much faster pace than human tools. Just compare it to the development of 3D graphics and 3D animated short films, which took decades. GenAI video was almost unusable in 2023, it became very interesting in 2024, and this year I think we will see more and more productions giving in to genAI.


Basically, what changes radically, from your point of view, when it comes to story-making and storytelling?

LG: As for the workflow, I think the most interesting part is that it only used text-to-video. It had to have some kind of consistency and some kind of set movements, but everything was driven by text only. Generating, editing, sound mixing, voiceover, even writing the script were all happening in parallel. This, for me, is the most interesting part of working with genAI: the workflow is not linear, you are jumping between thinking about a new part of the script, generating the previous idea, editing the one already generated, etc. It is a new kind of storytelling where it is possible to edit something that was generated only mere seconds ago and ideated only a few minutes ago.


So the only real thing in there is you talking to the camera?

LG: Yes, I thought AI lip-syncing would be very strange, so I recorded myself for the interview. The Monet shirt is from a Pull and Bear series!

(Interview by François Reumont for the AFC)

  • See the video:

    https://youtu.be/VqLWWYfCEbI?si=bwkp2g8LYTuADN8F