AI is revolutionizing the film industry by expediting VFX processes, unlocking creative possibilities, and redefining the role of human creativity in the era of ‘smart VFX.’

Visual effects (VFX) have become an integral part of almost every movie, but the process used to be gruelling. Animations were painstakingly created frame by frame, resulting in jerky movements, static backgrounds, and characters with limited expressions. This laborious work consumed a significant amount of time and effort, stifling the creative process.

Artificial intelligence (AI) has revolutionised VFX by expediting the process and expanding creative possibilities. Key VFX processes like rotoscoping (tracing over footage frame by frame to isolate elements into separate layers), inpainting (filling in removed or missing regions of a frame), and camera tracking (recovering the camera’s movement so that inserted objects or replacement action stay locked to the scene) used to dominate VFX workloads, often spanning months. Thanks to AI tools, this timeframe has now been condensed to weeks.
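To make one of these concrete: inpainting, at its simplest, fills a masked region from its surroundings. Here is a minimal Python sketch using OpenCV’s classical (non-AI) inpainting as a stand-in for the learned models production tools use; the file names are placeholders.

```python
import cv2

# Load a frame and a mask marking the region to fill
# (white pixels = area to reconstruct, e.g. a removed wire or rig).
frame = cv2.imread("frame_0001.png")
mask = cv2.imread("wire_mask_0001.png", cv2.IMREAD_GRAYSCALE)

# Fill the masked region from surrounding pixels. Production AI
# inpainting models do the same job with far more context awareness.
restored = cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

cv2.imwrite("frame_0001_clean.png", restored)
```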

Arraiy, since acquired by Matterport, is one of the AI and machine learning platforms employed by the film industry for VFX. Trained on vast amounts of existing VFX work and tailored for film and television effects, the platform streamlines the rotoscoping process. Manual rotoscoping typically consumes many hours, whereas Arraiy and similar AI tools can accomplish it in a fraction of the time. This means the final scene or shot can be reviewed swiftly, possibly even on the same day. If reshooting is required, it can be scheduled sooner, saving time, effort, and money.

A booming sector

While it’s still early days, many individuals are experimenting with publicly available and proprietary AI VFX tools to explore their capabilities. Essentially, AI systems learn from extensive data, make that knowledge readily accessible, and then execute commands by drawing on it. Consequently, the output, whether it involves thousands of blooming flowers, changing sunlight, or an explosion behind the actors, is not only produced faster but is also cleaner and more realistic, because it draws on a vast repository of information. Human creativity can then be channelled into refining the AI’s selections and determining the desired output, shifting the focus to ‘smart VFX’ that lets designers and users concentrate on their vision rather than the technical process.

Apart from Arraiy, various players, including Framestore, Pinscreen, Weightshift, Laika Studios, Digital Domain, Ziva Dynamics, and Autodesk, are actively developing AI for smart VFX.

The enhancements driven by AI are already evident in a wide range of videos. Animated films have moved closer to reality, with highly flexible characters and objects displaying lifelike movements and expressions. Previously, computer-generated imagery (CGI) characters were painstakingly created and animated frame by frame, a laborious and time-consuming process. AI tools provide simulations in which lifelike animation and movement already exist, allowing users to build characters on these foundations. This eliminates the jerkiness and rigidity often associated with earlier methods. Machine learning further permits the storage and reuse of movement details and other features.

One key reason for the realistic appearance and motion of characters lies in the vast array of choices available. For instance, one could select Harrison Ford’s nose, Arnold Schwarzenegger’s eyebrow tilt, or Jackie Chan’s shoulder movements, among countless others.

Producers and casting directors are taking AI a step further by employing it to find the ideal actors. For instance, they can combine and match features from various faces to create a composite face that best suits a particular project before searching for the perfect actors.

This same technology is also behind the creation of deepfake videos, where movements, speech, expressions, and more are added to a real person’s image to make it appear as though they are saying or doing certain things. This can be particularly useful when an actor is unavailable, for example because they have passed away. A stand-in or look-alike (a doppelganger) can be used, with the original actor’s body movements, facial expressions, and features superimposed onto the stand-in.

On fast track

The year 2021 witnessed significant advancements in AI-powered VFX. Autodesk, for instance, launched Flame 2021 (now in its 2024 version) with enhanced AI features, including machine learning driven human face segmentation and Dolby HDR authoring for streaming workflows. The face segmentation feature automates tracking, identifying, and isolating facial segments such as the eyes, nose, lips, laugh lines, and skin, enabling faster adjustments by VFX designers. Autodesk’s popular software tools like Maya and 3DS Max are also set to receive enhanced AI-powered features. The company partnered with visual effects studio Weta Digital to introduce cloud-based special effects software, WetaM, combining Weta’s proprietary effects tools with Autodesk’s Maya 3D software for animation, modelling, rendering, and simulation.
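Autodesk does not expose Flame’s internals, but the idea behind ML-driven face segmentation, locating facial landmarks and isolating a region such as the lips, can be sketched with Google’s open source MediaPipe library. This is an illustrative stand-in, not Autodesk’s implementation, and the file names are placeholders.

```python
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_mesh

# Landmark indices belonging to the lips, taken from MediaPipe's
# predefined connection set for that facial region.
LIP_IDX = {i for pair in mp_face.FACEMESH_LIPS for i in pair}

image = cv2.imread("actor_closeup.png")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with mp_face.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
    results = mesh.process(rgb)

if results.multi_face_landmarks:
    h, w = image.shape[:2]
    lm = results.multi_face_landmarks[0].landmark
    xs = [lm[i].x * w for i in LIP_IDX]
    ys = [lm[i].y * h for i in LIP_IDX]
    # Bounding box around the lips: the isolated segment a VFX
    # artist could then adjust independently of the rest of the face.
    x0, x1 = int(min(xs)), int(max(xs))
    y0, y1 = int(min(ys)), int(max(ys))
    cv2.rectangle(image, (x0, y0), (x1, y1), (0, 255, 0), 2)
    cv2.imwrite("lips_segment.png", image)
```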

Foundry, another creative software company, offers Nuke 14, which, like its peers, helps artists achieve superior results in less time, with features like up-resing (resolution enhancement), motion blur removal, tracker marker removal, garbage matting, and more. Nuke 14 boasts a flexible machine learning toolset, a new Hydra 3D viewport renderer, extended monitor functionality, and improved collaborative review workflows.

While many VFX companies are well known only within specialised circles, Adobe is a name recognised beyond them. As a frontrunner, the company has enhanced its visual tools with AI. Neural filters in Photoshop, for instance, introduce non-destructive filters that can be applied in seconds. Given that much of the labour-intensive background work in VFX revolves around removing objects from frames, Adobe’s Sky Replacement tool proves invaluable by separating the sky from the foreground, enabling users to edit or replace the sky as needed. Additionally, Adobe’s new AI offerings include the Speech-Aware Animation tool, which automatically generates animation based on recorded speech, complete with head and eyebrow movements.
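Adobe has not published how Sky Replacement works internally. As a rough illustration of the segment-then-composite idea, here is a deliberately naive Python sketch that uses a hand-tuned colour threshold where Adobe uses a learned segmentation model; the file names are placeholders.

```python
import cv2
import numpy as np

frame = cv2.imread("shot.png")
new_sky = cv2.imread("sunset_sky.png")
new_sky = cv2.resize(new_sky, (frame.shape[1], frame.shape[0]))

# Naive sky mask: bright, blue-ish pixels in HSV space.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (90, 30, 120), (140, 255, 255))

# Composite: take the new sky where the mask fires, keep the
# original foreground everywhere else.
result = np.where(mask[..., None] > 0, new_sky, frame)
cv2.imwrite("shot_new_sky.png", result)
```

A learned model handles clouds, haze, and fine foreground gaps (tree branches, hair) far better than any fixed threshold, which is exactly why the AI version is valuable.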

The German company Maxon has upgraded various offerings, including Cinema 4D R25, Trapcode Suite 17, VFX Suite 2, and Redshift RT. Maxon Cinema 4D assists in creating viral face filters, futuristic AI data sculptures, and CGI designs. With Trapcode Suite 17, artists can work with particles and geometric forms in the same 3D space, and the suite adds support for Adobe After Effects’ Multi-Frame Rendering. VFX Suite 2 introduces the Bang muzzle flare generator, and Redshift RT offers almost real-time rendering performance. These new features cater to both beginners and experts, according to the company.

What matters

In VFX, attention to detail is crucial. SideFX’s new AI solver tools, for example, enable the creation of photorealistic effects, significantly improving character effects like crowds, water, fire, smoke, destruction, hair, fur, cloth, and other soft body objects. These tools specifically impact the active portions of simulations, dramatically reducing processing times. Velocity blending and relative motion controls further enhance the photorealism, making high-speed scenarios more stable.
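As a generic illustration of that active-set idea (not SideFX’s actual solver), consider a toy particle system that simply skips particles which have settled; the names and constants here are invented for the sketch.

```python
import numpy as np

# Toy particle system: positions, velocities, and gravity.
N = 100_000
pos = np.random.rand(N, 3).astype(np.float32)
vel = np.random.randn(N, 3).astype(np.float32)
GRAVITY = np.array([0.0, -9.8, 0.0], dtype=np.float32)
DT, SLEEP_EPS = 1.0 / 24.0, 1e-3

for step in range(240):  # 10 seconds at 24 fps
    # Only integrate particles that are still moving appreciably;
    # settled ("inactive") particles cost the solver nothing.
    active = np.linalg.norm(vel, axis=1) > SLEEP_EPS
    vel[active] += GRAVITY * DT
    pos[active] += vel[active] * DT
    # Perfectly inelastic landing: a real solver would model bounce
    # and friction, but settling on impact keeps the sketch short.
    hit = active & (pos[:, 1] <= 0.0)
    pos[hit, 1] = 0.0
    vel[hit] = 0.0
    if step % 24 == 0:
        print(f"frame {step}: {int(active.sum())} of {N} particles active")
```

The printed counts shrink frame by frame, which is the whole point: work is proportional to the active portion of the simulation, not its total size.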

Understandably, action and animation movies are the most prolific users of AI in VFX, yielding stunning results. In Marvel’s Avengers: Infinity War, released in 2018, Josh Brolin portrayed Thanos; the film grossed over $2 billion worldwide, and its visual effects received acclaim and nominations at prestigious award ceremonies. The merging of Brolin’s real face and expressions with Thanos’s character was executed so seamlessly that Thanos appeared as a natural extension of Brolin, and vice versa.

Creating frames that were never actually shot exemplifies the magic AI can achieve. Users of Adobe, Apple, and Blackmagic’s DaVinci Resolve are already familiar with optical flow for frame rate conversion and automated effects features such as Face Refinement, Auto Color, and Optical Flow Speed Warp. Meanwhile, users of DAIN (Depth-Aware video frame INterpolation) benefit from its open source nature and its ability to interpolate animation from 15 frames per second (fps) up to 60 fps, quadrupling the frame rate.
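DAIN’s depth-aware networks do not fit in a snippet, but the core trick of synthesising an in-between frame from motion estimates can be sketched with OpenCV’s classical optical flow; the file names are placeholders, and the half-step warp is a crude approximation of what learned interpolators do.

```python
import cv2
import numpy as np

prev = cv2.imread("frame_10.png")
nxt = cv2.imread("frame_11.png")

prev_g = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
nxt_g = cv2.cvtColor(nxt, cv2.COLOR_BGR2GRAY)

# Dense optical flow: an (H, W, 2) field of per-pixel motion vectors.
flow = cv2.calcOpticalFlowFarneback(prev_g, nxt_g, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Warp the first frame halfway along the flow to synthesise the
# in-between frame that was never shot.
h, w = prev_g.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
mid = cv2.remap(prev, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("frame_10_5.png", mid)
```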

AI and machine learning (AI-ML) are also coming to the aid of filmmakers as screen sizes evolve. Tools like Premiere Pro (with Adobe Sensei), DaVinci Resolve, Final Cut Pro, and Google’s open source AutoFlip assist in intelligent reframing, adapting footage between video formats such as widescreen 16:9 and vertical 9:16.
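AutoFlip itself is an open source MediaPipe pipeline; the essence of intelligent reframing, find the subject and keep the crop centred on it, can be sketched with OpenCV’s bundled face detector. This is a single-frame simplification with placeholder file names.

```python
import cv2

# OpenCV ships Haar-cascade models; this one detects frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("wide_16x9_frame.png")
h, w = frame.shape[:2]

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Target a 9:16 vertical crop centred on the first detected face,
# falling back to the frame centre when no face is found.
crop_w = int(h * 9 / 16)
if len(faces):
    x, y, fw, fh = faces[0]
    cx = x + fw // 2
else:
    cx = w // 2
x0 = min(max(cx - crop_w // 2, 0), w - crop_w)
vertical = frame[:, x0:x0 + crop_w]

cv2.imwrite("vertical_9x16_frame.png", vertical)
```

A production reframer tracks the subject across time and smooths the crop path so the virtual camera does not jitter; this sketch shows only the per-frame decision.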

The tedium of searching for images or footage based on content is a common pain point. Major media asset management (MAM) systems now incorporate visual indexing and search capabilities. For instance, Axle AI can analyse content on the local network, while Ulti.media’s FCP Video Tag employs various analysis engines to generate batches of keywords for Final Cut Pro in a standalone app.
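Neither vendor documents its analysis engines publicly, but automatic keywording can be sketched with an off-the-shelf image classifier, here torchvision’s pretrained ResNet-50; the frame path is a placeholder.

```python
import torch
from torchvision import models
from torchvision.io import read_image, ImageReadMode

# Pretrained ImageNet classifier plus its matching preprocessing
# transforms and human-readable category names.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

frame = read_image("frame_0042.png", mode=ImageReadMode.RGB)
with torch.no_grad():
    scores = model(preprocess(frame).unsqueeze(0)).softmax(dim=1)[0]

# The top-5 labels become searchable keywords for the asset manager.
top = scores.topk(5)
keywords = [labels[int(i)] for i in top.indices]
print(keywords)
```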

AI in VFX has an additional role during the pre-production stage. Much of the shooting process relies on the choice of lenses. However, access to all types of lenses, especially rare ones, can be challenging. Frame.io’s Camera to Cloud (C2C) is utilised to create test shots with different lenses, proving particularly valuable to cinematographers, directors, and producers. This allows them to preview how a shot will appear through various lenses without the need for physical testing, streamlining logistics and enabling film professionals to focus on the best-fit choices.

Are you game?

Another industry poised to become a significant consumer of AI VFX is gaming. Over the past 10-15 years, the cost of developing video games has skyrocketed while game prices have remained relatively stable. Visuals play a substantial role in a game’s appeal, as evidenced by the popularity of online multiplayer battle royale games like PUBG and Fortnite Battle Royale. Games are becoming more intricate, featuring highly detailed and lifelike visuals within expansive open world environments. Developing a AAA game can cost anywhere from $60 million to $80 million, or even more than $100 million. Beyond making games more dynamic, AI in gaming aims to train the software itself to become more intelligent, paving the way for AI-driven tasks of increasing complexity.

However, within the gaming development community, there is ongoing debate regarding the extent to which AI, particularly self-learning AI, should be employed. Many gamers expect a degree of predictability in their games, as excessive AI sophistication could render games overly unpredictable and, in turn, less enjoyable to play.

Therefore, while smart VFX is poised to transform the movie industry, it’s unlikely to lead to widespread job redundancies. Instead, certain roles and skills may diminish in significance, while the demand for new skill sets is expected to rise.

Jainardhan Sathyan, a Los Angeles native, excels as a VFX Producer and Supervisor at Foxtrot X-Ray, a Hollywood boutique VFX studio. He holds a Master’s degree from the New York Film Academy, Los Angeles. Previously National Director – Content at WPP, the world’s largest advertising agency, he created compelling shows for global brands like Pepsi, Unilever, GSK, Kellogg’s, LG, and Lufthansa in partnership with studios like Viacom, Sony, Fox, Disney, and Sun Network. Jai’s unwavering dedication has earned him 12 film awards, two social media awards, and two branded entertainment awards.
