December 09, 2017
Brand Integration inside Visual Content
Video consumption has increased drastically in the last few years, now accounting for almost 75% of all online traffic. No wonder it is a favorite among marketers and brands who want to catch the eyeballs of video viewers.
Options and Challenges
There are many options for advertising in video, such as pre-roll, post-roll, and mid-roll, and all of these ad units resemble traditional television advertising. All these formats interrupt the user who is watching the video and are usually out of context with the current video content. Given the option, most users will skip these ads.
These formats can also be easily blocked by ad blockers, which results in wasted marketing budget for brands. One technique to prevent ad blocking is AD STITCHING, where ads are stitched into the existing video on the server instead of on the client. But these ads still interrupt the user.
Brand Embedding
At Bonzai Engineering, we are evaluating whether a brand can be embedded directly inside the visual content in a non-interrupting way while still achieving high recall. Imagine watching the highlights of your favorite match on YouTube with a brand message embedded on the ground of the stadium, but behind the players; or watching episodes of your favorite series on Netflix while a brand logo shows up at specific places behind the characters. This can give the brand more visibility in a non-obtrusive manner as well as higher recall.
The easiest way to visualize this: remove the foreground (the moving parts) from the video, add the brand logo onto a suitable surface in the background, and then add the foreground back on top. Let's get into the technical aspects.
Step 1: Background Subtraction
This technique is used in many computer vision problems to extract the foreground for further processing, such as object recognition. There are a couple of algorithms that can be used:
- Frame Differencing: This is the best method for background subtraction when the foreground is entirely moving and the background is static throughout the video segment. It works by simply subtracting the pixel values of the frame at time t from the pixel values of the background image. Its main limitation appears when you cannot obtain a clean background image from the video, or when part of the foreground is static in some frames and moving in others.
- Accumulated Mean Frame: The average/mean is calculated over all frames in the video (or segment). The foreground is then extracted wherever the difference between the actual frame and the mean frame exceeds some threshold (which depends on how fast the moving parts are). This approach is an approximation, and the extracted foreground might not be that accurate; it can produce blurry results when the foreground is added back onto the background (Step 3), but it works well when the background is static and no residual images of objects are involved.
- Gaussian Mixture-Based Algorithms: These more sophisticated algorithms were introduced in the paper "An improved adaptive background mixture model for real-time tracking with shadow detection" by P. KadewTraKuPong and R. Bowden in 2001. The method models each background pixel as a mixture of K Gaussian distributions (K = 3 to 5). The weights of the mixture represent the proportions of time those colors stay in the scene; the probable background colors are the ones that stay longer and are more static. These methods handle complicated cases where images of objects are involved, but in practice we have found that adding the extracted foreground back on top of the background in Step 3 has not yet given good results for recreating the video.
- There are other advanced algorithms that combine a statistical background image with per-pixel Bayesian segmentation, which are out of scope for this problem.
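The first two approaches above can be sketched with plain NumPy. This is a simplified illustration, not our production pipeline; the function names and the threshold value are our own:

```python
import numpy as np

def mean_background(frames):
    """Accumulated-mean background model: average every frame in the segment."""
    return np.mean(np.stack(frames).astype(np.float64), axis=0).astype(np.uint8)

def foreground_mask(frame, background, threshold=30):
    """Frame differencing: a pixel is foreground when any colour channel
    deviates from the background model by more than the threshold."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=-1) > threshold
```

For the Gaussian-mixture family, OpenCV ships ready-made implementations (e.g. `cv2.bgsegm.createBackgroundSubtractorMOG` for the 2001 paper and `cv2.createBackgroundSubtractorMOG2` for a later variant), so there is rarely a reason to implement those by hand.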
Step 2: Brand Blending with the Background
The next step is to blend the brand message into the subtracted background. The blending algorithm depends on the following factors:
- Transparent / Non-Transparent Message: The blending algorithm changes based on whether we want to blend a transparent or a non-transparent message into the background.
- Blending Surface: The brand message needs to be perfectly aligned with the surface, matching its color, look, and feel, so that it appears to be exactly part of the video.
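A minimal sketch of the first factor: per-pixel alpha blending handles both cases, since an opaque message simply uses alpha = 1 everywhere while a transparent one uses fractional alpha. The function name and the placement arguments are illustrative; aligning with a real surface would additionally require a perspective warp (e.g. a homography) before blending:

```python
import numpy as np

def blend_brand(background, brand, alpha, top, left):
    """Alpha-blend a brand image onto a region of the background.

    alpha is a per-pixel opacity map in [0, 1]; alpha = 1 everywhere
    gives an opaque (non-transparent) message."""
    out = background.astype(np.float64)
    h, w = brand.shape[:2]
    region = out[top:top + h, left:left + w]
    a = np.asarray(alpha, dtype=np.float64)[..., None]  # broadcast over channels
    out[top:top + h, left:left + w] = a * brand + (1.0 - a) * region
    return out.astype(np.uint8)
```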
Step 3: Foreground Blending on the New Background
The final step is to extract the foreground from the old background and blend it onto the new background created in the step above. Transparent blending can be used to blend the foreground, but the quality of the final video completely depends on how neatly the foreground was extracted from the video, which takes us back to Step 1.
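Under the same assumptions as the earlier sketches, this recompositing step is a masked copy: wherever the foreground mask from Step 1 is set, keep the original frame's pixels; everywhere else, show the brand-augmented background:

```python
import numpy as np

def recomposite(frame, mask, new_background):
    """Paste the extracted foreground back over the new background.

    frame: original video frame; mask: boolean foreground mask from Step 1;
    new_background: the background with the brand message blended in."""
    return np.where(mask[..., None], frame, new_background)
```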
Conclusion
A single algorithm might not fit all types of video, and the threshold parameters of each algorithm need to be tweaked based on the frame rate of the video and how fast the moving objects are. We were able to successfully experiment with many VOD videos and integrate brands into the visual content.
Next Goals
At Bonzai Engineering, we are experimenting with many other moving parts of this technology:
- Automatic detection of surfaces in the video where a brand message can be embedded.
- Modifying the visuals of the brand message based on the surface where it will be blended, so that no manual effort is required.
- Extracting contextual information from the video based on the brand context, so that the right message can be shown at the right moment in the video.
And many more…
If these types of challenges interest you, give us a shout at grow@bonzai.co or reach out on LinkedIn! We are always looking for great people to join our team.