
How Does Disney Create Magical Animations? – Josh Flori – Medium

One way to create beautiful animation is to exaggerate linear motion and make it non-linear. Give it a heart. Make it bounce. Make it linger. Don’t just show it to the viewer; show it with feeling.

Both the rules of animation and examples of animated movement can be seen here.

But to put it simply, if a thing is moving from point A to point B, Disney will NOT take the straight, linear path between the two. The path will be hypnotically accentuated, either having a slow-in and fast-out or fast-in and slow-out.
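As a toy illustration (mine, not from the article), this kind of accentuated path can be modeled with an easing function. A cubic ease-in starts slowly and accelerates, exactly the "slow-in, fast-out" pattern described above:

```python
# Toy illustration: linear motion vs. an eased ("slow-in, fast-out") path.
# The function names here are my own, not from the article.

def linear(t):
    """Constant-speed interpolation factor for t in [0, 1]."""
    return t

def ease_in_cubic(t):
    """Starts slowly, accelerates toward the end (slow-in, fast-out)."""
    return t ** 3

def interpolate(a, b, t, easing):
    """Position between points a and b at normalized time t in [0, 1]."""
    return a + (b - a) * easing(t)

if __name__ == "__main__":
    # Compare positions along a 0-to-100 pixel move at five time steps.
    for step in range(5):
        t = step / 4
        print(f"t={t:.2f}  linear={interpolate(0, 100, t, linear):6.1f}  "
              f"eased={interpolate(0, 100, t, ease_in_cubic):6.1f}")
```

At the halfway point in time, the eased path has only covered 12.5% of the distance, which is what makes the motion read as deliberate rather than mechanical.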

So I wanted to create a plot of what this specific path actually looks like over time. I got a bunch of Disney clips. For each clip, I picked something to track and plotted its movement as it moved up and down. This lets us isolate and better understand the specific paths of motion the animators used.

This was my basic process:

I found some easy-to-track Disney clips and tracked the vertical movement with OpenCV (cv2) in Python using the following script. Depending on the video I tried different tracker types, with a bit of trial and error. Overall I was frustrated by the poor performance and by the number of clips (maybe 20%) I had to give up on because they could not be tracked. The GOTURN tracker might yield better performance, but I did not try it. If it does not work as expected, I think there is room for another neural-network-based approach, since there were times when the target and its movement were clearly visible to the eye but were not tracked at all.

This code writes all frames from the Disney clips, with their bounding boxes drawn, into my downloads folder. After processing all the clips, I stitch the frames together into a single video file.

# Note: although the bounding box (p1, p2) is what is drawn on the frame,
# it is the centroid (middle of the bounding box) that gets printed.
import cv2
import sys

file_name = input("filename? ")
movement_axis = input("movement axis? ")

if __name__ == '__main__':
    # --------- position tracker ---------
    # Set up tracker. Instead of CSRT, you can also use one of the
    # alternatives below.
    tracker_type = 'CSRT'
    tracker = cv2.TrackerCSRT_create()
    # tracker_type = 'MEDIANFLOW'
    # tracker = cv2.TrackerMedianFlow_create()
    # tracker_type = 'MOSSE'
    # tracker = cv2.TrackerMOSSE_create()
    # tracker_type = 'MIL'
    # tracker = cv2.TrackerMIL_create()

    # Read video
    video = cv2.VideoCapture("/users/josh.flori/" + file_name + ".mp4")
    # Exit if video not opened.
    if not video.isOpened():
        print("Could not open video")
        sys.exit()
    # Read first frame.
    ok, frame = video.read()
    if not ok:
        print('Cannot read video file')
        sys.exit()
    # Select the bounding box to track on the first frame.
    bbox = cv2.selectROI(frame, False)
    # Initialize tracker with first frame and bounding box
    ok = tracker.init(frame, bbox)
    i = 0
    while True:
        # Read a new frame
        ok, frame = video.read()
        if not ok:
            break
        # Start timer
        timer = cv2.getTickCount()
        # Update tracker
        ok, bbox = tracker.update(frame)
        # Calculate frames per second (FPS)
        fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)
        if ok:
            # Tracking success: draw the bounding box and print the centroid
            p1 = (int(bbox[0]), int(bbox[1]))
            p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
            centroid = ((p1[0] + p2[0]) // 2, (p1[1] + p2[1]) // 2)
            if movement_axis == "y":
                print(centroid[1])
            else:
                print(centroid[0])
            cv2.rectangle(frame, p1, p2, (255, 0, 0), 2, 1)
        else:
            # Tracking failure
            cv2.putText(frame, "Tracking failure detected", (100, 80),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)
        # Display tracker type on frame
        cv2.putText(frame, tracker_type + " Tracker", (100, 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50, 170, 50), 2)
        # Display FPS on frame
        cv2.putText(frame, "FPS : " + str(int(fps)), (100, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50, 170, 50), 2)
        # Display and save the result
        cv2.imshow("Tracking", frame)
        cv2.imwrite('/users/josh.flori/downloads/' + file_name + str(i) + '.png', frame)
        i += 1
        # Exit if ESC pressed
        k = cv2.waitKey(1) & 0xff
        if k == 27:
            break

For processing the data, I took the centroid y-values printed to the terminal and pasted them into Excel. For each sequence of y-values, I subtracted the starting value so that every sequence starts at 0 (this produces a more visually pleasing plot). I also multiplied all values by -1 so the plot moves in the direction of human perception: the way cv2 reads y-values, as something on screen moves up, the y-value goes down, which is not what we want when the clip and the plot are shown side by side in the final video. I added an index column from 1 to the total number of frames, then loaded the data into R, where all the frames of the plot video were created with this script:
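The Excel step could equally be done in a few lines of Python. This is my restatement of the transformation described above, not the author's code:

```python
# Sketch of the post-processing step: shift each sequence to start at 0
# and flip the sign, since OpenCV's y axis grows downward while a viewer
# perceives upward motion as increasing.

def normalize_trace(y_values):
    """Subtract the starting value and negate, so the trace starts at 0
    and upward on-screen motion plots as increasing y."""
    start = y_values[0]
    return [-(y - start) for y in y_values]

if __name__ == "__main__":
    raw = [240, 235, 225, 210, 200]   # an object moving up the screen
    print(normalize_trace(raw))       # → [0, 5, 15, 30, 40]
```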

library(ggplot2)

p = read.csv('/users/josh.flori/disney_movements.csv')
for (i in 1:nrow(p)) {
  # The following lines create the scrolling-window effect, which I found
  # to be the most visually pleasing of all the options. If I did it over
  # again I would increase the window size by 20 pixels.
  print(i)
  if (i < 50) {
    b = 1
    e = 70
  } else {
    b = i - 49
    e = i + 21
  }
  t = head(p, i)
  q <- ggplot(data = t, aes(x = x, y = y)) +
    geom_point(color = "#016FB9", size = 2.9) +
    xlab("time (frames)") +
    ylab("distance moved (pixels)") +
    theme(axis.text.x = element_blank(),
          axis.ticks.x = element_blank(),
          axis.ticks.y = element_blank(),
          panel.background = element_rect(fill = '#242423', colour = '#242423'),
          panel.grid.major = element_line(colour = "#242423"),
          panel.grid.minor = element_line(colour = "#242423")) +
    scale_y_continuous(limits = c(min(p$y) - 10, max(p$y) + 10), expand = c(0, 0)) +
    scale_x_continuous(limits = c(b, e), expand = c(0, 0))
  # paste0 avoids the spaces that paste() would insert into the filename,
  # and plot = q makes sure the current loop's plot is the one saved.
  ggsave(paste0('/users/josh.flori/plot_output/output', i, '.jpg'),
         plot = q, height = 3, width = 12, units = "in")
}

After all frames are output, I stitched the Disney clips and my R output files together using this Python script, altering the input and output file paths accordingly:

import cv2
import glob
import os

if __name__ == '__main__':
    files = glob.glob('/users/josh.flori/plot_output/*.jpg')
    files.sort(key=os.path.getmtime)
    img_array = []
    for filename in files:
        img = cv2.imread(filename)
        height, width, layers = img.shape
        img_array.append(img)
    # fourcc of -1 prompts for a codec; the fps argument is 1 frame/second.
    video = cv2.VideoWriter('plot.avi', -1, 1, (width, height))
    for frame in img_array:
        video.write(frame)
    cv2.destroyAllWindows()
    video.release()

The output .avi plays many times slower than it should (something like 12 minutes for a 30-second clip), likely because the VideoWriter is set to 1 frame per second. So I just sped it up in my video editing software, HitFilm Express.

That’s pretty much it, really. This was a single-day project. In the future I would like to process more footage, develop a standard set of movement forms into which any new animation could be classified, assign a mathematical function to each form, and build a better object tracker, since I was not happy with cv2 overall (although I didn’t try every single tracker).
