Human Activity Detection Matlab Code

Posted by admin

To detect the abnormal/unusual human activities in a video


A) ABSTRACT:

With the increase in the number of anti-social activities taking place, security has lately been given the utmost importance. Many organizations have installed CCTVs for constant monitoring of people and their interactions. For a developed country with a population of 64 million, every person is captured by a camera roughly 30 times a day, and the resulting video is stored for a fixed duration (in India: 30 days). A 704 x 576 resolution video recorded at 25 fps generates roughly 20 GB per day. Constant monitoring of this data by humans to judge whether events are abnormal is a near-impossible task, since it requires a large workforce and their unbroken attention; this creates the need to automate it. There is also a need to show which frame, and which parts of it, contain the unusual activity, to aid a faster judgment of whether that activity is abnormal.

The method involves generating a motion influence map for each frame to represent the interactions captured in that frame. The main characteristic of the proposed motion influence map is that it effectively depicts the movement speed, movement direction, and size of the objects and their interactions within a frame sequence. The method further extracts frames with high motion influence values and compares them with the testing frames to automatically detect global and local unusual activities.

B) Requirements:

  1. Python
  2. openCV 3 for Python
  3. numpy 1.7
  4. FFmpeg

C) Implementation:

The implementation phase involves the actual materialization of the ideas expressed in the analysis document and developed in the design phase. The implementation should be a faithful mapping of the design document into a suitable programming language in order to achieve the final product. This section discusses the important decisions regarding the selection of the platform, the language used, etc. These decisions are often influenced by several factors such as the real environment in which the system works, the speed that is required, security concerns, and other implementation-specific requirements. We also briefly discuss the important modules and methods present in the project. The code is divided into 5 modules: optflowofblocks, motioninfluencegenerator, createmegablocks, training, and testing.

In this section, a method for representing motion characteristics is described for the detection and localization of unusual activities within a crowded scene. Here, we should note that we considered two types of unusual activities:

  1. local
  2. global

Local unusual activities occur within a relatively small area. Different motion patterns may appear in a portion of the frame, such as the unique appearance of non-human objects or the fast movement of a person when most of the other pedestrians are walking slowly. Global unusual activities occur across the whole frame, for example, when every pedestrian within a scene suddenly starts to run to escape from the scene.

C.1) Data Input and Pre-Processing

The video file is given as input to the system and is subjected to pre-processing. A video is treated as a sequence of images called frames, and these frames are processed sequentially. An RGB frame is first converted to grayscale. A grayscale image consists of only the intensity information of the image rather than the apparent colors: an RGB pixel is three-dimensional (it consists of the values of red, green, and blue), whereas a grayscale pixel is one-dimensional.
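A minimal sketch of this step, assuming the video is read with OpenCV's VideoCapture (the file name "input.avi" is a placeholder):

```python
import cv2

# Read the video frame by frame and convert each BGR frame (OpenCV's default
# channel order) to a single-channel grayscale image for further processing.
cap = cv2.VideoCapture("input.avi")
gray_frames = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_frames.append(gray)
cap.release()
```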

C.2) Optical Flow

After the pre-processing step, the optical flow is computed for each pixel of every frame using the Farneback algorithm. Optical flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and the scene. Figure 5.4 shows the optical flow of each pixel in the image of a ball moving upwards. Optical flow is a vector of the form (r, θ), where r represents the magnitude of a pixel's motion and θ represents the direction in which the pixel has moved relative to the corresponding pixel in the previous frame. The calcOpticalFlowFarneback() function in openCV computes a dense optical flow using Gunnar Farneback's algorithm.
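A short sketch of this step; the Farneback parameter values below are commonly used defaults, not necessarily the ones used in this project:

```python
import cv2

def dense_flow(prev_gray, curr_gray):
    # Dense optical flow between two consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Convert the (dx, dy) field into the (r, theta) form used in the text.
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return mag, ang
```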

C.3) Optical Flow of Blocks

C.3.1) Dividing a frame into blocks

After computing the optical flows for every pixel within a frame, we partition the frame into M by N uniform blocks without loss of generality, where the blocks can be indexed by {B1, B2, . . . , BMN}. Figure 5.5 shows a frame of size 240 x 320 divided into 48 blocks, where each block is of size 20 x 20.

C.3.2) Calculating the optical flow of each block

After dividing the frames into blocks, we compute the optical flow of each block as the average of the optical flows of all the pixels constituting the block. Figure 5.6 gives the formula for calculating the optical flow of a block: bi = (1/J) Σj fji, where bi denotes the optical flow of the i-th block, J is the number of pixels in a block, and fji denotes the optical flow of the j-th pixel in the i-th block. The optical flow of a block is a vector (r, θ) which represents how much each block has moved, and in which direction, compared to the corresponding block in the previous frames.
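A sketch of the block-level averaging, assuming a 20 x 20 block size and that the averaging is done on the Cartesian flow components before converting back to (r, θ):

```python
import numpy as np

def block_flow(mag, ang, block_size=20):
    # Average the per-pixel optical flow over each block: b_i = (1/J) * sum_j f_ji.
    dx = mag * np.cos(ang)
    dy = mag * np.sin(ang)
    h, w = mag.shape
    m, n = h // block_size, w // block_size
    dx = dx[:m * block_size, :n * block_size].reshape(m, block_size, n, block_size)
    dy = dy[:m * block_size, :n * block_size].reshape(m, block_size, n, block_size)
    bx, by = dx.mean(axis=(1, 3)), dy.mean(axis=(1, 3))
    r = np.hypot(bx, by)          # block-level magnitude, shape (M, N)
    theta = np.arctan2(by, bx)    # block-level direction, shape (M, N)
    return r, theta
```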

C.4) Motion Influence Map

The movement direction of a pedestrian within a crowd can be influenced by various factors, such as obstacles along the path, nearby pedestrians, and moving carts. We call this interaction characteristic the motion influence. We assume that the blocks a moving object can influence are determined by two factors:

  1. the motion direction
  2. the motion speed.

The faster an object moves, the more neighboring blocks are under its influence. Neighboring blocks have a higher influence than distant blocks.

C.4.1) Algorithm for creating a motion influence map

INPUT: B ← motion vector set, S ← block size, K ← the set of blocks in a frame
OUTPUT: H ← motion influence map

H_j (j ∈ K) is set to zero at the beginning of each frame
for all i ∈ K do
    Td = |b_i| × S
    Φ_i/2 = θ_bi + Φ/2
    −Φ_i/2 = θ_bi − Φ/2
    for all j ∈ K do
        if i ≠ j then
            calculate the Euclidean distance D(i, j) between b_i and b_j
            if D(i, j) < Td then
                calculate the angle Φ_ij between b_i and b_j
                if −Φ_i/2 < Φ_ij < Φ_i/2 then
                    H_j(θ_bi) = H_j(θ_bi) + exp(−D(i, j) / |b_i|)
                end if
            end if
        end if
    end for
end for
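A rough Python sketch of the algorithm above. The block size, the fan angle Φ, and the use of an 8-bin direction histogram per block are assumptions made for illustration, and the weight exp(−D(i, j) / |b_i|) follows the reconstruction given above:

```python
import numpy as np

def motion_influence_map(r, theta, block_size=20, fan_angle=np.pi / 2, n_bins=8):
    # r, theta: block-level optical flow magnitudes/directions, shape (M, N).
    # Each moving block i spreads influence to blocks j lying within the
    # distance Td = |b_i| * S and inside the fan of +/- fan_angle/2 around
    # i's motion direction; the influence exp(-D(i, j) / |b_i|) is added to
    # the direction bin of b_i in block j's histogram.
    M, N = r.shape
    H = np.zeros((M * N, n_bins))
    ys, xs = np.mgrid[0:M, 0:N]
    centers = np.stack([xs.ravel(), ys.ravel()], axis=1) * float(block_size)
    for i in range(M * N):
        ri = r.flat[i]
        if ri <= 0:
            continue
        Td = ri * block_size
        direction = theta.flat[i]
        b = int((direction % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        d = np.linalg.norm(centers - centers[i], axis=1)      # D(i, j)
        ang = np.arctan2(centers[:, 1] - centers[i, 1],
                         centers[:, 0] - centers[i, 0])       # angle from i to j
        rel = np.angle(np.exp(1j * (ang - direction)))        # wrap to [-pi, pi]
        mask = (d > 0) & (d < Td) & (np.abs(rel) < fan_angle / 2)
        H[mask, b] += np.exp(-d[mask] / ri)
    return H.reshape(M, N, n_bins)
```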

C.5) Feature Extraction

In the motion influence map, a block in which an unusual activity occurs, along with its neighboring blocks, has unique motion influence vectors. Furthermore, since an activity is captured by multiple consecutive frames, we extract a feature vector from a cuboid defined by n × n blocks over the most recent t frames.

C.5.1) Creating Megablocks

Frames are partitioned into non-overlapping mega blocks, each of which is a combination of multiple motion influence blocks. The motion influence value of a mega block is the summation of the motion influence values of all the smaller blocks constituting it.
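A minimal sketch of the sum pooling, assuming each mega block groups a 2 x 2 neighborhood of motion influence blocks (the grouping factor is an assumption):

```python
import numpy as np

def to_megablocks(H, mega=2):
    # H: per-block motion influence histograms of shape (M, N, n_bins).
    # Each mega block value is the sum over the mega x mega smaller blocks
    # that constitute it, giving a map of shape (M//mega, N//mega, n_bins).
    M, N, n_bins = H.shape
    P, Q = M // mega, N // mega
    H = H[:P * mega, :Q * mega]
    return H.reshape(P, mega, Q, mega, n_bins).sum(axis=(1, 3))
```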

C.5.2) Extracting Features

After the most recent t frames are divided into mega blocks, an 8 × t-dimensional concatenated feature vector is extracted for each mega block across all the frames. For example, we take mega block (1,1) of each of the t frames and concatenate their feature vectors to create a single concatenated feature vector for block (1,1).
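A sketch of the temporal concatenation, assuming each per-frame mega block carries an 8-bin motion influence histogram:

```python
import numpy as np

def megablock_features(mega_maps):
    # mega_maps: list of t per-frame mega-block maps, each of shape (P, Q, 8).
    # For every mega block (p, q), the 8-bin vectors of the t frames are
    # concatenated into one 8*t-dimensional spatio-temporal feature.
    stacked = np.stack(mega_maps, axis=2)        # shape (P, Q, t, 8)
    P, Q, t, n_bins = stacked.shape
    return stacked.reshape(P, Q, t * n_bins)     # shape (P, Q, 8*t)
```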

C.6) Clustering

For each mega block, we perform clustering using the spatio-temporal features and set the cluster centers as codewords. That is, for the (i, j)-th mega block, we have K codewords, {w_k^(i,j)}, k = 1, . . . , K. Here, we should note that in our training stage we use only video clips of normal activities. Therefore, the codewords of a mega block model the patterns of usual activities that can occur in the respective area.
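A sketch of the codeword learning. The use of scikit-learn's KMeans and the value K = 5 are assumptions (scikit-learn is not in the listed requirements; any k-means implementation would do):

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_codewords(train_features, K=5):
    # train_features: shape (num_windows, P, Q, d), one spatio-temporal
    # feature per mega block per training window (normal activities only).
    # For every mega block (i, j), cluster its feature vectors and keep the
    # K cluster centers as codewords w_k^(i,j).
    n, P, Q, d = train_features.shape
    codewords = np.zeros((P, Q, K, d))
    for i in range(P):
        for j in range(Q):
            km = KMeans(n_clusters=K, n_init=10).fit(train_features[:, i, j, :])
            codewords[i, j] = km.cluster_centers_
    return codewords
```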

C.7) Testing Phase

Now that we have generated the codewords for normal activities, the generated model is tested with a test dataset which contains unusual activities.


[Figure: flow chart of the human activity detection pipeline]

C.7.1) Minimum Distance Matrix

In the testing stage, after extracting the spatio-temporal feature vectors for all mega blocks, we construct a minimum distance matrix E over the mega blocks, in which the value of an element is defined by the minimum Euclidean distance between a feature vector of the current test frame and the codewords of the corresponding mega block.
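A short sketch of this computation, using the shapes produced by the sketches above:

```python
import numpy as np

def min_distance_matrix(test_features, codewords):
    # test_features: (P, Q, d) features of the current test window;
    # codewords: (P, Q, K, d) learned from normal training data.
    # E[i, j] = min_k || test_features[i, j] - w_k^(i,j) ||
    diff = codewords - test_features[:, :, None, :]    # (P, Q, K, d)
    dists = np.linalg.norm(diff, axis=-1)              # (P, Q, K)
    return dists.min(axis=-1)                          # (P, Q)
```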

C.7.2) Frame-level detection of unusual activities

In a minimum distance matrix, the smaller the value of an element, the less likely an unusual activity is to occur in the respective block. Conversely, we can say that there are unusual activities in the t consecutive frames if a high value exists in the minimum distance matrix. Therefore, we take the highest value in the minimum distance matrix as the frame-representative feature value. If the highest value of the minimum distance matrix is larger than the threshold, we classify the current frame as unusual.


C.7.3) Pixel-level detection of unusual activities

Once a frame is detected as unusual, we compare the value of the minimum distance matrix of each mega block with the threshold value. If the value is larger than the threshold, we classify that block as unusual. Figure 5.19 shows an example of pixel-level unusual activity detection.
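A minimal sketch combining the frame-level and block-level decisions; the threshold value itself is whatever was chosen during training:

```python
import numpy as np

def detect(E, threshold):
    # E: minimum distance matrix of shape (P, Q).
    # Frame-level decision: the window is unusual if the largest value of E
    # exceeds the threshold.
    frame_unusual = E.max() > threshold
    # Block-level localization: every mega block whose minimum distance
    # exceeds the threshold is marked as unusual.
    unusual_blocks = E > threshold if frame_unusual else np.zeros_like(E, dtype=bool)
    return frame_unusual, unusual_blocks
```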