There are three components to the review system:
1) The community flags the video. Contrary to the rumor that it takes a flagging campaign to get a video removed, a single flag is sufficient to trigger this system.
2) Our algorithms prioritize the video in the queue. The algorithms examine things like flesh tones (for sexual content), the history of previous flags (i.e., has the video been flagged and approved before?), and a few other demographic factors. By prioritizing this way, we can focus our attention first on the videos that are most likely to violate our policies.
3) Our reviewers perform a manual review using our review tool. When the reviewers consider a flagged video, they look not only at the content of the video but also at the intention of the uploader. We try to determine the intention through many user signals, including (but not limited to) the title of the video, the tags the uploader has attached to the video, and the user's description of the content.
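The prioritization step above can be sketched roughly as a scoring function feeding a priority queue. Everything here is a hypothetical illustration: the signal names (`flesh`, `flags`, `approvals`) and the weights are assumptions for the sake of the example, not the actual algorithm.

```python
import heapq

def priority_score(flesh_tone_ratio, prior_flags, prior_approvals):
    # Hypothetical weighting: videos more likely to violate policy score higher.
    score = 2.0 * flesh_tone_ratio   # possible sexual content
    score += 0.5 * prior_flags       # repeatedly flagged videos
    score -= 1.0 * prior_approvals   # previously flagged, reviewed, and approved
    return score

def build_review_queue(videos):
    # Max-priority behavior via negated scores in Python's min-heap.
    heap = [(-priority_score(v["flesh"], v["flags"], v["approvals"]), v["id"])
            for v in videos]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

videos = [
    {"id": "a", "flesh": 0.1, "flags": 1, "approvals": 0},
    {"id": "b", "flesh": 0.8, "flags": 3, "approvals": 0},
    {"id": "c", "flesh": 0.8, "flags": 3, "approvals": 2},
]
print(build_review_queue(videos))  # highest-priority video first
```

Note how the previously-approved video ("c") sinks below an otherwise identical one ("b"): prior human approval is a signal that the content was already judged acceptable.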
Another way to say "intention" is context. This is why we would never automate this system. Consider, for example, the video of the death of Neda Soltan in Iran. We have policies that prohibit shocking or graphic content. On its face, a video showing a young woman bleeding to death would likely be removed if it were flagged. But we make exceptions for videos that have educational, documentary, scientific, or artistic (EDSA) value, provided that the footage is balanced with that additional context. The videos of Neda's death clearly have documentary value. Similarly, a video focusing on bare breasts and accompanied by dance music would likely be removed from the site. But the same images of the breasts in a video about breast self-examination and cancer would likely be age-restricted and allowed to remain on the site.