Sunday, November 27, 2016

Time-Based Media

6 Passes Made While Editing Your Final Project:

First Pass- Construct the Primary Audio Narrative: The primary audio portion of a program is often constructed first by inserting the spoken-word or performance-based segments of a story. Think of the primary audio as the chassis of a car: it forms the foundation, and everything else in the project is built on top of it.

Second Pass- Insert B-Roll and Natural Sound: With the primary audio in place, you can begin inserting B-roll and natural sound on the timeline. This fills in portions of the timeline where only audio exists and covers the video of a talking head, which also hides any jump cuts.

Third Pass- Insert Titles and Graphics: Once video clips are in place, you can insert lower-thirds and graphics. Ensure they are consistent and that they appear long enough for viewers to read.

Fourth Pass- Add Sound Effects and Music: It is usually best to hold off on these until the basic structure is developed, music videos being the exception. These should be mixed with the primary audio at an appropriate level so as not to compete with or distract from the main subject or message.

Fifth Pass- Add Transitions and Effects: This is the time to trim and reposition clips, which can be difficult and time-consuming.

Sixth Pass- Finishing Touches: Minute details are adjusted. This is the polishing process before closing out an editing project.

Types of Sound

Sound Bites: Constructed by editors during postproduction, sound bites are short (usually less than 10 seconds) extracts from recorded interviews.

Stand Ups: Often used as an alternative to a voiceover, or in combination with one. The reporter appears in front of the camera to narrate part of the story, most often the beginning and/or very end.

Narration: A commentary delivered to accompany a broadcast. It is the action or process of narrating the story.

Natural Sound: The sound obtained when capturing B-roll. As B-roll visually supports the spoken-word narrative, the sound obtained during the shot gives viewers a greater sense of presence when incorporated in the background.

Foley: The addition of recorded sound effects after the shooting of a film. It matches live sound effects with the action of the picture.

Sunday, November 20, 2016

Lady's Photoshoot On Shots and Angles

High Angle

High Angle (CU)

Neutral Angle

Proper Headroom 

Establishing Shot

Close-Up

Extreme Close-Up

Low Angle

Worm's-Eye View

Proper Lead-Room

Bird's-Eye View

Over the Shoulder

Wide Shot

Medium Shot

Lighting

The Three-Point Lighting Principle: A classic technique used in visual media. It is sometimes called triangle lighting because the three instruments positioned around the subject tend to form an isosceles triangle.

Key Light: Is the brightest source and the main provider of the subject illumination. The primary purpose of the key light is to reveal the basic shape of the subject.

Fill Light: Is positioned at the opposite angle to the key light in front of the subject. It is intended to compensate for falloff by softening the dark shadows created by the key light without eliminating them entirely.

Back Light: Helps separate the subject from the background, accentuating the contours of the subject's hair and shoulders.




180 Degree Rule

Crossing the Line:



Screen direction refers to the direction subjects are facing as we look at them in the viewfinder, television monitor, or movie screen. The 180° rule is a cinematography guideline stating that two characters in a scene should maintain the same left/right relationship to one another. When shooting two people in a scene using three cameras or a single-camera master shot, you need to avoid accidentally reversing the screen direction of the subjects when cutting from the master shot to a cross shot. In a cross shot, the subject should face the same direction as in the master shot, which means the camera(s) should not cross the line during filming. The axis of action, or simply "the line," helps keep viewers comfortable by establishing where people are in the scene.
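The "which side of the line is this camera on?" question is simple coordinate geometry. As a rough sketch (the function names and coordinates are my own, not from the text), the sign of a cross product tells you whether two camera positions sit on the same side of the axis of action drawn between the two subjects:

```python
def side_of_line(a, b, p):
    """Sign of the 2D cross product: which side of the line through
    subjects a and b does point p lie on? (0 means on the line.)"""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def same_side(a, b, cam1, cam2):
    """True if both camera positions are on the same side of the axis
    of action, i.e. cutting between them preserves screen direction."""
    return side_of_line(a, b, cam1) * side_of_line(a, b, cam2) > 0
```

Keeping every camera setup on the same side of the line (same sign) is what preserves the left/right relationship when you cut between shots.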

Saturday, November 12, 2016

Video Production Vocabulary


Frame: A single still image. In the United States, video is typically shot at 30 frames per second (fps) (29.97 fps for broadcast), while film is traditionally shot at 24 fps. Each frame contributes to the illusion of visual motion that we perceive in live action.

Shot: The smallest standalone component in a visual time-based narrative. A shot is a continuous live or recorded moving image taken from a single camera's point of view over time.

Take: A single recorded instance of a particular shot. A shot may be attempted several times until the director is satisfied with the outcome.

Scene: An event within the film that takes place in a single location within a specific period of time. Motion picture screenplays are divided into scenes because doing so makes it easier for a director to break down a long story into manageable parts during production.

Sequence: An edited series of individual shots that promotes a sense of continuous action or narrative flow.

Primary Motion: The physical onscreen movements of people, animals, and objects that take place in front of the camera during a shoot. The result of the natural and choreographed movements of subjects within the frame.

Secondary Motion: Produced by the movement of the physical camera or through zooming.

Ten Top Tips for Shooting an Effective Interview

Out of the ten tips listed, I highlighted and elaborated on the five that I personally thought were most important.

1)Position the camera, interviewer, and interviewee at eye level in relation to one another.
*Ensuring the interviewee's sight line is natural and non-distracting will help avoid confusing viewers, since the camera is sure to capture the interviewee's gaze.

2)Avoid placing the interviewee directly against the wall.

3)Avoid placing the interviewee in front of the window.
* Windows generally make poor backdrops, especially on a bright day. Window light that is too intense creates a darker, backlit image. Facial details, shape, and depth are lost or diminished.

4)Avoid swivel chairs.

5)Follow the rule of thirds- place the subject to the right or left of center.
* Off centering provides lead space between the subject's nose and the edge of the screen and produces a stronger, more visually pleasing composition.

6)Have the subject look slightly off-axis to camera- never directly head-on.

7)Place the interviewer as close to the left or right side of the camera as possible.
*The farther the interviewer is to the left or right, the more the interviewee's face will be angled away from viewers, eventually resulting in an undesirable profile shot.

8)Alternate shooting with the interviewer on both sides of the camera.

9)Eliminate distracting background mergers and clutter.

10)Monitor audio and video recording.
*Be sure to do a trial recording prior to the start of your interview and monitor video and audio during taping. It is important to review snippets before leaving to ensure you're happy with the results.

FIELD OF VIEW

When composing a shot, four variables work in conjunction to determine your field of view.

1.) Camera Location: The term point of view refers to the position of the camera in relation to the subject and is determined by physical location and angle.

2.) Camera Angle: The way a shot is composed. Eye-level camera shots are seen as the least biased view because they help maintain the viewer's perception of a level playing field. High- and low-angle shots are often used to intensify emotion and energy.

3.) Subject Location: Subject placement can provide better aesthetics, lighting, and sound isolation.

4.) Focal Length: The zoom lens found on most video camcorders gives you latitude to fine-tune your framing. The focal length of the lens determines what portion of the viewable area of a scene is included in a shot and what is left out.

Sunday, November 6, 2016

Audio Monitoring

The two ways to properly monitor your recording levels

Monitoring using a VU Meter: The electrical signal produced by a microphone is very weak and must be amplified during the recording process. On professional systems, the preamp can be controlled manually using the volume-unit (VU) meter and the record level controls. The VU meter displays the strength of the microphone signal after it has passed through the preamp, giving you a visual reference.

Monitoring with headphones or speakers: Monitoring a live recording with your ears allows you to hear your subject and any associated background sounds or noise as it is being recorded. Doing this helps ensure that nothing is wrong with your audio and that your microphones are actually picking up the subject; a meter alone can give a false impression of loudness.
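Meters such as the VU meter display level on a logarithmic decibel scale rather than a linear one. A minimal sketch of that conversion (the helper name is my own, not from the text):

```python
import math

def db_from_amplitude(amplitude, reference=1.0):
    """Convert a linear amplitude ratio to decibels: 20 * log10(a / ref)."""
    return 20 * math.log10(amplitude / reference)

# A full-scale signal reads 0 dB; halving the amplitude drops it ~6 dB.
```

This is why a signal at half amplitude reads roughly -6 dB on a meter rather than "50%."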



Microphones

Why should you use external microphones?

Although professional external microphones are relatively more expensive, they are made with better transducers and components. With a built-in microphone, by contrast, the operator has little control over positioning the microphone to improve performance.

If you use external microphones, you:

  • Can select the best type of microphone to fit the recording setting, subject, and application.
  • Will have greater control over the placement of the microphone and its proximity to the subject.
  • Will have better sound quality; professional external microphones have better sound-recording specs than the built-in microphone attached to your device.
  • Can better reduce RF interference and other types of transmission noise by using balanced XLR connections.

MICROPHONES

Microphone Polar Patterns:
Microphones are also classified according to their pattern. Polar pattern refers to how well a microphone picks up sound within 360 degrees of its central axis. The narrower the pickup pattern, the more directional the microphone will be.


SCENARIOS:


OMNIDIRECTIONAL: Have you ever been to a rock concert and wondered how the microphone catches the singer's sound through all the violent head-banging and dance moves? An omnidirectional (nondirectional) microphone picks up sound from all directions.

BIDIRECTIONAL: In talk shows such as Ellen, Oprah, or Steve Harvey, you will notice that guest stars sit on opposite sides so that it is easier to use a shared microphone. A bidirectional microphone works well from the front and the rear.

CARDIOID (UNIDIRECTIONAL): When in a recording studio, it is common for vocalists to use cardioid mics because they work well at picking up the subject's voice without the extra background noise.
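The three scenarios above correspond to standard idealized polar-pattern equations. As a sketch (the function name is mine, and real microphones only approximate these curves), relative sensitivity at an angle off the mic's central axis can be modeled as:

```python
import math

def pickup_sensitivity(pattern, angle_deg):
    """Relative sensitivity (0..1) at an angle off the mic's central
    axis, using idealized polar-pattern equations."""
    theta = math.radians(angle_deg)
    if pattern == "omnidirectional":
        return 1.0                        # equal pickup in all directions
    if pattern == "bidirectional":
        return abs(math.cos(theta))       # figure-eight: front and rear
    if pattern == "cardioid":
        return 0.5 * (1 + math.cos(theta))  # heart shape: front only
    raise ValueError(f"unknown pattern: {pattern}")
```

Note how the cardioid falls to zero directly behind the mic, which is exactly why it rejects studio background noise.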



AUDIO PRODUCTION

AMPLITUDE:
A sound wave's height indicates the intensity or magnitude of the pressure wave. Amplitude is the distance from the wave's resting point to its crest (crest-to-trough is the peak-to-peak amplitude). The louder the sound, the greater the amplitude, and the taller its waveform. Sound is greatest at its source and diminishes over distance and time.

WAVELENGTH:
The distance a wave of energy travels during one complete vibration cycle.


FREQUENCY:
Refers to a sound's low or high pitch, measured in hertz (cycles per second). A progression of a sound wave through one phase of rest, compression, and rarefaction is called a cycle, and the sound's frequency is based on its number of cycles per second.
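Wavelength and frequency are tied together by the propagation speed of the wave. A small numeric sketch (the helper name and example values are my own; ~343 m/s is the standard speed of sound in room-temperature air):

```python
def wavelength(speed, frequency):
    """Wavelength (m) = propagation speed (m/s) / frequency (Hz)."""
    return speed / frequency

SPEED_OF_SOUND_AIR = 343.0  # m/s at roughly 20 degrees C

# Concert pitch A (440 Hz) has a wavelength of about 0.78 m in air;
# higher pitches mean shorter wavelengths at the same speed.
```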

Sunday, October 30, 2016

White Balance

How to manually set white balance on a video camera (if your camera allows)

  • Locate the white balance settings on your camera and select the manual or custom WB option. On higher-end cameras, WB controls are often found on the exterior of the camera body. On less expensive cameras, be prepared to delve into the onboard menu system.
  • Have the subject or assistant aim a white card or sheet of paper at the camera. The card should reflect the same light source hitting the subject.
  • Zoom in on the card to fill 80%-100% of the frame with white. Be sure your exposure and focus settings are properly set. If the image is too dark or out of focus, the camera may not be able to perform the operation.
  • Press and hold the set button until the camera confirms that WB has been acquired. If the color temperature has changed dramatically since the last time you performed a WB, you will actually notice a color shift in the viewfinder.
If you followed the procedure correctly, your camera is now set to reproduce the best color possible!

FOCUSING

Question And Answer



1.  Why should you avoid using autofocus when doing videography?

* AF may roll or shift as the subject moves within the frame. Manual focus (MF) mode will give a more satisfactory result.

2.  When is using the autofocus acceptable?

* Autofocusing is intended to make life easier, so as long as the lighting is good, the subject-camera distance is relatively stable, and there isn't a lot of visual complexity or movement within the frame, autofocus can yield acceptable results.

3.  What are the 4 steps to properly set your focus manually when shooting a static subject (such as an interview)?

  1. Compose your shot. Make sure the camera is set to MF mode.
  2. Zoom in as far as you can on the subject's eyes.
  3. Adjust the focus control until the eyes are sharply in focus. Moving quickly back and forth in smaller sweeps can help you identify the sweet spot more accurately.
  4. Zoom back out to compose your shot.

4.  What does the term "rack focus" mean?

* Rack focus is a popular technique that shooters use to rapidly shift the viewer's attention from one subject to another along the z-axis. A focus puller exploits a shallow depth of field to obtain a noticeable contrast between the sharply focused subject and soft, blurry surroundings.

Sampling Rates

What is PCM?

Pulse-Code Modulation is one of the most common codecs used for recording audio as a digital stream of binary data. PCM relies on a process called sampling to transform a continuous signal into a sequence of discrete measurements occurring at regular intervals. In digital recording, each sample is stored numerically as a binary string of zeros and ones.

The three variables that control the fidelity of a PCM:

  1. Sample Rate: Specifies the number of samples recorded every second.
  2. Bit Depth: Specifies the number of bits used to encode the value of each sample.
  3. Bit Rate: Specifies the number of bits per second transmitted during playback.

Fun Fact!

Since human hearing has a frequency range of 20 Hz (hertz) to 20 kHz, a minimum sample rate of 40 kHz is required in order to effectively capture and encode all the frequencies within this range.
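The three PCM variables and the fun fact above reduce to two one-line formulas. A sketch (the function names are mine), using CD audio as the worked example:

```python
def pcm_bit_rate(sample_rate_hz, bit_depth, channels):
    """Uncompressed PCM bit rate (bits/second) =
    sample rate x bit depth x channel count."""
    return sample_rate_hz * bit_depth * channels

def min_sample_rate(max_frequency_hz):
    """Nyquist: sample at least twice the highest frequency captured."""
    return 2 * max_frequency_hz

# CD audio: 44,100 Hz x 16 bits x 2 channels = 1,411,200 bps (~1.41 Mbps)
# Capturing up to 20 kHz requires at least a 40 kHz sample rate.
```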

Recording Video and Audio

Defining File Types and Codecs

  • MPEG-2: An advanced format for encoding standard-definition (SD) and HD video at a higher bit rate. It paved the way for DVDs to replace VHS.

  • MPEG-4: A format for compressing and encoding high-definition audio and video signals. Most likely, the video you shoot on your smartphone, watch on Blu-ray, or stream from Netflix or Hulu is encoded in a format prescribed by the ubiquitous MPEG-4 standard.

  • H.264: An advanced video coding format that is Part 10 of the MPEG-4 standard.

  • MOV: A common multimedia container format for saving movies and other video files.

  • WAV: A container for raw, uncompressed PCM audio streams; only the container or wrapper is applied, while the enclosed bit stream is left unmodified.

  • MP3 (MPEG-1 Audio Layer 3): One of the most widely known audio codecs in the world, and the first digital format designed for encoding and storing music on a portable audio player.

  • AAC: Advanced Audio Coding, introduced with MPEG-2. A format that extended the stereo capabilities of MP3 to multichannel surround sound.

What is the difference between a container format and a compression codec?

A container or wrapper file is used for bundling and storing the raw digital bit streams that a codec creates. Each codec performs a single encoding/decoding function, while a container format can support multiple codecs.


Sunday, October 23, 2016

Depth of Field

Depth of Field

-DOF refers to the area of a scene in front of and behind the main subject that is in focus. The term great depth of field is used to describe a photograph in which the majority of the scene is sharply defined. Shallow depth of field describes an image in which noticeable portions of the foreground and background are out of focus.

How can playing with the depth of field enhance your photo? Background elements can often steal the attention away from the main subject in a photo. Photographers will sometimes manipulate DOF to emphasize the focal point of a composition through visual contrast.

What three factors affect the DOF in your image?
  • Size of the lens aperture or f-stop setting: As the size of the aperture decreases, the DOF increases, causing more of a scene to appear in focus and vice versa because the size of the aperture is inversely related to the DOF.

  • The focal length of the lens: The focal length is inversely related to DOF. As you zoom in, the focal length increases while the DOF decreases. Wide-angle shots have great depth, while narrow-angle shots often have a shallow depth of field.

  • The distance from the camera to the subject: Physical distance can affect the DOF in a composition. DOF increases with distance and decreases as you move the camera physically closer to the subject.
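These three factors can be tied together with the standard thin-lens DOF approximation (not from the text; it is only valid when the subject distance is much larger than the focal length, and the function name and example values are my own):

```python
def depth_of_field_mm(f_number, focal_length_mm, subject_distance_mm,
                      circle_of_confusion_mm=0.03):
    """Approximate total depth of field: DOF ~= 2 * N * c * u^2 / f^2,
    where N = f-stop, c = circle of confusion, u = subject distance,
    f = focal length (all lengths in mm)."""
    return (2 * f_number * circle_of_confusion_mm
            * subject_distance_mm ** 2) / focal_length_mm ** 2

# f/2.8, 50 mm lens, subject at 2 m -> roughly 0.27 m of total DOF.
```

The formula reproduces all three rules above: raising the f-number (smaller aperture) grows the DOF, a longer focal length shrinks it, and backing away from the subject grows it.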

Difference in aperture size

WHITE BALANCE

Question and Answer on White Balance


What is meant by White Balancing a camera? The white balance of a camera can be set manually by shooting a white object, such as a blank sheet of paper, while depressing the manual white balance button. Once a camera "sees" what white looks like under the existing light, it can extrapolate the values of all the other colors in the spectrum.

What is considered the "golden hour"? Golden hour is typically the hour after sunrise and the hour before sunset. Daylight is then redder and softer than when the sun is higher in the sky.

What is the color temperature of daylight, and what is its general hue? Average daylight color temperature is 5500 K, and its general hue is bluish white ("daylight").

What is the color temperature of an interior fluorescent light, and what is its general hue? Indoor fluorescent light has a color temperature of about 4000 K and a general hue of cool white.
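Pulling the color temperatures from this post, plus a couple of commonly cited neighbors (the extra entries are my additions, and all values are approximate figures that vary by fixture), into one small reference table:

```python
# Approximate color temperatures in Kelvin for common light sources.
COLOR_TEMPS_K = {
    "candlelight": 1900,
    "tungsten/incandescent": 3200,
    "fluorescent (cool white)": 4000,
    "average daylight": 5500,
    "overcast sky": 6500,
}

# Lower Kelvin values read warmer (redder); higher values read cooler (bluer).
```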

Embed an image of incorrect white balance. 


The Exposure Triangle

The Exposure Triangle

-The three primary components of a camera system that a photographer adjusts to control exposure. The three variables interact to achieve the exposure value for each shot.

Three Components:

  • Aperture: The larger the aperture, the more light strikes the image sensor and the greater the potential for acquiring a shallow depth of field. A series of f-stops printed on the outside of the lens ring indicates the size of the aperture. Opening the aperture one full stop doubles the aperture area and the amount of light hitting the image sensor.

  • ISO/ Film Speed: Increasing the ISO increases the light sensitivity of the image sensor, but the image becomes noisier and grainier. Each jump to a higher ISO number results in a doubling of the film's light sensitivity.

  • Shutter Speed: The more you increase the shutter speed, the greater the detail you can capture when shooting fast action, but the more light you will need to achieve proper exposure. Shutter speed influences how motion is captured by the image sensor.
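The "one full stop doubles the light" relationship for aperture can be checked with a little arithmetic, since the light reaching the sensor is proportional to 1/N². A sketch (the helper names are mine):

```python
import math

def relative_light(f_number):
    """Light reaching the sensor is proportional to 1 / N^2."""
    return 1.0 / f_number ** 2

def stops_between(n1, n2):
    """Full stops of light gained when opening from f/n1 to f/n2."""
    return math.log2(relative_light(n2) / relative_light(n1))
```

Opening from f/4 to f/2.8 roughly doubles the light, i.e. one full stop; it is not exactly double only because marked f-numbers are rounded from powers of the square root of 2 (1.4, 2, 2.8, 4, ...).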

Define the Parts of the Camera

The Imaging Chain

The imaging chain of the camera is made up of four components: the lens, the iris, the shutter, and the image sensor.
  • Lens: Determines the field of view, or what the camera "sees". The lens is mounted on the front of the camera and is designed to capture and manipulate the light reflected from objects in the camera's line of sight.
  • Iris: Regulates the intensity of exposure. An adjustable plastic or metal diaphragm that regulates the amount of light striking the image sensor.
  • Shutter: Regulates the time of exposure. A movable curtain, plate, or other device that controls the amount of time the image sensor is exposed to light.
  • Image Sensor: Captures light and converts it into a digital signal. A small electronic chip used to register the intensity and color of light; the digital equivalent of "film."