Reinforcement schedules for behavior refer to how often a student receives reinforcement (or a reward) for performing a desired behavior. On a continuous reinforcement schedule, the student receives the reinforcer every time she shows the desired behavior (such as giving a child a piece of cookie for each correct answer). A partial reinforcement schedule, also known as an intermittent reinforcement schedule, delivers a reward for some responses, but not for every response. The frequency and timing of the reinforcer depend on the type of partial reinforcement schedule.

## Fixed Ratio Schedules

When a behavior is reinforced using a fixed ratio (FR) schedule, the reward or reinforcer is given after a fixed number of performances of the target behavior. For example, Jake gets a prize for every five homework assignments he turns in. A plan like this can be implemented with a sticker chart, but it often leads the student to respond very quickly, which can decrease accuracy as the student rushes toward the next reward. Students also often show a post-reinforcement pause in behavior, meaning that after they get a reward, it may take longer than usual for them to perform the behavior again.
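The FR rule can be made concrete with a minimal sketch. The function name, the closure design, and the numbers here are illustrative choices, not from the source; the sketch simply counts responses and delivers a reward on every fifth one.

```python
def make_fixed_ratio(ratio):
    """Reinforce every `ratio`-th response (an FR schedule)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == ratio:
            count = 0      # the counter resets after each reward
            return True    # deliver the reinforcer
        return False
    return respond

fr5 = make_fixed_ratio(5)                # FR-5: like Jake's five-assignment prize
rewards = [fr5() for _ in range(10)]
print(rewards)
# [False, False, False, False, True, False, False, False, False, True]
```

Because the count to the next reward is always known, a student can "see" reward number five approaching, which is exactly what produces the rushing and the pause afterward.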

## Variable Ratio Schedules

A variable ratio (VR) schedule solves one of the problems with a fixed ratio schedule: the post-reinforcement pause. Once a student reliably performs the target behavior on a fixed reinforcement schedule, making the schedule variable adds an element of surprise that reduces the pause. If Ellie knows she will get a homework pass after some good answers, but doesn't know when it is coming, she will raise her hand frequently, because any given answer could bring reinforcement.
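The only change from the FR sketch above is that the reward threshold is redrawn at random after each reward, so its average stays fixed while any single reward is unpredictable. Again, the names and numbers are illustrative assumptions, not from the source.

```python
import random

def make_variable_ratio(mean_ratio, seed=0):
    """Reinforce after a random number of responses averaging `mean_ratio`."""
    rng = random.Random(seed)
    target = rng.randint(1, 2 * mean_ratio - 1)  # mean of this draw is mean_ratio
    count = 0
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count = 0
            target = rng.randint(1, 2 * mean_ratio - 1)  # next surprise threshold
            return True
        return False
    return respond

vr5 = make_variable_ratio(5)
rewarded = sum(vr5() for _ in range(10_000))
# Over many responses the reward rate approaches 1 in 5,
# but no single reward can be anticipated.
```

Since the very next response always *might* pay off, there is no point in the cycle where pausing is "safe," which is why VR schedules flatten the post-reinforcement pause.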

## Fixed Interval Schedules

While ratio schedules count the number of responses, interval schedules measure the time between reinforcements. On a fixed interval (FI) schedule, the student is rewarded for the first appropriate response after a set amount of time has passed since the last reinforcer. Behaviors reinforced on an FI schedule occur at a low rate, especially when the student knows how long the interval is. For example, if Kevin knows that candies are given only for correct answers during Friday's spelling lesson, he may not give an answer again until the following Friday.
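The switch from counting responses to timing them can be sketched the same way. In this illustrative version (names and the 60-second interval are assumptions for the example), extra responses during the interval earn nothing, which is why responding drops off right after each reward.

```python
def make_fixed_interval(interval):
    """Reward the first response made at least `interval` seconds after the last reward."""
    last_reward = 0.0
    def respond(now):
        nonlocal last_reward
        if now - last_reward >= interval:
            last_reward = now    # the clock restarts at each reward
            return True
        return False
    return respond

fi = make_fixed_interval(60)   # at most one reward per minute
print(fi(10))   # False: too early, responding now earns nothing
print(fi(65))   # True: first response after the interval has elapsed
print(fi(70))   # False: the interval has restarted
```

Kevin's Friday-only candy is this rule with a one-week interval: once he knows when the clock restarts, answering on any other day is wasted effort.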

## Variable Interval Schedules

In variable interval (VI) schedules, the reinforcer is given after a different interval each time, although the average time between reinforcers remains the same. Because students do not know when the next reinforcer will become available, they respond at a steadier, higher rate than on fixed interval schedules, eliminating the lull in responding between reinforcers that FI schedules produce.
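As with VR, the VI rule is the FI sketch with the wait redrawn at random after each reward. This is a hedged illustration with assumed names and numbers; the mean wait is held fixed while each individual wait varies.

```python
import random

def make_variable_interval(mean_interval, seed=0):
    """Reward the first response after a random wait averaging `mean_interval` seconds."""
    rng = random.Random(seed)
    last_reward = 0.0
    wait = rng.uniform(0, 2 * mean_interval)   # mean of this draw is mean_interval
    def respond(now):
        nonlocal last_reward, wait
        if now - last_reward >= wait:
            last_reward = now
            wait = rng.uniform(0, 2 * mean_interval)  # next unpredictable wait
            return True
        return False
    return respond

vi = make_variable_interval(50)
# One response per simulated second for 10,000 seconds:
rewards = sum(vi(t) for t in range(10_000))
# Steady responding is rewarded roughly once per 50 seconds on average.
```

Because the next reward could become available at any moment, checking in steadily is the best strategy, which is the behavioral signature of VI schedules.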