Discrete trial training is one of the most widely used teaching methods in applied behavior analysis. Backed by decades of research and recognized as an evidence-based practice by the National Clearinghouse on Autism Evidence and Practice (NCAEP), DTT gives practitioners a structured, repeatable framework for teaching new skills. Whether you are an RBT running sessions daily or a BCBA designing skill acquisition programs, understanding how to implement DTT effectively is essential to your clinical practice.
This guide covers the core components of discrete trial training, a step-by-step implementation process, real-world examples across skill domains, a practical comparison of DTT vs natural environment teaching, and the most common mistakes practitioners make.
What Is Discrete Trial Training?
Discrete trial training (DTT) is a structured ABA teaching method that breaks complex skills into small, isolated components and teaches each one through repeated practice trials. Each trial has a clear beginning, middle, and end, which is what makes it "discrete" rather than continuous.
DTT was originally developed in the 1960s and 1970s by O. Ivar Lovaas at UCLA, making it one of the earliest structured interventions for individuals with autism. The method has evolved significantly since then, moving away from aversive procedures toward positive reinforcement-based approaches that align with current BACB ethical standards.
An important distinction: DTT is a technique within ABA, not a synonym for it. As researchers at Indiana University's IRCA have noted, using DTT does not automatically constitute a comprehensive ABA program. Effective ABA programs typically combine DTT with other strategies like natural environment teaching, functional communication training, and reinforcement-based behavior management.
According to the NCAEP 2020 systematic review, DTT is effective across a wide age range (birth through age 22) and addresses outcomes in over 10 domains, including communication, social skills, academics, adaptive behavior, and vocational skills (Steinbrenner et al., 2020).
The 5 Components of a Discrete Trial
Every discrete trial follows the same five-part structure. Understanding each component is the foundation of effective implementation.
1. Discriminative Stimulus (SD)
The SD is the instruction, question, or cue that signals to the learner what behavior is expected. It must be clear, concise, and delivered consistently. For example: "Touch the cup," "What color is this?" or "Say ball."
2. Prompt
The prompt is the assistance provided to help the learner respond correctly. Prompts follow a hierarchy from most to least intrusive:
- Full physical: Hand-over-hand guidance
- Partial physical: Light touch or nudge toward the correct response
- Model: Demonstrating the correct response
- Gestural: Pointing or looking toward the correct answer
- Verbal: Providing a partial verbal cue
- Independent: No prompt; the learner responds on their own
The goal is always to fade prompts over time so the learner achieves independent responding.
3. Learner Response
The learner's behavior following the SD. This is recorded as correct (independent), incorrect, prompted, or no response. Each outcome determines what happens next.
4. Consequence
For correct responses, deliver reinforcement immediately (within 1-2 seconds). This can include specific verbal praise ("Great job touching the cup!"), access to a preferred item, or tokens in a token economy system. For incorrect responses, use a neutral error correction procedure: briefly pause, re-present the SD with a prompt to ensure a correct response, then reinforce at a lower level. For a deeper look at reinforcement strategies, see our guide to differential reinforcement in ABA.
5. Inter-Trial Interval (ITI)
A brief pause of 1-5 seconds between trials. The ITI allows the learner to process the previous trial and creates a clear separation between consecutive trials. Varying the ITI slightly helps prevent rote, automatic responding.
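For readers who think in data structures, the five-part trial can be sketched as a simple record plus a scoring function. This is an illustrative sketch only, not clinical software; all names (`Trial`, `score`, etc.) are hypothetical.

```python
# Minimal sketch of the five-part discrete trial as a data record.
# All names are illustrative, not taken from any ABA data system.
from dataclasses import dataclass
from typing import Optional

# Prompt hierarchy from most to least intrusive, per the list above.
PROMPT_HIERARCHY = [
    "full_physical", "partial_physical", "model",
    "gestural", "verbal", "independent",
]

@dataclass
class Trial:
    sd: str                   # 1. discriminative stimulus delivered
    prompt: Optional[str]     # 2. prompt used, or None if none was needed
    response_correct: bool    # 3. learner response
    consequence: str = ""     # 4. filled in by score()
    iti_seconds: float = 2.0  # 5. inter-trial interval (vary slightly)

def score(trial: Trial) -> str:
    """Classify the trial the way trial-by-trial data is recorded."""
    if trial.response_correct and trial.prompt is None:
        trial.consequence = "reinforce immediately (within 1-2 s)"
        return "+"  # independent correct
    if trial.response_correct:
        trial.consequence = "reinforce at a lower level"
        return "P"  # prompted correct
    trial.consequence = "neutral error correction; re-present SD with prompt"
    return "-"      # incorrect or no response

print(score(Trial(sd="Touch the cup", prompt=None, response_correct=True)))  # +
```

The `+` / `-` / `P` codes match the trial-by-trial recording convention used throughout this guide.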
How to Implement Discrete Trial Training: Step by Step
Knowing the five components is one thing; running clean, effective trials is another. Here is a practical implementation guide.
Step 1: Identify and define the target skill. Select a specific, measurable behavior to teach. Break complex skills into discrete components. For example, "requesting items" might break down into: making eye contact, reaching toward the item, and producing a vocal approximation or sign.
Step 2: Prepare the environment. Minimize distractions. Sit across from the learner at a table or on the floor, depending on the skill. Have all materials and reinforcers within reach but out of the learner's sight until needed.
Step 3: Establish attending. Before delivering the SD, ensure the learner is looking at you or at the relevant materials. You can use an attending cue ("Look" or "Ready?") or wait for a natural orienting response.
Step 4: Deliver the SD. State the instruction clearly and wait 3-5 seconds for a response. Keep your tone neutral and consistent across trials.
Step 5: Prompt if necessary. If the learner does not respond or responds incorrectly, provide the planned prompt. Use either a most-to-least or least-to-most prompt fading strategy, depending on the learner's history and the complexity of the skill.
Step 6: Deliver the consequence. Reinforce correct responses enthusiastically and immediately. For errors, use a calm, neutral error correction: remove materials, pause briefly, and re-present the trial with a prompt.
Step 7: Record data. After each trial, record the response as correct (+), incorrect (-), or prompted (P). Trial-by-trial data is the gold standard for DTT because it captures the learner's response pattern across attempts.
Step 8: Continue the teaching block. Run 10-20 trials per target skill in a teaching block. Intersperse mastered targets ("easy" tasks the learner already knows) every 3-4 trials to maintain motivation and create opportunities for dense reinforcement.
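The interspersal pattern in Step 8 can be sketched programmatically. This is a hypothetical illustration of the "mastered target every 3-4 trials" rule, not a prescribed tool; the function and variable names are assumptions.

```python
# Illustrative sketch: build a teaching block that inserts one mastered
# ("easy") target after every `every` acquisition trials.
import itertools

def build_block(acquisition_trials, mastered_targets, every=3):
    """Interleave a mastered target after each `every` acquisition trials."""
    mastered = itertools.cycle(mastered_targets)
    block = []
    for i, trial in enumerate(acquisition_trials, start=1):
        block.append(trial)
        if i % every == 0:
            block.append(next(mastered))  # easy win -> dense reinforcement
    return block

block = build_block(["touch cup"] * 6, ["clap hands", "wave"], every=3)
# ['touch cup', 'touch cup', 'touch cup', 'clap hands',
#  'touch cup', 'touch cup', 'touch cup', 'wave']
```

In practice the interspersal ratio is a clinical decision; the point of the sketch is simply that the pattern is regular enough to plan before the session starts.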
Discrete Trial Training Examples Across Skill Domains
DTT is versatile enough to target a wide range of skills. Here are four practical examples showing how the trial structure adapts to different learning goals.
Receptive Language: "Touch the [Object]"
- SD: Place a cup and a spoon on the table. Say: "Touch the cup."
- Prompt: If needed, point to the cup or gently guide the learner's hand.
- Response: Learner touches the cup.
- Consequence: "Great job touching the cup!" plus access to a preferred item for 5-10 seconds.
Expressive Language: Labeling Objects
Hold up a ball. SD: "What is it?" Prompt (if needed): model "ball." When the learner says "ball," reinforce with praise and brief access to the ball itself, which serves as a natural reinforcer. Across trials, vary the objects and rotate their positions to build true discrimination rather than positional responding.
Social Skills: Responding to Greetings
SD: A therapist approaches and says "Hi!" Prompt: model waving. When the learner waves back, reinforce with a high five and verbal praise. Once the learner responds consistently with the therapist, begin generalizing by having other staff members, peers, and family deliver the greeting.
Daily Living: Handwashing Steps
Complex routines like handwashing are first broken into a task analysis, then each step is taught as a discrete trial. For example, Step 1 might be "Turn on the water" taught in isolation until mastered, then chained with subsequent steps. This approach connects DTT to chaining procedures for multi-step skills.
DTT vs Natural Environment Teaching: When to Use Each
One of the most common questions in ABA practice is whether to use DTT or natural environment teaching (NET). The short answer: both, but for different purposes.
NET embeds teaching into the learner's natural activities and interests. Instead of sitting at a table with flashcards, a NET approach might teach labeling by following the child's lead: when the child picks up a toy car, the therapist asks, "What did you find?" Reinforcement comes naturally from the activity itself.
| Factor | DTT | NET |
|---|---|---|
| Structure | Highly structured, table-based | Flexible, embedded in activities |
| Reinforcement | Planned (tokens, preferred items) | Natural (activity-based) |
| Generalization | Requires deliberate programming | Built into the teaching context |
| Best for | New skills, discriminations, foundational learning | Generalization, social skills, play, maintaining engagement |
| Data collection | Trial-by-trial (precise) | Probe-based or anecdotal |
"Best practice is to teach with DTT and generalize with NET. The two methods are complementary, not competing."
A 2024 study by Peterson and colleagues examined 93 individuals with autism receiving a combined approach of discrete trials, mass trials, and naturalistic environment training. The results showed a significant increase in mastered behaviors, from a mean of 1.32 at baseline to 27.13 by week 12, with a large effect size (Peterson et al., 2024). This reinforces what experienced practitioners already know: the strongest programs blend structured and naturalistic methods.
A simple decision rule: Ask yourself, "Is this a brand-new skill or a generalization opportunity?" If the learner has never performed the behavior before, start with DTT. Once the skill reaches mastery criteria in the structured setting, transition to NET to build fluency across people, settings, and materials.
Common DTT Mistakes and How to Avoid Them
Even experienced practitioners can fall into patterns that reduce DTT effectiveness. Here are six common mistakes and their fixes.
1. Delivering the SD before the learner is attending. If the learner is looking away or engaged with something else, the instruction will not function as an effective discriminative stimulus. Fix: always secure an orienting response before presenting the SD.
2. Prompting too quickly. Jumping in with a prompt before giving the learner time to respond independently creates prompt dependency. Fix: use a consistent prompt delay of 3-5 seconds and track prompt levels in your data to ensure you are fading over time.
3. Using identical materials and phrasing every trial. Researchers at Indiana University's IRCA have noted that learners may develop rote responses that look like comprehension but do not generalize. Fix: vary your SD wording ("Touch the cup," "Show me the cup," "Where is the cup?") and rotate materials from the start.
4. Weak or delayed reinforcement. Reinforcement delivered more than 2-3 seconds after the response loses its effectiveness. Fix: have reinforcers prepared and within arm's reach. Match reinforcer intensity to the difficulty of the response; a novel correct response deserves bigger reinforcement than a well-practiced one.
5. Failing to fade prompts systematically. Without a planned fading strategy, learners stay dependent on prompts indefinitely. Fix: include the prompt fading plan in your program documentation and track the current prompt level each session.
6. Running too many consecutive trials without breaks. Learner fatigue and satiation reduce engagement and increase problem behavior. Fix: intersperse mastered ("easy") targets every 3-4 acquisition trials and build in brief play or movement breaks between teaching blocks.
Tracking Progress: Data Collection in DTT
DTT's structured format makes it one of the easiest ABA methods to collect clean data on. Here is what to track and how to use it.
Trial-by-trial recording is the standard approach. After each trial, note whether the response was correct (+), incorrect (-), or prompted (P). At the end of a teaching block, calculate the percentage of independent correct responses.
Mastery criteria vary by program, but a common standard is 80-90% independent correct responses across 2-3 consecutive sessions with at least two different therapists. The multi-therapist requirement helps confirm that the skill has generalized beyond a single person.
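The scoring and mastery logic described above can be sketched in a few lines. This is a minimal illustration assuming an 80% criterion across 2 consecutive sessions (the multi-therapist requirement is not modeled); the function names are hypothetical.

```python
# Sketch of trial-by-trial scoring and a mastery check, assuming the
# common 80% / 2-consecutive-sessions criterion described above.

def percent_independent(trials):
    """trials: list of '+' (independent correct), '-' (incorrect), 'P' (prompted)."""
    if not trials:
        return 0.0
    return 100.0 * trials.count("+") / len(trials)

def met_mastery(session_scores, criterion=80.0, consecutive=2):
    """True if the last `consecutive` session scores all meet the criterion."""
    recent = session_scores[-consecutive:]
    return len(recent) == consecutive and all(s >= criterion for s in recent)

scores = [percent_independent(s) for s in [
    ["+", "P", "+", "-", "+"],   # 60% independent
    ["+", "+", "+", "P", "+"],   # 80%
    ["+", "+", "+", "+", "-"],   # 80%
]]
print(met_mastery(scores))  # True: last two sessions at or above 80%
```

Note that prompted responses count against the independent percentage, which is exactly why tracking prompt levels matters for fading decisions.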
When to adjust your approach: If a learner has not made progress after 3-5 sessions at the same level, do not simply keep running the same trials. Reassess by asking these questions:
- Is the SD clear and consistently delivered?
- Is the prompt level appropriate, or does the learner need more (or less) support?
- Are the reinforcers still motivating? Run a preference assessment if needed.
- Does the learner have the prerequisite skills for this target?
Data-based decision making is not just good practice; it is an ethical obligation under the BACB Ethics Code. DTT's built-in data structure makes it straightforward to meet this standard on every program you run.
Strengthen Your Clinical Skills and Advance Your ABA Career
Discrete trial training is a cornerstone of ABA practice, supported by 58 single-case design studies and decades of clinical use. Mastering DTT, along with complementary methods like differential reinforcement and behavior intervention planning, makes you a more effective clinician and a more competitive candidate in the ABA job market.
Whether you are an RBT building your clinical toolkit or a BCBA looking for a role where evidence-based practice is genuinely valued, the next step is making sure employers can see what you bring to the table.
Highlight Your Specialized ABA Skills
CertifyndABA lets you showcase your clinical expertise, including DTT, NET, and other evidence-based skills, on an anonymous profile. Employers who value quality clinical work reach out to you.
Create Your Free Profile