4 Assessment Delivery

Chapter 4 of the Dynamic Learning Maps® (DLM®) Alternate Assessment System 2021–2022 Technical Manual—Instructionally Embedded Model (Dynamic Learning Maps Consortium, 2022) describes general test administration and monitoring procedures. This chapter describes updated procedures and data collected in 2022–2023, including a summary of administration time, device use, accessibility support selections, test administration observations, data forensics reports, and test administrator survey responses regarding user experience.

Overall, intended administration features remained consistent with the 2021–2022 implementation, including the use of instructionally embedded assessment in both the fall and spring windows and the availability of accessibility supports.

For a complete description of test administration for DLM assessments, see the 2021–2022 Technical Manual—Instructionally Embedded Model (Dynamic Learning Maps Consortium, 2022). That description includes information on the Kite® Suite used to assign and deliver assessments, testlet formats, accessibility features, the First Contact survey used to recommend testlet linkage levels, available administration resources and materials, and procedures for monitoring assessment administration.

4.1 Overview of Key Features of the Instructionally Embedded Assessment Model

As briefly described in Chapter 1, the DLM assessment system has two available models. This manual describes the Instructionally Embedded assessment model. Consistent with the DLM Theory of Action described in Chapter 1, the DLM assessment administration features reflect multidimensional, nonlinear, and diverse ways that students learn and demonstrate their learning. Test administration procedures therefore use multiple sources of information to assign testlets, including student characteristics, prior performance, and educator judgment.

In the Instructionally Embedded model, the DLM system is designed to assess student learning throughout the year and features flexibility in the choice of assessment content to support the timely use of data to inform instructional planning. Test administrators use the Instruction and Assessment Planner in Educator Portal to administer instructionally embedded testlets. Each testlet is administered after instruction during the fall and spring testing windows so that assessment results can inform subsequent teaching and learning. This assessment model yields summative results based on all instructionally embedded assessments administered across both windows.

With the exception of English language arts (ELA) writing testlets, each testlet contains items measuring one Essential Element (EE) and one linkage level. In reading and mathematics, items in a testlet are aligned to nodes at one of five linkage levels for a single EE. Writing testlets measure multiple EEs and are delivered at one of two levels: emergent (which corresponds with Initial Precursor and Distal Precursor linkage levels) or conventional (which corresponds with Proximal Precursor, Target, and Successor linkage levels).

For a complete description of key administration features, including information on assessment delivery, the Kite® Suite, and linkage level selection, see Chapter 4 of the 2021–2022 Technical Manual—Instructionally Embedded Model (Dynamic Learning Maps Consortium, 2022). Additional information about changes in administration can also be found in the Test Administration Manual (Dynamic Learning Maps Consortium, 2023d) and the Educator Portal User Guide (Dynamic Learning Maps Consortium, 2023c).

4.1.1 Assessment Administration Windows

Testlets are administered in two assessment administration windows: fall and spring.

4.1.1.1 Fall Window

Test administrators use blueprint coverage criteria to decide which EEs and linkage levels to assess for each student throughout the fall window. In 2022–2023, the fall window occurred between September 12, 2022, and December 16, 2022. States were given the option of using the entire window or setting their own dates within the larger window. All states chose to use the full fall window in 2022–2023.

4.1.1.2 Spring Window

Test administrators use the same blueprint coverage criteria to make EE and linkage level selections for the spring window. They can choose, teach, and assess the same EEs and linkage levels as the fall window, or they can choose different EEs and/or linkage levels. In 2022–2023, the spring window occurred between February 6, 2023, and May 19, 2023. States were given the option of using the entire window or setting their own dates within the larger window. Across all states, the spring window ranged from 12 to 15 weeks.

4.2 Evidence From the DLM System

This section describes evidence collected by the DLM system during the 2022–2023 operational administration of the DLM alternate assessment. The categories of evidence include administration time, device use, test administrator selection of linkage levels, blueprint coverage, and accessibility support selections.

4.2.1 Administration Time

Estimated testlet administration time varies by student and subject. Total time varies depending on the number of EEs a test administrator chooses and the number of times a student is assessed on each EE. Testlets can be administered separately across multiple testing sessions as long as they are all completed within the testing window.

The published estimated total testing time per testlet is around 5–10 minutes in mathematics, 10–15 minutes in reading, and 10–20 minutes for writing (Dynamic Learning Maps Consortium, 2023d). The estimated total testing time is 60–75 minutes per student in ELA and 35–50 minutes in mathematics in each of the fall and spring windows. Published estimates are slightly longer than the anticipated time students spend interacting with the assessment because the estimates assume that test administrators need time for setup. The actual amount of testing time per testlet for a student varies depending on each student’s unique characteristics.

Kite Student Portal captured start and end time stamps for every testlet. The difference between these time stamps was calculated for each completed testlet. Table 4.1 summarizes the distribution of test times per testlet. The distribution of test times in Table 4.1 is consistent with the distribution observed in prior years. Most testlets took around 8 minutes or less to complete, with mathematics testlets generally taking less time than ELA testlets. Time per testlet may have been affected by student breaks during the assessment or use of accessibility supports. Testlets with shorter than expected administration times are included in an extract made available to each state education agency. State education agency staff can use this information to monitor assessment administration and follow up as necessary. Testlets time out after 90 minutes.
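
As a minimal sketch (not DLM production code), the per-testlet response times and the Table 4.1 summaries could be reproduced from start and end time stamps as follows, assuming a hypothetical extract with subject, grade, start_time, and end_time columns; the actual Kite extract layout may differ.

```python
# Minimal sketch: summarizing per-testlet response times in minutes.
# Column names and values below are hypothetical placeholders.
import pandas as pd

testlets = pd.DataFrame({
    "subject":    ["ELA", "ELA", "Mathematics", "Mathematics"],
    "grade":      [3, 3, 3, 3],
    "start_time": pd.to_datetime(["2023-02-06 09:00:00", "2023-02-06 09:12:00",
                                  "2023-02-07 10:00:00", "2023-02-07 10:05:00"]),
    "end_time":   pd.to_datetime(["2023-02-06 09:04:30", "2023-02-06 09:20:00",
                                  "2023-02-07 10:02:00", "2023-02-07 10:08:30"]),
})

# Difference between end and start time stamps, expressed in minutes
testlets["minutes"] = (testlets["end_time"] - testlets["start_time"]).dt.total_seconds() / 60

# Distribution of response times per testlet, by subject and grade
summary = (
    testlets.groupby(["subject", "grade"])["minutes"]
    .describe(percentiles=[0.25, 0.5, 0.75])
    .loc[:, ["min", "25%", "50%", "mean", "75%", "max"]]
)
print(summary)
```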

Table 4.1: Distribution of Response Times per Testlet in Minutes
Grade Min Median Mean Max 25Q 75Q
English language arts
3 0.1 3.8 4.8 87.4 2.4 5.8
4 0.2 4.1 5.0 89.8 2.7 6.2
5 0.1 4.2 5.2 88.4 2.6 6.4
6 0.2 4.1 5.1 85.9 2.7 6.3
7 0.2 4.8 5.9 86.4 3.0 7.3
8 0.2 4.2 5.2 84.4 2.7 6.4
9 0.3 4.9 6.2 89.8 3.1 7.4
10 0.2 4.7 5.9 84.8 3.0 7.2
11 0.2 5.2 6.6 80.9 3.3 8.1
12 0.4 5.2 7.3 85.1 3.1 8.4
Mathematics
3 0.1 1.9 2.8 85.6 1.1 3.5
4 0.1 1.7 2.5 90.0 1.0 3.0
5 0.1 1.8 2.6 87.1 1.0 3.0
6 0.1 1.8 2.6 83.4 1.1 3.1
7 0.1 1.5 2.3 89.7 0.9 2.7
8 0.1 1.6 2.5 83.3 1.0 2.9
9 0.1 1.8 2.7 62.8 1.0 3.2
10 0.1 1.9 2.6 86.4 1.1 3.1
11 0.1 1.9 2.8 84.5 1.1 3.3
12 0.3 1.8 2.6   8.2 0.9 3.5
Note. Min = minimum; Max = maximum; 25Q = lower quartile; 75Q = upper quartile.

4.2.2 Device Use

Testlets may be administered on a variety of devices. Kite Student Portal captured the operating system used for each completed testlet. Although these data do not capture the specific devices used to complete each testlet (e.g., SMART Board, switch system), they provide high-level information about how students access assessment content. For example, we can identify how often an iPad is used relative to a Chromebook or traditional personal computer. Figure 4.1 shows the number of testlets completed on each operating system by subject and linkage level for 2022–2023. Overall, 45% of testlets were completed on a Chromebook, 28% on an iPad, 22% on a personal computer, and 5% on a Mac.
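
As a minimal sketch (not DLM production code), the aggregation behind these percentages and Figure 4.1 could be computed as follows, assuming a hypothetical extract with an os column recorded for each completed testlet; the column names and values are placeholders.

```python
# Minimal sketch: tabulating completed testlets by operating system.
import pandas as pd

completed = pd.DataFrame({
    "os": ["Chrome OS", "iPadOS", "Windows", "Chrome OS", "macOS", "iPadOS"],
    "subject": ["ELA", "ELA", "Mathematics", "Mathematics", "ELA", "Mathematics"],
    "linkage_level": ["Initial Precursor", "Target", "Distal Precursor",
                      "Proximal Precursor", "Distal Precursor", "Initial Precursor"],
})

# Overall share of completed testlets by operating system
print((completed["os"].value_counts(normalize=True) * 100).round(1))

# Counts by subject, linkage level, and operating system (cf. Figure 4.1)
print(completed.groupby(["subject", "linkage_level", "os"]).size())
```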

Figure 4.1: Distribution of Devices Used for Completed Testlets

A bar graph showing the number of testlets completed on each device, by subject and linkage level.

Note. PC = personal computer.

4.2.3 Blueprint Coverage

Test administrators selected the EEs on which their students would be assessed from among those available on the ELA and mathematics blueprints in both the fall and spring windows. Table 4.2 summarizes the expected number of EEs required to meet blueprint coverage and the total number of EEs available for instructionally embedded assessments for each grade and subject. A total of 255 EEs (148 in ELA and 107 in mathematics) for grades 3 through high school were available; 12,447 students in those grades participated in the fall window, and 13,194 students participated in the spring window. Histograms in Appendix B.1 summarize the distribution of total unique EEs assessed per student in each grade and subject.

Table 4.2: Essential Elements (EEs) Expected for Blueprint Coverage and Total Available, by Grade and Subject
Grade   English language arts: Expected n, Available N   Mathematics: Expected n, Available N
3   8 17 6 11
4   9 17 8 16
5   8 19 7 15
6   9 19 6 11
7 11 18 7 14
8 11 20 7 14
9–10 10 19 6 26
11–12 10 19
Note. High school mathematics is reported in the 9–10 row. There were 26 EEs available for the 9–11 band. While EEs were assigned to specific grades in the mathematics blueprint (eight EEs in Grade 9, nine EEs in Grade 10, and nine EEs in Grade 11), a test administrator could choose to test on any of the high school EEs, as all were available in the system.

Figure 4.2 summarizes the percentage of students, for each window and overall for the year, in three categories: students who did not meet all blueprint requirements, students who met all blueprint requirements exactly, and students who exceeded the blueprint requirements. Across both subjects and windows, 98% of students in ELA and 97% of students in mathematics met or exceeded blueprint coverage requirements. The coverage rates for the fall and spring windows were similar. For the full year, the proportion of students exceeding blueprint requirements increases if students are assessed on different EEs in the fall and spring windows (i.e., a student may exactly meet requirements in both the fall and spring but exceed requirements overall if different EEs are selected in each window).
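
This full-year effect can be illustrated with a simplified sketch, assuming coverage is judged only by the count of unique EEs assessed relative to the expected number in Table 4.2; the operational blueprint requirements may include additional constraints, and the EE identifiers below are hypothetical.

```python
# Simplified sketch (not the operational DLM logic): classifying a student's
# blueprint coverage from the unique EEs assessed in each window.
def coverage_category(n_unique_ees: int, n_expected: int) -> str:
    """Return 'Not met', 'Met', or 'Exceeded' for one student and subject."""
    if n_unique_ees < n_expected:
        return "Not met"
    if n_unique_ees == n_expected:
        return "Met"
    return "Exceeded"

# Hypothetical grade 3 ELA student (8 EEs expected for coverage; see Table 4.2)
expected = 8
fall_ees = {"EE.RL.3.1", "EE.RL.3.2", "EE.RL.3.3", "EE.RI.3.1",
            "EE.RI.3.2", "EE.RI.3.3", "EE.L.3.1", "EE.W.3.1"}
spring_ees = {"EE.RL.3.1", "EE.RL.3.4", "EE.RI.3.1", "EE.RI.3.4",
              "EE.RI.3.5", "EE.L.3.2", "EE.W.3.1", "EE.W.3.2"}

print(coverage_category(len(fall_ees), expected))               # Met (fall window)
print(coverage_category(len(spring_ees), expected))             # Met (spring window)
print(coverage_category(len(fall_ees | spring_ees), expected))  # Exceeded (full year)
```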

Figure 4.2: Student Blueprint Coverage Status

Bar graph showing the percentage of students in each blueprint coverage category by window. The majority of students are in the 'Met' expectations category.

Figure 4.3 summarizes the percentage of students in each blueprint coverage category based on their complexity band for each subject and window. Overall, the distributions across blueprint coverage categories and complexity bands were roughly similar for ELA and mathematics.

Figure 4.3: Student Blueprint Coverage Status, by Complexity Band

Bar graph showing the percentage of students in each blueprint coverage category by window. Students in the Foundational and Band 3 complexity bands are more likely to not meet blueprint requirements.

4.2.4 Linkage Level Selection

Figure 4.4 shows the percentage of testlets that were administered at the system-recommended linkage level or adjusted from the recommended level. Test administrators may choose to administer multiple testlets for a single EE at multiple linkage levels. Because the recommended linkage level for subsequent testlets on the same EE does not change within each window, we only examined adjustments for the first testlets administered for each student on each EE. Across both windows, 73% of ELA testlets and 69% of mathematics testlets were administered at the recommended linkage level. The most common adjustment was to administer a linkage level below the recommended level. This adjustment was observed for 20% of ELA testlets and 24% of mathematics testlets.
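
The adjustment classification summarized in Figure 4.4 can be expressed as a small sketch, assuming the five linkage levels are ordered from Initial Precursor to Successor; the testlet records below are hypothetical.

```python
# Minimal sketch (not DLM production code): classifying educator adjustments of
# the system-recommended linkage level for the first testlet on each EE.
LEVEL_ORDER = ["Initial Precursor", "Distal Precursor", "Proximal Precursor",
               "Target", "Successor"]
RANK = {level: i for i, level in enumerate(LEVEL_ORDER)}

def adjustment(recommended: str, administered: str) -> str:
    """Return whether the administered level was at, below, or above the recommendation."""
    if RANK[administered] == RANK[recommended]:
        return "At recommended level"
    if RANK[administered] < RANK[recommended]:
        return "Below recommended level"
    return "Above recommended level"

# Hypothetical first testlets (EE, recommended level, administered level) for one student
first_testlets = [
    ("EE.RL.3.1", "Distal Precursor", "Distal Precursor"),
    ("EE.RI.3.2", "Proximal Precursor", "Distal Precursor"),
    ("EE.L.3.1", "Distal Precursor", "Proximal Precursor"),
]
for ee, recommended, administered in first_testlets:
    print(ee, adjustment(recommended, administered))
```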

Figure 4.4: Educator Adjustment of Recommended Linkage Levels

A bar graph showing the percentage of testlets that were administered at, below, or above the recommended linkage level. Most testlets were administered at the recommended level. The most common adjustment was to administer a linkage level below the recommended level.

Based on the linkage level selections that were made by test administrators, Table 4.3 shows the total number of testlets that were administered at each linkage level by subject and window. Because test administrators do not select a specific linkage level for writing testlets, those testlets are not included in Table 4.3. For both subjects and windows, the majority of testlets were administered at the Initial Precursor or Distal Precursor linkage level. Additionally, there is a slight increase in the percentage of testlets administered at the Target and Successor linkage levels in the spring window for both subjects.

Table 4.3: Distribution of Linkage Levels Selected for Assessment
Linkage level   Fall window: n, %   Spring window: n, %
English language arts
Initial Precursor 26,693 35.5 27,535 34.2
Distal Precursor 29,522 39.3 26,404 32.8
Proximal Precursor 15,184 20.2 17,911 22.2
Target   3,438   4.6   7,298   9.1
Successor      377   0.5   1,364   1.7
Mathematics
Initial Precursor 35,831 40.4 39,108 41.0
Distal Precursor 32,547 36.7 30,491 32.0
Proximal Precursor 16,506 18.6 17,573 18.4
Target   3,448   3.9   6,864   7.2
Successor      380   0.4   1,284   1.3

4.2.5 Administration Incidents

DLM staff annually evaluate testlet assignment to promote correct assignment of students to testlets. Administration incidents that have the potential to affect scoring are reported to state education agencies in a supplemental Incident File. No incidents were observed during the 2022–2023 operational assessment windows. Assignment of testlets will continue to be monitored in subsequent years to track any potential incidents and report them to state education agencies.

4.2.6 Accessibility Support Selections

Accessibility supports provided in 2022–2023 were the same as those available in previous years. The DLM Accessibility Manual (Dynamic Learning Maps Consortium, 2023b) distinguishes among supports that are provided in Kite Student Portal via the Personal Needs and Preferences Profile, supports that require additional tools or materials, and supports that are provided by the test administrator outside the system. Table 4.4 shows selection rates for the three categories of accessibility supports. Overall, 12,595 students (>99%) had at least one support selected. The most commonly selected supports in 2022–2023 were human read aloud, spoken audio, and test administrator enters responses for student. For a complete description of the available accessibility supports, see Chapter 4 of the 2021–2022 Technical Manual—Instructionally Embedded Model (Dynamic Learning Maps Consortium, 2022).

Table 4.4: Accessibility Supports Selected for Students (N = 12,595)
Support n %
Supports provided in Kite Student Portal
Spoken audio   8,905 70.7
Magnification   1,788 14.2
Color contrast   1,237   9.8
Overlay color      425   3.4
Invert color choice      257   2.0
Supports requiring additional tools/materials
Individualized manipulatives   4,939 39.2
Calculator   2,591 20.6
Single-switch system      423   3.4
Alternate form–visual impairment      373   3.0
Two-switch system      136   1.1
Uncontracted braille       16   0.1
Supports provided outside the system
Human read aloud 10,958 87.0
Test administrator enters responses for student   8,300 65.9
Partner-assisted scanning      857   6.8
Sign interpretation of text      198   1.6
Language translation of text       93   0.7

4.3 Evidence From Monitoring Assessment Administration

DLM staff monitor assessment administration using various materials and strategies. As in prior years, DLM staff made available an assessment administration observation protocol for use by DLM staff, state education agency staff, and local education agency staff. Project staff also reviewed Service Desk requests and hosted regular check-in calls with state education staff to monitor common issues and concerns during the assessment window. This section provides an overview of the assessment administration observation protocol and its use.

4.3.1 Test Administration Observations

Consistent with previous years, the DLM Consortium used a test administration observation protocol to gather information about how educators in the consortium states deliver testlets to students with the most significant cognitive disabilities. This protocol gave observers, regardless of their role or experience with DLM assessments, a standardized way to describe how DLM testlets were administered. The test administration observation protocol captured data about student actions (e.g., navigation, responding), educator assistance, variations from standard administration, engagement, and barriers to engagement. For a full description of the test administration observation protocol, see Chapter 4 of the 2021–2022 Technical Manual—Instructionally Embedded Model (Dynamic Learning Maps Consortium, 2022).

During 2022–2023, 265 assessment administration observations were collected in eight states. Table 4.5 shows the number of observations collected by state. Of the 265 total observations, 164 (62%) were of computer-delivered testlets and 101 (38%) were of educator-administered testlets. The observations consisted of 131 (49%) ELA reading testlets, 20 (8%) ELA writing testlets, and 114 (43%) mathematics testlets.

Table 4.5: Educator Observations by State (N = 265)
State n %
Arkansas 64 24.2
Iowa 27 10.2
Kansas 46 17.4
Missouri 52 19.6
New Jersey   5   1.9
New York 31 11.7
North Dakota   3   1.1
West Virginia 37 14.0

Observations for computer-delivered testlets are summarized in Table 4.6; behaviors on the test administration observation protocol were identified as supporting, neutral, or nonsupporting. For example, clarifying directions (found in 42.7% of observations) removes student confusion about the task demands as a source of construct-irrelevant variance and supports the student’s meaningful, construct-related engagement with the item. In contrast, using physical prompts (e.g., hand-over-hand guidance) indicates that the test administrator directly influenced the student’s answer choice. Overall, 55% of observed behaviors were classified as supporting, with 1% of observed behaviors reflecting nonsupporting actions.

Table 4.6: Test Administrator Actions During Computer-Delivered Testlets (n = 164)
Action n %
Supporting
Read one or more screens aloud to the student 99 60.4
Navigated one or more screens for the student 78 47.6
Clarified directions or expectations for the student 70 42.7
Repeated question(s) before student responded 51 31.1
Neutral
Used verbal prompts to direct the student’s attention or engagement (e.g., “look at this.”) 57 34.8
Used pointing or gestures to direct student attention or engagement 56 34.1
Entered one or more responses for the student 43 26.2
Asked the student to clarify or confirm one or more responses 29 17.7
Used materials or manipulatives during the administration process 23 14.0
Repeated question(s) after student responded (gave a second trial at the same item) 13   7.9
Allowed student to take a break during the testlet 11   6.7
Nonsupporting
Physically guided the student to a response   6   3.7
Reduced the number of answer choices available to the student   2   1.2
Note. Respondents could select multiple responses to this question.

For DLM assessments, interaction with the system includes interaction with the assessment content as well as physical access to the testing device and platform. The fact that educators navigated one or more screens in 48% of the observations does not necessarily indicate the student was prevented from engaging with the assessment content as independently as possible. Depending on the student, test administrator navigation may either support or minimize students’ independent, physical interaction with the assessment system. While not the same as interfering with students’ interaction with the content of the assessment, navigating for students who are able to do so independently conflicts with the assumption that students are able to interact with the system as intended. The observation protocol did not capture why the test administrator chose to navigate, and the reason was not always obvious.

Observations of student actions taken during computer-delivered testlets are summarized in Table 4.7. Independent response selection was observed in 59% of the cases. Nonindependent response selection may include allowable practices, such as test administrators entering responses for the student. The use of materials outside of Kite Student Portal was seen in 7% of the observations. Verbal prompts for navigation and response selection are strategies within the realm of allowable flexibility during test administration. These strategies, which are commonly used during direct instruction for students with the most significant cognitive disabilities, are used to maximize student engagement with the system and promote the type of student-item interaction needed for a construct-relevant response. However, they also indicate that students were not able to sustain independent interaction with the system throughout the entire testlet.

Table 4.7: Student Actions During Computer-Delivered Testlets (n = 164)
Action n %
Selected answers independently 97 59.1
Navigated screens independently 59 36.0
Selected answers after verbal prompts 48 29.3
Navigated screens after verbal prompts 25 15.2
Navigated screens after test administrator pointed or gestured 24 14.6
Asked the test administrator a question 13   7.9
Used materials outside of Kite Student Portal to indicate responses to testlet items 12   7.3
Revisited one or more questions after verbal prompt(s)   4   2.4
Skipped one or more items   2   1.2
Independently revisited a question after answering it   1   0.6
Note. Respondents could select multiple responses to this question.

Observers noted whether there was difficulty with accessibility supports (including lack of appropriate available supports) during observations of educator-administered testlets. Of the 101 observations of educator-administered testlets, observers noted difficulty in four cases (4%). For computer-delivered testlets, observers noted students who indicated responses to items using varied response modes such as gesturing (24%) and using manipulatives or materials outside of the Kite system (7%). Of the 265 test administration observations collected, students completed the full testlet in 162 cases (61%). In all instances where the testlet was not completed, no reason was provided by the observer.

Finally, DLM assessment administration procedures intend for test administrators to enter student responses with fidelity, including across multiple modes of communication, such as verbal, gesture, and eye gaze. Table 4.8 summarizes students’ response modes for educator-administered testlets. The most frequently observed response mode was gesturing to indicate a response to the test administrator, who then selected the answer.

Table 4.8: Primary Response Mode for Educator-Administered Testlets (n = 101)
Response mode n %
Gestured to indicate response to test administrator who selected answers 63 62.4
Verbally indicated response to test administrator who selected answers 59 58.4
Eye gaze system indication to test administrator who selected answers   4   4.0
No observable response mode   1   1.0
Note. Respondents could select multiple responses to this question.

Observations of computer-delivered testlets when test administrators entered responses on behalf of students provided another opportunity to confirm fidelity of response entry. This support is recorded on the Personal Needs and Preferences Profile and is recommended for a variety of situations (e.g., students who have limited motor skills and cannot interact directly with the testing device even though they can cognitively interact with the onscreen content). Observers recorded whether the response entered by the test administrator matched the student’s response. In 43 of 164 (26%) observations of computer-delivered testlets, the test administrator entered responses on the student’s behalf. In 37 (86%) of those cases, observers indicated that the entered response matched the student’s response; the remaining six observers either responded that they could not tell whether the entered response matched the student’s response or left the item blank.

4.4 Evidence From Test Administrators

This section describes evidence collected from the spring 2023 test administrator survey. Each year, test administrators receive one survey per rostered DLM student that collects information about that student’s assessment experience. As in previous years, the survey was distributed to test administrators in Kite Student Portal, where students completed assessments. Instructions indicated that the test administrator should complete the survey after administration of the spring assessment; however, users could complete the survey at any time. The survey consisted of three blocks. Blocks 1 and 3 were administered in every survey: Block 1 included questions about the test administrator’s perceptions of the assessments and the student’s interaction with the content, and Block 3 included questions about the test administrator’s background, to be completed once per administrator. Block 2 was spiraled, so each test administrator received one randomly assigned section asking about the relationship of the assessment to instruction in one subject (ELA, mathematics, or science).

4.4.1 User Experience With the DLM System

A total of 3,054 test administrators (66%) responded to the survey, providing information about 6,190 students’ experiences. Test administrators were instructed to respond to the survey separately for each of their students. Participating test administrators responded to surveys for between 1 and 16 students, with a median of 1 student. Test administrators reported having an average of 12 years of experience in ELA, 12 years in mathematics, and 10 years teaching students with significant cognitive disabilities.

The following sections summarize responses regarding both educator and student experience with the system.

4.4.1.1 Educator Experience

Test administrators were asked to reflect on their own experience with the assessments as well as their comfort level and knowledge administering them. Most of the questions required test administrators to respond on a 4-point scale: strongly disagree, disagree, agree, or strongly agree. Responses are summarized in Table 4.9.

Nearly all test administrators (96%) agreed or strongly agreed that they were confident administering DLM testlets. Most respondents (92%) agreed or strongly agreed that the Required Test Administrator Training prepared them for their responsibilities as test administrators. Most test administrators agreed or strongly agreed that they had access to curriculum aligned with the content that was measured by the assessments (88%) and that they used the manuals and the Educator Resource page (93%).

Table 4.9: Test Administrator Responses Regarding Test Administration
Statement   SD: n, %   D: n, %   A: n, %   SA: n, %   A+SA: n, %
I was confident in my ability to deliver DLM testlets. 21 1.0 56 2.6 984 45.1 1,119 51.3 2,103 96.4
Required Test Administrator Training prepared me for the responsibilities of a test administrator. 37 1.7 138 6.3 1,094 50.3 908 41.7 2,002 92.0
I have access to curriculum aligned with the content measured by DLM assessments. 53 2.4 215 9.8 1,126 51.6 789 36.1 1,915 87.7
I used manuals and/or the DLM Educator Resource Page materials. 32 1.5 130 5.9 1,203 55.0 823 37.6 2,026 92.6
Note. SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

4.4.1.2 Student Experience

The spring 2023 test administrator survey included three items about how students responded to test items. Test administrators were asked to rate statements from strongly disagree to strongly agree. Results are presented in Table 4.10. The majority of test administrators agreed or strongly agreed that their students responded to items to the best of their knowledge, skills, and understandings; were able to respond regardless of disability, behavior, or health concerns; and had access to all necessary supports to participate.

Table 4.10: Test Administrator Perceptions of Student Experience with Testlets
Statement   SD: n, %   D: n, %   A: n, %   SA: n, %   A+SA: n, %
Student responded to items to the best of his/her knowledge, skills, and understanding. 210 3.6 536 9.3 3,088 53.7 1,920 33.4 5,008 87.1
Student was able to respond regardless of his/her disability, behavior, or health concerns. 405 7.0 651 11.3 2,945 51.1 1,765 30.6 4,710 81.7
Student had access to all necessary supports to participate. 190 3.3 308 5.4 3,167 55.1 2,081 36.2 5,248 91.3
Note. SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

Annual survey results show that a small percentage of test administrators disagree that their student was able to respond regardless of disability, behavior, or health concerns; had access to all necessary supports; and was able to effectively use supports. In spring 2020, DLM staff conducted focus groups with educators who had disagreed with one or more of these survey items to learn about potential accessibility gaps in the DLM system (Kobrin et al., 2022). A total of 18 educators from 11 states participated in six focus groups. The findings revealed that many of the challenges educators described were already documented in existing materials (e.g., requests for clarification about allowable practices, such as substituting materials, that are described in the Test Administration Manual; a desire to use practices from instruction, such as hand-over-hand guidance, that are not allowed during assessment). DLM staff are using the focus group findings to review existing materials and develop new resources that better communicate information about allowable practices to educators.

4.4.2 Opportunity to Learn

The spring 2023 test administrator survey also included items about students’ opportunity to learn. Table 4.11 reports the opportunity to learn results.

Approximately 75% of responses (n = 4,309) reported that most or all ELA testlets matched instruction, compared to 71% (n = 4,088) for mathematics.

Table 4.11: Educator Ratings of Portion of Testlets That Matched Instruction
Subject   None: n, %   Some (< half): n, %   Most (> half): n, %   All: n, %   Not applicable: n, %
English language arts 210 3.6 1,187 20.6 2,404 41.6 1,905 33.0 69 1.2
Mathematics 217 3.8 1,334 23.3 2,358 41.2 1,730 30.2 88 1.5

A subset of test administrators was asked to indicate the approximate number of hours spent instructing students on each of the conceptual areas by subject (i.e., ELA, mathematics). Test administrators responded using a 6-point scale: 0 hours, 1–5 hours, 6–10 hours, 11–15 hours, 16–20 hours, or more than 20 hours. Table 4.12 and Table 4.13 indicate the amount of instructional time spent on conceptual areas for ELA and mathematics, respectively. Around 48% of the test administrators provided at least 11 hours of instruction per conceptual area to their students in ELA, compared to 40% in mathematics.

Table 4.12: Instructional Time Spent on English Language Arts Conceptual Areas
Conceptual area   Median   0 hours: n, %   1–5 hours: n, %   6–10 hours: n, %   11–15 hours: n, %   16–20 hours: n, %   >20 hours: n, %
Determine critical elements of text 6–10 248 12.2 514 25.2 294 14.4 215 10.5 269 13.2 499 24.5
Construct understandings of text 11–15 206 10.2 479 23.6 296 14.6 211 10.4 269 13.3 567 28.0
Integrate ideas and information from text 6–10 259 12.8 471 23.3 332 16.4 239 11.8 243 12.0 476 23.6
Use writing to communicate 6–10 259 12.8 470 23.2 322 15.9 230 11.4 237 11.7 508 25.1
Integrate ideas and information in writing 6–10 382 18.9 503 24.8 311 15.4 219 10.8 219 10.8 391 19.3
Use language to communicate with others 16–20   96   4.7 311 15.4 243 12.0 227 11.2 259 12.8 888 43.9
Clarify and contribute in discussion 11–15 233 11.5 392 19.3 308 15.2 252 12.4 286 14.1 555 27.4
Use sources and information 6–10 511 25.2 483 23.8 308 15.2 234 11.5 216 10.6 279 13.7
Collaborate and present ideas 6–10 458 22.5 488 24.0 321 15.8 238 11.7 215 10.6 314 15.4

Table 4.13: Instructional Time Spent on Mathematics Conceptual Areas
Conceptual area   Median   0 hours: n, %   1–5 hours: n, %   6–10 hours: n, %   11–15 hours: n, %   16–20 hours: n, %   >20 hours: n, %
Understand number structures (counting, place value, fraction) 16–20 122   5.5 362 16.3 325 14.7 271 12.2 296 13.3 842 38.0
Compare, compose, and decompose numbers and steps 6–10 298 13.5 461 20.9 354 16.1 260 11.8 313 14.2 519 23.5
Calculate accurately and efficiently using simple arithmetic operations 11–15 366 16.6 408 18.5 283 12.8 238 10.8 303 13.7 612 27.7
Understand and use geometric properties of two- and three-dimensional shapes 6–10 392 17.8 569 25.8 406 18.4 313 14.2 281 12.7 245 11.1
Solve problems involving area, perimeter, and volume 1–5 909 41.2 473 21.4 326 14.8 190   8.6 157   7.1 151   6.8
Understand and use measurement principles and units of measure 1–5 588 26.7 592 26.8 394 17.9 253 11.5 188   8.5 191   8.7
Represent and interpret data displays 1–5 624 28.4 549 25.0 385 17.5 245 11.1 203   9.2 193   8.8
Use operations and models to solve problems 6–10 577 26.2 496 22.5 331 15.0 268 12.2 229 10.4 300 13.6
Understand patterns and functional thinking 6–10 353 15.9 541 24.4 489 22.1 281 12.7 251 11.3 299 13.5

Another dimension of opportunity to learn is student engagement during instruction. The First Contact survey contains two questions that ask educators to rate student engagement during computer- and educator-directed instruction. Table 4.14 shows the percentage of students who were rated as demonstrating different levels of attention by instruction type. Overall, 84% of students demonstrate fleeting or sustained attention to computer-directed instruction and 82% of students demonstrate fleeting or sustained attention to educator-directed instruction.

Table 4.14: Student Attention Levels During Instruction
Type of instruction   Demonstrates little or no attention: n, %   Demonstrates fleeting attention: n, %   Generally sustains attention: n, %
Computer-directed (n = 11,672) 1,816 15.6 6,855 58.7 3,001 25.7
Educator-directed (n = 13,090) 2,333 17.8 8,613 65.8 2,144 16.4

4.5 Conclusion

Delivery of the DLM system was designed to align with instructional practice and be responsive to individual student needs. Assessment delivery options allow for necessary flexibility to reflect student needs while also including constraints to maximize comparability and support valid interpretation of results. The dynamic nature of DLM assessment administration is reflected in the linkage level and the EE selections made by test administrators. Evidence collected from the DLM system, test administration monitoring, and test administrators indicates that students are able to successfully interact with the system to demonstrate their knowledge, skills, and understandings.