Functionality refers to the full range of what a data tool can do for a user. Understanding a tool's functionality helps a user identify its features and determine which capabilities are needed. How can an organization ensure that a data tool has the functionality it needs? This section provides a series of functionality-related questions to consider before deciding either to purchase data tools or to begin developing them.
Although state and local educational agencies need to collect and report accountability data by subgroup (e.g., adequate yearly progress and teacher quality), it is important for educators to use more than a single measure of achievement to make decisions about student learning. It is therefore critical to ensure that the tool can store achievement data from state tests in both numeric and text formats. It is also important that the tool have the capacity to store data from other sources, including districtwide interim and classroom-based assessments, teacher observations, work samples with rubrics, and digital artifacts of student performance.
While student performance data can give you information about how well students are achieving, they cannot tell you why students are performing that way. To investigate achievement more fully, you need a tool that can store multiple kinds of data from multiple sources. These include:
- Demographic data (e.g., enrollment, attendance, mobility, suspensions, behavior, and dropout)
- Opportunity-to-learn data (e.g., courses taken, time spent on instruction, library usage, after-school programs, and resource allocation)
- Context data (e.g., parent, teacher, and student satisfaction surveys; teacher qualifications; level of teacher professional development; and transportation)
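A tool that stores these kinds of data alongside achievement results needs a record structure broad enough to hold all of them. The sketch below shows one minimal way such a combined student record might look; every field and value here is hypothetical, not drawn from any particular tool.

```python
from dataclasses import dataclass, field

# Illustrative student record combining the data categories above;
# all field names and values are hypothetical.
@dataclass
class StudentRecord:
    student_id: str
    # Achievement data, in both numeric and text formats
    state_test_scores: dict = field(default_factory=dict)   # e.g., scale scores
    proficiency_levels: dict = field(default_factory=dict)  # e.g., "Basic"
    # Demographic data
    attendance_rate: float = 0.0
    suspensions: int = 0
    # Opportunity-to-learn data
    courses_taken: list = field(default_factory=list)
    # Context data
    parent_survey_response: str = ""

record = StudentRecord(
    student_id="S001",
    state_test_scores={"math_2005": 412},
    proficiency_levels={"math_2005": "Basic"},
    attendance_rate=0.96,
    courses_taken=["Algebra I"],
)
print(record.proficiency_levels["math_2005"])
```

The point of the sketch is simply that a single student identifier ties together achievement, demographic, opportunity-to-learn, and context data, which is what makes the cross-category analyses discussed below possible.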
Ensure that the system has sufficient capacity to store all the variables you want to include for longitudinal data analysis across multiple years. Longitudinal data can provide important information to help states, districts, and schools meet NCLB goals and can shed light on the following:
• Individual students' academic growth and proficiency over time
• Academic achievement of particular cohorts over time
• Changes in the achievement gap for specific groups of students
• Student mobility, retention, attrition, and dropout rates
• Prior achievement for all student subgroups
• Predictions of future student achievement
Anderson, S., Fowler, D., Klein, S., et al. (2005). Judging student achievement: Why getting the right data matters. Washington, DC: MPR/NCEA.
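Once multiple years of scores are stored per student, the longitudinal questions above reduce to fairly simple computations. A minimal Python sketch, using hypothetical scale scores:

```python
# Hypothetical longitudinal scores: {student_id: {year: scale_score}}.
scores = {
    "S001": {2003: 395, 2004: 410, 2005: 428},
    "S002": {2003: 402, 2004: 401, 2005: 415},
    "S003": {2003: 388, 2004: 399, 2005: 407},
}

# Individual growth: change in each student's score across the span.
growth = {sid: yearly[2005] - yearly[2003] for sid, yearly in scores.items()}

# Cohort trend: mean score for the whole cohort in each year.
years = [2003, 2004, 2005]
cohort_means = {
    y: sum(yearly[y] for yearly in scores.values()) / len(scores) for y in years
}
print(growth)
print(cohort_means)
```

The same stored structure supports the other questions in the list, such as comparing prior achievement across subgroups, by filtering the records before aggregating.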
Does the tool provide for multiple levels of aggregation and disaggregation to meet the information needs of states, districts, schools, and classrooms, and of NCLB?
NCLB requires educational agencies to report student academic performance data from tests by various demographic groups, including race and ethnicity, gender, English proficiency, migrant status, economic disadvantage, and disability. These categories are useful for highlighting whether some subgroups are achieving less well than others. To be effective, disaggregation should include more than a single characteristic, such as race or gender.
Educators can use increasing levels of disaggregation to determine the effectiveness of school practices and equity of services for different populations and subgroups. For example, academic achievement over time may be better for English language learners (ELLs) who have received English language instruction in one program than for those in another. Additional data disaggregation may reveal that one language program is more effective for students whose native language is Spanish than for those students whose native language is Chinese.
Educators can also disaggregate data for subgroups within major categories to examine the impact of a specific curriculum on student performance. One example would be to consider whether the introduction of a new math curriculum is effective in closing the achievement gap and increasing the participation of underrepresented groups in algebra and higher-level math courses.
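Disaggregating by more than one characteristic at a time, as in the language-program example above, amounts to grouping records by a combination of fields. A small Python sketch with hypothetical data:

```python
from collections import defaultdict

# Hypothetical student records, disaggregated by two characteristics
# at once: instructional program AND native language.
students = [
    {"program": "A", "native_language": "Spanish", "score": 72},
    {"program": "A", "native_language": "Chinese", "score": 61},
    {"program": "B", "native_language": "Spanish", "score": 58},
    {"program": "A", "native_language": "Spanish", "score": 80},
    {"program": "B", "native_language": "Chinese", "score": 66},
]

# Group scores by the (program, native_language) pair.
groups = defaultdict(list)
for s in students:
    groups[(s["program"], s["native_language"])].append(s["score"])

# Mean score per subgroup.
subgroup_means = {k: sum(v) / len(v) for k, v in groups.items()}
for key, mean in sorted(subgroup_means.items()):
    print(key, round(mean, 1))
```

Adding a third characteristic (e.g., gender) is just a matter of widening the grouping key, which is why tools that support arbitrary levels of disaggregation are more flexible than those limited to single-category reports.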
(For more information, see Heritage, M., & Yeagley, R. (2005). Data use and school improvement: Challenges and prospects. In J.L. Herman & E. Haertel (Eds.), Uses and misuses of data for educational accountability and improvement (National Society for the Study of Education Yearbook, Vol. 104, Issue 2, pp. 320-339). Chicago: National Society for the Study of Education. Distributed by Blackwell Publishing.)
It is likely that users will possess a range of skills related to data querying. Therefore, consider tools that can accommodate and support users who are just starting out as well as those who are experienced in querying data. Often, users' querying skills become more sophisticated with data use, so be sure that the tool provides for increasingly complex queries beyond current levels of use. Pre-programmed queries, such as "what is the performance of X students on Y test?" can be useful starting points for data analysis. It is also important that tools allow for custom queries, such as "how does X cohort of students compare to Y cohort of students on the schoolwide math assessment over the past three years?"
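A pre-programmed query like "what is the performance of X students on Y test?" can be pictured as a parameterized query against a student-scores table, where the user supplies only the group and the test. The table and column names below are illustrative, using Python's built-in sqlite3 module:

```python
import sqlite3

# Hypothetical scores table in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (student_group TEXT, test TEXT, score REAL)")
conn.executemany(
    "INSERT INTO scores VALUES (?, ?, ?)",
    [("ELL", "math", 64.0), ("ELL", "reading", 58.0), ("all", "math", 71.0)],
)

# Canned query: "what is the performance of X students on Y test?"
# Only the placeholders vary; the SQL is fixed by the tool.
def group_performance(group, test):
    row = conn.execute(
        "SELECT AVG(score) FROM scores WHERE student_group = ? AND test = ?",
        (group, test),
    ).fetchone()
    return row[0]

print(group_performance("ELL", "math"))
```

Custom queries, by contrast, let the user change the shape of the SQL itself (joins across years, cohort comparisons), which is why a tool that supports both canned and free-form querying serves novices and experienced users alike.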
Does the tool support a level of statistical analysis that is currently required to meet your data needs as well as the capacity to do higher-level analysis as your skills and needs increase?
The use of descriptive data has become a national mandate through NCLB. This law requires reports on academic performance accountability test data by race/ethnicity, gender, English proficiency, migrant status, economic disadvantage, and disability. The information is generated by descriptive analysis (e.g., mean, median, standard deviation, percentiles) and is a starting point for using data effectively. However, while descriptive analysis can provide information about how individuals or groups of students perform, descriptive data do not tell us why students achieve the levels they do.
When selecting a data tool, you will need to ensure that it has the capacity to provide a range of levels of statistical analysis to meet current and future needs. Check whether the tool can accommodate significance testing, or inferential statistics, including t-tests and analysis of variance. Both of these statistical approaches are important for understanding whether results reflect real differences or simply chance. Finding out why things are the way they are requires statistical analyses that go beyond simple bivariate (two-variable) relationships. Techniques that involve multivariate analysis can be used to examine the relative effects of different factors on student learning, such as specific programs, instructional techniques, and staff and resource allocations.
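As one concrete instance of the inferential statistics mentioned above, a pooled two-sample t statistic compares the mean scores of two groups relative to their variability. A minimal Python sketch with hypothetical scores (a full analysis would also derive a p-value from the t distribution, which is omitted here):

```python
import math
from statistics import mean, variance

# Pooled two-sample t statistic; the scores below are hypothetical.
def two_sample_t(a, b):
    n1, n2 = len(a), len(b)
    # Pooled variance across the two groups (assumes similar spread).
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    # Difference in means, scaled by its standard error.
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

program_a = [72, 80, 68, 75, 77]
program_b = [61, 66, 58, 70, 64]
t = two_sample_t(program_a, program_b)
print(round(t, 2))
```

A large t statistic (in absolute value) suggests the difference between the groups is unlikely to be due to chance alone, which is exactly the question descriptive statistics cannot answer on their own.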
When it comes to understanding data, different audiences will have different concerns and levels of sophistication. It is important, then, to make sure that the tool you decide to use has the capacity to meet both the analytic and reporting needs of all stakeholders. For example, parents want to know how their school is faring relative to other schools and how their child is doing in terms of meeting standards. Policy makers want information about how many students are meeting standards, which students are gaining ground and which are falling behind. Building-level administrators and teachers want similar information as well as information about patterns of student performance over time and about the impact of new programs and strategies intended to improve student achievement.
Check the types of analysis that the tool can perform (e.g., descriptive analyses such as counts, percentages, means, and standard deviations; comparative, including cross-sectional; longitudinal; and correlational) and whether the analyses will meet both NCLB and local requirements. Also check that the data report formats are useful to each user group. For example, teachers benefit from visual displays that will help them identify strengths and weaknesses among their students. Teachers also benefit from viewing group reports and having the capacity to drill down to student profiles. Information on the impact of flexible groupings of students on performance that is linked to specific standards or assessment items is also useful to teachers. Administrators can use visual displays of information to help them allocate resources, decide on professional development needs, and assess progress toward goals.
Also check that the tool can generate the reports you need in formats that will be understandable to your audiences. For reporting, a rule of thumb is to target the report to your audience's interests and needs and to make sure that any graphical representations are easy to understand. Clarke and Mayer (2003) advise that graphics should reveal data at different levels of granularity. A graphics display should use, at most, two modes of communication at one time (e.g., text, audio, or pictures), and extraneous sounds or animation should be avoided.
Data that can be queried directly by the user and that can be displayed in a variety of formats can help the user more deeply understand the meaning of the information (Mandinach et al., 2006). You should check that the tool is sufficiently flexible in allowing you to examine the data from different perspectives. These perspectives include a variety of visual displays, such as tables, graphs, and charts, as well as data at different levels of aggregation and disaggregation.
The feedback loop refers to the time between when the data are generated and when the results are reported to the end user. The feedback loop can be either one of the biggest motivators for, or one of the biggest impediments to, effective data use. The tighter the feedback loop (meaning the more quickly data are available), the more useful the data are to the end user (Mandinach et al., 2006). To ensure effective data use, the data must still be relevant when they are reported. Check to see that the data tool can provide data to the end user within an appropriate (i.e., usable) timeframe.
Access should not be restricted if the tool is going to be widely used at different levels of the education system (e.g., by building administrators and by teachers) to improve student achievement. It is particularly beneficial to encourage teacher use of data tools by providing teachers access from classroom and home computers.
Some tools permit users to query data easily via pull-down menus of data elements, report formats, and specific subgroups of students (e.g., ethnicity, gender, and language proficiency). Such tools provide questions for data users to draw on and allow users to manipulate the data with customized queries. You should check the flexibility of all data tools and inquire as to whether the querying function can be used by a wide variety of users or if experts are always needed to conduct queries.
A number of data tools contain instructional materials or links to instructional suggestions and resources based on results of data analysis. However, many tools have few or no links to instructional practices and require teachers to make those links based on the data they have analyzed. Depending on your needs, you should check this aspect of functionality in the data tool.
Some data tools work better with some browsers than with others, some run only within a web browser, and a number of data tools are not cross-platform compatible. You will need to check these issues before deciding on a data tool for your use.
Depending on the capacity of the data tool, you may want to conduct higher-level statistical analyses with other software programs (e.g., SPSS). Make sure you check the tool's compatibility with a range of software packages.
Many districts already using grade book software may want to include student grades in future data analyses. If this is the case, it will be important to check that the data tool is compatible with your existing grade book software. If applicable, check whether grade book data can be imported into and exported from the tool.
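Import and export of grade book data typically happen through an interchange format such as CSV. A minimal Python sketch of such a round trip; the column names are hypothetical:

```python
import csv
import io

# Hypothetical CSV export from a grade book application.
gradebook_csv = "student_id,course,grade\nS001,Algebra I,B+\nS002,Algebra I,A-\n"

# Import: parse the exported file into records the data tool can store.
records = list(csv.DictReader(io.StringIO(gradebook_csv)))

# Export: write the records back out in the same format.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["student_id", "course", "grade"])
writer.writeheader()
writer.writerows(records)

print(records[0]["grade"])
```

If a tool cannot round-trip grade book data in a common format like this, integrating grades into broader analyses will require manual re-entry or custom conversion work.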
A number of data tools provide the capacity to longitudinally store digital artifacts of student performance. Videos of student performance, scanned essays with rubrics, and images of models the students have constructed are examples of artifacts that can provide additional student data to corroborate or contradict test-based scores. Although your immediate plans may not call for storage of digital artifacts, this is a data tool function that you might want to consider using in the future.
Some tools have a goal setting function and the ability to monitor and report progress toward goals. This reporting function can be a useful communication tool to provide stakeholders with periodic information about how well the district or school is progressing toward goals.