Saturday, March 19, 2011

School Improvement Planning and Data

"The real strength in using a data driven decision making process for school improvement is that numbers are objective. The data just don't lie."

Do you agree or disagree with the statement above? Why?

43 comments:

meg goodhand said...

Data doesn’t lie!

Pick your side…
You can find research to back it.

1) Retention or Social Promotion?
Students who drop out are five times more likely to have been retained than those who graduate (National Center for Education Statistics, 2006).
Or
Karl Alexander (1995) and colleagues reported findings from Baltimore indicating that retainees did somewhat better after retention than they had before.

2) Church = Happiness?
If you go to church even just a few times a year, you will have close friends and be notably happier in life. Those are the findings in a nationwide study published in the Dec. 7 edition of American
or
"This new survey reports that confident nonbelievers are more emotionally healthy …," says a release from the Center for Inquiry (CFI).


Data is purposeful and can really help guide discussions and reflection when looking at your goals. But one must always be mindful that data can be used to sway people's thinking when we don't take the time to fully understand all the variables within a study.

Matthew said...

Numbers are objective, but the collection methods and elements behind the data can be subjective. We saw that in class with the NAEP data: the whole group showed little to no growth, yet Simpson's Paradox revealed that each subgroup had grown. Data is a very effective tool, but it must be used wisely, both in deciding how to collect it and in how to analyze it and form opinions from it. To discount data because of its potential flaws is to refrain from using a very valuable resource; to always follow the judgments drawn from data is to ignore the flaws behind data collection and analysis.
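Matthew's NAEP point can be sketched in a few lines. The numbers below are hypothetical, chosen only to illustrate how Simpson's Paradox works: each subgroup's average rises, yet the overall average falls because the mix of subgroups shifts between years.

```python
# Hypothetical (count, average score) pairs for two subgroups in two years.
# Each subgroup improves by 5 points, but the overall average still drops
# because the lower-scoring subgroup makes up a larger share in year 2.

def overall_mean(groups):
    """Enrollment-weighted mean across (count, mean) pairs."""
    total = sum(n for n, _ in groups)
    return sum(n * m for n, m in groups) / total

year1 = [(800, 220), (200, 180)]  # Group A averages 220, Group B averages 180
year2 = [(500, 225), (500, 185)]  # both subgroups up 5 points

print(overall_mean(year1))  # 212.0
print(overall_mean(year2))  # 205.0 -- overall fell despite subgroup gains
```

Disaggregating is what exposes the paradox: the subgroup trend and the aggregate trend point in opposite directions.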

The group I was part of during Dr. Schainker's class this spring saw this with dropout data - you can find data to fit what you want because there are so many different ways to adjust the method behind the data collection. Data should be collected but you should always be aware of how the data was collected and various “flaws” each data set might have.

Janice said...

The data does not lie, but it does not always tell the entire story. I support data-driven decision making, but you have to make sure you are looking at the whole picture. First of all, you have to make sure the assessments are valid. A bad test is not going to give you good data. It is also important to look at student growth. For example, the teacher who has an honors class where 100% of the class is proficient is not necessarily a better teacher than the teacher with a standard class that is 75% proficient. You also have to look at the demographics of the students. We saw this in class while looking at the NAEP scores. The disaggregated data helps to tell the whole story. Even then you have to be careful, because data can be manipulated. It can also be difficult to compare data from year to year. Tests are often re-normed and new policies are implemented. For example, last year North Carolina mandated that all students who scored a level 2 on the EOC must re-test. You cannot accurately compare the 2008-2009 scores to the 2009-2010 scores if you have retests in the second year but not the first. Data is objective; the interpretation of the data is not necessarily objective or accurate.

Russ Snyder said...

I agree with the statement that numbers are objective and the data doesn't lie. One reason I believe this is that data is productive in allowing us to analyze student achievement. The way we do this is by looking at standardized test scores along with AYP/growth information to determine if our students are being instructed properly. However, there will be some individuals who say that you can't let a few numbers dictate what decisions are made to determine if real learning is taking place. Just because test score data is high does not mean that students are really learning the information or that teachers are instructing effectively. Data can give us feedback on what programs or new initiatives should be created to benefit school improvement. Using what the numbers say lets teachers and administrators see what areas of weakness are present, so the proper improvement or reform strategies can be implemented.
Sometimes data can be misleading when used to make school improvement decisions. For example, comparing a school composed of students with high socioeconomic status to one whose students come from difficult backgrounds can produce misleading data. Therefore, it is important that we use data as a guide, breaking down demographic information to evaluate whether particular students within our schools are learning what they should, without making any single number the main priority. When making school improvement decisions, you must take into account various factors (socioeconomic status, race, past experiences) when evaluating the data. Overall, data-driven decision making is essential in taking numerical information and determining areas of strength and weakness in schools.

Tierre said...

I do agree with the statement; however, numbers can give off false implications or provide unclear explanations of research. Numbers are objective, but they can also be marshaled in support of different programs and activities. Once data is received, it is evaluated and analyzed to display the results most important to the case at hand. Statistics show the true picture of certain situations, yet there are often other contributing factors that have never been discussed or addressed. As Matthew stated, we were in a group charged with researching and finding supportive data on how districts were attempting to address and reduce dropout rates. We researched many different articles and documents, yet we all found different information and supporting facts relevant to different districts' research methods. I am a strong supporter of accountability, and if teachers could use data to lead the classroom environment, then students would be more prepared. Statistics and data can be used to help or hinder the process of making decisions for school improvement.

Coretta said...

The data doesn't lie; however, (1) data can be manipulated; (2) it may not reflect contributing factors; and (3) it can be misinterpreted. Although there are limitations to using data, I would agree that there is real strength in data-driven decisions. However, I would argue that while the numbers may be objective, that does not account for the possibility that the interpretation is not.

The proper use of data is a real strength, because decisions can be made to target areas of concern based on the data. I also believe data provide baseline information.

Based on the data at CES, black students in Dual Language are out-performing black students in traditional classes. However, after carefully analyzing the data, you realize that 100% of the black students in Dual Language equates to one student, unlike the ten black students in the traditional classes. Did the data lie by indicating that 100% of the black students in DL were proficient? No; however, it was misleading to suggest that black students in traditional classes were not performing as well as those in Dual Language.
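The trap Coretta describes is a percentage reported without its sample size. A minimal sketch (the traditional-class figure of 8 proficient out of 10 is assumed for illustration):

```python
# Percentages without sample sizes: one proficient student out of one
# reads as "100%", which looks better than 8 of 10 ("80%").

def proficiency(proficient, total):
    """Return (percent proficient, sample size)."""
    return proficient / total * 100, total

dl = proficiency(1, 1)     # Dual Language: 1 of 1 proficient
trad = proficiency(8, 10)  # traditional classes: 8 of 10 (assumed)

print(f"Dual Language: {dl[0]:.0f}% (n={dl[1]})")
print(f"Traditional:   {trad[0]:.0f}% (n={trad[1]})")
```

Reporting n alongside the percentage is the cheapest guard against this kind of misleading comparison.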

Coretta said...

Russ, what analysis is being made based on EOG testing? What factual information is given to support that students are not being taught properly? Could it be the tests are biased? Maybe students are experiencing testing anxiety or stereotype threat. Whatever the reasons may be for low test scores, I am having a difficult time believing that the score indicates anything other than a number that could be used to support whatever someone decided they wanted it to support.

I do agree that data can be misleading and data driven decisions are essential. However, I would suggest that the decisions are made after evaluating the limitations as well as the information recorded.

Jennifer said...

If the tests are aligned with the curriculum, if the students are taught the curriculum, and if the data is used to address strengths and weaknesses in the curriculum, then yes, there is real strength in using a data-driven decision-making process. But all too often the curriculum and the testing are not aligned, which causes the data to be skewed. I agree with Coretta: "The data doesn't lie; however, (1) data can be manipulated; (2) it may not reflect contributing factors; and (3) it can be misinterpreted." Often there is some disjointedness in this state: the Curriculum & Instruction department is not discussing its visions and missions with the Department of Testing and Accountability. Teachers and students are caught in the middle between following new initiatives that are not aligned to the testing and achieving AYP. What does that data tell us? That the students were taught the information they were tested on? How can we use that data to support anything of substance?

Latisa said...

I completely agree that numbers are objective and can be used to assist in making sound decisions regarding school improvement, but I disagree with the statement that "the data just don't lie." Data is of two types: quantitative and qualitative. These two types of data are so closely related that the lines between them often become blurred. All quantitative data is based upon qualitative judgments, and all qualitative data can be described and manipulated numerically. It is impossible for people to make decisions based solely on numerical data. The "stuff in their heads" always factors into how the data is interpreted. Inevitably, personal biases such as experience, values, assumptions, beliefs, and judgment come into play when interpreting the data and hypothesizing about how to improve student learning.
While I do think that data collection and analysis are a must in school improvement, it’s important to keep in mind that the numbers don’t always tell the whole story, and as a result, shouldn’t be the only factor in decision making.

Latisa said...

Jennifer, I agree with both you and Coretta that data can be manipulated, misinterpreted, and misleading; however, I'm not sure I understand the rest of your position. Are you saying that aligning what's taught to what's being tested would produce more accurate data? If so, what would that data tell us? Do you think it would be any more useful, given that the results of the assessment would still be open to interpretation and could still be skewed?

Susan said...

I agree; however, I feel that much of what is behind the data is subjective. For example, a school can gather data on a student to make placement decisions, but one must take into account many tests and/or observations in order to gather an accurate picture of the student. I have served on the AIG committee at my school for many years and have found that the data you see on paper is not always a true picture of a student's abilities. There were many instances in which students did not qualify according to the established criteria. Once further investigation was conducted, we found many factors that contributed to a lower test score than we would have expected from that student. Data is important, and it does not lie; however, many other factors contribute to the data.

Susan said...

Latisa, I agree. It is very important for educators to realize that no matter how much you rely on the "numbers," there is still going to be individual interpretation.

Russ Snyder said...

I agree with Matthew that there are flaws in using data to make school improvements. Many times data is twisted to make the numbers look one way or another. That's why, when evaluating data, you must consider all elements associated with the school (demographics, socioeconomic status, etc.) to determine its true meaning. I did think the NAEP data in Saturday's class was interesting because it had not changed much from the '70s until now. Even though we use data to analyze what may be going on in a school, it cannot become the basis for change just because we don't like the numbers. Like Janice described, when considering what the data means, you have to look at all the factors and the whole picture. Tests do have flaws, and the data they produce can provide deceptive information for those who are making school improvement decisions.

Tierre said...

As Janice stated, numbers do not always support the efforts of all teachers; however, they do provide information that can be used to assess students' progress against previous scores and data. Issues often stem from outdated curriculum guides or tools, which are very important for monitoring students' current progress. Assessing student growth is another serious topic that needs to be addressed to gauge students' progress on and understanding of teacher-guided lessons. When you are comparing data, there are multiple areas of concern that must be evaluated to ensure that you are accurately recording and tracking the progress of students.

Jennifer said...

I think students aren't always able to perform well on assessments partly because the information on the assessments was not taught in class, either because the teacher hadn't gotten to the objective or because the objectives weren't clearly defined. If the test could at least assess the information that the students were learning, it could tell us more accurately what students comprehend and what they don't. Maybe! I agree with a comment you made in a previous blog or in class (I'm not sure which). What you said, paraphrasing, is that teachers are charged with differentiating instruction to fit the learning style of each student, but assessments are not differentiated to accommodate the variances in student modalities. I believe that the lack of ways to measure student growth impacts data obtained through current testing as well. All I am saying is that there are too many factors that impact the outcome of assessments to use assessment data as the only tool to measure whether students are achieving. You are right: data is subject to interpretation!

Suzanne Sell said...

I'm torn about how I should respond to this question. I don't agree that numbers are completely objective. For instance, there's a very real thing called "testing anxiety." My own daughter, who graduated second in her class at a very competitive high school and received merit scholarships to college, was a HORRIBLE test taker. She would definitely be one of the students throwing scores off in a building. And, with the importance that most schools place on test scores, it stands to reason that we have a lot more kids out there like my daughter who, if assessed in a different manner, would score much higher than they do. Obviously, the opposite is also going to be the case. I am sure there are students who manage to squeak out a 3 on an EOC because they guessed well.
However, because I think that standardized test scores are one of the best ways we have to HOLISTICALLY see how students in a school are doing, I am a proponent of data-driven decision making. If the data is disaggregated according to sub-group and teacher, analyzing test data is one good method of determining how individual students are performing. This analysis should be used by teachers to measure their own effectiveness and how their students are mastering particular objectives. Data is one of the only ways we have to measure whether teachers are individualizing instruction according to their own particular students’ needs since most of the data can be broken down by student and sub-group.
As a school leader, I can use data to help me place teachers where they have their real strengths. Just because a math teacher wants to teach Algebra II, if the data shows that he isn’t getting through to Algebra II students as well as he did with Algebra I students, we need to talk. Accordingly, if a Civics teacher is a master at helping students see cause and effect relationships, but is positively opposed to any form of rote memorization (some of which is necessary in a Civics course), maybe she would be better off teaching an AP Government class.
I don’t think that data tells the whole truth, but it is one of the most efficient things that school leaders have to make scheduling decisions. The trick is to make sure that, as school leaders, we remember that data isn’t completely objective.

Matthew said...

Another element of working with valid data is that you then have to know how to apply it. Even when the data is valid, it alone does not tell you what steps to take going forward. One of the readings for this week exemplified this: a renewed focus on the lowest-scoring area was not the solution needed to improve test scores. Knowing how to use data to accurately identify gaps and discover next steps requires training and time. Data-driven decision making requires a commitment of time to accurately analyze the data and the elements behind it, as well as a commitment to ensuring that staff are capable (given the time and resources needed) of analyzing the data as well.

Suzanne Sell said...

I see that a lot of us agree that numbers don't lie, but that PEOPLE can choose to manipulate data to show what they want that data to show. Part of the reason why I am having such difficulty with this question is that it isn't as cut and dried as one would think. I would say that data doesn't lie, but it is never completely objective, because people choose what to say about the data or what data to include when analyzing a situation. I still remember my first class in college. It was a logic class. Our professor told us to rate some statistics on a Likert scale according to how truthful we thought they were. One of the stats came from a popular Trident sugarless gum commercial: "4 out of 5 dentists surveyed recommend sugarless gum for their patients who chew gum." Someone who didn't really look at this information might think it means that the majority of dentists think we should run right out and buy some sugarless gum. What the marketers were hoping people wouldn't notice was that the statistic really gives us some important information: the marketers CHOSE which dentists to survey (probably choosing only dentists who purchased gum themselves); dentists who weren't surveyed may have been dead-set against patients chewing any gum at all, or may have thought there was no difference in oral health between patients who chewed sugarless gum and those who chewed gum with sugar; and the dentists only made this recommendation for patients who were already gum chewers (perhaps meaning that they would prefer we chewed no gum at all, but that sugarless gum was the lesser of two evils). We know that what people DON'T say may, in fact, be more important than what they do say. When analyzing data, it's just as important to remember that.

Parry Graham said...

From Cyndi:

Oh…please! Really? There is a great potential for error during research and collection of data and statistics. For example, from identifying “population, sample, parameter, and statistics” to “determining whether it is an observational study or experiment” to establishing if the “sample is a simple random sample, a voluntary response sample, a convenience sample” to “recognizing typical forms of biases such as potential under-coverage, non-response, question wording, and response bias” to summarizing the findings by “calculating the principal summary statistics (mean, median, quartiles, inter-quartile range, variance, standard deviation)” (http://www.causeweb.org/repository/LO/Objectives.pdf). The probability of miscalculations is forever present. So I guess it all depends on who is collecting the data and how competent they are. A study by the RAND organization shows “without the availability of high-quality data and perhaps technical assistance, data may become misinformation or lead to invalid inferences.” They continue by stating, “…the presence of raw data does not ensure its use. Rather, once collected, raw data must be organized and combined with an understanding of the situation (i.e., insights regarding explanations of the observed data) through a process of analysis and summarization to yield information. Information becomes actionable knowledge when data users synthesize the information, apply their judgment to prioritize it, and weigh the relative merits of possible solutions” (http://www.rand.org/pubs/occasional_papers/2006/RAND_OP170.pdf).

Consequently, the statement "the real strength in using a data driven decision making process for school improvement is that numbers are objective. The data just don't lie" is completely inaccurate and misleading.

Janice said...

Jennifer, I would agree that it is important for the test to be aligned to the written curriculum. However, one inconsistency is that the taught curriculum does not always align with the written curriculum. You mentioned that data can be skewed if the students are not taught the curriculum. I think what you're asking is, how can we expect students to do well on something they never learned? However, I do not think this is a problem with the data, but with the teaching. As a school administrator, I think this is one of the most important things the data can tell me. If a teacher is not teaching the curriculum, then I want to know so it can be addressed.

Parry Graham said...

Just so you know I wasn't making this up, and to give you an interesting example of how different types of data can be connected together to form some interesting conclusions, here is a summary of Dr. Ronald Ferguson's work connecting the decline of black achievement test scores and the rise of hip hop records:

http://www.catalyst-ohio.org/news/index.php?item=69&cat=25

I leave it to you to determine whether you believe these data are telling any type of truth or not.

Dr. G

Unknown said...

Does data lie? No, but it sure can skew the truth.
Example: "Practically 14% of the students failed to pass the test. This is simply unacceptable!" Or "More than 85% of the students passed the test with sound scores. We are doing quite well!" Same data, two different perspectives. It all depends on who is presenting the data and how they choose to use it.
With this said, I do think that data-driven decisions are a great start. Reliable data in its pure form is objective and takes no position. This is one less variable school leaders have to factor in when making choices. In the end, humans and their flaws have to make the decisions. I would suggest we change the terminology to "data-guided" decision making.
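The two framings of the same pass rate above can be reproduced directly; only the presentation changes, never the number:

```python
# One statistic, two spins: a 14% failure rate is also an 86% pass rate.
fail_rate = 0.14
pass_rate = 1 - fail_rate

print(f"{fail_rate:.0%} of students failed the test. Simply unacceptable!")
print(f"{pass_rate:.0%} of students passed the test. We are doing quite well!")
```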

Unknown said...

Latisa, I really agree with the points that you made. Although data can be accurate, it still does not tell the whole story. The personal biases you spoke of are, in my opinion, actually needed. Data should be a large slice of the pizza but not the whole thing. The human element must be a part of the decision-making process as well. Data shows that 56 mph in a 55 mph zone is illegal. Should that warrant a ticket?

D-Hack said...

I agree the data does not lie, but it does not tell the entire story. For example, when looking at test scores we tend to look at the class percentage of 3s and 4s without considering all the circumstances surrounding the students. Reading level, attendance, and course level are just some of the factors not taken into consideration when evaluating students and teachers, and those factors have a direct correlation to student achievement. This data has often been used to determine a teacher's worth. Data can be misleading. The data doesn't show the amount of time a teacher has dedicated to remediation of students or the hours a teacher puts into analyzing formative testing data to re-teach objectives that students tested poorly on. Data is just that, numbers; decisions cannot be just data driven. Administrators must dig deeper to find the other underlying reasons for success and/or failure.

Shannon said...

I agree that data-based decision making is necessary. Data should be used to make instructional decisions, but educators need to be aware that it can be subjective.

Data can improve teaching efforts when teachers monitor assessment data carefully. Educators often do this to aid curriculum and instruction. We use various forms of assessments to guide what we teach. Furthermore, some data from state required standardized tests may help determine how students are doing. However, so many test biases occur throughout these tests that it is difficult to view the data as valid or reliable. In fact, in my school district, teachers and administrators are finding incorrect answers written into the local Benchmark tests. These incorrect answers are changing students’ test scores. To me, this is such a farce because it continues to happen on every locally created test. Therefore, I find it necessary that educators understand that test data does not always show a student’s true performance ability. Data can be misinterpreted and lead to invalid inferences because of human error.

On the other hand, we must measure success some way. I understand that state assessments can help pinpoint the school, grade level, subject area, and even classroom teachers that need assistance. The question still remains: Are these tests the best way to collect data?

Shannon said...

Susan, I agree with what you said about data being subjective. Furthermore, I agree with your statement that schools must take into account many types of tests to gather an accurate picture of a student’s ability.

Relevant data should be taken from various types of formal and informal assessments. There are times when it is necessary to calculate data, but there are many other times when assessments should not be data driven. Instead, such assessments should be content and performance driven. Educators should use formal and informal assessments to guide their perceptions of a student’s ability.

I feel that data collection can be effective and ineffective. Data collection is least effective when it is not aligned to goals and objectives. When different types of data are collected, but the focus of the data collection is not student outcome, then data-driven decision making will not be useful. Data collection is effective when a variety of assessment sources are studied over time. When various types of data are collected then data is more likely to be objective.

Shannon said...

After reading the article regarding Common Core State Standards for our nation, I wonder if educators will change current state testing practices to a uniform national standardized test. I believe each state should be required to follow not only a national curriculum, but also a national standardized test that delivers consistent data for all. Although subjective data will still be derived from these tests, at least assessments will be consistent throughout the country.

D-Hack said...

While coaching basketball this year, I looked at my statistics for the season and saw that, statistically, my best free throw shooter was at 90%. My second-leading free throw shooter was at 79%. This was surprising because the 79% free throw shooter was my leading scorer. When I dug a little deeper, I realized that the 90% shooter had attempted only 20 free throws, while the 79% shooter had 70 attempts. Her overall percentage was lower, but she had been to the free throw line three times as often, which also told me she was much more active than the 90% shooter. This is just another example of how data can be misleading. Interpretations of data will always be skewed according to the analyzer and what they are trying to accomplish.
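D-Hack's free throw comparison is easy to reproduce. The makes (18 and 55) are assumed here, back-calculated to match the stated 90% and 79%:

```python
# Free throw percentages with their attempt counts attached.
shooters = {
    "shooter A": (18, 20),  # (makes, attempts) -> 90% on 20 attempts
    "shooter B": (55, 70),  # (makes, attempts) -> ~79% on 70 attempts
}

for name, (made, attempts) in shooters.items():
    print(f"{name}: {made / attempts:.0%} on {attempts} attempts")
```

Ranking players by percentage alone puts A first; attaching the attempt counts tells a different story about volume and activity.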

Susan said...

Shannon I agree. If we are going to follow a National Curriculum then we need common assessments. I feel that the curriculum and assessments should have been adopted as a package deal.

meg goodhand said...

Ah, David, so these statistics suggest we should have the kids take the tests 3x as often,
Eh?
Jennifer would go for that idea.

Unknown said...

I agree that data can be subjective and should not be the sole focus of school improvement. One must take all factors into consideration when using individual data. Although numbers do not lie, the method by which the data was collected may lie or fail to tell the whole story. For instance, I do not think the way data is typically used reveals everything or takes into account the outside forces or circumstances that may lead to certain outcomes.
I believe that data can be a great tool to drive decisions. But the most important thing is how the data is being used to make decisions.
Data should be analyzed carefully to drive instruction. The problem is that many times the data is not used to implement change in instruction, or the data is collected without a full understanding of what it means.

Unknown said...

Roderic and Latisa, I agree with all of the statements that you made regarding the use of data and the fact that data alone does not tell the entire story or even begin to help one understand the whole story. In my opinion, all the data means nothing without the human component. Roderic, I really enjoyed the example you used regarding the speeding ticket; that was a perfect example of how data alone is not enough to be the deciding factor.

Matthew said...

Meg, I took David's suggestion a different way. What I took from his post was that you have to dig deeper behind the data to find out what is really going on. In an end-of-game situation, do you trust the person who has only taken 20 free throws, or do you trust the slightly worse (statistically) shooter who has shot three times as many? I don't know what other coaches would decide, but I'm probably going with the 79% free throw shooter.

To bring this analogy towards teaching- when we become principals and inherit a staff, we might immediately see that one teacher always has 90% of their students pass compared to another teacher who only has 60% of their students pass. Which teacher is better? The data says the teacher with 90% passing rate...but what if 90% of those passed the pre-requisite course the year before? Perhaps the teacher with a 60% passing rate always teaches a remedial course of students who failed that exact same test the first time they took it. Which teacher is "better" - the one with 90% passing but no real growth or the one with 60% passing out of a class where all had previously failed?
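The pass-rate comparison above can be made concrete. The prior-year figures are hypothetical, chosen to match the scenario: a crude growth measure ranks the two teachers opposite to raw proficiency.

```python
# Hypothetical pass rates: raw proficiency vs. growth over the prior year.

def growth(prior_rate, current_rate):
    """Change in pass rate from the prior year (a crude growth proxy)."""
    return current_rate - prior_rate

teacher_x = growth(prior_rate=0.90, current_rate=0.90)  # class that mostly passed before
teacher_y = growth(prior_rate=0.00, current_rate=0.60)  # remedial class, all had failed

print(teacher_x)  # 0.0 -- 90% pass, but no growth
print(teacher_y)  # 0.6 -- 60% pass, all of it growth
```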

A random quick note about the EOG: it is important to remember that the EOG is designed to give a larger picture; the data is much more effective at analyzing a school than at analyzing the material learned by one student. Therefore, retests are helpful, as students are shown more content that way, which, if they really learned it, should allow them to pass on their second or third try.

Tristen Perlberg said...

Data is a main reason people make the decisions they do. Parents will decide to send or not to send their child to a school based on the numbers associated with that school. It is true that numbers are objective. However, what people fail to realize is that the number, though still under the same name or umbrella, may be computed using different data from one year to the next.

Unknown said...

Data is a good way to measure progress, but you have to know what the data is telling you. Are the scores measuring a student's maximum potential in reading or math? No. Test scores can only tell you how a student did on a specific day with a specific set of questions. The data doesn't tell you that a student may have had a headache and that the test was only making it worse. The data doesn't tell you that a student may have been up all night because his baby sister wouldn't stop crying. Data is a snapshot and should be treated as such.

To use data effectively, more is better. Collecting data often and through a variety of methods will give you the most accurate picture of a student's progress. The key is a variety of methods. Not every student excels at multiple-choice tests or has the stamina to test for 2 ½ hours straight. The method for collecting the data also needs to be age appropriate. Should 8-year-olds be expected to perform their best for 2 ½ consecutive hours?

Unknown said...

David H,

You made a good point that data can be used to lead information in a certain direction. My group in our last class made this same point about the information that is available about our current dropout rate.

Unknown said...

I totally agree with you, David H. Data alone does not tell the whole story. The human element of interpretation is critical. I would say, though, that your example makes me question my own opinion of data. Who is the better shooter: the young lady who shoots 79% or the one who shoots 90%? I totally understand that as a basketball coach I would give more credit to the player with 70 attempts. Then again, if I step back from my own thoughts for a moment, who is to say that the 90% shooter, had she taken 70 attempts, would not still have made 90%? WOW!! This is pretty interesting!
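One way to put numbers on that "WOW" moment is a confidence interval: a percentage built from few attempts is far less certain than the same percentage built from many. A quick sketch using the Wilson score interval, assuming 55 makes on 70 attempts for the 79% shooter and (purely as an assumption, since the attempt count isn't given) 9 makes on 10 attempts for the 90% shooter:

```python
import math

def wilson_interval(made, attempts, z=1.96):
    """Approximate 95% Wilson score interval for a shooting percentage."""
    p = made / attempts
    denom = 1 + z**2 / attempts
    centre = (p + z**2 / (2 * attempts)) / denom
    half = z * math.sqrt(p * (1 - p) / attempts
                         + z**2 / (4 * attempts**2)) / denom
    return centre - half, centre + half

print(wilson_interval(55, 70))  # roughly (0.68, 0.87) -- fairly tight
print(wilson_interval(9, 10))   # roughly (0.60, 0.98) -- very wide
```

The 90% shooter's interval is so wide that her "true" shooting ability could plausibly be well below the 79% shooter's; the 70-attempt sample simply tells us more.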

David Jordan said...

The numbers do not lie, but the way the data is presented can be manipulated in a variety of ways to favor outcomes desired by involved parties. Data also depends on the assessment and how it is being presented. Too much of the data provided within education does not account for the incredible variety of variables affecting outcomes or perceived outcomes. The example in class was Simpson's Paradox.
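A small worked example (with invented numbers, not the NAEP figures from class) shows how Simpson's Paradox plays out: every subgroup's mean score rises, yet the overall mean falls, because the lower-scoring subgroup makes up a larger share of test-takers in the second year.

```python
# (mean score, number of students) per subgroup, two years
year1 = {"group_a": (80, 400), "group_b": (60, 100)}
year2 = {"group_a": (82, 150), "group_b": (62, 350)}

def overall(groups):
    """Enrollment-weighted mean across all subgroups."""
    total = sum(n for _, n in groups.values())
    return sum(mean * n for mean, n in groups.values()) / total

# Both subgroups improved (80 -> 82 and 60 -> 62), yet:
print(overall(year1))  # 76.0
print(overall(year2))  # 68.0 -- the aggregate went down
```

Whether a report leads with the subgroup trends or the aggregate trend completely changes the story, even though every number is "objectively" correct.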

David Jordan said...

Hopefully, over time, the types of data available to educators will improve beyond standardized-testing score comparisons. This isn't to say that traditional standardized testing is not valuable. I would argue the EOCs provided educators with objective information on the effectiveness of certain approaches to specific objectives.
Wake County has been very proactive in providing information to all involved parties through a variety of online sources, including SPAN and SAM. They have also been active in reevaluating the purpose of grades. As it stands, letter grades are a source of data. One letter grade per class is supposed to demonstrate everything the child accomplished. Is an "A" student always an "A" student no matter where they move in the US? If the purpose of data collection is to get an objective outlook on an issue, I would argue that a letter grade is one of the most subjective sources of data.

Unknown said...

Testing does produce data. Data can be numbers, and one could therefore argue that this would make this data collection an objective task. Unfortunately, the planning of what data will be collected, the designing of how it will be collected, the actual data collection, the analysis of the data, and the final interpretation of the data are all entirely subjective tasks.

So while data can be an effective tool in understanding the academic placement of students and designing instruction to improve their achievement, it should by no means be the only source of information on them. Since data does not always paint an accurate picture, it is important for school leaders to have their staff record student achievement in many different ways: for example, summative testing data, student portfolio data, and teacher-run formative assessments.

Unknown said...

Testing, which produces data, allows for data-driven decisions. However, before rushing to interpret the data, it is important to ask many questions, starting with: is the data we are collecting even important to begin with? Does the data add value to the vision for the school? Much data is relevant, but an equal amount is not relevant to solving any of the issues being faced. There is often the danger of collecting too much irrelevant data, or of ending up with an overly complicated data set. In both cases, the emphasis shifts toward data collection strategies and away from finding solutions to school issues. Using good, simple data to support the school vision should be the direction of travel for the school.

Leaders recognize that collecting the data as a one-shot deal at the end of the course, with the hopes that it will show all their students are proficient, is not a good strategy, if the final aim is student learning.

However, for many leaders under pressure from federal programs to compete for funding and status, student learning is not the immediate concern. The immediate concern is to make sure that students whose scores matter are placed with teachers who have historically produced high scores, and therefore favorable data, for the school. There is definite discrimination in which students are placed with which teachers. Can test data be considered reliable if the selection was not random? I think not.

Many other factors also play into the production of test scores. The demographics of the students play a big role in determining test data outcomes. Is this considered when interpreting the data? Usually not.

Is it fair to the students to have one end of course/grade test and have them and their teachers reap the consequences of that one score?

School leaders should spend time planning how testing will be conducted in their school, so that it focuses on student achievement along with high levels of student learning. Summative data from tests like the EOCs is important, as it gives stakeholders an idea of whether the teaching was aligned to the standard course of study (which is what the EOCs are based upon). However, it is not right to ignore the fact that these summative tests cover only a select part of the curriculum. So if we wanted data that reflected true student learning, we would run frequent formative assessments, followed by re-teaching of concepts that were not understood, all the way up to the summative test.

meg goodhand said...

Matthew, One of my many flaws is forgetting people don't know me. Consequently, my flippant remarks are sometimes taken the wrong way.
More testing is the last thing I would advocate!
I loved David's analogy with the basketball game and thought it was very pertinent. I whole-heartedly agree there is so much more you need to investigate with data.
I am not a coach, but after reading David's post I too would have gone with the 79% shooter.
I will be more careful with my sarcasm!