
Posted (edited)
  Steeldrum said:
Said judge is a former band director of Haltom.....

 

 

And now the TxBands forums will blow up with theories and hate over allowing a former director to judge a contest his band was in.

Edited by TxRaider13
Posted
  chs31 said:
Can anyone explain how Coppell got a 17 in prelims in Music? That one really makes no sense at all.

 

I think there are several ways Coppell could theoretically get a 17. (Note: I wouldn't have given Coppell 17. I'm arguing possible scenarios.)

 

1) Coppell immediately followed Marcus in prelims. Besides Duncanville, I think anyone following Marcus would see their score a little depressed in music.

 

2) Look at the other prelim scores from the same judge. He obviously valued difficulty. (For example, Haltom at 2. IMHO, Haltom played the hardest book of the day by far.)

 

3) Coppell loses points from a music ensemble perspective because of individuals who stick out. (Bass Trombone, for one.) Also (IMHO) in general their brass blasted and didn't play with resonance (much like Bowie, who were also scored low by the same judge.)

 

Here's the show if you weren't there/don't remember:

 

I don't feel great picking apart the show; I really enjoyed it and had them easily ranked in the top 10. During finals, I pulled up the prelim scores and was shocked by the 17. I spent their finals performance trying to see how someone could come up with that ranking. It's possible. Music judges listen for different things (and that's okay). That's why there are three of them.

Posted
  The Devil said:
I think there are several ways Coppell could theoretically get a 17. (Note: I wouldn't have given Coppell 17. I'm arguing possible scenarios.)

 

1) Coppell immediately followed Marcus in prelims. Besides Duncanville, I think anyone following Marcus would see their score a little depressed in music.

 

2) Look at the other prelim scores from the same judge. He obviously valued difficulty. (For example, Haltom at 2. IMHO, Haltom played the hardest book of the day by far.)

 

3) Coppell loses points from a music ensemble perspective because of individuals who stick out. (Bass Trombone, for one.) Also (IMHO) in general their brass blasted and didn't play with resonance (much like Bowie, who were also scored low by the same judge.)

 

Here's the show if you weren't there/don't remember:

 

I don't feel great picking apart the show; I really enjoyed it and had them easily ranked in the top 10. During finals, I pulled up the prelim scores and was shocked by the 17. I spent their finals performance trying to see how someone could come up with that ranking. It's possible. Music judges listen for different things (and that's okay). That's why there are three of them.

Great, well-thought-out reply!

Posted

My Awards from 5A State 2010:

 

Kids having the most fun

- The Westfield bassoon duo (they did a great job of conveying the sense of "Yeah this is hard. So what? We've got this.")

- The kids who turned the wheels on the Brazoswood vertigo props (I'd go with the snare drummers but they were probably a little sick afterward.)

- The Marcus shadows

 

Most moving performance

- Donna. From "Goin' Home" from Dvorak 9 in the preshow to the closing at the heavenly gates, this was a touching show.

 

Creepiest Show

- Saginaw

 

Coolest moment

- Brazoswood finale

- Watching the kids in wheelchairs from Donna and Coronado participate in marching band

 

Worst moment

- Listening to parents complain about the finalists on the way out of the dome after prelims. It's always going to happen but it's never not disappointing.

 

Biggest surprises

- Area G. Those kids can play! It sometimes feels like the schools not from major metro areas don't get much credit. There's a lot of great teaching going on down in the Valley. Many kids who can play their instruments very well!

- Brazoswood and Reagan not in finals. I thought both of these bands played well enough to get in.

 

Kids with the most poise

- Any kid who had to play a solo while the guard messed with them. That takes concentration.

- "The Queen" from Spring who had to maintain composure while processing very slowly across the field. That was impressive.

 

Most awkward moments

- Waiting for warmup time to be over for bands without a preshow. Sometimes up to 2 minutes. If we're going to embrace this pre-show thing, we've got to make things a bit less awkward for the bands that choose not to do one. For example, announcing prior to the 1 minute mark.

- The clock during the Claudia Taylor Johnson show. It started early.

- The clock during the Woodlands show. It appeared to start late. (It seemed like the show was about 15 seconds over the time limit.) (I'm glad no one was penalized for clock issues. That really takes the focus off of the kids on the field.)

Posted
  The Devil said:
Most awkward moments

[...]

- The clock during the Claudia Taylor Johnson show. It started early.

- The clock during the Woodlands show. It appeared to start late. (It seemed like the show was about 15 seconds over the time limit.) (I'm glad no one was penalized for clock issues. That really takes the focus off of the kids on the field.)

 

Great list, and I just wanted to comment on these two things.

 

The clock absolutely did start way too early for CTJ, and I originally thought that caused the timer to be gun-shy on TWHS's show. But during finals I specifically watched for where the starting point of the show "should" be, and it is much later than everyone thinks it should have started. They play and march quite a bit of what is actually preshow, then get soft for a little while before coming in to the big entrance to the main body of Petrouchka. This misunderstanding is why people thought the clock started really late for TWHS and why they think TWHS actually went over time.

 

That being said, I don't like when there is no real discernible gap between the preshow and the actual show. At the very least, there should be something really dramatically different that happens right when the show actually starts, like playing backfield for the preshow and then turning forward to start the actual show, or a slight pause and gap to allow for announcement/judge timing issues.

Posted
  jmj said:
If we have any splits and wide discrepancies in the judging - it's because every band is just that good.

 

 

Great summary by JMJ. People are always going to moan about judging, but I personally thought the finals judging was pretty close. I sat with a group of MS and HS directors, and not one of them had Duncanville in the top 5, BUT they all had the other 5 in nearly the same order (Coppell/Hebron flipped). There was something about the Duncanville show that played better to the finals judges than the prelims group, but in the end the bands placed pretty much where they should be.

Posted

BTW -- they really need to change the way they handle the final announcements. Only announcing the top 3 is an incredible buzzkill and horrendously anti-climactic. These kids earned their way into the final 10 in the state and deserve to have their moment. It would only take an extra 3 minutes to announce all 10 bands in reverse order of finish and give the kids some closure. Very short-sighted by UIL.

Posted

With all the talk about inconsistent judging, I thought I would run some numbers to rank the bands by judging consistency. In other words, let's rank the bands by how much the judges agreed with each other in their ranks.

 

To do this, I took the worst music rank any judge gave (using Coppell as an example, that would be 17) and subtracted the best music rank (a 2 for Coppell) to get the music consistency number. Then I did the same with the worst and best marching ranks to get the marching consistency number (for Coppell, those would be 3 and 2). I then added the two together for a judging consistency score.

 

If you look at the results, you will see (surprise!) that there was almost unanimous agreement among the judges on the two-time defending state champion Marcus' rank of 1. So one might infer that Marcus has mastered the art of putting together a show the judges will like. After Marcus, you can see that the judges pretty much all agreed where the Area G schools ended up in the prelims.

 

At the other end of the ranking are the schools whose ranks varied wildly from judge to judge. I am a huge fan of Round Rock's show, but there's something about it the judges couldn't agree on. For example, one judge had them 5th in music and another had them 24th. For marching, they were ranked 5th and 25th: (24-5)+(25-5)=39. Keller tied them for judging inconsistency.
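For anyone who wants to check the arithmetic, the consistency score above is just the rank spread per caption, summed. Here's a minimal sketch in Python; the middle judge's ranks below are invented placeholders (the posts only give the extremes), which doesn't matter because the spread depends only on the best and worst rank.

```python
# Judging consistency score: (worst music rank - best music rank)
# plus (worst marching rank - best marching rank). Lower = more agreement.
def judging_spread(music_ranks, marching_ranks):
    music_spread = max(music_ranks) - min(music_ranks)
    marching_spread = max(marching_ranks) - min(marching_ranks)
    return music_spread + marching_spread

# Coppell: music ranks of 2, 2, and 17; marching ranks of 2 and 3.
# (The second "3" is a placeholder for the unknown middle judge.)
print(judging_spread([2, 2, 17], [2, 3, 3]))     # 15 + 1 = 16
# Round Rock: music 5th and 24th; marching 5th and 25th.
print(judging_spread([5, 12, 24], [5, 14, 25]))  # 19 + 20 = 39
```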

 

Rank School Music March Total

1 Marcus 2 1 3

26 Harlingen 1 3 4

35 Alexander 4 1 5

32 Donna 3 2 5

30 Taft 5 1 6

34 Saginaw 5 1 6

33 O'Connor 5 1 6

4 Hebron 6 1 7

8 Berkner 4 3 7

24 Cypress Woods 8 0 8

2 Bell 6 2 8

10 Westlake 9 0 9

13 Johnson 8 1 9

7 The Woodlands 8 1 9

5 Duncanville 3 6 9

36 Americas 2 8 10

6 Bowie 11 0 11

29 Coronado 4 7 11

27 North Shore 9 3 12

25 Hanna 7 5 12

12 Reagan 3 9 12

18 Westfield 9 4 13

17 Anderson 14 1 15

22 Clements 12 3 15

23 Cypress Falls 6 9 15

3 Coppell 15 1 16

11 Brazoswood 14 2 16

14 Rowlett 11 5 16

28 Lopez 9 7 16

20 Langham Creek 7 9 16

31 Hanks 3 13 16

9 Spring 17 6 23

21 Pearland 13 12 25

15 Haltom 21 8 29

19 Keller 15 24 39

16 Round Rock 19 20 39

 

Posted
  mrwood69 said:
I do understand the nature of a multijudge system, however it is very unlikely that for several performances, all the judges only see one of the two different extremes of the shows (the good and bad). At the same time, I don't think your reasoning explains that there also tends to be a judge that is just a consistent problem throughout the board giving bands scores that put them in the opposite half of the list than the others.

 

And that's what I'm trying to get at: what judges "value". That shouldn't be happening. Every judge should be trained according to the standards the organization sets. I mean, when is the last time people complained about BOA recaps, or DCI? No one ever sees the kinds of problems there that they do in UIL recaps. Why? Because those problems don't exist! The only time people complain about judging is when talking about UIL.

 

It makes perfect sense to complain about it. Apparently two of the three music judges in prelims thought Coppell sounded the 2nd best... the other thought 17th. So I don't believe it's just about being excellent in every area, because frankly, in no area of Coppell's music were they the 17th best.

 

The problem UIL has is that they provide absolutely no science for their judges to go by, plain and simple. It needs to be fixed, because these judges are put in the most awkward situations afterwards when they see how incredibly different they were.

The UIL judges are some of the greatest experts and most respected people in music education. They have far more credibility than any of us, and on that basis we ought to trust them despite our own or the other judges' disagreements. You can't say with real validity that in NO area were they 17th - if someone that experienced saw it that way, we have no place to argue. Perhaps that judge looked more for musical impact than pure sonority - in that respect, I would agree they are down the ladder somewhat.

The difference in what the judges value helps to more efficiently separate the best from the worst. The reason people find so much more of a problem with the judging in UIL than in BOA is less about what the judges choose, and much more about how the scores are counted. The discrepancies are wider and more easily spotted purely because of the ranking system versus the decimal system.
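The ranking-versus-decimal point can be illustrated with a toy example. The raw scores below are invented purely for illustration: seventeen bands separated by a tenth of a point each. A decimal recap would show a tiny spread; an ordinal recap of the same scores shows a spread of 16 places.

```python
# Hypothetical raw scores, each 0.1 points apart (invented for illustration).
scores = {f"Band {chr(65 + i)}": 95.0 - 0.1 * i for i in range(17)}

# Sort best to worst and assign ordinal ranks, as a ranking-style recap would.
ordered = sorted(scores, key=scores.get, reverse=True)
ranks = {band: place for place, band in enumerate(ordered, start=1)}

score_gap = max(scores.values()) - min(scores.values())
rank_gap = max(ranks.values()) - min(ranks.values())
print(round(score_gap, 1), rank_gap)  # 1.6 points of score, 16 places of rank
```

The same underlying performance gap looks dramatic or trivial depending on which presentation you publish.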

Posted
  whitewing09 said:
I think that some of the complaining is because of how the scores are presented. UIL presents the ordinal rankings judges give, while BOA presents a value. What if the UIL judges' raw scores were extremely close together? Since the information is presented as ordinals, it doesn't show that maybe the judge thought Coppell's show was excellent but that there were other shows that were a little better.

 

Aren't BOA judges more specific? Like a judge for ensemble sound, individual sound, ensemble marching, individual marching? UIL doesn't have this. We can't say that they need to be trained to think more similarly, because they might think similarly but happen to notice different discrepancies in the show.

 

I think UIL should stay the same. It adds a little spice to our lives. It's cool to see different bands scoring differently at UIL and BOA.

 

This is exactly what I was trying to say - I think you said it better.

Posted
  Joeyz said:
The UIL judges are some of the greatest experts and most respected people in music education. They have far more credibility than any of us, and on that basis we ought to trust them despite our own or the other judges' disagreements. You can't say with real validity that in NO area were they 17th - if someone that experienced saw it that way, we have no place to argue. Perhaps that judge looked more for musical impact than pure sonority - in that respect, I would agree they are down the ladder somewhat. The difference in what the judges value helps to more efficiently separate the best from the worst. The reason people find so much more of a problem with the judging in UIL than in BOA is less about what the judges choose, and much more about how the scores are counted. The discrepancies are wider and more easily spotted purely because of the ranking system versus the decimal system.

I wasn't slamming the judges when I put this list together. My point was to highlight the shows the judges clearly agreed upon (i.e. Marcus being #1 and the Area G schools being ranked last) as well as the shows the judges clearly didn't agree upon. There are many posts in this thread about Round Rock and why they probably received inconsistent judging: their show didn't fit the criteria that the UIL judges are supposed to judge with.

Posted

I totally understand. I liked the list - I found it interesting. My rant wasn't directed towards you - sorry if it seemed that way. But I do find it interesting that Area G schools are consistently so low.

  bchorn said:
I wasn't slamming the judges when I put this list together. My point was to highlight the shows the judges clearly agreed upon (i.e. Marcus being #1 and the Area G schools being ranked last) as well as the shows the judges clearly didn't agree upon. There are many posts in this thread about Round Rock and why they probably received inconsistent judging: their show didn't fit the criteria that the UIL judges are supposed to judge with.
Posted
  bchorn said:

Rank School Music March Total

1 Marcus 2 1 3

26 Harlingen 1 3 4

35 Alexander 4 1 5

32 Donna 3 2 5

30 Taft 5 1 6

34 Saginaw 5 1 6

33 O'Connor 5 1 6

4 Hebron 6 1 7

8 Berkner 4 3 7

24 Cypress Woods 8 0 8

2 Bell 6 2 8

10 Westlake 9 0 9

13 Johnson 8 1 9

7 The Woodlands 8 1 9

5 Duncanville 3 6 9

36 Americas 2 8 10

6 Bowie 11 0 11

29 Coronado 4 7 11

27 North Shore 9 3 12

25 Hanna 7 5 12

12 Reagan 3 9 12

18 Westfield 9 4 13

17 Anderson 14 1 15

22 Clements 12 3 15

23 Cypress Falls 6 9 15

3 Coppell 15 1 16

11 Brazoswood 14 2 16

14 Rowlett 11 5 16

28 Lopez 9 7 16

20 Langham Creek 7 9 16

31 Hanks 3 13 16

9 Spring 17 6 23

21 Pearland 13 12 25

15 Haltom 21 8 29

19 Keller 15 24 39

16 Round Rock 19 20 39

 

 

So in other words, the judges were the MOST inconsistent with RR and Keller.

Posted

Spring has been left out of this discussion. But they were the next least likely group to make finals in the prediction analysis, other than Westlake, and had the greatest spread of placements of all the bands who made finals. I think that makes them very fortunate, and helped out by this system.

Posted

Yes, and I'll say it again just to be perfectly clear: I'm not faulting the judges for being inconsistent here. I think Round Rock had a really good show that didn't line up with the criteria laid out by UIL. As such, that left a lot of room for the judges to interpret how RR should be ranked, and therefore their ranks were all over the place.

  mrwood69 said:
So in other words, the judges were the MOST inconsistent with RR and Keller.
Posted
  Joeyz said:
The difference in what the judges value helps to more efficiently separate the best from the worst. The reason people find so much more of a problem with the judging in UIL than in BOA is less about what the judges choose, and much more about how the scores are counted. The discrepancies are wider and more easily spotted purely because of the ranking system versus the decimal system.

 

Joey brings up an interesting point here. Yes, there are discrepancies between judges, but if we really expected all the judges to be exactly the same, then why have multiple judges? The point of having multiple judges is to offer a variety of perspectives. Marching band isn't basketball or football. It's a subjective art that has recently blossomed into a very complex form. You can compare it to something like gymnastics, but you won't find this kind of complexity in any other contest in the world.

We all have our perspectives on marching bands, influenced by the shows we've marched in, the shows we've had ties to, and the many shows we've witnessed over the years. It's a little funny that some of us (myself included) have attempted to make something so subjective into something so scientific. There is no exact science to making a show or judging a show. It is an art that is slowly being lost in this world of constant competition.

I understand that this forum is made to discuss this kind of thing, but Texas girl has helped me remember to take a step back every once in a while and appreciate what's happening here. Congrats to Marcus, congrats to the finalists, heck, congrats to the 14-piece bands that don't even dream of this level: you've all created something a majority of this world doesn't get the pleasure of being involved with. The very fact that this debate rolls on is a tribute to our passion for this art. Gotta love it.

Posted
  mrwood69 said:
So in other words, the judges were the MOST inconsistent with RR and Keller.

 

Note that these two performances were back to back. I don't know if that means anything to anybody else, just interpret what you want.

Posted
  TxRaider13 said:
This thought that who performs in front of a band, and behind, makes a difference is false... There is a reason they judge to a sheet, not to what the band did before.

 

 

Yeah, drummerjoe is very superstitious.
