whitewing09 Posted March 31, 2017

Besides the inclusion of MPI in BOA, what are the major differences in how the two systems evaluate music? One example that comes to mind is Keller. They did consistently well at UIL state, but in BOA they got destroyed in SA finals (although they did do really well in prelims). Also, I'm admittedly very biased, but the quality difference in music performance between Hebron and the other top bands seemed greater than the razor-thin margin they had. Am I missing something?
aaron067 Posted March 31, 2017

Warning...this turned out to be a wall of text. Sorry about that.

MPE tapes often deal with design, especially in regard to the demand placed on the students and how it affects their ability to perform the music. For instance, a group with incredibly difficult music arrangements and pretty strong execution is going to outscore a group with middle-of-the-road arrangements who performs just as strongly, and probably a bit better to our UIL ears. There's obviously a point where high-quality arrangements will not make up for a lack of quality execution, though, and also where truly immaculate execution can make up for weaker arrangements.

The design being involved in the performance scores bothered me until a judge explained the general effect caption to me. There are three aspects to it: (1) the emotional connection with the audience, which is the most important; (2) the intellectual design, and whether the students can clearly demonstrate the complexity of that design; and (3) the artistry, or the detailing and subtle nuance of absolutely every aspect of the performance. That final element is where bands like Marcus and Vandegrift really shine in the GE caption, while bands like Flower Mound and Leander just blow you away with high energy and excitement, and groups like Carmel, Avon, and William Mason excel at the intellectual design. Of course, all the successful groups have a strong marriage of all three elements, but you can begin to really understand how some end up above others when you dig into the GE caption like this.

I bring all of this up because GE does affect UIL scores, even though it isn't a judged element. Take Leander and Vandegrift. In an outdoor stadium, at the end of the season, on a first or second read, Leander has ended up outscoring Vandegrift. When you go back and watch videos, there's really no way Leander should ever top Vandegrift in a UIL contest based on quality of marching and playing (though they're still very strong and worthy of both their 5A and 6A finalist positions); however, their occasional higher placement is easily explained by judges getting carried away with the high energy of their performance, which is a huge factor in a Leander show. It's a lot like watching Bell's 2006 The Remaining live versus on video. It still holds up on video despite some of the issues in the performance captions, but the massive amount of emotion coming from the live performance was just unmatched and makes you forget about any of the other issues.

On the score sheet, UIL really places very little emphasis on difficulty or content. Those sheets are concerned primarily with execution; however, some of the more experienced directors who judge often look for elements in a show that allow them to tick off boxes on the sheet. For instance, a group that features every section of their band successfully, or that has both lyrical and challenging technical features for both brass and woodwinds throughout the show, is going to be given more credit because there are more moments where a judge can be impressed. This means that a group with a show designed to feature and impress is probably going to end up scoring higher than a group with an out-of-the-box type of show with some judges, but not necessarily all. Unfortunately, UIL only requires a brief online course for judges at the Area level, and, beyond that, it's really a very subjective point system.
That's why you see so much variety between rankings...not every judge manages their scores well over the course of a contest, and they're simply reacting based on their own experience and preferences.

Regarding Hebron in particular, I think that may just be a fault of the system. Hebron performed at a level so far beyond the musical capability of all the other groups at GN that there was no precedent for how to compare. Gary Markham himself told me that Hebron's performance last year was one of the most memorable musical performances of all his years involved in BOA because they shifted the paradigm regarding what a high school band was actually capable of doing on the move, with such great control, and while performing such a great variety of styles and dynamics. They couldn't very well score them above a 20.0, and going first obviously didn't help. It's just the nature of this type of scoring system.
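To put rough numbers on those two ideas, here is a minimal sketch of a capped music caption, assuming a single judge who blends the demand of the book with the quality of the execution. The 20.0 ceiling is the cap mentioned above; the 0-10 input scales and the 40/60 weighting are purely hypothetical illustrations, not the actual BOA sheets.

```python
# A minimal sketch, not the official BOA sheets: one music caption with a
# 20.0 ceiling, fed by a judge's read of book demand and execution quality.
# The 0-10 scales and the 40/60 weighting are assumptions for illustration.

def caption_score(demand: float, execution: float, cap: float = 20.0) -> float:
    """Blend demand and execution (assumed 0-10 each), scale to the caption
    range, and clamp at the ceiling."""
    raw = 2.0 * (0.4 * demand + 0.6 * execution)
    return min(raw, cap)

# The MPE point above: a more demanding book performed just as cleanly
# outscores a middle-of-the-road book.
hard_book_clean = caption_score(demand=9.5, execution=9.0)   # 18.4
easy_book_clean = caption_score(demand=7.0, execution=9.0)   # 16.4

# The Hebron point: a group performing beyond anything the sheet anticipated
# still tops out at the 20.0 ceiling, so the printed margin over the next
# band is smaller than the gap the judge actually heard.
paradigm_shift = caption_score(demand=10.0, execution=10.0)  # 20.0, pinned at the cap
very_strong    = caption_score(demand=9.0,  execution=9.3)   # 18.36
print(paradigm_shift - very_strong)                          # only ~1.6 on paper
```

Real sheets obviously involve multiple judges, sub-captions, and comparisons across a whole contest; this only captures the capped-blend idea in miniature.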
bancl Posted April 3, 2017

On 3/31/2017 at 2:26 AM, whitewing09 said: Besides the inclusion of MPI in BOA, what are the major differences in how the two systems evaluate music? ...

Two very different systems. If you look at Keller's scores from SA, they dropped 5 points and went from top 3 in several captions (including being .2 away from tying FloMo in total music performance) to the bottom of finals that night in every caption. Bad performance, perhaps? Friendswood is another good example of a band that historically has done really well in the UIL arena but has missed finals at BOA regionals in the same year.
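For what it's worth, here is the same contrast in miniature for a points-and-placements system, with completely made-up totals (not Keller's or anyone's real SA scores): once summed points are converted into placements, a roughly 5-point drop in a tight finals field is enough to go from mid-pack to the bottom.

```python
# Hypothetical totals only -- not real San Antonio scores. The point is that
# placements come from summed points, so a ~5 point drop in a tight field
# moves a band from mid-pack straight to the bottom.

def placements(scores: dict[str, float]) -> list[tuple[int, str, float]]:
    """Return (place, band, total) sorted by total, highest first."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(place, band, total) for place, (band, total) in enumerate(ordered, start=1)]

prelims = {"Band A": 92.3, "Band B": 91.1, "Band C": 90.4, "Band D": 91.3}
finals  = {"Band A": 94.0, "Band B": 93.0, "Band C": 92.1, "Band D": 86.3}  # D drops 5.0

for label, scores in (("Prelims", prelims), ("Finals", finals)):
    print(label)
    for place, band, total in placements(scores):
        print(f"  {place}. {band}  {total:.1f}")
```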
LeanderMomma Posted April 14, 2017

On 3/31/2017 at 4:02 PM, aaron067 said: [full post quoted above]
LeanderMomma Posted April 14, 2017

Dangit, I was trying to reply and hit the button too soon. I was just going to thank you for a brilliant analysis of what UIL judges are looking for!