Research Issues

///

What Exactly Is “Understanding”? And How Do We Assess It?

BY TERRELL HEICK
5/10/12

Terry Heick is interested in learning innovation in pursuit of increased social capacity. He is editor of Edudemic Magazine for iPad, Director of Curriculum at TeachThought, and a regular blogger for Edutopia.


Assessing understanding might be the most complex task an educator or academic institution faces. Unfortunately, professional development rarely gives the design of quality assessments a level of attention commensurate with that complexity. The challenge of assessment is no less than figuring out what a learner knows, and where he or she needs to go next.

In other words, what does a learner understand?

This in itself is an important shift from the days when curriculum was simply delivered regardless of the student’s content knowledge.

Among the big ideas Richard and Rebecca DuFour brought to mainstream educational consciousness was a shift from teaching to learning, a subtle but critical movement. But even with this shift away from curriculum, instruction and teacher actions, and toward data, assessment and learning, an uncomfortable murkiness remains.

Planning for Learning

In a traditional (and perhaps utopian) academic structure, learning objectives are identified, prioritized, mapped and intentionally sequenced. Pre-assessments are then given to provide the data used to revise planned instruction.

Next, in a collaborative group (PLCs and their data teams being the current trendy format), teachers together disaggregate data, perform item analyses, identify trends and possibilities, and differentiate powerful and compelling instruction for each learner with research-based instructional strategies. Then student understanding is re-assessed, deficiencies are further remediated — rinse, repeat — until the learner demonstrates acceptable evidence of understanding.
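
Read as a process, the cycle above has a simple loop structure: assess, compare against a threshold of “acceptable evidence,” remediate, repeat. Here is a minimal sketch of that loop in Python; the function names, the 0-to-1 scoring scale and the mastery threshold are all illustrative assumptions, not anything prescribed by the PLC literature.

    # A toy model of the assess-remediate cycle described above.
    # All names, scales and thresholds are illustrative assumptions.

    MASTERY_THRESHOLD = 0.8  # stand-in for "acceptable evidence of understanding"
    MAX_CYCLES = 5           # in practice the loop is bounded by time, not mastery

    def assess(learner, objective):
        """Stand-in for any assessment; returns a score between 0 and 1."""
        return learner["evidence"].get(objective, 0.0)

    def remediate(learner, objective):
        """Stand-in for differentiated, research-based instruction."""
        learner["evidence"][objective] = min(1.0, assess(learner, objective) + 0.2)

    def teach_until_understood(learner, objective):
        """Pre-assess, instruct, re-assess, remediate: rinse, repeat."""
        for _ in range(MAX_CYCLES):
            if assess(learner, objective) >= MASTERY_THRESHOLD:
                return True
            remediate(learner, objective)
        return assess(learner, objective) >= MASTERY_THRESHOLD

    learner = {"evidence": {"fractions": 0.3}}
    print(teach_until_understood(learner, "fractions"))  # True, after three rounds of remediation

The point of the sketch is only the shape of the loop; everything hard about the real process is hidden inside assess and remediate.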

But even this Herculean effort (which, incredibly, still leaves gaps) is often not enough, because of the nature of understanding itself.

Defining Understanding

In their seminal Understanding by Design series, Grant Wiggins and Jay McTighe discuss the evasiveness of the term “understanding” by referencing the Taxonomy of Educational Objectives: Cognitive Domain, a book project finished in 1956 by Dr. Benjamin Bloom and colleagues. Quoted by Wiggins and McTighe, Dr. Bloom explains:

“. . . some teachers believe their students should ‘really understand,’ others desire their students to ‘internalize knowledge,’ still others want their students to ‘grasp the core or essence.’ Do they all mean the same thing? Specifically, what does a student do who ‘really understands’ which he does not do when he does not understand? Through reference to the Taxonomy . . . teachers should be able to define such nebulous terms.”

Wiggins and McTighe go on to say that “two generations of curriculum writers have been warned to avoid the term ‘understand’ in their frameworks as a result of the cautions in the taxonomy.”1 Of course, the Understanding by Design (UbD) series is in fact built on a handful of key notions, among them taking on the task of analyzing understanding and then planning for it through backward design.

But to pull back and look at the big picture is a bit troubling. There are so many moving parts in learning: assessment design, academic standards, underpinning learning targets for each standard, big ideas, essential questions, instructional strategies — and on and on and on in an endless, dizzying dance.

Why so much “stuff” for what should be a relatively simple relationship between learner and content?

Because it’s so difficult to agree on what understanding is — what it looks like, what learners should be able to say or do to prove that they in fact understand. Wiggins and McTighe go on in the UbD series to ask, “Mindful of our tendency to use the words understand and know interchangeably, what worthy conceptual distinctions should we safeguard in talking about the difference between knowledge and understanding?”2

Alternatives to Bloom’s Taxonomy

Of course, Wiggins and McTighe also helpfully provide what they call “6 Facets of Understanding,” a sort of alternative (or supplement) to Bloom’s Taxonomy. In this system, learners prove they “understand” if they can:

  1. Explain
  2. Interpret
  3. Apply
  4. Have perspective
  5. Empathize
  6. Have self-knowledge

Robert Marzano also offers up his take on understanding with his “New Taxonomy,” which uses three systems and the Knowledge Domain:

  1. Self-System
  2. Metacognitive System
  3. Cognitive System
  4. Knowledge Domain

The Cognitive System is closest to a traditional taxonomy, with verbs that describe learner actions, such as recall, synthesis and experimental inquiry.

Solution

Of course, there is no single solution to this tangle, but there are strategies educators can use to mitigate the confusion — and, hopefully, to learn to leverage the veritable cottage industry of expertise that is assessment.

1) The first is to be aware of the ambiguity of the term “understands,” and not to settle for paraphrasing it in overly simple words and phrases like “they get it” or “proficiency.” Honor the uncertainty by embracing the fact that “understanding” is not only borderline indescribable, but also impermanent. And the standards? They’re dynamic as well. And vertical alignment? In spots, clumsy and incomplete. This is reality.

2) Secondly, help learners and their families understand that it’s more than just politically correct to say that a student’s performance on a test does not equal their true “understanding”; it’s actually true. If communities only understood how imperfect assessment design can be, they might just run us all out of town on a rail for all these years of equating test scores with expertise.

3) But perhaps the most powerful thing that you can do to combat the slippery notion of understanding is to use numerous and diverse assessment forms. And then — and this part is important — honor the performance on each of those assessments with as much equity as possible. A concept map drawn on an exit slip is no less evidence of understanding than an extended response question on a state exam.

In fact, I’ve always thought of planning not in terms of quizzes and tests, but in terms of a true climate of assessment, where “snapshots” of knowledge are taken so often that assessment becomes truly part of the learning process. This degree of frequency and repetition can also take the procedural edge off assessment, and it creates opportunities for metacognitive reflection post-assessment, such as the “So? So What? What Now?” sequence.

If you are able to reflect all assessment results — formal and informal — in the most visible portion of the learning process, the letter grade itself, learners may finally begin to see for themselves that understanding is evasive, constantly changing, and as dynamic as their own imaginations.
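
As a concrete, deliberately naive illustration of what honoring every snapshot “with as much equity as possible” might look like in a gradebook, here is a short Python sketch that averages all evidence, formal and informal, with equal weight. The assessment names, the 0-100 scale and the letter cutoffs are assumptions made for the example, not a recommended grading policy.

    # A naive sketch of a "climate of assessment" gradebook:
    # every snapshot of understanding, formal or informal, counts equally.
    # Names, scale and cutoffs are assumptions made for illustration.

    snapshots = [
        ("exit-slip concept map", 72),
        ("state-exam extended response", 85),
        ("oral explanation", 90),
        ("peer-taught mini-lesson", 78),
    ]

    def letter_grade(snapshots):
        """Equal-weight average over all evidence, mapped to a letter."""
        mean = sum(score for _, score in snapshots) / len(snapshots)
        for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
            if mean >= cutoff:
                return letter
        return "F"

    print(letter_grade(snapshots))  # "B": no single form of evidence dominates

Weighting the exit slip the same as the state exam is the point of the exercise: under such a policy, no single assessment form can stand in for “understanding” on its own.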


1 Wiggins, Grant, and Jay McTighe. Understanding by Design, Expanded 2nd Edition. ISBN 9780131950849. Web. 7 May 2012.

2 In fact, in Stage 2 of the UbD design process, the task is to “determine what constitutes acceptable evidence of competency in the outcomes and results (assessment),” deftly avoiding the term “understanding” altogether.

///

Writing Purposefully (@WritingPAD)
3/28/12 9:22 PM
Connectivity, co-creativity, metadesigning, collaboration, writing design. Take a look: youtube.com/watch?v=hVMRMR…#WritingPAD #manifesto

|||

Adoption and use of Web 2.0 in scholarly communications

http://rsta.royalsocietypublishing.org/content/368/1926/4039.full

Rob Procter (1,*), Robin Williams (2), James Stewart (2), Meik Poschen (1), Helene Snee (1), Alex Voss (1) and Marzieh Asgari-Targhi (1)

Author affiliations:

  1. Manchester eResearch Centre, University of Manchester, Arthur Lewis Building, Oxford Road, Manchester M13 9PL, UK
  2. Institute for the Study of Science, Technology and Innovation, University of Edinburgh, Old Surgeons’ Hall, High School Yards, Edinburgh EH1 1LZ, UK

  * Author for correspondence (rob.procter@manchester.ac.uk)

Abstract

Sharing research resources of different kinds, in new ways, and on an increasing scale, is a central element of the unfolding e-Research vision. Web 2.0 is seen as providing the technical platform to enable these new forms of scholarly communications. We report findings from a study of the use of Web 2.0 services by UK researchers and their use in novel forms of scholarly communication. We document the contours of adoption, the barriers and enablers, and the dynamics of innovation in Web services and scholarly practices. We conclude by considering the steps that different stakeholders might take to encourage greater experimentation and uptake.


|||

TimesHigherEducation (@timeshighered)
23/06/2011 14:13
Disciplinary tribalism ‘is stifling creativity’ http://t.co/c02QeCJ

Disciplinary tribalism ‘is stifling creativity’

23 June 2011

Are students being short-changed by a narrow approach to learning? Matthew Reisz reports

Although the division of knowledge into discrete, and often tightly policed, disciplinary blocks may be effective in creating “academic tribes and territories”, it often fails to serve the needs of students and society, a scholar has argued.

Gill Nicholls, deputy vice-chancellor (academic development) at the University of Surrey, discussed “the changing nature of disciplines and scholarship” at the recent International Conference on New Directions in the Humanities, in Granada, Spain.

In what she described as “a provocative paper meant to stimulate discussion”, she explored the implications of the power that individual disciplines hold over teaching, learning and pedagogy.

In today’s university, she argued, “academics are deluged by vast quantities of new information. To avoid drowning, and to attain some kind of security, (they) seek to come ashore…on ever-smaller islands of learning and enquiry.”

Yet “the problems of society do not come in discipline-shaped blocks” and it is all too easy to find recent examples of “the dangerous, sometimes fatal narrowness of policies recommended by those (who claim to) possess expert knowledge”.

Today, said Professor Nicholls, we are witnessing the continuing growth of disciplinary speciality.

The very notion of a discipline, however, implies “both a domain to be investigated and the methods used in that domain…emphasising characteristics that separate discrete units of knowledge as opposed to those that might relate them”.

This in turn tends to “separate the academy” and make it difficult for universities to implement an integrated approach to learning. Courses that “aim to impart a pre-defined and fixed amount of established knowledge, concepts and skills” effectively ignore the need for the student to explore and be creative.

At the heart of the problem, said Professor Nicholls, is that disciplines are based on “divisions of knowledge which are useful for the purposes of groups of people: academics, professionals, capitalists and state bureaucrats”.

But although they provide territories, career tracks and identities to the people working within them, they also have the effect of squeezing out anyone who does not fit, excluding the mavericks who can often bring crucial insights.

Asked about the practical implications of her analysis, Professor Nicholls said that “panels and journals need to find ways to allow academics to do interdisciplinary work and be recognised for it”.

Many practical problems require a variety of complementary perspectives, she said. To create the best prosthetics, for example, experts in computer engineering, health services and material science need to come together, and routes to promotion and prestige should aid, not prevent, such collaborations.

There are also important implications for teaching. “Students are often inculcated into asking questions which reinforce the strictures of their disciplines,” she said. “Instead, we should encourage them to look at different ways of interrogating the discipline they love.”

matthew.reisz@tsleducation.com.

___*

http://www.researchresearch.com/

System failure

We lack the tools to understand the health of scholarship in the UK
Research Fortnight
14-06-2011

One of the sharpest lessons UK policymakers have learnt from the banking crisis is the importance of what is called systemic oversight.

Pre-crisis, the government, lobby groups and regulatory agencies were concerned mostly with the oversight of individual banks and with relatively small banking networks. As a result, they took their eye off the banking system as a whole.

Policymakers and regulators also lacked the tools to see when the system was close to collapse, which is why, when collapse (nearly) happened, much more damage was done than might have been the case. Hence more public money had to be injected into banks than would otherwise have been needed.

Research and higher education are not the same as banking, but the idea of systemic oversight—the idea of a system-wide lens to look through—is something universities and researchers would do well to think hard about.

It is clear that, during the past two decades at least, research and higher education have been subjected to a series of changes—both radical and incremental. We know a lot about the detail of individual policies and how to implement them. We have become expert in the various iterations of research assessment, or in how to implement concentration in research funding. Soon, our institutions will become expert in how to survive deep cuts to the teaching grant. We know much less about the impact of these policies—taken together—on the UK knowledge system as a whole.

Knowledge system here stands for scholarship as a whole, and encompasses everything from how many people have passed through higher education to trends in PhD counts as a fraction of the population, trends in research publication metrics, and trends in how many UK research groups are breaking new ground in their fields.

If we regard each of these as a subset of UK scholarship, do we know its overall state of health? Individually, universities may well be at the top of global league tables and UK researchers may be in the upper reaches of publication and citation indices. But on its own, how much does each ranking tell us about the health of the system as a whole?

The answer is that we can take a guess, but we do not really know.

Those in charge of implementing the changes tell us there are at least two reasons why change is necessary: cuts are needed so that a bloated UK public sector learns to live within its means; concentration is needed so that individual scientists and world-leading institutions do not slip from their positions at the top of league tables.

The reality is that there is much uncertainty in the changes and we do not really know what their impacts will be, in part because we lack the right systemic tools.

The best case is that things could be no worse than they already are. The worst case is that if we imagine the UK knowledge enterprise to be a building, governments have been taking a shovel and slowly digging away at the foundations.
