
Darryl Williams’ Framework for Online Lessons – Teach Like a Champion


My colleague Darryl Williams leads our partnership work, in which we work directly with schools to help them achieve their vision of high-quality, equitable instruction in every classroom.

Because he spends so much time working directly with schools as they implement, he’s often the first to propose solutions to emerging challenges, and with COVID-mandated online learning on everyone’s mind, his recent work with a group of our schools is no exception.

Darryl put together a draft framework for what the structure of online lessons could look like across a school. He was building off one of our observations that the biggest challenge of both synchronous and asynchronous instruction is the same: fatigue, tuning out, exhaustion. We know students need face-to-face interactions. But even for adults, hours of Zoom time can be brutal. And asynchronous learning can also be exhausting and isolating. Thus the framework: a fairly brilliant bit of distillation on his part of how to balance the two types of instruction to make online learning both productive and sustainable.

I should note that this model is not meant as a mandate… it would be adapted differently by each school and probably adapted routinely by teachers. But it sets a general structure that’s productive and sustainable and brings some necessary predictability and consistency to what teaching could look like.

In the Lesson Opening, students and teacher are present synchronously. We want a lot of interaction so students feel connected and included and accountable. They should see a smiling face and be asked to do something active–respond to a question in the Zoom chat, say–within the first three minutes. And there should be an ‘orientation screen’ near the beginning so students know what’s coming and what materials they need to participate. And, as I discussed here, they should be able to see that things are planned and time is important. You might imagine a fusion of these two openings… The way Knikki gets started right away and involves everybody. How both she and Sean have a great Opening Screen with materials needed and activities described… And maybe you could mix in some of Sean’s humor and connectedness (even though his video is asynchronous):

https://player.vimeo.com/video/442395423?dnt=1&app_id=122963

After ten or fifteen minutes of all of us together, connected and accountable, then maybe it’s time for some independent work. We love the idea of it being ‘semi-asynchronous’… that is, with cameras still on so you can support and check in with kids as Eric does, brilliantly, here. (Notice how seen and supported his kids feel even though they’re reading independently. And notice that the directions are up on his screen the whole time in case they forget.) Over time this could be fully asynchronous.

https://player.vimeo.com/video/432572826?dnt=1&app_id=122963

After a bit of independent work maybe it’s time to come back to a synchronous setting to check for understanding, process the independent work, and make sure students were productive on their own. That might look a bit like this clip of Ben Esser’s lesson, which again is highly interactive… there’s a great writing prompt that everyone completes. There are loving Cold Calls. There are breakout rooms (and Ben drops into one to see how it’s going), etc.:

https://player.vimeo.com/video/442398877?dnt=1&app_id=122963

Then maybe the lesson shifts again to what we call Flex Time… students get some or all of their homework done. There’s time for you to check in with individuals or small groups who need more support. Darryl notes it’s a great time to offer targeted support for students who require accommodations or special education services… and the fact that the time is relatively predictable makes it easier for service providers to plan that support. Kids might get to do a little reading. But you’d also want to be really clear: what work is due when? Submitted how?

When Darryl shared this I just thought it was tremendously useful as a tool to manage and sustain attention and focus, to balance formats in a relatively predictable way to get the most out of online lessons. Hope it’s helpful to you as well.


5 E’s!

Priscilla's World of Science

5/26/15

The 5 E’s is an instructional model based on the constructivist approach to learning, which holds that learners build or construct new ideas on top of their old ideas.  The 5 E’s can be used with students of all ages, including adults.  Each of the 5 E’s describes a phase of learning, and each phase begins with the letter “E”: Engage, Explore, Explain, Elaborate, and Evaluate.  The 5 E’s allow students and teachers to experience common activities, to use and build on prior knowledge and experience, to construct meaning, and to continually assess their understanding of a concept.

Engage: This phase of the 5 E’s starts the process. An “engage” activity should do the following:

  1. Make connections between past and present learning experiences
  2. Anticipate activities and focus students’ thinking on the learning outcomes of current activities. Students should become mentally engaged in the concept, process, or skill to be learned.


Cognitive Development and Higher Education

Cognitive development across the lifespan throws up an interesting problem for us here in Higher Education. There is fairly widespread agreement that Piaget got his developmental stages pretty close to the mark as he described how people develop from infancy through to adulthood. Although there is some argument about the details, with some adjustments made here and there, the basic premise has pretty well stood the test of time.


The quandary faced by the higher education community lies in the final stage of cognitive development proposed by Piaget: the formal operational thinking stage that emerges in adolescence. As children grow, a normally developing child will reach each cognitive developmental milestone, acquire whatever skills are attached to that stage of thinking, and move on.

As an example, one of the early childhood stages is called egocentrism. Simply put, in this stage (which finishes at about age four), a child thinks that everyone sees and experiences the world the same way that they do. If a child in this stage is viewing a scene and asks you about something they are seeing, they cannot conceive that you might not see exactly what they do, regardless of where you are standing. However, once a child passes through the stage, that doesn’t happen again in their lifetime. I doubt very much that you have experienced this recently, because once the stage is passed it is simply the way you think.

This fairly linear developmental pattern holds true for virtually every cognitive developmental stage that we go through. However, it does not hold for the final, formal operational thinking stage. Although the ability to think at a formal operational level emerges during adolescence, thinking in this way requires teaching and practice. It is the only stage of cognitive development like this. All of the rest of the stages we simply acquire, but the formal operational stage only bestows on us the ability to think that way, not the thinking itself.

Why is this a quandary for higher education? Because the higher part of higher education refers to the thinking that has to be developed for the expression of formal operational thinking. It doesn’t just happen; it has to be taught and practiced. We tend to call this thinking critical thinking and expect that our students arrive with this ability in place and ready to be fully expressed during their higher education. When it doesn’t happen, we are filled with disappointment and blame the secondary school system or the students themselves for not being prepared.

The research demonstrates that only a small fraction (about 10%) of the adult population are ever fully equipped with formal operational thinking skills – whether or not they have received any higher education. Between 30% and 40% of the population completely lack the ability to engage in this type of thought. The remaining 50 to 60 percent have some formal operational thinking skills, ranging from barely demonstrating that they have any to usually, but not always, using them.

Given that we are now educating about 40% (or more) of the general population, how can it be that we are only seeing about 10% able to consistently use formal operational thinking skills to solve problems and analyze information? Because our model of “sit down, shut up, face the front, memorize, and regurgitate,” used in 90% (or more) of higher education classrooms, neither teaches nor requires the use of formal operational thinking skills.

The skills I’m talking about would include some of the following:

  •  a desire to seek, patience to doubt, fondness to meditate, slowness to assert, readiness to consider, carefulness to dispose and set in order; and hatred for every kind of imposture (Bacon 1605)
  • the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action (Paul, 1987)
  • self-guided, self-disciplined thinking which attempts to reason at the highest level of quality in a fair-minded way (Elder)
  • the mental processes, strategies, and representations people use to solve problems, make decisions, and learn new concepts (Sternberg, 1986, p. 3)
  • the propensity and skill to engage in an activity with reflective skepticism (McPeck, 1981, p. 8)
  • reflective and reasonable thinking that is focused on deciding what to believe or do (Ennis, 1985, p. 45)
  • thinking that is goal-directed and purposive, “thinking aimed at forming a judgment,” where the thinking itself meets standards of adequacy and accuracy (Bailin et al., 1999b, p. 287)
  • judging in a reflective way what to do or what to believe (Facione, 2000, p. 61)
  • skillful, responsible thinking that facilitates good judgment because it 1) relies upon criteria, 2) is self-correcting, and 3) is sensitive to context (Lipman, 1988, p. 39)
  • the use of those cognitive skills or strategies that increase the probability of a desirable outcome (Halpern, 1998, p. 450)
  • seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth (Willingham, 2007, p. 8).
  • purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based (Facione, 1990, p. 3)

I have written extensively about the state of higher education today, but our failure to deliver on our historical core purpose beggars belief. We can do better than this.


How could we take something as natural and wonderful as learning and turn it into education?


This work by HE Thoughts is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.
Based on a work at hethoughts.wordpress.com.

Bloom’s Taxonomy — The English Classroom

Bloom’s Taxonomy should be your lifeline to teaching. It outlines low-level to high-level thinking skills: Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation. When constructing a lesson sequence, you should always consider the order of thinking. Do you want your students to memorise facts, analyse data, compare different theories, or evaluate criteria against a piece […]


60 Non-Threatening Formative Assessment Techniques

April 19, 2019 (updated December 18, 2019)

Click here for the full list:

https://eduskillsconsult.files.wordpress.com/2020/08/formative-assessment-tools-levy-county.pdf

by TeachThought Staff

As frequently as a chef needs to check a sauce for taste, teachers should check for understanding.

These can be formal: formative or summative assessments, multiple choice, short answer, essay, matching, and related iconic ‘test’ forms. But they can also be informal: conversations, gallery walks, sketches, and more.

We recently shared the Inconvenient Truths of Assessment, and one of the takeaways from that post by Terry Heick could be that rather than focusing on the design of assessment, we could instead focus on a climate of assessment–a classroom where snapshots of understanding are taken frequently and naturally, without the stress of performance for the student, or the burden of huge, unmanageable data results for the teachers.

So what about assessment as a matter of tone and purpose? If an assessment is non-traditional and non-threatening (or even less traditional and less threatening), how might that impact what it reveals? Does the tone of an assessment matter?

Is informal assessment a ‘lesser’ form altogether?

The Primary Benefit Of Informal Assessment

More than anything else, non-threatening, informal assessment can disarm the process of checking for understanding. The less formal the form, the less guarded or anxious the student might become. Stress and worry can quickly shut down the student’s ability to think, which yields misleading results–a poor “grade” which implies that a student understands a lot less than they actually do.

In that way, the compilation of 60 Tools for Formative Assessment and Processing Activities by Kim Lambert of Levy County Schools in Florida can be useful to you as you collect data from all students, from the polished little academics to students for whom the classroom might be a less-than-comfortable place.

18 Inconvenient Truths About Assessment Of Learning

January 8, 2020

https://www.teachthought.com/pedagogy/the-inconvenient-truths-about-assessment/


by Terry Heick

I. In terms of pedagogy, the primary purpose of an assessment is to provide data to revise planned instruction. It should provide an obvious answer to the question, “So? So what? What now?”

II. It’s an extraordinary amount of work to design precise and personalized assessments that illuminate pathways forward for individual students–likely too much for one teacher to do consistently for every student. This requires a rethinking of learning models, or it encourages corner-cutting. (Or worse, teacher burnout.)

III. Literacy (reading and writing ability) can obscure content knowledge. Further, language development, lexical knowledge (VL), and listening ability are all related to mathematical and reading ability (Flanagan 2006). This can mean that it’s often easier to assess something other than an academic standard than it is knowledge of the standard itself. It may not tell you what you want it to, but it’s telling you something.

IV. Student self-assessment is tricky but a key matter of understanding. According to Ross & Rolheiser, “Students who are taught self-evaluation skills are more likely to persist on difficult tasks, be more confident about their ability, and take greater responsibility for their work.” (Ross & Rolheiser 2001)

V. Assessments of learning can sometimes obscure more than they reveal. If the assessment is precisely aligned to a given standard, and that standard isn’t properly understood by both the teacher and assessment designer, and there isn’t a common language between students, teacher, assessment designer, and curriculum developers about content and its implications, there is significant “noise” in data that can mislead those wishing to use the data, and disrupt any effort towards data-based instruction.

VI. Teachers often see understanding or achievement or career and college-readiness; students often see grades and performance (e.g., a lack or abundance of failure) (Atkinson 1964).

VII. Self-evaluation and self-grading are different. ‘Self-evaluation’ does not mean that the students determine the grades for their assignments and courses instead of the teacher. Here, self-evaluation refers to the understanding and application of explicit criteria to one’s own work and behavior for the purpose of judging if one has met specified goals (Andrade 2006).

VIII. If the assessment is not married to curriculum and learning models, it’s just another assignment. That is, if the data gleaned from the assessment isn’t used immediately to substantively revise planned instruction, it’s, at best, practice and, at worst, extra work for the teacher and student. If assessment, curriculum, and learning models don’t ‘talk’ to one another, there is slack in the chain.

IX. As with rigor, ‘high’ is a relative term. High expectations–if personalized and attainable–can promote persistence in students (Brophy 2004). Overly simple assessments to boost ‘confidence’ are temporary. The psychology of assessment is as critical as the pedagogy and content implications.

X. Designing assessment that has diverse measures of success that ‘speak’ to the student is critical to meaningful assessment. Students are often motivated to avoid failure rather than achieve success (Atkinson 1964).

XI. In a perfect world, we’d ask not “How’d you do on the test?” but “How’d the test do on you?” That is, we’d ask how accurately the test illuminated exactly what we do and don’t understand rather than smile or frown at our ‘performance.’ Put another way, it can be argued that an equally important function of an assessment is to identify what a student does understand. If it doesn’t, the test failed, not the student.

XII. The classroom isn’t ‘the real world.’ It’s easy to invoke ‘the real world’ when discussing grading and assessments (e.g., “If a law school student doesn’t study for the Bar and fails, they don’t get to become a lawyer. The same applies to you in this classroom, as I am preparing you for the real world.”). Children (in part) practicing to become adults is different than the high-stakes game of actually being an adult. The classroom should be a place where students come to understand the ‘real world’ without feeling its sting.

When students fail at school, the lesson they learn may not be what we hope.

XIII. Most teachers worth their salt can already guess the range of student performance they can expect before they even give the assessment. Therefore, it makes sense to design curriculum and instruction to adjust to student performance on-the-fly without Herculean effort by the teacher. If you don’t have a plan for the assessment data before you give the assessment, you’re already behind.

XIV. Every assessment is flawed. (Nothing is perfect.) That means that the more frequent, student-centered, and ‘non-threatening’ the assessment is (here are some examples of non-threatening assessments) the better. It’s tempting to overvalue each assessment as some kind of measuring stick of human potential. At best, it’s an imperfect snapshot–and that’s okay. We just need to make sure teachers and students and parents are all aware and respond to results accordingly.

XV. As a teacher, it’s tempting to take assessment results personally; don’t. The less personally you take the assessment, the more analytical you’ll allow yourself to be.

XVI. Confirmation bias within assessment is easy to fall for–looking for data to support what you already suspect. Force yourself to see it the other way. Consider what the data says about what you’re teaching and how students are learning rather than looking too broadly (e.g., saying ‘they’ are ‘doing well’) or looking for data to support ideas you already have.

XVII. Assessment doesn’t have to mean ‘test.’ All student work has a world of ‘data’ to offer. How much you gain depends on what you’re looking for. (Admittedly, this truth isn’t really inconvenient at all.)

XVIII. Technology can help make data collection simpler and more effective but that’s not automatically true. In fact, if not used properly, technology can even make things worse by providing too much data about the wrong things (making it almost unusable to teachers).

The Inconvenient Truth About Assessment

The Flipped Classroom

The Brainwaves Video Anthology

Jon is a teacher, educational coach, and writer who has had the privilege of helping educators “turn learning on its head.” Jon, along with Aaron Sams, is considered a pioneer in the Flipped Class Movement. He spent 24 years as a middle and high school science teacher before becoming the lead technology facilitator for a school district in the Chicago suburbs. Today Jon is dedicated to writing, speaking, and otherwise promoting the flipped classroom concept. Jon helped found the Flipped Learning Network, a non-profit organization which provides resources and research about flipped learning. In 2002, Jon received the Presidential Award for Excellence in Math and Science Teaching, and in 2010 he was named a semi-finalist for Colorado Teacher of the Year. He serves on the advisory board for TED-Education. In 2013, he and Aaron Sams were named among Tech & Learning’s 10 Most Influential, and in 2014 they were nominees for the Brock International Prize for Education.

This was written in 1916…

We make the child fit the school. The school of tomorrow will make the school fit the child. Education is the acquisition of power and ability, not an accumulation of facts. School should aim at making character.

School should cease to educate the youth away from life. The way to train for life is to begin to live that life at once. School should be shaped to this end. Non-essentials and busy-work nonsense should be eliminated.

JOHN M. MILLS – 1916 – JOURNAL OF EDUCATION