Monitoring & supporting students during online synchronous lessons

Introduction

In this article, I suggest ways to improve the quality of English language teaching (ELT) via webinar apps, e.g. BigBlueButton, Blackboard Collaborate, Adobe Connect, Cisco WebEx, or Zoom, while making minimal changes to learning & teaching as they are currently being practised as emergency remote teaching due to the COVID-19 pandemic & subsequent shelter-in-place measures.

During this time, an extraordinarily high number of teachers & students have been continuing their face-to-face classes online, possibly the highest number in the history of distance education. Now that so many more people have become familiar with at least some aspects of learning & teaching online, I think it’s likely that its flexibility & convenience will be better understood & that, with better planning & preparation, it will have an important role to play in enhancing ELT for the foreseeable future.

By now, many teachers will have been using webinar apps for several weeks to host remote online lessons for their students. They’ve more than likely discovered that webinar apps aren’t entirely appropriate for delivering face-to-face classroom-style lessons & that they’ve had to adapt their teaching styles & lesson planning, & re-design their learning activities, staging, etc.

A particular issue with webinar platforms is that they’re primarily designed as one-to-many presentation tools. Teachers can elicit responses from students, which corresponds to whole-class concept-checking questions, eliciting examples, etc., in the classroom. However, it’s difficult for teachers to monitor how well individual students are doing during a learning task, i.e. to do the remote online equivalent of walking around the face-to-face classroom, looking at students’ work, & giving them guidance/feedback while they attempt learning tasks. This is an essential part of effective teaching whereby the teacher sees what & how each student is doing, corrects misunderstandings, clarifies instructions, fills in gaps in students’ knowledge, & gives personalised guidance & feedback. The following is a way to use existing free web tools/services to facilitate & augment this kind of monitoring.

Useful tools for monitoring students

Synchronised, online document editing platforms, e.g. Collabora & Google Docs (GDocs), enable teachers & students to simultaneously view & edit office documents in real-time, e.g. word processor, spreadsheet, & slide show presentation documents. They include collaboration & feedback tools such as text chat windows, highlighting, & commenting, all of which make them suitable for some types of learning tasks in remote online learning & teaching via webinars. However, monitoring groups of students at once usually involves switching between students’ online documents, which is less than ideal. I’ve written a GDocs Multipage web app, which embeds up to 8 live GDocs into one web page so that the teacher has an overview & can monitor them more easily.
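
Under the hood, there’s very little to such a page: it simply lays the submitted links out in iframes. Here’s a simplified PHP sketch of the general idea – an illustration, not the production code (the real source is on GitHub, linked at the end of this article), & the form field name is invented:

<?php
// Simplified, illustrative sketch only (not the production app): take up to
// 8 shareable GDoc URLs submitted from a web form (the "gdoc_urls" field
// name is invented for this example) & embed each one in its own iframe.
$urls = array_slice(array_filter((array) ($_POST['gdoc_urls'] ?? [])), 0, 8);
?>
<!DOCTYPE html>
<html>
<head><title>GDocs Multipage (sketch)</title></head>
<body>
<?php foreach ($urls as $url): ?>
  <!-- A doc shared as "anyone with the link can edit" can load its normal
       editing UI here, so the teacher sees students' cursors in real-time -->
  <iframe src="<?php echo htmlspecialchars($url, ENT_QUOTES); ?>"
          width="640" height="480"></iframe>
<?php endforeach; ?>
</body>
</html>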

N.B. Test that this works without crashing your computer – Having a webinar app & several GDocs open in your web browser simultaneously uses a lot of memory & processing power! This probably won’t work for teachers on netbooks, Chromebooks, tablet PCs, & other low-power computers. Also, the bigger your screen, the easier it is to see what students are doing.

Conceptual overview

The following is an outline of a strategy to use a combination of 3 web-based tools during an online lesson:

  • A web conferencing service
  • Google Docs (GDocs)
  • My GDocs Multipage web app ‘hack’

In this scenario, we’ll assume that a teacher is teaching 8 students:

  • During the lesson, the teacher will give the 8 students tasks to complete in pairs, i.e. 4 groups of 2 students, in their respective GDocs.
  • While the students are collaborating on the tasks, the teacher will be able to see all 4 GDocs on one page, being edited in real-time.
  • The teacher will be able to drop in on any of the students’ GDocs, communicate with the students via the GDocs text chat, highlight text & other elements, write comments, & make edits & suggestions.
  • In the whole-class follow-up, the teacher will have the option to show & discuss examples from the students’ GDocs with the screen sharing function in the webinar app, taking & uploading screenshots, &/or copying & pasting text.

The lesson

In this example, we’ll assume that students have already completed learning activities in previous lessons &/or asynchronous self-study activities, in which they’ll have watched & listened to the lecture, read the transcripts, & checked their listening & reading comprehension, covering lexical & grammatical items as necessary.

Topic: Listening skills & the academic lecture genre

Objectives: Learning to follow & understand lectures more easily by:

  • Listening for ‘topic & comment’ utterances, i.e. utterances that introduce a new topic or sub-topic
  • Listening for linking adjuncts & other expressions that introduce examples

Materials:

  • Video of an academic lecture
  • The lecture transcript

Tasks: Highlight & comment on the phrases or sentences (markers) that the lecturer uses to signal…

  1. …new parts of the lecture.
  2. …supporting examples.

Step by step instructions

We’ll assume that the students don’t have GDocs accounts. If all the students have accounts, then these steps should be modified to make the GDocs more private & secure.

N.B. Please consider the ethical implications of requiring students to have Google accounts &, wherever possible, consider more open, privacy-oriented, ethical, & GDPR-compliant options such as Collabora: https://www.collaboraoffice.com/collabora-online/

  1. Create a template doc which contains all the instructions, steps, texts, etc. for the students to work on:
    [Screenshot: GDocs template]
    N.B. When you duplicate GDocs, comments aren’t preserved, so it isn’t convenient to use them for task examples, instructions, etc.
  2. Duplicate the template, one copy for each pair of students, i.e. 8 students in 4 pairs = 4 documents, so that you have something like this:
    [Screenshot: GDocs duplicates]
  3. Set the document properties of each copy to: share with a link, anonymous users can edit.
  4. Copy the 4 shareable GDoc links to a document that you can copy & paste from during the webinar. You can also use this document to keep notes on how the students’ task performances go & reflect on them later, e.g.
    1. Doc1: https://docs.google.com/open?id=[document code]
    2. Doc2: https://docs.google.com/open?id=[document code]
    3. Doc3: https://docs.google.com/open?id=[document code]
    4. Doc4: https://docs.google.com/open?id=[document code]
  5. Shortly before the lesson, copy the 4 shareable links into this web form https://matbury.com/html/gdocs-multipage/index.php, which should look like this:
    [Screenshot: GDocs Multipage]
    N.B. If you close the GDocs Multipage that you’ve set up, you’ll more than likely lose it & will have to re-enter the GDocs links to set the page up again, so keep those links handy just in case!
  6. Adjust the width & height so that the GDocs fit more conveniently into your web browser window. You can also scale web pages (i.e. zoom in & out) with Ctrl + mouse scroll wheel. I recommend experimenting to find which combination works best for you.
  7. At the appropriate stage of the webinar lesson, use the screen sharing function to show the document to the students & talk them through the tasks so that they have a clear understanding of what they have to do.
  8. In the document where you’ve copied the 4 GDocs links, write the names of the pairs of students next to each link, e.g.
    1. Joan & David: https://docs.google.com/open?id=[document code]
    2. Marta & Mireia: https://docs.google.com/open?id=[document code]
    3. Xavier & Carles: https://docs.google.com/open?id=[document code]
    4. Magali & Montse: https://docs.google.com/open?id=[document code]
  9. On the webinar platform, put the students into pairs & send them to breakout rooms so that each pair can maintain audio communication.
  10. Copy & paste the corresponding links into the students’ breakout rooms so that they can access their copies of the GDoc. This ensures that the students go to the copies of the document that you intend them to & that you know who is collaborating on which document.
  11. Go to the GDocs Multipage that you created just before the lesson & you’ll see the students arrive on the docs (their avatars show up at the top right of each GDoc window) & their cursors as they type, highlight, etc.

Additional suggestions

To hear a pair of students speaking & to speak with them, you can go into their corresponding breakout room in the webinar app.

It’s a good idea to have a reflective activity set up on the webinar’s main shared whiteboard, e.g. reflective questions, so that as students finish & return to the webinar, they have something to discuss to consolidate their learning while they wait for the others to finish & come back.

I recommend practising this procedure until you are comfortable with it before trying it out in a live lesson, where there’ll be distractions, issues, etc., to deal with: Just like classroom teaching, digital teaching is a complex set of skills that take time & practice to coordinate & master.

Once you’ve got the hang of this, you can try it with different types of docs, e.g. spreadsheets & slide shows, & different types of tasks, e.g. collaborative storytelling, peer-review, inductive focus on form, extensive think-pair-share tasks, & introducing & starting longer-term projects.

Conclusion

This article has outlined a practical suggestion for mitigating one of the limitations of webinar apps & thereby improving the quality of online distance English language teaching. It facilitates an essential component of classroom teaching practice: monitoring students while they perform learning tasks so that in-the-moment, personalised guidance & feedback can be given. In this case, we can see how pedagogical principles, rather than EdTech novelties, have driven the modifications to the design of the teacher’s view of the learning activity, & that having a clear & purposeful concept of what we want to achieve in teaching at a distance enables us to see & adapt to the shortfalls of available digital tools.

I hope you find it useful.

P.S. You can find the GPL3 (free and open source) licensed source code for the GDocs web app ‘hack’ on my GitHub.com account: https://github.com/matbury/gdocs-multipage

Placement Tests: Communicative vs. Grammatical Competence

This blog post marks the end of a long hiatus in my blog writing. I’m starting back with a post about language assessment in general & placement tests in particular, in which I’ll compare & contrast two English as a Foreign or Second Language (EFL/ESL) placement test assessment methods.

Context

I recently had the opportunity to compare & contrast two EFL/ESL assessment methods: semi-structured interviews & discrete-point multiple choice question (MCQ) tests. Both tests were administered to 475 international students who attended an EFL/ESL summer school in Canada. A sample size of n=475 gives reasonably strong statistical power, & the comparison took place in a typical summer school context.

From many years of experience of English language assessment in general & placement testing in particular, I believed that the semi-structured interview would be the more appropriate score to use to assign students to cohorts at similar levels of English language proficiency at the summer school. Semi-structured interviews place the main emphasis on communicative competencies, which are the ultimate goal of language learning. In other words, they’re a more direct form of assessment than typical structurally focused tests. Consequently, students were placed in their classes based entirely on their performance in the semi-structured interviews, & only minor adjustments for incorrect student placement had to be made, mostly due to inter-rater reliability errors & to some students under-performing because of jet-lag &/or test anxiety.

It is preferable to assess communicative competence at international summer schools because they bring together young students from around the world into mixed first language & nationality classes, making English the lingua franca that everyone has in common. In other words, English is the only language in which they can all understand each other & make themselves understood. Summer schools are an ideal opportunity to put into practice the English language that students have been studying year-round & further develop their communication skills. To accommodate this opportunity, English language summer schools often promote learning & teaching methods, such as communicative language teaching (CLT), task-based language learning (TBLL), content & language integrated learning (CLIL), & project-based learning (PBL), which place a strong emphasis on student-to-student cooperation & communicative competence.

Comparison of the Assessment Methods

Firstly, the semi-structured interviews & the discrete-point MCQ test assess different aspects of linguistic knowledge & ability (Table 1, below).

  • Semi-structured interviews assess students’ overall production of coordinated, integrated language skills & communicative competence, i.e. their ability to understand & make themselves understood in English & their ability to maintain a conversation through taking turns & giving appropriate responses.
  • Discrete-point MCQ tests primarily assess students’ ability to recognise correctly & incorrectly formed language structures, appropriate selection of vocabulary, & sometimes appropriate responses to given cues, prompts, &/or questions.

Table 1: A comparison of aspects of language assessed by discrete-point MCQs & semi-structured interviews

Discrete-point MCQs | Semi-structured interviews
Test language recognition | Test language production
Closed, tightly controlled | Open-ended, spontaneous
Pre-defined | Adaptive
Often focus on structural aspects (Usage) | Focus on communicative competence (Use)
Isolated knowledge & skills | Coordinated, integrated knowledge & skills

Details of the Two Tests

Discrete-Point MCQ Test

The grammar test was a typical structural, norm-referenced, progressive discrete-point MCQ test, consisting of 50 items, of the type & quality frequently used in summer schools & language academies.

It’s worth noting that the design & development of effective MCQ language tests is challenging & requires a high degree of expertise. They require highly-skilled, careful, well-informed item writing & frequent statistical analyses of students’ responses to test items, e.g. facility index, discrimination index, & distractor efficiency. In this sense, effective MCQ tests are developed through an iterative process rather than designed.
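
For readers unfamiliar with these item statistics, here’s a minimal PHP sketch of two of them, with invented response data. It illustrates the general idea only, not a production analysis tool – real analyses typically compare the top & bottom 27% of scorers rather than the crude halves used below:

<?php
// Invented response data for illustration: each row is one student's
// right (1) / wrong (0) responses; each column is one test item.
$responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
];

// Facility index: the proportion of students who answered an item correctly.
function facility(array $responses, int $item): float {
    return array_sum(array_column($responses, $item)) / count($responses);
}

// Discrimination index: facility among the highest-scoring students minus
// facility among the lowest-scoring students. Positive values mean the item
// separates stronger & weaker students.
function discrimination(array $responses, int $item): float {
    usort($responses, function ($a, $b) {
        return array_sum($b) <=> array_sum($a); // sort by total score, descending
    });
    $half = intdiv(count($responses), 2);
    return facility(array_slice($responses, 0, $half), $item)
         - facility(array_slice($responses, -$half), $item);
}

echo facility($responses, 0), "\n";       // 0.75: 3 of 4 students got item 1 right
echo discrimination($responses, 0), "\n"; // 0.5: item 1 favours stronger students
?>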

In contrast, administering MCQ tests is relatively straightforward & they’re easy to grade, e.g. untrained staff members can grade the tests with an answer key, or computer-administered & -graded tests can be implemented.

So MCQ tests require high expertise, time, & effort to develop but low expertise to administer & grade.

The Semi-Structured Interview

The speaking test was a semi-structured interview assessing communicative competencies & language functions as outlined in the Common European Framework of Reference for Languages (CEFR). The range of the speaking test was subdivided into 12 levels of language functions & communicative competencies, which is more fine-grained than typical placement tests & aligned with Trinity College London’s Graded Examinations in Spoken English (GESE), as indicated in Table 2 (below).

Table 2: Correspondence between CEFR & speaking test levels.

CEFR | Level
A1.1 | 1
A1.2 | 2
A2.1 | 3
A2.2 | 4
B1.1 | 5
B1.2 | 6
B2.1 | 7
B2.2 | 8
B2.3 | 9
C1.1 | 10
C1.2 | 11
C2 | 12

In contrast to the discrete-point MCQ test, the semi-structured interviews required qualified, experienced teachers to administer, at an average of 5 minutes per interview. Testers were provided with a criterion-referenced rubric & some additional visual materials to act as stimuli to elicit language, particularly from lower-proficiency students.

The exact duration of each interview mostly depends on the student’s language proficiency: the lower the proficiency of the student, the less time the interview would take &, conversely, the higher the level, the longer the interview. For example, a cohort of 150 students would take 10 teachers about 75 minutes to interview (150 × 5 / 10 = 75), but if the levels are skewed towards the lower end, which is often the case at EFL/ESL summer schools, it would take substantially less time.

Administering the speaking test is therefore more time-consuming & requires a larger number of more highly-qualified & experienced testers. However, designing it is relatively quick, as it relies to a high degree on the testers’ expertise to judge students’ language proficiency.

Comparison of the Test Results

As mentioned earlier, the semi-structured interviews were a valid & reliable predictor of students’ ability to participate in classes.

A frequency analysis (Figure 1, below) revealed that the distribution of grades was indeed skewed towards the lower end of the English proficiency scale, centred at around level 4 (CEFR A2.2).

Figure 1: Frequency distribution of the students’ scores (n=475) from the semi-structured interviews.


A scatter-plot of the test results is a quick & easy-to-interpret way to compare each student’s grammar test score with their speaking test score. Each dot represents one or more students’ scores, with the speaking test score on the x-axis (horizontal) & the grammar test score on the y-axis (vertical). If there were a strong correlation between the speaking & grammar tests, we would expect to see a relatively close grouping along a diagonal line from bottom left to top right (Figure 2, below).

Figure 2: Hypothetical representation of a strong correlation between the discreet-point MCQ test & the semi-structured interviews.

But what we see from the analysis of the actual test results is a wide variation between the semi-structured interviews & the discrete-point MCQ test (Figure 3, below).

Figure 3: Representation of the actual correlation between the discreet-point MCQ test & the semi-structured interviews.

For example, the students who scored 48-50 in the discrete-point MCQ test scored across a range of levels 4-11 (CEFR A2.2-C1.2) in the semi-structured interviews, i.e. error margins of up to 9 levels. Using the discrete-point MCQ test results to place students in classes would result in students at levels 4 & 11 being placed in the same class. In other words, the MCQ test was a poor predictor of students’ communicative ability & of their ability to participate in the appropriate level of class at the summer school.

This analysis of the test results clearly illustrates how pronounced the difference typically is between assessing grammatical competence & assessing communicative competence.
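
For anyone who wants to quantify this kind of (dis)agreement rather than eyeball a scatter-plot, a correlation coefficient does the job. Here’s a minimal PHP sketch; the scores in it are invented placeholders, not the study data:

<?php
// Invented placeholder scores for illustration only, one entry per student:
$speaking = [4, 6, 5, 9, 3];      // semi-structured interview levels (1-12)
$grammar  = [38, 42, 25, 47, 30]; // discrete-point MCQ scores (0-50)

// Pearson's r: +1 = perfect agreement, 0 = no linear relationship
function pearson(array $x, array $y): float {
    $n = count($x);
    $mx = array_sum($x) / $n;
    $my = array_sum($y) / $n;
    $num = $sx = $sy = 0.0;
    for ($i = 0; $i < $n; $i++) {
        $dx = $x[$i] - $mx;
        $dy = $y[$i] - $my;
        $num += $dx * $dy;
        $sx  += $dx * $dx;
        $sy  += $dy * $dy;
    }
    return $num / sqrt($sx * $sy);
}

echo pearson($speaking, $grammar); // r close to 1 would justify MCQ placement
?>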

Where Does This Leave Discrete-Point MCQ Tests?

This is not entirely a critique of the discrete-point MCQ test format per se. These tests can be designed better, to assess a wider range of linguistic knowledge & skills. The problem in this case, which is not atypical either, is that the items within the test focused almost entirely on structural aspects of language, i.e. grammatical competence, with too little attention given to how language should be used from a functional perspective, i.e. communicative competence.

As mentioned above, effective MCQ tests require highly-skilled & experienced item writers & are resource-intensive & time-consuming to develop. Additionally, it’s even more challenging to develop MCQ items which assess many of the constituent knowledge & skills that make up communicative competencies. The question remains as to whether developing a discrete-point MCQ test that’s appropriate for EFL/ESL summer schools is a feasible goal. The first step is understanding where the challenges lie & developing item-writing strategies that meet them.

 

Learning Styles, Mindsets, and Adaptive Strategies

Do learning styles promote learning? Are they helpful for learners at the various stages/levels of the development of their understanding in their subject areas? Should learners use learning styles psychometric tests to determine how they view their study habits and how they approach studying? In this article, I argue that far from being helpful, the fixed mindset that learning styles promote acts to hinder learners’ cognitive and metacognitive development and can be counter-productive in the longer term. I describe how learning styles encourage learners to use the same study strategies regardless of context, as personal rules of thumb, and how this encourages learners to ossify their study habits rather than allowing them to develop and grow.

I argue that encouraging learners to think of their preferences as strategies that they adapt according to their current knowledge, skills, and abilities in a particular domain/topic will put them on a growth trajectory, where they see themselves moving from novice learner to expert learner as they learn to think and study in new ways.

What are learning styles?

The basic idea of learning styles is that different learners have intrinsic personality traits that predispose them to particular media, modes, and strategies for learning. The learning styles hypothesis claims that if concepts and subject matter are presented according to a learner’s preferred media, modes, and strategies, learners will learn more effectively and efficiently, a concept that learning styles proponents call meshing. Much has been written and debated about the learning styles hypothesis and there have been at least two major meta-studies which outline the research evidence available for their validity and reliability (Hayes, 2005; Pashler, McDaniel, Rohrer, & Bjork, 2008).

Rather than discuss the validity and reliability aspect, I propose that learning styles are not intrinsic personality traits but strategic adaptations for learning concepts and subject matter, which learners use according to how experienced and knowledgeable they are in a particular domain. In this sense, the conventional assumptions about what learning styles are and how they work fall under the fundamental attribution error: a form of cognitive bias whereby we interpret someone’s actions as intrinsic personality traits and disregard the situation and context in which they are acting and to which they are responding.

So, rather than being psychometric tests which diagnose our intrinsic personality traits, learning styles preferences can be better understood as indicators of our levels of cognitive development within particular domains of knowledge, i.e. where we are on the spectrum between novice and expert. They may be useful for adapting our learning strategies in appropriate ways. For example, rather than learners thinking of themselves as sequential or global thinkers, they should consider their current level of knowledge and understanding and which strategies will help them best. Novice learners should use a sequential strategy to learn the basic concepts, with related concepts presented close together (in time and/or space) and with authentic examples (observational learning) and/or authentic experiences (experiential learning) that they can relate to personal subjective experience. More experienced learners should take a more global approach and make more abstract generalisations in order to situate and connect the concepts they have already learned into a coherent framework.

A Hypothetical Example: Novice vs. Expert Musicians

Let’s consider a hypothetical example scenario. When a novice musician is presented with the task of learning a new song, she will normally pick up her instrument and read the music notation on the page, playing the notes sequentially and listening to how they sound in terms of harmony, rhythm, and melody. A novice will need to play through the song a great number of times in order to develop her knowledge and understanding of it, hearing how the harmony, rhythm, and melody fit together and complement each other, and hearing any musical devices that create and release tension, i.e. what makes songs interesting and catchy, before the song “sticks” and she can perform it well without the aid of sheet music. A novice may or may not know how or why the song “works” and typically arrives at a superficial understanding of it, i.e. “It just sounds good.” This is a perfectly legitimate and appropriate strategy, considering the level of development of her knowledge and experience of music and what she is capable of understanding.

In contrast, an expert musician will read over the whole song, usually without picking up her instrument. She will analyse any harmonic progressions and rhythmic patterns, divide and group the song into sections, e.g. verses and choruses, or head and bridge, and will immediately draw upon her experience and knowledge to relate it to other similar songs, structures, and musical devices used to create tension and resolution. It takes considerably less time for an expert to learn to perform a song well than a novice because she is able to quickly and effortlessly situate the song in conceptual and contextual frameworks. While it may take a novice musician one or two weeks to learn a new song, an expert can do it in as little as half an hour.

Drawing from this example and according to Bransford et al. (2000), we can say that experts differ from novices in that:

  • Experts notice features and meaningful patterns of information that are not noticed by novices.
  • Experts have acquired a great deal of content knowledge that is organized in ways that reflect a deep understanding of their subject matter.
  • Experts’ knowledge cannot be reduced to sets of isolated facts or propositions but, instead, reflects contexts of applicability: that is, the knowledge is “conditionalised” on a set of circumstances.
  • Experts are able to flexibly retrieve important aspects of their knowledge with little attentional effort.
  • Though experts know their disciplines thoroughly, this does not guarantee that they are able to teach others.
  • Experts have varying levels of flexibility in their approach to new situations. (Bransford, Brown, Cocking, Donovan, & Pellegrino, 2000)

A Strategic, Adaptive, Growth Mindset

So we see that novice and expert musicians use very different strategies to learn new songs, according to and dependent on their knowledge and experience. A novice does not have the necessary knowledge, skills, and abilities (KSAs) to examine, deconstruct, and understand songs in the same way that an expert does, so she tends to use strategies that are more hands-on, experiential, experimental, and sequential. She tends to learn by rote, repetition, and memorisation, and to follow rules and procedures that she can understand, given the KSAs she has developed so far. With time, practice, experience, guidance, and reflection she will be able to develop her own coherent conceptual and contextual frameworks for learning and understanding songs. A well-guided novice understands that, with effort and perseverance, she will be able to learn new songs in different and more efficient ways and understand them more deeply. In effect, what we are describing is a “growth mindset,” i.e. learning styles are actually adaptive strategies that are developmental and modifiable (Chiu, Hong, & Dweck, 1997; Dweck, 2010; Hong, Chiu, Dweck, Lin, & Wan, 1999).

The Learning Styles Fixed Mindset

Now let us contrast this developmental, growth mindset view with what the various learning styles schemas propose. In his review, Hayes (2005) identified 71 different schemas of learning styles, but for the purposes of this article I am going to focus on one of the more popular schemas in use in higher education, the Felder-Silverman Inventory of Learning Styles (Felder & Silverman, 1988; Felder & Spurlin, 2005), whose authors provide a web-form questionnaire for learners to self-assess their own learning styles according to the schema (Soloman & Felder, 1991a) and offer study strategies advice to learners (Soloman & Felder, 1991b).

Learners complete the Felder-Silverman psychometric questionnaire and are automatically assessed and categorised into balanced (1 – 3), moderate (5 – 7), and strong (9 – 11) learning styles preferences on four scales running from 11 to 1 to 11, like this:

Active 11–10–9–8–7–6–5–4–3–2–1 | 1–2–3–4–5–6–7–8–9–10–11 Reflective

Sensing 11–10–9–8–7–6–5–4–3–2–1 | 1–2–3–4–5–6–7–8–9–10–11 Intuitive

Visual 11–10–9–8–7–6–5–4–3–2–1 | 1–2–3–4–5–6–7–8–9–10–11 Verbal

Sequential 11–10–9–8–7–6–5–4–3–2–1 | 1–2–3–4–5–6–7–8–9–10–11 Global

Felder et al. (1988; 2005) claim that these preferences are consistent for each learner across domains and disciplines. When learners complete the Inventory of Learning Styles questionnaire, it automatically tells them that the strategies they declare a preference for are learning styles, i.e. intrinsic personality traits, to which they should adapt their study habits. They are then referred to a document recommending study strategies that best accommodate their learning styles preferences. In this sense, the Felder-Silverman schema, like many other learning styles schemas, promotes a “fixed mindset,” i.e. that learning styles are fixed personality traits and not developmental or modifiable (Chiu et al., 1997; Dweck, 2010; Hong et al., 1999).

A Fixed vs. Growth Mindset

What’s wrong with a fixed mindset? It is tempting for musicians and practitioners in any field to view themselves as intrinsically capable or talented. However, Dweck (2010) informs us that for the purposes of learning and development:

“Individuals with a fixed mindset believe that their intelligence is simply an inborn trait – they have a certain amount, and that’s that.” “…when students view intelligence as fixed, they tend to value looking smart above all else. They may sacrifice important opportunities to learn – even those that are important to their future academic success – if those opportunities require them to risk performing poorly or admitting deficiencies.” (Dweck, 2010)

So, far from helping learners to develop the necessary knowledge, skills, and abilities (KSAs) through practice and perseverance, the fixed mindset, which I propose is at the foundation of learning styles, actively discourages learners who perceive themselves as less capable and talented, because their KSAs are not yet as well-developed as their peers’; and it encourages learners who perceive themselves as more capable and talented to hide their shortcomings and present a wall of (insecure) perfection to their peers. Have you ever noticed how some people feel intimidated and reticent when working with peers whom they perceive to be more talented and capable than themselves?

Redefining learning styles in terms of a strategic, adaptive, growth mindset

In order to identify the differences between the fixed mindset promoted by learning styles and a strategic, adaptive growth mindset, let us take a closer look at the Felder-Silverman Inventory of Learning Styles schema (although this could equally apply to many of the other learning styles schemas). When presented with a learning activity or opportunity, rather than recalling her learning styles diagnosis from the psychometric test (fixed mindset), I argue that it would be advantageous for a learner to ask herself how much she already knows, what experience she has, where she stands on the novice – expert learner scale, and which strategies are likely to help her most (growth mindset):

Active vs. Reflective: Learning Styles Fixed Mindset

According to the Felder-Silverman Inventory of Learning Styles:

  • Active learners tend to retain and understand information best by doing something active with it – discussing or applying it or explaining it to others. “Let’s try it out and see how it works” is an active learner’s phrase. Active learners tend to like group work more than reflective learners do.
  • Reflective learners prefer to think about new information quietly first. “Let’s think it through first” is the reflective learner’s response. Reflective learners prefer working alone.

Strategic, Adaptive, Growth Mindset

Novice learners’ developmental need: To gain basic, first-hand, experiential, implicit, procedural KSAs.

Experienced learners’ developmental need: To situate and connect already learned KSAs and relate them to each other.

  • Novice learners: Make up the shortfall in basic KSAs in some way. A good strategy is to get some hands-on experience and active engagement with the material.
  • Experienced learners: Describe and analyse the context/situation we find ourselves in and reflect on how our KSAs apply/relate to it.

Sensing vs. Intuitive: Learning Styles Fixed Mindset

According to the Felder-Silverman Inventory of Learning Styles:

  • Sensing learners tend to like learning facts. Sensors often like solving problems by well-established methods and dislike complications and surprises; they are more likely than intuitors to resent being tested on material that has not been explicitly covered in class. Sensors tend to be patient with details, good at memorizing facts and doing hands-on (laboratory) work, and more practical and careful than intuitors, and they don’t like courses that have no apparent connection to the real world.
  • Intuitive learners often prefer discovering possibilities and relationships. Intuitors like innovation and dislike repetition; they may be better at grasping new concepts and are often more comfortable than sensors with abstractions and mathematical formulations. Intuitors tend to work faster and to be more innovative than sensors, and they don’t like “plug-and-chug” courses that involve a lot of memorization and routine calculations.

Strategic, Adaptive, Growth Mindset

Novice learners’ developmental need: To learn the parts that make up the whole. To deepen understanding of KSAs to make them more explicit, i.e. make KSAs available to consciousness in order to develop more abstract and general hypotheses about them.

Experienced learners’ developmental need: To develop their conceptual frameworks further, locate gaps in KSAs, and situate new KSAs within their frameworks.

  • Novice learners: Learn the basics, follow linear procedures, memorise information and methods, etc. Hands-on experience helps to put abstract concepts into context and is useful for testing/exploring boundary conditions, i.e. when methods, procedures, etc. start to fail/become inappropriate.
  • Experienced learners: Can think more abstractly, explore bending the rules, test boundary conditions (i.e. where/when they break down/fail), and find new ways to apply the knowledge.

Visual vs. Verbal: Learning Styles Fixed Mindset

According to the Felder-Silverman Inventory of Learning Styles:

  • Visual learners remember best what they see – pictures, diagrams, flow charts, time lines, films, and demonstrations.
  • Verbal learners get more out of words – written and spoken explanations.

Strategic, Adaptive, Growth Mindset

Novice learners’ developmental need: To develop an awareness of “the big picture,” i.e. that there are frameworks they must come to understand, with gaps/spaces in which to situate new KSAs.

Experienced learners’ developmental need: To deepen understanding and develop more abstract concepts that they can generalise and use in novel situations and other domains.

  • Novice learners: Are helped by simplified, graphic overviews and illustrations of concepts and ideas; pictures, diagrams, flow charts, concept maps, time lines, narrative films, and demonstrations. They may need to see conceptual structures and frameworks in order to develop their understanding of them more fully.
  • Experienced learners: Already have overviews and can map out the subject area. They are ready to go into greater detail and depth and to reflect on the relationships and implications of the concepts.

Sequential vs. Global: Learning Styles Fixed Mindset

According to the Felder-Silverman Inventory of Learning Styles:

  • Sequential learners tend to gain understanding in linear steps, with each step following logically from the previous one, and tend to follow logical stepwise paths in finding solutions.
  • Global learners tend to learn in large jumps, absorbing material almost randomly without seeing connections, and then suddenly “getting it.” They may be able to solve complex problems quickly or put things together in novel ways once they have grasped the big picture, but they may have difficulty explaining how they did it.

Strategic, Adaptive, Growth Mindset

Novice learners’ developmental need: To situate, relate, and connect concepts to theories and frameworks, i.e. “the big picture,” and develop a deeper understanding.

Experienced learners’ developmental need: To sufficiently develop and build their KSAs into more abstract concepts so that they can easily transfer them across domains.

  • Novice learners: Need structured, guided learning in which they encounter related concepts close together (spatially and/or temporally) so as to emphasise their relationships/connections to each other. They need to understand some concepts before they can learn others in order to build a coherent picture of the subject/topic area.
  • Experienced learners: Can connect the dots, having constructed larger, more complex, more abstract concepts, and so can think more globally, taking the bigger picture and the complex relationships within it into account. Much of their basic thinking has become automatic and barely registers in their consciousness (working memory). They are also more able to transfer those more abstract concepts into novel domains and adapt them accordingly.

Summary

I have proposed an alternative interpretation of learning styles, rethinking them not as fixed, psychometric attributes and personality traits, but as adaptive, strategic responses to the challenges that learners frequently face when acquiring and developing KSAs. By understanding learners’ preferences as indicators of their current levels of cognitive and metacognitive development, somewhere between novice and expert, we can help learners to develop learning strategies, to situate themselves on trajectories of personal growth, and to identify and prioritise the specific areas and aspects where they need to develop their KSAs further. In other words, we can help them become balanced, self-aware, self-directed, strategic, adaptive, well-rounded learners.

References

Bransford, J. D., Brown, A. L., Cocking, R. R., Donovan, M. S., & Pellegrino, J. W. (Eds.). (2000). How People Learn: Brain, Mind, Experience, and School: Expanded Edition (2nd ed.). The National Academies Press. Retrieved from http://www.nap.edu/catalog/9853/how-people-learn-brain-mind-experience-and-school-expanded-edition

Chiu, C., Hong, Y., & Dweck, C. S. (1997). Lay dispositionism and implicit theories of personality. Journal of Personality and Social Psychology, 73(1), 19–30. http://doi.org/10.1037/0022-3514.73.1.19

Dweck, C. S. (2010). Even Geniuses Work Hard. Educational Leadership, 68(1), 16–20.

Felder, R. M., & Silverman, L. K. (1988). Learning and Teaching Styles in Engineering Education. Engineering Education, 78(7), 674–81.

Felder, R. M., & Spurlin, J. (2005). Applications, Reliability and Validity of the Index of Learning Styles. International Journal of Engineering Education, 21(1), 103–112.

Hayes, D. (2005). Learning Styles and Pedagogy in Post-16 Learning: A Systematic and Critical Review. Journal of Further and Higher Education, (3), 289.

Hong, Y., Chiu, C., Dweck, C. S., Lin, D. M.-S., & Wan, W. (1999). Implicit theories, attributions, and coping: A meaning system approach. Journal of Personality and Social Psychology, 77(3), 588–599. http://doi.org/10.1037/0022-3514.77.3.588

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9(3), 105–119.

Soloman, B., & Felder, R. M. (1991a). Index of Learning Styles Questionnaire [Institution Website]. Retrieved August 4, 2015, from https://www.engr.ncsu.edu/learningstyles/ilsweb.html

Soloman, B., & Felder, R. M. (1991b). Learning Styles and Strategies. North Carolina State University. Retrieved from http://www.cityvision.edu/courses/coursefiles/402/STYLES_AND_STRATEGIES.pdf

Adding bibliography metadata to your web pages

Open Access publishing is gaining popularity and, at the same time, increasing numbers of academics are uploading their papers to their personal websites, faculty pages, and blogs. This is great news for people who don’t have the luxury of an institution to pay for their access to the usually pay-walled research databases. Along with this positive development, I think providing bibliographic metadata on academic websites and blogs should become more of a priority. Metadata is what enables bibliography managers such as Mendeley and Zotero to quickly, easily, and accurately store, retrieve, and reference academic papers, which can save academics, science writers, journalists, and students hours of work for each paper or article they write. If academic websites and blogs provide the metadata to support bibliography managers, it’s that much easier for people to cite their research and provide links back to their websites, faculty pages, or blogs. However, unlike research databases, most websites, faculty pages, and blogs don’t provide this bibliographic metadata.

What is bibliographic metadata?

Metadata is intended for machines, rather than people, to read. Bibliographic metadata tells bibliography managers and search engines by whom, when, where, what, and how an article or paper was published, and makes it easier for people to find through title, author, subject, and keyword searches.

Can we embed it in a blog?

I use WordPress (this blog is built on my own customised version of WordPress) and it’s the most widely used and popular blogging software on the internet. There’s a large and diverse range of plugins and extensions available, but a search shows that while several provide metadata for search engine optimisation (SEO), few offer support for bibliography managers, and none of the ones I’ve found supports the minimum metadata required for academic citations. To find out how difficult or easy embedding metadata is, I tried an experiment on this blog: automatically generating as much relevant metadata as possible from standard WordPress format blog posts.

What does academic bibliography metadata look like?

Here’s an example metadata set for a published academic paper in an academic journal database (Applied Linguistics, Oxford Journals):

<!-- start of citation metadata -->
<meta content="/applij/4/1/23.atom" name="HW.identifier" />
<meta name="DC.Format" content="text/html" />
<meta name="DC.Language" content="en" />
<meta content="Analysis-by-Rhetoric: Reading the Text or the Reader's Own Projections? A Reply to Edelsky et al.1" name="DC.Title" />
<meta content="10.1093/applin/4.1.23" name="DC.Identifier" />
<meta content="1983-03-20" name="DC.Date" />
<meta content="Oxford University Press" name="DC.Publisher" />
<meta content="JIM CUMMINS" name="DC.Contributor" />
<meta content="MERRILL SWAIN" name="DC.Contributor" />
<meta content="Applied Linguistics" name="citation_journal_title" />
<meta content="Applied Linguistics" name="citation_journal_abbrev" />
<meta content="0142-6001" name="citation_issn" />
<meta content="1477-450X" name="citation_issn" />
<meta name="citation_author" content="JIM CUMMINS" />
<meta name="citation_author" content="MERRILL SWAIN" />
<meta content="Analysis-by-Rhetoric: Reading the Text or the Reader's Own Projections? A Reply to Edelsky et al.1" name="citation_title" />
<meta content="03/20/1983" name="citation_date" />
<meta content="4" name="citation_volume" />
<meta content="1" name="citation_issue" />
<meta content="23" name="citation_firstpage" />
<meta content="41" name="citation_lastpage" />
<meta content="4/1/23" name="citation_id" />
<meta content="4/1/23" name="citation_id_from_sass_path" />
<meta content="applij;4/1/23" name="citation_mjid" />
<meta content="10.1093/applin/4.1.23" name="citation_doi" />
<meta content="http://0-applij.oxfordjournals.org.aupac.lib.athabascau.ca/content/4/1/23.full.pdf" name="citation_pdf_url" />
<meta content="http://0-applij.oxfordjournals.org.aupac.lib.athabascau.ca/content/4/1/23" name="citation_public_url" />
<meta name="citation_section" content="Article" />
<!-- end of citation metadata -->

An APA Style (6th Edition) formatted citation from this metadata would look like this:

Cummins, J., & Swain, M. (1983). Analysis-by-Rhetoric: Reading the Text or the Reader’s Own Projections? A Reply to Edelsky et al.1. Applied Linguistics, 4(1), 23–41. http://doi.org/10.1093/applin/4.1.23

How can I add bibliographic metadata to my website or blog?

If you use WordPress, you’re in luck. I’ve made some modifications to my WordPress theme so that the appropriate bibliographic metadata is automatically added to the head section of each blog article.

Pre-requisites

  • A good FTP client. Filezilla is a good free and open source one. Netbeans and Dreamweaver have FTP clients built in. If you’ve never used an FTP client before, look up some beginner tutorials to learn the basics of editing remote server files.
  • FTP access and login credentials to the web server where your blog is hosted.
  • A good text editor, e.g. Notepad++, Notepadqq, Gedit, GNU Emacs, etc., or an HTML integrated development environment, e.g. Netbeans, Brackets, or Dreamweaver.

The metadata format for blogs is a little different from academic metadata, i.e. it uses the Dublin Core standard, but the principles are similar. Here’s what I did:

  • I chose an existing WordPress core theme, twentytwelve (but this should work with any theme), and created a child theme: I created a new directory at /wordpress/wp-content/themes/twentytwelve-child/. WordPress automatically replaces files in themes with any files provided in child theme directories.
  • I made a copy of the header.php file from /twentytwelve/ and pasted it at /wordpress/wp-content/themes/twentytwelve-child/header.php
  • In a text editor, I opened the new header.php file and added the following lines of code between the PHP tags at the top of the page. This retrieves the metadata from WordPress’ database:
// Initialise a container object for the metadata values (without this,
// assigning properties below fails on recent PHP versions)
$twentytwelve_data = new stdClass();
// Set post author display name
$post_tmp = get_post(get_the_ID());
$user_id = $post_tmp->post_author;
$author_name = get_the_author_meta('display_name', $user_id);
// Set more metadata values
$twentytwelve_data->blogname = get_bloginfo('name'); // The title of the blog
$twentytwelve_data->language = get_bloginfo('language'); // The language the blog is in
$twentytwelve_data->author = $author_name; // The article author's name
$twentytwelve_data->date = get_the_date(); // The article publish date
$twentytwelve_data->title = esc_attr(get_the_title()); // The article title, escaped for safe use in attributes
$twentytwelve_data->permalink = get_the_permalink(); // The permalink to the article
$twentytwelve_data->description = esc_attr(substr(strip_tags($post_tmp->post_content), 0, 1000)) . '...'; // 1st 1000 characters of the article as a description, escaped for safe use in attributes
  • After that, in the same header.php file, between the <head> </head> tags, I added the following lines of HTML and PHP code. This prints the metadata on the article page. Please note that metadata is not visible when you read the web page because it’s for machines, not people, to read. You can view it in the page source code (Ctrl+U in Firefox and Google Chrome):
<!-- start of citation metadata -->
<meta name="DC.Contributor" content="" />
<meta name="DC.Copyright" content="© <?php echo $twentytwelve_data->author; ?> <?php echo $twentytwelve_data->date; ?>" />
<meta name="DC.Coverage" content="World">
<meta name="DC.Creator" content="<?php echo $twentytwelve_data->author; ?>" />
<meta name="DC.Date" content="<?php echo $twentytwelve_data->date; ?>" />
<meta name="DC.Description" content="<?php echo $twentytwelve_data->description; ?>">
<meta name="DC.Format" content="text/html" />
<meta name="DC.Identifier" content="<?php echo $twentytwelve_data->title; ?>">
<meta name="DC.Language" content="<?php echo $twentytwelve_data->language; ?>" />
<meta name="DC.Publisher" content="<?php echo $twentytwelve_data->blogname; ?>" />
<meta name="DC.Rights" content="http://creativecommons.org/licenses/by-nc-sa/4.0/">
<meta name="DC.Source" content="<?php echo $twentytwelve_data->blogname; ?>">
<meta name="DC.Subject" content="<?php echo $twentytwelve_data->title; ?>">
<meta name="DC.Title" content="<?php echo $twentytwelve_data->title; ?>">
<meta name="DC.Type" content="Text">

<meta name="dcterms.contributor" content="" />
<meta name="dcterms.copyright" content="© <?php echo $twentytwelve_data->author; ?> <?php echo $twentytwelve_data->date; ?>" />
<meta name="dcterms.coverage" content="World" />
<meta name="dcterms.creator" content="<?php echo $twentytwelve_data->author; ?>" />
<meta name="dcterms.date" content="<?php echo $twentytwelve_data->date; ?>" />
<meta name="dcterms.description" content="<?php echo $twentytwelve_data->description; ?>">
<meta name="dcterms.format" content="text/html" />
<meta name="dcterms.identifier" content="<?php echo $twentytwelve_data->title; ?>">
<meta name="dcterms.language" content="<?php echo $twentytwelve_data->language; ?>" />
<meta name="dcterms.publisher" content="<?php echo $twentytwelve_data->blogname; ?>" />
<meta name="dcterms.rights" content="http://creativecommons.org/licenses/by-nc-sa/4.0/">
<meta name="dcterms.source" content="<?php echo $twentytwelve_data->permalink; ?>" />
<meta name="dcterms.subject" content="<?php echo $twentytwelve_data->title; ?>" />
<meta name="dcterms.title" content="<?php echo $twentytwelve_data->title; ?>" />
<meta name="dcterms.type" content="Text" />
<!-- end of citation metadata -->

Please note that I put the Dublin Core metadata twice in two slightly different formats for maximum compatibility with search engines and bibliography managers.

What about comprehensive academic bibliography metadata?

You’ll probably have noticed that the metadata I’ve included in my article pages, while sufficient for web page citations, doesn’t contain the same degree of detail as academic bibliography data (see the first metadata snippet above), e.g. journal titles, ISSNs, ISBNs, etc. As far as I know, there isn’t yet a way of storing that data in standard WordPress, so it more than likely needs a specialist plugin that lets authors enter it explicitly, to be stored and printed on the corresponding article pages. Would anyone like to develop one?
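
To make the idea concrete, here’s a minimal sketch of what such a plugin might look like. It’s an untested illustration, not a finished plugin: the custom field keys (journal_title, issn, etc.) are invented for the example, and authors would fill them in via WordPress’ Custom Fields panel:

<?php
/*
Plugin Name: Citation Metadata (sketch)
Description: Illustrative sketch only. Prints citation_* meta tags in the
page head from post custom fields. The field keys below are invented.
*/
add_action('wp_head', function () {
    if (!is_singular('post')) {
        return; // only print citation metadata on single article pages
    }
    $id = get_the_ID();
    echo "<!-- start of citation metadata -->\n";
    printf('<meta name="citation_title" content="%s" />' . "\n",
        esc_attr(get_the_title($id)));
    printf('<meta name="citation_author" content="%s" />' . "\n",
        esc_attr(get_the_author_meta('display_name', get_post($id)->post_author)));
    printf('<meta name="citation_date" content="%s" />' . "\n",
        esc_attr(get_the_date('Y-m-d', $id)));
    // Map meta tag names to the (invented) custom field keys an author
    // would fill in for each article.
    $fields = [
        'citation_journal_title' => 'journal_title',
        'citation_issn'          => 'issn',
        'citation_volume'        => 'volume',
        'citation_issue'         => 'issue',
        'citation_firstpage'     => 'firstpage',
        'citation_lastpage'      => 'lastpage',
        'citation_doi'           => 'doi',
    ];
    foreach ($fields as $meta => $key) {
        $value = get_post_meta($id, $key, true);
        if ($value !== '') { // only print fields the author has filled in
            printf('<meta name="%s" content="%s" />' . "\n", $meta, esc_attr($value));
        }
    }
    echo "<!-- end of citation metadata -->\n";
});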

Online Cognitive Apprenticeship

A Cognitive Apprenticeship Approach to Student and Faculty Online Learning and Teaching Development: Enculturing Novices into Online Practitioner Environments and Cultures in Higher Education


In a previous article, How prepared are learners for elearning?, I wrote about the difficulties of identifying whether learners are “ready” to study online, with some suggestions for possible ways to identify the necessary knowledge, skills, and abilities for successful online learning.

I believe it would be unethical to identify or even diagnose such issues, thereby rejecting some learners who may otherwise be capable of thriving in online learning environments, without exploring some potential ways to address those issues. I’ve just created a small subsection on this blog that outlines a proposal for higher and further education oriented institutions and organisations that may help both learners and teaching practitioners involved in online communities of inquiry. It covers the following areas:

  1. Online Cognitive Apprenticeship Model
  2. Programme Aims and Objectives
  3. Organisational Structure and Context
  4. Programme Participants
  5. The Cognitive Apprenticeship Model
  6. Example Activities/Tasks
  7. Programme Delivery and Integration
  8. Evaluation and Assessment
  9. Participant Support: Necessary and Sufficient Conditions for Psychological Change
  10. The Programme as an Agent of Change
  11. References

Keywords: situated cognition, cognitive apprenticeship, meta-cognitive skills, enculturation, practitioner culture, legitimate peripheral participation, authentic tasks, reflective practice, online academic practice

Read the full proposal here: Online Cognitive Apprenticeship Model

PDF Version

There’s also a PDF version of the entire proposal on my Athabasca University Academia.edu account.