Learning styles and pedagogy in post-16 learning (part 3)

Features of studies that Dunn and Dunn cite as demonstrating reliability include:
- controls on data collection through tight administration of the model, using authorised centres and certified learning styles trainers
- random selection of students
- sample sizes that generate statistically reliable scores.

Nevertheless, the random selection of students in studies reviewed for this report does not apply universally: some studies select an experimental sub-group of people with strong preferences, others use whole classes or year groups and some do not explain their selection criteria. Where such information is provided, we have included sample sizes in our evaluations.

Validity

Proponents of the model claim high face, construct and predictive validity for elements within the model and for the model as a whole. For example, the lack of a correlation between LSI type and measures of intelligence is cited as 'support for its [the LSI's] construct validity' (Sinatra, Primavera and Waked 1986, 1243). Further support is offered by De Bello, who cited a 2-year study of different learning style instruments at Ohio State University and reported that the Dunn, Dunn and Price LSI had 'impressive reliability, face and construct validity' (Kirby 1979, cited by De Bello 1990, 206). From 'award-winning, experimental and correlational research with the LSI conducted at more than 50 universities', De Bello (1990, 206) went on to claim 'extremely high predictive validity'. De Bello's paper, however, does not contain any statistics relating to reliability and validity and is simply a description of different learning styles instruments. In a similar vein, Hlawaty and Honigsfeld (2002) cited De Bello (1990), Curry (1987) and Tendy and Geiser (1998/9) to support their claim that the LSI has 'good or better validity and reliability than nine other instruments'.

In a study of 1087 full-time first-year undergraduates, Nelson et al. (1993) tested the impact of the PEPS on achievement and retention. They claimed that working with preferences identified through the PEPS showed significant percentage differences of achievement and retention between control and experimental groups, with academic achievement improving the longer that students studied according to their preferences.

External evaluation

General comments

Apart from the many studies that the Dunns cite as showing validity and reliability, there appears to be little independent evaluation of their model. A further difficulty is created by Rita Dunn's rejection of any evaluations that are 'third party' and therefore carried out by people 'uncertified and untrained in the model' (Dunn 2003c, 37).

Confirmation of the model's validity was offered by Curry (1987), who evaluated the LSI and PEPS against nine other instruments within a 'family of models measuring instructional preferences'. However, Curry did not give details of the studies from which she drew her data or her criteria for selecting particular studies as offering 'good' support for validity. In addition, her report made clear that, despite judging reliability and validity to be good (see below), Curry regarded instructional preferences as less important in improving learning than other factors such as strategies or cognitive styles. In addition, data presented by Curry as evidence of good validity only confirmed predictive validity and not construct or face validity.
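As an aside on the construct-validity argument quoted above (the reported lack of correlation between LSI type and measures of intelligence), the check being invoked is a simple discriminant one: a style scale should correlate weakly, if at all, with an ability measure. The sketch below illustrates that kind of check with invented scores; it uses no data from Sinatra, Primavera and Waked (1986) or from the LSI itself.

```python
# Minimal sketch of a discriminant check: a style scale should correlate weakly, if at all,
# with an ability measure. All scores below are invented for illustration only.
import statistics

lsi_subscale = [52, 61, 47, 58, 63, 44, 55, 49, 60, 53]        # hypothetical style scores
ability_test = [101, 95, 110, 99, 104, 97, 108, 102, 96, 105]  # hypothetical ability scores

r = statistics.correlation(lsi_subscale, ability_test)  # Pearson r (Python 3.10+)
print(f"r = {r:.2f}")  # a value near zero is the kind of result cited as construct-validity support
```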
When we examined the Curry paper, we found that being better than nine very poor instruments is not the same as being sufficiently reliable and valid for the purpose of making individual assessments. In her evaluation, Curry appeared to rely more on quantity, namely that there should be at least 20 supporting studies, rather than quality.

There has been criticism about the choice of individual elements in the LSI. For example: 'there is little information regarding the reasons for the choice of the 18 elements, nor is there any explanation given of possible interactions of the elements. The greatest problem … is its lack of attention to the learning process' (Grigorenko and Sternberg 1995, 219). Hyman and Rosoff (1984, 38) argue that:

The Learning Styles Based Education paradigm calls for the teacher to focus on the student's learning style when deciding how to teach. This call is misleading … Teaching is not a dyadic relationship between teacher and student … [but] a triadic relationship made up of three critical and constant elements: teacher, student and subject matter.

Some reviewers dispute both validity and reliability in the model. For example, reviews by Knapp (1994) and Shwery (1994) for the 1994 Mental Measurements Yearbook incorporated conclusions from two other reviews (Hughes 1992 and Westman 1992). Knapp (1994, 461) argued that 'the LSI has no redeeming values' and that 'the inventory had a number of weaknesses'. He concluded: 'I am no expert on learning styles, but I agree with Hughes [one of the reviewers] that this instrument is a psychometric disaster.' Shwery (1994) also questioned aspects of the LSI: 'The instrument is still plagued by issues related to its construct validity and the lack of an a priori theoretical paradigm for its development.'

Reliability

Curry (1987) judged the internal reliability of the LSI and PEPS to be good, with an average of 0.63 for the LSI and 0.66 for the PEPS. Yet she did not indicate what she regarded as 'good' coefficients and these are normally accepted to be 0.7 or above for a sub-scale. LaMothe et al. (1991) carried out an independent study of the internal consistency reliability of the PEPS with 470 nursing students. They found that only 11 of the 20 scales had alpha coefficients above 0.70, with the environmental variables being the most reliable and the sociological variables the least reliable.

Knapp (1994) expressed concerns both about the approach to reliability in the design of the LSI and the reporting of reliability data: in particular, he criticised repeating questions in the LSI to improve its reliability. He added:

No items are, in fact, repeated word for word. They are simply reworded … Such items contribute to a consistency check, and are not really concerned with reliability at all … Included in the directions on the separate answer sheet … is the incredible sentence 'Some of the questions are repeated to help make the inventory more reliable'. If that is the only way the authors could think of to improve the reliability of the inventory, they are in real trouble!

There are also concerns about the Dunns' claims for internal consistency. For example, Shwery (1994) says:

Scant evidence of reliability for scores from the LSI is provided in the manual. The authors report [that] 'research in 1988 indicated that 95 percent' (p.30) of the 22 areas … provided internal consistency estimates of 0.60 or greater. The actual range is 0.55–0.88.
Internal consistency of a number of areas … was low. As such, the link between the areas and justifiably making decisions about instruction in these areas is questionable.

Murray-Harvey (1994) reported that the reliability of 'the majority' of the PEPS elements was acceptable. However, she considered 'tactile modality' and 'learning in several ways' to 'show poor internal consistency' (1994, 378). In order to obtain retest measures, she administered the PEPS to 251 students in 1991 and again in 1992. Environmental preferences were found to be the most stable, with coefficients of between 0.48 ('design') and 0.64 ('temperature'), while sociological and emotional preferences were less so (0.30 for 'persistence' and 0.59 for 'responsibility'), as might be expected from Rita Dunn's (2001a) characterisation of these areas as more open to change. However, the physiological traits, which are supposed to be relatively stable, ranged from 0.31 for a specific 'late morning' preference to 0.60 for a general 'time of day' preference (Price and Dunn 1997). Overall, 13 out of 20 variables exhibited poor test–retest reliability scores of below 0.51.

Two separate reviews of the PEPS by Kaiser (1998) and Thaddeus (1998) for the Mental Measurements Yearbook highlighted concerns about the Dunns' interpretations of reliability. Both reviews noted the reliability coefficients of less than 0.60 for 'motivation', 'authority-oriented learning', 'learning in several ways', 'tactile learning' and 'kinaesthetic learning'. Thaddeus also noted that some data was missing, such as the characteristics of the norm group to whom the test was administered.

Validity

Criticism was directed at a section entitled 'reliability and validity' in the LSI manual (Price and Dunn 1997, 10). Knapp (1994) argued that 'there is actually no mention of validity, much less any validity data' and Shwery (1994) noted that 'the reader is referred to other studies to substantiate this claim'. These are the dissertation studies which supporters cite to 'provide evidence of predictive validity' (De Bello 1990, 206) and which underpin the meta-analyses (Dunn et al. 1995).

There were also problems in obtaining any information about validity in the PEPS (Kaiser 1998; Thaddeus 1998) and a problem with the extensive lists of studies provided by the Dunns, namely that 'the authors expect that the validity information for the instrument can be gleaned through a specific examination of these studies' (Kaiser 1998). Kaiser also makes the point that 'just listing the studies in which the PEPS was used does not add to its psychometric properties'.

Reviews of the PEPS also raised problems about missing data and the quality of Dunn et al.'s citations, referencing and interpretations of statistics. Thaddeus (1998) concluded that, once the underlying theory was developed, the PEPS would be a more valuable instrument and provide a direction for future research to establish its reliability and validity. Likewise, Kaiser (1998) concluded that 'the PEPS is not recommended for use until more evidence about its validity and reliability is obtained'.

Note: page numbers are not available for online Buros reports from the Mental Measurements Yearbooks; this applies to Knapp (1994), Shwery (1994), Kaiser (1998) and Thaddeus (1998).
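The reliability benchmarks used above can be made concrete. The sketch below shows, with made-up data (not LSI or PEPS responses), how the two coefficients discussed in this section are typically computed: Cronbach's alpha for the internal consistency of a sub-scale, judged against the 0.7 benchmark, and a test-retest correlation between two administrations, with values below about 0.5 read here as poor.

```python
# Illustrative reliability arithmetic with made-up data (not LSI/PEPS responses).
import numpy as np

# Cronbach's alpha for one sub-scale: rows are respondents, columns are items.
items = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
])
k = items.shape[1]
sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)        # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - sum_item_var / total_var)   # compare against the 0.70 benchmark

# Test-retest reliability: Pearson r between two administrations of the same scale.
time1 = np.array([55, 62, 48, 70, 51, 66, 58], dtype=float)
time2 = np.array([57, 60, 52, 65, 49, 69, 61], dtype=float)
retest_r = np.corrcoef(time1, time2)[0, 1]       # below about 0.5 is treated as poor above

print(f"alpha = {alpha:.2f}, test-retest r = {retest_r:.2f}")
```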
Implications for pedagogy

The model and its instruments are intended to be a diagnostic alternative to what supporters of the Dunns' model call 'soft evaluation' by teachers (presumably informal observation, although this is not made clear), which they argue is often inaccurate. When used in conjunction with teachers' own insight and experience, the model is claimed to be a reliable and valid measure for matching instruction and environmental conditions to high preferences shown by the inventory, especially when students have to learn new and difficult material. Rita Dunn (2003c, 181) claimed that:

students whose learning styles were being accommodated could be expected to achieve 75% of a standard deviation higher than students who had not had their learning styles accommodated. Thus, matching students' learning style preferences was beneficial to their academic achievement.

The main purpose of the model is to improve students' attainment through matching instruction, environment and resources to students' high preferences. Nelson et al. (1993) argued that a 'matching' approach based on preferences is more effective than conventional study skills and support programmes which are remedial. Supporters of the model claim a substantial body of evidence for academic success resulting from changing teaching approaches. We summarise the key claims here.
- Most people have learning style preferences.
- Individuals' learning style preferences differ significantly from each other.
- Individual instructional preferences exist and the impact of accommodating these preferences can be measured reliably and validly.
- The stronger the preference, the more important it is to provide compatible instructional strategies.
- Accommodating individual learning style preferences (through complementary instructional and counselling interventions, environmental design and resources) results in increased academic achievement and improved student attitudes toward learning.
- Students whose strong preferences are matched attain statistically higher scores in attainment and attitude than students with mismatched treatments.
- Most teachers can learn to use a diagnosis of learning style preferences as the cornerstone of their instruction.
- Most students can learn to capitalise on their learning style strengths when concentrating on new or difficult academic material.
- The less academically successful the individual, the more important it is to accommodate learning style preferences.
- There are characteristic patterns of preference in special groups, particularly the 'gifted' and 'low achievers'.

Claims made for patterns of preference and abilities in gifted students are summarised in Table 5 below, together with references to studies that claim these patterns.

Table 5: Studies of the learning-style preferences of able students
Preference: Morning; Learning alone; Self-motivated; Tactile modality; Learning alone; Persistent; Authority figure present; Parent/teacher-motivated; Mobility
Measure of ability: Higher performance; Gifted; Gifted; Gifted
Source: Callan 1999; Pyryt, Sandals and Begorya 1998; Griggs 1984; Hlawaty 2002

However, the notion of 'gifted' varies between the three reports that use it to measure ability, as do the outcomes that emerge from the preferences.
Pyryt, Sandals and Begorya (1998, 76) advised caution about these patterns since, although differences were found between gifted students, average ones and students with learning difficulties or disabilities, 'the magnitude of group differences is small'. Burns, Johnson and Gable (1998) found that while statistically significant differences were found between gifted and average students, the elements of the LSI associated with giftedness were different in each study. They concluded (1998, 280) that 'it is difficult to accept the idea that the population of academically able students share common learning styles preferences'.

We have attempted to draw from the literature any instances in which the preferences tend to 'cluster', but the reporting of data has not enabled us to ascertain the strength of preferences that might interact with each other. Where scores are reported, their interpretation appears rather loose. For example, Gadt-Johnson and Price (2000) reported that tactile learners in their large sample of over 25,000 children in grades 5–12 have associated preferences for the 'kinaesthetic', 'auditory', 'intake', 'learn in several ways', 'less conforming', 'teacher motivated' and 'parent motivated' elements. It is only later in the reporting of this research that it becomes clear that none of these 'associated preferences' was represented by a score of more than 60 or less than 40; that is, they were not high or low preferences as defined by the model.

Supporters of the model offer detailed prescriptions for teaching various types of student. For example, they report that 'globals' appear to need more encouragement and short, varied tasks (because of their lower motivation), and that when they are faced with new and difficult information, it should be interesting, related to their lives and allow them to become actively involved. Advice covers individuals and groups, classroom management, lesson pace, activity, kinaesthetics and sequencing of material. Advice is related directly to different types of learner; for example, the idea that underachievers, 'at risk' and dropout students are almost exclusively tactual/kinaesthetic learners (see eg Dunn 1990c). Supporters also offer advice for other preferences. For example, students who learn better with sound should have music without lyrics as opposed to melodies with words, while baroque appears to cause better responsiveness than rock, and students who prefer light should have soft, not bright, light. The empirical basis for a distinction between the effects of different musical genres and quality of lighting is not given.

There is also detailed advice for developing flexible and attractive environmental conditions; for example:

Redesign conventional classrooms with cardboard boxes, bookshelves, and other useable items placed perpendicular to the walls to make quiet, well-lit areas and, simultaneously, sections for controlled interaction and soft lighting. Permit students to work in chairs, on carpeting, on beanbag chairs, or on cushions, or seated against the wall, as long as they pay attention and perform better than they have previously. Turn the lights off and read in natural day light with underachievers or whenever the class becomes restless. (Dunn 1990b, 229)

Such advice derives from empirical evidence from studies cited by Dunn as supporting her model (see Dunn and Griggs 2003).
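The cut-offs referred to above (standard scores above 60 counting as high preferences, below 40 as low) can be expressed as a small piece of logic. The sketch below is illustrative only; it does not reproduce the scoring of the commercial LSI or PEPS instruments, and the element names and scores are hypothetical.

```python
# Minimal sketch of the high/low preference cut-offs described above (not the commercial
# LSI/PEPS scoring). Element names and scores are hypothetical.
def classify_preference(standard_score: float) -> str:
    if standard_score > 60:
        return "high preference"
    if standard_score < 40:
        return "low preference"
    return "no marked preference"

scores = {"tactile": 58, "kinaesthetic": 63, "intake": 44, "learning alone": 37}
for element, score in scores.items():
    print(f"{element}: {classify_preference(score)}")
# On this reading, a score of 58 is not a high preference, which is the point made above
# about the 'associated preferences' reported by Gadt-Johnson and Price (2000).
```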
Several books offer advice through examples of how particular schools have transformed seating, decor, classroom planning and timetabling in order to respond to students' preferences as expressed through the LSI (see eg Dunn and Griggs 1988). These offer detailed 'before and after' vignettes of schools, their students, local communities and learning environments as well as 'The How-to Steps'. In addition, the Dunn, Klavas and Ingham (1990) Homework prescription software package is offered to provide 'a series of directions for studying and doing homework based on each individual's … scores' (Dunn and Stevenson 1997, 336) which, it is claimed, increases student achievement and reduces anxiety (Nelson et al. 1993; Lenehan et al. 1994). These studies, however, are open to the criticism that the observed benefits reflect a 'level of intervention' effect rather than a 'nature of intervention' effect, since all groups received 'traditional instruction' and the most successful group had 'homework prescriptions' as an additional element. This suggests that success may be attributed to the greatest quantity of input; the methodological problems of catalytic validity and the 'Hawthorne Effect' are also likely to play an important part.

Empirical evidence of pedagogical impact

Reporting on a meta-analysis of 36 experimental studies based on the LSI and PEPS with different groups of students, Dunn et al. (1995) claimed a mean effect size equivalent to a mean difference of 0.75 – described as 'in the medium to large range'. Of the 36 studies, only six examined the effect sizes of the Dunn and Dunn model as a whole, while the remaining 30 focused on one of the four sub-areas of the inventory (environmental, emotional, sociological, physiological). For example, of the two studies in the emotional sub-area, Napolitano (1986) focused exclusively on the 'need for structure' element, while White (1981) looked more broadly at 'selected elements of emotional learning style'. The largest mean effect size found relates to the 14 studies in the physiological sub-area (n=1656). Five studies which relate specifically to modality preference yield a mean effect size of about 1.4 and four studies on time-of-day preference average out to 0.9. In terms of analytic and global processing, a significant difference in test scores was found for students described as 'simultaneous processors' when they were matched with two kinds of 'global' instructional materials (Dunn et al. 1990).

A more recent and extensive meta-analysis was carried out at St John's University, New York, by Lovelace (2003). This included many of the earlier studies (from 1980 onwards) and the overall results were similar to those reported above. The mean weighted effect sizes for matching students' learning style preferences with complementary instruction were 0.87 for achievement (131 effect sizes) and 0.85 for attitude (37 effect sizes).

We certainly cannot dismiss all of the experimental studies which met the inclusion criteria used in these meta-analyses. However, we detect a general problem with the design of many of the empirical studies supporting the Dunn and Dunn learning styles model. According to the model, the extent to which particular elements should be tackled depends upon the scores of students within a particular learning group.
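The figures quoted in these meta-analyses (0.75, 0.87, 0.85) are standardised mean differences. As a brief aside before turning to the design problems in the underlying studies, the sketch below shows the usual arithmetic for a single-study effect size (Cohen's d with a pooled standard deviation) and for a weighted mean across studies; all numbers are invented and are not taken from Dunn et al. (1995) or Lovelace (2003).

```python
# Illustrative effect-size arithmetic with invented numbers (not the Dunn or Lovelace data).
import math

def cohens_d(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Standardised mean difference between two groups, using a pooled standard deviation."""
    pooled_sd = math.sqrt(((n_exp - 1) * sd_exp ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                          / (n_exp + n_ctrl - 2))
    return (mean_exp - mean_ctrl) / pooled_sd

# A 'mean weighted effect size' weights each study's d, for example by its sample size.
studies = [(0.9, 40), (0.6, 120), (1.1, 35)]   # (d, n) for three hypothetical studies
weighted_mean_d = sum(d * n for d, n in studies) / sum(n for _, n in studies)

print(f"single-study d = {cohens_d(75, 68, 10, 9, 30, 30):.2f}")
print(f"weighted mean d = {weighted_mean_d:.2f}")
```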
However, many of the dissertations that are the basis of the supporting research focus on individual elements in the model, and appear to have chosen that element in advance of testing the preferences of the experimental population, and sometimes include only students with strong preferences. In addition, the studies often test one preference and then combine results from single studies to claim overall validity.

The only study we have found that applies the Dunn and Dunn model in the UK was carried out by Klein et al. (2003a, 2003b); the intervention took place in two FE colleges, with another two acting as a control group. Teachers were trained to use the PEPS with 120 first-year and 139 second-year students taking an intermediate-level General National Vocational Qualification (GNVQ). The researchers claimed a positive impact on achievement and motivation, but withdrawal rates did not show a statistically significant difference between the intervention and the comparison group, at 52% and 49% respectively. In relation to the final GNVQ grade, just over 40% gained a 'pass' and 8% a 'merit' in the intervention group, while 60% gained a 'pass' and 8% a 'merit' in the comparison group. In initial and final basic skills tests, the intervention group's performance improved, but it was the comparison group's improvement that was statistically significant. However, attendance in the intervention group was significantly higher than in the comparison group, as were students' positive perceptions of the quality of their work. The report used data from observations and interviews with staff and students to show increased enjoyment, class control and motivation.

Our evaluation of this research raises questions about research design and conclusions. For example, the study did not control for a 'Hawthorne Effect', so it is unclear whether positive responses were due to novelty, the variety of aids and new teaching methods, or a more empathetic and flexible approach from teachers. Any intervention that offers an enthusiastic new approach and attention from researchers, in a context where there is little management interest and few resources for staff development, might have similar effects. Variables such as college culture, staffing and degree of management support were not controlled for, yet such factors are likely to affect the performance of the two groups. Caution is also needed in commending students' positive evaluations of their own work when their final grades remained poor.

Our review suggests that research into the impact of the model needs to take into account the very different cultures of colleges and the fact that teachers in further education deal with diverse classes, have very little control over important factors (such as time of day and environment), are frequently part-time and have been subjected to repeated changes in curricula, organisation and funding (see Coffield et al. 2004, Section 2).

Finally, as Klein et al. (2003a, 2003b) confirmed, the intervention did not raise achievement and retention rates. Indeed, the performance of the intervention group was poorer than that of the comparison group, suggesting the possibility that an intervention that focuses too much on process as opposed to subject knowledge and skills could militate against higher achievement. Withdrawal, attendance and achievement rates on many vocational courses in FE colleges are poor.
Perhaps the focus of attention should be on these more fundamental problems in further education, since they are highly unlikely to be ameliorated by the administration of a learning styles instrument.

Conclusions

A number of strengths in the Dunn and Dunn model emerge from this review. First, it offers a positive, inclusive affirmation of the learning potential of all students, based on a belief that anyone can benefit from education if their preferences are catered for. This view of learning, and particularly of individuals who have not succeeded in the education system, encourages teachers to ask themselves an insightful and critical question, namely: how can we teach our students if we do not know how they learn?

Second, the model encourages teachers to respect difference, instead of regarding students who fail to learn as 'stupid' or 'difficult'. In contrast to an educational culture in the UK that labels learners as either of 'low' or 'high' ability, the model encourages teachers to reject negative judgements about learners and to see them as able to learn in different ways, providing that the methods of teaching change. The approach encourages learners and teachers to believe that it does not matter how people learn as long as they do learn.

Third, the model has support among practitioners and encourages a range of teaching and assessment techniques, as well as flexibility and imagination in designing resources and in changing environmental conditions. It suggests to teachers that many of their teaching problems will diminish if they change their focus and begin to respond more sensitively to the different learning preferences of their students. The model pressurises teachers to re-examine their own learning and teaching styles and to consider the possibility that they are appropriate for a minority of students, but seriously inappropriate for a majority.

Fourth, the model encourages teachers and students to talk about learning and gives them a language (eg kinaesthetic) which may legitimise behaviour, such as moving about the room, that was previously stigmatised as disruptive.

Despite these strengths, our evaluation highlights serious concerns about the model, its application and the quality of the answers it purports to offer about how to improve learning. First, the model is based on the idea that preferences are relatively fixed and, in the case of some elements, constitutionally based. Our continuum of learning styles (see Figure 4) shows that other models are not based on fixed traits, but instead on approaches and strategies that are context-specific, fluid and amenable to change. Moreover, references to brain research, time-of-day and modality preferences in the Dunn and Dunn model are often at the level of popular assertion and not supported by scientific evidence.

Second, a view that preferences are fixed or typical of certain groups may lead to labelling and generalising in the literature that supports the model (eg Dunn 2003c). In addition, a belief that people should work with their strong preferences and avoid their weak ones suggests that learners work with a comforting profile of existing preferences matched to instruction. This is likely to lead to self-limiting behaviour and beliefs rather than openness to new styles and preferences. Although the model offers a language about learning, it is a restricted one.
Furthermore, despite claims for the benefits of 'matching', it is not clear whether matching is desirable in subjects where learners need to develop new or complex preferences or different types of learning style altogether. Supporters of the model make the general claim that working with preferences is necessary at the beginning of something new or difficult, but this is unlikely to be true of all subjects or levels. Nor does this assertion take account of a need to develop new preferences once one is familiar with a subject. A preoccupation with matching learning and teaching styles could also divert teachers from developing their own and students' subject skills. The amount of contact time between teachers and students is increasingly limited and the curricula of many post-16 qualifications in the UK system are becoming more prescriptive. Time and energy spent organising teaching and learning around preferences is likely to take time away from developing students' knowledge of different subjects. The individualisation of matching in the model could also detract from what learners have in common or discourage teachers from challenging learners to work differently and to remedy weaknesses. Although the model fits well with growing interest in individualisation in the UK system as 'good practice', our review of this issue in Coffield et al. (2004, Section 4) suggests that ideas about matching individual learning needs and styles tend to be treated simplistically by policy-makers, inspectors and practitioners.

Third, supporters claim that a self-report measure is 'objective'. We have to ask how far objective measurement is possible when many learners have limited self-awareness of their behaviour and attitudes in learning situations. This fact may help to explain why it is so difficult to devise reliable self-report instruments. A further difficulty is that a large number of the studies examined for this review evaluated only one preference in a test or short intervention. For this reason, there is a need for longitudinal evaluation (lasting for months rather than days or weeks) of the reliability and validity of students' preferences, both within and outside learning style interventions. Since supporters claim reliability and validity to promote the model's widespread use as scientifically robust, evaluation should be carried out by external, independent researchers who have no interest in promoting it.

There are also particular difficulties for non-specialists in evaluating this model. Until a number of studies have been read in the original, the nature of the sources which are repeatedly cited in long lists by the model's authors and supporters does not become apparent. Academic conventions of referencing mask this problem. For example, Collinson (2000) quotes at length one study by Shaughnessy (1998) to support claims for the LSI, but the original source is a rather glowing interview with Rita Dunn in a teachers' magazine. It is therefore important to evaluate critically the evidence used to make sweeping claims about transforming education.

Fourth, claims made for the model are excessive. In sum, the Dunn and Dunn model has the appearance and status of a total belief system, with the following claims being made:
- It is relevant to, and successful with, all age groups from children in kindergarten through middle school, secondary school, university or college and on to mature, professional adults.
- It is successful with students who have strong, moderate and mixed degrees of environmental preference.
- Using teaching strategies that are congruent with students' learning styles leads to statistically significant higher scores in academic attainment, attitudes to learning and behaviour.
- Higher scores in attainment, attitudes and behaviour have been achieved with students at all academic levels, from those with learning difficulties or disabilities through low-achieving, to average and gifted students.
- It has been successfully implemented in urban, suburban and rural schools; in public, private and combined schools.
- It is effective with all subject areas, from those taught in school to those taught in higher education; for example, allied health professions, anatomy, bacteriology, biology, business studies, education, engineering, health information management, law, legal writing, marketing, mathematics, music, nursing, physics, sonography and study skills.
- In higher education, 'most students will retain more knowledge … for a longer period of time … enjoy learning more … and college retention rates will increase' (Mangino and Griggs 2003, 185).
- It is supported by 'approximately 800 studies conducted by a) researchers at more than 120 institutions of higher education … b) practitioners throughout the United States … and c) The United States government' (Dunn 2003d, 269).

Fifth, the main author of the model and her supporters generalise about the learning of whole groups without supporting evidence. For example, Rita Dunn has argued recently that 'it is not the content that determines whether students master the curriculum; rather, it is how that content is taught' (2003d, 270; original emphasis). There are, however, numerous, interacting reasons why students fail to learn, and process is only one of them. Similarly, one of Dunn's successful higher-degree students claimed that 'Auditory learners remember three quarters of the information they hear by listening to a teacher, a tape or recording, or other students. Visual learners retain three quarters of the information they see' (Roberts 2003, 93; original emphasis). Such overblown claims only serve to give the research field of learning styles a bad name. It may, however, be argued that such assertions can and should be dismissed, but those who have become champions of the Dunn and Dunn model speak the language of conviction and certainty; for example, 'it is mandatory that educators provide global … and tactual and kinaesthetic resources' (Burke 2003, 102).

Sixth, supporters do not appear to consider the problem of catalytic validity, where the impact of an intervention is affected significantly by the enthusiasm of its implementers. In the light of these problems, independent evaluation is crucial in a UK context, where the DfES is showing an interest in the model as a way to improve teaching and learning. In the face of poor motivation and achievement in further education, there is no evidence that the model is either a desirable basis for learning or the best use of investment, teacher time, initial teacher education and professional development.

Finally, the model is promoted by its chief protagonist, Rita Dunn, as though it were incapable of being falsified. For example, she and her co-authors write: 'It is immoral and it should be illegal for certified teachers to negatively classify children who learn differently, instead of teaching them the way they learn' (Dunn et al. 1991).
It is apparently 'inconceivable … that communities, parents and the judiciary would permit schools to function conventionally and continue to damage global, tactual, kinaesthetic children who need Mobility (sic) and informal classroom environments to function effectively' (Dunn 2003d, 269; original emphasis). It is exactly this inability of Rita Dunn to conceive that other professionals have the right to think and act differently from the injunctions of the model that constitutes its most serious weakness. This anti-intellectual flaw makes the Dunn and Dunn model unlike any other evaluated in this review.

Table 6: Dunn and Dunn's model and instruments of learning styles

General
Strengths: A user-friendly model that includes motivational factors, social interaction, physiological and environmental elements.
Weaknesses: The model makes simplistic connections between physiological and psychological preferences and brain activity. It is a model of instructional preferences, not learning. It is unsophisticated in its adoption of ideas from other fields, eg modality preference, circadian rhythm, hemispheric dominance.

Design of the model
Strengths: High or low preferences for 22 different factors are identified by learners. Strong preferences form the basis for teachers to adopt specific techniques or make environmental changes to areas such as light, sound, design, time of day or mobility.
Weaknesses: Training courses and manuals simply list large numbers of studies where preferences are either prioritised or connected to others. Practitioners therefore have to take the theoretical support on trust.

Reliability
Strengths: Supporters make strong claims for reliability.
Weaknesses: Critics highlight major problems with the design and reliability of key instruments.

Validity
Strengths: Supporters make strong claims for validity.
Weaknesses: There have been external criticisms of evidence of validity.

Implications for pedagogy
Strengths: It is claimed that individual differences in preference can be discerned; it is possible to adapt environments and pedagogy to meet these preferences; the stronger the preference, the more effect an intervention will have; and the impact will be even greater if low-achieving learners' strong preferences are catered for.
Weaknesses: The implications for pedagogy are so forcefully expressed that no other options are considered. Labelling and generalising about types of student may lead to simplistic injunctions about 'best practice'.

Evidence of pedagogical impact
Strengths: The model has generated an extensive programme of international research. Isolation of individual elements in empirical studies allows for evaluation of the effects of those elements.
Weaknesses: Effect sizes of individual elements are conflated. There is a serious lack of independent evaluation of the LSI.

Overall assessment: Despite a large and evolving research programme, forceful claims made for impact are questionable because of limitations in many of the supporting studies and the lack of independent research on the model. Concerns raised in our review need to be addressed before further use is made of the model in the UK.

Key source: Dunn and Griggs 2003

Section 4 The cognitive structure family

Introduction

The group of theorists summarised in this section have been clustered because we consider that they have a shared view (implicitly or explicitly expressed) of learning styles as 'structural properties of the cognitive system itself' (Messick 1984, 60). They also, as Riding and Rayner (1998) note, concentrate on the interactions of cognitive controls and cognitive processes.
For this group, styles are not merely habits, with the changeability that this implies; rather, 'styles are more like generalised habits of thought, not simply the tendency towards specific acts … but rather the enduring structural basis for such behaviour' (Messick 1984, 61), and as such are not particularly susceptible to training. For this reason, many of these styles are very similar to measures of ability. For the theorists in this family, styles are linked to particular personality features, with the implication that cognitive styles are deeply embedded in personality structure.

Descriptions, origins and scope of the instruments

The theorists from this family who are mentioned in this overview are listed in Table 7 below. The learning styles in this family tend to be expressed as bipolar constructs. For many in the cognitive structure family, there is a strong intellectual influence from psychotherapy; for example, Kagan and Kogan (1970, 1276) paraphrase Klein (1958):

cognitive structures intervene between drives and environmental demands. It is because cognitive structures are conceived to have a steering and modulating function in respect to both drives and situational requirements that Klein has given them the designation of 'cognitive control principles'.

The importance of drives – Freud's pleasure/reality principle and Anna Freud's defence mechanisms – is particularly evident in the learning styles models developed by Holzman and Klein (1954), Hunt et al. (1978) and Gardner and Long (1962). The descriptors – 'constricted/flexible', 'need for structure' and 'tolerant/intolerant' – reveal the authors' engagement with issues of learning security and intellectual 'comfort zones'.

Table 7: Learning-styles instruments in the cognitive structure family
Author (date): Witkin (1962); Witkin (1971); Kagan (1963, 1966); Kagan (1967); Guilford (1967); Gardner et al. (1953, 1962); Pettigrew (1958); Holzman and Klein (1954); Hunt (1978); Hudson (1966); Broverman (1960)
Principal descriptors: field dependence-independence; analytic-descriptive/relational/inferential-categorical; impulsivity/reflexivity; focus/scan (focus: facts and examples; scan: principles and concepts); cognitive attitudes; equivalence range; tolerance for unrealistic experiences; broad/narrow; leveller/sharpener (constricted/flexible control); need for structure: conforming/dependent; convergent-divergent thinking; limits of learning, automisation
Instrument: Rod and Frame Test; Group Embedded Figures Test (GEFT); Conceptual Style Test (CST); Matching Familiar Figures Test; Free Sorting Test; Category Width Scale; Schematising Test; Paragraph Completion Method; Stroop Word Colour Inference Test

The most influential member of the cognitive structure group is Witkin, whose bipolar dimensions of field dependence/field independence have had considerable influence on the learning styles discipline, both in terms of the exploration of his own constructs and the reactions against them which have led to the development of other learning styles descriptors and instruments. The educational implications of field dependence/independence (FDI) have been explored mainly in the curriculum areas of second-language acquisition, mathematics, natural and social sciences (see Tinajero and Paramo 1998a for a review of this evidence), although its vogue as a purely learning styles instrument has arguably passed.
However, FDI remains an important concept in the understanding of individual differences in motor skills performance (Brady 1995) and musical discrimination (Ellis 1996).

Three tests are used to study FD and FI: the Rod and Frame Test, the Body Adjustment Test and the Group Embedded Figures Test. The Rod and Frame Test involves sitting the participant in a dark room. The participant can see a luminous rod in a luminous frame. The frame is tilted and the participant is asked to make the rod vertical. Some participants move the rod so that it is in alignment with the tilted frame; others succeed in making the rod vertical. The former participants take their cues from the environment (the surrounding field) and are described as 'field dependent'; the latter are uninfluenced by the surrounding field (the frame) and are described as 'field independent'. The Body Adjustment Test is similar to the Rod and Frame Test in that it also involves space orientation. The participant is seated in a tilted room and asked to sit upright. Again, field-dependent participants sit in alignment with the room, while field-independent participants sit upright, independent of the angle of the room. The Group Embedded Figures Test is a paper and pencil test. The participant is shown a geometric shape and is then shown a complex shape which contains the original shape 'hidden' somewhere. The field-independent person can quickly find the original shape because they are not influenced by the surrounding shapes; the opposite is true of the field-dependent person. The authors claim that results from the three tests are highly correlated with each other (Witkin and Goodenough 1981).

Davies (1993, 223) summarises the claims made by the authors for field dependence/independence: 'According to Witkin and Goodenough (1981), field independents are better than field dependents at tasks requiring the breaking up of an organised stimulus context into individual elements and/or the re-arranging of the individual elements to form a different organisation.'

Measurement of the instruments

Overall, there are two key issues in relation to the cognitive structure learning styles: the conflation of style with ability and the validity of the bipolar structure of many of the measures.

Style and ability

While he reports that measures of cognitive style appear to have test–retest reliability, Messick (1984, 59) considers that there is an 'unresolved question … the extent to which the empirical consistencies attributed to cognitive styles are instead a function of intellective abilities', since cognitive styles are assessed with what he calls 'ability-like measures'. In particular, he argues (1984, 63) that measurements of field independence and field dependence are too dependent on ability: 'by linking global style to low analytical performance, field dependence is essentially measured by default.' That this weakness of the cognitive structure family appears to be particularly true of Witkin is borne out by empirical studies: 'the embarrassing truth of the matter is that various investigators have found significant relations between the Witkin indexes, on the one hand, and measures of verbal, mathematical and spatial skills, on the other' (Kogan 1973, 166). Indeed, Federico and Landis, in their analysis of field dependence, category width and 22 other measures of cognitive characteristics, found (1984, 152) that 'all cognitive styles except reflection-impulsivity are significantly related to ability and/or aptitudes.
Field independence has more (ie 10) significant correlations [ranging from 0.15 to 0.34] with abilities and aptitudes than any other style'. Huang and Chao (2000) found that in a small study (n=60, mean age 17), students with learning disabilities were more likely to be field dependent than a matched group of 'average' students. Indeed, the construction of field dependence as a disability in itself is highlighted by Tinajero et al. (1993), who report on studies from the field of neuropsychology which attempt to link field dependence with cerebral injury, though the question as to which hemisphere is injured is an unresolved one.

The theorists in the cognitive structure family take great pains to differentiate between ability and style – 'Abilities concern level of skill – the more and less of performance – whereas cognitive styles give greater weight to the manner and form of cognition' (Kogan 1973, 244; original emphasis) – but we are forced to conclude that if the measures used to assess style are too closely linked to ability tasks, then we may have what Henry Fielding in Tom Jones memorably describes as 'a distinction without a difference'.

[…]

… Educational Psychology. He markets the Cognitive Styles Analysis (CSA) privately through Learning and Training Technology.

Definitions, description and scope

Riding and Rayner (1998, 7–8) define cognitive style as 'the way the individual person thinks' and as 'an individual's preferred and habitual approach to organising and representing information'. They define learning strategy as 'those processes which are … the demands of a learning activity'. To distinguish between cognitive style and learning strategy, Riding and Cheema (1991, 195–196) claim that: 'Strategies may vary from time to time, and may be learned and developed. Styles, by contrast, are static and are relatively in-built features of the individual.' Riding and Rayner (1998) do not define learning style, but group models of learning style in terms …

… FDI and overall achievement have indicated that field independent subjects perform better' (Tinajero and Paramo 1998a, 237). Tinajero and Paramo (1997, 1998b) are typical of later FDI advocates in that they willingly accept the interaction of field independence and achievement and focus their attention, in terms of implications for pedagogy, on ways of exploring field-dependent students' strategies in …
… this include Riding and Wigley's (1997) study of the relationship between cognitive style and personality in FE students; the study by Sadler-Smith, Allinson and Hayes (2000) of the relationship between the holist-analytic dimension of the CSA and the intuition-analysis dimension of Allinson and Hayes' Cognitive Style Index (CSI); and Sadler-Smith and Riding's (1999) use of cognitive style to predict learning …

… bomb-disposal experts and anti-terrorist operatives, make explicit the link between prosocial FD preferences and autonomous FI preferences in governing career choice, when other predisposing factors – in this instance, thrill-seeking behaviours – are taken into account. Davies' (1993) findings that FD subjects are more vulnerable to 'hindsight bias' – that is, the inability to imagine alternative outcomes …

… focus/scan (Kagan and Krathwohl 1967) and student appraisal of instructional effectiveness which were strong enough to support predictions, and concludes (1981, 620) that: 'Though research on learning styles and orientations are [sic] intriguing, there is scant evidence that these "cognitive styles" are strongly linked to instructor/course evaluations.'

Peer matching and mismatching research …

… suggest. As Riding and Cheema (1991) argue, similar dimensions or categories do appear in many other typologies. However, as things stand, our impression is that Riding has cast his net too wide and has not succeeded in arriving at a classification of learning styles that is consistent across tasks, consistent across levels of task difficulty and complexity, and independent of motivational and situational factors …

… He and his students and colleagues have carried out a large number of correlational and predictive studies focusing on learning outcomes, but it would be unwise to accept unreplicated findings in view of the problems of reliability indicated above. An instrument which is so inadequate in terms of test–retest reliability cannot be said to provide robust evidence for adopting particular strategies in post-16 learning.
