The Knowledge Integration Environment:
Helping Students Use The Internet Effectively
James D. Slotta, Marcia C. Linn
Graduate School of Education
University of California at Berkeley
To Appear In Jacobson, M. J. & Kozma, R. (Eds.), Learning the Sciences of the 21st Century. Hillsdale, NJ: Lawrence Erlbaum Associates.
RUNNING HEAD: Internet resources in the science classroom
How can we prepare students to make the Internet a lifelong learning resource, as many envision? Because of its scope, flexibility, and accessibility, the Internet has clear promise for science instruction. However, students require new kinds of support in making sense of information from the World Wide Web, as they must recognize and interpret "evidence" from a wide range of new sources. Much of the information on the Internet is written by amateur authors with the purpose of convincing readers, but might be interpreted by students as established fact. Additionally, many Web sites are long and complex, so relevant science content is often difficult to discern. How can we help students distinguish valid information from invalid, or credible sources from non-credible ones? In this paper, we explore how students learn to ask critical questions that reveal the strengths and weaknesses of incomplete or conjectural information.
We have developed the Knowledge Integration Environment (KIE) to engage students in sustained investigation, providing them with cognitive and procedural supports as they make use of the Internet in their science classroom. Students work collaboratively in KIE, performing design, critique, or theory-comparison projects that involve scientific "evidence" from the World Wide Web. Students working within the KIE environment can take advantage of cognitive, social and procedural supports including checklists to help monitor progress, and tools to help organize thoughts about evidence. These software elements allow for the design of curriculum projects that support knowledge integration, although project designers must incorporate them appropriately to ensure success. The KIE software and curriculum are complementary tools that work together to support students as they use the Internet in constructive, autonomous science learning. In this paper, we focus on those aspects of KIE that support students in critiquing Internet evidence.
To illustrate how KIE supports and guides students, we describe research in which students in an eighth-grade science class performed a KIE project relating to passive solar architecture. In their class, KIE projects are used to supplement an existing physical science curriculum that deals with topics of light, heat, and sound energy. Several KIE projects are used during the semester to help students integrate the ideas they have learned in other classroom activities (e.g., labs and lectures). This paper is concerned with one such activity, known as "Sunlight, SunHEAT!"
In the Sunlight, SunHEAT! project, students are asked to survey six different Web sites that deal with passive solar energy. These Web sites, selected deliberately by the project designer on the basis of their good science content, still vary in terms of how much science they include, their purpose (e.g., sophisticated advertisement for a home design vs. carefully prepared educational resource), their target audience, their overall length and complexity, and their overall relevance to the topic of passive solar energy. The KIE helps students successfully critique such "evidence" from the Web, and ask productive questions to help them apply its content in a design or debate project.
The KIE includes cognitive guidance to help students understand and interpret evidence, procedural guidance for using the KIE software effectively, and social supports to encourage discourse and sharing of ideas. For example, KIE prompts students to ask questions like: Who authored this evidence? What is their profession? Is this evidence useful to you? How trustworthy is it? In the Sunlight, SunHEAT! activity, the diverse set of evidence allows students to practice these valuable critiquing skills. For each piece of evidence, students formulate two "critical questions" that address specific gaps in their own science knowledge. This project allows students to think actively about how evidence impacts their understanding. Students using a KIE project like "Sunlight, SunHEAT!" connect and link their ideas through personally meaningful activities.
We describe the kind of learning supported by KIE as "knowledge integration." The overarching pedagogical framework which guides the development of the KIE software and curriculum is known as "Scaffolded Knowledge Integration." This chapter begins with a discussion of the principles of Scaffolded Knowledge Integration, offers a brief review of the various components of the KIE software, and then describes research on the design of cognitive guidance for the "Sunlight, SunHEAT!" project. We explore the nature of student critiques and questions, and identify ways to scaffold such knowledge integration activities.
How does knowledge integration work?
Since 1985, we have worked in a middle school classroom developing the Computer as Learning Partner curriculum, where students work collaboratively to understand science topics relating to light, heat, and sound (Linn and Songer, 1991; Linn, 1992). Through this ongoing effort of principled refinement, we have formulated a theoretical framework of learning as knowledge integration: a dynamic process where students build connections between their existing knowledge and the curriculum content.
In our view, students come to science class with a repertoire of models of scientific phenomena as suggested by work of Piaget (1970), diSessa (1988, 1993), and others. We use the term "model" loosely to refer to ideas, conjectures, principles, visualizations, and examples from everyday life. All of these mental constructs are drawn upon by students in support of their reasoning, and we refer to their totality as a "repertoire of models."
Students engage in knowledge integration when they add new models to this repertoire, refine existing models, and restructure their knowledge. Our framework guides the design of science projects that enable students to develop the autonomous learning skills they will need for a lifetime of knowledge integration. Thus, a curriculum should help students add accessible models, scaffold critical knowledge integration skills, and provide social supports for learning.
Our view suggests new goals for science education, to emphasize knowledge integration rather than the transmission of science content. Students should learn how to add new ideas to their existing repertoire, select appropriate models for use in solving their problems, identify good explanations, and make sense of new ideas and evidence. Finally, science instruction should help every student become a lifelong learner.
Knowledge integration can often go awry, as an extensive body of research demonstrates. Reif and Larkin (1991) showed that students often connect physics ideas superficially to formulas rather than concepts. Soloway (1985) has shown that students connect programming ideas to syntax rather than processes or patterns. Chi, Feltovich, and Glaser (1981) found that students organize knowledge around apparatus rather than principles in introductory physics courses. How can we guide this process so that students develop fruitful ideas rather than superficial ones that are not connected to their repertoire of existing models? What models can we add to the mix of ideas students already possess such that they engage in fruitful knowledge integration, and what activities can we provide such that students organize their existing ideas productively?
Our work on the Computer as Learning Partner (CLP) project has validated our cognitive, procedural and social supports for knowledge integration. Open-ended science exploration environments often fail to provide adequate scaffolding for knowledge integration. For example, "discovery learning" does not provide the supports available in most real-world scientific investigations (Eylon and Linn, 1988).
In our own research community, we provide extensive resources for project work. Individuals start by undertaking projects in supportive research groups with experienced collaborators or in classes where regular guidance is provided. We have an agreed-upon vocabulary, criteria for evaluating progress, and a professional practice of critiquing. Generally, investigators draw on a wealth of knowledge of related work to design a project of their own, with much research consisting of replication and extension of existing findings. Finally, there exists an established set of methodologies that support research efforts, as well as reward structures for successful work. In unscaffolded exploration, many students fail to make real progress toward knowledge integration because they lack these valuable supports.
Based on our research in the CLP project, we have formulated a set of design principles in what we call the Scaffolded Knowledge Integration Framework. The CLP research demonstrated that students approach the study of heat and temperature with a wide array of loosely connected ideas and language, sometimes suggesting that they view heat and temperature as being one and the same (e.g., "turn up the heat; turn up the temperature"), while other times maintaining their difference (e.g., "The baby has a temperature so heat the bottle."). They may connect insulation with physical barriers and distinguish heating from cooling. They may have fruitful expectations about whether a metal or wood stick would prevent burned hands when roasting marshmallows, but harbor less helpful views about the insulating properties of aluminum foil (e.g., students in our CLP classroom often suggest that aluminum foil is a good insulator for wrapping cold cokes or hot potatoes!).
In the Computer as Learning Partner work, we sought to foster knowledge integration by engaging students in sorting out these ideas and coming up with a predictive set of models. We help students develop personal criteria for linking ideas and expectations about what it means to explain and what it means to understand. Knowledge integration is a complex cognitive and social process that is influenced by many factors (e.g., the student's knowledge of domain material; the nature of the instruction provided; the student's values and attitudes about school and learning; social interactions within the classroom). We have articulated four major principles of our framework that include cognitive, social and epistemological components. We have drawn heavily on these principles in designing the Knowledge Integration Environment software and curriculum.
1) New goals for science learning are required in order to shift students (and teachers) away from their traditional focus on memorizing content material and performing well on standardized tests. We advocate a curriculum that emphasizes opportunities for students to evaluate scientific evidence according to their own personal understanding, to articulate their own theories and explanations, and to participate actively in principled design. In this way, students gain valuable lifelong learning skills that will serve them in all future endeavors. We advocate starting in middle school or earlier to establish a habit of distinguishing among ideas and reconciling diverse perspectives.
The Scaffolded Knowledge Integration framework recommends that any models introduced by instructors should connect to students' everyday lives (Linn and Muilenberg, 1996; Songer and Linn, 1989). Many researchers have called for explicit instruction in connecting ideas with everyday experiences. Reif and Heller (1982) in physics, Anderson (1985) in tutoring, and others have reinforced the notion of using such connections. Other approaches involve the modeling of ideas in the form of case studies (Guzdial, Rappin and Carlson, 1995; Linn and Clancy, 1990), visualization tools (White and Fredericksen, 1989), or employing spontaneous "think aloud" problem solving in class (Schoenfeld, 1985). Students also need to develop autonomy in evaluating connections and seeking out disconnected information. Thus, we find this process of connecting ideas most valuable when combined with efforts to encourage students to emulate it autonomously. Schoenfeld (1985) stresses this in his work on learning mathematics. He models the process of solving complex problems and encourages students to emulate his approach, often asking whether or not they have yet completed a problem to encourage them to find more connections and to test the connections they have made. In the Scaffolded Knowledge Integration framework, connecting ideas is an extremely important element, and is captured by our principle of "making thinking visible."
2) Make thinking visible. An important aspect of helping students work within their own repertoire of models is to help students make their own thinking visible, as suggested by many authors (e.g., Collins, Brown and Holum, 1991; see also Kozma, this volume). We provide students with tools and opportunities to represent their own thinking. This includes providing students with feedback about their current models. We also scaffold students in acquiring more sophisticated models, as well as more diverse models. Several types of scaffolding help, including cognitive, procedural, and metacognitive supports. In the Computer as Learning Partner curriculum, we designed accessible models of new or difficult concepts. These models might be "intermediate causal models" (e.g., White and Fredericksen, 1989; Slotta and Chi, 1996) or qualitative models (e.g., Linn and Songer, 1991). For example, in thermodynamics, a "heat flow" model rather than a sophisticated molecular account might be more accessible to students who hold the intuitive conception of heat as a caloric-like substance (see Linn and Songer, 1991; Slotta, Chi, and Joram, 1995; Linn and Muilenberg, 1996).
3) We emphasize autonomous student activities that connect to student concerns and engage students in sustained reasoning. Design or critique projects that require students to form opinions or explanations about evidence or to make principled design decisions encourage autonomy. To make such projects authentic, we draw on students' existing knowledge and incorporate scientific evidence that students find personally relevant.
4) Social supports for learning help students learn valuable skills of collaboration and gain insights from their peers. Science learning is rarely performed in isolation from one's peers; rather, peer exchange is often vital to learning (e.g., Brown and Campione, 1990; Vygotsky, 1987). Science projects should be designed to foster collaborative work, both because this will be an important skill for students throughout their lives, and also because it is an efficient means of learning how others connect ideas. Designing an effective social context for learning involves guiding the process of social interaction. At the turn of the century, Dewey (1920) called for taking advantage of the social context of learning. Recent environments such as CSILE (Scardamalia and Bereiter, 1991; 1992), KGS (Songer, 1993; 1996), Community of Learners (Brown and Campione, 1994), CAMILE (Guzdial, Rappin and Carlson, 1995), and the Multi-Forum Kiosk (Hoadley and Hsi, 1993) demonstrate the advantages of having students discuss their efforts at knowledge integration with peers.
Hearing ideas in the words of peers, validating each others' ideas, and asking questions of peers can all foster links and connections among ideas. Yet, these efforts can fail if students lack important ideas, copy paragraphs from texts without reflection, or reinforce unfruitful ideas held by others. Opportunities for class discourse succeed when structured into the curriculum, so that students share opinions, offer feedback to others, and reflect on the mix of ideas. In this area, we seek ways to support all students in their learning of science, and to actively work against social status or gender stereotypes that discourage some groups from participating in science discourse.
To make a learning environment productive involves principled design and redesign. In working to develop the Knowledge Integration Environment (KIE), we have closely adhered to the principles of Scaffolded Knowledge Integration, and refined our KIE software and curriculum after each semester of classroom trials. We now describe the KIE software, illustrating how it incorporates the principles of Scaffolded Knowledge Integration, including the constraints involved in building effective curriculum units.
The Knowledge Integration Environment
The KIE software implements the Scaffolded Knowledge Integration framework, providing an instructional shell that is suitable for any science domain and for students in grades 4 to 14. We conceive of KIE as a platform to scaffold students as they work in projects that rely on scientific "evidence" from the Web, as well as other evidence from their class work. Our ultimate goal is to scaffold autonomous learning, which includes the abilities to integrate diverse sources of information and critique their credibility. Clearly, the World Wide Web is an ideal domain for such learning, especially given its breadth of content and convenient location (right on the student's desktop!). The ambiguous credibility of Web resources challenges us, but also presents a clear opportunity for fostering critiquing skills.
KIE consists of both custom and commercial software components, all working together to support collaborative work as students use Web evidence, compose scientific arguments, or design artifacts. KIE provides students with access to relevant Internet evidence, on-line guidance of three varieties (procedural, cognitive, and social), organizational tools to scaffold project completion, argument-building tools to assist in sorting and interpreting evidence, an on-line discussion tool which supports meaningful peer interactions, and Web-based search tools to scaffold collaborative search. The KIE software runs on the student's personal computer, helps keep track of student work and manage the project flow, and interacts with various Web-based tools. KIE includes several major software components:
The KIE Evidence Database is a structured database of Web resources that can be used in KIE projects. These are Web sites that have been selected by a KIE curriculum designer because they are relevant to the project, or sometimes created specifically for the project. Each of these items is added to a database of evidence, where it is annotated with conceptual keywords (e.g., "light", "heat", "evolution"), target age level, and estimated time required for reading. In addition, evidence-specific guidance is attached to each item in the database.
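The annotation scheme just described amounts to a small structured record per Web resource. As a minimal sketch only (the field names and example values are our own illustration, not the actual KIE database schema), such a record might look like this in Python:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EvidenceItem:
    """One annotated Web resource in a hypothetical KIE-style evidence database."""
    title: str
    url: str
    keywords: List[str]     # conceptual keywords, e.g. "light", "heat", "evolution"
    age_level: str          # target age level for the material
    reading_minutes: int    # estimated time required for reading
    guidance: str           # evidence-specific guidance attached to the item

# Hypothetical entry resembling the passive solar evidence described in this chapter
solar_site = EvidenceItem(
    title="A Passive Solar Home Design",
    url="http://example.com/solar-home",  # placeholder, not a real project URL
    keywords=["heat", "passive solar energy"],
    age_level="grade 8",
    reading_minutes=20,
    guidance="Focus on how the design stores and releases the sun's heat.",
)
```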
Designing evidence and guidance to help students develop integrated understanding is a primary focus of the research reported in this chapter. Web sites often consist of many linked pages, which could quickly lead students into confusion or wasted effort. We avoid selecting highly complicated sites, and help students navigate and use selected evidence by providing each evidence item with an "Evidence Cover Page."
The Evidence Cover page contains guidance to be read by the student in advance of exploring the evidence. We refer to this guidance as "Advance Organization" information. Figure 1 shows an example of the Evidence Cover page for a lengthy Web site whose topic is passive solar architecture. Because the content of the site is so relevant to the project topic of passive solar energy, we chose to include the site as evidence. The Evidence Cover page provides students with advance guidance about the overall structure of the Web site, how to use it effectively, and some things to keep in mind while surveying its content. Below, we describe research on the design of such Advance Organization information to help students critique and ask questions about Web materials.
Figure 1. The Evidence Cover Page for one piece of evidence from the Sunlight, SunHEAT! project. Note that some additional text occurs below what is visible in the figure.
To provide additional guidance while students are actually viewing the evidence, we have written a custom piece of software, known as Mildred the Cow Guide. This interactive guidance tool allows students to receive cognitive guidance-on-demand about any piece of evidence they are reviewing. Figure 2 shows Mildred providing hints about the piece of evidence whose Evidence Cover page appears in Figure 1. Students consult Mildred when they are confused by the evidence, or are struggling with the project activities. Mildred does not provide students with any answers. Rather, her "hints" come in the form of questions that focus students on important "science content" within the evidence. The question of how to design guidance for students is an ongoing part of the KIE research program (Davis, 1997; Bell, Davis and Linn, 1995).
Figure 2. Mildred the Cow Guide provides hints, notes and rating of evidence.
The KIE Tool Palette, seen at the right of Figure 3, provides links to all the components of KIE. By clicking on buttons for one of these tools (or the Activities Checklist, described below), students bring the chosen software to the foreground of the screen. This might be Mildred, the "Netbook" (a tool for organizing project documents, visible in Figure 3 behind the Details Window), Netscape (the Web browser employed by KIE), the SpeakEasy (an electronic discussion tool, described below), or SenseMaker (an evidence sorting and argument creation tool, also discussed below). The Tool Palette is one constant in KIE's otherwise dynamic appearance, providing students with continuous access to all KIE functionality.
Figure 3. KIE Tool Palette and Details Window with Netbook in background.
The Activities Checklist, visible above the Tool Palette in Figure 3, guides students through a sequence of project activities, such as: "state your opinion about a debate;" "survey evidence from the Web;" or "revise your design." When students enter KIE, they log in and choose a project from a menu of options. Typically, students engage in one KIE project at a time, but we foresee contexts in which students would have more than one project underway. The KIE software manages student work and keeps track of which activities have been completed. This information appears in the Activities Checklist, which remains visible as students work on a project. The checklist displays which activities have been "checked off," and provides an interface with which students can check (or uncheck) activities they have completed. The "current activity" is the first one on the list that is not checked off.
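The checklist bookkeeping described above is simple: record which activities are done, let students toggle them, and treat the first unchecked activity as current. Below is a minimal sketch of that logic, assuming a plain list of activities; it is an illustration, not the actual KIE code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Activity:
    description: str
    done: bool = False

class ActivitiesChecklist:
    """Sketch of the checklist bookkeeping: completion state plus a current activity."""

    def __init__(self, descriptions: List[str]):
        self.activities = [Activity(d) for d in descriptions]

    def toggle(self, index: int) -> None:
        """Check or uncheck an activity, as students can do in the interface."""
        self.activities[index].done = not self.activities[index].done

    def current(self) -> Optional[Activity]:
        """The current activity is the first one on the list not yet checked off."""
        return next((a for a in self.activities if not a.done), None)

checklist = ActivitiesChecklist([
    "State your opinion about the debate",
    "Survey evidence from the Web",
    "Revise your design",
])
checklist.toggle(0)
print(checklist.current().description)  # -> "Survey evidence from the Web"
```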
Students can always request detailed procedural guidance about an activity by clicking the "Details" button that appears below the checklist. This brings up the Details window (also shown in Figure 3) with guidance about the current activity, or about the project as a whole (known as "The Big Picture"). The Details window provides students with explicit instructions about exactly what to do (e.g., "survey three pieces of evidence from the list of evidence") and equally important, how to do it (e.g., "click on Mildred, choose a piece of evidence from the list of evidence there, then perform the rating for that piece of evidence and fill in the Notes section"). This kind of procedural scaffolding relieves the instructor of what is otherwise a standard component of using computer technology in the classroom: running around from student to student, answering questions and helping them figure out what to click on next. We continue to refine our guidance software, providing the teacher with more time to focus on students' conceptual concerns. Overall, students who use KIE become quickly familiar with the various tools and guidance structures, and rarely get confused about "what to do next."
The SenseMaker provides students with an effective way of making thinking visible, as shown in Figure 4. Developed by Philip Bell (Linn, Bell, & Hsi, in press), this unique argument editor allows students to create argument "frames" (represented by the nested boxes) and sort evidence into similarity-based groupings to support their argument. For example, as shown in Figure 4, when students are asked to compare two theories of "How far does light go?" they are provided with initial SenseMaker frames labeled, "Light goes forever," and "Light dies out." Students review the evidence on the Web, take notes on each item in Mildred, return to SenseMaker and "sort" each item into a relevant frame. New frames can be created, either within the higher-level frames, or in parallel to them. In this way, students are scaffolded to make use of the evidence in forming their own argument about a topic. Students can "follow" the Evidence links from SenseMaker directly to the evidence (which opens up the relevant Web site in Netscape), and then return to SenseMaker in a cyclical fashion. It is also possible to make new evidence items in SenseMaker corresponding to personally relevant experiences or insights. When students have completed such a sorting task, they are left with a visible representation of their interpretation of all the evidence, which allows them to reflect and make connections between the evidence and their existing repertoire of models.
Figure 4. The SenseMaker tool allows students to "sort" evidence links into relevant frames. Students can create new links or frames, and follow links to view evidence in Netscape.
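The frame-and-evidence organization that SenseMaker presents on screen is, underneath, a simple tree: frames nest within frames, and each evidence link is sorted into a frame of the student's choosing. The sketch below illustrates that structure with hypothetical class and method names (not the actual SenseMaker implementation), seeded with the "How far does light go?" frames described above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """An argument frame: frames nest, and evidence links sort into them."""
    label: str
    evidence: List[str] = field(default_factory=list)   # titles of evidence links
    subframes: List["Frame"] = field(default_factory=list)

    def add_subframe(self, label: str) -> "Frame":
        sub = Frame(label)
        self.subframes.append(sub)
        return sub

# Initial frames for the debate, as provided to students
argument = Frame("How far does light go?")
forever = argument.add_subframe("Light goes forever")
dies_out = argument.add_subframe("Light dies out")

# Students "sort" reviewed evidence into a relevant frame (hypothetical items)
forever.evidence.append("We can see stars that are very far away")
dies_out.evidence.append("A flashlight beam fades with distance")
```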
The SpeakEasy provides an on-line asynchronous discussion tool for students. When first joining a SpeakEasy discussion, students log in and respond to a seed "topic statement" provided by the instructor. Requiring this initial opinion ensures that each student takes some position before reading other students' comments. The student then proceeds to the conversation area (displayed in Figure 5) to read and respond to other students' statements, as well as offer new topics for conversation. Each comment is represented with social information, including a small image of the contributor's face and first name, as well as an indication of whether the comment is a statement, an addition to a statement, a rebuttal of a statement, a question, or a response to a question.
Figure 5. The SpeakEasy Discussion Tool allows for equitable and productive discussions about science topics, peer interaction and exchange of ideas.
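Each SpeakEasy comment thus carries both social information (the contributor's face and first name) and a discourse type. A hypothetical encoding of that structure follows; the names are our own illustration, not the SpeakEasy data model.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CommentKind(Enum):
    """The comment types described above."""
    STATEMENT = "statement"
    ADDITION = "addition to a statement"
    REBUTTAL = "rebuttal of a statement"
    QUESTION = "question"
    RESPONSE = "response to a question"

@dataclass
class Comment:
    author_first_name: str
    author_image: str                      # small image of the contributor's face
    kind: CommentKind
    text: str
    reply_to: Optional["Comment"] = None   # which comment this one responds to

opinion = Comment("Pat", "pat.gif", CommentKind.STATEMENT,
                  "I think light dies out eventually.")
question = Comment("Lee", "lee.gif", CommentKind.QUESTION,
                   "Then why can we see stars that are so far away?",
                   reply_to=opinion)
```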
Social discourse is an intrinsic part of knowledge integration, and a lifelong learning skill that should be an important goal of all education (Hsi, 1996; Scardamalia, Bereiter, McLean, Swallow, & Woodruff, 1989). Traditionally, teachers have only a limited time to allow for classroom discussion, and management of productive discourse is a very difficult task. Typically, few students are able to voice an opinion in response to a teacher's comments, with even less exchange occurring between students (Hsi, 1997; Wellesley College Center for Research on Women, 1992).
In contrast, electronic discussions can occur asynchronously, extending throughout the duration of the project or even the semester. Hsi and Hoadley (1997) found that electronic discussions enable equitable participation in scientific discourse; their studies compared gender differences in participation between class discussion and electronic discussion. These electronic discussions were characterized by high levels of conceptual content, elaboration, and question-asking. Overall, students generated a repertoire of models for phenomena, asked content-focusing questions, and provided causal explanations. We continue to research the domain of electronic discussion, both in terms of matters of equity in the classroom (Hsi, 1997) and in understanding issues of social representations (Hoadley et al., 1997).
In addition to the software components outlined above, we are working in partnership with scientists and teachers to create KIE projects for our growing library of activities. In all of this work, we continue to research important issues surrounding pedagogical approaches, as well as issues with the use of Internet materials. In the sections below, we present an overview of this work, and discuss in detail one strand of our classroom research relating to how KIE projects can foster vital critiquing skills. For more details about the software, curriculum, and partnership opportunities, see our KIE Web site (www.kie.berkeley.edu).
The KIE software provides many important tools and opportunities for knowledge integration activities. However, the design of curriculum projects and related materials remains equally central. Table 1 presents a list of the kinds of activities that characterize successful KIE projects. A growing library of KIE projects is currently available, providing teachers with access to curriculum that meets these criteria.
___________________
INSERT TABLE 1 ABOUT HERE
___________________
Designing Guidance for Sunlight, SunHEAT!
How can we design KIE activities to help students critique Web evidence by scrutinizing its authorship and assessing its relevance to the project topic (in this case, passive solar energy)? How can we help students focus their attention on the relevant science content within the evidence? Critiquing and asking questions about science evidence is an essential knowledge integration skill, and one that we must support in any learning environment that makes use of Web resources (Palincsar and Brown, 1984; Collins, Brown and Newman, 1990; Rosenshine, Meister and Chapman, 1996).
As mentioned above, successful KIE projects rely on cognitive and procedural guidance included in the KIE Evidence Cover Page accompanying each evidence item in the project. The present study contrasts different levels of cognitive and procedural guidance provided to students on the Evidence Cover Pages. One group of students encountered a brief overview of the evidence; a second group received specific advice about how to make good use of each evidence item; and a third group of students received the same advice, together with some model questions obtained from student work in a previous semester. This comparison study suggests ways to design procedural and cognitive guidance to support students in the lifelong learning skills of critiquing and questioning evidence.
Materials: the Sunlight, SunHEAT! project
The "Sunlight, SunHEAT!" project supports students as they critique Web evidence and gain understanding of passive solar energy. The topic connects to student interests in solar energy (many of whom live in homes or apartments with some solar architecture - such as solar water heaters or insulated window coverings), and contributes to a subsequent project where students design a house for the desert. In the three consecutive semesters that we have studied this project, students have been excited by its content and often refer back to its evidence during subsequent projects.
The "Sunlight, SunHEAT!" KIE project includes several activities: (1) a preliminary class discussion about "good science questions;" (2) a small group hands-on activity; (3) teacher-led discussion of evidence and the Internet; (4) small group Evidence Critique and questioning; and (5) a final class discussion.
In the preliminary class discussion about effective questions, students focus on three criteria for useful science questions: (a) they are specific (e.g., "Is the car battery drained?" as opposed to "Why won't my car start?"); (b) they are relevant to gaps in understanding; and (c) they are productive in the sense of suggesting an experiment. In the hands-on activity, students spend one class period exploring a mysterious object called a radiometer, and formulate questions that will lead them to a better understanding of how it works.
In the "Evidence and the Internet" discussion, the class establishes definitions of evidence, including evidence from the Web, and considers the importance of critiquing evidence source and content. The classroom targets several specific concerns about Web evidence (e.g., authorship, bias, validity, etc).
In the Evidence Critique activity, students are guided by the instructor through the review and critique of the first evidence item (an educational Web site concerning the history of passive solar energy, the science of solar energy, and some applications). Students then choose three of the remaining five evidence items to critique on their own. For each piece of evidence, students respond to four prompts: (a) How difficult was it for you to understand this evidence? (1 = not difficult; 3 = very difficult); (b) Who do you think wrote this evidence? (check all that apply: scientist, advertiser, journalist, regular adult, student); (c) How credible or trustworthy was the evidence? (1 = low credibility; 3 = high credibility); and (d) Write two questions about the science in this evidence that would help you understand it more completely.
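The critique form thus yields a small structured record for each evidence item a pair reviews. A hypothetical encoding of one such record, with field names of our own invention:

```python
from dataclasses import dataclass
from typing import List

AUTHOR_CHOICES = ["scientist", "advertiser", "journalist", "regular adult", "student"]

@dataclass
class EvidenceCritique:
    """One pair's responses to the four critique prompts (hypothetical encoding)."""
    difficulty: int        # (a) 1 = not difficult ... 3 = very difficult
    authorship: List[str]  # (b) any subset of AUTHOR_CHOICES
    credibility: int       # (c) 1 = low credibility ... 3 = high credibility
    questions: List[str]   # (d) two questions about the science in the evidence

critique = EvidenceCritique(
    difficulty=2,
    authorship=["advertiser"],
    credibility=3,
    questions=["Why are the tire walls filled with dirt?",
               "Would the walls work as well without the dirt?"],
)
assert (len(critique.questions) == 2
        and all(a in AUTHOR_CHOICES for a in critique.authorship))
```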
In the final class discussion, students share their questions about the evidence and discuss their critiques. Students in this study had never encountered the KIE software before, and many had never experienced the World Wide Web. However, they appeared to understand and complete all activities in the project easily, as reflected by the high quality of their questions and critiques across all instructional conditions.
The six different Web sites used as evidence in this project varied in their credibility and relevance to the topic. The first site was designed for educational purposes in the domain of alternative energy, but was very long and convoluted, consisting of more than a hundred distinct pages all linked together by an esoteric navigational system provided at the bottom of each page. Apparently, this was an effort at bringing "textbook-like" material to the Web, as the navigation system consisted mainly of "Next" and "Previous" arrows that would lead the reader serially through its content. The next site was commercial in nature, dedicated to a specific brand of energy efficient home. Despite the marketing angle of this site, it contained high quality discussions of the science of passive solar energy geared towards explaining the design of the home. Three other sites were also of this nature, in that they provided examples of interesting solar homes with supporting scientific content. A final site was authored by a private individual to document his own solar house, and included a broad range of interesting discussions.
Instructional Conditions
We designed three different versions of the Evidence Cover page to provide alternative forms of advance guidance, as well as a link to the actual evidence site on the Web. KIE provides cognitive and procedural guidance to students in advance of their exposure to the Web evidence to focus their attention on useful parts of the evidence, connect its content to their existing ideas, and use it productively in the course of project activities. In this study, we assess the effectiveness of different levels of guidance for evidence on the Sunlight, SunHEAT! project.
For each piece of evidence, three different versions of the Evidence Cover Page were constructed and delivered to different groups of students. The Overview version of the cover page included only limited statements about the basic theme of the Web site, followed by a link to the site. The Advance Guidance version included (1) Focusing Information to help students identify the critical concepts in the site, (2) Challenging Information to bring difficult or hidden questions or issues to light, and (3) Logistic Information to suggest effective approaches to reading and evaluating the site. The third version of the cover page was referred to as Advance Guidance plus Model Questions, and included the Advance Guidance information, plus model questions that were drawn from the best of student questions in a previous semester. The central research question concerned the effect of these different types of guidance on the quality of student questions as they critiqued ordinary Web sites.
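The manipulation, then, lies entirely in the cover page: all three versions link to the same Web site and differ only in the advance guidance they carry. A minimal sketch of the three versions as data (the field and function names are our own illustration, not the KIE implementation):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EvidenceCoverPage:
    """Hypothetical encoding of an Evidence Cover Page and its guidance."""
    overview: str                      # brief statement of the site's basic theme
    evidence_url: str                  # link to the actual Web site (same for all groups)
    focusing: Optional[str] = None     # identifies the critical concepts in the site
    challenging: Optional[str] = None  # raises difficult or hidden questions or issues
    logistic: Optional[str] = None     # suggests how to read and evaluate the site
    model_questions: List[str] = field(default_factory=list)  # from prior student work

def condition(page: EvidenceCoverPage) -> str:
    """Classify a cover page into one of the study's three conditions."""
    if page.model_questions:
        return "Advance Guidance plus Model Questions"
    if page.focusing or page.challenging or page.logistic:
        return "Advance Guidance"
    return "Overview"
```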
Participants
Three hundred forty-five eighth-grade students participated in the project: 177 in one semester, and 168 in the following semester. These students had never used the KIE environment before, and varied widely in their prior experience with computers and the World Wide Web.
Procedure
The project was conducted within the first month of the semester, and was embedded in a broader physical science curriculum relating to topics of light, heat, and sound energy. All students worked collaboratively in pairs to complete the activities over five class days as part of their curriculum in eighth-grade science class. All activities were coordinated by their normal science teacher. Students spent the first day in the hands-on exploration activity, and the second day in a class discussion of evidence and the Internet. They then spent three class periods critiquing the KIE "Sunlight, SunHEAT!" evidence, with a final wrap-up discussion at the end of the last period.
Analysis of Critiques and Questions
To assess the impact of the different forms of guidance, we examine the content of our various measures (i.e., the critique ratings and the questions generated). Because student work was performed collaboratively, outcomes represent contributions from both members of a pair, in an unknown mixture. Our claims thus apply to pairs of students: if pairs working in one condition reliably outperform pairs working in another, we can reasonably infer that the measured differences reflect cognitive effects.
Although students could choose which of the five KIE evidence items to critique, two of the five items were selected by a large majority of the student groups. Students rarely selected the item relating to the energy design and performance of a privately owned solar home (which was produced by the home owner) or the commercial advertisement for a solar design company (which consisted primarily of text). Thus, allowing students to choose amongst evidence items resulted in less variety than expected. We plan to replace the unpopular items in future versions of this project.
Here, we analyze the two most popular evidence items: the first was a colorful description of one company's passive solar house design, including many attractive graphics, as well as some very difficult science content; the second was an alternative house design, which was popular for its catchy name, and the fact that it included recycled materials such as empty aluminum cans and old tires.
Restricting our discussion to those students who critiqued both of these items reduces our sample from 174 pairs of students to 129 pairs: 30 pairs in the Overview condition, 56 pairs in the Advance Guidance condition, and 43 pairs in the Advance Guidance plus Modeling condition. Each pair of students performed three critique ratings and formulated two questions for each of the two evidence items. We assess the effect of the different instructional conditions on student critiques, as well as the overall effectiveness of KIE in supporting student critique activities.
Analysis of evidence critiques
Students in all three instructional conditions critiqued the evidence items by performing ratings of (a) the authorship, (b) the usefulness of the evidence for their understanding of passive solar energy, and (c) the validity ("trustworthiness") of the evidence content. These ratings were contrasted between the groups of students, as well as with "expert ratings" obtained from two members of the research team. We can interpret differences between ratings from the three groups to be a consequence of the different guidance provided to those groups on their respective Evidence Cover pages.
Analysis of student questions
Students were required to formulate two questions about each piece of evidence. From the class discussion, as well as the Mildred prompts, students understood that their goal was to ask questions that would help them understand the science content of the evidence more completely. Since both pieces of evidence were advertising house designs, much of their content consisted of images of houses, discussion of ethical matters, and even pricing scales. Thus, students were challenged to locate and focus on the fraction of the evidence offering science content. In earlier project activities, students had identified criteria for effective questions, which provide us with a means of assessing student questions. Each question was scored (on a scale of 1 to 3) on the following three criteria:
1. Specificity: Does the question focus on a specific part of the evidence? For example, a non-specific question might be: "Why should we have solar homes?" In contrast, a question that targets a specific feature of a house design from the evidence (e.g., the circulation of an "air envelope") might be: "How can the cold air manage to rise, when I thought it was hot air that rises?"
2. Relevance: Does the question target a gap in the student's knowledge? It is possible to have a highly specific question whose answer would not contribute much to any scientific understanding. For example, the house design reviewed in one of the evidence items makes use of passive solar energy stored in thick exterior walls made of recycled tires filled with dirt. One student asked of this evidence, "What happens to the tire walls in an earthquake?" This is a very specific, and even interesting, question. But it is somewhat peripheral to the task, which was to generate questions about the science of heating these homes. Further, its answer (they don't stand any more of a chance of collapse than conventional walls) would not contribute much to the student's understanding of that science. Rather, a question like, "Why are the tire walls filled with dirt?" would result in some direct increase in understanding the thermal physics of the house design.
3. Productivity: Is the question "productive," in the sense of possibly spawning an experiment or activity that would address the gap in the student's knowledge? There was considerable focus placed on this issue in the class discussion of evidence, as well as in the Explore and Question (radiometer) exercise. The classroom teacher would contrast a question like "Why are the tire walls filled with dirt?" (which essentially requests the entire answer from some authority) with a question like "Would the tire walls be as good if they weren't filled with dirt?" (which suggests a possible experiment) or "Why does the dirt help the tire walls retain heat?" (which is cast at a more precise level of causal inquiry).
These three measures form a progressive scale. That is, questions that have a specific focus might not be relevant or productive (e.g., the "appeal to authority" type of question); questions that are relevant to a student's understanding are probably specific, but not necessarily productive; and questions that are productive are probably both specific and relevant. We employed these three measures in exploring differences between the three instructional groups, as well as to gain overall insight into how KIE supports students in this important knowledge integration activity.
Results
Student critiques of authorship
Students were completely successful in judging the authorship of these two evidence items. There was virtually no difference between authorship judgments of the three instructional groups and those of the expert ratings, consistent with findings reported by Clark and Slotta (1997). These evidence items may not have provided the best test of the effect of advance guidance on authorship judgments, since they were both obviously advertisements. Nearly every student who participated in the project judged the authors of these sites to be "advertisers." Although in this particular project the effect of advance guidance on students' ability to judge authorship was at ceiling, the task of judging authorship -- however easy to perform -- may have helped to support students as they began their critique of the evidence by drawing their attention to the "advertisement" nature of the evidence.
Student critiques of credibility
Students in the different instructional groups were found to differ in their ratings of the credibility of the evidence. When asked to judge how "trustworthy" the information in the evidence was, students who received Advance Guidance were in closer agreement with the expert ratings than their peers in the Overview condition. Table 2 provides means and standard deviations for all student and expert ratings of credibility. It is important to note that the expert ratings differed for the two items. Evidence item one consisted of a sophisticated account of the energy properties of a specific house design, clearly presented with compelling graphics and supporting theory. The two "expert raters" judged this evidence to be "highly credible" (a score of 3), and agreement with this rating increased across the three student groups with increasing levels of advance guidance. Similar results are seen for credibility judgments of evidence item two, which was judged by the expert raters as "moderately credible" (a score of 2). The Overview group rated the credibility of this item more highly than the other two groups, which drew progressively nearer to the expert judgment. This difference is significant, with F(2, 126) = 3.52, p < .05.
___________________
INSERT TABLE 2 ABOUT HERE
___________________
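The group comparisons reported in this section are one-way analyses of variance over the pair-level ratings. Purely as an illustration of the computation (the ratings below are invented placeholders, not the study's data), such an F statistic could be obtained as follows:

```python
# Illustration of a one-way ANOVA like those reported in the text.
# These ratings are invented placeholders, NOT the study's actual data.
from scipy.stats import f_oneway

overview = [3, 3, 2, 3, 3, 2, 3]   # credibility ratings from Overview pairs
guidance = [2, 3, 2, 2, 3, 2, 2]   # Advance Guidance pairs
modeling = [2, 2, 2, 2, 3, 2, 2]   # Advance Guidance plus Model Questions pairs

f_stat, p_value = f_oneway(overview, guidance, modeling)
df_between = 3 - 1                                         # k - 1 groups
df_within = len(overview) + len(guidance) + len(modeling) - 3  # N - k
print(f"F({df_between}, {df_within}) = {f_stat:.2f}, p = {p_value:.3f}")
```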
Student Usefulness Judgments
Usefulness judgments are perhaps the most interesting of all the critiquing measures, as they do not agree with the expert ratings, yet still vary systematically. Table 2 displays the usefulness ratings of students in the three instructional groups, as well as those of the expert raters. Once again, the two "expert raters" of the evidence agreed that evidence item one was more sophisticated than evidence item two, as the latter provided only a qualitative description of the science involved in their house design. The expert rating of usefulness was thus agreed to be "very useful" for item one (a score of 3), and "somewhat useful" for item two (a score of 2). Student ratings across all three groups agreed with these expert ratings, with evidence item one being rated more useful overall than evidence item two: F(1, 125) = 7.54, p < .01. Still, a curious effect is observed in the group differences, with those students in the Advance Guidance groups tending to rate both evidence items as being more useful than did students in the Overview group. Combined across the two evidence items, this effect is significant, with F(2, 125) = 3.60, p < .05. Apparently, having one's attention focused on the relevant science in a piece of evidence leads to heightened estimations of its inherent usefulness. This is an encouraging result, because it demonstrates that the advance guidance affected the character of student critiques of the science content of the evidence.
Students' relevant science questions
Figure 7 shows that student questions improved significantly along all three of our measures (specificity, relevance, and productivity) as a result of advance guidance. The difference between the three conditions was significant: F(2, 121) = 15.01; p < .0001. Table 3 provides the mean values of student questions for the two evidence items. All apparent differences in the table are significant, and even the Advance Guidance+Modeling condition showed a significant increase over the Advance Guidance condition: F(1, 90) = 3.96; p < .05.
Figure 7. With increasing levels of scaffolding, students asked questions of the evidence that were more specific, more relevant, and more productive.
___________________
INSERT TABLE 3 ABOUT HERE
___________________
The results shown in Figure 7 and Table 3 suggest that our approach in designing guidance for the Sunlight, SunHEAT! evidence was successful in scaffolding students to ask critical questions of evidence. In all three measures, students who received cognitive guidance consistent with the Scaffolded Knowledge Integration framework were able to ask questions that were rated nearly a standard deviation better than those of students receiving only the Overview information.
The power of advance guidance is clear in these results. This effect can only derive from differences in the advance guidance information, for once the students left the Evidence Cover pages (where the advance guidance and modeling information was supplied) and went out to the actual Web sites, they all viewed exactly the same information. That is, students in all three groups received exactly the same evidence, in the form of existing Web sites. Thus, KIE was able to influence student critiques, as well as their ability to ask critical questions, simply by providing students with guidance in advance of their seeing the evidence.
Discussion
This chapter describes the KIE software, and details the careful design process we follow to create effective activities for KIE. Our overall progress involves basic research, development, testing, and refinement of the KIE software and curriculum, as well as ongoing partnerships with teachers and scientists to create new KIE projects. Cognitive guidance can help overcome some of the dangers and challenges of incorporating Internet materials into science curriculum. Carefully designed guidance can help students use Internet materials effectively in spite of variable characteristics like authorship, organizational complexity, content validity, rhetorical purpose, and age appropriateness.
We have developed the Knowledge Integration Environment to take advantage of the opportunities afforded by Internet evidence. Like all citizens, students in today's world must become lifelong learners on the Internet, using the Web to answer their questions and solve their problems. Cognitive guidance can help students learn to actively critique the source and content of materials from the Web, and ask questions that foster their own knowledge integration. Our future research will pursue the question of whether such critiquing abilities can be sustained by students, even when they are using the Internet in the absence of KIE guidance.
The research described here provides an example of the manner in which we have refined the KIE software and curriculum based on principles of Scaffolded Knowledge Integration. By manipulating the guidance on the KIE Evidence Cover pages, we can improve the effect of this information on students' success in critiquing the evidence. Our discussion of the "Sunlight, SunHEAT!" project illustrates the way in which specific KIE curriculum projects evolve through successive refinement according to the principles of knowledge integration. This marriage of software and curriculum is no coincidence, as rapidly developing Internet technology demands a synergistic approach to development. Changes in the software are motivated by research on the effects of various curriculum elements, all of which rely on the function and affordances of an emerging technology. Thus, we pursue a flexible and responsive approach to development in order to capitalize on this inherent synergy.
Refining the KIE software
We have illustrated our basic approach to refining the KIE software by describing how we study the design of cognitive guidance for Internet evidence. The KIE software can scaffold students' use of evidence, increasing their ability to connect the evidence content to their existing ideas about science. This research clarifies the forms of cognitive guidance that help students critically evaluate Web evidence: "Focusing Information" helps students identify critical science concepts; "Challenging Information" brings difficult or hidden issues to the student's attention; and "Logistic Information" suggests strategies for approaching the Web site (e.g., "you may want to follow the link at the bottom of the page, which will lead you to a helpful table of facts").
For every component or function of the KIE software, we have employed this strategy of comparing various designs in a tailored curriculum project. This has allowed us to inform our design decisions by testing ideas and gaining new insights for subsequent revisions. Just as we are confident that the advance guidance we provide on our Evidence Cover pages will support students as they learn to critique evidence, we have a similar research basis for other components of KIE: the nature and content of Mildred's guidance; the most effective representations for SpeakEasy conversations; and what information is most helpful to students on the Tool Palette, to name a few.
Refining the KIE Curriculum
A design goal for KIE is to support students as they learn to critique Web materials. In the "Sunlight, SunHEAT!" project, these supports took the form of Advance Organization guidance from the Evidence Cover page, as well as Mildred notes which prompted students to rate the author, credibility and usefulness of the evidence. In this way, students' attention was guided to the most important elements of critique for their purposes. Without these vital supports, students could develop inappropriate or inconsistent critiquing strategies, or fail to attend to some important features as they make use of Internet evidence.
The Scaffolded Knowledge Integration framework has suggested a focus on critiquing skills, and we have designed both curriculum and software in this light. Students learn from critiquing, because this is an activity which they find personally relevant and comprehensible. Through critiquing, they also gain a valuable lifelong learning skill. Furthermore, critiquing is particularly relevant to activities which use the World Wide Web, since evidence from the Web is often of questionable origin and intent. Thus, most KIE curriculum projects support students in critiquing the evidence used within the project.
A second knowledge integration activity addressed by this research is that of generating questions. By articulating their own questions about the evidence, students are able to target gaps in their understanding. We have seen from the results above that students' questions become more relevant and productive when they are provided with advance cognitive guidance as to which aspects of the evidence are the most salient. To support students in asking effective questions of evidence, we provided model questions drawn from student work in a previous semester.
A Synergy of Technology, Software, and Curriculum
The KIE software and curriculum have been through many cycles of refinement, with major revisions occurring in each of the past three years. During this time, students have used KIE in a variety of classroom contexts, ranging from middle school to high school to university engineering courses. Because KIE was developed to support a process of knowledge integration, rather than any specific conceptual content or domain, it has emerged as a highly powerful and flexible learning environment. The Scaffolded Knowledge Integration framework guides KIE design decisions, and research findings such as those reported here allow us to refine our understanding of Scaffolded Knowledge Integration.
Our research program has led us naturally to embrace the Internet as an essential resource for knowledge integration in the science classroom. In combination with tools and resources offered by KIE, the diversity and accessibility of the Web helps students become lifelong science learners. In Web-based activities, they can connect science instruction with their existing ideas, gain valuable lifelong learning skills, and discover personally relevant science evidence. Additionally, the uncertainty of Web authoring and validity of information can serve as an important opportunity for students to gain prowess in looking at evidence with a critical eye. The cognitive scaffolding offered by KIE enables students to critique Web evidence, and recognize its potential strengths and weaknesses.
References
Anderson, J. R., Boyle, C. F., & Reiser, B. J. (1985). Intelligent tutoring systems. Science, 228, 456-462.
Bell, P., & Davis, E. A. (1996). Designing an activity in the Knowledge Integration Environment. Paper presented at the 1996 Annual Meeting of the American Educational Research Association, New York, NY.
Bell, P., Davis, E. A., & Linn, M. C. (1995). The Knowledge Integration Environment: Theory and design. In Proceedings of the Computer Supported Collaborative Learning Conference (CSCL '95, Bloomington, IN) (pp. 14-21). Hillsdale, NJ: Lawrence Erlbaum Associates.
Brown, A. L., & Campione, J. C. (1994). Guided discovery in a community of learners. In K. McGilly (Ed.), Classroom lessons: Integrating cognitive theory and classroom practice. Cambridge, MA: MIT Press/Bradford Books.
Brown, A. L., & Campione, J. C. (1990). Interactive learning environments and the teaching of science and mathematics. In M. Gardner, J. G. Greeno, F. Reif, A. H. Schoenfeld, A. diSessa, & E. Stage (Eds.), Toward a scientific practice of science education (pp. 111-140). Hillsdale, NJ: Lawrence Erlbaum Associates.
Brown, J. S., Collins, A., & Duguid, P. (1988). Situated cognition and the culture of learning (Report No. IRL 88-0006). Institute for Research on Learning.
Chi, M. T. H., Feltovich, P., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5(2), 121-152.
Clark, D. B., & Slotta, J. D. (1997). Interpreting evidence on the Internet: Sex, lies, and multimedia. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL.
Collins, A., Brown, J. S., & Newman, S. E. (1990). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453-494). Hillsdale, NJ: Lawrence Erlbaum Associates.
Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking visible. American Educator, 15(3), 6-11, 38-39.
Davis, E. A. (1997). Students' beliefs about science and learning. Paper presented at the Annual Meeting of the American Educational Research Association, March 22-28, Chicago, IL.
Dewey, J. (1920). The child and the curriculum. Chicago, IL: University of Chicago Press.
diSessa, A. A. (1988). Knowledge in pieces. In G. Forman & P. B. Pufall (Eds.), Constructivism in the computer age. Hillsdale, NJ: Lawrence Erlbaum Associates.
diSessa, A. A. (1993). Toward an epistemology of physics. Cognition and Instruction, 10, 1-196.
Eylon, B. S., & Linn, M. C. (1988). Learning and instruction: An examination of four research perspectives in science education. Review of Educational Research, 58(3), 251-301.
Guzdial, M., Rappin, N., & Carlson, D. (1995). Collaborative and multimedia interactive learning environment for engineering education. In Proceedings of the ACM Symposium on Applied Computing 1995 (pp. 5-9). Nashville, TN: ACM Press.
Hsi, S., & Hoadley, C. M. (1997). Productive discussion in science: Gender equity through electronic discourse. Journal of Science Education and Technology, 6(1).
Hoadley, C. M., Fishman, B., Harasim, L., Hsi, S., Levin, J., Pea, R., Scardamalia, M., & Linn, M. C. (1997). Collaboration, communication, and computers: What do we think we know about networks and learning? Panel presented at the Annual Meeting of the American Educational Research Association, Chicago, IL, April 1997.
Hoadley, C. M., & Hsi, S. (1993). A multimedia interface for knowledge building and collaborative learning. In Adjunct Proceedings of InterCHI '93 (International Computer-Human Interaction Conference) (pp. 103-104). Amsterdam, The Netherlands: Association for Computing Machinery.
Linn, M. C. (1992). The Computer as Learning Partner: Can Computer Tools Teach Science? In K. Sheingold, L. G. Roberts, and S. M. Malcolm (Eds.), Technology for Teaching and Learning. Washington, DC: American Association for the Advancement of Science.
Linn, M. C., & Clancy, M. J. (1990). Designing instruction to take advantage of recent advances in understanding cognition. Academic Computing (April), 20-41.
Linn, M. C., Bell, P., & Hsi, S. (in press). Lifelong science learning on the Internet: The Knowledge Integration Environment. Interactive Learning Environments.
Linn, M. C., & Muilenberg, L. (1996). Creating lifelong science learners: What models form a firm foundation? Educational Researcher, 25(5), 18-24.
Linn, M. C., & Songer, N. B. (1991). Cognitive and conceptual change in adolescence. American Journal of Education, 99(4), 379-417.
Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1(2), 117-175.
Piaget, J. (1970). Science of education and the psychology of the child. New York: Orion Press.
Reif, F., & Heller, J. I. (1982). Knowledge structure and problem solving in physics. Educational Psychologist, 17, 102-127.
Reif, F. & Larkin, J. H. (1991). Cognition in scientific and everyday domains: Comparison and learning implications. Journal of Research in Science Teaching, 28 (9), 733-760.
Rosenshine, B., Meister, C., & Chapman, S. (1996). Teaching students to generate questions: A review of the intervention studies. Review of Educational Research, 66(2).
Scardamalia, M., & Bereiter, C. (1991). Higher levels of agency for children in knowledge building: A challenge for the design of new knowledge media. The Journal of the Learning Sciences, 1, 37-68.
Scardamalia, M., & Bereiter, C. (1992). A knowledge building architecture for computer supported learning. In E. De Corte, M. C. Linn, H. Mandl, & L. Verschaffel (Eds.), Computer-based learning environments and problem solving. Berlin: Springer-Verlag.
Scardamalia, M. & Bereiter, C. (1983). Child as co-investigator: Helping children gain insight into their own mental processes. In S. Paris, G. Olson, & H. Stevenson (Eds.), Learning and motivation in the classroom. Hillsdale, NJ: Lawrence Erlbaum Associates.
Scardamalia, M., Bereiter, C., McLean, R., Swallow, J., & Woodruff, E. (1989). Computer supported intentional learning environments. Journal of Educational Computing Research, 5(1), 51-68.
Schoenfeld, A. H. (1985). Mathematical problem solving. Orlando, FL: Academic Press.
Slotta, J. D., Chi, M. T. H., & Joram, E. (1995). Assessing the ontological nature of conceptual physics: A contrast of experts and novices. Cognition and Instruction, 13(3), 373-400.
Slotta, J. D., & Chi, M. T. H. (1996). Understanding constraint-based processes: A precursor to conceptual change in physics. Paper presented at the Eighteenth Annual Conference of the Cognitive Science Society, San Diego, CA, July 12-17.
Songer, N. B. (1996). Exploring learning opportunities in coordinated network-enhanced classrooms: A case of kids as global scientists. The Journal of the Learning Sciences, 5(4), 297-327.
Songer, N. B. (1993). Learning science with a child-focused resource: A case study of Kids as Global Scientists. In Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society (pp. 935-940). Hillsdale, NJ: Lawrence Erlbaum Associates.
Songer, N. B., & Linn, M. C. (1989). Everyday problem solving in thermodynamics.
Soloway, E. (1985). From problems to programs via plans: The content and structure of knowledge for introductory LISP programming. Journal of Educational Computing Research, 1(2), 157-172.
Vygotsky, L. S. (1987). The collected works of L. S. Vygotsky: Vol. 1. Problems of general psychology (R. W. Rieber & A. S. Carton, Eds.). New York: Plenum Press.
Wellesley College Center for Research on Women. (1992). How schools shortchange girls: Executive summary. American Association of University Women Educational Foundation.
White, B. Y., & Frederiksen, J. R. (1990). Causal model progressions as a foundation for intelligent learning environments. Artificial Intelligence, 42, 99-157.
Table 1. KIE Projects Emphasize Scaffolded Knowledge Integration

KIE projects do emphasize...                           | KIE projects do not emphasize...
depth of knowledge                                     | breadth of knowledge
learning a repertoire of models and when to apply them | learning isolated "right" answers
subject matter applicable to real life                 | esoteric subject matter
student responsible for learning                       | teacher responsible for learning
teacher as coach/facilitator                           | teacher as transmitter of knowledge
constructivist teaching methods                        | didactic teaching methods
student critique of evidence sources                   | unquestioning acceptance of authority
using the Web productively                             | surfing the Web randomly
reflection on progress                                 | blindly following steps
Table 2. Mean Credibility and Usefulness Ratings for Two Evidence Items (with standard deviations)

Evidence Item + Rating Category | Expert Rating | Overview    | Cognitive Guidance | Cognitive Guidance + Modeling
House #1 - Usefulness           | 3             | 2.50 (0.64) | 2.68 (0.47)        | 2.71 (0.46)
House #1 - Credibility          | 3             | 2.32 (0.67) | 2.51 (0.50)        | 2.61 (0.54)
House #2 - Usefulness           | 2             | 2.28 (0.66) | 2.46 (0.54)        | 2.57 (0.50)
House #2 - Credibility          | 2             | 2.61 (0.50) | 2.32 (0.58)        | 2.27 (0.54)
Table 3. Mean Values of Student Questions as Rated on Scales of Specificity, Relevance, and Productivity (with standard deviations)

Evidence Item + Question Type | Overview    | Cognitive Guidance | Cognitive Guidance + Modeling
House #1 - Specificity        | 2.43 (0.78) | 2.77 (0.42)        | 2.78 (0.41)
House #1 - Relevance          | 2.02 (0.80) | 2.43 (0.55)        | 2.39 (0.54)
House #1 - Productivity       | 1.82 (0.64) | 2.17 (0.55)        | 2.23 (0.58)
House #2 - Specificity        | 1.88 (0.72) | 2.29 (0.73)        | 2.58 (0.55)
House #2 - Relevance          | 1.50 (0.63) | 1.75 (0.61)        | 2.01 (0.80)
House #2 - Productivity       | 1.33 (0.47) | 1.36 (0.60)        | 1.56 (0.64)