By Dan Lang
At the Shaping Sustainable Futures for Internationalization in Higher Education Conference on June 24-25, quality assurance was a major topic of discussion in at least two sessions. The connection to internationalization was how to assure quality of students and quality for students: students being recruited and considered for admission, and students deciding which colleges or universities to apply to. Each has a different audience.
The audience for the first is principally universities and their admissions committees. The audience for the second is students themselves, a group served by conventional (but different) system-wide or institutional quality assurance or accreditation protocols. Some of these protocols are more useful and reliable than others, but there is at least general agreement and understanding about what they should look like and what they should do.
However, for the audience of colleges and universities deciding which international students to recruit and, especially, to admit to undergraduate study, the assurance of quality is complicated, elusive, and less certain than it is for domestic applicants. There are system-wide intermediary buffers like Navitas and individual university buffer programs, of which the Green Path Program at the University of Toronto Scarborough is an example. Most Canadian universities, either collectively or one by one, have long had credential equivalency protocols. The British GCE A Level system is widely used throughout the Commonwealth. Some of these programs assure language proficiency. Some assure curricular content and level. All, however, are a step removed from assuring the quality of individual cross-border applicants and of the schools from which they come.
This aspect of quality assurance has a special and urgent relevance to colleges and universities in Ontario. Since 1995, international students have been ineligible for inclusion in the province’s formula for allocating operating grants to colleges and universities. International enrolments generated only fee revenue in essentially a free-market environment. Starting in 2020, over a graduated schedule, the province will begin replacing a portion of formula funding with performance funding. International enrolment will still be excluded from the formula grant but – and this is the key point – will be included in performance funding.
Although the metrics for all the performance indicators have not yet been announced, the rate of graduation will certainly be one of them. To the extent that the quality of entering students is a factor in the rate of graduation, the quality of international students will begin to make a difference in the level of public funding received by colleges and universities.
Questions and difficulties like these did not begin with the relatively recent expansion of international enrolment across borders. As surprising and hard to believe as it may seem, American universities faced a similar problem more than a century ago. By the late 1800s, due mainly to the Morrill Act of 1862 and its later extension, the Agricultural College Act of 1890, many American states had more student capacity in universities than they had in high schools with curricula that prepared students for university.
One answer to the problem was the university “laboratory school,” examples of which could be found as early as 1873, mainly in mid-western states. The most well-known, which became a model for others, was established by William Harper, John Dewey, and Alice Chipman Dewey at the University of Chicago in 1896. The laboratory school assured quality in two ways. It defined a university preparatory curriculum and it set standards for teachers and pedagogy. The curricula, initially, were closely matched with the first-year curriculum at the respective host universities.
Another answer was the more surprising one: the founding of the College Entrance Examination Board in 1899. Initially the CEEB addressed quality assurance through the development of model curricula, as the laboratory schools did, but much more broadly and much more deeply. There was an important additional provision: all college and university members of the CEEB agreed to preferentially recognize and accept high school credits earned in schools that adhered to CEEB curricula. Recognition and acceptance were not contingent, as much transfer credit today still is.
In 1901, the CEEB introduced the first version of what came to be called the Scholastic Aptitude Test. Nearly 1,000 students participated in the initial administration of the test. Although the SAT has critics today, its purpose then (and now) was to improve access: it recognized the quality of students applying from schools about which college and university admissions committees knew little, and where committees did know a school whose quality was below CEEB standards, that fact was not held against students who demonstrated the requisite quality by the alternative means of the SAT. The quality of selection and, in turn, of entering students was more assured. In terms of associative value, that assurance was important to students as applicants concerned about the quality of their prospective classmates.
Today, the SAT is administered in 175 countries, and unconditionally recognized system-wide in five major participants in internationalization: the United States, Australia, Canada, Singapore, and the United Kingdom. Is this a better solution than developing a quality assurance protocol and lexicon that can cross many borders? Maybe “whereof what’s past is prologue.” Maybe it isn’t. But it is at least a different perspective from which to view the issue.