Thanks for checking out this fourth and final segment of our blog series on Myths About Remote Proctoring. To conclude the series, we're going to evaluate three questions that can lead to myths.

  1. Can proctoring technology itself fail a student?
  2. Is testing necessary?
  3. Does proctoring make a difference?

If you missed any of the other blogs in this series, here's a quick recap. First, we considered myths related to privacy, whether the privacy of one's physical surroundings or of one's computing equipment. Next were myths about security, which can relate to data in transmission or storage as well as the treatment of biometric data. Third were myths related to accessibility; learners may need accommodations for physical, cognitive, and/or emotional disabilities. Finally, we considered myths related to bias, which can include socio-economic, religious, gender, and/or racial bias.

Now that you're all caught up, let's dive into today's topic to explore if proctoring technology can fail a student, if testing is necessary, and if proctoring makes a difference. Take a look at the SmarterReflections video to learn more.

Don't want to watch the video? No problem! Keep scrolling to read the video blog recap below.


Video Summary

Myth: Proctoring Technology Can Fail a Student

Given all of the concerns described with remote, virtual proctoring, it is understandable that a final concern some students have is that the proctoring technology itself will fail them. The fear is that the technology will incorrectly flag some behavior as an incident of academic dishonesty and then automatically lock them out of the exam, thereby failing them. The automated virtual proctoring modality provided by SmarterProctoring guards against this in three ways.

First, faculty are allowed to determine which actions they consider inappropriate. When faculty give an exam in a physical classroom, some are more vigilant than others: while one faculty member may regularly walk around the room observing students, another may sit and read a book while students are testing. At SmarterServices, we recognize and value the academic freedom that should be extended to faculty and have built our automated modality to provide it. When faculty members configure an automated exam, they can toggle options on or off, such as verifying ID, recording the webcam, screen, or audio, performing a room scan, or allowing only one monitor. Faculty are in control of the level of monitoring.

Second, SmarterProctoring does not compute any sort of numerical score of academic integrity that faculty members could construe as an evaluation or grade. Instead, we provide the faculty member with a labeled timeline of the testing anomalies observed so that they can review each event themselves. It is then the faculty member, not the technology, who determines whether an incident of academic misconduct occurred.

For example, our technology will indicate when a second face appears. If this face is of a child who unexpectedly entered the room, the faculty member would likely not be concerned about this. But, if the face was of another student, this could be a matter of concern. Just as in the classroom, the faculty member is in control over the environment and decides if behaviors are of concern.

Finally, unlike some other services, SmarterProctoring does not stop the testing session if an authentication attempt fails or an anomaly is detected. The fact that some other proctoring tools do stop the exam is one of the main concerns students have. Students fear, for example, that if they are taking the exam near the deadline and the technology mistakenly locks them out, they may fail the exam due to non-submission. Not only does SmarterProctoring not stop the exam, but in an effort not to distract students while they are testing, it notifies only the faculty member, not the student, when a testing anomaly has been detected.


Myth: Testing is Not Necessary

As the entire world experienced the life-changing phenomenon of the pandemic, many questions were raised and many processes improved across several aspects of our lives. At all levels of education there was a rapid shift from classroom instruction to emergency remote instruction. As faculty and administrators grappled with how to make this shift, the issue of assessing learner mastery was a topic of much discussion. One approach taken by some organizations and faculty was to utilize some form of authentic assessment in place of traditional testing.

As defined by Edutopia, authentic assessment takes many forms and for the measurement of mastery of some skills, it is a very appropriate method. Authentic assessment can be done using essays, interviews, demonstrations, portfolios, journals, and more. Many of these methods can be done online using document sharing, video conferencing, etc.

But along with all of the promise that authentic assessment holds, there are concerns from faculty and students. Many faculty members are reluctant to use authentic assessment extensively because of the added administrative burden of grading such artifacts; the time required to review, grade, and provide feedback on such projects can be quite labor- and time-intensive. Students, meanwhile, have expressed concern that the process seems too subjective even when a grading rubric is used. They fear that a faculty member’s own bias toward the topic and/or demographic factors could influence their grade.

Finally, while authentic assessment is appropriate for the demonstration of many skills, there are some subjects for which assessment of knowledge does need to involve a more traditional test. This may especially be true in STEM courses in which students need to demonstrate mastery through a math or science test.

Myth: Proctoring Does Not Make a Difference

Finally, critics of remote virtual proctoring often contend that proctoring itself just does not make a difference. They maintain that the costs and effort associated with proctoring are not justified, and they are content to rely on honor codes.

A recent study titled Cheating in Online Courses: Evidence from Online Proctoring, conducted at Radford University in Virginia, concluded that virtual proctoring is an effective deterrent to cheating in online courses. After controlling for multiple variables (aptitude, gender, ethnicity, etc.), grades were substantially higher in online courses that did not require virtual proctoring, suggesting that unproctored exam scores were inflated by cheating.

The research compared test outcomes in identical, online, asynchronous courses: one without proctoring and one with remote, recorded proctoring of the exams. The authors of the paper concluded:

The main implication of these results is that academic dishonesty is indeed a serious issue in online courses. Despite a series of mitigation measures that were adopted without direct proctoring – such as the use of a special browser, a restricted testing period, randomized questions and choices, and a strict timer – it appears that cheating was relatively commonplace. Cheating apparently also paid off handsomely, at least when it comes to exam performance, often raising scores by about a letter grade. A related implication is that some form of direct proctoring is perhaps the most effective way of mitigating cheating during high-stakes online assessments. The fact that a technological solution such as the one examined in this study (online proctoring through a webcam recording software) does an effective job in mitigating academic dishonesty is thus reassuring for all stakeholders. The results in this paper do not suggest that the solution is perfect – for that matter, there is no evidence that in-person monitoring is either – but they are significant enough to indicate its efficacy. Coupled with the relatively low-cost, user friendly nature of this type of technology, the results should broadly encourage its adoption by concerned faculty and institutions. From these results one can also infer that online proctoring of assessments is a viable strategy to mitigate cheating in online courses.

Another article, recently published in the International Journal for Educational Integrity, illustrates the fact that proctoring matters. It analyzed the skyrocketing usage of the homework help site Chegg. The website, which does have an honor code that prohibits cheating, allows students to post a question, potentially from an exam, and receive an answer, typically in less than thirty minutes. The authors found that the number of questions posted on the site in five science, technology, engineering, and mathematics disciplines increased by 196.25 percent from April to August 2020 compared with the same period in 2019. The authors concluded, “Given the number of exam style questions, it appears highly likely that students are using this site as an easy way to breach academic integrity by obtaining outside help.”

When an exam is proctored using automated virtual proctoring, the student’s physical and computing environments are controlled, prohibiting access to such resources.

SmarterProctoring Can Alleviate These Concerns

We hope that this blog series has been thought-provoking and useful. When students and faculty have concerns about an issue, we must give those concerns consideration. At SmarterServices we are constantly thinking about these matters, and SmarterProctoring has been designed to reduce the concerns that lead to myths related to privacy, security, accessibility, and bias.


If you missed any of the blogs in this series, click the links below to check them out:

For more great topics, click here to subscribe to our blog.