Due to the worldwide pandemic, Zoom and other video conferencing platforms became the dominant medium for professional and personal communication. Because of this shift, high-stakes meetings such as job interviews now take place on these platforms. This is consequential because interviewees must rely on different cues to gauge their performance in real time during these conversations, which can be especially difficult for non-native English speakers.
Our solution, the Zoom Interview Assistant, is a service that allows users to improve their interview performance by choosing either Practice Mode or Interview Mode.
In Practice Mode, the user selects the questions that the system will "ask," with control over the content, length, and order of each question, including the ability to randomize them.
A timer in the top right corner of the screen shows the user the length of their answer to each question. Simultaneously, a live transcription is displayed at the bottom of the screen, with the resulting analysis on the right side. The user therefore sees, in real time, both the time taken to answer each question and the quality of the content in each answer.
At the conclusion of the practice interview, the Insights Summary screen appears, which displays an overview of the user’s performance while answering these questions. This page includes four main categories (Eye Contact, Filler Words, Grammar Errors, and Sentence Improvements) and allows the user to click on each category for more detailed information.
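The Filler Words category, for instance, could be computed directly from the session transcript with a simple token scan. A minimal sketch of one such approach (the word list and function name here are illustrative assumptions, not the product's actual implementation):

```python
import re
from collections import Counter

# Hypothetical filler list; a production system would tune this per speaker and language.
FILLER_WORDS = {"um", "uh", "like", "you know", "basically", "actually"}

def count_fillers(transcript: str) -> Counter:
    """Count occurrences of each filler word or phrase in a transcript."""
    text = transcript.lower()
    counts = Counter()
    for filler in FILLER_WORDS:
        # \b word boundaries avoid matching inside other words (e.g. "umbrella").
        counts[filler] = len(re.findall(r"\b" + re.escape(filler) + r"\b", text))
    # Keep only fillers that actually occurred.
    return Counter({word: n for word, n in counts.items() if n > 0})

# count_fillers("Um, so I basically led the team, um, you know, daily.")
# yields counts for "um", "basically", and "you know".
```

A real-time version would run the same scan incrementally on each new transcript segment, updating the on-screen tally as the user speaks.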
In Interview Mode, the transcription functionality tracks the interviewee's responses during a live interview and displays real-time findings and suggestions; the same Insights Summary screen then appears at the conclusion of the conversation.
Users can track their interview performance over time with the Insights Summary aggregation page, which lets them revisit past practice sessions and interviews. Altogether, the Zoom Interview Assistant empowers users to iteratively improve their virtual interview performance and increase the likelihood of a positive outcome, such as receiving a job offer.
The Zoom Interview Assistant needs to expand its capability to understand accents, since non-native English speakers are likely a large market for such a product. Further, because the product centers on improving interview performance through feedback, that feedback must be iteratively optimized for actionability. In other words, how should feedback be designed and presented so that users can actually act on it? Longitudinal user testing is needed to investigate this question.