Peter Tarasewich

Dr. Tarasewich is part of a research team from Northeastern University and the University of Massachusetts Boston that is modifying and testing the Enhanced Restricted Focus Viewer, a software tool that tracks a user's attention on a computer screen and is used in human-computer interaction research. Recent testing has shown that this tool compares favorably to traditional eye-tracking equipment, and the results of this work have been accepted for publication in a major HCI journal.
He is also working with Carole Hafner (information science) and Adam Reeves (psychology) at Northeastern University on the development and experimental validation of theoretically motivated principles and guidelines for mobile interface design, and on the creation of software tools to help designers apply those principles and guidelines in practice. Preliminary research has produced a split-attention model of human information processing and a comprehensive model of interaction context.
Current Projects
Usability testing is a crucial part of information system design, but the methods currently available for such testing have limitations. Laboratory usability testing can be very effective, yet it often investigates only basic metrics such as task times, error rates, and subjective satisfaction. One solution is the use of eye-tracking equipment to record users' visual attention. As an alternative, we have created a new software tool called the Enhanced Restricted Focus Viewer (ERFV) that can be used for usability testing of graphical interfaces that contain hyperlinks (e.g., Web sites). The primary benefit of the ERFV over many other usability testing methods is its ability to track the path of the user's visual attention; such data can reveal interface design and usability problems that would not be found by looking only at task times and error rates.
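The ERFV produces a timestamped trace of where the user's focus travels across the interface (in the original Restricted Focus Viewer, the display is blurred except for a small focus window that follows the mouse, so the window's path approximates visual attention). As a minimal sketch of the kind of analysis such a trace enables, the following Python computes per-region dwell times and transition counts; the names and data layout here are illustrative assumptions, not the ERFV's actual output format.

    from dataclasses import dataclass

    @dataclass
    class Region:
        """A named rectangular region of interest on the screen."""
        name: str
        x0: int
        y0: int
        x1: int
        y1: int

        def contains(self, x: int, y: int) -> bool:
            return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

    def summarize_trace(samples, regions):
        """Compute per-region dwell times and region-to-region transition
        counts from a focus trace of (time_seconds, x, y) samples."""
        dwell = {r.name: 0.0 for r in regions}
        transitions = {}
        prev = None
        for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
            here = next((r.name for r in regions if r.contains(x, y)), None)
            if here is not None:
                dwell[here] += t1 - t0   # time spent here until the next sample
                if prev is not None and prev != here:
                    transitions[(prev, here)] = transitions.get((prev, here), 0) + 1
                prev = here
        return dwell, transitions

    # Example: focus moves from a navigation area to the content area.
    regions = [Region("nav", 0, 0, 100, 100), Region("content", 150, 0, 500, 300)]
    trace = [(0.0, 10, 10), (0.5, 20, 15), (1.0, 200, 40), (1.5, 210, 50)]
    print(summarize_trace(trace, regions))
    # -> ({'nav': 1.0, 'content': 0.5}, {('nav', 'content'): 1})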
Mobile devices play an increasingly large role in supporting the interactions of our society. While pictures and sounds are routinely sent and received with these devices, they still process a great deal of information in the form of text, so text entry will remain a necessary part of human-computer interaction (HCI) with mobile devices. Designing effective input methods is a challenging part of HCI, and small mobile devices make that challenge even greater. This project aims to improve the state of the art in text entry methods for handheld and wearable mobile devices.
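As a concrete illustration of why mobile text entry is hard, the Python sketch below implements multi-tap, the classic 12-key keypad method, along with keystrokes per character (KSPC), a metric widely used to compare text entry methods. The code is illustrative only and is not part of this research.

    # Standard letter assignments on a 12-key phone keypad.
    KEYPAD = {
        "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
    }

    def decode_multitap(press_groups):
        """Decode multi-tap input: each group is one key pressed repeatedly,
        cycling through that key's letters ('2' -> 'a', '22' -> 'b', '222' -> 'c')."""
        out = []
        for group in press_groups:
            letters = KEYPAD[group[0]]
            out.append(letters[(len(group) - 1) % len(letters)])
        return "".join(out)

    def kspc(press_groups):
        """Keystrokes per character: total taps divided by characters produced."""
        return sum(len(g) for g in press_groups) / len(press_groups)

    groups = ["44", "33", "555", "555", "666"]   # taps for 'hello'
    print(decode_multitap(groups))               # hello
    print(kspc(groups))                          # 2.6 keystrokes per character

A KSPC well above 1.0, as in this example, is one reason multi-tap is slow and why improved entry methods matter on small devices.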
This research investigates the design and use of pixel-based displays, which convey meta-information to a particular user through one or more individual lights. We have completed several studies of pixel-based displays, examining design tradeoffs, information bandwidth and learning, and personalization, and testing the displays with a mobile device in a realistic setting. Our ultimate research goal is to create mobile technology that increases productivity by improving the coordination, communication, and activities of mobile response teams (MRTs).
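As a minimal sketch of how a single light might carry meta-information, one could map a notification's category to color and its urgency to blink rate. The mappings below are assumptions chosen for illustration, not the encoding schemes tested in our studies.

    # Hypothetical encoding for a one-light pixel-based display:
    # category -> color, urgency -> blink period in seconds.
    COLOR_BY_CATEGORY = {"message": "green", "meeting": "blue", "alert": "red"}
    PERIOD_BY_URGENCY = {"low": 2.0, "normal": 1.0, "high": 0.25}  # faster = more urgent

    def encode(category, urgency):
        """Return the (color, blink_period) signal for one notification."""
        return COLOR_BY_CATEGORY[category], PERIOD_BY_URGENCY[urgency]

    print(encode("alert", "high"))   # ('red', 0.25)

A scheme like this trades information bandwidth (how many category/urgency combinations a user can distinguish) against learnability, which is one of the design tradeoffs our studies examine.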