Does Age-dynamic Movement Accelerate Facial Age Impression? Perception Of Age From Facial Movement: Studies Of Japanese Women
Apr 17, 2023
Abstract
… a more youthful impression by approaching anti-aging from a different viewpoint: facial movement.

Introduction
An individual’s face is an information source communicating knowledge about individual characteristics such as gender and age, as well as physical and mental conditions, such as state of health and emotions [1,2]. In particular, facial youthfulness is emerging as a major motivating factor in the coming super-aging society [3]. Moreover, among Caucasians, age impression from the face has been proposed as a biomarker of aging [4,5], and looking older than one’s actual age is associated with increased mortality [4,6]. Thus, the importance of facial youthfulness goes beyond aesthetics and is relevant to health and well-being. Previous studies have examined age impression based on facial shape [7,8] and skin appearance [9–16]. While facial shape cannot be easily changed, skin appearance can be controlled with cosmetics; as a result, people (especially women) are very interested in their own skin condition. Previous studies have mainly addressed static skin conditions, investigating the effects of morphological and tonal characteristics such as wrinkles, sagging, spots, and uneven color [9–16]. However, the facial skin we interact with in real life is constantly moving during conversation, and we perceive various impressions from moving faces; indeed, we may feel both youthful and old impressions at fleeting moments in conversation. Therefore, to elucidate the overall impression formed in real life, it is necessary to consider the effects of skin movement.

Conversely, some reports have focused on facial motion information in person identification and in the recognition of facial expressions. Regarding person identification, correct response rates were higher for video images than for still images [17,18], and people can discriminate both between individuals and between males and females from motion information alone [19]. Regarding facial expression recognition, the speed at which expressions change contributes to the perception of emotions [20], and facial motion information increases correct response rates for expression recognition [21]. Thus, facial motion information is an important cue for face and facial expression recognition. However, there are few reports on the effects of facial and skin movement on age impression. Previous studies have focused on the motility of muscles, such as the facial and skeletal muscles. Nevertheless, what we see in others’ facial expressions is not the muscles but the soft skin tissue covering them. This soft tissue is known to lose viscoelasticity with age [22]; therefore, the motility of the skin surface during expression must change with age. One may wonder how a slight difference in skin movement could affect the overall impression of a face. Human face recognition is mediated by face-specific cortical areas of the brain, and this ability becomes proficient with experience [23,24], enabling humans to detect even slight differences, such as in the placement of facial parts (e.g., the distance between the eyes and mouth). Thus, it can be assumed that we also perceive age from slight differences in skin movement.
Based on the assumption that aging causes a decline in skin movement, we hypothesized that facial movement inherently creates a lively and youthful impression on others, but that age-related deterioration of skin movement dilutes this youthful effect.
Materials and methods
… dynamics and the observer’s gaze during age judgment. In addition, we briefly report the effect of observation angle on the models’ age impression, wherein a pronounced movement effect was identified.
Ethics statement
All demonstration tests related to this study were performed according to the protocol approved by the ethical committee of POLA Chemical Industries, Co., Ltd., in accordance with the Declaration of Helsinki (Approval No. 2015-G-005; January 16, 2015). Written informed consent to publish these case details was given (as outlined in the PLOS ONE consent form). The two participants whose facial images appear in the figures gave informed consent, including permission to use the images in an online open-access publication.
Participants
A total of 112 Japanese women aged 20–49 years participated as observers who evaluated age impressions. Regarding visual perceptual function, a decline in the ability to adapt to light and dark is reported to appear gradually from the 40s and become more pronounced after the 50s [25]. In addition, in a study of cognitive processing speed that measured reaction times to memorized cards in each age group, reaction times were longer in the 50s than in younger groups [26]. For these reasons, and following the exclusion criteria of the previous studies, women in their 50s or older were excluded from the observer group. Table 1 shows the age breakdown of the observers.
Stimuli
The facial models were 80 Japanese women aged 20–69 years, divided into five age groups in 10-year increments, with 16 facial models in each group. Table 1 shows the breakdown by age. All models were unfamiliar to the observers. To eliminate the influence of individual differences in makeup, the models participated in this experiment without makeup.
Each model performed facial movements from the neutral face (N) to a lowering-chin-with-open-mouth expression (Ia) and a vertically shrunk face expression (Ib); from the neutral face (N) to a horizontally opened mouth expression (IIa) and a horizontally closing mouth expression (IIb); and from the neutral face (N) to a puffing cheek expression (IIIa) and a shrinking cheek expression (IIIb), switching expressions every second in time with a metronome set at 60 beats per minute (bpm) (Fig 1). Each expression was taught to the models based on the action units defined in the Facial Action Coding System proposed by Ekman et al. [27]. Since the intensity of facial expression varied between models, effects on perceived age were compared within each movement condition.
As the visible area of facial skin changes with the observation angle, the effect of movement may also differ by angle. In this study, the face was filmed from four angles: (a) front, (b) 45˚ rotated right in yaw, (c) 33˚ rotated right in yaw and pitched downward, and (d) 33˚ rotated right in yaw and pitched upward (Fig 2A). These four angles were selected as facial observation angles encountered in daily life: in addition to the frontal direction, we used the horizontal direction and the oblique upward and downward directions, which combine the rotation axes, with a view toward application to other angles. Note that, in order to focus on the relationship between facial dynamics and age, observation angle was intentionally not treated as a factor in the main analysis.



Fig 1. Facial expression sequence used as stimuli. Photograph N shows a neutral facial expression; Ia, lowering chin with opening mouth expression; Ib, vertically shrunk facial expression; IIa, horizontally opening mouth expression; IIb, horizontally closing mouth expression; IIIa, puffing cheek expression; IIIb, shrinking cheek expression. The Roman numerals I, II, and III indicate vertically stretching and shrinking mouth expressions, horizontally stretching and shrinking mouth expressions, and puffing and shrinking cheek expressions, respectively. The facial expressions changed every second. The long arrow over the facial photographs shows the flow of time.
A video of each face model was recorded with cameras (GV90C, Library, Japan) installed at the four angles described above. Fig 2B shows the installation conditions of the cameras and lights.

Fig 2. (A) Viewpoint used as stimuli. Dynamically-changing facial expressions were filmed with a digital video camera from four different directions: (a) 0˚ rotation (front), (b) 45˚ rotated right in the yaw (right side), (c) 33˚ rotated right in the yaw and pitch above (right upward), and (d) 33˚ rotated right in the yaw and pitch below (right downward). (B) Equipment layout to film with a digital video camera. Dynamically changing facial expressions were filmed with a digital video camera from four different directions: (a) upper view of equipment layout, (b) front view of equipment layout.

Fig 3. Two types of stimulus conditions. Photographs show two stimulus conditions: Dynamically moving expression shown by a nine-second video clip at a speed of 30 frames per second (fps) containing nine facial expression processes as the “dynamic stimulus,” and static facial expressions shown by nine static images taken from video files at the highest degree of facial expression, with a speed of 1 fps as the “static stimulus”.
Two kinds of stimuli were produced as follows. The first was a nine-second movie in which the moving-image sequences N-Ia-Ib, N-IIa-IIb, and N-IIIa-IIIb were linked; this was defined as the “dynamic stimulus.” The second was a nine-second video linking nine still images captured at the moments when the intensity of each facial expression was highest; this was the “static stimulus” (Fig 3). The frames with maximum expression intensity were defined using the frontal video as follows: for vertical movements, the frame with the maximum or minimum distance between the trichion and gnathion; for horizontal movements, the frame with the maximum or minimum distance between the left and right cheilion; and for puffing and shrinking cheek movements, the frame in which the area occupied by the face was largest or smallest. The face size was set to match its apparent size at a typical conversational distance: the face width (horizontal distance between the zygomatic arches) was 12 cm (11.421 degrees of visual angle) for each face model on a 24-inch color liquid crystal display (ColorEdge CX2414, EIZO, Japan). The stimulus size was 700 pixels horizontally and 800 pixels vertically, observed at a viewing distance of 60 cm. Two points were considered in creating the visual stimuli so as to isolate the effect of skin movement on age impression from other facial characteristics. First, since facial expressions substantially bias age estimation [28], facial movements were expressed as simple, independent movements: vertically and horizontally stretching and shrinking mouth expressions, and puffing and shrinking cheek expressions. In addition, static facial expressions appeared as the “static stimulus,” consisting of nine static images (1 fps) taken from the video clips, each at the highest intensity of its facial expression.

The second point is that a decline in the motor function of the facial muscles, rather than in skin dynamics, might inhibit skin movement as a model creates facial expressions. It was therefore necessary to exclude the overall duration of movement as a factor. In our study, facial movement was temporally controlled during recording with the metronome, so the prescribed facial movements were expressed at the correct times. The models were trained to vary their facial expressions before the experiment, and one facial model whose expression intensity and timing deviated from the experimenter’s designation was excluded in advance.
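As a check on the display geometry described earlier, the reported visual angle follows directly from the face width and viewing distance. The helper below is illustrative and not part of the original protocol.

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Full visual angle (degrees) subtended by an object of size_cm
    centered on the line of sight, viewed from distance_cm."""
    return 2 * math.degrees(math.atan((size_cm / 2) / distance_cm))

# A 12 cm face width viewed at 60 cm, as in the study:
print(round(visual_angle_deg(12, 60), 3))  # → 11.421
```

This reproduces the 11.421 degrees of visual angle reported for the 12 cm face width at the 60 cm viewing distance.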
Specification of the facial location for age estimation by analyzing gaze data
To specify the facial locations where observers unconsciously estimate the age of others, the gazes of the 112 observers described above were recorded with an eye-tracking system (Tobii Pro X2-30; Tobii, Stockholm, Sweden). The sampling rate was 30 Hz, and fixations were determined with a velocity-based filter (Tobii I-VT filter [29]; velocity threshold 30 degrees/second, minimum fixation duration 67 ms). Six regions of interest (ROIs) were selected: the eyes, nose, mouth, the area surrounding the eyes (including the brows), cheeks, and forehead (Fig 4). The ratio of gaze-fixation time in each of the six ROIs was calculated during the estimation of age impression. The two stimulus types (dynamic and static) and the observation angles were pooled in this analysis.
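The I-VT filter classifies gaze samples as belonging to a fixation when angular velocity falls below a threshold, then discards runs shorter than the minimum duration. The sketch below is a simplified, one-dimensional version of that logic, not Tobii's implementation; it assumes gaze direction is given as a single angular coordinate sampled at 30 Hz.

```python
def classify_fixations(angles_deg, fs=30.0, vel_thresh=30.0, min_dur_s=0.067):
    """Simplified velocity-threshold (I-VT-style) fixation classifier.

    angles_deg: gaze direction per sample, in degrees, sampled at fs Hz.
    Returns (start_index, end_index_exclusive, duration_s) for each run
    whose inter-sample velocity stays below vel_thresh deg/s and whose
    duration is at least min_dur_s seconds.
    """
    dt = 1.0 / fs
    # An inter-sample interval counts as fixation if velocity is sub-threshold.
    is_fix = [abs(b - a) / dt < vel_thresh
              for a, b in zip(angles_deg, angles_deg[1:])]
    fixations, start = [], None
    for i, f in enumerate(is_fix + [False]):  # sentinel closes a trailing run
        if f and start is None:
            start = i
        elif not f and start is not None:
            dur = (i - start) * dt
            if dur >= min_dur_s:
                fixations.append((start, i, round(dur, 4)))
            start = None
    return fixations
```

For example, a trace that dwells near 0°, saccades to 10°, and dwells again yields two fixations, with the fast transition samples discarded.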
Age perception test
First, the observers were told the age group to which a model belonged, and then observed the dynamic or static stimulus for 9 s. In a two-alternative forced-choice task, they answered whether the model belonged to the former or latter half of that age group by pressing the appropriate key (Fig 5). Each observer watched the faces of all 80 models as they appeared on a display, once per model, and evaluated the age impressions. The video shown for each model was randomly selected from eight stimulus versions: four dynamic and four static, filmed from the four directions. Presentation order was random, and the number of presented stimuli was counterbalanced across facial models and viewing angles.
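One way to counterbalance the eight stimulus versions (four angles × two stimulus types) across the 80 models, so that each version is presented equally often, is sketched below. The assignment scheme and names are illustrative, not taken from the paper.

```python
import random

def counterbalance(models, conditions, seed=1):
    """Randomly assign one condition to each model such that every
    condition is used an equal number of times (requires divisibility)."""
    if len(models) % len(conditions) != 0:
        raise ValueError("models must divide evenly across conditions")
    pool = list(conditions) * (len(models) // len(conditions))
    rng = random.Random(seed)
    rng.shuffle(pool)  # random order, but exactly balanced counts
    return dict(zip(models, pool))

# 80 models across 8 versions: each version assigned to exactly 10 models.
versions = [(angle, kind) for angle in "abcd" for kind in ("dynamic", "static")]
assignment = counterbalance([f"model_{i:02d}" for i in range(80)], versions)
```

A fixed seed makes the assignment reproducible across sessions while keeping the per-observer presentation order random.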

Fig 5. An example of the procedure of the age-cognition experiment. After viewing the introductory part of the movie, observers fixed their gaze on a center point on a liquid-crystal display (LCD) for 0.2 s, observed the dynamic or static stimulus for 9 s, and answered whether the model belonged to the former or latter half of the age group by pressing a button.
To evaluate the effects of the models’ facial movements on their age impressions, the percentage of trials in which a model was judged to belong to the latter half of the age group was calculated; a two-way analysis of variance (ANOVA) with stimulus type (dynamic vs. static) and age group (five levels) as factors was conducted, followed by multiple comparison tests with Bonferroni correction. We also examined factors associated with observation angle in models whose facial movements significantly influenced age impression. All statistical analyses were performed using SPSS, version 24.0 (IBM, Armonk, NY, USA).
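Before such an ANOVA, the per-trial judgments must be aggregated into the dependent variable, the percentage of "latter half" responses per design cell. A minimal sketch is given below; the trial-record layout is an assumption for illustration, not the authors' actual pipeline.

```python
from collections import defaultdict

def latter_half_rates(trials):
    """Aggregate per-trial judgments into the percentage of 'latter half'
    responses for each (stimulus_type, age_group) cell.

    trials: iterable of (stimulus_type, age_group, judged_latter) tuples,
    where judged_latter is True when the observer chose the latter half.
    """
    counts = defaultdict(lambda: [0, 0])  # cell -> [latter_count, total]
    for stim, group, judged_latter in trials:
        cell = counts[(stim, group)]
        cell[0] += int(judged_latter)
        cell[1] += 1
    return {cell: 100.0 * latter / total
            for cell, (latter, total) in counts.items()}

# Hypothetical trials for one age group:
rates = latter_half_rates([
    ("dynamic", "40s", False), ("dynamic", "40s", True),
    ("static", "40s", True), ("static", "40s", True),
])
```

The resulting cell percentages would then feed the 2 × 5 (stimulus type × age group) ANOVA described above.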