We took a look at some of Facebook’s emotion-based patents to understand how the company is thinking about capturing and responding to people’s emotional reactions, an area that has been tricky for consumer tech companies but is key to their future. On the one hand, companies want to identify which content is most engaging and respond to audiences’ reactions; on the other, emotion detection is technically difficult, not to mention a PR and ethical minefield.
Links in this post are accessible to CB Insights clients, and clients can also view additional related patents.
Patent: Augmenting text messages with emotion information
Date filed: November 24, 2015
Date granted: May 25, 2017
This patent would automatically add emotional information to text messages, predicting the user’s emotion from how the message is typed. The visual format of the text message would adapt in real time based on the user’s predicted emotion. As the patent notes (and as many people have likely experienced), it can be hard to convey mood and intended meaning in a text-only message; this system would aim to reduce misunderstandings.
The system could pick up data from the keyboard, mouse, touch pad, touch screen, or other input devices, and the patent mentions predicting emotion based on relative typing speed, how hard the keys are pressed, movement (using the phone’s accelerometer), location, and other factors.
To integrate emotional data into the messages, Facebook could change the text font, size, spacing, or use other formatting tools.
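The patent does not publish an algorithm, but the flow it describes can be sketched roughly: map typing signals to a coarse emotion label, then pick formatting to match. Everything below (the feature names, thresholds, emotion labels, and style table) is an illustrative assumption, not taken from the patent itself.

```python
# Hypothetical sketch of the patent's idea: infer a sender's emotion from
# typing signals and adjust message formatting accordingly. Thresholds and
# labels are invented for illustration.

def predict_emotion(typing_speed_ratio: float, key_pressure: float) -> str:
    """Map simple input signals to a coarse emotion label.

    typing_speed_ratio: current speed relative to the user's baseline (1.0 = normal)
    key_pressure: normalized key/touch pressure in [0, 1]
    """
    if typing_speed_ratio > 1.5 and key_pressure > 0.7:
        return "angry"
    if typing_speed_ratio > 1.3:
        return "excited"
    if typing_speed_ratio < 0.6:
        return "hesitant"
    return "neutral"

# Formatting levers the patent mentions: font, size, spacing.
STYLE_BY_EMOTION = {
    "angry":    {"font": "bold",    "size": 16, "letter_spacing": 0.0},
    "excited":  {"font": "italic",  "size": 14, "letter_spacing": 0.0},
    "hesitant": {"font": "light",   "size": 12, "letter_spacing": 1.5},
    "neutral":  {"font": "regular", "size": 12, "letter_spacing": 0.0},
}

def style_message(text: str, typing_speed_ratio: float, key_pressure: float) -> dict:
    """Attach emotion-driven formatting metadata to an outgoing message."""
    emotion = predict_emotion(typing_speed_ratio, key_pressure)
    return {"text": text, "emotion": emotion, **STYLE_BY_EMOTION[emotion]}
```

In this sketch, a terse "fine." typed fast and hard would render in bold at a larger size, conveying the mood the words alone hide.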
Patent: Techniques for emotion detection and content delivery
Date filed (application): February 25, 2014
Date published (application): August 27, 2015
This patent proposes capturing images of the user through smartphone or laptop cameras, even when the user is not actively using the camera. By visually tracking a user’s facial expression, Facebook aims to monitor the user’s emotional reactions to different types of content.
To monitor the user, Facebook proposes using “passive imaging data,” or visual data captured automatically through a laptop or phone’s front-facing camera. Users typically face this camera during normal phone or laptop use without thinking about it, and Facebook hopes to start leveraging this imaging data.
Once the system captures the images, an API component would identify the user’s emotion and store the data. Then, Facebook could a) determine which emotions a piece of content elicits, which could be useful for Facebook as well as the content producers, and b) deliver content to the user based on the displayed emotion, which could help Facebook keep users more engaged.
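The two uses described above can be sketched as a small pipeline: frames are scored for emotion, results are logged per piece of content, and the next item can be selected based on the viewer’s displayed emotion. The emotion detector here is a deterministic dummy, and all names are assumptions rather than Facebook’s actual API.

```python
# Illustrative pipeline for the application's two uses:
# (a) tally which emotions a piece of content elicits, and
# (b) deliver content keyed to the viewer's current emotion.
from collections import defaultdict

EMOTIONS = ["happy", "sad", "bored", "surprised"]

def detect_emotion(frame: bytes) -> str:
    """Stand-in for a facial-expression classifier (deterministic dummy)."""
    return EMOTIONS[len(frame) % len(EMOTIONS)]

class EmotionLog:
    def __init__(self):
        # content_id -> list of emotions users displayed while viewing it
        self.reactions = defaultdict(list)

    def record(self, content_id: str, frame: bytes) -> str:
        """Score a passively captured frame and store the result."""
        emotion = detect_emotion(frame)
        self.reactions[content_id].append(emotion)
        return emotion

    def dominant_emotion(self, content_id: str) -> str:
        """Use (a): report which emotion this content elicits most often."""
        emotions = self.reactions[content_id]
        return max(set(emotions), key=emotions.count)

def pick_next_content(current_emotion: str, catalog: dict) -> str:
    """Use (b): choose the next item based on the viewer's displayed emotion."""
    return catalog.get(current_emotion, catalog["default"])
```

The interesting design point is that one detection step feeds two different consumers: an aggregate log that benefits content producers, and a real-time delivery decision that benefits engagement.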
Patent: Systems and methods for dynamically generating emojis based on image analysis of facial features
Date filed (application): November 16, 2015
Date published (application): May 18, 2017
This patent describes a more streamlined process for sending messages with emojis. The system would capture real-time image data of a face (such as through a selfie), analyze the facial features to determine the user’s emotion, and map it to the best-fitting emoji. For example, it could serve up a smiling emoji in response to a photo of someone smiling. The user can then add the emoji to a post or message.
The patent mentions several additional features, such as the ability to modify the emoji based on more detailed analysis of the user’s face, and the ability to capture gestures made by the user and add those to the emoji (such as the thumbs up in the image above).
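The mapping step described above amounts to a lookup from a detected emotion (plus an optional gesture) into a pre-set emoji list. The detection itself is out of scope here; the labels and emoji table below are illustrative assumptions, not the patent’s actual mapping.

```python
# Rough sketch of the emoji-mapping step: resolve a detected emotion, and
# optionally a detected gesture, to emoji text the user can attach to a message.

EMOJI_BY_EMOTION = {
    "happy": "\U0001F600",      # grinning face
    "sad": "\U0001F622",        # crying face
    "surprised": "\U0001F62E",  # face with open mouth
    "angry": "\U0001F620",      # angry face
}

GESTURE_MODIFIERS = {
    "thumbs_up": "\U0001F44D",  # thumbs up appended to the base emoji
}

def emoji_for_face(emotion, gesture=None):
    """Map a detected emotion (plus an optional gesture) to emoji text."""
    base = EMOJI_BY_EMOTION.get(emotion, "\U0001F642")  # fall back to a neutral face
    if gesture in GESTURE_MODIFIERS:
        base += GESTURE_MODIFIERS[gesture]
    return base
```

A detected smile with a raised thumb would thus yield a grinning face followed by a thumbs up, ready to drop into the message.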
By reducing users’ facial expressions to emojis from a pre-set list, Facebook could potentially analyze users’ emotions more easily. Facebook could gain clearer insight into feelings and reactions, while also adding a new interactive feature.