The Holmes Murphy Blog

  • How Private Are Your Thoughts?

    The new buzz around artificial intelligence (AI) has stirred up a wide range of emotions. How will this impact employment? What types of advances will be made in medicine and science? How will our privacy be protected? How will the laws keep up with this rapidly evolving technology?

    It may be hard to believe, but advancements in artificial intelligence do have implications for the insurance industry (think coverage, rates, loss control, new exposures, etc.).

    Take the auto industry, for example. It has been very active in testing AI within vehicles. Affectiva, an MIT Media Lab startup, has launched AI software that uses facial and voice tracking to gauge drivers’ emotions, energy, and distraction levels. Its objective is to prevent accidents by triggering alerts and other safeguards when a driver’s cognitive state raises red flags. For example, the technology could detect how often the driver yawns or blinks and then suggest a rest stop or play the driver’s preferred soothing playlist. It could also identify the gender of the driver and analyze facial expressions to tell when they’re angry or surprised. Affectiva is already working with BMW, Porsche, and other automakers on this fascinating technology. With this in mind, will insurance carriers use this aggregated information to help determine rates for drivers, or void coverage for a claim if the driver is proven to have contributed to an accident due to their mental state?

    The U.S. isn’t the only country interested in investing in AI. China has already been actively deploying the technology within its companies to measure worker productivity. For example, train drivers on the Beijing-Shanghai high-speed rail are required to wear EEG devices that monitor their brain activity while working. Headwear is also being distributed in China’s state-owned companies to monitor workers’ cognitive activity. This practice has been dubbed “emotional surveillance,” and while information on these devices is still limited, the intent is to measure workers’ concentration, agitation, and productivity, and even send them home if needed.

    Similar initiatives are already underway in the U.S., where technology software companies are partnering with insurance carriers to collect data on employee activity in pursuit of a safer workforce. Through wearable technology, these platforms aim to improve worker health, safety, and productivity, as well as notify employers when conditions become unfavorable. These are only the first steps in what could become a revolutionized world for risk management.

    This use of AI could be a huge exposure under employment practices liability, and the issue is heightened by the fact that no laws exist to protect our thoughts and emotions. Do employers have the right to reassign or terminate employees because of their cognitive state? Will employers be required to provide accommodations under the ADA if an employee alleges an emotional disability? Will employees face emotional and cognitive discrimination during the hiring process? These are all potential concerns for organizations, as well as insurance carriers, as this technology further develops.

    Even though AI is still in its infancy, with many years of testing and research to come, it’s a topic that needs to be addressed head-on, particularly the impact it could have on societal values, freedoms, and rights. As the technology develops and we become even more advanced in the tech space, we need to be prepared with solutions and ways to tackle the uncertainty it brings.

    Published on: 01.17.19
