

HRHeadStart #92: Work & Algorithms; Connecting with Co-Workers
The Talent Agenda
Companies around the world are grappling with whether and how they should invest in AI technologies. A second, equally pertinent question follows: when a company invests in AI to augment human performance, will employees actually use these technologies and trust the algorithms to make decisions? Common sense would suggest that employees trust algorithms more when they understand how they work. However, new research is showing the opposite to be true.
The study by the Laboratory for Innovation Science at Harvard shows that there is a need to “social-proof the algorithms”, i.e. ensure that employees know that people similar to them - with comparable knowledge and experience of their work - designed and tested the performance of the algorithms.
To test how employees react to AI systems, the researchers worked last year with the luxury fashion retailer Tapestry Inc., whose accessory and lifestyle brands include Coach, Kate Spade, and Stuart Weitzman. The firm employs 18,000 people worldwide and has about $6.7 billion in annual sales.
Like all retailers, Tapestry tries to put the right number of products in the right stores at the right time, so it sells as much as possible and doesn’t lose track of stock.
As part of the study, Tapestry managers who oversee shelf stocking provided employees called “allocators” with two sets of recommendations to help them choose which goods to display. One set was from an algorithm that allocators could interpret, and the other was from a “black box” algorithm they couldn’t.
Researchers then tested consumer reactions to allocators’ decisions for 425 product SKUs - the numbers used to trace each item - at 186 stores. The products were grouped into 241 “style-colors” and sizes.
When the allocators received a recommendation from an interpretable algorithm, they often overruled it based on their own intuition. But when the same allocators had a recommendation from a similarly accurate “black box” machine learning algorithm, they were more likely to accept it even though they couldn’t tell what was behind it. Their resulting stocking decisions were 26 percent closer to the recommendation than the average choice.
Why? Because they trusted their own peers who had worked with the programmers to develop the algorithm.
This has implications for how organizations drive adoption of AI technologies and achieve the desired traction and impact from them. A further implication is that internal and external organizational boundaries will need to blur, creating stronger collaboration between internal talent (with strong business know-how) and technology partners.
Working Better
Connecting with the people we work with is a good thing. It helps us build relationships, open up communication flows, accelerate progress and strengthen teamwork. Whether you’ve been in your current role for a while, joined a new organization or team, or are managing a remote team, check out these 3 simple tips to strengthen ties with your co-workers. (4 minutes)
Tiny Thought
In every endeavour in life, we have to choose between energy and self-doubt.