Amazon supports store innovations with ML algorithms

Amazon is evolving its in-store shopping and payment experience with cutting-edge machine learning (ML) algorithms.

At Amazon’s recent global artificial intelligence (AI) event, re:MARS 2022, Dilip Kumar, VP, Physical Retail and Technology, Amazon, discussed how the omnichannel giant is using computer vision and ML algorithms to continuously deliver easier and faster in-store shopping experiences for customers.

Below are highlights from Kumar’s presentation on how Amazon applies algorithms to its frictionless shopping technology Just Walk Out, Palm payment solution Amazon One, physical clothing store Amazon Style and smart shopping cart Amazon Dash Cart.

Just Walk Out
In the case of Just Walk Out technology, which allows shoppers to skip the checkout line at many Amazon stores, select Whole Foods Market stores and several third-party retail stores, Amazon uses sensors, optics and image processing algorithms. In the process, the company has reduced the number of cameras required in stores using Just Walk Out technology, and the cameras themselves have become less expensive, smaller and capable of running deep networks locally.

Amazon’s Just Walk Out sensors and algorithms are designed to detect a wide range of products and differences in shopping behavior in large grocery stores. The company has also broadened the variety of store environments its algorithms can handle as Just Walk Out technology rolls out to third-party retailers.
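
Amazon has not published the internals of Just Walk Out, but the general idea of combining per-camera detections into a single shelf event can be sketched in a few lines. The snippet below is a hypothetical, simplified illustration, not Amazon’s method: each camera is assumed to run its own local detector, and overlapping detections are merged by confidence-weighted voting. The names (Detection, fuse_detections, the SKU strings) are made up for this example.

```python
# Illustrative sketch only: Amazon has not disclosed Just Walk Out's internals.
# Fuses hypothetical per-camera item detections into a single shelf event by
# confidence-weighted voting, assuming each camera runs a local detector.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    item_id: str       # e.g. a product SKU
    confidence: float  # detector score in [0, 1]

def fuse_detections(detections: list[Detection], threshold: float = 1.0) -> list[str]:
    """Sum confidences per item across cameras and keep items above a vote threshold."""
    votes: dict[str, float] = defaultdict(float)
    for det in detections:
        votes[det.item_id] += det.confidence
    return [item for item, score in votes.items() if score >= threshold]

if __name__ == "__main__":
    frame = [
        Detection("cam_aisle_3_left", "sku-granola-bar", 0.71),
        Detection("cam_aisle_3_top", "sku-granola-bar", 0.64),
        Detection("cam_aisle_3_top", "sku-trail-mix", 0.40),
    ]
    print(fuse_detections(frame))  # ['sku-granola-bar']
```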

[Read more: Amazon selling its cashierless store platform to other retailers]

Amazon One
Initially introduced in two Seattle-area Amazon Go stores in September 2020, Amazon One is designed to let customers use their unique palm signature to check out at a store or present a loyalty card. When developing Amazon One, the retailer needed data to train and test its AI algorithms across demographics, age groups, temperatures and palm-specific variations such as calluses and wrinkles, so the service could correctly determine whose palm was hovering over the device.

When Amazon began building Amazon One, it recognized the limited availability of public datasets of palm and vein images for training the algorithms. So Amazon advanced existing techniques to generate large volumes of varied, realistic synthetic palm and vein images to train the AI models and prepare the service for a wide range of users.
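
Amazon has not described its synthetic-imagery pipeline in detail. The sketch below only illustrates the general idea of expanding a small set of base palm images into many varied training samples through random geometric and photometric changes; the functions, parameters and noise levels are all assumptions made for this example.

```python
# Illustrative sketch only: a generic synthetic-augmentation loop, not Amazon One's
# actual pipeline. It turns a small set of base palm images (NumPy arrays) into
# many varied training samples via random geometric and photometric changes.

import numpy as np

rng = np.random.default_rng(seed=0)

def synthesize_variant(palm: np.ndarray) -> np.ndarray:
    """Apply a random flip, brightness shift, and sensor-style noise to one image."""
    img = palm.astype(np.float32)
    if rng.random() < 0.5:                        # mirror left/right hands
        img = np.fliplr(img)
    img *= rng.uniform(0.7, 1.3)                  # lighting / brightness variation
    img += rng.normal(0.0, 5.0, size=img.shape)   # imaging noise
    return np.clip(img, 0, 255).astype(np.uint8)

def build_synthetic_set(base_images: list[np.ndarray], per_image: int = 100) -> list[np.ndarray]:
    return [synthesize_variant(img) for img in base_images for _ in range(per_image)]

if __name__ == "__main__":
    base = [rng.integers(0, 256, size=(128, 128), dtype=np.uint8) for _ in range(3)]
    synthetic = build_synthetic_set(base, per_image=10)
    print(len(synthetic))  # 30 synthetic training samples from 3 base images
```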

Amazon Style
At Amazon Style, Amazon’s physical clothing store, the company developed new algorithms that use a customer’s input to generate a set of recommended items, balancing similarity to their current selection with a diverse set of options.

The system also generates complementary selections, such as a shirt matched with a pair of jeans to create a recommended outfit. In addition, Amazon has created synthetic datasets to mimic variations of real-life shopping scenarios.
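
Amazon has not disclosed its recommendation algorithm. One standard way to balance similarity to a current selection against diversity is a maximal-marginal-relevance-style greedy selection, sketched below with made-up item vectors; the function names, the weight `lam` and the item IDs are illustrative only, not Amazon Style’s actual recommender.

```python
# Illustrative sketch only: an MMR-style greedy selection over item feature vectors.
# Candidates are scored by similarity to the shopper's current selection minus their
# similarity to items already recommended, trading relevance against diversity.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend(selection: np.ndarray, candidates: dict[str, np.ndarray],
              k: int = 3, lam: float = 0.7) -> list[str]:
    chosen: list[str] = []
    pool = dict(candidates)
    while pool and len(chosen) < k:
        def score(name: str) -> float:
            relevance = cosine(pool[name], selection)
            redundancy = max((cosine(pool[name], candidates[c]) for c in chosen), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(pool, key=score)
        chosen.append(best)
        del pool[best]
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    items = {f"style-{i}": rng.normal(size=8) for i in range(20)}
    current_pick = rng.normal(size=8)
    print(recommend(current_pick, items, k=3))
```

Raising `lam` favors items close to the shopper’s current pick; lowering it favors variety in the recommended set.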

[Read more: First Look: Amazon’s first-ever physical clothing store opens]

Amazon Dash Cart
When Amazon built the Amazon Dash Cart, a smart shopping cart that allows customers to skip the checkout line at many of its U.S. Amazon Fresh stores, the company developed a series of computer vision and sensor fusion algorithms to recognize items in motion and accurately capture their weight and quantity. The computer vision algorithms also operate under tight latency budgets, since the cart updates a customer’s receipt in real time.
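
Amazon has not published how the Dash Cart’s sensor fusion works. The toy sketch below only illustrates the general concept of reconciling a camera’s item guess with a scale reading to estimate quantity, so that two identical items added at once are billed as two units; the catalog weights, function name and tolerance are assumptions made for this example.

```python
# Illustrative sketch only: a toy fusion of a hypothetical camera classification with a
# scale reading, not the Dash Cart's actual algorithms. The camera proposes an item,
# and the measured weight change on the cart estimates how many units were added.

CATALOG_WEIGHTS_G = {        # hypothetical per-unit weights from a product catalog
    "sku-yogurt": 150.0,
    "sku-apple": 180.0,
}

def infer_quantity(item_id: str, weight_delta_g: float, tolerance: float = 0.15) -> int:
    """Estimate how many units of `item_id` explain the cart's weight change."""
    unit = CATALOG_WEIGHTS_G[item_id]
    quantity = round(weight_delta_g / unit)
    if quantity < 1:
        return 0  # weight change too small; treat as no add
    error = abs(weight_delta_g - quantity * unit) / unit
    return quantity if error <= tolerance else 0  # 0 signals "needs another look"

if __name__ == "__main__":
    print(infer_quantity("sku-yogurt", 305.0))  # 2 units (within tolerance)
    print(infer_quantity("sku-apple", 95.0))    # 0: ambiguous, flag for re-check
```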

[Read more: Amazon’s newest smart device – the Dash Cart]

“Looking back at the progress my team has made reminds me of the Amazon saying, ‘It’s always day one,’ and it’s certainly still day one for us in physical retail and technology,” Kumar said in a company blog post. “It feels like we’re just beginning to tackle some of the complex challenges in the world of physical retail, and I’m excited to see what the team does next to push the boundaries of AI.”