The current limitation of voice assistants is their inability to integrate with the applications we love most. Just ask Siri or Google Assistant to open Spotify on your smartphone: it succeeds, but it all ends there. You cannot play a single song or a playlist, or even search the catalog for songs or artists. Integration is the element that would make these tools truly useful and functional. Samsung is trying with Bixby, but it is still far from that goal. That is why Amazon has decided to open Alexa, the artificial intelligence behind the Echo smart speaker, to third-party developers. It did so by launching Amazon Lex, a platform hosted on Amazon Web Services that can enhance the conversational abilities of third-party projects, which will benefit from advanced voice support across various areas of use.
As stated on Lex’s website: “The platform offers advanced deep learning for speech recognition and transcription, as well as for natural language recognition and text understanding, enabling the creation of engaging applications and realistic conversations. With Amazon Lex, the same technology that powers Alexa is available to all developers, enabling the creation of sophisticated chatbots, simply and quickly.” In other words, Lex will do the dirty work of connecting an app’s information with the cognitive capabilities of Amazon’s software ecosystem, producing a complete and evolving interactive environment. On the hardware side, such projects can currently be loaded only onto the one supported device, the Echo speaker, but the company’s intent is clearly to extend Alexa’s presence ever further. This means not only ad-hoc devices but also implementations on smartphones and tablets from other brands, markets that AI solutions will particularly target in the near future.
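To make the idea of “connecting an app’s information with Amazon’s cognitive capabilities” concrete: a Lex bot exchange boils down to sending the user’s text and reading back the recognized intent and extracted slot values. The sketch below is a minimal illustration of that flow, parsing a sample response shaped like the one Lex’s `PostText` runtime operation returns; the bot name, intent, slot names, and values here are hypothetical examples, not part of any real bot.

```python
import json

# Illustrative payload a client would send to Lex's PostText runtime
# operation (bot name, alias, and user text are hypothetical).
request = {
    "botName": "MusicBot",
    "botAlias": "prod",
    "userId": "user-123",
    "inputText": "play some jazz",
}

# Sample response in the shape PostText returns: the recognized intent,
# the slot values Lex extracted, and the bot's spoken/written reply.
# This is a hand-written example, not output from a live bot.
response_json = """
{
  "intentName": "PlayMusic",
  "slots": {"genre": "jazz"},
  "dialogState": "Fulfilled",
  "message": "Playing jazz for you."
}
"""

def handle_response(raw):
    """Pull the intent, slots, and reply out of a Lex-style response."""
    resp = json.loads(raw)
    return resp["intentName"], resp["slots"], resp["message"]

intent, slots, message = handle_response(response_json)
print(intent, slots["genre"])  # → PlayMusic jazz
```

This is exactly the “dirty work” the article describes: the app never implements speech or language understanding itself; it only sends raw user input and acts on the structured intent and slots that come back.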