I think there is a limit to how “intuitive” library resource discovery tools can be. The more complicated the system behind the interface, the more one needs to know about how it works in order to use it well. This is different from usability, which is about optimising the match between user intention and the means of achieving it.
Do you remember the brief fashion for federated search in the late 2000s? These interfaces were promoted as a simple way to search multiple databases simultaneously. In reality, such systems would display results in the order they were returned from the remote servers (rather than ranked by relevance*, as many users expected), and would often display only the first 50 results retrieved rather than every matching record. Once users understood what a federated search tool was actually doing, many abandoned it and returned to searching the native interfaces separately, where they could at least be confident that each tool was performing a comprehensive search.
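To make that failure mode concrete, here is a toy sketch (in Python, with invented server names and data, not any real product's behaviour) of how such a front end might assemble its result list: batches are simply appended in whatever order the remote servers happen to respond, and the list is cut off at a fixed cap, so a highly relevant record from a slow server may never be shown at all.

```python
# Toy model of a naive federated search aggregator (all names invented).
# Each "server" returns its own ranked list; the aggregator does no
# re-ranking — it concatenates batches in arrival order, then truncates.

RESULT_CAP = 50

def federated_merge(batches_in_arrival_order, cap=RESULT_CAP):
    """Merge result batches as a naive federated search tool might:
    concatenation in arrival order, truncated at a display cap."""
    merged = []
    for batch in batches_in_arrival_order:
        merged.extend(batch)
    return merged[:cap]

# A fast but low-quality server answers first; a slow, authoritative
# server answers last, so its records fall past the cap entirely.
fast_server = [f"fast-{i}" for i in range(40)]
medium_server = [f"medium-{i}" for i in range(40)]
slow_server = ["slow-highly-relevant"] + [f"slow-{i}" for i in range(10)]

results = federated_merge([fast_server, medium_server, slow_server])

print(len(results))                       # capped at 50
print("slow-highly-relevant" in results)  # False — it arrived too late
```

The user sees fifty records and may reasonably assume the search was comprehensive, when in fact the best match was silently discarded.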
*Relevancy ranking of results is itself another concept that, once understood, tends to be discarded in favour of more transparent orderings such as publication date. Relevancy algorithms are often closely guarded secrets, but my understanding is that they operate partly on a popularity basis, ranking highest the articles that are most downloaded or most cited. This may work well for general web searches, but it is hardly how scholars would want their academic searches to operate, especially as research often involves seeking obscure or niche information which, by definition, scores poorly on popularity.
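As a rough illustration of that popularity problem, here is a deliberately simplified, hypothetical scoring model (not any vendor's actual algorithm, which is proprietary): if relevance blends a text-match score with a download or citation count, a niche article that matches the query very well can still be outranked by a popular article that matches it less well.

```python
import math

# Hypothetical relevance model for illustration only: a 0..1 text-match
# score blended with a log-scaled popularity signal (downloads/citations).
# Real discovery-tool algorithms are closely guarded; this merely shows
# the tendency such a blend produces.

def score(text_match, downloads, popularity_weight=0.5):
    """Blend a text-match score with log-scaled popularity."""
    popularity = math.log10(downloads + 1) / 6  # ~1.0 at a million downloads
    return (1 - popularity_weight) * text_match + popularity_weight * popularity

niche = score(text_match=0.95, downloads=12)         # obscure but on-topic
popular = score(text_match=0.60, downloads=250_000)  # well-known, looser match

print(niche > popular)  # False: the popular article ranks higher
```

Under this weighting the obscure article loses despite the better match — exactly the outcome a scholar chasing niche material would not want.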