Trust and quality are complex concepts, and there's no standard path to address them because they are a matter of perception, which can vary and change dynamically with the situation. There are, however, approaches that can minimize the problem. One can start, for example, by providing transparency. For each dashboard, also provide detailed reports that, through drill-down (or by running the reports separately if drill-down isn't possible), allow users to validate the numbers shown. If users don't trust the data or the report, they can then pinpoint what's wrong. Of course, the two sources must be in sync; otherwise, the validation becomes more complex.
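As a minimal sketch of such a reconciliation (the table and column names below are hypothetical placeholders, not from any specific tool), a query like this can surface where the dashboard's aggregates and the underlying detail data have drifted apart:

```sql
-- Hypothetical tables: dashboard_summary holds the pre-aggregated figures
-- behind the dashboard; sales_detail holds the transaction-level records.
SELECT
    s.region,
    s.total_amount                   AS dashboard_total,
    d.detail_total,
    s.total_amount - d.detail_total  AS difference
FROM dashboard_summary s
JOIN (
    -- Re-aggregate the detail data independently of the dashboard.
    SELECT region, SUM(amount) AS detail_total
    FROM sales_detail
    GROUP BY region
) d ON d.region = s.region
WHERE s.total_amount <> d.detail_total;  -- any rows returned = out of sync
```

An empty result set is the evidence one wants to show skeptical users; any returned rows give them exactly the pinpointing the paragraph above asks for.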
One can recognize multiple issues in what you describe: the way the tool was introduced, the way dashboards flooded the space, how people reacted, etc. Introducing a tool is also a matter of strategy, tactics, and operations, and the aspects related to each must be addressed. Few organizations do this properly. Many work on the principle "build it and they will come!", even if they build the wrong thing. The dashboard is not the problem; the problem lies in the processes that led to it, or rather in what wasn't done in the background. That, unfortunately, has less visibility, and only the people involved in the project could say more, if they've been in the organization long enough to tell the story.
Notebooks are useful, though they have their limitations, and their use has further implications. Learning Python, R, or other programming languages is not for everybody, not even for the ideal data citizen. SQL can be more approachable, and please don't say that SQL is dead! There's a place for each of the tools you mentioned, though as soon as one breaks their boundaries, sooner or later the misuse kicks back. That happens with dashboards, data visualizations, and any other means of processing data.
You raise important points, though the whole picture is more complex.