What your traditional software inventory solution does not tell you, final part

Wednesday, June 29, 2016

So far, we have looked at traditional software inventory methodologies and discussed several software types that are "hard" to get a proper picture of. In this last part of the series, we look at one more non-traditional application type and wrap up the series.

Web applications

The final application usage model we look at in the context of understanding your software landscape, and one contributor to the inherent lack of the big picture in the current generation of widely used software inventory solutions, is web applications.
While it is worth noting that Store-based Universal applications and their equivalents on other operating systems are in a way contenders to web applications for the future (mainly by fixing many of the issues around responsiveness and offline experience), the use of web applications is firmly here and is going to stay for the foreseeable future.

From the consuming client's point of view, web applications are, of course, essentially browser usage, whether that browser is the built-in Internet Explorer/Edge or a 3rd party one such as Firefox or Chrome. The main problem from the software inventory perspective is that the inventoried data is just that: browsers installed on the machine. Because web applications live inside the browser session, much like remotely published virtual applications live inside an RDP, ICA or PCoIP client session, they are generally not seen or recognized by software inventory runs.

So, in a way, web application usage is pretty much invisible to traditional software inventory solutions, which again leaves these applications missing from centralized reporting on what applications are in use in the organisation. What's even worse is that since they run purely inside the browser, users are free to take all kinds of point solutions into use very quickly, for whatever problem they need to solve, since no local infrastructure installations or privileges are required to do so.

While on the one hand this empowers users to be more productive, the flip side of the coin is that it may have security implications: we lose sight of where the (potentially confidential) data processed through these services is being sent, and it enables fragmentation of the software landscape inside the environment, as nobody but the users themselves might know about these applications. This leads to a situation where different parts of the organisation might take functionally similar applications into use because they don't know, themselves or through IT, that another one is already "deployed".

Reliably detecting web applications is especially hard from a technical perspective, as they live inside the browser rather than as separate application processes or executables, and viewed from the outside they are in many ways indistinguishable from any other web sites accessed through the browser. Not to mention the fact that a web application is not even "there" on the user's machine before he or she uses it for the first time!

It's unfortunate, but for web applications in particular there is no sensible generic solution from the software inventory perspective. All sorts of tactics could, however, be employed to try to detect the most commonly used, i.e. major brand-name, web applications, which is already a good start. Even this would need explicit support from the software inventory or systems management solution, as just scanning the list of installed products or scanning the executables on disk would not cut it.
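To illustrate one such tactic, here is a minimal sketch that matches browser history entries against a curated catalogue of brand-name web applications. The catalogue contents, database path and schema details are illustrative assumptions (Chromium-family browsers keep visited URLs in a SQLite `urls` table), not a complete or definitive implementation:

```python
# Sketch: detect use of well-known web applications from a
# Chromium-style browser history database. The KNOWN_WEB_APPS
# catalogue below is a hypothetical example list, not exhaustive.
import sqlite3
from urllib.parse import urlparse

# Hypothetical curated catalogue of "major brand-name" web apps,
# keyed by hostname.
KNOWN_WEB_APPS = {
    "docs.google.com": "Google Docs",
    "app.slack.com": "Slack",
    "trello.com": "Trello",
}

def detect_web_apps(history_db_path):
    """Return {app name: total visit count} from a history database
    whose 'urls' table holds url and visit_count columns."""
    found = {}
    conn = sqlite3.connect(history_db_path)
    try:
        for url, visits in conn.execute("SELECT url, visit_count FROM urls"):
            host = urlparse(url).netloc.lower()
            app = KNOWN_WEB_APPS.get(host)
            if app:
                found[app] = found.get(app, 0) + visits
    finally:
        conn.close()
    return found
```

In practice an inventory agent would also have to handle locked database files, multiple browsers and profiles, and privacy considerations, which is exactly why this kind of detection needs explicit product support rather than a simple disk scan.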


As we have seen from the previous discussion, the software landscape has changed dramatically from what it was even 10 years ago, and even more so from 20 years ago, when most of the still in-use software inventory solutions were originally created. The mechanisms and tactics those solutions employ need to get smarter to adapt to the increasingly fast-changing world of application deployment and usage; otherwise what we see is not the whole picture, or even a major subset of it.

There are, of course, other issues with traditional software inventory solutions, such as providing too much information at too detailed a level, which causes us to see mostly trees instead of the proverbial forest. But those concerns are not about not knowing what we don't see; they are about seeing too much of what we do see.

Not having an accurate view of the software landscape, which is perhaps the single most complex aspect of the workstation environment, means that the decisions we make going forward about acceptable software use and the ways to access those tools have to be based partly on guesswork rather than on reality. And decisions based on guesswork are inferior to decisions based on actual, hard, actionable data.

In this day and age we are generally unable to keep iron-fisted control of everything, nor should we strive to, which means the tools we use must also evolve to be up to the task and support the wide variety of software use cases found in modern workstation environments!