From an NFR (non-functional requirements) perspective, I am trying to figure out how to use Splunk to extract user behavior patterns during peak load conditions by mining the web server access log. This information is vital for building a workload model to simulate in the performance test environment. I need to create a dashboard view of how many users were in the system, which pages they were accessing, and so on.
OK, what I am looking for is the same metrics I would see in Google Analytics. I was trying to find out whether Splunk can capture those same metrics from the web server access log, which is the source for the information below.
I was told that Splunk Light can be used to extract the above information, so I wanted to confirm.
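For what it's worth, a rough sketch of GA-style metrics from an access log in Splunk might look like the search below. This assumes the log is indexed with the standard `access_combined` sourcetype so that fields like `clientip` are extracted automatically; the index name `web` is a placeholder for whatever your environment uses.

```
index=web sourcetype=access_combined
| timechart span=1h dc(clientip) AS unique_visitors count AS page_requests
```

`dc(clientip)` counts distinct client IPs per hour as a crude proxy for "users in the system"; a real user count would need a session or login field, as discussed in the answer below.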
@GaneshK - you need to be much more specific about what you are asking us for. Basically, the above constitutes a request for an app and about a man-year of development. Or a Master's thesis on user behavior.
Let me throw out some basic ideas.
First, I doubt that work-related user behavior during peak load is very different from work-related user behavior at any other time. (Assuming "work", but you can substitute "gaming" or any other domain.) So don't worry about "peak hours" until we have an actual dashboard and can check that.
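Checking that assumption is cheap once the data is indexed. A sketch, again assuming the `access_combined` sourcetype, that locates the peak hours so you can later compare page mix inside and outside them:

```
sourcetype=access_combined
| timechart span=1h dc(clientip) AS concurrent_users
```

If the shape of traffic (which pages, in what proportions) looks the same at the peak as off-peak, you can model the peak by simply scaling up user count.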
Second, there is not much point in measuring or analyzing behavior that you cannot duplicate, so start by understanding what behavior your performance test system is going to model.
Third, identify one user who is representative of what you are trying to test (or one of each kind, if there are multiple kinds of users) and, one at a time, analyze what kinds of activity they engage in. To do this, get a sample user ID, search `index=*` for that user ID, and see what kinds of events are logged. From those events, find out what other items (like workstation ID) identify that person, broaden the search to include those items as well, and then look at the overall pattern of activity, what other fields are available, and so on.
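Concretely, that step might look like the searches below. The user ID `jsmith` and workstation ID `WKSTN-1234` are hypothetical placeholders; substitute real identifiers from your own events.

```
index=* "jsmith"

index=* ("jsmith" OR "WKSTN-1234")
| stats count BY sourcetype, source
```

The first search surfaces every event mentioning the sample user; the second, broadened search summarizes which logs that person shows up in, which tells you what fields are available for the workload model.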
The analysis in step three gives you the universe of transactions you can detect. Compare that to the list from step two of events you are trying to duplicate. NOW you have enough information to start designing a preliminary dashboard.
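A preliminary dashboard panel built from that comparison might be a search like this, where the listed `uri_path` values are hypothetical stand-ins for whichever transactions survived the step-two/step-three comparison:

```
sourcetype=access_combined uri_path IN ("/login", "/search", "/checkout")
| timechart span=15m count BY uri_path
```

That gives you transaction rates per 15-minute window, which maps fairly directly onto the pacing figures a performance test script needs.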
Then, come ask us the next few steps.