I assume that pages means web pages and that it is a web application that you would like to monitor. The straightforward way would be to use a scripted input that runs at a given interval and delivers the load times. Those times go into an index and from there you do your analysis.
A simple python script could look like this:
#!/usr/bin/python
import urllib
import time

url = 'http://www.google.com'
start = time.time()   # start the clock before opening the connection
nf = urllib.urlopen(url)
page = nf.read()
end = time.time()
nf.close()
print 'url="%s" latency="%s"' % (url, end - start)
You should see events of the form url="..." latency="..." appearing in your index.
Now you could try a search like
index=* sourcetype=webresponse | timechart avg(latency) by url
Of course this is just a starter to show the concept. To make it work for you, you would probably put your 1000+ urls into a file, read the urls from that file, loop through the timing, and print a line for every result.
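To sketch that extension, here is a minimal Python 3 version of the same idea (the file name urls.txt and the four-decimal latency format are my assumptions, not from the thread):

```python
#!/usr/bin/env python3
import os
import time
import urllib.request

URL_FILE = 'urls.txt'  # hypothetical file: one URL per line

def format_event(url, latency):
    # key="value" pairs so Splunk's automatic field extraction
    # picks up the url and latency fields without extra config
    return 'url="%s" latency="%.4f"' % (url, latency)

def time_url(url):
    # start the clock before opening the connection so that
    # connection setup is included in the measured latency
    start = time.time()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.time() - start

def main():
    with open(URL_FILE) as f:
        for line in f:
            url = line.strip()
            if url:
                print(format_event(url, time_url(url)))

if __name__ == '__main__':
    # only loop over the list when it actually exists, so the
    # script can be imported or dry-run without a urls.txt
    if os.path.exists(URL_FILE):
        main()
```

Run it from a scripted input on whatever interval suits you; each line it prints becomes one event in the index.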
Hope it helps
thanks @olodach for the response ..
What I have is around 1000 pages, and logs which contain the action performed on each page and the response time of that action.
Right now I am saving the average for the entire day per page and then comparing through CSVs.
I was wondering if there is a better solution.
Thanks for sharing a little more background; it makes it much easier to suggest something helpful. If you'd like me to sketch the searches, please provide some headers from the csv and what you'd like to compare. Generally speaking, though, what you'd probably want to do is:
| inputlookup yourcsv1.csv
| eval scope1_val=your_value_to_compare
| lookup yourcsv2.csv <common_key, probably the url> OUTPUT your_value_to_compare
| eval result=scope1_val <comparewith> your_value_to_compare
| stats/timechart/chart ... whatever comes next based on result
With the first eval we "saved" the first value into a new field. Then we overwrite the values with the lookup from the second (newer) csv and can compare them directly. If you have more than one field to compare, save them all into new fields.
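As a concrete illustration of that pattern, assuming both CSVs share a url column and the value being compared is called avg_response_time (both names are made up here, since I don't know your headers):

```
| inputlookup lastweek.csv
| eval scope1_val=avg_response_time
| lookup thisweek.csv url OUTPUT avg_response_time
| eval delta=avg_response_time - scope1_val
| sort - delta
```

The final sort surfaces the pages whose response time regressed the most; swap it for a chart or timechart as needed.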
If you want to keep a longer record of all daily aggregates for trending, you might try something like this:
| inputlookup yourdailyavg.csv
| addinfo
| eval _time=info_search_time
| collect index=<yourtrendindex> testmode=false addtime=true
This will read the csv, timestamp the rows with the search time, and insert all fields into the index you provided. Use a dedicated index for this; once the aggregates are indexed, you can compare the values like standard events:
index=<yourtrendindex> | timechart avg(response_time) by url
You could run the import as a scheduled report, provided the file gets copied to the right place/name by a cron job.
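For example, a crontab entry like the following would stage the file before the scheduled report runs (all paths here are placeholders for illustration; use your app's actual lookup directory):

```
# copy the nightly export into the app's lookup directory at 00:30
30 0 * * * cp /tmp/dailyavg_export.csv /opt/splunk/etc/apps/search/lookups/yourdailyavg.csv
```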
In my opinion, though, if you have the license bandwidth available, the best solution would be to index the proxy logs directly and do the avg summaries and the analytics directly on those.
Hope I make sense after all.
Unfortunately you only provide fragmented information on what you are trying to do. From what you say, it sounds as if you keep appending data to one single csv file. If that is the case, it would be best to use a file monitor input rather than an inputlookup to index the events. Please let me know if any of the solutions I provided worked for you.