Splunk Search

Good design?

reverse
Contributor

So I have 1000 pages in my application, and I want to check which pages are performing poorly, as a trend over time.

I am thinking of using a CSV to store the response-time data for all the pages and then comparing the values to deduce the desired output.

Is there a better idea?


ololdach
Builder

Hi reverse,

I assume that pages means web pages and that it is a web application that you would like to monitor. The straightforward way would be to use a scripted input that runs at a given interval and delivers the load times. Those times go into an index and from there you do your analysis.

A simple python script could look like this:

#!/usr/bin/python
import urllib
import time

url = 'http://www.google.com'
start = time.time()        # start the clock before the request goes out
nf = urllib.urlopen(url)   # open the connection and send the request
page = nf.read()           # read the full response body
end = time.time()          # stop the clock once the page has loaded completely
nf.close()
print 'url="%s" latency="%s"' % (url, end - start)
  • On Unix: start by creating a new Splunk app
  • Copy the above script into the etc/apps/your_app/bin directory and make it executable
  • Create a new scripted input with the Splunk UI from within the app. When you set up the sourcetype, tell Splunk to use the current time as the timestamp and give it a speaking name like 'webresponse' (a sketch of the resulting inputs.conf stanza follows below)
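
Under the hood, that last UI step essentially writes a scripted-input stanza into the app's local inputs.conf; a minimal sketch (script name, interval and index are placeholders) could look like this:

[script://./bin/webresponse.py]
interval = 300
sourcetype = webresponse
index = main
disabled = false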

You should see events like these appearing in your index:
url="http://www.google.com" latency="0.000102043151855"

Now you could try a search like index=* sourcetype=webresponse | timechart avg(latency) by url

Of course this is just a starter to show the concept. To make it work for you, you probably need to put your 1000+ URLs into a file, read the URLs from that file, and loop over them, timing each request and printing a line for every result.
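
A minimal sketch of that loop, assuming the URLs sit one per line in a file next to the script (the file name urls.txt is just a placeholder):

#!/usr/bin/python
import urllib
import time

# hypothetical list of URLs, one per line, e.g. etc/apps/your_app/bin/urls.txt
with open('urls.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    start = time.time()
    try:
        nf = urllib.urlopen(url)
        page = nf.read()
        nf.close()
        print 'url="%s" latency="%s"' % (url, time.time() - start)
    except IOError as e:
        # report failures as events too, so broken pages show up in the index
        print 'url="%s" error="%s"' % (url, e)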

Hope it helps
Oliver


reverse
Contributor

Thanks @ololdach for the response.
What I have is around 1000 pages, and logs which contain the action performed on each page and the response time of that action.
Right now I am saving the average for the entire day per page and then comparing, through CSVs.

Was wondering if there is a better solution.


ololdach
Builder

Hi reverse,
thanks for sharing a little more background. It makes it so much easier to suggest something helpful. If you'd like me to sketch the searches, please provide some headers from the csv and what you'd like to compare. Generally speaking, though, what you'd probably want to do is:

|inputlookup yourcsv1.csv | eval scope1_val=your_value_to_compare | lookup yourcsv2.csv <common_key probably the url> output your_value_to_compare | eval result=scope1_val <comparewith> your_value_to_compare | stats/timechart/chart...whatever comes next based on result

With the first eval we "saved" the first value into a new field. Then we overwrite the values with the lookup from the second (newer) csv and can compare them directly. If you have more than one field to compare, save them all into new fields.
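
As a concrete (hypothetical) instance, assuming both CSVs are keyed by url and carry a response_time column, the comparison could look like:

|inputlookup lastweek.csv | eval old_response_time=response_time | lookup thisweek.csv url OUTPUT response_time | eval delta=response_time-old_response_time | sort -delta

The pages with the largest delta are the ones that have degraded the most between the two snapshots.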

If you want to keep a longer record of all daily aggregates for trending, you might try something like this:

|inputlookup yourdailyavg.csv | addinfo | eval _time=info_search_time | collect index=<yourtrendindex> testmode=false addtime=true

This will read the CSV, timestamp the rows with the search time, and insert all fields into the index you provided. Use a dedicated index for this; once the aggregates are indexed, you can compare the values like standard events:

index=<yourtrendindex> | timechart avg(response_time) by url

You could run the import as a scheduled report, provided the file gets copied to the right place/name by a cron job.

In my opinion, though, if you have the license bandwidth available, the best solution would be to index the proxy logs directly and do the avg summaries and the analytics directly on those.
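
For example, with the access/proxy logs indexed directly, the daily trend becomes a single search (index, sourcetype and field names below are assumptions and depend on your log format):

index=weblogs sourcetype=access_logs | timechart span=1d avg(response_time) by Page_id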

Hope I make sense after all.
Olli


reverse
Contributor

Presently, on a daily basis, I am creating this CSV with 4 columns. I append to this CSV daily, then use queries to compare:

   Date Page_id Action Time_taken

ololdach
Builder

Hi reverse,
unfortunately you only provide fragmented information on what you are trying to do. From what you say, it sounds as if you keep appending data to one single CSV file. Now, if you keep appending to a single file, it would be best to use a file monitor input rather than an inputlookup to index the events. Please let me know if any of the solutions that I provided worked for you.
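
A minimal sketch of such a monitor input, assuming the daily file lands at a fixed path (all paths and names below are placeholders):

# inputs.conf
[monitor:///opt/data/daily_page_stats.csv]
sourcetype = daily_page_stats
index = yourtrendindex

# props.conf
[daily_page_stats]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = Date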
Best
Oliver
