<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Good design ? in Splunk Search</title>
    <link>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493276#M137582</link>
    <description>&lt;P&gt;Hi reverse,&lt;/P&gt;

&lt;P&gt;I assume that pages means web pages and that it is a web application that you would like to monitor. The straightforward way would be to use a scripted input that runs at a given interval and delivers the load times. Those times go into an index and from there you do your analysis.&lt;/P&gt;

&lt;P&gt;A simple python script could look like this:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;#!/usr/bin/env python3
import time
import urllib.request

url = 'http://www.google.com'
start = time.time()  # start the clock before the connection is opened
with urllib.request.urlopen(url) as nf:
    page = nf.read()
end = time.time()
print('url="%s" latency="%s"' % (url, end - start))
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;UL&gt;
&lt;LI&gt;On Unix, start by creating a new Splunk app&lt;/LI&gt;
&lt;LI&gt;Copy the above script into the etc/apps/your_app/bin directory and make it executable&lt;/LI&gt;
&lt;LI&gt;Create a new scripted input via the Splunk UI from within the app. When you set up the sourcetype, tell Splunk to use the current time as the timestamp and give it a descriptive name like 'webresponse'&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;You should see events like these appearing in your index:&lt;BR /&gt;
url="&lt;A href="http://www.google.com"&gt;http://www.google.com&lt;/A&gt;" latency="0.000102043151855"&lt;/P&gt;

&lt;P&gt;Now you could try a search like &lt;CODE&gt;index=* sourcetype=webresponse | timechart avg(latency) by url&lt;/CODE&gt;&lt;/P&gt;

&lt;P&gt;Of course this is just a starter to show the concept. To make it work for you, you probably need to put your 1000+ URLs into a file, read them from that file, and loop over them, timing each one and printing a line for every result.&lt;/P&gt;

&lt;P&gt;Hope it helps&lt;BR /&gt;
Oliver&lt;/P&gt;</description>
    <pubDate>Sat, 05 Oct 2019 07:05:59 GMT</pubDate>
    <dc:creator>ololdach</dc:creator>
    <dc:date>2019-10-05T07:05:59Z</dc:date>
    <item>
      <title>Good design ?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493275#M137581</link>
      <description>&lt;P&gt;so I have 1000 pages in my application ..&lt;BR /&gt;
I want to check which pages are performing poorly ... a trend .. &lt;/P&gt;

&lt;P&gt;I am thinking of using CSV  to store data of all the response time  and then compare the value to deduce the desired output .&lt;/P&gt;

&lt;P&gt;Is there any better idea ?&lt;/P&gt;</description>
      <pubDate>Sat, 05 Oct 2019 04:43:58 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493275#M137581</guid>
      <dc:creator>reverse</dc:creator>
      <dc:date>2019-10-05T04:43:58Z</dc:date>
    </item>
    <item>
      <title>Re: Good design ?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493276#M137582</link>
      <description>&lt;P&gt;Hi reverse,&lt;/P&gt;

&lt;P&gt;I assume that pages means web pages and that it is a web application that you would like to monitor. The straightforward way would be to use a scripted input that runs at a given interval and delivers the load times. Those times go into an index and from there you do your analysis.&lt;/P&gt;

&lt;P&gt;A simple python script could look like this:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;#!/usr/bin/env python3
import time
import urllib.request

url = 'http://www.google.com'
start = time.time()  # start the clock before the connection is opened
with urllib.request.urlopen(url) as nf:
    page = nf.read()
end = time.time()
print('url="%s" latency="%s"' % (url, end - start))
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;UL&gt;
&lt;LI&gt;On Unix, start by creating a new Splunk app&lt;/LI&gt;
&lt;LI&gt;Copy the above script into the etc/apps/your_app/bin directory and make it executable&lt;/LI&gt;
&lt;LI&gt;Create a new scripted input via the Splunk UI from within the app. When you set up the sourcetype, tell Splunk to use the current time as the timestamp and give it a descriptive name like 'webresponse'&lt;/LI&gt;
&lt;/UL&gt;
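
&lt;P&gt;For reference, the UI steps above end up as a scripted-input stanza in inputs.conf. A sketch of what the generated configuration might look like; the script name, interval and index are assumptions:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# hypothetical stanza in etc/apps/your_app/local/inputs.conf
# the script name, 60-second interval and target index are assumptions
[script://$SPLUNK_HOME/etc/apps/your_app/bin/webresponse.py]
interval = 60
sourcetype = webresponse
index = main
disabled = false
&lt;/CODE&gt;&lt;/PRE&gt;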

&lt;P&gt;You should see events like these appearing in your index:&lt;BR /&gt;
url="&lt;A href="http://www.google.com"&gt;http://www.google.com&lt;/A&gt;" latency="0.000102043151855"&lt;/P&gt;

&lt;P&gt;Now you could try a search like &lt;CODE&gt;index=* sourcetype=webresponse | timechart avg(latency) by url&lt;/CODE&gt;&lt;/P&gt;

&lt;P&gt;Of course this is just a starter to show the concept. To make it work for you, you probably need to put your 1000+ URLs into a file, read them from that file, and loop over them, timing each one and printing a line for every result.&lt;/P&gt;

&lt;P&gt;Hope it helps&lt;BR /&gt;
Oliver&lt;/P&gt;</description>
      <pubDate>Sat, 05 Oct 2019 07:05:59 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493276#M137582</guid>
      <dc:creator>ololdach</dc:creator>
      <dc:date>2019-10-05T07:05:59Z</dc:date>
    </item>
    <item>
      <title>Re: Good design ?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493277#M137583</link>
      <description>&lt;P&gt;thanks @olodach for the response ..&lt;BR /&gt;
What i have is around 1000 pages  .. and logs which contains the action performed on that page and the response time of that action ..&lt;BR /&gt;
 right now .. I am saving the avg for entire day per page and then comparing .. through CSVs&lt;/P&gt;

&lt;P&gt;was wondering if there is a better solution..&lt;/P&gt;</description>
      <pubDate>Sat, 05 Oct 2019 11:14:21 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493277#M137583</guid>
      <dc:creator>reverse</dc:creator>
      <dc:date>2019-10-05T11:14:21Z</dc:date>
    </item>
    <item>
      <title>Re: Good design ?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493278#M137584</link>
      <description>&lt;P&gt;Hi reverse,&lt;BR /&gt;
thanks for sharing a little more background. It makes it so much easier to suggest something helpful. If you'd like me to sketch the searches, please provide some headers from the csv and what you'd like to compare. Generally speaking, though, what you'd probably want to do is:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| inputlookup yourcsv1.csv
| eval scope1_val=your_value_to_compare
| lookup yourcsv2.csv &amp;lt;common_key, probably the url&amp;gt; OUTPUT your_value_to_compare
| eval result=scope1_val &amp;lt;compare with&amp;gt; your_value_to_compare
| stats/timechart/chart ... whatever comes next, based on result
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;With the first eval we "saved" the first value into a new field. Then we overwrite the values with the lookup from the second (newer) csv and can compare them directly. If you have more than one field to compare, save them all into new fields. &lt;/P&gt;

&lt;P&gt;If you want to keep a longer record of all daily aggregates for trending, you might try something like this:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;| inputlookup yourdailyavg.csv
| addinfo
| eval _time=info_search_time
| collect index=&amp;lt;yourtrendindex&amp;gt; testmode=false addtime=true
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;This will read the CSV rows, timestamp them with the search time, and insert all fields into the index you provided. Use a dedicated index for this; once the aggregates are indexed, you can compare the values like standard events:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;index=&amp;lt;yourtrendindex&amp;gt; | timechart avg(response_time) by url
&lt;/CODE&gt;&lt;/PRE&gt;

&lt;P&gt;You could run the import as a scheduled report, provided the file gets copied to the right place/name by a cron job.&lt;/P&gt;
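
&lt;P&gt;The cron part could be as small as a single crontab line; the paths, file name and schedule below are assumptions:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;# hypothetical crontab entry: stage yesterday's export before the
# scheduled report runs (paths and schedule are assumptions)
5 0 * * * cp /data/export/dailyavg.csv /opt/splunk/etc/apps/your_app/lookups/yourdailyavg.csv
&lt;/CODE&gt;&lt;/PRE&gt;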

&lt;P&gt;In my opinion, though, if you have the license bandwidth available, the best solution would be to index the proxy logs directly and do the avg summaries and the analytics directly on those.&lt;/P&gt;

&lt;P&gt;Hope I make sense after all.&lt;BR /&gt;
Olli&lt;/P&gt;</description>
      <pubDate>Sat, 05 Oct 2019 17:00:32 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493278#M137584</guid>
      <dc:creator>ololdach</dc:creator>
      <dc:date>2019-10-05T17:00:32Z</dc:date>
    </item>
    <item>
      <title>Re: Good design ?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493279#M137585</link>
      <description>&lt;P&gt;Presently on a daily basis .. I am creating this CSV.. 4 columns I append this CSV daily .. then use queries to compare ..&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;Date,Page_id,Action,Time_taken
&lt;/CODE&gt;&lt;/PRE&gt;
      <pubDate>Sat, 05 Oct 2019 19:13:08 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493279#M137585</guid>
      <dc:creator>reverse</dc:creator>
      <dc:date>2019-10-05T19:13:08Z</dc:date>
    </item>
    <item>
      <title>Re: Good design ?</title>
      <link>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493280#M137586</link>
      <description>&lt;P&gt;Hi reverse, &lt;BR /&gt;
unfortunately you only provide fragmented information about what you are trying to do. From what you say, it sounds as if you keep appending data to one single CSV file. If that is the case, it would be best to use a file monitor input rather than an inputlookup to index the events. Please let me know if any of the solutions I provided worked for you.&lt;BR /&gt;
Best&lt;BR /&gt;
Oliver&lt;/P&gt;</description>
      <pubDate>Sat, 05 Oct 2019 19:20:45 GMT</pubDate>
      <guid>https://community.splunk.com/t5/Splunk-Search/Good-design/m-p/493280#M137586</guid>
      <dc:creator>ololdach</dc:creator>
      <dc:date>2019-10-05T19:20:45Z</dc:date>
    </item>
  </channel>
</rss>

