Activity Feed
- Karma Re: Is it possible to calculate 'responseTime' values in a cumulative stats table? for jeffland. 06-05-2020 12:50 AM
- Karma Re: Is the Transaction command suitable for large volumes of data and what is the benefit of using this command? for woodcock. 06-05-2020 12:48 AM
- Karma Re: How to split a multivalue field into single values? for richgalloway. 06-05-2020 12:48 AM
- Karma Re: Is the Transaction command suitable for large volumes of data and what is the benefit of using this command? for niketn. 06-05-2020 12:48 AM
- Karma Re: Is the Transaction command suitable for large volumes of data and what is the benefit of using this command? for snoobzilla. 06-05-2020 12:48 AM
- Karma Re: How to get subsearch to return a result which is NOT EQUAL to the returned value? for somesoni2. 06-05-2020 12:48 AM
- Karma Re: Alert search query to monitor for fluctuation in system performance times between today and yesterday for somesoni2. 06-05-2020 12:48 AM
- Karma Re: Using the 'where' clause as a Custom Alert Trigger condition? for renjith_nair. 06-05-2020 12:48 AM
- Karma Re: Using the 'where' clause as a Custom Alert Trigger condition? for somesoni2. 06-05-2020 12:48 AM
- Karma Re: How to replace multiple field values with the same replacement value in a search? for sundareshr. 06-05-2020 12:48 AM
- Karma Re: Why are some fields from XML data not displayed in search results? for ktugwell_splunk. 06-05-2020 12:48 AM
- Got Karma for Re: How to split a multivalue field into single values?. 06-05-2020 12:48 AM
01-22-2019
06:43 PM
Hi Jeff,
This worked well and simplified the query, thank you for taking the time to assist.
I actually did not need to reverse the sort order, as I wanted the cumulative totals running from the top of the list down.
My final requirement is to add a column to the stats table showing each "No of transactions" value as a percentage of the overall total.
I am thinking it might be best to do this in Excel.
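For what it's worth, a rough sketch of how that percentage column could be built in SPL rather than Excel (assuming the per-range count field is still called count before any rename; grand_total and pct_of_total are illustrative names only):
... | eventstats sum(count) as grand_total
| eval pct_of_total=round(count/grand_total*100, 1)
| fields - grand_total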
12-12-2018
04:15 PM
I have identified an issue with a response time stats report that was built by a former Splunk specialist at my organization and I'm having trouble identifying the root cause or developing a better solution.
The goal is to produce a stats table where the first column defines the range (in seconds) and the second column displays a count of transactions that occurred in that range.
However, it seems that the calculations do not align with my own check of the raw data which I made in Excel — I feel the ranges must be incorrectly defined.
The ranges to include in the table are as follows:
<1.0 sec (next column will include a count of transactions which have a response time value of between 0 - 1.0)
<2.0 sec (next column will include a count of transactions which have a response time value of between 0 - 2.0)
<3.0 sec (next column will include a count of transactions which have a response time value of between 0 - 3.0.. etc)
<4.0 sec
<5.0 sec
<6.0 sec
<7.0 sec
<8.0 sec
<9.0 sec
<=10.0 sec
The search query is as follows. It is line 2 that I can't get my head around and feel may be incorrect: it seems to be rounding values up, which is not appropriate as we are dealing with hard range cutoffs, i.e. 1.0 seconds, 2.0 seconds, etc.:
eventstats count as "total" |
eval in_range=round(case(responseTime<10, ceil(responseTime), responseTime>=10.0,10.0),1) |
streamstats count as cnt avg(responseTime) as run_avg |
stats first(total) as total last(run_avg) as run_avg max(cnt) as count count as cnt by in_range |
sort 0 in_range |
eval range=if(in_range>=10, ">= 10.0 sec","< "+tostring(in_range)+" sec") |
eval run_avg=round(run_avg,1) |
rename cnt as "No of Transactions"|
table range "No of Transactions"
The result of this search is a table which appears to have the correct format; however, the "No of Transactions" values do not seem to fall correctly within the defined ranges.
Second part of the problem (optional):
In addition to this, the ranges are not cumulative - i.e. the actual ranges it seems to be reporting are 0-1 sec, 1-2 sec, 2-3 sec, etc.
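For what it's worth, here is a sketch of one way the cumulative version is often approached (a sketch only, assuming responseTime is in seconds; the exact handling of values on whole-second boundaries may need adjusting):
... | eval bucket=if(responseTime>=10, 10, floor(responseTime)+1)
| stats count as bucket_count by bucket
| sort 0 bucket
| streamstats sum(bucket_count) as cumulative_count
| eval range=if(bucket>=10, "<= 10.0 sec", "< "+tostring(bucket)+".0 sec")
| rename cumulative_count as "No of Transactions"
| table range "No of Transactions"
The streamstats sum keeps a running total down the sorted buckets, which is what makes each row cumulative rather than a discrete 0-1, 1-2, 2-3 range.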
- Tags:
- stats
- streamstats
08-24-2017
08:39 PM
I am working with data from a database which produces information on transactions.
The problem is that transactions can have any number of related attributes, and transaction details will be replicated with a new line for each attribute.
In the format of:
[Transaction ID] [tab] [Attribute name] [tab] [Attribute value] [tab] [date]
Example:
11111 Amount 12000
11111 Reference 101010
11111 Operator John
11111 Subject Credit
11111 Notes XXXXXXXX
11112 Amount 75000
11112 Reference 202020
11112 Operator Will
I am trying to identify a regex for EACH attribute which will match on the following logic:
"Amount" - followed by TAB - followed by variable length NUMBER - followed by TAB
"Reference" - followed by TAB - followed by variable length NUMBER - followed by TAB
"Operator" - followed by TAB - followed by variable length STRING - followed by TAB
"Subject" - followed by TAB - followed by variable length STRING- followed by TAB
"Notes" - followed by TAB - followed by variable length STRING- followed by TAB
Currently, when I load this data into Splunk, I am using auto event breaking, so I have multiple events related to each "Request ID". I can extract the Attribute and Value, however this is obviously not ideal as I would like to extract each attribute itself, i.e. Amount, Reference, Operator, etc.
Thanks!
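For anyone reading along, a rough sketch of what per-attribute search-time extractions could look like with rex, assuming the events really are tab-delimited as described above (the extracted field names are illustrative; equivalent patterns could also live in props.conf as EXTRACT- entries):
... | rex field=_raw "Amount\t(?<Amount>\d+)\t"
| rex field=_raw "Reference\t(?<Reference>\d+)\t"
| rex field=_raw "Operator\t(?<Operator>[^\t]+)\t"
| rex field=_raw "Subject\t(?<Subject>[^\t]+)\t"
| rex field=_raw "Notes\t(?<Notes>[^\t]+)\t"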
07-09-2017
09:49 PM
Could I perhaps use:
... | transaction transaction_id
Or is the transaction command only for combining events across different fields that share the same value (in this case it is the same field)?
07-09-2017
06:58 PM
Hi,
Our system logs events in a bizarre way in which multiple lines of data will all relate to a single transaction, however each line will have a different attribute.
Current Relationship: Multiple events TO One transaction_id
Ideal relationship: One event TO One transaction_id
This also means that most data is duplicated - I want to merge these events into one event relating to the one unique transaction_id.
Is it possible to upload this raw data each month and then have it formatted into single events automatically?
Example as follows:
transaction_id status date type attribute value
10001 complete 10/10/17 request name Jenny
10001 complete 10/10/17 request company Ford
10001 complete 10/10/17 request reference 564682
10001 complete 10/10/17 request amount $12,345
I would instead like it formatted as follows:
transaction_id status date type name company reference amount
10001 complete 10/10/17 request Jenny Ford 564682 $12,345
Now all events have the same number of related attributes.
Thanks in advance!
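As a rough illustration only (it assumes fields named attribute and value are already extracted from each line, and uses first() purely for brevity), the reshaping being asked about can be sketched like this:
... | eval {attribute}=value
| stats first(status) as status first(date) as date first(type) as type first(name) as name first(company) as company first(reference) as reference first(amount) as amount by transaction_id
The eval {attribute}=value step turns each attribute/value pair into a named field, and the stats by transaction_id collapses the rows into one event per transaction.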
01-08-2017
08:36 PM
Which tab of data will trigger the alert?
The alert emails me back with all products and their response times for each day, regardless of whether they have had a 2x increase since yesterday. @somesoni2
01-08-2017
08:34 PM
@somesoni2
Below the search input box:
**697,139 events (1/1/17 12:00:00.000 AM to 1/9/17 2:19:19.000 PM)**
In the statistics tab there is just one result.
However the events tab has all of the events.
01-08-2017
08:15 PM
@renjith.nair
Today and Yesterday are not numbers; they are time-related, as you can see from the above search.
@somesoni2 advised that they are, however, related to the responsetime due to the chart function.
01-08-2017
07:55 PM
@renjith.nair
@somesoni2
My mistake - I made a typo in the previous comment - my actual search query is as follows:
source="transactionLog" type="report" earliest=-1d@d latest=now
| eval Day=if(_time>=relative_time(now(),"@d"),"Today","Yesterday")
| chart avg(responsetime) over product by Day
| search Today>2*Yesterday
01-08-2017
07:53 PM
@somesoni2
This is how I had it set up initially as you had previously suggested on a different Answer.
The problem is that this does not work.
Whether I have that where clause on the end of the search or not, I still receive the same number of results (700,000+), including all results which do not fit the clause requirements.
Hence why I have posted this question.
01-08-2017
07:23 PM
@renjith.nair
I have tested this and it still does not seem to work - as you can see, Today and Yesterday reference time periods - so when I attempt to add this to the end of my search, it still yields all of the 700,000+ results:
source="transactionLog" type="report" earliest=-1d@d latest=now
| eval Day=if(_time>=relative_time(now(),"@d"),"Today","Yesterday")
| chart avg(responsetime) over product by Day
| search where Today>2*Yesterday
The last line above is the new line.
Perhaps the fault lies in the logic within my search query.
01-08-2017
07:14 PM
@renjith.nair
Thank you for your response.
I am curious why anyone would choose to use the where clause, if the entire result list is returned by the search query anyway, rather than including the condition in the main search query as you suggested.
01-08-2017
04:50 PM
I am attempting to set up an Alert which will trigger when average response times for various products over the week have increased by at least double in comparison to the previous week.
However it is not working out exactly as I had in mind.
My search query for the alert is as follows:
source="transactionLog" type="report" earliest=-1d@d latest=now
| eval Day=if(_time>=relative_time(now(),"@d"),"Today","Yesterday")
| chart avg(responsetime) over product by Day
And then I am using a custom trigger condition as follows:
search where Today>2*Yesterday
However, the problem is that whether I add the where clause to the end of my search or not, there are still over 700,000 events returned as results - so my alert notification returns all response times for ALL products (even the ones which did not see an increase).
i.e. whether I include the where clause at the end of my search or not, there is still the same number of returned events.
This means the alert notification contains a whole lot of irrelevant data - I would ideally like to see ONLY the instances in the alert notification where the average response time has doubled, not all of the data.
I assume the WHERE clause does not actually filter out results which do not match the clause?
Is there a more suitable way to approach this?
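For reference, one commonly suggested pattern (a sketch only, assuming the chart really produces columns named Today and Yesterday) is to filter the statistics rows with where and then alert on "Number of Results" greater than zero:
source="transactionLog" type="report" earliest=-1d@d latest=now
| eval Day=if(_time>=relative_time(now(),"@d"),"Today","Yesterday")
| chart avg(responsetime) over product by Day
| where Today>2*Yesterday
The event count shown under the search bar reflects the raw events scanned either way; it is the rows in the Statistics tab that where filters and that the trigger condition evaluates.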
01-05-2017
08:46 PM
1 Karma
Have you tried something like:
eval countPercentage = countA/countB*100
01-05-2017
05:15 PM
If you understand what I am saying: if I inspect the "Day" or "Week" field, I can see it only has 2 values, Today and Yesterday.
So how can we use the where clause to compare them as though they are numbers?
where Today>1.5*Yesterday
01-05-2017
05:09 PM
@somesoni2
Should it not say something like:
eval Day=responsetime if(_time>=relative_time(now(),"@d"),"Today","Yesterday")
01-05-2017
05:08 PM
@somesoni2
I don't understand how "Today" or "Yesterday" can contain response time data, as the eval for those only refers to the _time field:
eval Day=if(_time>=relative_time(now(),"@d"),"Today","Yesterday")
01-05-2017
05:06 PM
@somesoni2 - what are your thoughts on my query above?
01-05-2017
04:59 PM
@somesoni2
I am thinking something like the below is more suitable, as it actually compares the responsetime:
source="transactionLog" type="report" earliest=-1w@w latest=@w |
stats avg(responsetime) as Previous_Week_Response |
appendcols [search source="transactionLog" type="report" earliest=@w latest=now | stats avg(responsetime) as Current_Week_Response] |
eval Week=if(_time>=relative_time(now(),"@w"),"Current_Week","Previous_Week") |
chart avg(responsetime) over product by Week |
where Current_Week_Response>=2*Previous_Week_Response
01-05-2017
04:41 PM
@somesoni2
I have a concern about the query you have provided above.
It appears that the where clause is actually comparing Today being 1.5x Yesterday - however, Today and Yesterday have no relation to the responsetime; they only relate to the _time field.
Is this correct?
01-05-2017
02:40 PM
@somesoni2
I am just curious how I can set the alert trigger conditions for this alert to function correctly.
The problem is, my search is returning over 75,000 events, and these are not restricted to the events which fit the where clause - it is simply returning all events in that time period.
However, the chart command is correctly displaying the products that fit the where clause.
Do I need to set a CUSTOM trigger as below:
Trigger condition: Custom
Custom condition: Current_Week>=2*Previous_Week
01-04-2017
08:30 PM
@somesoni2
If I wanted to edit this to compare this week's average vs last week's, could I do so by changing to the following:
earliest=-1w@w latest=now | eval Week=if(_time>=relative_time(now(),"@w"),"This week","Last week")
01-04-2017
07:03 PM
I am working with a set of transaction data wherein each transaction could relate to any of our numerous systems/products.
The data could be simplified for the sake of this question to contain fields similar to the below:
Event 1:
TransactionID: 1000
Product: System1
ResponseTime: 1234
Event 2:
TransactionID: 1001
Product: System2
ResponseTime: 4321
Event 3:
TransactionID: 1002
Product: System3
ResponseTime: 5678
Etc.
So I am interested in setting up an alert which works in the following manner:
The search query will return the average response time for each product for yesterday and compare it to the average response time for each product today; for any products where there has been an X% increase, an email alert will be sent. If possible, the alert will advise which product has experienced the fluctuation in response time.
I feel like perhaps a custom trigger condition may be suitable in this instance.
The search query below will successfully provide me with the average response times per product for yesterday:
source="transactionLog" type="report" earliest=-1d@d latest=@d | stats avg(responsetime) by product
I saw a slightly similar question on Splunk Answers in which the person was attempting to use a subsearch in order to generate his alert query - however my situation differs as I am breaking down by products, so I am not sure that I can use a Number of Results trigger condition.
UPDATE:
I have put together the below query - I feel this is suitable as it actually compares the responsetime - I am having trouble understanding the logic from some of the comments in the answers below.
source="transactionLog" type="report" earliest=-1w@w latest=@w |
stats avg(responsetime) as Previous_Week_Response |
appendcols [search source="transactionLog" type="report" earliest=@w latest=now |
stats avg(responsetime) as Current_Week_Response] |
eval Week=if(_time>=relative_time(now(),"@w"),"Current_Week","Previous_Week") |
chart avg(responsetime) over product by Week |
where Current_Week_Response>=2*Previous_Week_Response
12-27-2016
05:51 PM
@somesoni2
I am concerned that my search will end up as:
search query NOT (id=10000) OR (id=10001) OR (id=10002)
The above syntax is not valid, right?
12-27-2016
05:49 PM
@somesoni2
Based on your response above, would I need to place brackets around the subsearch, similar to below:
search query NOT ([subsearch query | return field])
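For what it's worth, a sketch of how such an exclusion subsearch is commonly written (id is an illustrative field name); the subsearch output is parenthesised automatically, and format makes the generated clause explicit:
search query NOT [ subsearch query | fields id | format ]
This would expand to something like NOT ( ( id="10000" ) OR ( id="10001" ) OR ( id="10002" ) ), which is valid syntax.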