All Posts


Hi @nithin204, the way to pass a parameter to a drilldown is the one I described. Please try this: <link>search?q=$drilldown_srch|u$$&amp;earliest=$field1.earliest$&amp;latest=$field1.latest$</link> Anyway, a drilldown search usually takes the same time variables as the original. Ciao. Giuseppe
I have a "cost" for two different indexes that I want to calculate in one and the same SPL. As the "price" differs per index, I can't just use a "by" clause in my count/sum, as I don't know how to apply the separate costs that way. Let's say idxCheap costs $10 per event and idxExpensive costs $20 per event. I've written this SPL that works, although the "cost" data ends up in a separate column for each index. The count is still in the same column.

index=idxCheap OR index=idxExpensive
| stats count by index
| eval idxCheapCost = case(index="idxCheap", count*10)
| eval idxExpensiveCost = case(index="idxExpensive", count*20)

The results look like this:

count   idxCheapCost   idxExpensiveCost   index
44892   448920                            idxCheap
155                    3100               idxExpensive

Any pointers on how to achieve this most efficiently and dynamically?
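One way to fold both rates into a single cost column (a sketch, assuming the prices stay at $10 and $20 per event) is to put the per-index price inside a single case() expression so the cost lands in one column regardless of index:

```spl
index=idxCheap OR index=idxExpensive
| stats count by index
| eval cost = count * case(index="idxCheap", 10, index="idxExpensive", 20, true(), 0)
```

If more indexes are added later, a lookup mapping index name to price would avoid growing the case() by hand.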
No, exactly. I have configured two tcpout groups, one for the production servers and the other for the development server. The data is not sent to both tcpout groups, only to one. However, the internal agent logs are sent to both the development and production tcpout groups. Is it possible that the internal agent logs have the behavior you mention?
Hi @gcusello, I have to use the second $ after drilldown_srch as well, since that is a token: <link>search?q=$drilldown_srch$?earliest=$field1.earliest$&amp;latest=$field1.latest$|u$</link> If I skip the second "$" after drilldown_srch and click the value, the new search opens as $drilldown_srch in the search bar of the new window. If I use $drilldown_srch$, the search works correctly but it is not taking the time variables. It always has a default of 15 mins. Thanks
I already figured out how to get it to work.
I want to calculate the average count per day and the maximum count per month, like all the Mondays and Tuesdays of a given month combined and averaged.
Applying the stanza you referenced below worked for us as well:

[indexAndForward]
index = true
selectiveIndexing = true

Thanks!
Hi @adamsobczykhsbc, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @Strangertinz, sorry, but what do you want to calculate with avg(count) and max(count)? count isn't a field you can average or take the maximum of until you have counted something. You can get the count of events by period:

index=_internal
| bucket _time span=1d
| eval date=strftime(_time,"%a-%b")
| stats count by date

Ciao. Giuseppe
Hi @MorgenHepton, could you share your error? Are you receiving Splunk internal logs? You can check by running a simple search:

index=_internal host=<your_host>

What is your architecture? I understood that you have a stand-alone Splunk server (an all-in-one installation) and a Universal Forwarder (on a different system) that should send logs to the stand-alone system; is that correct? Running telnet from the UF to ports 9997 and 8089, can you connect to the stand-alone server? Ciao. Giuseppe
You would have to share some sample events (anonymised, of course) in order for us to determine why your rex command is not extracting the code for you.
I have used Splunk to threat hunt many times and aspire to build a distributed Splunk instance in the future. I decided to start learning the installation, configuration, and deployment process of Splunk by building a standalone instance. I got to a point where I thought I had completed all the steps necessary for a functioning Splunk setup (connections are established on 8089 and 9997) and my web page is good. As soon as my apps are pushed to my client, Splunk starts throwing an error stating that indexer queues are full. It also appears I am getting no logs from my applications. Any help is greatly appreciated.
I am using the below query to merge 2 queries using append. However, I am unable to get the value of the field named "Code" from the first query under | search "Some Logger" printed in the Statistics section:

index=* sourcetype=* host=*
| search "Some Logger"
| rex "LoggerName\|(?<time>\w+)\|(?<Service>\w+)\|(?<Type>\w+)\|(?<brand>\w+)\|(?<template>\w+)\|(?<hashId>[\w-]+)\|(?<Code>\w+)"
| table Code
| append
    [ search host=*
    | search "LoggerName2*"
    | rex field=_raw "field1=(?<field1>)\}"
    | rex field=_raw "field2=(?<field2>),"
    | rex field=_raw "field3=(?<field3>[a-zA-z-_0-9\\s]*)"
    | rex field=_raw "(?<field4>[\w-]+)$"
    | rex field=_raw "field5=(?<field5>),"
    | rex field=_raw "field6=(?<field6>),"
    | table field1,field2 ]

The result from the 2nd/child query, i.e. | search "LoggerName2*", is printing just fine in tabular format. The value of the Code field is an API response code, i.e. 2XX, 3XX, 4XX, or 5XX. Could someone please help? Thanks!
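As a quick check that the first rex is matching at all (a sketch, assuming the events really carry the literal LoggerName| prefix shown above), running the extraction on its own and counting by Code will show whether the capture group ever populates, independent of the append:

```spl
index=* sourcetype=* "Some Logger"
| rex "LoggerName\|(?<time>\w+)\|(?<Service>\w+)\|(?<Type>\w+)\|(?<brand>\w+)\|(?<template>\w+)\|(?<hashId>[\w-]+)\|(?<Code>\w+)"
| stats count by Code
```

If this returns no rows, the regex rather than the append is where Code is being lost.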
Thanks for your quick response, but the query is not working.
Try adding site to your by clauses on your stats commands
I just figured it out with guidance from https://stackoverflow.com/questions/7124778/how-can-i-match-anything-up-until-this-sequence-of-characters-in-a-regular-exp The correct rex is:

| rex field=_raw "Error\=(?<ErrDesc>.+?(?=\|\"))"

Thanks for the help!
Hi @adamsobczykhsbc, could you share some samples of your logs (both cases)? Ciao. Giuseppe
Hi @gcusello, it works for that specific case; however, my extracted field ErrDesc will not always have a pipe "|" inside. Basically, I want to match anything up to |", whether or not there is a pipe in the field.
Hi @Strangertinz, please try this:

index=_internal
| bucket _time span=1d
| eval date=strftime(_time,"%a-%b")
| stats count by _time date
| stats avg(count) as Count max(count) as maximum by date

Ciao. Giuseppe
Hi @adamsobczykhsbc, please try this:

| rex "Error\=(?<ErrDesc>[^\|]+\|[^\|]+)"

which you can test at https://regex101.com/r/mIlDeU/1 Ciao. Giuseppe