All Posts

Worked on 9.2.1; the add-on was not running.
If you know all the sourcetypes you are interested in (A, B, C, D, E, F in my example), you could do something like this:

| timechart span=1d count as event_count by sourcetype usenull=f
| foreach A B C D E F
    [| eval <<FIELD>>=coalesce(<<FIELD>>,0)
     | eval <<FIELD>>=if(<<FIELD>>==0,"No events found",<<FIELD>>)]
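As a usage sketch, assuming the data lives in index=foo over the 90-day window mentioned in the original question, and that A-F stand in for your real sourcetype names, the full search might look like this:

index=foo sourcetype IN (A, B, C, D, E, F) earliest=-90d@d
| timechart span=1d count as event_count by sourcetype usenull=f
| foreach A B C D E F
    [| eval <<FIELD>>=coalesce(<<FIELD>>,0)
     | eval <<FIELD>>=if(<<FIELD>>==0,"No events found",<<FIELD>>)]

Listing the sourcetypes explicitly matters because timechart only creates columns for sourcetypes that actually have events; the coalesce inside foreach fills the missing columns and cells with 0, and the second eval replaces those zeroes with the literal text "No events found".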
The eval command is converting bytes into gigabytes. Add another `/1024` to convert to terabytes.

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| stats sum(b) as usage by idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024/1024,2)
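If you also need the figures broken out per day (the original goal was per-day ingestion for each index), a minimal sketch, using the same license_usage.log source and field names as above, is to bucket by day before the stats:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| bin _time span=1d
| stats sum(b) as usage by _time idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024/1024,2)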
Hi @vijreddy30, see the Splunk Validated Architectures document (https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf) for what Splunk means by HA and how to implement it. For your requirements, it's really difficult to answer your question! Maybe there are replication mechanisms based on VMware that could do this, but I'm not an expert on VMware and this isn't the place for that question. Ciao. Giuseppe
Yes, the query works, however I want the values to be formatted differently within the search results. I would like the values to show in terabytes. For example, using the query I get a value of 4587.43 (in GB) for an index ingestion value; I would like this to be rounded and shown in terabytes as 4.59.
Hello, I figured it out. It was in the documentation all along. In the map settings, go to the Color and Style section and activate the Show base layer option. In the Base layer tile server field, enter the URL "https://api.maptiler.com/maps/outdoor/{z}/{x}/{y}.png?key=YourAPIKeyHere", and in the field below it select Raster. The URL above is in the Dashboard Studio Maps documentation.
So I have a data source which is very low volume and is not expected to have events at all (it only logs when there is an unexpected event). I have a requirement to produce a report showing there were no unexpected events in the last 90 days. I tried the following search query but it is not giving results per day.

index=foo
| timechart span=1d count as event_count by sourcetype
| append [| stats count as event_count | eval text="no events found"]

PS - the count you are seeing below is for the other sourcetype under the same index=foo, and the sourcetype where the count is 0 is displayed at the bottom (its sourcetype name is not shown because there are no events for it). I want the output to be specific to this sourcetype and to display count = 0 for every day where no data is present.
The "Splunk Add-on for NetApp Data ONTAP" is showing on the site as Unsupported. Splunk Add-on for NetApp Data ONTAP | Splunkbase We are trying to find out if the app can be used with REST API, sin... See more...
The "Splunk Add-on for NetApp Data ONTAP" is showing on the site as Unsupported. Splunk Add-on for NetApp Data ONTAP | Splunkbase We are trying to find out if the app can be used with REST API, since OnTAP is eliminating its support for legacy ZAPI/ONTAPI Can anyone provide information as to the long-term prospects of this or another App which would collect data from Netapp OnTAP?
Try something like this (this assumes that you want daily results based on when the get was received rather than the put; if this is different, change the bin command to use the other field):

index=myindex source=mysource earliest=-7d@d latest=@d
| eval PPut=strptime(tomcatput, "%y%m%d %H:%M:%S")
| eval PGet=strptime(tomcatget, "%y%m%d %H:%M:%S")
| stats min(PGet) as PGet, max(PPut) as PPut, values(Priority) as Priority by TRN
| eval tomcatGet2tomcatPut=round((PPut-PGet),0)
| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0)
| eval E2E_20min=if(tomcatGet2tomcatPut>300 and tomcatGet2tomcatPut<=1200,1,0)
| eval E2E_50min=if(tomcatGet2tomcatPut>1200 and tomcatGet2tomcatPut<=3000,1,0)
| eval E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total = E2E_5min + E2E_20min + E2E_50min + E2EGT50min
| bin PGet as _time span=1d
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by _time Priority
| eval good = if(Priority="High", sum_5min, if(Priority="Medium", sum_5min + sum_20min, if(Priority="Low", sum_5min + sum_20min + sum_50min, null())))
| eval Per_cal=round(100*good/sum_total,1)
| xyseries _time Priority Per_cal
That query works for me.  What results do you get and how do they not match what you want?
Hi All, I want to fetch data from Splunk into Power BI. Please suggest. I know there is a Splunk ODBC driver we can use to fetch the data, but we are using SAML authentication. Can you help with what to give for the username and password? There is also an option to use a bearer token; where and how do I use the token? I need to create a custom search to fetch the data. @gcusello your inputs are needed on this.
Yes @ITWhisperer, I have extracted all of TRN, tomcatget, Queue, TimeMQPut, Status, and Priority. You're right, tomcatput=TimeMQPut; ignore the Status, I am not using it for the response time calculation. The Splunk query I shared has the response time:

| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0)
| eval E2E_20min=if(tomcatGet2tomcatPut>300 and tomcatGet2tomcatPut<=1200,1,0)
| eval E2E_50min=if(tomcatGet2tomcatPut>1200 and tomcatGet2tomcatPut<=3000,1,0)
| eval E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total = E2E_5min + E2E_20min + E2E_50min + E2EGT50min
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by Priority

This will give the output below. Now I am creating a field called good and adding a condition: if Priority is High it should fall within sum_5min; if Priority is Medium it should fall within sum_20min, so I add sum_5min + sum_20min; if Priority is Low it should fall within sum_50min, so I add sum_5min + sum_20min + sum_50min.

| eval good = if(Priority="High", sum_5min, if(Priority="Medium", sum_5min + sum_20min, if(Priority="Low", sum_5min + sum_20min + sum_50min, null())))

After getting the good field, I am calculating the percentage of success, which displays in a table format. When I try a timechart it doesn't work as expected: timechart span=1d avg(per_cal) by Priority gives me "No results found".
The Rickest Dill around! Thank you!
| where time() - strptime(startDateTime,"%FT%T%Z") > 4*60*60 and isResolved="False"
| chart count by Alert status | addtotals col=t fieldname=Count label=Total labelfield=Alert
It depends on your overall syslog-ingesting process. As you're saying that the "device sends data to a Syslog server and then up to our splunk instance", I suppose there is a "middle-man" in the form of some syslog receiver either pushing the data to a HEC input or writing to files from which the data is picked up. In that case it depends on that "middle-man" configuration. If, however, it's just a case of slightly imprecise wording and all your devices send directly to your Splunk component, you have to make sure that you have a proper inputs configuration on that box (and proper sourcetype configs as well). As a rule of thumb, you can't have several different sourcetypes on a single tcp or udp port input with Splunk or a Universal Forwarder alone.
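As a minimal sketch of "one sourcetype per port" on a Splunk instance or Universal Forwarder, the network inputs live in inputs.conf; the port numbers, index, and sourcetype values below are placeholders, not taken from the original thread:

# inputs.conf - hypothetical example; adjust ports, index and sourcetypes to your environment
[udp://5514]
sourcetype = cisco:asa
index = network
connection_host = ip

[tcp://5515]
sourcetype = pan:firewall
index = network
connection_host = ip

Each stanza binds exactly one sourcetype to one listening port, which is why devices needing different sourcetypes must either use separate ports or go through a syslog receiver that routes and tags them first.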
Thanks for your answer, but in my case _time may only appear once in 2 days, when we have a new update and the incident resolved state changes to true. Can we do something so that startDateTime is checked against the latest run time of the scheduled alert, and an alarm is raised if it is more than 4 hours?
You can call the | datamodel <your_datamodel> [<root_node>] acceleration_search_string command to see the search string that is used to accelerate the datamodel, if that's what you want.
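For example, using the CIM Network_Traffic datamodel and its All_Traffic root dataset purely as an illustration (substitute your own datamodel and dataset names):

| datamodel Network_Traffic All_Traffic acceleration_search_string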
I am trying to get the ingestion per day in terabytes for each index. I am using the search below, which works, however the ingestion numbers are not formatted well. For example, using the search below, for one index I get a usage value of 4587.16, which would be 4.59 terabytes per day. I am looking for this number to be rounded in the search results to show as 4.59.

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| stats sum(b) as usage by idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024,2)
Is this issue resolved now, or do you need more help? This issue is with the key of the KV store certificate.