All Posts

So I have a data source which is very low volume and is not expected to have events at all (it only logs when an unexpected event occurs). I have a requirement to produce a report showing that there were no unexpected events in the last 90 days. I tried the following search query, but it does not give results per day.

index=foo
| timechart span=1d count as event_count by sourcetype
| append [| stats count as event_count | eval text="no events found"]

PS - the count you are seeing below is for the other sourcetype under the same index=foo; the sourcetype where the count is 0 is displayed at the bottom (the sourcetype name is not shown because there are no events for it). I want my output to be specific to this sourcetype and to display count = 0 for every day where no data is present.
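A minimal sketch of one common way to get an explicit zero for every empty day, assuming (hypothetically) that the quiet sourcetype is called unexpected_events: generate one row per day with gentimes, append the real daily counts for that sourcetype only, and keep the maximum per day so days with no events stay at 0.

| gentimes start=-90 increment=1d
| eval _time=starttime, event_count=0
| append
    [ search index=foo sourcetype=unexpected_events earliest=-90d@d latest=@d
      | timechart span=1d count as event_count ]
| stats max(event_count) as event_count by _time
| sort 0 _time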
The "Splunk Add-on for NetApp Data ONTAP" is showing on the site as Unsupported. Splunk Add-on for NetApp Data ONTAP | Splunkbase We are trying to find out if the app can be used with REST API, sin... See more...
The "Splunk Add-on for NetApp Data ONTAP" is showing on the site as Unsupported. Splunk Add-on for NetApp Data ONTAP | Splunkbase We are trying to find out if the app can be used with REST API, since OnTAP is eliminating its support for legacy ZAPI/ONTAPI Can anyone provide information as to the long-term prospects of this or another App which would collect data from Netapp OnTAP?
Try something like this (this assumes that you want daily results based on when the get was received rather than the put; if this is different, change the bin command to use the other field):

index=myindex source=mysource earliest=-7d@d latest=@d
| eval PPut=strptime(tomcatput, "%y%m%d %H:%M:%S")
| eval PGet=strptime(tomcatget, "%y%m%d %H:%M:%S")
| stats min(PGet) as PGet, max(PPut) as PPut, values(Priority) as Priority by TRN
| eval tomcatGet2tomcatPut=round((PPut-PGet),0)
| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0)
| eval E2E_20min=if(tomcatGet2tomcatPut>300 and tomcatGet2tomcatPut<=1200,1,0)
| eval E2E_50min=if(tomcatGet2tomcatPut>1200 and tomcatGet2tomcatPut<=3000,1,0)
| eval E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total = E2E_5min + E2E_20min + E2E_50min + E2EGT50min
| bin PGet as _time span=1d
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by _time Priority
| eval good = if(Priority="High", sum_5min, if(Priority="Medium", sum_5min + sum_20min, if(Priority="Low", sum_5min + sum_20min + sum_50min, null())))
| eval Per_cal=round(100*good/sum_total,1)
| xyseries _time Priority Per_cal
That query works for me.  What results do you get and how do they not match what you want?
Hi All, I want to fetch data from Splunk into Power BI. Please suggest an approach. I know there is a Splunk ODBC driver we can use to fetch the data, but we are using SAML authentication. Can you help with what to give for the username and password? There is also an option to use a bearer token; where and how do I use the token? I need to create a custom search to fetch the data. @gcusello your inputs are needed on this.
Yes @ITWhisperer, I have extracted TRN, tomcatget, Queue, TimeMQPut, Status, and Priority. You're right that tomcatput=TimeMQPut; ignore the Status field, I am not using it for the response time calculation. The Splunk query I shared has the response time:

| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0)
| eval E2E_20min=if(tomcatGet2tomcatPut>300 and tomcatGet2tomcatPut<=1200,1,0)
| eval E2E_50min=if(tomcatGet2tomcatPut>1200 and tomcatGet2tomcatPut<=3000,1,0)
| eval E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total = E2E_5min + E2E_20min + E2E_50min + E2EGT50min
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by Priority

This gives the output below. Now I am creating a field called good and adding a condition: if Priority is High it should count sum_5min; if Priority is Medium it should also count sum_20min, so sum_5min + sum_20min; if Priority is Low it should also count sum_50min, so sum_5min + sum_20min + sum_50min.

| eval good = if(Priority="High", sum_5min, if(Priority="Medium", sum_5min + sum_20min, if(Priority="Low", sum_5min + sum_20min + sum_50min, null())))

After getting the good field, I calculate the percentage of success, which displays in a table format. When I try a timechart it doesn't work as expected:

timechart span=1d avg(per_cal) by Priority

gives me "No results found".
The Rickest Dill around! Thank you!
| where time() - strptime(startDateTime,"%FT%T%Z") > 4*60*60 and isResolved="False"
| chart count by Alert status | addtotals col=t fieldname=Count label=Total labelfield=Alert
It depends on your overall syslog-ingesting process. As you're saying that "device sends data to a Syslog server and then up to our splunk instance", I suppose there is a "middle-man" in the form of some syslog receiver either pushing the data to a HEC input or writing it to files from which the data is picked up. In this case it depends on that "middle-man's" configuration. If, however, it's just a case of slightly imprecise wording and all your devices send directly to your Splunk component, you have to make sure that you have a proper inputs configuration on that box (and proper sourcetype configs as well). As a rule of thumb, you can't have several different sourcetypes on a single TCP or UDP port input with Splunk or the Universal Forwarder alone.
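To illustrate that last point, a minimal sketch of an inputs.conf on the receiving Splunk instance or forwarder, assuming (hypothetically) that firewalls send syslog to UDP 5141 and Linux hosts to UDP 5142, so each port carries exactly one sourcetype; the ports, sourcetypes, and index names here are placeholders:

[udp://5141]
sourcetype = cisco:asa
index = network
connection_host = ip

[udp://5142]
sourcetype = syslog
index = os
connection_host = ip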
Thanks for your answer, but in my case _time may only appear once in two days, when we have a new update and the incident resolved state changes to true. Can we do something so that startDateTime is checked against the latest time of the scheduled alert, and an alarm is raised if the difference is more than 4 hours?
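A sketch of one way to do that, assuming (hypothetically) that each incident record carries an incidentId field and that the search runs as a scheduled alert: keep only the latest record per incident and compare its startDateTime against the wall-clock time of the run (now()) rather than against the _time of individual events. The index and field names here are placeholders.

index=my_incidents
| stats latest(startDateTime) as startDateTime latest(isResolved) as isResolved by incidentId
| where now() - strptime(startDateTime,"%FT%T%Z") > 4*60*60 AND isResolved="False"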
You can call the | datamodel <your_datamodel> [<root_node>] acceleration_search_string command to see the search that is used to accelerate the datamodel, if that's what you want.
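For example, assuming (hypothetically) that the CIM Network_Traffic datamodel is installed, with All_Traffic as its root node:

| datamodel Network_Traffic All_Traffic acceleration_search_string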
I am trying to get the ingestion per day in terabytes for each index. I am using the search below, which works; however, the ingestion numbers are not formatted well. For example, using the search below, for one index I get a usage value of 4587.16, which would be 4.59 terabytes per day. I am looking for this number to be rounded in the search results to show as 4.59.

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| stats sum(b) as usage by idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024,2)
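A sketch of one way to get a daily, per-index figure already scaled to terabytes: keep the existing conversion but add one more division, and bin by day. Dividing by 1024 a fourth time gives binary tebibytes; to match the 4.59 figure quoted above (the gibibyte value divided by 1000), change the final 1024 to 1000.

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| bin _time span=1d
| stats sum(b) as usage by _time idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024/1024,2)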
Is this issue resolved now, or do you need more help? The issue is with the key of the KV store certificate.
Hi Team, I am caught in a maze of how to use the stats function to get the data in the format I expect. Sample data: we have alerts with different status values. Alert and status are field names.

Alert                 values(status)                    Total_Count
001_Phishing_Alert    In progress, Resolved, On-Hold    5
002_Malware_alert     In-progress, Resolved             6
003_DLP_Alert         In-Progress                       4

Desired / expected output: I want to split the count by each individual status value.

Alert                 Count    In-Progress    Resolved    On-Hold
001_Phishing_Alert    5        3              1           1
002_Malware_Alert     6        3              3           0
003_DLP_alert         4        4              0           0
Total                 15       8              4           1

I am trying:

|..base search | stats count by Alert, status

OR

|..base search.. | stats count, values(status) by Alert

but nothing works out to show the desired output. Can someone please assist?
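A sketch of one way to get that layout (it mirrors the chart/addtotals answer that appears elsewhere on this page), assuming the fields are literally named Alert and status:

<your base search>
| chart count over Alert by status
| addtotals col=t fieldname=Count label=Total labelfield=Alert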
Honestly - I have no idea what those tables are supposed to represent. I understand that there are two separate "zones" and you have some servers in them. How many servers are there, and what roles do they have? Is there any connectivity between those zones, or are they completely air-gapped? If so, how do you handle licensing? What do you mean by "implement high availability"? On which layer? Architecting an environment from scratch is usually a relatively big and important task, so while this forum is a good place for asking general architecture-related questions, it's not a replacement for the work of a properly trained Splunk pre-sales team or Splunk Partner engineers.
Hi @spl_unker, we are facing a similar issue to the one you faced a year ago. Our short description is also getting truncated after 80 characters, and when I checked the code snippet it has the same details as you showed in your post:

FIELD_SEPARATOR = "||"
INDEX_LENGTH = 80

Did you find the answer for this? Does it have anything to do with INDEX_LENGTH? Should we change the index length and redeploy? Your answer on this will be highly appreciated.
OK. If it's just for testing the functionality, I won't bug you about it too much. Just remember that, apart from very specific cases, index-time extractions are best avoided. But back to the point - if you want to extract a field from a previously extracted field, you need two separate transforms and you need to make sure they are triggered in the proper order. So you first define a transform which extracts a field (or set of fields) from the raw data, and then define another transform which extracts your field from the already extracted field. As a bonus, you might (if you don't need it indexed) add yet another transform to "delete" the field extracted in the first step (by setting it to null() using INGEST_EVAL). Example:

transforms.conf:

[test_extract_payload]
REGEX = payload:\s"([^"]+)"
FORMAT = payload::$1
WRITE_META = true

[test_extract_site]
REGEX = site:\s(\S+)
FORMAT = site::$1
WRITE_META = true
SOURCE_KEY = payload

props.conf:

[my_sourcetype]
TRANSFORMS-extract-site-from-payload = test_extract_payload, test_extract_site

This way you'll get your site field extracted from an event containing

payload: "whatever whatever site: site1 whatever"

but not from just

"whatever whatever site: site1 whatever"

or from

payload: "whatever whatever" site: site1
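As a sketch of the "bonus" step mentioned above, assuming you do not need payload kept as an indexed field, a third transform (hypothetical stanza name test_drop_payload) can null it out once site has been extracted:

transforms.conf:

[test_drop_payload]
INGEST_EVAL = payload:=null()

props.conf:

[my_sourcetype]
TRANSFORMS-extract-site-from-payload = test_extract_payload, test_extract_site, test_drop_payload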
Ok, I understand, but do you know if another way exists? For example, modifying limits.conf in this way, or trying to work with transforms.conf? What do you think? Thank you so much.
| where _time - strptime(startDateTime,"%FT%T%Z") > 4*60*60 and isResolved="False"