All Posts

Hi All, I want to fetch data from Splunk into Power BI. Please suggest an approach. I know there is a Splunk ODBC driver we can use to fetch the data, but we are using SAML authentication. Can you help with what to give for the username and password? There is also an option to use a bearer token; where and how do I use the token? I need to create a custom search to fetch the data. @gcusello your inputs are needed on this.
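A side note that may help here: Splunk's REST API accepts token authentication via an Authorization header, independent of the ODBC driver. A minimal sketch of exporting search results as CSV with a bearer token - the host, port, token, and search string below are placeholders, not values from this thread:

curl -k https://your-splunk-host:8089/services/search/jobs/export \
  -H "Authorization: Bearer <your-token>" \
  -d search="search index=main | stats count by sourcetype" \
  -d output_mode=csv

Power BI's Web connector can often consume a CSV endpoint like this.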
Yes @ITWhisperer, I have extracted all of TRN, tomcatget, Queue, TimeMQPut, Status, and Priority. You're right, tomcatput = TimeMQPut; ignore the status, I am not using it for the response time calculation. The Splunk query I shared has the response time:

| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0)
| eval E2E_20min=if(tomcatGet2tomcatPut>300 and tomcatGet2tomcatPut<=1200,1,0)
| eval E2E_50min=if(tomcatGet2tomcatPut>1200 and tomcatGet2tomcatPut<=3000,1,0)
| eval E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total = E2E_5min + E2E_20min + E2E_50min + E2EGT50min
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by Priority

This gives the output below. Now I am creating a field called good and adding a condition:
If Priority is High, it should be within sum_5min.
If Priority is Medium, it should be within sum_20min, so I add sum_5min + sum_20min.
If Priority is Low, it should be within sum_50min, so I add sum_5min + sum_20min + sum_50min.

| eval good = if(Priority="High", sum_5min, if(Priority="Medium", sum_5min + sum_20min, if(Priority="Low", sum_5min + sum_20min + sum_50min, null())))

After getting the good field data, I calculate the percentage of success, which displays fine in a table format. But when I try a timechart it doesn't work as expected:

timechart span=1d avg(per_cal) by Priority

gives me "no results found".
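A likely reason the timechart shows no results: timechart needs _time, but stats ... by Priority discards it, so there is nothing left to plot. A minimal sketch, assuming per_cal is good as a percentage of sum_total (its definition isn't shown above), is to bin _time before the stats and carry it through:

| bin _time span=1d
| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0), E2E_20min=if(tomcatGet2tomcatPut>300 AND tomcatGet2tomcatPut<=1200,1,0), E2E_50min=if(tomcatGet2tomcatPut>1200 AND tomcatGet2tomcatPut<=3000,1,0), E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total=E2E_5min+E2E_20min+E2E_50min+E2EGT50min
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by _time Priority
| eval good=if(Priority="High", sum_5min, if(Priority="Medium", sum_5min+sum_20min, if(Priority="Low", sum_5min+sum_20min+sum_50min, null())))
| eval per_cal=round(good*100/sum_total,2)
| xyseries _time Priority per_cal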
The Rickest Dill around! Thank you!
| where time() - strptime(startDateTime,"%FT%T%Z") > 4*60*60 and isResolved="False"
| chart count by Alert status | addtotals col=t fieldname=Count label=Total labelfield=Alert
It depends on your overall syslog-ingesting process. As you're saying that "device sends data to a Syslog server and then up to our splunk instance", I suppose there is a "middle-man" in the form of some syslog receiver either pushing the data to a HEC input or writing to files from which the data is picked up. In this case it depends on that middle-man's configuration. If however it's just a case of slightly imprecise wording and all your devices send directly to your Splunk component, you have to make sure that you have a proper inputs configuration on that box (and proper sourcetype configs as well). As a rule of thumb, you can't have several different sourcetypes on a single tcp or udp port input with Splunk or a Universal Forwarder alone.
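To illustrate that rule of thumb, a minimal inputs.conf sketch for the direct-to-Splunk case - the ports, sourcetypes, and index below are made-up placeholders, with one port per sourcetype:

[udp://5514]
sourcetype = cisco:asa
index = network

[udp://5515]
sourcetype = pan:traffic
index = network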
Thanks for your answer, but in my case _time may only appear once in 2 days, when we get a new update and the incident resolved state changes to true. Can we do something so that startDateTime is checked against the latest run time of the scheduled alert, and an alarm is raised if the difference is more than 4 hours?
You can call the | datamodel <your_datamodel> [<root_node>] acceleration_search_string command to see the search used to accelerate the datamodel, if that's what you want.
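For example, assuming the CIM Network_Traffic datamodel with its All_Traffic root dataset (substitute your own model and dataset names):

| datamodel Network_Traffic All_Traffic acceleration_search_string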
I am trying to get the ingestion per day in terabytes for each index. I am using the below search, which works; however, the ingestion numbers are not formatted well. For example, using the below search, for one index I get a usage value of 4587.16, which would be 4.59 terabytes per day. I am looking for this number to be rounded in the search results to show as 4.59.

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| stats sum(b) as usage by idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024,2)
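For what it's worth: the eval above divides bytes by 1024 three times, which yields gigabytes - hence 4587.16 rather than 4.59. Dividing by 1024 a fourth time reports terabytes directly; a minimal sketch:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| stats sum(b) as usage by idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024/1024,2)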
Is this issue resolved now, or do you need more help? This issue is with the key of the KV store certificate.
Hi Team, I am caught in a maze of how to use the stats function to get the data in the format I expect. Sample data: we have alerts with different status values. Alert and status are field names.

Alert                values(status)                    Total_Count
001_Phishing_Alert   In progress, Resolved, On-Hold    5
002_Malware_alert    In-progress, Resolved             6
003_DLP_Alert        In-Progress                       4

Desired/expected output: split the count by each individual status value.

Alert                Count   In-Progress   Resolved   On-Hold
001_Phishing_Alert   5       3             1          1
002_Malware_Alert    6       3             3          0
003_DLP_alert        4       4             0          0
Total                15      8             4          1

I am trying:

|..base search | stats count by Alert, status

OR

|..base search.. | stats count, values(status) by Alert

but nothing shows the desired output. Can someone please assist?
Honestly - I have no idea what those tables are supposed to represent. I understand that there are two separate "zones" and you have some servers in them. How many servers, and what roles do they have? Is there any connectivity between those zones, or are they completely air-gapped? If so, how do you handle licensing? What do you mean by "implement high availability"? On which layer? Architecting an environment from scratch is usually a relatively big and important task, so while this forum is a good place for asking general architecture-related questions, it's not a replacement for the work of a properly trained Splunk pre-sales team or Splunk Partner engineers.
Hi @spl_unker, we are facing a similar issue to what you faced a year ago. Our short description is also getting truncated after 80 characters, and when I checked the code snippet it has the same details as you showed in your post:

FIELD_SEPARATOR = "||"
INDEX_LENGTH = 80

Did you find the answer for this? Does it have anything to do with INDEX_LENGTH? Should we change the index length and redeploy? Your answer on this would be highly appreciated.
OK. If it's just for testing the functionality, I won't be bugging you about it too much. Just remember that apart from very specific cases, index-time extractions are best avoided.

But back to the point - if you want to extract a field from a previously extracted field, you need two separate transforms and you must make sure they are triggered in the proper order. First define a transform which extracts a field (or set of fields) from the raw data, then define another transform which extracts your field from the already extracted field. As a bonus you might (if you don't need it indexed) add yet another transform to "delete" the field extracted in the first step (by setting it to null() using INGEST_EVAL).

Example:

transforms.conf:

[test_extract_payload]
REGEX = payload:\s"([^"]+)"
FORMAT = payload::$1
WRITE_META = true

[test_extract_site]
REGEX = site:\s(\S+)
FORMAT = site::$1
WRITE_META = true
SOURCE_KEY = payload

props.conf:

[my_sourcetype]
TRANSFORMS-extract-site-from-payload = test_extract_payload, test_extract_site

This way you'll get your site field extracted from an event containing

payload: "whatever whatever site: site1 whatever"

but not from just

whatever whatever site: site1 whatever

or

payload: "whatever whatever" site: site1
Ok, I understand, but do you know if another way exists? For example, modifying limits.conf in this way (screenshot omitted), or trying to work with transforms.conf (screenshot omitted)? What do you think? Thank you so much.
| where _time - strptime(startDateTime,"%FT%T%Z") > 4*60*60 and isResolved="False"
Howdy all, perhaps someone can help me remember the SPL query that lists out the datasets as fields in the data model. It's something like:

| datamodel SomeDataModel accelerate....(I don't remember the rest and can't find the relevant notes I had)

Thanks in advance.
Hello everyone, recently I installed the Splunk DB Connect app (3.16.0) on my Splunk heavy forwarder (9.1.1). As per the documentation I installed a JRE and the MySQL add-on, and created identities, connections, and inputs. But when I check for the data, it is not getting ingested. So I enabled debug mode, checked the logs, and found a HEC token error. But the HEC token is configured in the inputs.conf file, and I can see the same in the Splunk web GUI under Data inputs -> HTTP Event Collector. Could you please help if anyone has faced this error before?

Errors:

ERROR org.easybatch.core.job.BatchJob - Unable to write records java.io.IOException: There are no Http Event Collectors available at this time.
ERROR c.s.d.s.dbinput.recordwriter.CheckpointUpdater - action=skip_checkpoint_update_batch_writing_failed java.io.IOException: There are no Http Event Collectors available at this time.
ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch java.io.IOException: There are no Http Event Collectors available at this time.
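For anyone comparing configurations: this error is typically raised when DB Connect cannot find an enabled HEC endpoint to write to. A minimal sketch of the HEC side of inputs.conf - the stanza name, token, port, and index are placeholders, and note that the global [http] stanza must be enabled, not just the token stanza:

[http]
disabled = 0
port = 8088

[http://dbconnect_hec]
disabled = 0
token = 00000000-0000-0000-0000-000000000000
index = db_inputs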
It does depend on what values you have in your 25k dropdown list, but if we assume that the list is generated dynamically with some sort of search, your search could include a filter based on a token from a much smaller dropdown, so that you can limit your results to under 1000.
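A minimal Simple XML sketch of that cascading pattern - the lookup name and field names (hosts.csv, region, host) are made-up placeholders:

<fieldset submitButton="false">
  <!-- small dropdown: pick a coarse filter first -->
  <input type="dropdown" token="region">
    <label>Region</label>
    <search>
      <query>| inputlookup hosts.csv | stats count by region</query>
    </search>
    <fieldForLabel>region</fieldForLabel>
    <fieldForValue>region</fieldForValue>
  </input>
  <!-- large dropdown: constrained by the region token, so it stays under the limit -->
  <input type="dropdown" token="host">
    <label>Host</label>
    <search>
      <query>| inputlookup hosts.csv | search region="$region$" | stats count by host</query>
    </search>
    <fieldForLabel>host</fieldForLabel>
    <fieldForValue>host</fieldForValue>
  </input>
</fieldset>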
Dear All, I want to set up an alert on an event. The event contains three timestamps: New Event time, Last update, and startDateTime. These are logs coming from MS365. I want an alert if the field "Incident Resolved = False" is still satisfied 4 hours after startDateTime. We receive the first event at startDateTime, but we don't want an alert until 4 hours after startDateTime.

startDateTime: 2024-07-01T09:00:00Z