All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi. You can extract the failures with rex, e.g.: | rex field=_raw ".*?failures(?<field_name>\w+)" (replace field_name and the surrounding pattern to match your actual event format)
I haven't confirmed it myself, but I believe this issue is resolved in 9.2.2.
Since switching to org.apache.httpcomponents.client5:httpclient5:5.3.1 (from org.apache.httpcomponents:httpclient:4.5.14) we have lost tracking of business transactions between tiers. Are there any known issues with the Java Agent not supporting httpclient5, or is there an agent version where this is known to work? This is similar to https://community.appdynamics.com/t5/Idea-Exchange/App-Agent-supporting-Apache-httpclient-5/idi-p/43192 Thanks, Steve
Well... One could argue about the "don't use RAID 0" (which actually isn't RAID at all, because there is no redundancy), since with RF>1 you provide redundancy at the whole cluster's level. But that's something we could debate at length over a beer if I ever make it to .conf
One way is to do it as @richgalloway showed - with a composite regex accounting for both orders of fields (just include possible whitespace - I don't remember whether it is included in Windows events or not). Another way is to use INGEST_EVAL with something like this for your eval: queue=if(match(_raw, "first_regex_and_so_on") AND match(_raw, "second_regex..."), "nullQueue", queue) Be aware though that it won't work for events from inputs with renderXml=true. Additionally, you could look into filtering those values out even earlier - in your forwarder's input stanza, using blacklisting.
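For reference, the INGEST_EVAL approach would live in configuration roughly like this (a sketch only - the stanza names are hypothetical, and the two match() regexes are placeholders you'd replace with your actual patterns):

```
# props.conf (on the indexer / heavy forwarder doing the parsing)
[your_windows_sourcetype]
TRANSFORMS-dropnoise = drop_logoff_noise

# transforms.conf
[drop_logoff_noise]
INGEST_EVAL = queue=if(match(_raw, "first_regex") AND match(_raw, "second_regex"), "nullQueue", queue)
```

Events matching both regexes are routed to nullQueue (discarded); everything else keeps its original queue value.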
Apart from your main question, there are three issues with your search:
1. You're using spath on the whole event, which would mean the fields are not auto-extracted. Where do you get your fields from, then? It's a bit unclear to me.
2. Are you aware of the difference between (message!=something) and (NOT message=something)?
3. A search term with a wildcard at the beginning is going to be very costly performance-wise.
OK. Having gone past that... You can use streamstats to "copy" values from an event to subsequent events. It's not clear what your search for event A is, but the general idea would be:
<base search matching both eventA and eventB conditions>
| eval firsteventid=if(<criteria matching event A>, <id field>, null())
| eval secondeventid=if(<criteria matching event B>, <id field>, null())
| streamstats time_window=30s values(firsteventid) as previousfirsteventid ```here we do the copy-over```
| where secondeventid=previousfirsteventid ```if you can expect multiple firsteventids you might need to do some multivalue matching```
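Applied to the concrete question (30-second window, matching serial numbers, the state-change and "placeholder 123" phrases), the general idea could look roughly like this - a sketch only, assuming the phrases actually appear verbatim in _raw and that serial is spath-extractable:

```
index="june_analytics_logs_prod" ("new_state: Diagnostic, old_state: Home" OR "placeholder 123") NOT message=*counts*
| spath serial output=serial_number
| eval a_time=if(searchmatch("new_state: Diagnostic, old_state: Home"), _time, null())
| streamstats time_window=30s latest(a_time) as previous_a_time by serial_number
| where searchmatch("placeholder 123") AND isnotnull(previous_a_time)
```

The streamstats carries event A's timestamp forward (per serial number, within 30s), so the final where keeps only event B occurrences that follow a matching event A.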
I need to collect data from a folder on a Windows machine. The problem is that this folder is mounted as a disk, and the host sends data to it. The classic inputs.conf monitor input for the folder does not work. How can I fix this problem?
I have been looking into Smart Agents to simplify agent management, and our applications run on Microsoft Windows. Looking at the documentation, a Smart Agent installer does not appear to be available for Windows? I am surprised, given Windows' widespread installed base. Please correct me if I am wrong; I would like to know more about this. Update: Please ignore the above - there is an installation package available for Windows here: Smart Agent (appdynamics.com). Thanks.
I have a search which yields a time and correlated serial number for event A. I want to use this time and serial number to search for event B; event B must meet criteria X.
index="june_analytics_logs_prod" message="* new_state: Diagnostic, old_state: Home*" NOT message=*counts*
| spath serial output=serial_number
| table _time, serial_number ```table command is just for readability```
Criteria X:
- Event B must occur within 30s immediately after event A
- Event B must have the same serial number as event A
- Event B's message field must contain the phrase "placeholder 123"
I want to extract data from any event that matches criteria X. How can I use this data from event A to search for event B? Capture attached to show what the current table looks like.
I develop an app on a private Splunk Enterprise server and have a piece of code that accesses the REST API:
# Use Splunk REST API to get all input parameters
splunkd_uri = os.environ.get("SPLUNKD_URI", "https://127.0.0.1:8089")
endpoint = f"{splunkd_uri}/servicesNS/nobody/{app_name}/data/inputs/{app_name}"
headers = {
    'Authorization': f'Splunk {session_key}'
}
response = requests.get(endpoint, headers=headers, verify=False, timeout=30)
Everything works locally, but when I run AppInspect before submitting to Splunk Cloud I get:
FAILURE: If you are using requests.get to talk to your own infra with non-public PKI, make sure you bundle your own CA certs as part of your app and pass the path into requests.get as an arg.
File: bin\utils\splunk_rest.py
Line Number: 19
I am trying to understand how to solve this issue, because if I bundle a CA that matches the server I am working on, it will not satisfy the Splunk Cloud server that my clients will use. I think I am misunderstanding a core piece of how to use the REST API programmatically. What is the correct way to go about this? Can it work both for Splunk Enterprise and on Splunk Cloud? Any clue or tip may help. Thanks
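One common pattern, following what the AppInspect message suggests, is to bundle the CA file with the app and fall back to normal verification when it isn't present - a sketch only (the certs/ca.pem location and app_root variable are assumptions, not an actual app layout):

```python
import os

def resolve_verify(app_root):
    """Return the value to pass as requests' `verify` argument.

    If the app bundles a CA file at <app_root>/certs/ca.pem (a
    hypothetical location), trust it for private-PKI deployments;
    otherwise fall back to the system trust store. Never return
    False -- verify=False is exactly what AppInspect flags.
    """
    ca_path = os.path.join(app_root, "certs", "ca.pem")
    if os.path.isfile(ca_path):
        return ca_path   # private PKI: verify against the bundled CA
    return True          # public PKI (e.g. Splunk Cloud): system store

# hypothetical usage:
# response = requests.get(endpoint, headers=headers,
#                         verify=resolve_verify(app_root), timeout=30)
```

This way the same code works on an on-prem server (ship the CA in the app) and on Splunk Cloud (no bundled CA, so the standard trust store is used).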
@Alankrit - Below is the search you can use, but just to clarify a few points: this search is not very efficient and is not meant for reporting - it's only meant for troubleshooting, so you can find the source of the duplicate events.
index=* sourcetype=* host=*
| stats count by index, sourcetype, host, _raw
| where count>1
I hope this helps!!! Kindly upvote if it does!!!
Try putting the two expressions together, separated by [\s\S]+ to represent any intervening text.
EventCode=4634[\s\S]+Security_ID=".+?\$$"
If the order of fields might vary, use this variation to match both orders.
(?:EventCode=4634[\s\S]+Security_ID=".+?\$$")|(?:Security_ID=".+?\$$"[\s\S]+EventCode=4634)
Some general recommendations:
- Keep the OS, $SPLUNK_HOME, and $SPLUNK_DB on separate mount points
- Don't use NFS
- Avoid RAID 0
- Use a supported file system
Partition size depends on the instance type and the amount of data to be stored. 300GB is recommended for non-indexers. Indexer storage needs depend on index retention, replication, and use of SmartStore.
Same issue here. Looking into this, a colleague of mine created a separate Python script to bypass it, but now the app only collects the first subscription - it looks like the app sees one and then stops.
Thank you for your notable comment. I suspected that my configuration didn't work because of indexed extraction, but I didn't have time to check and I wasn't sure about it. As for the preamble, I tested the settings that you mentioned a couple of times, but each time it worked worse than the nullQueue approach. Maybe I just wasn't attentive enough. :)
Yup. That is one of the ways to handle it.
That does indeed seem strange, because you should be getting events into an index regardless of any add-on. The only things that could make you not see the events would be bad timestamp parsing (but that would happen regardless of the destination index), a bad time range in your search (ditto), or no permissions for the cisco index (but since you say you created the index, I'm assuming you've got admin rights here). Try running | tstats count where index=cisco by source, sourcetype over All Time and see if you get any results.
Hi @fahimeh, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people in the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated
Hi @PickleRick, thanks for your reply, and for the knowledge that source can be used in the first position. Sorry, I didn't know that, because in many cases it was always solved when I added index=* before the source. With your query I only get a count of 0, so I think it's because my client doesn't ingest into the endpoint. Thank you for your reply and your information. Danke
Thank you, I will test it and tell you exactly how it worked.