All Posts


@ITWhisperer File1.csv has the session_id.
We are seeing Timeout and Authentication errors while collecting data from the OTel Kubernetes collector through HEC. Could anyone please let me know if there is a need to change limits in the config files?

Below are the errors:

2023-09-26T14:47:17.613Z info exporterhelper/queued_retry.go:433 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "metrics", "name": "splunk_hec/platform_metrics", "error": "Post \"https://xyz:8088/services/collector\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)", "interval": "2.769200676s"}

2023-09-26T14:47:11.590Z error exporterhelper/queued_retry.go:401 Exporting failed. The error is not retryable. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "splunk_hec/platform_metrics", "error": "Permanent error: \"HTTP/1.1 401 Unauthorized\\r\\nContent-Length: 148\\r\\nCache-Control: private\\r\\nConnection: Keep-Alive\\r\\nContent-Type: text/xml; charset=UTF-8\\r\\nDate: Tue, 26 Sep 2023 14:47:11 GMT\\r\\nServer: Splunkd\\r\\nVary: Authorization\\r\\nX-Content-Type-Options: nosniff\\r\\nX-Frame-Options: SAMEORIGIN\\r\\n\\r\\n<?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?>\\n<response>\\n <messages>\\n <msg type=\\\"WARN\\\">call not properly authenticated</msg>\\n </messages>\\n</response>\\n\"", "dropped_items": 31}
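In case it helps while waiting for answers: the 401 "call not properly authenticated" normally means the HEC token in the collector config is missing, wrong, or disabled on the Splunk side, and the Client.Timeout error usually points at network latency or an overloaded HEC endpoint rather than limits.conf. Below is a minimal sketch of the relevant exporter settings in the collector config, assuming the token is passed via a SPLUNK_HEC_TOKEN environment variable and that k8s_metrics is your metrics index; both are assumptions to adjust for your environment, not the definitive fix.

exporters:
  splunk_hec/platform_metrics:
    token: "${SPLUNK_HEC_TOKEN}"          # must match an enabled HEC token on xyz:8088, otherwise HEC returns 401
    endpoint: "https://xyz:8088/services/collector"
    index: "k8s_metrics"                  # assumption: replace with an index the token is allowed to write to
    timeout: 30s                          # raise if "Client.Timeout exceeded while awaiting headers" persists
    retry_on_failure:
      enabled: true                       # retryable errors such as the timeout are re-sent automatically
    tls:
      insecure_skip_verify: false         # set to true only for self-signed test certificates

If the 401 persists after checking the token, verify it with a manual request against the HEC endpoint before changing any collector limits.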
With every new major release of Splunk, more and more components add audit logs, and the volume of audit logs has increased significantly. The UI, search scheduler, search dispatcher, etc. all generate audit logs. If the auditqueue is full, these components get serialized while inserting audit events into the auditqueue, resulting in skipped searches, slow UI logins, etc. To mitigate the problem, apply the following workaround, which uses file monitoring instead of in-memory indexing.

Workaround

Disable direct indexing of audit events and instead fall back on file monitoring. This decouples scheduler/UI threads from the ingestion pipeline queues.

Steps:

1. In etc/system/local/audit.conf (or any audit.conf you like), turn off audit trail direct indexing:

[auditTrail]
queueing=false

Then add a stanza in etc/system/local/inputs.conf (or any inputs.conf you like) to monitor audit.log:

[monitor://$SPLUNK_HOME/var/log/splunk/audit.log*]
index = _audit
source = audittrail
sourcetype = audittrail

2. Stop Splunk.
3. Delete all audit.log* files (to avoid re-ingestion). This step is optional if you don't mind duplicate audit events.
4. Start Splunk.
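If it is useful, a quick way to confirm the workaround is behaving after the restart (a minimal sketch; the time range is just an example) is to check that audit events are still arriving in _audit via the file monitor:

index=_audit sourcetype=audittrail earliest=-15m
| stats count by host

A non-zero count per host indicates the monitored audit.log files are being ingested in place of the direct in-memory path.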
Hi @Jubin.Patel, we are having the same issue. Were you able to get it resolved?
A blocked auditqueue can cause random skipped searches, scheduler slowness on SH/SHC, and a slow UI.
I get the same error. I can't see my certifications either........
Got a nastygram for posting links; search online for freeload101 GitHub, in SCRIPTS, nmap_fruit.sh.
nmap XML to SPLUNK HEC !!!    https://github.com/freeload101/SCRIPTS/blob/b3f83288a9f289d86f6cdd04898478d0427097ce/Bash/NMAP_FRUIT.sh#L80    
In the following description, point 1 says to download and install the AppD app from the SNOW store. But then you say you need to do something else before doing the thing at point number 1. Am I mistaken in understanding the order and logic of how this is communicated? Regards
session_id doesn't appear to exist in both lookups, so you won't be able to "join" using that. If you mean you want to "join" by id, then a simple lookup should work:

| inputlookup File1.csv
| lookup File2.csv id

Alternatively, if you want to use both the id and the operation name, you could try something like this:

| inputlookup File1.csv
| lookup File2.csv id operation_name
Please share your full search (anonymised as necessary), preferably in a code block </> to preserve formatting.
Hi, is there a way to change which versions of forwarders are shown as in support in the Cloud Monitoring Console? At the moment, v9.1.1 is showing as out of support. Many Thanks
The _time field looks something like "2023-09-06T18:30:00.000+00:00" in the lookup CSV, whereas in the results generated by the query it looks like "2023-09-06 18:30:00". I tried converting the _time field as suggested, with the help of one of the solutions you provided earlier (Solved: Re: convert date to epoch - Splunk Community), but no luck. Can you please help with the query?
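A hedged sketch of one way to line the two formats up, assuming the lookup file is called mylookup.csv and its time column is literally named _time (both are assumptions): parse the lookup's ISO-8601 timestamp into epoch with strptime, then render it in the same "%Y-%m-%d %H:%M:%S" shape as the search results before comparing or joining:

| inputlookup mylookup.csv
| eval _time = strptime(_time, "%Y-%m-%dT%H:%M:%S.%3N%:z")
| eval _time = strftime(_time, "%Y-%m-%d %H:%M:%S")

If the lookup field has a different name, substitute it; the format strings are the part that matters, and if strptime returns null, adjust them to match your exact data.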
Hi All, I have two CSV files:

File1.csv -> id, operation_name, session_id
File2.csv -> id, error, operation_name

I want to list the entries based on session_id, like -> id, operation_name, session_id, error. Basically, all the entries from File1.csv for the session_id, plus the errors from File2.csv. Could you please help with how to combine these CSVs?

Note: I am storing the data to CSV as an output lookup, since I couldn't find a way to search these via a single query, so I am trying to join from the CSVs.
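A minimal sketch of one way to get the requested columns, assuming id is the common key between the two files (as noted in the replies, session_id only exists in File1.csv):

| inputlookup File1.csv
| lookup File2.csv id OUTPUT error
| table id operation_name session_id error

If a single id can have several errors in File2.csv, the lookup returns them as a multivalue field; a stats- or join-based approach would be needed if you want one row per error instead.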
Hi, in the official compatibility matrix there is no longer a column for Indexer 8.0.x, as it is no longer supported. https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers   Does anyone know up to which version the Universal Forwarder is compatible with an 8.0.x indexer (with an 8.0.x heavy forwarder in front)?
Not much to go on here. How are your forwarders configured? Have they ever worked? Do you have network connectivity between your forwarders and indexers? Are there any errors or other messages in the logs where your forwarders are running? What have you looked at to try and determine the root cause?
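If it helps with that last question, a quick first check (a minimal sketch; my_forwarder is a placeholder for one of your forwarder hostnames) is whether the forwarder's own internal logs are reaching the indexers at all:

index=_internal host=my_forwarder earliest=-1h
| stats count by sourcetype

No results usually points at a connectivity or outputs.conf problem on the forwarder; internal events arriving but nothing in your target index usually points at the inputs configuration instead.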