All Posts


Hi @Yann.Buccellato, thanks for sharing this feedback. I have passed it on to the Docs team and will report back when I hear from them.
Hi, we have created a new index on our platform, but it isn't collecting any data. The inputs.conf stanzas are well configured with the new index name, but our index is empty. So I'm trying to list the checks to run in order to make our index work. Thanks.
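For reference, a minimal set of checks you could run from the command line (a sketch; my_new_index is a placeholder for your actual index name):

    # On the indexer: confirm the index is actually defined
    $SPLUNK_HOME/bin/splunk btool indexes list my_new_index --debug

    # On the forwarder: confirm the inputs.conf stanza is picked up
    $SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i my_new_index

    # On the forwarder: look for tailing or blocked-queue errors
    grep -iE "my_new_index|TailReader|blocked" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail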
Exactly! So how do you match entries in File2.csv?
Still nothing. I have been trying to find a solution but couldn't find anything. I tried to send an email to support but got no answer.
Hi @AmirSA, can you try accessing the following URL to view certifications: https://splunk.my.site.com/customer/s/list-views/certifications
Hi @gebr, as 8.0.x is no longer supported by Splunk per the support policy at https://www.splunk.com/en_us/legal/splunk-software-support-policy.html#core, I would suggest upgrading your infrastructure to the latest version of Splunk, e.g. 9.0.x. If you are not able to upgrade for some time, I would suggest going with the 8.0.1 Splunk UF, or the same version as your HF/indexer. You can download older versions from https://www.splunk.com/en_us/download/previous-releases-universal-forwarder.html
@ITWhisperer File1.csv has the session_id.
We are seeing some timeout and authentication errors while collecting data from the OTel Kubernetes collector through HEC. Could anyone please let me know if there is a need to change limits in the config files? Below are the errors:

2023-09-26T14:47:17.613Z info exporterhelper/queued_retry.go:433 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "metrics", "name": "splunk_hec/platform_metrics", "error": "Post \"https://xyz:8088/services/collector\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)", "interval": "2.769200676s"}

2023-09-26T14:47:11.590Z error exporterhelper/queued_retry.go:401 Exporting failed. The error is not retryable. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "splunk_hec/platform_metrics", "error": "Permanent error: \"HTTP/1.1 401 Unauthorized\\r\\nContent-Length: 148\\r\\nCache-Control: private\\r\\nConnection: Keep-Alive\\r\\nContent-Type: text/xml; charset=UTF-8\\r\\nDate: Tue, 26 Sep 2023 14:47:11 GMT\\r\\nServer: Splunkd\\r\\nVary: Authorization\\r\\nX-Content-Type-Options: nosniff\\r\\nX-Frame-Options: SAMEORIGIN\\r\\n\\r\\n<?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?>\\n<response>\\n <messages>\\n <msg type=\\\"WARN\\\">call not properly authenticated</msg>\\n </messages>\\n</response>\\n\"", "dropped_items": 31}
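The 401 suggests the HEC token isn't being accepted. Before touching limits, a quick way to isolate this is to test the endpoint and token directly with curl (a sketch; <host> and <token> are placeholders for your HEC endpoint and token):

    # Is HEC reachable and healthy?
    curl -k https://<host>:8088/services/collector/health

    # Does the token authenticate? A 401 here confirms a token problem rather than a limits problem.
    curl -k https://<host>:8088/services/collector/event \
      -H "Authorization: Splunk <token>" \
      -d '{"event": "hec smoke test", "sourcetype": "manual"}'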
With every new major release of Splunk, more and more components add audit logs, and the volume of audit logs has increased significantly. The UI, search scheduler, search dispatcher, etc. all generate audit logs. If the audit queue is full, these components are serialized while inserting audit events into the queue, resulting in skipped searches, slow UI logins, etc. To mitigate the problem, apply the following workaround, which uses file monitoring instead of in-memory indexing.

Workaround: disable direct indexing of audit events and fall back on file monitoring. This decouples scheduler/UI threads from the ingestion pipeline queues.

Steps:

1. In etc/system/local/audit.conf (or any audit.conf you like), turn off audit trail direct indexing:

[auditTrail]
queueing=false

Then add a stanza in etc/system/local/inputs.conf (or any inputs.conf you like) to monitor audit.log:

[monitor://$SPLUNK_HOME/var/log/splunk/audit.log*]
index = _audit
source = audittrail
sourcetype = audittrail

2. Stop Splunk.

3. Delete all audit.log* files (to avoid re-ingestion). This step is optional if you don't care about duplicate audit events.

4. Start Splunk.
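Once Splunk is back up, one quick way to confirm audit events are flowing through the file monitor (a sketch using the search CLI; the 15-minute window is arbitrary):

    $SPLUNK_HOME/bin/splunk search 'index=_audit source=audittrail earliest=-15m | head 5'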
Hi @Jubin.Patel, we are having the same issue. Were you able to get this resolved?
A blocked audit queue can cause random skipped searches, scheduler slowness on SH/SHC, and a slow UI.
I get the same error. I can't see my certifications either.
Got a nastygram for posting links; search online for freeload101 github in scripts nmap_fruit.sh.
nmap XML to Splunk HEC! https://github.com/freeload101/SCRIPTS/blob/b3f83288a9f289d86f6cdd04898478d0427097ce/Bash/NMAP_FRUIT.sh#L80
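For anyone curious what this pattern looks like end to end, here is a hypothetical sketch (not the linked script) of shipping an nmap XML report to HEC via the raw endpoint; <host>, <token>, and the channel GUID are placeholders:

    # Scan and write XML output
    nmap -oX scan.xml -sV 192.0.2.0/24

    # Post the raw XML to HEC (the raw endpoint requires a channel identifier)
    curl -k "https://<host>:8088/services/collector/raw?channel=0aeeac95-f4f7-4cc3-9d4d-73a33288bdcd&sourcetype=nmap_xml" \
      -H "Authorization: Splunk <token>" \
      --data-binary @scan.xml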
In the following description, point 1 says to download and install the AppD app from the SNOW store. But then you say you need to do something else before doing the thing at point 1. Am I mistaken in understanding the order and logic of how this is communicated? Regards
session_id doesn't appear to exist in both lookups, so you won't be able to "join" using that. If you mean you want to "join" by id, then a simple lookup should work:

| inputlookup File1.csv
| lookup File2.csv id

Alternatively, if you want to use both the id and operation name, you could try something like this:

| inputlookup File1.csv
| lookup File2.csv id operation_name