All Posts

Hi! This is my first time using Splunk and I am on the free trial version. I set up an HEC token and ran a test on Windows with this command:

curl -k https://prd-p-n38b3.splunkcloud.com:8088/services/collector -H "Authorization: Splunk 78c2aexx-xxxx-xxxxx-xxxx-xxxxx869e53" -d "{\"sourcetype\": \"event\", \"event\": \"Test message\"}"

While the events are being generated, I see 0 bytes. What am I doing wrong? I also see the events in the HEC logs, but no data.
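For context, I have been looking for the test event with a search along these lines (the index name is a guess on my part, since I did not set one on the token):

index=main sourcetype=event "Test message"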
@stevensk  Before using the ACS API, you must download the ACS Open API 3.0 specification, which includes the parameters, codes, and other data you need to work with the ACS API. You must also create an authentication token in Splunk Cloud Platform for use with ACS endpoint requests. For details on how to set up the ACS API, see:

https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Config/ACSusage#Set_up_the_ACS_API
https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Config/ManageHECtokens#View_existing_HEC_tokens_2
@stevensk  To monitor which sources or devices are using specific HEC tokens in Splunk Cloud, you can leverage the Admin Config Service (ACS) API. Here's a high-level overview of how you can achieve this: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Config/ManageHECtokens 
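As a rough sketch, listing the configured HEC tokens through ACS looks something like this (the stack name and token are placeholders, and the exact endpoint path should be confirmed against the ACS documentation):

curl "https://admin.splunk.com/{your-stack}/adminconfig/v2/inputs/http-event-collectors" -H "Authorization: Bearer <your-ACS-token>"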
Hi, thanks for the info. This is Splunk Cloud, so we cannot edit any conf files, nor is there an option in the Web UI when creating HEC tokens to enable this. The following search seems to give all errors for devices trying to connect with an HEC token, but I only see failed sources, not successful ones:

index=_internal sourcetype=splunkd component=HttpInputDataHandler

Also, since this is Splunk Cloud, the source_IP values are the Splunk Cloud load balancer IPs. We were told this in a case with Splunk.
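For anyone following along, a rough variant that just counts those failures by severity and host (adjust the grouping fields to whatever your events actually carry):

index=_internal sourcetype=splunkd component=HttpInputDataHandler | stats count by log_level, host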
@stevensk  You could get the configured HEC tokens/inputs from your HEC node, e.g.

| rest splunk_server=<your hec node> /services/data/inputs/http

Of course, you should have added that node as a search peer of your SH, or just run the above against your HEC node(s) with curl. That query shows the allowed and forced indexes for those tokens. Another way to check which tokens are used is the _introspection index, with the query below:

index=_introspection host=YOUR_HEC_HOST sourcetype=http_event_collector_metrics data.token_name=* | rename data.* as * | table host, component, token_name, num_*

If num_of_requests or num_of_events stays at 0 over a longer time span, then I guess you can disable those tokens for a few days and then remove them.
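If you want to watch usage over time instead of a snapshot, a sketch along these lines should also work (the field names assume the standard http_event_collector_metrics output, so verify them against your own _introspection data):

index=_introspection sourcetype=http_event_collector_metrics data.token_name=* | timechart span=1d sum(data.num_of_events) as events by data.token_name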
I am pushing the configs from the cluster master to two indexers. There is no HF. The change in transforms still did not work. I am using the Splunk_TA_mcafee-wg. Is it possible that another configuration is taking precedence over my changes? I have tried making a local folder in the app and adding the props and transforms there. No luck.
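For reference, btool on an indexer shows which file each setting is resolved from (the stanza name here is the one from this thread; swap in your own sourcetype for the props check):

$SPLUNK_HOME/bin/splunk btool transforms list changehost --debug
$SPLUNK_HOME/bin/splunk btool props list <your_sourcetype> --debug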
Hello @stevensk
To be able to see the source IP of HEC forwarders, you need to enable logging in the HEC node's inputs.conf:

[http]
enableSplunkdSidechannel = true

And then run a search to see the logs containing a specific token:

index=_internal sourcetype=splunkd "token="

To filter by source IP, you can run, for example, this search:

index=_internal sourcetype=splunkd "token=" | rex "sourceIp=(?<source_ip>\d+\.\d+\.\d+\.\d+)" | stats count by source_ip

I hope this will help you. Have a nice day,
@gcusello  I see. As for your comment, "if instead you need to exclude both the holidays and the one following days, you have to implement a mix between the two solutions": it's a no, it's simpler than that. I just need to add one following day to the lookup table date for muting. I tried my query, but the results don't seem correct. How would you go about it?
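What I have in mind is roughly this (the lookup name, field name, and date format are placeholders for my actual lookup):

| inputlookup holidays.csv
| eval date=strftime(relative_time(strptime(date, "%Y-%m-%d"), "+1d"), "%Y-%m-%d")
| inputlookup append=true holidays.csv
| dedup date
| outputlookup holidays_plus_next_day.csv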
We want to be able to monitor what sources/devices are using what HEC tokens. I know we can use _introspection to retrieve metrics on a HEC token to see if it is being used, but we need to know what is sending to/using a HEC token: which sources (IP or host) are sending to it. We are using Splunk Cloud.
Create another dashboard or panel which displays the event as you would like it and modify the drilldown on the original panel(s) to link to the new dashboard or make the new panel visible in the current dashboard.
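For example, in Simple XML the drilldown on the original panel might look roughly like this (the target dashboard name and form token are placeholders):

<drilldown>
  <link target="_blank">/app/search/event_detail?form.selected_event=$click.value$</link>
</drilldown>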
Check the ulimits settings to make sure they meet or exceed Splunk's recommendations. Verify THP is disabled. If both of the above pass, then open a case with Splunk Support.
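For a quick check on the indexer (run as the user that runs Splunk), something like this covers both:

ulimit -a
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag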
Well, I expected it to authenticate the account when there is a token present. When Splunk knows that the account exists (it has authenticated before AND it has a token), why is that not sufficient for authentication?
We ended up creating local (Splunk) accounts for authenticating with a token. Sorry for the late response.
Sorry, I was giving an example as I didn't have any users searching over 30 days. Please see below; I have added a table command too.

index=_audit action="search" info=completed
| eval et=COALESCE(search_et, api_et), lt=COALESCE(search_lt, api_lt)
| eval timespanSecs=lt-et
| where timespanSecs>(60*60*24*30)
| eval friendlyTime=tostring(timespanSecs, "duration")
| table user search friendlyTime

Is this what you are after? Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Sorry. The first time I sent it, it said it was reported as spam, so I wasn't sure it went through.
This is not what I am looking for. I am trying to get a list of Splunk users and searches that are searching back 30 DAYS or longer.
A segmentation fault (signal 11) in Splunk can have several potential causes, including memory corruption, insufficient resources, software bugs, or issues with the system configuration. Since you mentioned that you haven't changed anything recently, it's crucial to systematically investigate and rule out potential causes.

Steps to Troubleshoot and Diagnose the Issue:

1. Check Splunk Logs for More Clues
Look at the splunkd.log and crash*.log files for additional context around the crash. These logs can be found in:

$SPLUNK_HOME/var/log/splunk/splunkd.log
$SPLUNK_HOME/var/log/splunk/crash*.log

Run:

grep -i 'fatal' $SPLUNK_HOME/var/log/splunk/splunkd.log
grep -i 'segfault' $SPLUNK_HOME/var/log/splunk/crash*.log

This might provide more context on what was happening before the crash.

2. Validate System Memory and Kernel Overcommit Settings
Your crash log shows "No memory mapped at address", which suggests possible memory issues. Check for kernel memory overcommitting, which can lead to random segmentation faults. Run:

cat /proc/sys/vm/overcommit_memory

If the value is 0, memory overcommit handling is heuristic-based. If 1, the system allows overcommitting memory, which is not recommended for Splunk. If 2, it's strict (recommended). If it's set to 1, you could consider changing it to see if that affects the Splunk service:

echo 2 | sudo tee /proc/sys/vm/overcommit_memory

And persist the change in /etc/sysctl.conf:

vm.overcommit_memory = 2

Also see https://splunk.my.site.com/customer/s/article/Indexer-crashed-after-OS-upgrade

3. Check Transparent Huge Pages (THP)
THP can cause issues with Splunk's memory management. Disable it if it's enabled. Check the current status:

cat /sys/kernel/mm/transparent_hugepage/enabled

If it says [always], disable it temporarily:

echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

To make it permanent, add the following to /etc/rc.local:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

4. Check ulimits for the Splunk User
If the indexer is running into resource exhaustion, check its ulimits:

ulimit -a

Try updating to the following if not already set:

nofile = 65536
nproc = 16384

Adjust in /etc/security/limits.conf:

splunk soft nofile 65536
splunk hard nofile 65536
splunk soft nproc 16384
splunk hard nproc 16384

5. Check Splunk's Memory and CPU Usage
Run:

ps aux --sort=-%mem | grep splunk
free -m
top -o %CPU

Look for excessive memory or CPU consumption.

6. Check for Recent Software Updates or Kernel Changes
If the system has undergone automatic updates, it might have introduced compatibility issues. Check recent updates:

cat /var/log/dpkg.log | grep -i "upgrade"   # Debian/Ubuntu
cat /var/log/yum.log | grep -i "update"     # RHEL/CentOS

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @boknows , please don't duplicate questions! See my answer to your other question: https://community.splunk.com/t5/Splunk-Search/Host-override-with-event-data/m-p/712235#M240309 Ciao. Giuseppe
Hi @boknows,
the transforms.conf isn't correct: you aren't performing a field extraction, so please try:

[changehost]
DEST_KEY = MetaData:Host
REGEX = ^([^\s]+)
FORMAT = host::$1

Then, where did you locate them? They must be located in the first full Splunk instance they pass through, in other words in the first Heavy Forwarder or, if no HF is present, in the Indexers.
Ciao. Giuseppe
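For completeness, the transform also has to be referenced from props.conf on the same instance (the sourcetype name below is a placeholder for the one your data actually uses):

[your_sourcetype]
TRANSFORMS-changehost = changehost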
I am trying to get a list of Splunk users and searches that are searching back 30 DAYS or longer.