All Topics


I'm missing something and it's probably blatantly obvious.... I have a search returning a number, but I want a filler gauge to show the value as it approaches a maximum. In this example, I'd like the gauge to cap at 10,000, but it always shows 100.
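For illustration, a sketch of a search feeding a filler gauge with an explicit 0-10,000 range set via the gauge command (the index and field names are placeholders):

index=my_index | stats count AS my_count | gauge my_count 0 2500 5000 10000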
Are there any working screenshots or demos available for this app? There seems to be no video tutorial or any guidance docs besides the main doc. Any guidance would be helpful - I am looking for a way to get JIRA->Splunk data in whenever there is a change in an issue, or just to be able to query all the issues in JIRA via Splunk and pull back stats.
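For context, if the app doesn't provide a suitable input, a hypothetical scripted-input sketch that polls the JIRA REST search API and prints one JSON event per issue would also work for me (URL, credentials, and JQL are placeholders, not anything taken from the app):

import json
import requests

# Placeholders - adjust to the real JIRA instance and a service account
JIRA_URL = "https://jira.example.com/rest/api/2/search"
AUTH = ("svc_splunk", "changeme")
JQL = "updated >= -15m"   # only issues changed since the last poll interval

resp = requests.get(JIRA_URL, params={"jql": JQL, "maxResults": 100}, auth=AUTH, timeout=30)
resp.raise_for_status()
for issue in resp.json().get("issues", []):
    # one JSON object per line so Splunk can treat each issue as a separate event
    print(json.dumps({"key": issue["key"], "fields": issue.get("fields", {})}))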
Below is quite a simple query to fill a drop-down list in my dashboard.

index=gwcc | eval file=lower(mvindex(split(source,"/"),-1)) | dedup file | table source, file | sort file

The point is that it takes 30-60 seconds to generate. Do you have an idea how to simplify it, or write it in a more efficient way?
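For comparison, a sketch of the same drop-down populated from indexed fields with tstats, which never reads raw events and is usually much faster (it relies only on the default indexed source field):

| tstats count WHERE index=gwcc BY source
| eval file=lower(mvindex(split(source,"/"),-1))
| dedup file
| table source, file
| sort file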
I'm looking into upgrading Splunk Enterprise from 9.0.4 to 9.3.0. Following the upgrade docs, there's a step to back up the KV store: to check the status of the KV store, use the show kvstore-status command: ./splunk show kvstore-status

When I run this command, it asks me for a Splunk username and password. This environment was handed over by a project team, but nothing was handed over about what the Splunk password might be, or about whether we actually use a KV store. I've tried the admin password, but that hasn't worked.

I've found some Splunk documents advising that the KV store config would be in $SPLUNK_HOME/etc/system/local/server.conf, under [kvstore]. There is nothing in our server.conf under [kvstore]. I've also found some notes saying the KV store won't start if there's a $SPLUNK_HOME\var\lib\splunk\kvstore\mongo\mongod.lock file present. We have 2 Splunk servers - one has a lock file dated Oct 2022, and the other dated July 19th. So based on this, I suspect it's not used, otherwise we'd have hit issues with it before? That's just a guess, but this is my first foray into Splunk, so I thought I'd ask: based on the above, do I need to back up the KV store or not, or are there other checks to confirm definitively whether we have a KV store that's used?

Thanks in advance
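For reference, a couple of checks along these lines might answer both questions; the credentials and paths are placeholders:

# Pass credentials on the command line instead of being prompted
$SPLUNK_HOME/bin/splunk show kvstore-status -auth admin:yourpassword

# Apps that define their own KV store collections ship a collections.conf;
# no results here suggests nothing beyond Splunk's built-in use of the KV store
find $SPLUNK_HOME/etc/apps -name collections.conf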
After updating the SSL keys, events with errors "ExecProcessor from python /opt/splunk/etc/apps/SA-Hydra/bin/bootstrap_hydra_gateway.py" from the source "/opt/splunk/var/log/splunk/splunkd.log" began to be sent to the index "_internal". The Splunk version is 7.3.2.
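A quick search to gauge how widespread the errors are (field names assume the default splunkd sourcetype extractions):

index=_internal source=*splunkd.log log_level=ERROR "bootstrap_hydra_gateway.py"
| stats count BY host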
Hi all, Has anyone had experience mapping Linux audit logs to CIM before? I installed the Add-on for Unix and Linux, but it didn't help. Looking at some of the use cases in Security Essentials, it seems they expect data from EDR solutions like CrowdStrike or Symantec rather than local Linux audit logs. Does this mean there is no way to use the out-of-the-box use cases in Security Essentials/Enterprise Security with Linux logs?   Thanks
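One way to check whether the audit events reach a CIM data model at all is a tstats probe like the sketch below; the sourcetype value is an assumption, so adjust it to whatever your inputs actually assign:

| tstats summariesonly=false count FROM datamodel=Endpoint.Processes WHERE sourcetype="linux_audit" BY sourcetype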
I have an input playbook with two output variables. I can retrieve these variables when I call the playbook using the playbook block in the UI. However, I now need to loop over items in a list and call the playbook for each item in that list, which requires using the phantom.playbook() function. From what I can see, there is no way to retrieve the output of the playbook now - is that correct?

Example below:

for item in prepare_data__post_list:
    phantom.playbook(playbook="local/__Post_To_Server", container={"id": int(container_id)}, inputs={"body": item, "headers": prepare_data__headers, "path": prepare_data__path})
Hello Splunkees,

What are the differences between the different options for app updates? I know 3 different ways to update an app:

1) Via the web interface: Apps -> Manage Apps -> Install app from file -> check 'Upgrade app. Checking this will overwrite the app if it already exists.'
2) Via the CLI: ./splunk install app <app_package_filename> -update 1 -auth <username>:<password>
3) Extract the contents of the app .tgz to $SPLUNK_HOME/etc/apps/ (overwriting files if the app already exists) and then restart the splunk service.

Background of my question: I want to implement an automated app update process with Ansible for our environment, and I want to use the smartest method (see the sketch below). Currently, we're using Splunk 9.1.5.

Thank you!

BR dschwarz
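For what it's worth, a minimal sketch of what options 3 and 1 look like when scripted; the app name, package path, and credentials are placeholders, and the REST call parameters are the part I'd most like confirmed:

# Option 3, scripted: extract over the existing app, fix ownership, restart
tar -xzf /tmp/my_app.tgz -C $SPLUNK_HOME/etc/apps/
chown -R splunk:splunk $SPLUNK_HOME/etc/apps/my_app
$SPLUNK_HOME/bin/splunk restart

# Option 1 via the management port (update=true overwrites an existing app)
curl -k -u admin:changeme https://localhost:8089/services/apps/local \
     -d name=/tmp/my_app.tgz -d filename=true -d update=true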
I would like to create a Service Analyzer that displays only KPIs. Is that possible? If so, I'd like to know the steps.
I have 60 correlation searches in Content Management. Some of my correlation searches don't trigger notables to Incident Review, but when I run them manually they show results. No suppression, no throttling, and now I'm confused. Can someone help me, please?
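One starting point would be the scheduler log, to confirm the searches actually ran on schedule and returned results (the savedsearch_name value is a placeholder):

index=_internal sourcetype=scheduler savedsearch_name="My Correlation Search"
| table _time, status, result_count, alert_actions, suppressed
| sort - _time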
Hi everyone, I'm currently sending vCenter logs via syslog to Splunk and have ensured that the syslog configuration and index name on Splunk are correct. However, the logs still aren't appearing in the index. I have tried tcpdump and I can see the logs arriving at my Splunk instance. Below I attach the syslog configuration and tcpdump result from my Splunk instance. What could be the cause of this issue, and what steps should I take to troubleshoot it? Thanks for any insights!
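Two checks that might narrow it down: first, whether the data landed under a different index or sourcetype (or with a badly parsed timestamp); second, whether the receiving instance is logging input-side errors. Both are sketches, so adjust names as needed:

| tstats count WHERE index=* sourcetype=*syslog* BY index, sourcetype, host

index=_internal source=*splunkd.log (log_level=ERROR OR log_level=WARN) "1531"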
Hi,

The Splunk Heavy Forwarders and Deployment Servers were running under the splunk user. Unfortunately, during the upgrade process, an admin used the root account, and now these Splunk instances are running as root. How can I switch back to the splunk user? These instances are running on Red Hat Linux.
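A rough outline of the usual way back, assuming $SPLUNK_HOME is /opt/splunk and the service account is named splunk; adjust for however boot-start is configured in your environment:

$SPLUNK_HOME/bin/splunk stop
chown -R splunk:splunk /opt/splunk
$SPLUNK_HOME/bin/splunk enable boot-start -user splunk   # regenerate the init/systemd config to start as splunk
su - splunk -c "/opt/splunk/bin/splunk start"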
Experiencing an issue on a few random servers, some Domain Controllers and some Member Servers: Windows Security event logs just seem to randomly stop sending. If I restart the Splunk UF, the event logs start gathering again. We are using Splunk UF 9.1.5, but I also noticed this issue on Splunk UF 9.1.4. I thought it had been corrected when we upgraded to Splunk UF 9.1.5, but it's re-appeared - the most recent occurrence was roughly 3 weeks ago, on 15 servers across multiple clients we manage.

This has unfortunately resulted in the loss of a few weeks of data, as the local event logs were eventually discarded once they filled up. I have now written a Splunk alert to notify us each day if any servers are in this situation (it compares the Windows servers reporting into two different indexes, one of which is for Windows Security event logs), so we can spot the issue more easily. We are raising a case with Splunk support about the issue today.
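For anyone interested, the daily comparison can be sketched roughly like this; the index names windows_os and wineventlog are placeholders for the two indexes mentioned above:

| tstats latest(_time) AS last_seen WHERE index=windows_os OR index=wineventlog BY host, index
| eval hours_since=round((now()-last_seen)/3600,1)
| chart values(hours_since) AS hours_since BY host, index
| where 'windows_os' < 24 AND (isnull('wineventlog') OR 'wineventlog' > 24)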
Hello Members,

I have configured a Splunk HF to receive a data input on port 1531/udp.

I used the command: firewall-cmd --permanent --zone=public --add-port=1531/udp

But when I use firewall-cmd --list-all, the port doesn't appear in the list of open ports. Is this considered a problem? I also checked netstat and the port is listening on 0.0.0.0 (all).

Thanks
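One thing worth checking: --permanent only writes the saved configuration and doesn't touch the running firewall until it is reloaded. A quick sketch:

firewall-cmd --permanent --zone=public --add-port=1531/udp   # saved config only
firewall-cmd --reload                                        # apply to the running firewall
firewall-cmd --zone=public --list-ports                      # 1531/udp should now be listed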
Hi Everyone, I have some events with the field Private_MBytes and host = vmt/vmu/vmd/vmp. I want to create a case: when host is vmt/vmu/vmd and Private_MBytes > 20000, OR when host is vmp and Private_MBytes > 40000, the events should be displayed with severity_id 4. Example of what I tried:

eval severity_id=if(Private_MBytes >= "20000" AND host IN [vmd*,vmt*,vmu*],4,2)
eval severity_id=if(Private_MBytes >= "40000" AND host ==vmp*,4,2)

Note: if Private_MBytes > 40000, then any vmd/vmu/vmt host should still display severity_id 4, and likewise for vmp.
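For reference, one way the condition could be written with case(), assuming the host values literally start with vmd/vmt/vmu/vmp and the thresholds are compared as numbers rather than strings:

| eval severity_id=case(
    (like(host,"vmd%") OR like(host,"vmt%") OR like(host,"vmu%")) AND Private_MBytes>20000, 4,
    like(host,"vmp%") AND Private_MBytes>40000, 4,
    true(), 2)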
How do I change the directory path for the error below? The problem is the /bin/bin in the path. Any help is greatly appreciated!
I understand that maxTotalDataSizeMB takes precedence over frozenTimePeriodInSecs. What happens if frozenTimePeriodInSecs is defined and maxTotalDataSizeMB is not? The Splunk docs don't cover this specific circumstance, and I haven't been able to find anything else about it.

I have a requirement to keep all department security logs for 5 years regardless of how big the indexes get; they need to be deleted at 5.1 years. My predecessor set it up so that frozenTimePeriodInSecs = (5.1 years in seconds) and maxTotalDataSizeMB = 1000000000 (roughly 1000 TB) so that size would not affect retention, but now nothing will delete and we're retaining logs from 8 years ago.

If I comment out maxTotalDataSizeMB, will frozenTimePeriodInSecs take precedence, or will the default maxTotalDataSizeMB setting take over? My indexes are roughly 7 TB, so the 500 GB default would wipe out a bunch of stuff I need to keep.

In my lab environment, I commented out maxTotalDataSizeMB and set frozenTimePeriodInSecs to 6 months, but I still have logs from 2 years ago. Unfortunately, my lab environment doesn't have enough archived data to test the default cutoff.

Thanks!
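For reference, a sketch of the kind of indexes.conf stanza being described; the index name is a placeholder and 160833600 seconds is roughly 5.1 years:

[dept_security]
homePath   = $SPLUNK_DB/dept_security/db
coldPath   = $SPLUNK_DB/dept_security/colddb
thawedPath = $SPLUNK_DB/dept_security/thaweddb
frozenTimePeriodInSecs = 160833600
maxTotalDataSizeMB = 1000000000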
I have a question. We have a standalone Splunk instance in AWS running version 7.2.3 and are looking to upgrade it to 9.3.0. I see that to get to that version I will have to do about 4 upgrades. Also, since our current version is running on Red Hat 6.4, I would have to upgrade that as well to be able to run the current Splunk version.

What I am curious about is this: AWS has a Splunk 9.3.0 AMI with BYOL. Would it be possible to migrate the data over to the new instance along with the configuration settings? This is used as a customer lab, so we only have about a dozen universal forwarders pointing to this server. There are no alerts running on it and only 3 dashboards. The Splunk home is stored on a separate volume from the OS, so I could detach it from the old instance and attach it to the new one, or snapshot it and use the snapshot on the new one.

Any suggestions for this? Thanks.
Hello Splunk ES experts,

I want to make a query that will produce MTTD, something like analyzing the time difference between when a raw log event is ingested (and meets the condition of a correlation search) and when a notable event is generated based on that correlation search. I have tried the query below, but it does not give me the results I am expecting because it is not calculating the time difference for notables that are in New status; it works fine for any other status. Can someone please help me with this? Maybe it is too simple to achieve and I am making it complex.

index=notable
| eval orig_epoch=if( NOT isnum(orig_time), strptime(orig_time, "%m/%d/%Y %H:%M:%S"), 'orig_time' )
| eval event_epoch_standardized= orig_epoch, diff_seconds='_time'-'event_epoch_standardized'
| fields + _time, search_name, diff_seconds
| stats count as notable_count, min(diff_seconds) as min_diff_seconds, max(diff_seconds) as max_diff_seconds, avg(diff_seconds) as avg_diff_seconds by search_name
| eval avg_diff=tostring(avg_diff_seconds, "duration")
| addcoltotals labelfield=search_name
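For discussion, a hedged variation that falls back to the notable's search window when orig_time is missing; whether info_min_time is populated on your notables is an assumption worth verifying:

index=notable
| eval orig_epoch=case(isnum(orig_time), orig_time,
                       isnotnull(orig_time), strptime(orig_time, "%m/%d/%Y %H:%M:%S"),
                       true(), null())
| eval event_epoch_standardized=coalesce(orig_epoch, info_min_time)
| eval diff_seconds='_time'-'event_epoch_standardized'
| stats count AS notable_count, min(diff_seconds) AS min_diff_seconds, max(diff_seconds) AS max_diff_seconds, avg(diff_seconds) AS avg_diff_seconds BY search_name
| eval avg_diff=tostring(avg_diff_seconds, "duration")
| addcoltotals labelfield=search_name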
We updated the Sysmon add-on from 3.x to 4.0.1 (latest) on a search head cluster. Afterwards, we're getting errors saying that the node we're on and the indexers can't load a lookup (Could not load lookup=LOOKUP-record_type).
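To see how widespread it is, a quick internal-log check like this sketch might help (adjust the quoted text to match the exact error message):

index=_internal log_level=ERROR "LOOKUP-record_type"
| stats count BY host, source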