All Topics


Hi everyone, I need your help. My KV store stays in "starting" status and throws the following error. I validated the certificate and it is valid, and I have already set permissions on the splunk.key file, but it still does not come up.

07-18-2022 12:21:04.601 -0500 ERROR KVStoreBulletinBoardManager [12334 MongodLogThread] - KV Store process terminated abnormally (exit code 14, status PID 12335 exited with code 14). See mongod.log and splunkd.log for details.
07-18-2022 12:21:04.507 -0500 ERROR MongodRunner [12334 MongodLogThread] - mongod exited abnormally (exit code 14, status: PID 12335 exited with code 14) - look at mongod.log to investigate.

kvStoreStatus
-------------
starting
failed
failed
failed
failed
failed
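Not an answer to the root cause, but a quick way to re-check the two things mentioned in the post (certificate validity and splunk.key permissions) from the shell. All paths below are placeholders; on a real host you would point at the certificate and key under $SPLUNK_HOME/etc/auth instead of the throwaway pair this sketch generates.

```shell
# Throwaway working dir so the sketch is self-contained (paths are placeholders;
# on a real Splunk host, check your actual server.pem and splunk.key instead).
WORK=$(mktemp -d)
CERT="$WORK/server.pem"
KEY="$WORK/splunk.key"

# Generate a demo key/cert pair standing in for the real ones.
openssl req -x509 -newkey rsa:2048 -keyout "$KEY" -out "$CERT" \
  -days 30 -nodes -subj "/CN=kvstore-test" 2>/dev/null

# 1) Confirm the certificate's validity window (an expired notAfter date
#    is a common cause of mongod refusing to start).
openssl x509 -noout -enddate -in "$CERT"

# 2) Tighten permissions on the key file (the post mentions splunk.key
#    permissions; 600, owned by the user running splunkd, is a safe setting).
chmod 600 "$KEY"
stat -c '%a %n' "$KEY"
```

Beyond that, exit code 14 is generic; mongod.log usually has the specific refusal a few lines above the shutdown.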
Is there a way to show a currency symbol next to the value? Like $393.26
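Splunk has no built-in currency formatting, but eval string concatenation gets there. A sketch, assuming the numeric field is called value (tostring with "commas" also adds thousands separators and keeps two decimals):

```
... | eval amount = "$" . tostring(round(value, 2), "commas")
```

Swap the order of the concatenation to put the symbol after the number instead, e.g. `tostring(round(value, 2), "commas") . " USD"`. Note the result is a string, so sort on the original numeric field, not on amount.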
I saw that there are two options to send logs from a universal forwarder to an indexer. We can use [httpout] to send the logs to the HTTP Event Collector on the indexer on port 8088; alternatively, we can use [tcpout] to send the logs to the indexer on port 9997. Which one is the best practice to implement? Thanks.
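[tcpout] to port 9997 is the long-standing default for forwarder-to-indexer traffic (compressed, automatically load-balanced across indexers, with indexer acknowledgement support); [httpout] is newer and mainly useful when only HTTP on 8088 is reachable. A minimal sketch with placeholder host names:

```
# outputs.conf on the universal forwarder -- host names are placeholders
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```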
I have a file which gives me the following output:

srvrmgr> list comps
SHOW SV_NAME,CP_DISP_RUN_STATE,CP_STARTMODE,CP_NUM_RUN_TASKS,CP_MAX_TASKS,CP_ACTV_MTS_PROCS,CP_MAX_MTS_PROCS,CP_START_TIME,CP_END_TIME,CC_ALIAS
SV_NAME CP_DISP_RUN_STATE CP_STARTMODE CP_NUM_RUN_TASKS CP_MAX_TASKS CP_ACTV_MTS_PROCS CP_MAX_MTS_PROCS CP_START_TIME CP_END_TIME CC_ALIAS
lnx001 Online Auto 0 50 1 1 2022-07-18 03:53:03 comp_123 comp_123
lnx003 Online Auto 0 50 1 1 2022-07-18 03:53:03 comp_456 comp_123
lnx005 Online Auto 0 20 1 1 2022-07-18 03:53:03 comp_123 comp_123
lnx007 Online Manual 0 50 0 1 2022-07-18 03:53:03 comp_987
lnx010 Online Manual 0 500 0 5 2022-07-18 03:53:03 comp_564
lnx011 Online Auto 643 4000 40 40 2022-07-18 03:53:03 comp_123

I only want to extract the 1st field and the 4th (numeric) field where the component is comp_123, discarding all other entries, and show the 1st field as host, the 4th field as runningtasks, and the final field as component. Please help me with the filters.
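One possible filter, assuming each data row arrives as its own event with space-separated columns. The rex below captures the first token as host, the fourth as runningtasks, and the last token on the line as component (all three names taken from the description above):

```
... | rex "^(?<host>\S+)\s+\S+\s+\S+\s+(?<runningtasks>\d+)\s.*\s(?<component>\S+)$"
| search component="comp_123"
| table host runningtasks component
```

The trailing `\s(?<component>\S+)$` anchors on the final field, so rows whose last column is not comp_123 (e.g. comp_987) drop out at the search step.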
Hi, how can I modify the x-axis so that only the date is displayed for each column?

query
| eval finish_time_epoch = strftime(strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")
| eval start_time_epoch = strftime(strptime(START_TIME, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")
| eval duration_s = strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S") - strptime(START_TIME, "%Y-%m-%d %H:%M:%S")
| eval duration_min = round(duration_s / 60, 2)
| chart sum(duration_min) as "time" by TimeDate
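If TimeDate carries a full timestamp, one sketch is to derive a date-only field and chart by that instead (the strptime format for TimeDate below is an assumption; adjust it to the field's real format):

```
query
| eval duration_min = round((strptime(FINISH_TIME, "%Y-%m-%d %H:%M:%S")
                           - strptime(START_TIME,  "%Y-%m-%d %H:%M:%S")) / 60, 2)
| eval day = strftime(strptime(TimeDate, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d")
| chart sum(duration_min) as "time" by day
```

Since the chart is grouped by the truncated day field, each column's x-axis label shows only the date.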
Hello, we are trying to get data from a NetApp server and are trying to understand the data onboarding process. I have already installed the add-ons available on Splunkbase (one main add-on with 3 supporting add-ons):
https://splunkbase.splunk.com/app/5396/
https://splunkbase.splunk.com/app/5616/
https://splunkbase.splunk.com/app/3418/
https://splunkbase.splunk.com/app/5615/
My question is: do I need to install a Splunk forwarder on the NetApp server, or is the data collection API-based? In the attached screenshot there are 2 options. I configured "ONTAP Collection Configuration", but there is also an "Add data collection node" option, which requires forwarder details.
Thanks, Ankit
I have two actions linked together. The first one is a block with custom code where I want to list all of the files inside a directory using `os.listdir()`. The second one is a decision block. I would like to be able to pass the result of the first block into the second. How can I go about it?
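In plain Python the listing itself looks like this (note the function is `os.listdir`, not `os.listdirs`). How the result reaches the next block is platform-specific: in Splunk SOAR a custom-code block typically hands data downstream via an action result or a custom-function output path, not a return value, so the JSON hand-off below is only a generic sketch of "serialize it so the decision block can read it":

```python
import json
import os
import tempfile

def list_files(directory):
    """Return the names of regular files in `directory` (os.listdir, not os.listdirs)."""
    return sorted(
        name for name in os.listdir(directory)
        if os.path.isfile(os.path.join(directory, name))
    )

# Demo on a throwaway directory so the sketch is self-contained.
demo = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    open(os.path.join(demo, name), "w").close()

files = list_files(demo)

# Serialize the result so a downstream block (e.g. a decision) can consume it.
payload = json.dumps({"files": files})
print(payload)
```

In SOAR specifically, the decision block can then branch on whatever datapath your custom code populated.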
We are testing federated search. On the provider (environment A), the fields are nicely extracted. On the federated SH (environment B), the fields do not show up; however, they are usable in SPL for filtering etc. Does anyone have an idea whether this is a bug or a restriction? I have found restrictions documented for transparent-mode federated search, but we are entirely on-prem (8.2.1), so we are using standard mode. I am not in conflict with any of the restrictions documented below, just a simple search (and data does return): index=federated:<source_index> sourcetype=<sourcetype>

Extract: Restrictions for standard mode federated search. Standard mode federated search does not support the following:
- Generating commands other than search, from, or tstats. For example, federated searches cannot include the datamodel or inputlookup commands. You can find a list of generating commands in Command types, in the Search Reference.
- The from command can reference only datasets of the saved search and data model dataset types.
- Real-time search.
- Usage of wildcard symbols (*) to reference multiple federated indexes.
- Metrics indexes and related metrics-specific search commands, such as mpreview or mstats. If you must include metrics data in a federated search, consider mapping a federated index to a saved search dataset that contains metric data. See Create a federated index.

In the section on transparent mode there is something about search-time field extraction, which then self-references as a page. Any ideas?
Hi Team, I am creating an authorization token from Splunk Web, and the token I receive is more than 256 characters long. Is it possible to reduce its length, or is there an alternative approach to generate a token of at most 256 characters? My tool has a limitation: it cannot accept a token longer than 256 characters.
Thanks, Venkata
Greetings! How do I deploy the FortiMail add-ons in a Splunk Enterprise distributed environment? So far, I have downloaded the apps and uploaded them to the Splunk search head; what do I do next? In my environment I have 7 servers (1 search head, 1 Splunk management node, 5 indexers). Kindly guide me on what to do next. Thank you in advance!
Hello, we are using a Splunk HEC token to receive EKS logs in Splunk. The EKS monitoring container for Splunk runs docker.io/splunk/fluentd-hec:1.2.5. We want to upgrade Splunk from version 8.0.5 to 8.2.7, and we want to know whether the fluentd-hec:1.2.5 container version is compatible with Splunk 8.2.7.
Hello, I have 2 charts created with Splunk searches:
a. a pie chart showing all failed task names in a system
b. a line chart showing the number of failed tasks per day over the past 30 days
I want to connect these 2 charts so that when I click a task name in the pie chart, the line chart shows that task name highlighted. Is that an option? If yes, how do I do it? Thanks in advance.
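Assuming a Simple XML dashboard, the usual pattern is a drilldown on the pie chart that sets a token which the line chart's search consumes (the field name task and both searches below are placeholders). Strictly speaking this filters the line chart to the clicked task rather than highlighting it in place:

```xml
<!-- pie chart panel: clicking a slice sets $task_tok$ -->
<chart>
  <search><query>... | stats count by task</query></search>
  <option name="charting.chart">pie</option>
  <drilldown>
    <set token="task_tok">$click.value$</set>
  </drilldown>
</chart>

<!-- line chart panel: redrawn for the clicked task -->
<chart>
  <search><query>... task="$task_tok$" | timechart count by task</query></search>
  <option name="charting.chart">line</option>
</chart>
```

Giving the token a default value (or guarding the panel with `depends="$task_tok$"`) keeps the line chart sensible before the first click.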
Hello, I have a lookup file with different time ranges (start, end) that looks like this:

Debut, Fin
2020-12-05 12:00:00, 2020-12-05 18:00:00
2021-01-24 08:00:00, 2021-01-24 18:00:00
2021-02-10 19:00:00, 2021-02-10 21:00:00
2021-02-02 19:00:00, 2021-02-02 21:00:00

I'd like to match events which are NOT included in any of the ranges in the lookup. I tried this, but it didn't work:

index="my_index"
    [ inputlookup my_lookup.csv
      | eval start=strptime(Debut,"%Y-%m-%d %H:%M:%S")
      | eval end=strptime(Fin,"%Y-%m-%d %H:%M:%S")
      | table start end ]
| search _time < start AND _time > end

Any ideas? Thanks for the help.
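A pattern worth trying, as a sketch rather than a guaranteed fix: the subsearch's start/end values are injected as literal search terms, not as fields the outer `| search` can compare against, which is why the comparison never matches. If the subsearch instead returns `earliest`/`latest` pairs, Splunk treats each row as a time-range term, the rows are OR'd together, and negating the whole subsearch with NOT should leave only events outside every range. Verify the expanded search in the Job Inspector before trusting it:

```
index="my_index" NOT [
    | inputlookup my_lookup.csv
    | eval earliest = strptime(Debut, "%Y-%m-%d %H:%M:%S")
    | eval latest   = strptime(Fin,   "%Y-%m-%d %H:%M:%S")
    | table earliest latest
]
```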
Hi, as asked in the subject, I am trying to figure out the difference between lookup and inputlookup, because I don't think I get it. In this search, for example:

index=windows EventCode=4624
    [ | inputlookup damtest2.csv
      | rename Server AS Workstation_Name
      | fields Workstation_Name ]
| lookup damtest2.csv Server AS Workstation_Name OUTPUT os
| table Workstation_Name os Package_Name__NTLM_only_
| dedup Workstation_Name
| sort Workstation_Name

Also, what is the use case for a lookup definition? The command above works without a lookup definition, for example.
Regards
Hi, we are looking for a way to integrate Checkmarx with Splunk. What would be the best way?
Hi Splunkers, for an add-on I'm making, I need to perform a sourcetype override. The general mechanism is clearly explained in this documentation: Override source types on a per-event basis, and I have used it with different results. If I use a sourcetype as the <spec> in the props.conf file, it works fine; so, if my data is born with sourcetype A, A is put in props.conf as the spec, and I want to override it with B, where B is set in transforms.conf under the proper regex, nothing goes wrong and I achieve the desired result. Now, suppose I want to switch, for the <spec> parameter in the props.conf file, from a sourcetype to a source, and that this source is a file under a specific location. Of course, I could put the full path of the source; but, for different reasons, this path may change in our production environment, so I need to switch from a full path to a partial one; the worst case is where we must change from:

C:\sub1\sub2\sub3.test_file.txt

to:

...\test_file.txt

So, my question is: what is the proper wildcard syntax to achieve this? Until now I have tried:

...\test_file.txt
C:\...\test_file.txt
//C:\...\test_file.txt

but none of them works, and the sourcetype is not overridden.
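For what it's worth, `source::` stanzas in props.conf have their own wildcard rules: `*` matches anything except directory separators, while `...` matches any number of characters including separators, so no drive-letter prefix is needed. Backslashes in the stanza are usually doubled. A sketch, where the transform name is a placeholder:

```
# props.conf -- "..." spans directories, so this matches the file at any depth
[source::...\\test_file.txt]
TRANSFORMS-override_st = set_sourcetype_b
```

If the file could also arrive via a forward-slash path, a second stanza `[source::.../test_file.txt]` covers that variant.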
Hello community. We are ingesting SFTP logs. The log file rotates once every 24h. "Headers" are written into the new file on every rotation, and they get indexed. Unlike every other indexed event, the linecount for these events is 2 instead of 1, so they are pretty easy to spot:

#Date: Mon Jan 10 00:00:00 CEST 2020
#Fields: date time ip port .........

I've seen examples about skipping header lines in CSV files, but this is a text file. It is not a huge issue, but still something that is a bit irritating. Is it possible to skip these lines so they are not forwarded/indexed? How would I go about accomplishing this? Thank you in advance.
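If the data passes through a full Splunk instance (indexer or heavy forwarder) for parsing, a nullQueue transform keyed on the leading `#` is the usual approach; the sourcetype and stanza names below are placeholders:

```
# props.conf (on the indexer or heavy forwarder -- sourcetype is a placeholder)
[sftp_log]
TRANSFORMS-drop_headers = sftp_null_headers

# transforms.conf
[sftp_null_headers]
REGEX = ^#(Date|Fields):
DEST_KEY = queue
FORMAT = nullQueue
```

This won't take effect on a universal forwarder, since the UF doesn't parse events; the files must live where parsing happens.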
Hi, is there any reference available that describes how to interpret different charts and graphs, with practical examples? E.g., when memory usage increases step by step, it means we have a memory leak.
Thanks
I have a standalone instance of Splunk. I am running both the Splunk Add-on for Unix and Linux and the Splunk App for Unix. Since the Splunk App for Unix has reached end of life and is no longer required in my deployment, I am looking to remove it. Initially I tried just using the Splunk command:

./splunk remove app splunk_app_for_nix

However, I noticed that this impacts the "os" index used by the Splunk Add-on for Unix and Linux. The index no longer appears in the web GUI under Settings > Indexes. If I look in the CLI, I can still see data in /opt/splunk/os/db, so the data still appears to be there, but it is apparently not being used. I am getting a message saying "Received event for unconfigured/disabled/deleted index=os ...", so I am not entirely sure what the status of this index is now. What is the best way to remove this app without affecting the index?
Thanks,
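One cautious sketch (not an official procedure): the symptoms suggest the [os] index was defined in the app's own indexes.conf, so deleting the app unconfigured the index while leaving its buckets on disk. Preserving the stanza outside the app before (or after) removal should bring it back; the path values below are placeholders, so copy the real ones from the app's indexes.conf:

```
# $SPLUNK_HOME/etc/system/local/indexes.conf (or a small dedicated app)
# Path values are placeholders -- copy the originals from the app's indexes.conf.
[os]
homePath   = $SPLUNK_DB/os/db
coldPath   = $SPLUNK_DB/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb
```

After a restart, the index definition no longer depends on the app, so removing the app leaves the data reachable.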
Hi All, I'm seeking a little help to drop/offboard a device. We don't have any HF in our environment; we use our indexer as our HF as well. There is a Windows device xyz in our environment, we don't want any logs from this xyz server, and it sends logs directly to the indexer, not via a deployment server. So I created 2 files, props.conf and transforms.conf.

In props.conf:

[sourcetype name]
TRANSFORMS-win=eventlogs

In transforms.conf:

REGEX=xyz
DEST_KEY=queue
FORMAT=nullQueue

And I restarted the indexer, but it is not working; I can still see the logs. Can anyone please suggest where I went wrong?
Thank you in advance
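For comparison, a sketch of the same two files with the pieces that are commonly missed: transforms.conf needs a stanza header whose name matches the value after TRANSFORMS-, the props stanza must be the real sourcetype of those events (or a host:: stanza), and both files must live on the instance that parses the data, which in this setup is the indexer:

```
# props.conf -- <your_sourcetype> is a placeholder for the events' actual
# sourcetype; a [host::xyz] stanza is an alternative if the hostname is the key
[<your_sourcetype>]
TRANSFORMS-win = eventlogs

# transforms.conf -- the stanza name must match "eventlogs" referenced above
[eventlogs]
REGEX = xyz
DEST_KEY = queue
FORMAT = nullQueue
```

Also check that REGEX actually matches the raw event text: if "xyz" only appears in the host metadata and not in _raw, key the props stanza on [host::xyz] and use a catch-all REGEX = . instead.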