All Topics


Due to an issue, we have to decommission our existing Heavy Forwarder and move all the sources, data inputs, and Splunk TA apps/add-ons to a new server where a Heavy Forwarder is already installed. Both Heavy Forwarders are on the same version and run in a Linux environment. I just need to understand how we can easily move all the existing .conf files, Splunk TA apps/add-ons, and data inputs, so that we can move to the new HF server without any data loss. Please also add a link to the relevant Splunk documentation for Linux if possible.
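One way to sketch the move, assuming default /opt/splunk installs on both Linux hosts (the paths and hostname below are placeholders): almost all input and TA configuration lives under $SPLUNK_HOME/etc/apps, plus etc/system/local if you customized it, so archiving and copying those directories carries the configuration over.

```
# On the old Heavy Forwarder: stop Splunk, then archive apps and local system config
/opt/splunk/bin/splunk stop
tar -czf hf-config.tar.gz -C /opt/splunk/etc apps system/local

# Copy the archive to the new Heavy Forwarder (hostname is a placeholder)
scp hf-config.tar.gz splunk@new-hf.example.com:/tmp/

# On the new Heavy Forwarder: unpack into etc and restart
tar -xzf /tmp/hf-config.tar.gz -C /opt/splunk/etc
/opt/splunk/bin/splunk restart
```

One caveat: file-monitor read checkpoints live in $SPLUNK_HOME/var/lib/splunk/fishbucket, so if the new HF will monitor the same files, decide whether to copy that directory too or accept re-reading from scratch.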
I am trying to bring in some .txt log files using the Splunk forwarder. There are several logs in the directory, such as Log.txt, 10Log.txt, 20Log.txt, etc. These files are changed daily, and the 10Log.txt, 20Log.txt, etc. files are written to daily. So far, I can only get Splunk to ingest the Log.txt file and nothing else. My inputs.conf is currently as below. I have tried monitoring just *.txt with the same results: only Log.txt is read/ingested.

[monitor://E:\Logs\CIR_Remote\*Log.txt]
disabled = false
sourcetype = LOG4NET
index = log4net
initCrcLength = 1024

Any input would be appreciated!
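A common cause of this symptom is that the files begin with identical bytes, so the forwarder's initial CRC check treats them as the same file and only reads one. A sketch of a possible fix, extending the CRC window and salting the checksum with the file path (crcSalt = <SOURCE> is a literal value, not a placeholder):

```
[monitor://E:\Logs\CIR_Remote\*Log.txt]
disabled = false
sourcetype = LOG4NET
index = log4net
# Hash more of the file header so near-identical headers differ
initCrcLength = 2048
# Include the full path in the checksum so each file is tracked separately
crcSalt = <SOURCE>
```

Note that crcSalt = <SOURCE> can cause re-indexing when files are renamed (e.g. during rotation), so test on one directory first.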
Hello, I know we can use the Splunk GUI to create source types, but how would I create a new source type from the CLI or using props.conf? Any help will be highly appreciated, thank you. Will this props.conf create the new source type test:audit if it doesn't exist?

[test:audit]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]*)<MODTRANSAUDTRL>
TIME_PREFIX = <TIMESTAMP>
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%2N
MAX_TIMESTAMP_LOOKAHEAD = 24
TRUNCATE = 1000
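In general, yes: a props.conf stanza like the above defines the parsing rules for a source type, and the source type comes into existence as soon as an input assigns it. A minimal sketch, assuming a file monitor (the monitored path is a placeholder):

```
# inputs.conf -- assigning the sourcetype here is what brings it into use;
# the [test:audit] stanza in props.conf controls how those events are parsed
[monitor:///var/log/myapp/audit.log]
sourcetype = test:audit
index = main
```

Put both files in an app's local directory (e.g. $SPLUNK_HOME/etc/apps/<your_app>/local) on the instance that parses the data, then restart; no GUI step is required.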
#Splunk t-shirt idea
I've read all the articles and past questions, but I must be missing something. Our requirement is simple: 6 months searchable, 6 months frozen, then delete. But there doesn't seem to be an easy setting for anything below cold that says "6 months before rolling"; it all seems to be driven by data sizes. Currently our hot/warm/cold disk space is full and frozen is empty.

[ns-switches]
homePath = volume:primary/ns-switches/db
coldPath = volume:primary/ns-switches/colddb
thawedPath = $SPLUNK_DB/ns-switches/thaweddb
maxTotalDataSizeMB = 512000
maxDataSize = auto_high_volume
coldToFrozenDir = /splunkfrozen/idx1/ns-switches/frozendb
frozenTimePeriodInSecs = 4320000
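For what it's worth, frozenTimePeriodInSecs is the age-based control for how long buckets stay searchable (hot/warm/cold), and the configured 4320000 seconds is only 50 days. A sketch for roughly six months searchable (about 180 days = 15552000 seconds), keeping the existing paths:

```
[ns-switches]
homePath = volume:primary/ns-switches/db
coldPath = volume:primary/ns-switches/colddb
thawedPath = $SPLUNK_DB/ns-switches/thaweddb
maxTotalDataSizeMB = 512000
maxDataSize = auto_high_volume
coldToFrozenDir = /splunkfrozen/idx1/ns-switches/frozendb
# ~180 days searchable before buckets roll to frozen
frozenTimePeriodInSecs = 15552000
```

Two caveats: buckets also roll to frozen when the index hits maxTotalDataSizeMB regardless of age, and the "6 months frozen, then delete" half is outside indexes.conf, since frozen buckets are no longer managed by Splunk; pruning coldToFrozenDir is typically done with an external job such as a cron task.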
Hi Team, I have multiple monitors on multiple forwarders and multiple tcpout groups. I need to use the forwarder hostname to route each monitor to its respective tcpout group. Is there a configuration that can provide this forwarder-host-based routing? Thanks in advance.
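One pattern that may fit, sketched on the assumption that routing happens where parsing happens (a heavy forwarder or indexer, not a universal forwarder): select events by host in props.conf and rewrite _TCP_ROUTING in a transform. The host pattern and group names below are placeholders.

```
# props.conf
[host::web-*]
TRANSFORMS-route_by_host = route_web_hosts

# transforms.conf
[route_web_hosts]
SOURCE_KEY = MetaData:Host
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = web_indexers

# outputs.conf
[tcpout:web_indexers]
server = idx-web-1.example.com:9997
```

On universal forwarders a simpler alternative is often to set _TCP_ROUTING directly per input stanza in inputs.conf, since each forwarder already knows which host it is, and deploy per-host inputs.conf files via deployment server server classes.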
  payload: Message { channel=EMAIL , type=security_event_postinfection_admin , locale=it_IT , recipientAddress=LIOUDMILA@ME.COM, data=[MESSAGEDATA { key=domain, value=https://okt.to/ , type=null } , MessageData { key=date_time , value=2022-03-24T22:22:48.809 , type=null } MessageData { key=policy , value=botnet , type=null } , MessageData {key=content_categories , value=[malware] , type=null } , MessageData { key=manfacturer , value=Intel , type=null } ]}  
Hello, I've been trying to find and download the Splunk Exchange app from the archive, but I couldn't find it. I know the app has already hit EOL, but I still need it. Does anybody know where I can download it, or could you share a download link? Thank you, Christian
Hi folks, I've been suffering from a creative crisis the last few days and am looking for some brainstorming ideas: I'm trying to come up with use case ideas for SSO (particularly from Okta logs). Does anyone have examples of how to leverage these logs, and what I can do with them in terms of reports and alerts? All suggestions are more than welcome!
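As one starting point, spikes in failed sign-ins per user are a common SSO alert. A sketch, where the index, sourcetype, and Okta field names are assumptions to check against your own data (they follow the Splunk Add-on for Okta's conventions):

```
index=okta sourcetype="OktaIM2:log" eventType="user.session.start" outcome.result=FAILURE
| bin _time span=1h
| stats count as failures by _time, actor.alternateId
| where failures > 5
```

Other ideas in the same vein: logins from new geographies per user, MFA fatigue (bursts of denied push challenges), and dormant accounts suddenly authenticating.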
So I am looking for the number of a specific event (sign-ins) deduped by user, which is simple. The challenge I am having is that I need the results deduped by date as well. So if I am looking at a week's worth of data, I would like to see how many sign-ins happened each day, deduped by user. Each user would only appear once each day but could appear multiple times over the course of the week. Does this make sense? Please let me know if I can clarify anything, and thanks in advance for any and all help.
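"Each user counted once per day" is exactly a distinct count per day. A sketch (the base search and the user field name are assumptions):

```
index=auth sourcetype=signin
| bin _time span=1d
| stats dc(user) as unique_signins by _time
```

If you need the individual rows rather than the count, something like | eval day=strftime(_time, "%Y-%m-%d") | dedup user day keeps one event per user per day.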
Hello Experts, I am facing difficulty with index-time field extraction. My sample log file format:

Time stamp: Fri Mar 18 00:00:49 2022
File: File_name_1
Renamed to: Rename_1
Time stamp: Fri Mar 18 00:00:50 2022
File: File_name_1
Renamed to: Rename_1
Time stamp: Fri Mar 18 00:00:51 2022
File: File_name_1
Renamed to: Rename_1
Time stamp: Fri Mar 18 00:00:52 2022
File: File_name_1
Renamed to: Rename_1
Time stamp: Fri Mar 18 00:00:53 2022
File: File_name_1
Renamed to: Rename_1

props.conf
[demo]
CHARSET = AUTO
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 24
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TIME_FORMAT = %a %b %d %H:%M:%S %Y
TIME_PREFIX = ^Time stamp:\s+
TRANSFORMS-extractfield = extract_demo_field
TRUNCATE = 100000

transforms.conf
[extract_demo_field]
REGEX = ^Time stamp:\s*(?<timeStamp>.*)$\s*^File:\s*(?<file>.*)$\s*^Renamed to:\s+(?<renameFile>.*)$
FORMAT = time_stamp::$1 file::$2 renamed_to::$3
WRITE_META = true
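For what it's worth, the likely blocker is event breaking: LINE_BREAKER = ([\r\n]+) splits on every newline, so the three-line record may never reach the transform as a single event and the multi-line REGEX cannot match. A sketch of an alternative that breaks events only where a new record starts (the (?m) flag lets ^ and $ match inside the event):

```
# props.conf -- break before each "Time stamp:" line, no line merging
[demo]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=Time stamp:)
TIME_PREFIX = ^Time stamp:\s+
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 24
TRANSFORMS-extractfield = extract_demo_field

# transforms.conf -- match the whole three-line record within one event
[extract_demo_field]
REGEX = (?m)^Time stamp:\s*(.*)$\s+^File:\s*(.*)$\s+^Renamed to:\s+(.*)$
FORMAT = time_stamp::$1 file::$2 renamed_to::$3
WRITE_META = true
```

Also consider whether you need index-time extraction at all; the same regex as a search-time EXTRACT- in props.conf avoids the indexing overhead.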
How do I resolve a memory spike?
Hi all, good morning. This is my first question on the community, as I have just started learning, so please be gentle if what I am asking is obvious. I have configured an alert that executes two actions: send an email, and run a customized app. I'd like to know how to search the logs generated by the execution of the actions configured on this alert, specifically the logs generated by the app. Could anyone please let me know where to see the logs generated by the app (Python script)? Thanks in advance. Kind regards, Juan
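Output from custom (modular) alert actions, including anything the Python script writes to stderr, is normally captured by splunkd and indexed into _internal. A sketch of a search to start from:

```
index=_internal sourcetype=splunkd component=sendmodalert
```

To narrow it to one action, append action="my_custom_action" to that search (the action name is a placeholder). On disk the same messages land in $SPLUNK_HOME/var/log/splunk/splunkd.log, and the built-in email action logs to python.log in the same directory.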
Hi everyone, I have a line chart of some tasks and their durations. I am trying to add the average duration across all tasks as a threshold.

|stats values(duration) as "Duration(Hr)" by Task | sort Task | stats avg(duration) as threshold

I am not getting any results from the query. I have previously added the threshold as a fixed value. Is there a different method to add a threshold value that is calculated rather than fixed?
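The second stats returns nothing because the first stats replaced duration with the renamed "Duration(Hr)", so avg(duration) has no input; a plain stats would also collapse the per-task rows the chart needs. eventstats computes the average and attaches it to every row instead. A sketch, assuming one duration value per task:

```
| stats values(duration) as task_duration by Task
| sort Task
| eventstats avg(task_duration) as threshold
| rename task_duration as "Duration(Hr)"
```

Chart "Duration(Hr)" as the main series and threshold as a line overlay; because threshold is identical on every row, it draws as a flat line.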
Hi, I set up a universal forwarder in a Docker container: 172.17.0.3. I configured it to forward data to 172.17.0.10:9997 (my VirtualBox VM's IP). I first enabled receiving on port 9997 in my Splunk Enterprise instance's web UI. The connection between the VM and Docker is a bridge on the docker0 interface; I can ping the VM from my container and vice versa. I checked the listener using ss -ant | grep "9997" and I got:

LISTEN 0 128 0.0.0.0:9997 0.0.0.0:*

As I'm new to networking, I'm clueless about how to make the connection work. Thank you all for your help.
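Since the VM is listening and ping works, the next step is usually to see what the forwarder itself reports. A sketch of checks to run inside the container, assuming a default /opt/splunkforwarder install:

```
# Confirm the forwarder has the output configured
/opt/splunkforwarder/bin/splunk list forward-server

# outputs.conf should contain something like:
# [tcpout:default-autolb-group]
# server = 172.17.0.10:9997

# Look for connection errors in the forwarder's own log
grep -i tcpout /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20
```

If splunkd.log shows connection refused or timeouts, check the VM's firewall (firewalld/iptables/ufw) and that the VirtualBox network mode actually allows traffic from the docker0 bridge to reach the VM.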
The predefined table names in the add-on don't include a service-ticket-related table, hence I wanted to know the service ticket table name so that I can create a custom input to pull service ticket data from ServiceNow using the ServiceNow add-on. I'm not able to find this information on the ServiceNow website or in the Splunk ServiceNow add-on documentation. Can someone please let me know?
Hi, I need to extract host values from one index (index=1) and see if similar matches exist in other indexes (index=2 and index=3). Below are the details:

index=1 hosta=* hostb=* hostc=*
index=2 hostx=*
index=3 hostx=*

Can someone please help me with an SPL query to find this?
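One way to sketch it, using the field names from your description: normalize the different host fields into one, then count how many indexes each value appears in.

```
(index=1) OR (index=2) OR (index=3)
| eval matched_host=coalesce(hosta, hostb, hostc, hostx)
| stats values(index) as seen_in dc(index) as index_count by matched_host
| where index_count > 1
```

Caveat: coalesce keeps only the first non-null field per event, so if a single index=1 event can carry more than one of hosta/hostb/hostc, split them out first (e.g. with mvappend and mvexpand).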
Hello All, can someone help with this?

1. We have Splunk DB Connect installed on a local Heavy Forwarder (which has Java installed locally) to test a proof of concept. It works fine.
2. Now the Splunk DB Connect app is installed on Splunk Cloud, which requires settings related to the Java environment and Task Server on the Splunk DB Connect -> Configuration -> Settings page.
3. I can't find any documentation for these settings on Splunk Cloud. I have already raised a support ticket, but they are very slow.

Regards, Sree.
Hello, I have built an app with which I collect logs from a REST API. For this I use the checkpoint manager with store type file. I get the following error:

```
2022-03-28 07:57:58,877 +0000 log_level=INFO, pid=7000, tid=MainThread, file=ta_config.py, func_name=set_logging, code_line_no=94 | Set log_level=DEBUG
2022-03-28 07:57:58,878 +0000 log_level=INFO, pid=7000, tid=MainThread, file=ta_config.py, func_name=set_logging, code_line_no=95 | Start mdm_api task
2022-03-28 07:57:58,878 +0000 log_level=DEBUG, pid=7000, tid=MainThread, file=ta_config.py, func_name=_get_checkpoint_storage_type, code_line_no=102 | Checkpoint storage type=auto
2022-03-28 07:57:58,878 +0000 log_level=DEBUG, pid=7000, tid=MainThread, file=ta_checkpoint_manager.py, func_name=_create_state_store, code_line_no=44 | Got checkpoint storage type=auto
2022-03-28 07:57:58,878 +0000 log_level=INFO, pid=7000, tid=MainThread, file=ta_checkpoint_manager.py, func_name=_use_cache_file, code_line_no=93 | Stanza=mdm_api using cached file store to create checkpoint
2022-03-28 07:57:58,878 +0000 log_level=DEBUG, pid=7000, tid=MainThread, file=ta_checkpoint_manager.py, func_name=_create_state_store, code_line_no=64 | Creating file state store, use_cache_file=True, max_cache_seconds=5
2022-03-28 07:57:58,878 +0000 log_level=ERROR, pid=7000, tid=MainThread, file=ta_mod_input.py, func_name=main, code_line_no=287 | api_mdm_v5 task encounter exception
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_mod_input.py", line 283, in main
    cc_json_file=cc_json_file
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_mod_input.py", line 209, in run
    for task_config in task_configs
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_mod_input.py", line 209, in <listcomp>
    for task_config in task_configs
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_data_client.py", line 68, in create_data_collector
    dataloader)
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_data_collector.py", line 61, in __init__
    task_config)
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_checkpoint_manager.py", line 40, in __init__
    task_config[c.appname]
  File "C:\Program Files\Splunk\etc\apps\TA-mdm_api_v5\bin\ta_mdm_api_v5\aob_py3\cloudconnectlib\splunktacollectorlib\data_collection\ta_checkpoint_manager.py", line 71, in _create_state_store
    max_cache_seconds=max_cache_seconds
TypeError: get_state_store() got an unexpected keyword argument 'max_cache_seconds'
```
Hi, I have the following JSON-string logs. I would like to extract unique JSON field values: the query should go over all the message fields, extract a specific field's values from a JSON array ("name"), and dedupe them. Could someone help with a Splunk query?

Raw log:
{
  "@timestamp": "2022-03-28T07:38:45.123+00:00",
  "message": "request - {\"metrics\":[{\"name\":\"m1\",\"downsample\":\"sum\"},{\"name\":\"m2\",\"downsample\":\"sum\"},{\"name\":\"m1\",\"downsample\":\"sum\"}]}"
}

The JSON embedded in message:
{
  "metrics": [
    { "name": "m1", "downsample": "sum" },
    { "name": "m2", "downsample": "sum" },
    { "name": "m1", "downsample": "sum" }
  ]
}

Expected output:
m1
m2
...
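A sketch of one approach: strip the "request - " prefix with rex, parse the remainder with spath, and dedupe (the base search is a placeholder; message is assumed to already be an extracted field):

```
index=myapp sourcetype=myapp:json
| rex field=message "request - (?<payload>\{.*\})"
| spath input=payload path=metrics{}.name output=metric_name
| stats values(metric_name) as unique_names
```

stats values() already returns each name once across all events; if you instead want the deduped list per event, add | eval metric_name=mvdedup(metric_name) before the stats.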