All Posts

We have been seeing these false positives lately too, and with the help of the following search we found out that our peers ran into authTokenConnectionTimeout, which defaults to 5 seconds. authTokenConnectionTimeout is located in distsearch.conf.

index=_internal (GetRemoteAuthToken OR DistributedPeer OR DistributedPeerManager) source!="/opt/splunk/var/log/splunk/remote_searches.log"
| rex field=_raw "Peer:(?<peer>\S+)"
| rex field=_raw "peer: (?<peer>\S+)"
| rex field=_raw "uri=(?<peer>\S+)"
| eval peer = replace(peer, "https://", "")
| rex field=_raw "\d+-\d+-\d+\s+\d+:\d+:\d+.\d+\s+\S+\s+(?<loglevel>\S+)\s+(?<process>\S+)"
| rex field=_raw "\] - (?<logMsg>.+)"
| reverse
| eval time=strftime(_time, "%d.%m.%Y %H:%M:%S.%Q")
| bin span=1d _time
| stats list(*) as * by peer _time
| table peer time loglevel process logMsg
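If the timeout turns out to be the culprit, raising it is a small change. A minimal sketch, assuming the setting lives under the [distributedSearch] stanza on the search head; 10 seconds is purely an illustrative value, not a recommendation:

# distsearch.conf (search head)
[distributedSearch]
# authTokenConnectionTimeout defaults to 5 (seconds); raise it if peers
# are slow to answer the GetRemoteAuthToken call
authTokenConnectionTimeout = 10

A change to distsearch.conf generally needs a restart (or a rolling restart in a search head cluster) to take effect.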
Hello Experts, I'm facing a challenge where I need to automatically load data from Python script results into a metric index in Splunk. Is it possible? I'd appreciate any guidance or examples on how to achieve this. Thanks
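One common way to do this (not the only one) is to have the script POST its results to an HTTP Event Collector (HEC) token that is allowed to write to the metric index. A minimal sketch, assuming a reachable HEC endpoint; the hostname, token, metric name and dimension below are placeholders, not values from the original question:

import time
import requests

# Hypothetical values - replace with your own environment details
HEC_URL = "https://splunk.example.com:8088/services/collector"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # token whose default index is a metric index

def send_metric(name, value, dimensions=None):
    """Send one measurement to a metric index via HEC."""
    payload = {
        "time": time.time(),
        "event": "metric",
        "source": "my_python_script",
        # metric_name:<name> carries the value; any extra keys become dimensions
        "fields": {f"metric_name:{name}": value, **(dimensions or {})},
    }
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        json=payload,
        verify=False,  # only while testing without proper certificates
        timeout=10,
    )
    resp.raise_for_status()

# Example: push one CPU measurement with a dimension
send_metric("cpu.usage.percent", 42.5, {"host_dim": "webserver01"})

Other options would be a scripted/modular input on a forwarder, or writing the results to a file that a UF monitors; HEC is usually the simplest when the script runs outside Splunk.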
I'm having the same issue with weekday and month names. Is this something that will be fixed, or do we need to work around it ourselves creatively?

mer. 13 déc. 2023 23:31:20 CET file_hash=96def1...
mar. 19 déc. 2023 22:06:55 CET user=x ...
mar. 19 déc. 2023 09:16:13 CET user=y ...
The issue is not fixed after upgrading to 9.1.2. The issue occurred on a search head cluster. My settings in outputs.conf:

[indexer_discovery:target_master]
pass4SymmKey = **********

[tcpout]
defaultGroup = default_indexers
forceTimebasedAutoLB = true
maxQueueSize = 7MB
useACK = true

[tcpout:default_indexers]
server = **********01:9997,**********02.lan:9997
I use a workaround. Background of what I did: I wrote a bash script to create a Splunk diag as the splunk user and move the diag file to a folder of my choice.

[root@myserver ~]# vi test_script.sh

sudo -i -u splunk bash << EOF
# Create Splunk Diagnostic Report
/opt/splunk/bin/splunk diag > /opt/splunk/mylog.log
EOF

# Use grep to extract the path from the output
diag_path=$(cat /opt/splunk/mylog.log | grep -oP '(?<=Splunk diagnosis file created: )/.*\.tar\.gz')
echo $diag_path

# Check if the path is not empty
if [ -n "$diag_path" ]; then
    # Copy the file to /root/mydesiredfolder
    mv "$diag_path" /root/mydesiredfolder
    echo "File copied successfully to /root/mydesiredfolder"
else
    echo "Path not found or command did not generate the expected output"
fi

# Cleanup
rm /opt/splunk/mylog.log

##########

The idea behind it is that when running ./splunk diag, it prints output like this:

Splunk diagnosis file created: /opt/splunk/diag-servername-2023-12-22_08-19-01.tar.gz
I would like to write an answer, but could you please tell me the final form of the output you would like to create?
Thanks for the response. However, we're using the Splunk ES app, and per the representative we're talking with, we need SC4S so that the events will be mapped correctly in the app; otherwise we need to manually adjust the mapping. We want to minimize the configuration we have to do manually since we're just starting with our deployment.
Hi @jbv, for syslog I prefer to use an rsyslog server to ingest and write the syslog events into files that I can then ingest with a Universal or Heavy Forwarder. This solution also keeps working when Splunk is down. I don't like SC4S because it isn't so easy to configure; it's based on syslog-ng, which is being replaced by rsyslog, and I have only seen it used with UFs. There is another advantage to using rsyslog together with an HF: if you have multiple inputs arriving on the same port, configuring those inputs on the HF means working on the conf files and restarting Splunk every time, whereas with rsyslog you modify only /etc/rsyslog.conf and its restart is almost immediate. Ciao. Giuseppe
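For reference, a minimal sketch of the kind of rsyslog setup being described, splitting events received on one port into per-host files that a UF/HF can then monitor; the port, file paths and ruleset name are assumptions for illustration, not taken from the post above:

# /etc/rsyslog.d/splunk-syslog.conf (hypothetical example)
module(load="imudp")                                  # accept syslog over UDP
input(type="imudp" port="514" ruleset="toFiles")

# one file per sending host, easy to pick up with a [monitor://] input
template(name="perHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")

ruleset(name="toFiles") {
    action(type="omfile" dynaFile="perHostFile")
}

After editing, restarting rsyslog (e.g. systemctl restart rsyslog) applies the change, which is the fast turnaround mentioned above.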
Hi everyone, I am in trouble and I need help. We are performing an upgrade of Splunk ITSI. Following the ITSI upgrade path, we are now handling the following:

4.9.x → 4.11.x → 4.13.x → 4.15.x

The problem occurs at this step: 4.9.6 → 4.11.6

The server configuration is a cluster. The Splunk versions are as follows.

Search head: Splunk 9.1.2
Indexer: Splunk 9.1.2
API (HF): Splunk 9.1.2

The migration logs at the time the problem occurred are as follows.
--------------------------------------------------------
2023/12/22 14:45:20.640
2023-12-22 14:45:20,640+0900 process:2531 thread:MainThread ERROR [itsi.migration] [itsi_migration:4543] [run_migration] Migration from 4.9.2 to 4.10.0 did not finish successfully.
host = logmng-st-splunk_srch01  source = /opt/splunk/var/log/splunk/itsi_migration_queue.log  sourcetype = itsi_internal_log

2023/12/22 14:45:20.636
2023-12-22 14:45:20,636+0900 process:2531 thread:MainThread ERROR [itsi.migration] [__init__:1433] [exception] 4.9.2 to 4.10.0: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_entity_management_rules?fields=object_type; [{'type': 'ERROR', 'code': None, 'text': 'An object with name=itsi_entity_management_rules does not exist'}]
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-ITOA/lib/migration/migration.py", line 310, in run
    if not command.execute():
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/itsi_migration.py", line 249, in execute
    backup.execute()
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 1244, in execute
    self.backup()
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 973, in backup
    raise e
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 942, in backup
    object_types = self._get_object_type_from_collection(collection)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 601, in _get_object_type_from_collection
    rsp, content = simpleRequest(location, sessionKey=self.session_key, raiseAllErrors=False, getargs=getargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 669, in simpleRequest
    raise splunk.ResourceNotFound(uri, extendedMessages=extractMessages(body))
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_entity_management_rules?fields=object_type; [{'type': 'ERROR', 'code': None, 'text': 'An object with name=itsi_entity_management_rules does not exist'}]
host = logmng-st-splunk_srch01  source = /opt/splunk/var/log/splunk/itsi_migration_queue.log  sourcetype = itsi_internal_log

2023/12/22 14:45:20.635
2023-12-22 14:45:20,635+0900 process:2531 thread:MainThread ERROR [itsi.migration] [__init__:1433] [exception] 4.9.2 to 4.10.0: BackupRestore: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_entity_management_rules?fields=object_type; [{'type': 'ERROR', 'code': None, 'text': 'An object with name=itsi_entity_management_rules does not exist'}]
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 1244, in execute
    self.backup()
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 973, in backup
    raise e
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 942, in backup
    object_types = self._get_object_type_from_collection(collection)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 601, in _get_object_type_from_collection
    rsp, content = simpleRequest(location, sessionKey=self.session_key, raiseAllErrors=False, getargs=getargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 669, in simpleRequest
    raise splunk.ResourceNotFound(uri, extendedMessages=extractMessages(body))
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_entity_management_rules?fields=object_type; [{'type': 'ERROR', 'code': None, 'text': 'An object with name=itsi_entity_management_rules does not exist'}]
host = logmng-st-splunk_srch01  source = /opt/splunk/var/log/splunk/itsi_migration_queue.log  sourcetype = itsi_internal_log
--------------------------------------------------------

How do you deal with this error? Any help would be appreciated!

Thanks, shinsuke
Hi, we initially deployed a heavy forwarder on-prem to collect data from our passive devices (syslog sources, security devices, etc.). However, after talking with a Splunk representative, he recommended using Splunk Connect for Syslog (SC4S) to collect the data. According to him, SC4S is the recommended collection method for passive devices and it also helps with parsing/normalization of the data when it goes to our Enterprise Security. Can both the HF and SC4S be on the same server? If yes, how would that work? Can SC4S send data directly to the cloud indexers? And going forward, should we just use SC4S instead of the on-prem HF for the passive devices? Thank you
Hi there. I would like to know the following about the Splunk health engine (Enterprise 8.2.12, 3-node SHC):

1. HOW does it decide that a saved search is a "lagged" search? Is it based on that search's previous 24 hours of runs, averaging the run times? We have many heavy searches that also end up taking 10-15 minutes.
2. WHY do I sometimes find 100% skipped searches in the skipped search monitor (1 out of 1, when we have hundreds of scheduled searches), WHILE searching the scheduler log for the last 24 hours shows something like 70,000 successes / 68 skipped (searches scheduled every minute or two; concurrency is a factor I calculate and it is not a problem)? Why 100%? Is it a bug? I also checked the saved searches scheduled once per day, but all of them (a few) are in "success" status.
3. When these strange things occur, sometimes restarting the cluster makes the health monitor reset without warnings. Other times, in reverse, restarting the cluster makes a clean health monitor start giving the warnings from points 1 and 2. Strange behaviour!

Thanks.
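For reference, a sketch of the kind of scheduler-log check described in point 2 (success vs. skipped counts over the last 24 hours); the grouping and percentage calculation are a reasonable way to reproduce the numbers, not necessarily what the health monitor itself computes:

index=_internal sourcetype=scheduler earliest=-24h
| stats count by status
| eventstats sum(count) as total
| eval pct=round(100*count/total, 2)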
@richgalloway  I am still getting the same error. Are you able to copy the sample data and ingest it into Splunk to see the errors I am getting?  Thanks
These settings may be cleaner, but I'm not sure what I'm trying to fix.

SHOULD_LINEMERGE=false
LINE_BREAKER=()library!
NO_BINARY_CHECK=true
TIME_FORMAT=%m/%d/%Y-%H:%M:%S
TIME_PREFIX=!\w{3,4}!
SEDCMD-null=s/\<Header>[\s\S]*?\<\/Header>//g
MAX_TIMESTAMP_LOOKAHEAD=80
@richgalloway  I am getting the error below. I can't even get Splunk to interpret the data as regular text. 
What was the error?
Hi, this https://community.splunk.com/t5/Getting-Data-In/Only-first-line-from-logfile-is-logged/m-p/672184#M112607 is probably quite a similar case? r. Ismo
Hi, another thing you should do is check whether the UF has read that file or not. You can do this by running splunk list inputstatus on the UF side. r. Ismo
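For reference, that command is run from the forwarder's bin directory; the install paths below are the usual defaults and may differ on your host:

/opt/splunkforwarder/bin/splunk list inputstatus
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list inputstatus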
Hi @richgalloway,    Thanks for getting back to me! I tried the props.conf you proposed and got an error. Can you please try to upload the sample data with the sourcetype config you provided and see if you have any luck. 
You may need to modify one or both settings below in your inputs.conf to get Splunk to ingest the appended logs. It's kind of hard to say without seeing a sample of your full log - with redacted info; or you can read the config details below and make the determination yourself. Can you search in index=_internal for the specific host with the search string of the log file name that you're interested in? It should show what the UF is doing when it monitors that file path. Commonly, folks tend to use crcSalt = <SOURCE> when they have issues with Splunk not ingesting a log file.

crcSalt = <string>
* Use this setting to force the input to consume files that have matching CRCs, or cyclic redundancy checks.
* By default, the input only performs CRC checks against the first 256 bytes of a file. This behavior prevents the input from indexing the same file twice, even though you might have renamed it, as with rolling log files, for example. Because the CRC is based on only the first few lines of the file, it is possible for legitimately different files to have matching CRCs, particularly if they have identical headers.
* If set, <string> is added to the CRC.
* If set to the literal string "<SOURCE>" (including the angle brackets), the full directory path to the source file is added to the CRC. This ensures that each file being monitored has a unique CRC. When 'crcSalt' is invoked, it is usually set to <SOURCE>.
* Be cautious about using this setting with rolling log files; it could lead to the log file being re-indexed after it has rolled.
* In many situations, 'initCrcLength' can be used to achieve the same goals.
* Default: empty string

initCrcLength = <integer>
* How much of a file, in bytes, that the input reads before trying to identify whether it has already seen the file.
* You might want to adjust this if you have many files with common headers (comment headers, long CSV headers, etc) and recurring filenames.
* Cannot be less than 256 or more than 1048576.
* CAUTION: Improper use of this setting causes data to be re-indexed. You might want to consult with Splunk Support before adjusting this value - the default is fine for most installations.
* Default: 256 (bytes)

https://docs.splunk.com/Documentation/Splunk/latest/Admin/inputsconf
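To make that concrete, a minimal sketch of how either setting would look in the monitor stanza from the original question; the values are illustrative, not a recommendation for this specific environment:

[monitor://F:\Polarion\data\logs\main\*.log.*]
sourcetype = catalina
index = ito_app
disabled = false
# Option 1: salt the CRC with the full path so files with identical heads are treated as distinct
crcSalt = <SOURCE>
# Option 2 (alternative): hash more of the file head if many files share identical headers
# initCrcLength = 1024

Mind the caution above about rolling log files before enabling crcSalt = <SOURCE> on a path that matches rolled copies.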
We recently configured a new input that has been successfully ingesting logs, but it appears to be working intermittently. There are large gaps in the logs, which we have confirmed are present and being created regularly on the source server. Example: logs are captured for 8 December and 16 December only, so the logs from 9 December to 15 December are not captured. We have created a custom app on our deployment server and pushed that app to all the deployment clients. The data flow is source --> Universal Forwarder --> Heavy Forwarder --> Splunk Cloud.

We have created inputs.conf:

[monitor://F:\Polarion\data\logs\main\*.log.*]
sourcetype = catalina
index = ito_app
disabled = false
ignoreOlderThan = 7d
initCrcLength = 10000

Please help with this issue. Thank you