Thanks for the response. However, we're using the Splunk ES app, and per the representative we're talking with, we need SC4S so that the events will be mapped correctly in the app; otherwise we would need to adjust the mappings manually. We want to minimize the configuration we have to do by hand since we're just starting with our deployment.
Hi @jbv, for syslog I prefer to use an rsyslog server to receive the traffic and write it to files that I can then ingest with a Universal or Heavy Forwarder. This solution also keeps collecting data when Splunk is down. I don't like SC4S because it isn't so easy to configure; it's based on syslog-ng, which many distributions have replaced with rsyslog, and I have only seen it used with UFs. There is another advantage to using rsyslog with an HF: if you have multiple inputs on the same port, configuring those inputs on the HF means editing the conf files and restarting Splunk every time, whereas with rsyslog you modify only /etc/rsyslog.conf and the restart is almost immediate. Ciao. Giuseppe
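As a rough sketch of the approach described above (the port, file path, and template name here are hypothetical examples, not a recommended production layout; adjust them for your environment), a minimal /etc/rsyslog.conf fragment that listens for syslog and writes each sending host to its own file for a forwarder to monitor could look like:

```
# Load listeners for UDP and TCP syslog (port 514 is an example)
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# Write each sending host to its own file under /var/log/remote/
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/messages.log")
action(type="omfile" dynaFile="PerHostFile")
```

A UF or HF would then monitor /var/log/remote/ with a [monitor://...] input; because rsyslog keeps writing the files while Splunk is down, nothing is lost during Splunk restarts.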
Hi everyone, I am in trouble and need help. We are performing an upgrade of Splunk ITSI. Following the ITSI upgrade path, we are going through 4.9.x → 4.11.x → 4.13.x → 4.15.x, and the trouble occurs at the 4.9.6 → 4.11.6 step. The server configuration is a cluster. The Splunk versions are as follows:
Search head: Splunk 9.1.2
Indexer: Splunk 9.1.2
API (HF): Splunk 9.1.2

The migration logs at the time the trouble occurred are as follows:
--------------------------------------------------------
2023-12-22 14:45:20,640+0900 process:2531 thread:MainThread ERROR [itsi.migration] [itsi_migration:4543] [run_migration] Migration from 4.9.2 to 4.10.0 did not finish successfully.
host = logmng-st-splunk_srch01 source = /opt/splunk/var/log/splunk/itsi_migration_queue.log sourcetype = itsi_internal_log

2023-12-22 14:45:20,636+0900 process:2531 thread:MainThread ERROR [itsi.migration] [__init__:1433] [exception] 4.9.2 to 4.10.0: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_entity_management_rules?fields=object_type; [{'type': 'ERROR', 'code': None, 'text': 'An object with name=itsi_entity_management_rules does not exist'}]
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-ITOA/lib/migration/migration.py", line 310, in run
    if not command.execute():
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/itsi_migration.py", line 249, in execute
    backup.execute()
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 1244, in execute
    self.backup()
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 973, in backup
    raise e
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 942, in backup
    object_types = self._get_object_type_from_collection(collection)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 601, in _get_object_type_from_collection
    rsp, content = simpleRequest(location, sessionKey=self.session_key, raiseAllErrors=False, getargs=getargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 669, in simpleRequest
    raise splunk.ResourceNotFound(uri, extendedMessages=extractMessages(body))
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_entity_management_rules?fields=object_type; [{'type': 'ERROR', 'code': None, 'text': 'An object with name=itsi_entity_management_rules does not exist'}]
host = logmng-st-splunk_srch01 source = /opt/splunk/var/log/splunk/itsi_migration_queue.log sourcetype = itsi_internal_log

2023-12-22 14:45:20,635+0900 process:2531 thread:MainThread ERROR [itsi.migration] [__init__:1433] [exception] 4.9.2 to 4.10.0: BackupRestore: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_entity_management_rules?fields=object_type; [{'type': 'ERROR', 'code': None, 'text': 'An object with name=itsi_entity_management_rules does not exist'}]
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 1244, in execute
    self.backup()
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 973, in backup
    raise e
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 942, in backup
    object_types = self._get_object_type_from_collection(collection)
  File "/opt/splunk/etc/apps/SA-ITOA/lib/itsi/upgrade/kvstore_backup_restore.py", line 601, in _get_object_type_from_collection
    rsp, content = simpleRequest(location, sessionKey=self.session_key, raiseAllErrors=False, getargs=getargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 669, in simpleRequest
    raise splunk.ResourceNotFound(uri, extendedMessages=extractMessages(body))
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/data/itsi_entity_management_rules?fields=object_type; [{'type': 'ERROR', 'code': None, 'text': 'An object with name=itsi_entity_management_rules does not exist'}]
host = logmng-st-splunk_srch01 source = /opt/splunk/var/log/splunk/itsi_migration_queue.log sourcetype = itsi_internal_log
--------------------------------------------------------
How should we deal with these errors? Any help would be appreciated! Thanks, shinsuke
Hi, we initially deployed a heavy forwarder on-prem to collect data from our passive devices (syslog sources, security devices, etc.). However, after talking with a Splunk representative, he recommended Splunk Connect for Syslog (SC4S) to collect the data. According to him, SC4S is the recommended collection method for passive devices and also helps with parsing/normalization of the data when it goes to our Enterprise Security. Can both the HF and SC4S run on the same server? If yes, how would that work? Can SC4S send data directly to the cloud indexers? And going forward, should we just use SC4S instead of the on-prem HF for the passive devices? Thank you
Hi there. I would like to understand the Splunk health engine (Enterprise 8.2.12, 3-member SHC):
1. HOW does it decide that a saved search is a lagged search? Is it based on that search's previous 24 hours of runs, averaging the run times? We have many heavy searches that take 10-15 minutes.
2. WHY do I sometimes find 100% skipped searches in the skipped-search monitor (1 of 1, when we have hundreds of scheduled searches), WHILE searching the scheduler log shows something like 70,000 successes / 68 skipped in the last 24 hours (searches scheduled every minute or two; concurrency is a factor I calculate, and there is no problem there)? Why 100%? Is it a bug? I also checked the saved searches scheduled once per day, but all (few) of them are in "success" status.
3. When these strange things occur, sometimes restarting the cluster resets the health monitor and clears the warnings, and other times, in reverse, restarting the cluster makes a clean health monitor start giving the warnings from points 1 and 2. Strange behaviour!
Thanks.
@richgalloway  I am still getting the same error. Are you able to copy the sample data and ingest it into Splunk to see the errors I am getting?  Thanks
These settings may be cleaner, but I'm not sure what I'm trying to fix.
SHOULD_LINEMERGE = false
LINE_BREAKER = ()library!
NO_BINARY_CHECK = true
TIME_FORMAT = %m/%d/%Y-%H:%M:%S
TIME_PREFIX = !\w{3,4}!
SEDCMD-null = s/\<Header>[\s\S]*?\<\/Header>//g
MAX_TIMESTAMP_LOOKAHEAD = 80
@richgalloway  I am getting the error below. I can't even get Splunk to interpret the data as regular text. 
What was the error?
Hi, this https://community.splunk.com/t5/Getting-Data-In/Only-first-line-from-logfile-is-logged/m-p/672184#M112607 is probably quite a similar case? r. Ismo
Hi, another thing you should check is whether the UF is reading that file or not. You can do this by running "splunk list inputstatus" on the UF side. r. Ismo
Hi @richgalloway, thanks for getting back to me! I tried the props.conf you proposed and got an error. Could you please try to upload the sample data with the sourcetype config you provided and see if you have any luck?
You may need to modify one or both of the settings below in your inputs.conf to get Splunk to ingest the appended logs. It's hard to say without seeing a sample of your full log (with redacted info); or you can read the config details below and make the determination yourself. Can you search index=_internal for the specific host with the name of the log file you're interested in? It should show what the UF is doing when it monitors that file path. Commonly, folks use crcSalt = <SOURCE> when they have issues with Splunk not ingesting a log file.

crcSalt = <string>
* Use this setting to force the input to consume files that have matching CRCs, or cyclic redundancy checks.
* By default, the input only performs CRC checks against the first 256 bytes of a file. This behavior prevents the input from indexing the same file twice, even though you might have renamed it, as with rolling log files, for example. Because the CRC is based on only the first few lines of the file, it is possible for legitimately different files to have matching CRCs, particularly if they have identical headers.
* If set, <string> is added to the CRC.
* If set to the literal string "<SOURCE>" (including the angle brackets), the full directory path to the source file is added to the CRC. This ensures that each file being monitored has a unique CRC. When 'crcSalt' is invoked, it is usually set to <SOURCE>.
* Be cautious about using this setting with rolling log files; it could lead to the log file being re-indexed after it has rolled.
* In many situations, 'initCrcLength' can be used to achieve the same goals.
* Default: empty string

initCrcLength = <integer>
* How much of a file, in bytes, that the input reads before trying to identify whether it has already seen the file.
* You might want to adjust this if you have many files with common headers (comment headers, long CSV headers, etc.) and recurring filenames.
* Cannot be less than 256 or more than 1048576.
* CAUTION: Improper use of this setting causes data to be re-indexed. You might want to consult with Splunk Support before adjusting this value; the default is fine for most installations.
* Default: 256 (bytes)

https://docs.splunk.com/Documentation/Splunk/latest/Admin/inputsconf
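To see why identical headers can fool the default check, here is a small Python sketch of the idea behind the 256-byte CRC (an illustration of the concept only, not Splunk's actual file-tracking code; the file contents are made up):

```python
import zlib

# Two rolled log files that share an identical 256-byte header
# but differ afterwards -- the situation the docs above warn about.
header = b"#" * 256
file_a = header + b"event one\n"
file_b = header + b"event two\n"

# Default behavior: CRC over only the first 256 bytes (initCrcLength = 256).
# Both files produce the same CRC, so the second looks "already seen".
print(zlib.crc32(file_a[:256]) == zlib.crc32(file_b[:256]))  # True

# Reading deeper into the file (like raising initCrcLength) reaches the
# bytes that differ, so the two files get distinct CRCs.
print(zlib.crc32(file_a[:1024]) == zlib.crc32(file_b[:1024]))  # False
```

crcSalt = &lt;SOURCE&gt; attacks the same problem from another angle: mixing the file's path into the CRC makes the checksum unique per path even when the leading bytes match.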
We recently configured a new input that is successfully ingesting logs, but it appears to be working intermittently. There are large gaps in the logs, which we have confirmed are present and being created regularly on the source server. For example, logs are captured for 8 December and 16 December only, so logs from 9 December to 15 December are missing. We created a custom app on our deployment server and pushed it to all the deployment clients. The data flow is source --> Universal Forwarder --> Heavy Forwarder --> Splunk Cloud. Our inputs.conf:

[monitor://F:\Polarion\data\logs\main\*.log.*]
sourcetype = catalina
index = ito_app
disabled = false
ignoreOlderThan = 7d
initCrcLength = 10000

Please help with this issue. Thank you
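One setting worth double-checking here (a hedged suggestion based on the documented behavior of ignoreOlderThan, not a confirmed diagnosis of this deployment): once a monitored file's modification time falls outside the ignoreOlderThan window, the monitor ignores that file permanently, even if it is written to again later, which can look exactly like multi-day gaps. A test stanza that drops the setting might look like:

```
[monitor://F:\Polarion\data\logs\main\*.log.*]
sourcetype = catalina
index = ito_app
disabled = false
# ignoreOlderThan = 7d   <- removed for testing; files that age past the
#                           threshold are excluded from monitoring for good
initCrcLength = 10000
```

Searching index=_internal for the affected source paths on the UF should confirm whether the files were skipped for this reason or for another (e.g. CRC collisions on rolled files).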
And you can even convert back to single-site: https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Converttosinglesite
Hi, one reason for that is bad timestamp handling, or data from different time ranges going into one index. There should be some previous posts about this on the community. r. Ismo
Thank you!
Agent configuration and maintenance don't have to be complex. With our new Smart Agent, managing upgrades and installations is as easy as a few clicks.  See for yourself! Check out this click-throu... See more...
Agent configuration and maintenance don't have to be complex. With our new Smart Agent, managing upgrades and installations is as easy as a few clicks.  See for yourself! Check out this click-through demo to see it in action: 
SEDCMD settings must contain either an s or y command, not just a regex. To properly extract a timestamp, the props stanza should contain TIME_PREFIX, TIME_FORMAT, and MAX_TIMESTAMP_LOOKAHEAD settings.

[sourcetype_name]
disabled = false
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 80
TIME_FORMAT = %m/%d/%Y-%H:%M:%S
TIME_PREFIX = \d!
LINE_BREAKER = ([\r\n]+)library!
SEDCMD-null = s/\<Header>[\s\S]*?\<\/Header>//g

You may have a problem with time zones, depending on the zone of the Splunk server and the one in the data. Ideally, the time zone should be specified as part of the timestamp rather than as a separate element. The time zone should be a recognized abbreviation such as "CST" or an offset such as "-0600". BTW, Central Daylight Time is not in effect in November.
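As a quick illustration of what LINE_BREAKER and the timestamp settings do (a Python sketch of the behavior, not Splunk's implementation; the two sample lines are taken from the log file in this thread):

```python
import re
from datetime import datetime

raw = (
    "library!WindowsService_98!1234!11/26/2023-00:00:01:: i INFO: Call to CleanBatch()\n"
    "library!WindowsService_98!1234!11/26/2023-00:00:01:: i INFO: Call to CleanBatch() ends"
)

# LINE_BREAKER = ([\r\n]+)library!  -- Splunk breaks at the newline run and
# discards the capture group, so each event begins with "library!".
# A lookahead split keeps that prefix attached to each event.
events = re.split(r"[\r\n]+(?=library!)", raw)

# TIME_PREFIX = \d! anchors just before the date; TIME_FORMAT then parses it.
for ev in events:
    m = re.search(r"\d!(\d{2}/\d{2}/\d{4}-\d{2}:\d{2}:\d{2})", ev)
    ts = datetime.strptime(m.group(1), "%m/%d/%Y-%H:%M:%S")
    print(ts)  # 2023-11-26 00:00:01 for both sample events
```

Note the sketch only mimics the breaking and parsing; MAX_TIMESTAMP_LOOKAHEAD simply caps how far past TIME_PREFIX Splunk scans for the timestamp.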
Hi Splunk Community, I am trying to create a props.conf for the sample log file below. My goals are to:
* Delete the Header tag and keep its contents from being ingested.
* Break the individual events at lines starting with "library!WindowsService_98!..." OR "processing!ReportServer_0-127!...".
* Extract the timestamp, e.g. "!11/26/2023-00:21:18::".

Here is the props.conf I have so far, but it is not working.
---------
[sourcetype_name]
disabled = false
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 80
TIME_FORMAT = %m/%d/%Y-%H:%M:%S
LINE_BREAKER = ([\r\n]+)library!
SEDCMD-null = (<Header>([\s\S]*?)<\/Header>)
---------
------------------- sample log file -------------------------
<Header>
<Product>Microsoft SQL Server Reporting Services Version 2007.0100.6000.029 ((Random_value).18802-2848 )</Product>
<Locale>English (United States)</Locale>
<TimeZone>Central Daylight Time</TimeZone>
<Path>C:\Program Files\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\Logfiles\ReportServerService__11_26_2023_00_00_01.log</Path>
<SystemName>hostName01</SystemName>
<OSName>Microsoft Windows NT 6.2.9200</OSName>
<OSVersion>6.2.9200</OSVersion>
<ProcessID>3088</ProcessID>
</Header>library!WindowsService_98!1234!11/26/2023-00:00:01:: i INFO: Call to CleanBatch()
library!WindowsService_98!1234!11/26/2023-00:00:01:: i INFO: Cleaned 0 batch records, 0 policies, 0 sessions, 0 cache entries, 0 snapshots, 0 chunks, 0 running jobs, 0 persisted streams, 0 segments, 0 segment mappings.
library!WindowsService_98!1234!11/26/2023-00:00:01:: i INFO: Call to CleanBatch() ends
library!WindowsService_98!1218!11/26/2023-00:10:01:: i INFO: Call to CleanBatch()
library!WindowsService_98!1218!11/26/2023-00:10:01:: i INFO: Cleaned 0 batch records, 0 policies, 1 sessions, 0 cache entries, 1 snapshots, 14 chunks, 0 running jobs, 0 persisted streams, 9 segments, 9 segment mappings.
library!WindowsService_98!1218!11/26/2023-00:10:01:: i INFO: Call to CleanBatch() ends
library!WindowsService_98!d00!11/26/2023-00:20:01:: i INFO: Call to CleanBatch()
library!WindowsService_98!d00!11/26/2023-00:20:01:: i INFO: Cleaned 0 batch records, 0 policies, 0 sessions, 0 cache entries, 0 snapshots, 0 chunks, 0 running jobs, 0 persisted streams, 0 segments, 0 segment mappings.
library!WindowsService_98!d00!11/26/2023-00:20:01:: i INFO: Call to CleanBatch() ends
library!ReportServer_0-127!2558!11/26/2023-00:21:18:: i INFO: RenderForNewSession('/Hampton.Common.Reports/BOL')
processing!ReportServer_0-127!2558!11/26/2023-00:21:18:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 19., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 19.
processing!ReportServer_0-127!2558!11/26/2023-00:21:18:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 54., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 54.
processing!ReportServer_0-127!2558!11/26/2023-00:21:18:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 61., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 61.
processing!ReportServer_0-127!2558!11/26/2023-00:21:18:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 62., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 62.
processing!ReportServer_0-127!2558!11/26/2023-00:21:19:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 1., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 1.
processing!ReportServer_0-127!2558!11/26/2023-00:21:19:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 2., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 2.
processing!ReportServer_0-127!2558!11/26/2023-00:21:19:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 1., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 1.
processing!ReportServer_0-127!2558!11/26/2023-00:21:19:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 2., ; Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: There is no data for the field at position 2.
library!WindowsService_98!1234!11/26/2023-00:30:01:: i INFO: Call to CleanBatch()
------------------- sample log file end -------------------------
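For the first goal, removing the Header block, the SEDCMD needs an actual s command rather than a bare regex. A small Python sketch of what a working substitution does (an illustration of the regex, not Splunk's SEDCMD engine; the sample string is a shortened stand-in for the file above):

```python
import re

# Shortened stand-in for the sample log file's header and first event.
sample = (
    "<Header>\n"
    "<Product>Microsoft SQL Server Reporting Services</Product>\n"
    "<TimeZone>Central Daylight Time</TimeZone>\n"
    "</Header>"
    "library!WindowsService_98!1234!11/26/2023-00:00:01:: i INFO: Call to CleanBatch()"
)

# Equivalent of SEDCMD-null = s/\<Header>[\s\S]*?\<\/Header>//g :
# a non-greedy match deletes everything from <Header> through </Header>.
cleaned = re.sub(r"<Header>[\s\S]*?</Header>", "", sample)
print(cleaned)  # remaining text starts at "library!"
```

The non-greedy [\s\S]*? matters: a greedy match could swallow everything up to the last </Header> in a file that contained more than one header block.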