All Topics


ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - EvtDC::connectToDC: DsBind failed: (5)

We have 22 out of 3000+ hosts sending thousands of errors for this and I can't seem to figure out why. My best guess at this point is the forwarders need to be updated. We have a distributed environment with multiple DCs. Any idea if I'm doing something wrong on my end, or do I need to have these forwarders that are causing errors fixed? I have things set up as follows:

All Windows hosts, Universal Forwarder inputs.conf:

[default]
evt_resolve_ad_obj = 0

Domain Controller UF inputs:

[admon://DefaultTargetDC]
targetDc = 'DC02'
startingNode = LDAP://OU=Computers,DC=ad
index = msad
monitorSubtree = 1
disabled = 0
baseline = 0
evt_resolve_ad_obj = 1

[admon://SecondTargetDC]
targetDc = 'DC03'
startingNode = LDAP://OU=Computers,DC=ad
index = msad
monitorSubtree = 1
disabled = 1
baseline = 0
evt_resolve_ad_obj = 0

[admon://ThirdTargetDC]
targetDc = 'DC01'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://FourthTargetDC]
targetDc = 'DC02'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://FifthTargetDC]
targetDc = 'DC01'
startingNode = LDAP://OU=Computers,DC=adu
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://FifthTargetDC]
targetDc = 'DC01dev'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://SixthTargetDC]
targetDc = 'DC04'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://SeventhTargetDC]
targetDc = 'DC05'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://EighthTargetDC]
targetDc = 'DC06'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://NearestDC]
disabled = 1
baseline = 0
evt_resolve_ad_obj = 0
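For what it's worth, one thing I am considering checking (sketched below, not confirmed as the cause) is whether another app on the affected forwarders, for example a Windows TA, defines the WinEventLog stanzas and explicitly sets evt_resolve_ad_obj = 1 there, since a per-stanza value would override my [default] setting. Stanza names here are assumptions, not taken from these hosts:

REM Hypothetical check on an affected forwarder (default install path assumed)
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list --debug | findstr evt_resolve_ad_obj

# Hypothetical per-stanza override in a local inputs.conf on the affected hosts
[WinEventLog://Security]
evt_resolve_ad_obj = 0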
Mission Control (MC), currently in Preview, is a security operations application from Splunk Security. When all features are released, it will unify capabilities from Splunk Core, Enterprise Security, SOAR, and Threat Intelligence Management.  This release of Mission Control, known as “Preview 1”, provides the initial infrastructure for the app, the ingestion framework to move ES incidents into MC, and the framework for response capabilities via a feature called Response Templates.  It is a great way to see how the product is shaping up and what the fundamentals of Mission Control look like! Check out the docs here for more info! https://docs.splunk.com/Documentation/MC/Preview/   Note that the app is currently in an early release preview. 
I have a distributed environment with a search head (with deployment server) and indexer. I SSH into the search head, create the app to be deployed, configure inputs.conf with a barebones config (using vi/vim), and deploy it to a universal forwarder set up on a Windows server. It deploys the app, but when I view the deployed inputs.conf on the Windows server, all line breaks have been removed and it is all compressed into a single line. I tried using the command :set enc=utf8 in vim, but it had no effect. Any thoughts on what I can do to get the config into a Windows-recognizable format? Thanks!
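If the cause turns out to be Unix (LF-only) line endings, which older Windows tools such as Notepad render as one long line, I assume something like the following in vim before deploying would address it (Splunk itself generally parses LF-terminated .conf files fine, so this may only affect how the file displays on the Windows host):

" convert the buffer to DOS (CRLF) line endings, then save
:set fileformat=dos
:wq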
I have created a custom add-on using the Splunk Add-on Builder app, which is running in an on-premise instance of Splunk Enterprise. The add-on utilizes a few REST API data input configurations that make calls to another vendor's product and pulls back specific data we're interested in. When I test the input inside the test pane of the add-on builder, it returns all expected events. I can also test the same REST API call outside of Splunk and it similarly returns all expected events. When I package and upload the add-on to our Splunk Cloud instance, however, the same data input only pulls back 60 events instead of the full amount (~250). Other data inputs within the add-on that are hitting the same REST API are able to pull back more than 60 events, so the limitation appears to be exclusive to this one data input, which again, does not have the same limit when run in the add-on builder or outside of Splunk entirely. Does anyone know why there would be a difference in behavior when run in our Cloud environment or where I might be able to find logs to help me answer that question?
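So far the only place I have thought to look is the input's own logging in the _internal index; a rough sketch of that search, with the add-on name as a placeholder:

index=_internal source=*your_add_on_name* (ERROR OR WARN)
| table _time host source _raw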
Hello, I have created a datamodel which I have accelerated, containing two sourcetypes. The goal is to add a field from one sourcetype into the primary results. The challenge I have been having is returning all the data from the Vulnerability sourcetype, which contains over 400K events. I have attempted several different searches, none of which return the results I'm expecting. Basically I want to join the two tstats below without using join, due to its limitations. The vulnerabilities should include the hostname and operating system. Any help would be appreciated.

| tstats summariesonly=f
    ``` latest(tenable.plugin_name) as "Vulnerability", ```
    values(tenable.first_found) as "First Found",
    values(tenable.last_found) as "Last Found",
    values(tenable.risk_factor) as "Risk Factor",
    values(tenable.in_the_news) as "In The News?",
    values(tenable.vpr_score) as "VPR",
    values(tenable.solution) as "Solution",
    values(tenable.see_also) as "See Also",
    values(tenable.state) as "State"
    values(tenable.exploitability_ease) as "Exploitable Ease",
    values(tenable.exploit_available) as "Exploit Available"
    values(tenable.ip) as IP
    ``` latest(tenable.asset_hostname) as hostname```
    FROM datamodel="VulnMgt.tenable", WHERE sourcetype="tenable:io:vuln"
    by tenable.asset_uuid tenable.asset_hostname tenable.plugin_name
| tstats summariesonly=t prestats=t append=t
    values(devices.operating_systems) as OS
    FROM datamodel="VulnMgt.tenable", WHERE sourcetype="tenable:io:assets"
    by tenable.asset_uuid tenable.hostnames
| stats latest(*) as * count as total by tenable.asset_uuid
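For context, the rough shape I am aiming for, written here as a sketch with only a couple of fields and with the field names assumed from the model above, is two tstats legs stitched together on the shared asset key:

| tstats summariesonly=f values(tenable.plugin_name) as Vulnerability
    FROM datamodel="VulnMgt.tenable" WHERE sourcetype="tenable:io:vuln"
    by tenable.asset_uuid
| tstats summariesonly=f append=t values(tenable.operating_systems) as OS
    FROM datamodel="VulnMgt.tenable" WHERE sourcetype="tenable:io:assets"
    by tenable.asset_uuid
| stats values(Vulnerability) as Vulnerability values(OS) as OS by tenable.asset_uuid

I am not sure whether this is the right approach for 400K+ events, which is part of the question.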
Hi, Is it possible to extract one common field if we have two sourcetypes whose source paths are also different, but the index is the same? Example: sourcetype abc with source path /home/mysqld/$DB_NAME/audit/audit.log; sourcetype xyz with source path /mydata/log/$DB_NAME/audit/audit.log. I need to have DBname extracted. Is that possible to get via regex, and if yes, what would the regex be? Also, if not, can I make the sourcetype one with the 2 different source paths /home/mysqld/$DB_NAME/audit/audit.log and /mydata/log/$DB_NAME/audit/audit.log and then extract DBname from it via regex?
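For illustration, the kind of extraction I have in mind, assuming the database name is always the directory immediately before /audit/ in the source path:

... | rex field=source "(?:/home/mysqld|/mydata/log)/(?<DBName>[^/]+)/audit/audit\.log$"

If something like this is sound, I assume the same pattern could also be set up as an automatic search-time extraction against the source field in props.conf, but I have not tried that.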
I am new to Splunk so please forgive me for what I do not know. We are getting events with start=1661359208771 and need to convert it to a readable timestamp. I have tried changing the Timestamp format and prefix below, with no luck. Any suggestions?
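To show what I mean, this is the kind of output I am after, assuming start is epoch time in milliseconds (which the 13 digits suggest); purely a sketch:

| eval start_readable=strftime(start/1000, "%Y-%m-%d %H:%M:%S")

And if the event timestamp itself is supposed to come from this field, I assume the props.conf side would look roughly like this (sourcetype name is a placeholder):

[your_sourcetype]
TIME_PREFIX = start=
TIME_FORMAT = %s%3N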
Hi all! I'm trying to create a timechart showing only the graph bars where the number of events is 2x the number of events from the previous 10 minutes. E.g. if I have 10,000 events from 10:10 AM to 10:20 AM, 30,000 from 10:20 AM to 10:30 AM, and then 35,000 from 10:30 AM to 10:40 AM, I want the timechart to show only the bar for the 10:20-10:30 period, which is where the surge happened. Hope that makes sense, thanks in advance!
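For reference, the closest I have gotten is a sketch along these lines (the base search is a placeholder), which keeps only the 10-minute buckets that are at least double the previous bucket, though I am not sure it is the best way:

index=your_index
| timechart span=10m count
| streamstats window=1 current=f last(count) as prev_count
| where prev_count>0 AND count>=2*prev_count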
Good afternoon! We receive messages in Splunk. The task is as follows: there is a time interval between the first message and the second, and another between the second and the third. We need to somehow calculate the delta between these intervals and display it on a dashboard. Is this possible, and how can I do it? Is there a rough example? Unfortunately, I have not worked with Splunk at all before, so I don't even know where to start. If you need to ask clarifying questions, I'm ready to answer.
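A rough, hedged sketch of the kind of search that might drive such a panel (index and sourcetype are placeholders, and this assumes the messages can simply be sorted by _time): compute the interval to the previous message, then the change from one interval to the next:

index=your_index sourcetype=your_sourcetype
| sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time
| eval interval=_time-prev_time
| delta interval as interval_delta
| table _time interval interval_delta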
Hello, I would like to create multiple new custom data source categories to use in a Partner Integration app on Splunk Security Essentials. I have already read this documentation and was able to create a single new custom data source category. However, when trying to create multiple custom data source categories by changing the "company_name" of other security contents, there were no updates to the existing data source categories; they were not created, and only the first data source category that I had created continued to appear. Finally, I noticed the following snippet in the SSE documentation, in the "Populating Data Inventory" section: "[...] it will take any detections that have a create_data_inventory=true configuration. For the first piece of content that it finds, it will add a new item to data_inventory output [...]". That made me wonder whether the app is really programmed to create only the first new data source category it finds, and not the others. So I have the following questions: 1. Is it possible to create multiple new custom data source categories? 2. How could I create them?
Hello, "The ingestion certificates on xxxx Splunk Cloud environment xxx Universal Forwarder certificate package, will be expiring on x/xx/2022. In order to ensure that ingestion is not disrupted, w... See more...
Hello, "The ingestion certificates on xxxx Splunk Cloud environment xxx Universal Forwarder certificate package, will be expiring on x/xx/2022. In order to ensure that ingestion is not disrupted, we have rolled out an updated Universal Forwarder (UF) package to your customer’s Splunk Cloud Platform stack. The operational contacts have been informed of this information via xxxx. They will need to install this updated package on all forwarders connecting to their Splunk Cloud Stack as soon as possible. We are asking you to please reach out to your customer and verify they are aware that they are responsible for rolling out this package and should do so immediately." I have received a message from splunk and I would like you to please confirm if what I must do is related to this link https://docs.splunk.com/Documentation/Forwarder/9.0.1/Forwarder/ConfigSCUFCredentials?ref=hk#HowtoforwarddatatoSplunkCloud#How_to_forward_data_to_Splunk_Cloud      
We are trying to audit/monitor administrative activity in Splunk. Are there any canned dashboards or searches that can be used to monitor/review elevated-privilege activity? How do we monitor change management on Splunk itself?
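As a hedged starting point rather than a canned dashboard, the built-in _audit index (enabled by default) records a user and action field for many administrative operations, so a rough review search could look like this:

index=_audit NOT action=search
| stats count by user, action
| sort - count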
Hi All, We are generating a log that records the in and out timestamps (in epoch) for a specific set of transactions, and we have been doing this for a while. In order to test this API, we run our load test at specific times for 1 hour, and that generates logs with transaction IDs, keywords, and the in and out timestamps etc.; see the sample below. We are querying this data and calculating duration. Is there a way in Splunk to compare and find the delta of the duration from previous runs? Every run has specific timestamps, and we were adding them in the SPL itself, like earliest="08/23/2022:20:45:00" latest="08/23/2022:21:55:00"

=============================================================
sample log

2022/08/23 21:54:38,918 INFO [XXXX.CPU_LITE @67166e0a] [LoggerMessageProcessor ] [ ] [ ] [] - End Workflow: flow1 | LogID: 104 |{ "Trans-Id": "cf18655a-5d1a-4867-b500-c4ba5bee9333", "AppId": "somepapi" } | OutTimestamp : 1661306078918

2022/08/23 21:54:37,819 INFO [XXXX.CPU_INTENSIVE @2c86def1] [LoggerMessageProcessor ] [ ] [ ] [] - Start Workflow: flow1 | LogID: 104 |{ "Trans-Id": "cf18655a-5d1a-4867-b500-c4ba5bee9333", "AppId":"somepapi" } | InTimestamp : 1661306077819
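As a concrete illustration of the question, a rough sketch (index is a placeholder, field names are taken from the sample log) that computes duration per Trans-Id, buckets runs by hour, and compares each run's average with the previous one; the open question is whether there is a cleaner way than hard-coding each run's time window:

index=your_index ("Start Workflow" OR "End Workflow")
| rex "\"Trans-Id\": \"(?<trans_id>[^\"]+)\""
| rex "(?:InTimestamp|OutTimestamp) : (?<ts_ms>\d+)"
| bin span=1h _time as run
| stats min(ts_ms) as in_ms max(ts_ms) as out_ms by run, trans_id
| eval duration_ms=out_ms-in_ms
| stats avg(duration_ms) as avg_duration_ms by run
| sort 0 run
| delta avg_duration_ms as delta_vs_previous_run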
Hi, There is an application which is used by multiple teams, and we are ingesting the application logs for each team into a single index. We want to restrict access so that each team can see only its own team's logs, not all the data in the index. How do I implement this in Splunk? Thanks in advance. Gowtham
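One common pattern, sketched here with made-up role, index, and field names, is a per-team role in authorize.conf whose search filter restricts every search that role runs; this assumes a field such as team exists on the events or can be derived at search time:

[role_team_alpha]
importRoles = user
srchIndexesAllowed = your_app_index
srchFilter = team=alpha

The alternative of splitting the data into one index per team at ingest time is usually simpler to reason about, at the cost of changing the ingestion setup.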
Loading the Configuration page from the Splunk_TA_snow ServiceNow TA yields the following error: Something went wrong! Failed to load current state for selected entity in form! Error: Request failed with status code 500 ERR0005   Similarly for the inputs page yields the following error: Failed to load Inputs Page This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page. Error: Request failed with status code 500 ERR0001   The troubleshooting documentation only mentions something about using the admin  account, which we are: https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/Troubleshooting#Cannot_access_configuration_page SNOW internal logs only show this error:       index=_internal sourcetype="ta_snow" 2022-08-24 13:52:12,894 INFO pid=[xxx] tid=MainThread file=snow.py:stream_events:430 | No configured inputs found. To collect data from ServiceNow, configure new input(s) or update existing input(s) either from Inputs page of the Add-on or manually from inputs.conf.     Splunkd errors show:       requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) raise ConnectionError(err, request=request) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/adapters.py", line 501, in send r = adapter.send(request, **kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/sessions.py", line 645, in send resp = self.send(prep, **send_kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/sessions.py", line 529, in request return session.request(method=method, url=url, **kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/api.py", line 61, in request **kwargs, File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/splunk_rest_client.py", line 147, in request response = self.handler(url, message, **kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 1279, in request return self.request(url, { 'method': "GET", 'headers': headers }) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 1219, in get response = self.http.get(path, all_headers, **query) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 685, in get val = f(*args, **kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 70, in new_f return request_fun(self, *args, **kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 289, in wrapper **query) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/client.py", line 795, in get return super(Collection, self).get(name, owner, app, sharing, **query) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/client.py", line 1702, in get response = self.get(key) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/client.py", line 1740, in __getitem__ conf = self._confs[name] File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/conf_manager.py", line 457, in get_conf return func(*args, **kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/utils.py", line 153, in wrapper Traceback (most recent call last): During handling of the above exception, another exception occurred: urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) return self._sock.recv_into(b) File "/opt/splunk/lib/python3.7/socket.py", line 589, in readinto line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") File "/opt/splunk/lib/python3.7/http/client.py", 
line 280, in _read_status version, status, reason = self._read_status() File "/opt/splunk/lib/python3.7/http/client.py", line 319, in begin response.begin() File "/opt/splunk/lib/python3.7/http/client.py", line 1373, in getresponse httplib_response = conn.getresponse() File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 444, in _make_request File "<string>", line 3, in raise_from six.raise_from(e, None) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 449, in _make_request chunked=chunked, File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 710, in urlopen raise value.with_traceback(tb) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/packages/six.py", line 769, in reraise raise six.reraise(type(error), error, _stacktrace) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/util/retry.py", line 550, in increment File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 786, in urlopen timeout=timeout File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/adapters.py", line 450, in send Traceback (most recent call last): During handling of the above exception, another exception occurred: ConnectionResetError: [Errno 104] Connection reset by peer return self._sock.recv_into(b) File "/opt/splunk/lib/python3.7/socket.py", line 589, in readinto line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") File "/opt/splunk/lib/python3.7/http/client.py", line 280, in _read_status version, status, reason = self._read_status() File "/opt/splunk/lib/python3.7/http/client.py", line 319, in begin response.begin() File "/opt/splunk/lib/python3.7/http/client.py", line 1373, in getresponse httplib_response = conn.getresponse() File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 444, in _make_request File "<string>", line 3, in raise_from six.raise_from(e, None) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 449, in _make_request chunked=chunked, File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 710, in urlopen WARNING:root:Run function: get_conf failed: Traceback (most recent call last): requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) raise ConnectionError(err, request=request) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/adapters.py", line 501, in send r = adapter.send(request, **kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/sessions.py", line 645, in send resp = self.send(prep, **send_kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/sessions.py", line 529, in request return session.request(method=method, url=url, **kwargs) File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/api.py", line 61, in request **kwargs, File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/splunk_rest_client.py", line 147, in request Traceback (most recent call last): During handling of the above exception, another exception occurred: urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) return self._sock.recv_into(b) File "/opt/splunk/lib/python3.7/socket.py", line 589, in readinto line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") File "/opt/splunk/lib/python3.7/http/client.py", line 280, in _read_status version, status, reason = self._read_status() File "/opt/splunk/lib/python3.7/http/client.py", line 319, in begin response.begin() File 
"/opt/splunk/lib/python3.7/http/client.py", line 1373, in getresponse httplib_response = conn.getresponse()       How do we troubleshoot and solve this?
Hi, On two deployment servers I have the issue that the KV Store migration partly fails because the KV Store version can't be upgraded.

"/opt/splunk/bin/splunk migrate migrate-kvstore" is interrupted:

[App Key Value Store migration] Starting migrate-kvstore.
[App Key Value Store migration] Checking if migration is needed. Upgrade type 2. This can take up to 600seconds.
WARN: [App Key Value Store migration] Service(42) terminated before the service availability check could complete. Exit code 1, waited for 0 seconds.
App Key Value Store migration failed, check the migration log for details. After you have addressed the cause of the service failure, run the migration again, otherwise App Key Value Store won’t function.

In the splunkd logs of the server you afterwards find the following ERROR message:

08-24-2022 15:57:45.252 +0200 ERROR KVStoreBulletinBoardManager [4946 MainThread] - Failed to upgrade KV Store to the latest version. KV Store is running an old version, service(40). Resolve upgrade errors and try to upgrade KV Store to the latest version again.

I assume it has something to do with the OS, which is SUSE SLES 11.4, since we don't run into the problem on later RHEL systems. Has anybody experienced the same?
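Not a fix, but before re-running the migration it may help to capture the KV Store / mongod state; both of these are standard, with the install path assumed to match the output above:

# Report the KV Store status and version as splunkd sees it
/opt/splunk/bin/splunk show kvstore-status

# mongod's own log often contains the underlying startup error
tail -n 100 /opt/splunk/var/log/splunk/mongod.log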
One of our top customers using our add-on app is facing an issue related to delays in the indexing of events. We can reproduce the issue in our local setup as well. The delay is between 170,000 secs and 250,000 secs (2-3 days). We are using the following expression for getting the specific events:

index="druva"
| eval inSyncDataSourceName=upper(inSyncDataSourceName)
| eval Epoch_Last_Backed_Up=strptime(Last_Backed_Up, "%b %d %Y %H:%M")
| eval Days=round(((_time - Epoch_Last_Backed_Up)/86400),0)
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M")
| eval lag_sec=_indextime-_time
| table timestamp _time indextime lag_sec severity event_type Alert Alert_Description Last_Backed_Up Days eventDetails clientOS clientVersion inSyncDataSourceName inSyncUserEmail inSyncUserName profileName

I tried the steps mentioned here - https://docs.splunk.com/Documentation/Splunk/9.0.0/Troubleshooting/Troubleshootingeventsindexingdelay - and setting maxKBps to zero, but the issue still persists. Could you please suggest how we can address this issue? Appreciate your inputs.
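One hedged way to narrow down where the delay is introduced is to chart the lag per host over time; a roughly constant lag usually points at the event timestamps themselves (the source handing over old data), while a steadily growing lag usually points at queuing or throughput on the collection side. A sketch:

index="druva"
| eval lag_sec=_indextime-_time
| bin span=1h _time
| stats avg(lag_sec) as avg_lag max(lag_sec) as max_lag count by _time, host
| sort 0 _time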
Hi all, I want to update an old Splunk environment on Windows Server 2016 Standard. The update fails and the error log (%TEMP%/splunk.log) looks like this:

C:\Windows\system32\cmd.exe /c ""C:\Program Files\Splunk\bin\splunk.exe" enable boot-start-loop --answer-yes --no-prompt --accept-license >> "C:\Users\Admin\AppData\Local\Temp\splunk.log" 2>&1"
Service Splunkd install error (CreateService): The specified service already exists.
Installing service Splunkd
Error running "splunkd install": error 1
C:\Windows\system32\cmd.exe /c ""C:\Program Files\Splunk\bin\splunk.exe" start --answer-yes --no-prompt --accept-license --auto-ports >> "C:\Users\Admin\AppData\Local\Temp\splunk.log" 2>&1"

Splunk> Now with more code!

Checking prerequisites...
Checking http port [8443]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Validated: _audit _internal _introspection _metrics _telemetry _thefishbucket cim_modactions history main summary synology westermo wineventlog
Done
Invalid key in stanza [install] in C:\Program Files\Splunk\etc\apps\dnslookup\default\app.conf, line 10: author (value: Travis Freeland).
Invalid key in stanza [install] in C:\Program Files\Splunk\etc\apps\dnslookup\default\app.conf, line 11: description (value: dnslookup <forward|reverse> <input field> <outputfield>, servicelookup <input field> <output field> <optional services file path>).
Invalid key in stanza [settings] in C:\Program Files\Splunk\etc\system\local\web.conf, line 365: engine.autoreload_on (value: False).
Invalid key in stanza [framework] in C:\Program Files\Splunk\etc\system\local\web.conf, line 493: django_enable (value: True).
Invalid key in stanza [framework] in C:\Program Files\Splunk\etc\system\local\web.conf, line 496: django_path (value: etc/apps/framework).
Invalid key in stanza [framework] in C:\Program Files\Splunk\etc\system\local\web.conf, line 499: django_force_enable (value: False).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Checking filesystem compatibility... Done
Checking conf files for problems... Done
Checking default conf files for edits...
Validating installed files against hashes from 'C:\Program Files\Splunk\splunk-8.0.0-1357bef0a7f6-windows-64-manifest'
Error initializing openssl -- cannot compute checksums.
Error encountered while attempting to validate files
Problems were found, please review your files and move customizations to local
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
Splunkd: Starting (pid 2636)
Done

For me it looks like openssl does not work properly and is cancelling the update, but I'm not sure about it. Do you have any ideas on what the problem is and what I need to do to make the update work? Cheers, lukrator
The issue: a file that is being monitored was ingested again via batch. The back story is not critical; we know what happened and it shouldn't happen again. We now have duplicates in our index going back to 03/16/2021. The user of this data wants the duplicates removed. I have looked at solutions for removing duplicates, and with the amount of data involved they would be very time consuming. The user asks: can we remove all the data based on index, host, sourcetype, and source, and then reload the data? My process would be (for each file being monitored):

1) Turn off monitoring of the file.
2) Remove the matching data.
3) Turn monitoring of the file back on.

When monitoring is turned back on, will it ingest the entire file the first time it is updated? I am open to other solutions to this as well. Thank you!
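For reference, a sketch of what step 2 might look like; the delete command requires the can_delete capability, only masks events from search rather than freeing disk space, and is irreversible, so the filter would be tested on its own before adding the final pipe (all values below are placeholders):

index=your_index host=your_host sourcetype=your_sourcetype source="/path/to/monitored/file.log"
| delete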