All Topics


Hi, there is an application that is used by multiple teams, and we ingest its logs for all teams into a single index. We want to restrict each team so that its members can access only their own team's logs, not all the data in the index. How do I implement this in Splunk? Thanks in advance. Gowtham
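A common pattern for this (a sketch, not from the post: it assumes each event carries a field such as team that identifies the owning team, and an index name of app_logs, both placeholders) is one role per team with a search filter in authorize.conf:

[role_team_alpha]
srchIndexesAllowed = app_logs
srchIndexesDefault = app_logs
srchFilter = team=alpha

[role_team_beta]
srchIndexesAllowed = app_logs
srchIndexesDefault = app_logs
srchFilter = team=beta

Since srchFilter only constrains searches and relies on the team field being trustworthy, splitting the data into one index per team is the more robust alternative where that is feasible.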
Loading the Configuration page from the Splunk_TA_snow ServiceNow TA yields the following error:

Something went wrong!
Failed to load current state for selected entity in form!
Error: Request failed with status code 500 ERR0005

Similarly, the Inputs page yields the following error:

Failed to load Inputs Page
This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page.
Error: Request failed with status code 500 ERR0001

The troubleshooting documentation only mentions something about using the admin account, which we are: https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/Troubleshooting#Cannot_access_configuration_page

SNOW internal logs (index=_internal sourcetype="ta_snow") only show this:

2022-08-24 13:52:12,894 INFO pid=[xxx] tid=MainThread file=snow.py:stream_events:430 | No configured inputs found. To collect data from ServiceNow, configure new input(s) or update existing input(s) either from Inputs page of the Add-on or manually from inputs.conf.

Splunkd errors show the chained traceback below; the same traceback is logged a second time prefixed with "WARNING:root:Run function: get_conf failed:":

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 710, in urlopen
    chunked=chunked,
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/opt/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/opt/splunk/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/opt/splunk/lib/python3.7/http/client.py", line 280, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/opt/splunk/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/adapters.py", line 450, in send
    timeout=timeout
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 786, in urlopen
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/packages/six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 710, in urlopen
    chunked=chunked,
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/opt/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/opt/splunk/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/opt/splunk/lib/python3.7/http/client.py", line 280, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/opt/splunk/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/utils.py", line 153, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/conf_manager.py", line 457, in get_conf
    conf = self._confs[name]
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/client.py", line 1740, in __getitem__
    response = self.get(key)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/client.py", line 1702, in get
    return super(Collection, self).get(name, owner, app, sharing, **query)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/client.py", line 795, in get
    **query)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 70, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 685, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 1219, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/splunklib/binding.py", line 1279, in request
    response = self.handler(url, message, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/solnlib/splunk_rest_client.py", line 147, in request
    **kwargs,
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/sessions.py", line 529, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/sessions.py", line 645, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/lib/requests/adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

How do we troubleshoot and solve this?
Hi, on two deployment servers I have the issue that the KV Store migration partly fails because the KV Store version can't be upgraded.

"/opt/splunk/bin/splunk migrate migrate-kvstore" is interrupted:

[App Key Value Store migration] Starting migrate-kvstore.
[App Key Value Store migration] Checking if migration is needed. Upgrade type 2. This can take up to 600 seconds.
WARN: [App Key Value Store migration] Service(42) terminated before the service availability check could complete. Exit code 1, waited for 0 seconds.
App Key Value Store migration failed, check the migration log for details. After you have addressed the cause of the service failure, run the migration again, otherwise App Key Value Store won't function.

In the splunkd logs of the server you afterwards find the following ERROR message:

08-24-2022 15:57:45.252 +0200 ERROR KVStoreBulletinBoardManager [4946 MainThread] - Failed to upgrade KV Store to the latest version. KV Store is running an old version, service(40). Resolve upgrade errors and try to upgrade KV Store to the latest version again.

I assume it has something to do with the OS, which is SUSE SLES 11.4, since we don't run into the problem on later RHEL systems. Has somebody experienced the same?
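Before rerunning the migration, it may help to capture the KV store's own view of its state and the mongod log (standard commands and paths, offered as a checklist rather than a fix):

/opt/splunk/bin/splunk show kvstore-status
tail -n 50 /opt/splunk/var/log/splunk/mongod.log

On very old kernel/glibc combinations such as SLES 11.4, the newer bundled mongod can fail to start at all, which would match service(42) exiting immediately; if that is the case, mongod.log usually says so explicitly.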
One of our top customers using our add-on app is facing an issue with delayed indexing of events, and we can reproduce the issue in our local setup as well. The delay is between 170,000 and 250,000 seconds (2-3 days). We are using the following search to get the specific events:

index="druva"
| eval inSyncDataSourceName=upper(inSyncDataSourceName)
| eval Epoch_Last_Backed_Up=strptime(Last_Backed_Up, "%b %d %Y %H:%M")
| eval Days=round(((_time - Epoch_Last_Backed_Up)/86400),0)
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M")
| eval lag_sec=_indextime-_time
| table timestamp _time indextime lag_sec severity event_type Alert Alert_Description Last_Backed_Up Days eventDetails clientOS clientVersion inSyncDataSourceName inSyncUserEmail inSyncUserName profileName

I tried the steps mentioned here: https://docs.splunk.com/Documentation/Splunk/9.0.0/Troubleshooting/Troubleshootingeventsindexingdelay, including setting maxKBps to zero (unlimited), but the issue still persists. Could you please suggest how we can address this issue? I appreciate your inputs.
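One way to narrow down where the lag builds up (a sketch reusing the same index as above) is to chart it over time instead of per event:

index="druva"
| eval lag_sec=_indextime-_time
| timechart span=1h median(lag_sec) AS median_lag max(lag_sec) AS max_lag

A median that climbs steadily between data pulls points at the input's own schedule or the source API, while sudden spikes point at forwarder throughput or indexer queues; the maxKBps limit only affects the latter.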
Hi together, I want to update an old Splunk environment on Windows Server 2016 Standard. The update fails and the error log (%TEMP%\splunk.log) looks like this:

C:\Windows\system32\cmd.exe /c ""C:\Program Files\Splunk\bin\splunk.exe" enable boot-start-loop --answer-yes --no-prompt --accept-license >> "C:\Users\Admin\AppData\Local\Temp\splunk.log" 2>&1"
Service Splunkd install error (CreateService): The specified service already exists.
Installing service Splunkd
Error running "splunkd install": error 1
C:\Windows\system32\cmd.exe /c ""C:\Program Files\Splunk\bin\splunk.exe" start --answer-yes --no-prompt --accept-license --auto-ports >> "C:\Users\Admin\AppData\Local\Temp\splunk.log" 2>&1"
Splunk> Now with more code!
Checking prerequisites...
Checking http port [8443]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Validated: _audit _internal _introspection _metrics _telemetry _thefishbucket cim_modactions history main summary synology westermo wineventlog
Done
Invalid key in stanza [install] in C:\Program Files\Splunk\etc\apps\dnslookup\default\app.conf, line 10: author (value: Travis Freeland).
Invalid key in stanza [install] in C:\Program Files\Splunk\etc\apps\dnslookup\default\app.conf, line 11: description (value: dnslookup <forward|reverse> <input field> <outputfield>, servicelookup <input field> <output field> <optional services file path>).
Invalid key in stanza [settings] in C:\Program Files\Splunk\etc\system\local\web.conf, line 365: engine.autoreload_on (value: False).
Invalid key in stanza [framework] in C:\Program Files\Splunk\etc\system\local\web.conf, line 493: django_enable (value: True).
Invalid key in stanza [framework] in C:\Program Files\Splunk\etc\system\local\web.conf, line 496: django_path (value: etc/apps/framework).
Invalid key in stanza [framework] in C:\Program Files\Splunk\etc\system\local\web.conf, line 499: django_force_enable (value: False).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Checking filesystem compatibility... Done
Checking conf files for problems... Done
Checking default conf files for edits...
Validating installed files against hashes from 'C:\Program Files\Splunk\splunk-8.0.0-1357bef0a7f6-windows-64-manifest'
Error initializing openssl -- cannot compute checksums.
Error encountered while attempting to validate files
Problems were found, please review your files and move customizations to local
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Splunkd: Starting (pid 2636)
Done

To me it looks like OpenSSL is not working properly and is cancelling the update, but I'm not sure about it. Do you have any ideas what the problem is and what I need to do to make the update work? Cheers, lukrator
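Independent of the OpenSSL checksum error, the "Invalid key" warnings come from settings that were removed in newer Splunk versions; assuming nothing else in those stanzas is still needed, the lines to delete from C:\Program Files\Splunk\etc\system\local\web.conf are exactly the ones the log names:

engine.autoreload_on = False
django_enable = True
django_path = etc/apps/framework
django_force_enable = False

Removing them cleans up the warnings, but "Error initializing openssl -- cannot compute checksums" is a separate problem and the more likely upgrade blocker.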
The issue: a file that is being monitored was ingested again via batch. The back story is not critical; we know what happened and it shouldn't happen again. We now have duplicates in our index going back to 03/16/2021, and the user of this data wants the duplicates removed. I have looked at solutions for removing duplicates, and with the amount of data involved they would be very time consuming. The user asks: can we remove all the data matching a given index, host, sourcetype, and source, and then reload it? My process would be (for each file being monitored):
1) Turn off monitoring of the file.
2) Remove the matching data.
3) Turn monitoring of the file back on.
When monitoring is turned back on, will it ingest the entire file the first time it is updated? I am open to other solutions as well. Thank you!
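For step 2, the usual mechanism is the delete command, which requires a role holding the can_delete capability (the values below are placeholders for each file):

index=myindex host=myhost sourcetype=mysourcetype source=/path/to/file
| delete

Two caveats: delete only masks events from search results, it does not free disk space; and re-enabling the monitor will not re-read the file, because the fishbucket still remembers the read offset, so only newly appended data is picked up unless that state is invalidated (for example with a changed crcSalt on the input).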
Hi, how can I transform a table so that the result would look something like this:
Last night I installed the UF onto a system hosting some Docker containers. I wanted to grab the log files without modifying the existing containers' config, so I created symlinks to the container logs (/var/lib/docker/containers/<name>) in /var/log, then set the stanzas in inputs.conf to look at those symlinks. I bounced the forwarder and waited about half an hour: nothing. While searching I found references to followSymlinks, so I added that to each stanza as 'true'. It's been ~7 hours and still nothing. What did I do wrong here?
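For comparison, a minimal stanza of the kind described might look like this (path and sourcetype are examples, not from the post):

[monitor:///var/log/mycontainer]
followSymlinks = true
sourcetype = docker:json

One thing worth checking, since followSymlinks already defaults to true: the account the UF runs as must be able to traverse /var/lib/docker/containers/<name>, which is root-only by default, so the symlink resolves but the target stays unreadable. The forwarder's own splunkd.log normally names the exact path it failed to open.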
Hi all, I want to write a search that gives me the total event count for each host as per the time range picker. Additionally, I want to add two more columns that give the event counts for each host in the last 7 days and the last 24 hours. My SPL is in this format:

index="xxx" field1="dummy_value" field2="dummy_value"
| stats sparkline(sum(event_count)) AS sparkline, max(_time) AS _time, sum(event_count) AS "Total_Event_Count" BY field2, field3, field4
| table field2, sparkline, field3, field4

I tried using the append command but it does not give me proper results, so I need your help to build the SPL. Thank you.
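One pattern that avoids append entirely (a sketch reusing the placeholder fields above) is to compute the extra windows with conditional evals in the same pass:

index="xxx" field1="dummy_value" field2="dummy_value"
| eval cnt_7d=if(_time>=relative_time(now(),"-7d"), event_count, 0)
| eval cnt_24h=if(_time>=relative_time(now(),"-24h"), event_count, 0)
| stats sparkline(sum(event_count)) AS sparkline, sum(event_count) AS Total_Event_Count, sum(cnt_7d) AS Last_7_Days, sum(cnt_24h) AS Last_24_Hours BY field2, field3, field4

This assumes the time range picker spans at least 7 days; if it is narrower, the two extra columns are silently clipped to the picked range.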
I have created a number of apps and push them out using the command line. In serverclass.conf I want to add a restart of the forwarders. This is what I have so far:

[default]
restartSplunkd = true
issueReload = true

[serverClass:duke_test_app:app:duke_test_app]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:duke_test_app]
whitelist.0 = xxxxxx

Will this work?
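For reference, a minimal sketch of the same intent (my reading of the serverclass.conf spec, worth verifying on a test class first): issueReload = true asks clients to attempt a lightweight reload instead of restarting, so having it in [default] may mask the restart being forced, and the restart flag is usually set at the app level:

[serverClass:duke_test_app:app:duke_test_app]
restartSplunkd = true
stateOnClient = enabled

[serverClass:duke_test_app]
whitelist.0 = xxxxxx

After editing, 'splunk reload deploy-server' pushes the change without restarting the deployment server itself.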
"user-info" index=user_interface_type sourcetype=*  | table _time, host, port, _raw | sendemail to="abc@splunk.com" sendresults=true I use above query to list out the details for the search "user... See more...
"user-info" index=user_interface_type sourcetype=*  | table _time, host, port, _raw | sendemail to="abc@splunk.com" sendresults=true I use above query to list out the details for the search "user-info" I want to use this string "user-info" and pass it on in the title of the e-mail as : Notification received for user-info How to do that ?  
We have a DSP k8s cluster. When creating a DSP connection, is there a way for me to set up an HTTP proxy for a data destination, i.e. send outputs to a destination over an HTTP proxy? There seem to be no such options in the UI. Maybe it can be achieved by adding HTTP proxy env variables to the container that is in charge of sending outgoing traffic, by editing the k8s Deployment, but I am not sure which Deployment I should change; or maybe those env variables should be added on the k8s master nodes...
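If the env-variable route is viable at all, the mechanics would look like this (purely a sketch: the namespace and deployment names are placeholders, and whether DSP's egress containers honor these variables is exactly the open question):

kubectl -n <dsp-namespace> set env deployment/<output-deployment> HTTP_PROXY=http://proxy.example.com:3128 HTTPS_PROXY=http://proxy.example.com:3128 NO_PROXY=localhost,127.0.0.1,.cluster.local

kubectl rolls the pods automatically after the env change; note that setting the variables on the master nodes themselves would not propagate into containers.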
I need to write a regular expression to extract a few fields from this, but I'm not able to figure it out. Can you please help me with it?

X-Response-Timestamp: 2022-08-24T07:27:26.150Z
x-amzn-Remapped-Connection: close
... 4 lines omitted ...
X-Amzn-Trace-Id: Root=1-6305d2de-69ec840431ff21182b4a9f68
Content-Type: application/json

{"code":"APS.MPI.2019","severity":"FATL","text":"Invalid Request","user_message":"Request id has already used."}

The above is the whole log. I need to extract code, severity, and the message, but I can't work out the format well enough to fetch them.
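Since the payload after the headers is JSON, one approach against the sample above is to isolate the JSON object with rex and hand it to spath:

... | rex field=_raw "(?<json_body>\{.*\})"
    | spath input=json_body

which produces code, severity, text, and user_message as fields. A regex-only alternative for the individual fields:

... | rex "\"code\":\"(?<code>[^\"]+)\"" | rex "\"severity\":\"(?<severity>[^\"]+)\"" | rex "\"user_message\":\"(?<user_message>[^\"]+)\""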
Is there a way to manually trigger modular inputs using the REST API? Most of the advice I've seen involves triggering mod inputs by turning them off and then on again, but my setup uses cron intervals instead of seconds, so the input doesn't execute on re-enablement.
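For completeness, the usual off/on round trip over REST looks like this (scheme, app, and input names are placeholders):

curl -k -u admin -X POST https://localhost:8089/servicesNS/nobody/<app>/data/inputs/<scheme>/<input_name>/disable
curl -k -u admin -X POST https://localhost:8089/servicesNS/nobody/<app>/data/inputs/<scheme>/<input_name>/enable

With a cron interval this indeed does not force an immediate run. A common workaround, rather than a supported trigger, is to POST a temporary interval of a few seconds to the same input endpoint, let it fire, then restore the cron expression.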
Hello Splunk team, I want to build a dashboard over the four golden signals (latency, traffic, errors, saturation). Can anyone help me, or does anyone have a prebuilt dashboard I can use as a starting point?
Hello everyone, I have been reading the documentation for timecharts; however, I am a bit confused about how I can modify the search so that the visualization uses the second column instead of the first, as shown in the image. I want to use a single value timechart to monitor the hosts reporting to me per hour and track the increase/decrease in them. Many thanks for all your help.
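A single value visualization reads the first data series after _time, so the simplest approach is to make the wanted series the only (or first) one. For hosts reporting per hour, for example (index filter is a placeholder):

index=* | timechart span=1h dc(host) AS reporting_hosts

With reporting_hosts as the only series, the single value shows it directly, and enabling the trend option compares each hour against the previous one.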
Greetings, I use LDAP for authentication combined with RSA multi-factor authentication. Is it possible to configure an exception for local users?
Hi, how do I display my Status Indicator with dynamic colors and icons in a Trellis layout?

| eval status=case(status_id==0,"Idle", status_id>=1 AND status_id<=3,"Setup/PM", status_id==4,"Idle", status_id>=5 AND status_id<=6,"Down", status_id>=7 AND status_id<=8,"Idle", status_id==9,"Running")
| eval color=case(status_id==0,"#edd051", status_id>=1 AND status_id<=3,"#006d9c", status_id==4,"#edd051", status_id>=5 AND status_id<=6,"#ff0000", status_id>=7 AND status_id<=8,"#edd051", status_id==9,"#42dba0")
| eval icon=case(status_id==0,"times-circle", status_id>=1 AND status_id<=3,"user", status_id==4,"times-circle", status_id>=5 AND status_id<=6,"warning", status_id>=7 AND status_id<=8,"times-circle", status_id==9,"check")
| stats last(status) last(color) last(icon) BY internal_name

This only displays the status with no icons and the default grey color. Thanks.
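A likely cause (an assumption based on how custom visualizations bind to field names, not a confirmed fix): after the stats command, the result fields are literally named "last(status)", "last(color)", and "last(icon)", so the visualization's status/color/icon field settings no longer match anything. Renaming restores the expected names:

| stats last(status) AS status, last(color) AS color, last(icon) AS icon BY internal_name

The Status Indicator can then be pointed at the status, color, and icon fields, with its static color/icon options disabled so the per-result values are used.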
I recently realized that the data models of certain indexes are occupying a lot of disk space, and I have lowered the summary range of the related data models from 3 months to 1 month. So far I have not seen a drop in disk usage on the Data Models screen. Do I need to rebuild or update the acceleration? If so, in which order, and are there any performance or other risks involved in doing so? Thanks, regards.
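To watch whether the summaries actually shrink, the summarization endpoint can be queried from the search head (a sketch; field names are as exposed by the REST API and worth verifying on your version):

| rest /services/admin/summarization by_tstats=t splunk_server=local
| table summary.id summary.size summary.complete

In general, a shorter summary range takes effect as old summary buckets age out rather than immediately, while a rebuild from the Data Models page discards the summary and rebuilds it over the new 1-month range. The main risk is temporary search load while acceleration catches up, during which tstats-based searches either fall back to raw data or return partial results if summariesonly=true.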