All Topics


Hello, looking for some real guidance here. We just implemented Splunk with an implementation team. We are pulling notables out to send to our case management product and then closing the notable (this way we only search for open notables to send, and if one fails to send it is not closed, so it can be attempted again). We have to add a | head 1 to this search so that the updatenotable command knows which notable to update and set to closed (without the head command we had issues updating the notable to closed: seeing, say, 5 notables and then trying to update them became too confusing for Splunk). This has forced us to make this a real-time search (if we get 10 notables at the same time, we don't want to wait 10 minutes for each event to get over to us). I am going to provide some of the SPL to see if anyone knows a better way; we have been waiting 4 months for an answer from Splunk on this.

`notable`
| where (status==1 AND notable_xref_id!="")
Some eval commands and table
| head 1
| sendalert XXXX param.X_instance_id=X param.alert_mode="X" param.unique_id_field="" param.case_template="X" param.type="alert" param.source="splunk" param.timestamp_field="" param.title=X param.description=X param.tags="X" param.scope=0 param.severity=X param.tlp=X param.pap=X
| table status event_id
| eval status=5
| updatenotable

Has anyone searched the notable index, pulled multiple events, tried to update the notables in that same search, and had successful results for multiple entries?

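For reference, a minimal sketch of what a multi-notable update might look like without the | head 1, assuming updatenotable keys off the event_id and status fields in the pipeline the same way it is used above (that multi-row behavior is the assumption to verify, and the case-management sendalert hand-off is omitted here):

`notable`
| where (status==1 AND notable_xref_id!="")
| table status event_id
| eval status=5
| updatenotable

Whether the sendalert action then needs to be driven once per notable is a separate question this sketch does not answer.
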
I have two sourcetypes containing login information and user information:

Sourcetype1: login information (useful parameters: UserId, status); its Id = accountId
Sourcetype2: user information (useful parameters: username, Id); its Id = userId

Both sourcetypes contain the parameter Id, but it refers to different information. I want to get a list/table with the number of logins and the result for each user, mapping login data to user data with UserId (Sourcetype1) = Id (Sourcetype2).

Example:
username     status      count
aa@aa.aa     success     3

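A minimal sketch of how this correlation is often done with stats instead of join; the sourcetype and field names below are taken from the post and are assumptions to adjust, and splitting the count per status value would need an extra step:

(sourcetype="Sourcetype1") OR (sourcetype="Sourcetype2")
| eval user_key=if(sourcetype=="Sourcetype1", UserId, Id)
| stats values(username) as username, values(status) as status, count(eval(sourcetype=="Sourcetype1")) as count by user_key
| table username status count
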
Hi All, As AppDynamics has announced that it will stop accepting network connections using the TLS 1.0 and 1.1 protocols from April 1 onwards, is there an option to get an extension from AppDynamics for particular servers that are still using TLS 1.0 or 1.1? We still have some servers that use the older TLS versions. Regards, Fadil

I have a Splunk universal forwarder that is forwarding a 1 GB log file to a Splunk indexer. The problem I am facing is that ingestion is very slow (about 100K log entries per minute). I have tried setting parallelIngestionPipelines = 2 on both the indexer and the forwarder, but to no avail. Below are the stats for the containers running the indexer and the forwarder:

CONTAINER_ID  NAME                       CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O     PIDS
ecb272b9ca6b  tracing-splunk-1           12.15%  260.8MiB / 7.674GiB   3.32%   366MB / 1.85MB   0B / 1.01GB   239
0ac17f935889  tracing-splunkforwarder-1  0.70%   68.22MiB / 7.674GiB   0.87%   986kB / 312MB    0B / 18.2MB   65

We are running these in Docker containers. My team and I are pretty new to the Splunk ecosystem. Can someone please help us optimize log ingestion?

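One setting frequently checked when a universal forwarder feels slow is its default forwarding throughput cap. A minimal sketch, assuming the default 256 KBps thruput limit is the bottleneck (worth confirming in the forwarder's metrics.log before changing anything):

# limits.conf on the universal forwarder
# 0 removes the per-pipeline forwarding throughput cap; use a finite value if bandwidth must stay bounded
[thruput]
maxKBps = 0
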
I am building a dashboard with the new dashboard builder and I have a dynamic dropdown which returns these values:

timerange                                  rangeStart           rangeEnd             date
2024-03-07T09:10:23/2024-03-07T23:34:39    2024-03-07T09:10:23  2024-03-07T23:34:39  07/03/24-07/03/24
2024-03-08T19:41:25/2024-03-08T23:28:54    2024-03-08T19:41:25  2024-03-08T23:28:54  08/03/24-08/03/24
2024-03-11T19:36:52/2024-03-11T23:19:36    2024-03-11T19:36:52  2024-03-11T23:19:36  11/03/24-11/03/24

These ranges can span multiple days. I use the date column as my label in the dropdown, which works fine. My problem is that I want to use rangeStart and rangeEnd as the earliest and latest times for my graphs. My dropdown config looks like this:

{
    "options": {
        "items": ">frame(label, value, additional_value) | prepend(formattedStatics) | objects()",
        "token": "testrun",
        "selectFirstSearchResult": true
    },
    "title": "Testrun",
    "type": "input.dropdown",
    "dataSources": {
        "primary": "ds_w86GnMtx"
    },
    "context": {
        "formattedConfig": {
            "number": {
                "prefix": ""
            }
        },
        "formattedStatics": ">statics | formatByType(formattedConfig)",
        "statics": [],
        "label": ">primary | seriesByName(\"date\") | renameSeries(\"label\") | formatByType(formattedConfig)",
        "value": ">primary | seriesByName(\"rangeStart\") | renameSeries(\"value\") | formatByType(formattedConfig)",
        "additional_value": ">primary | seriesByName(\"rangeEnd\") | renameSeries(\"additional_value\") | formatByType(formattedConfig)"
    }
}

The token name for the dropdown is testrun. My query config for the graph looks like this:

{
    "type": "ds.search",
    "options": {
        "query": "QUERY",
        "queryParameters": {
            "earliest": "$testrun$rangeStart$",
            "latest": "$testrun$rangeEnd$"
        },
        "enableSmartSources": true
    },
    "name": "cool graph"
}

It seems the token $testrun$ itself returns rangeStart, but $testrun$rangeStart$ and $testrun$rangeEnd$ don't work. Is it even possible for the dropdown to return multiple values like that? If not, is there a way to use the timerange from above and split it in the middle to get earliest and latest? I also tried variations of the following, which I couldn't get to work either:

"earliest": "$testrun.timerange.split(\"/\")[0].strptime('%Y-%m-%dT%H:%M:%S')$",
"latest": "$testrun.timerange.split(\"/\")[1].strptime('%Y-%m-%dT%H:%M:%S')$"

The error I am getting is always "invalid earliest_time".

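One workaround sketch for the single-token limitation: set the dropdown's value to the combined timerange column and do the split inside the SPL rather than in queryParameters. Everything below (the index placeholder, the outer time window, and the assumption that Dashboard Studio substitutes $testrun$ as the raw timerange string) is hypothetical and would need verifying:

index=your_index earliest=-30d latest=now
| eval range_start=strptime(mvindex(split("$testrun$", "/"), 0), "%Y-%m-%dT%H:%M:%S")
| eval range_end=strptime(mvindex(split("$testrun$", "/"), 1), "%Y-%m-%dT%H:%M:%S")
| where _time>=range_start AND _time<=range_end

The outer earliest/latest only has to be wide enough to cover every possible test run; the per-run filtering is then done by the where clause.
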
Hello, We have around 10K events per hour in the _internal index from a Splunk UF 9.1.2 installed on a Windows 10 22H2 machine (build 19045.3930). I know the problem is that the Printer service is disabled; the question is why the Splunk UF is not checking WinPrintMon every 600 seconds as per inputs.conf.

Here is what seems to be the reason; why is it ignoring the interval parameter?

03-14-2024 11:02:02.900 +0100 INFO ExecProcessor [4212 ExecProcessor] - Ignoring parameter "interval" for modular input "WinPrintMon" when scheduling the runtime for script=""C:\Program Files\SplunkUniversalForwarder\bin\scripts\splunk-winprintmon.path"". This means potentially Splunk won't be restarting it in case it gets terminated.

Here are the logs in the _internal index (around 180 per minute, so about 3 per second):

03-14-2024 10:30:23.470 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:23.470 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:22.932 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:22.707 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:22.407 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.

Here is inputs.conf:

###### Print monitoring ######
[WinPrintMon://printer]
type = printer
interval = 600
baseline = 1
disabled = 0
index=idx_xxxx_windows

[WinPrintMon://job]
type = job
interval = 600
baseline = 1
disabled = 0
index=idx_xxxx_windows

[WinPrintMon://driver]
type = driver
interval = 600
baseline = 1
disabled = 0
index=idx_xxxx_windows

[WinPrintMon://port]
type = port
interval = 600
baseline = 1
disabled = 0
index=idx_xxxx_windows

Thanks a lot, Edaordo

Hello Splunkers! I've encountered challenges while attempting to connect Notion logs to our Splunk instance. Here's what I've tried:

Inserting the HEC URL with a public IP on our Splunk on-premise setup.
Activating the HEC URL and applying it to our Splunk Cloud Trial.

Unfortunately, both methods failed to establish the connection. Has anyone successfully connected Notion logs with Splunk, either on-premise or through the Splunk Cloud Trial? Thank you for your attention and support.

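A minimal HEC connectivity check that is often run before debugging the sending side, assuming the default HEC port 8088; <splunk-host> and <hec-token> are placeholders. A {"text":"Success","code":0} response means the endpoint itself is reachable, which narrows the problem to the Notion side or the network path:

curl -k "https://<splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hec connectivity test", "sourcetype": "manual_test"}'
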
Hello! Since 7.3.0 I'm seeing the reload process for assets and identities failing frequently. Any ideas?

ERROR pid=20559 tid=MainThread file=base_modinput.py:execute:820 | Execution failed: 'SplunkdConnectionException' object has no attribute 'get_message_text'
Traceback (most recent call last):
  File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 601, in simpleRequest
    serverResponse, serverContent = h.request(uri, method, headers=headers, body=payload)
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1710, in request
    conn, authority, uri, request_uri, method, body, headers, redirections, cachekey,
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1425, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1377, in _conn_request
    response = conn.getresponse()
  File "/app/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/app/splunk/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/app/splunk/lib/python3.7/http/client.py", line 280, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/app/splunk/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "/app/splunk/lib/python3.7/ssl.py", line 1079, in recv_into
    return self.read(nbytes, buffer)
  File "/app/splunk/lib/python3.7/ssl.py", line 937, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 483, in reload_settings
    raiseAllErrors=True
  File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 613, in simpleRequest
    raise splunk.SplunkdConnectionException('Error connecting to %s: %s' % (path, str(e)))
splunk.SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /services/identity_correlation/identity_manager/_reload: The read operation timed out',)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 811, in execute
    log_exception_and_continue=True
  File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 380, in do_run
    self.run(stanzas)
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 586, in run
    reload_success = self.reload_settings()
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 486, in reload_settings
    logger.error('status="Failed to reload settings" error="%s"', e.get_message_text())
AttributeError: 'SplunkdConnectionException' object has no attribute 'get_message_text'

Hi, I am trying to find out what my Splunk ID is, for the purpose of purchasing additional user licenses. I am unable to find any useful info on my dashboard, and the support portal seems to be down. Regards, Alvin

Hello all! I am trying to add a field to an artifact with the "update artifact" action (phantom app). I am trying to pass a 'message parameter' in the 'value' of the cef_json field, for example: {"new_field": {0}}, but unfortunately I get "key_error" and the action fails. How can I solve it?

After setting up props.conf on the SH, when I check my UF side and restart it, the following error appears:

Checking conf files for problems...
Invalid key in stanza [web:access] in /opt/splunkforwarder/etc/apps/dynasafe_course_demo_ta/local/props.conf, line 3: ENVENT_BREAKER (value: ([\r\n]+)).
Invalid key in stanza [web:secure] in /opt/splunkforwarder/etc/apps/dynasafe_course_demo_ta/local/props.conf, line 6: ENVENT_BREAKER (value: ([\r\n]+)).
Your indexes and inputs configurations are not internally consistent.

What is going on here?

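The error message itself points at the key name: ENVENT_BREAKER is not a valid props.conf key, while EVENT_BREAKER is. A sketch of the stanzas with the corrected spelling, assuming forwarder-side event breaking is what was intended (EVENT_BREAKER_ENABLE is typically required alongside it on a universal forwarder):

# props.conf in the app's local directory on the forwarder
[web:access]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

[web:secure]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
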
I've been running into an issue with a Splunk query that we have been using for a long time. I'm seeing the following error message, "Please select a shorter time duration for your query," even when I'm using a 5-minute time range. I noticed that this error seems to pop up when we use latest=now() in our queries to get the most recent data. However, when I tried the same query with a specific time range, like earliest=-xxh@h latest=-xxh@h, it worked just fine. Any ideas on why latest=now() might not be fetching results as expected? And is there any way to get it working with latest=now()?

Hi Splunk Community, I'm working on a Django-based website server running inside a Docker container, and I'm facing an issue with OpenTelemetry Collector (Otel) data reception. Despite following the official Splunk documentation for installing Otel within a Docker container, the Otel Collector installed on my VM isn't receiving any data from my Django application. Here are the warning logs from the Otel container:

2024-03-14 04:49:05,592 WARNING [opentelemetry.exporter.otlp.proto.grpc.exporter] [exporter.py:293] [trace_id=0 span_id=0 resource.service.name=website trace_sampled=False] - Transient error StatusCode.UNAVAILABLE encountered while exporting metrics to localhost:4317, retrying in 32s.

Initially, my Dockerfile was configured with OTEL_EXPORTER_OTLP_ENDPOINT='localhost:4317'. Considering that might be the issue, I updated it to OTEL_EXPORTER_OTLP_ENDPOINT='otelcol:4317', aiming to communicate directly with the Otel Collector service running as a Docker container. However, I'm still observing attempts to connect to localhost:4317 in the error logs. Here's a brief overview of my setup:

Django application running in a Docker container.
OpenTelemetry Collector deployed as a separate Docker container named 'otel-collector'.
Dockerfile for the Django application updated to use the OpenTelemetry Collector container endpoint.

Could anyone provide insights or suggestions on what might be going wrong here? How can I ensure that my Django application correctly sends telemetry data to the Otel Collector? Thank you in advance for your help and suggestions!

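A minimal sketch of the exporter endpoint setting, assuming the collector container is named otel-collector, both containers share a Docker network, and gRPC OTLP on port 4317 (all taken or inferred from the post):

# Dockerfile for the Django application (hypothetical snippet)
ENV OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317

Note that a Dockerfile ENV change only takes effect after the image is rebuilt and the container recreated, which is one plausible reason the logs still show attempts against localhost:4317.
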
Hi, I'm trying to write data to an output lookup file by making a REST API call (running a search query). The command below works and writes data to the output lookup CSV file when the search runs directly from Splunk:

| stats count as field1
| eval field1="host_abc;host_def"
| eval field1=split(field1,";")
| mvexpand field1
| rex field=field1 "(?<host>.*)"
| table host
| outputlookup test_maintenance.csv

But this is not working when executing the above search through the REST API. I get the error "Unbalanced quotes" when running this command:

curl -k -u admin:admin https://splunksearchnode:8089/servicesNS/admin/search/jobs/export -d search="| stats count as field1 | eval field1=\"host_abc;host_def\" | eval field1=split(field1,\";\") | mvexpand field1 | rex field=field1 \"(?<host>.*)\" | table host | outputlookup test_maintenance.csv"

And I get the error below when running this command:

Error: Error in 'EvalCommand': The expression is malformed. An unexpected character is reached at '\'host_abc'.</msg></messages></response>

curl -k -u admin:admin https://splunksearchnode:8089/servicesNS/admin/search/jobs/export -d search='| stats count as field1 | eval field1=\"host_abc;host_def\" | eval field1=split(field1,\";\") | mvexpand field1 | rex field=field1 \"(?<host>.*)\" | table host | outputlookup test_maintenance.csv'

Appreciate your help. Thank you

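A quoting sketch that may avoid both errors: keep the whole search in single quotes so the shell passes the inner double quotes through unchanged (no backslashes needed), and let curl URL-encode it with --data-urlencode. Host, credentials, and lookup name are taken from the post:

curl -k -u admin:admin https://splunksearchnode:8089/servicesNS/admin/search/jobs/export \
  --data-urlencode 'search=| stats count as field1 | eval field1="host_abc;host_def" | eval field1=split(field1,";") | mvexpand field1 | rex field=field1 "(?<host>.*)" | table host | outputlookup test_maintenance.csv'

Inside single quotes the \" sequences in the second attempt reach Splunk literally, which is what the "unexpected character is reached at '\'" error suggests.
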
I made a terrible mistake and tried to use Splunk as a non-admin for the first time in a year or so. With that mistake I experienced the normal user woes of job queuing. In reaction to the queuing I went to the job manager to delete all of my own jobs except the latest queued job I cared about. Upon deletion of the older jobs, my queued search did not resume within a reasonable period of time (within 5 seconds). I then went back to the job activity monitor and saw that the jobs I had deleted seconds before were still present. How long is someone expected to wait for queued jobs to resume after deleting older jobs? It seems the desired effect only comes after a matter of minutes, not seconds. Is this configurable?

Hi all, I am trying to join two queries but am unable to get the expected result. I am using the join command to extract the username from the base query and then look up the details of that username in the main query. I am also trying to accommodate time constraints here, e.g., only keep a user from the main query if the time difference between when it was captured in the subquery and in the main query is at most 120 seconds. I have also tried multiple eval commands and appendcols.

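Since no SPL was included in the post, the following is only a generic sketch of a join-free pattern for this kind of correlation; every index, sourcetype, and field name is hypothetical, and mapping earliest/latest to the two captures would need to be adapted to the real data:

(index=base_idx sourcetype=base_data) OR (index=main_idx sourcetype=main_data)
| stats earliest(_time) as base_time, latest(_time) as main_time, values(detail) as detail by username
| eval delta=abs(main_time - base_time)
| where delta <= 120
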
Hi All, I need help troubleshooting metrics coming into sim_metrics, i.e. the SIM add-on. Splunk Observability is configured with a service "test". When I run the SIM command on the Splunk search head, I see there are metrics. But if I run the same thing with mstats, it does not return any results. It was pulling data a week ago, but not now. What could the troubleshooting steps be for an issue like this? What are the points I have to check?

Summary: data is being pulled by the SIM add-on, so I see metrics when using the SIM command, but when I try mstats on the same metrics, it returns no results. Can anyone help me with what the issue could be and where I should start troubleshooting?

Regards, PNV

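Two checks that are often run first in this situation; sim_metrics is taken from the post, everything else is a placeholder:

| mcatalog values(metric_name) WHERE index=sim_metrics

| mstats count(_value) WHERE index=sim_metrics AND metric_name="*" span=1h

The first shows which metric names exist in the index at all; the second shows whether any datapoints fall inside the selected time range, which helps separate an ingestion gap (the add-on stopped writing) from a query or permissions problem.
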
I have a question. I have a table that contains groups of people with their email addresses. I want to use this table in the recipients field when creating an alert, to notify users via email. For this, I want to know if I can use $result.fieldname$ in the 'To' field when configuring the recipients, so that the addresses come from that table.

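$result.fieldname$ in the email alert action is filled from the first result row, so one common sketch is to collapse all addresses into a single comma-separated field at the end of the alert search; the lookup name and field names here are hypothetical:

... existing alert search ...
| lookup email_groups group OUTPUT email
| stats values(email) as recipients
| eval recipients=mvjoin(recipients, ",")

The 'To' field of the email action is then set to $result.recipients$.
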
Hi - Recently we upgraded Splunk to version 9.1.3. I noticed that I can no longer start Splunk using "./splunk start --accept-license=yes", which forces me to use "systemctl start Splunkd" to start Splunk. Could you please let me know how to pass --accept-license=yes when starting with "systemctl start Splunkd"?

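With a systemd-managed splunkd, the license flag is normally accepted once through the CLI rather than passed to systemctl; a sketch of that one-time sequence, assuming a default /opt/splunk path, the unit name Splunkd from the post, and that the CLI may be run as the splunk user:

# run once to record license acceptance, then hand control back to systemd
sudo -u splunk /opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt
sudo -u splunk /opt/splunk/bin/splunk stop
sudo systemctl start Splunkd
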
I'm trying to create a workload management rule to prevent users from searching with "All Time". After researching, it seems that best practice is not to run "All Time" searches, as they produce long run times and use more memory/CPU. Are there any types of searches, users, or other exceptions that should be allowed to use "All Time"?