All Posts

I'm hoping the community can help out here, because I'm having the same issue.
Hello, looking for some real guidance here. We just implemented Splunk with an implementation team. We are pulling notables out to send to our case management product and then closing the notable (this way we are only searching for open notables to send, and if for some reason one doesn't send, it doesn't get closed, so the send can be attempted again).

We have to add a | head 1 to this search so that the updatenotable command knows which notable to update and set to closed (without the head command, updating the notable to closed caused issues: seeing, say, 5 notables and then trying to update became too confusing for Splunk). This has forced us to make this a real-time search (if we get 10 notables at the same time, we don't want to wait 10 minutes for each event to get over to us). I am going to provide some of the SPL and see if anyone knows a better way; we have been waiting on Splunk for 4 months on this.

`notable`
| where (status==1 AND notable_xref_id!="")
(some eval commands and a table)
| head 1
| sendalert XXXX param.X_instance_id=X param.alert_mode="X" param.unique_id_field="" param.case_template="X" param.type="alert" param.source="splunk" param.timestamp_field="" param.title=X param.description=X param.tags="X" param.scope=0 param.severity=X param.tlp=X param.pap=X
| table status event_id
| eval status=5
| updatenotable

Has anyone attempted to search the notable index, pull multiple events, and update those notables in the same search, with successful results for multiple entries?
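For what it's worth, here is a sketch of a multi-notable variant of the close step, assuming updatenotable acts on each result row keyed by event_id (as your own pipeline suggests); whether sendalert fires once per result or once per search is a separate question, and may be the real reason | head 1 was needed:

`notable`
| where (status==1 AND notable_xref_id!="")
| table event_id status
| eval status=5
| updatenotable

If updatenotable does handle multiple rows this way, the sendalert step could be moved into its own search so that each search only has one job to do.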
I have two sourcetypes containing login information and user information.

Sourcetype1: login information (useful parameters: UserId, status); here Id = accountId.
Sourcetype2: user information (useful parameters: username, Id); here Id = userId.

Both sourcetypes contain the parameter Id, but it refers to different information. I want to get a list/table with the number of logins and the result for each user, mapping login data to user data: UserId (Sourcetype1) = Id (Sourcetype2).

Example:

username     status      count
aa@aa.aa     success     3
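A stats-based sketch for this kind of correlation, with the index and sourcetype names as placeholders: normalize the two Id fields to one key, copy the username from the user events onto the login events, then count.

index=your_index (sourcetype=sourcetype1 OR sourcetype=sourcetype2)
| eval joinkey=if(sourcetype=="sourcetype1", UserId, Id)
| eventstats values(username) as username by joinkey
| search sourcetype=sourcetype1
| stats count by username, status

The eventstats step spreads username across all events sharing a joinkey, so after filtering back down to the login sourcetype you can aggregate by username and status directly.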
Make sure you have this in limits.conf on the UF:

[thruput]
maxKBps = 0
Is there any tooling (btool perhaps) that would tell me what props/transforms are being applied to my sourcetype? Even if I drop the sourcetype from my inputs.conf, the issue persists.
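btool can do this; a sketch, with the sourcetype name as a placeholder:

$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug
$SPLUNK_HOME/bin/splunk btool transforms list --debug

The --debug flag prefixes each merged line with the file it came from, so you can see which app is still applying settings even after the sourcetype is dropped from inputs.conf.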
ITWhisperer - thanks for your answer - fits perfectly! Is creating your own sourcetype difficult - any hints or tutorials about it? KP
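In case it helps: creating a sourcetype is mostly a props.conf exercise; a minimal sketch, with the stanza name, time format, and monitored path purely illustrative:

# props.conf
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S

# inputs.conf, assigning the sourcetype to a monitored file
[monitor:///var/log/myapp/app.log]
sourcetype = my_custom_sourcetype

The "Getting Data In" section of the Splunk documentation walks through these props.conf keys in detail.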
How about a really slow response... Maybe someone else will stumble across it.

"Many Amazon EC2 instances support simultaneous multithreading, which enables multiple threads to run concurrently on a single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. For example, an m5.xlarge instance type has two CPU cores and two threads per core by default—four vCPUs in total." (taken from the AWS docs at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html)

The AWS instance-type page states that the c6i.4xl you were looking at uses an Ice Lake 8375C processor. What you are getting is the use of 16 threads from a 32-core/64-thread processor. https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors_(Ice_Lake-based)
Yes, I thought about using the old dashboard builder as an alternative, but I wanted to see if it would be possible to use the new one.
Hi All,

AppDynamics announced that they will stop accepting network connections using the TLS 1.0 and 1.1 protocols from April 1 onwards. Is there an option to get an extension from AppDynamics for particular servers that are still using TLS 1.0 and 1.1? We still have some servers that use the older TLS versions.

Regards,
Fadil
I have a Splunk universal forwarder which is indexing a 1 GB log file to a Splunk indexer. The problem I am facing is that ingestion is happening very slowly (100K log entries per minute). I have tried setting

parallelIngestionPipelines = 2

for both the indexer and the forwarder, but to no avail. Below are the stats for the containers running the indexer and forwarder:

CONTAINER_ID   NAME                        CPU %    MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O     PIDS
ecb272b9ca6b   tracing-splunk-1            12.15%   260.8MiB / 7.674GiB   3.32%   366MB / 1.85MB   0B / 1.01GB   239
0ac17f935889   tracing-splunkforwarder-1   0.70%    68.22MiB / 7.674GiB   0.87%   986kB / 312MB    0B / 18.2MB   65

We are running these in Docker containers. My team and I are pretty new to the Splunk ecosystem. Can someone please help us optimize the ingestion of logs?
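If the bottleneck is on the forwarder side, the usual first suspect is the default forwarding thruput cap; a sketch of the relevant stanzas, assuming a default install layout:

# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
# 0 lifts the default 256 KBps forwarding cap
maxKBps = 0

# $SPLUNK_HOME/etc/system/local/server.conf (where parallelIngestionPipelines belongs)
[general]
parallelIngestionPipelines = 2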
Have you considered using a Classic / SimpleXML dashboard, as you can probably achieve this with SimpleXML?
Thank you for the quick take. I am confident that I searched for and invoked deletion of my jobs across all apps. I'm thinking it takes Splunk a while to act on or confirm deletion, and until it does, the jobs remain searchable in the job activity manager. I have experienced this sort of problem in multiple clustered on-prem Splunk implementations over the years, and am frustrated on behalf of users for their non-deterministic experience recovering from queuing through the prescribed actions in the job activity monitor.
I am building a dashboard with the new dashboard builder and I have a dynamic dropdown which returns these values (timerange, rangeStart, rangeEnd, date):

2024-03-07T09:10:23/2024-03-07T23:34:39, 2024-03-07T09:10:23, 2024-03-07T23:34:39, 07/03/24-07/03/24
2024-03-08T19:41:25/2024-03-08T23:28:54, 2024-03-08T19:41:25, 2024-03-08T23:28:54, 08/03/24-08/03/24
2024-03-11T19:36:52/2024-03-11T23:19:36, 2024-03-11T19:36:52, 2024-03-11T23:19:36, 11/03/24-11/03/24

These ranges can span multiple days. I use the date column as the label in the dropdown, which works fine. My problem is that I want to use rangeStart and rangeEnd as the earliest and latest times for my graphs. My dropdown config looks like this (the token name for the dropdown is testrun):

{
    "options": {
        "items": ">frame(label, value, additional_value) | prepend(formattedStatics) | objects()",
        "token": "testrun",
        "selectFirstSearchResult": true
    },
    "title": "Testrun",
    "type": "input.dropdown",
    "dataSources": {
        "primary": "ds_w86GnMtx"
    },
    "context": {
        "formattedConfig": {
            "number": {
                "prefix": ""
            }
        },
        "formattedStatics": ">statics | formatByType(formattedConfig)",
        "statics": [],
        "label": ">primary | seriesByName(\"date\") | renameSeries(\"label\") | formatByType(formattedConfig)",
        "value": ">primary | seriesByName(\"rangeStart\") | renameSeries(\"value\") | formatByType(formattedConfig)",
        "additional_value": ">primary | seriesByName(\"rangeEnd\") | renameSeries(\"additional_value\") | formatByType(formattedConfig)"
    }
}

My query config for the graph looks like this:

{
    "type": "ds.search",
    "options": {
        "query": "QUERY",
        "queryParameters": {
            "earliest": "$testrun$rangeStart$",
            "latest": "$testrun$rangeEnd$"
        },
        "enableSmartSources": true
    },
    "name": "cool graph"
}

It seems like the token $testrun$ by itself returns the rangeStart, but $testrun$rangeStart$ and $testrun$rangeEnd$ don't work. Is it even possible for the dropdown to return multiple values like that? If not, is there a way to take the timerange from above and split it in the middle to get earliest and latest? I also tried variations of this, which I couldn't get to work either; the error I am getting is always "invalid earliest_time":

"earliest": "$testrun.timerange.split(\"/\")[0].strptime('%Y-%m-%dT%H:%M:%S')$",
"latest": "$testrun.timerange.split(\"/\")[1].strptime('%Y-%m-%dT%H:%M:%S')$"
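One workaround sketch, since a single Studio input sets only one token value: point the dropdown's "value" at the full timerange column (the "start/end" string) and filter in SPL instead of via earliest/latest, leaving the data source's queryParameters wide enough to cover all test runs. Field and token names are as in your config:

| where _time >= strptime(mvindex(split("$testrun$", "/"), 0), "%Y-%m-%dT%H:%M:%S")
    AND _time <= strptime(mvindex(split("$testrun$", "/"), 1), "%Y-%m-%dT%H:%M:%S")

This trades the time-picker optimization for correctness, so it is best suited to data sources that are cheap to search over the full window.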
Hello,

We have around 10K events per hour in the _internal index from a Splunk UF 9.1.2 installed on a Windows 10 22H2 machine (build 19045.3930). I know the problem is that the Printer service is disabled; the question is why the Splunk UF is not checking WinPrintMon every 600 seconds as per inputs.conf.

Here seems to be the reason - but why is it ignoring the interval parameter?

03-14-2024 11:02:02.900 +0100 INFO ExecProcessor [4212 ExecProcessor] - Ignoring parameter "interval" for modular input "WinPrintMon" when scheduling the runtime for script=""C:\Program Files\SplunkUniversalForwarder\bin\scripts\splunk-winprintmon.path"". This means potentially Splunk won't be restarting it in case it gets terminated.

Here are the logs in the _internal index (around 180 per minute, so 3 per second):

03-14-2024 10:30:23.470 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:23.470 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:22.932 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:22.707 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:22.407 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.

Here is inputs.conf:

###### Print monitoring ######
[WinPrintMon://printer]
type = printer
interval = 600
baseline = 1
disabled = 0
index=idx_xxxx_windows

[WinPrintMon://job]
type = job
interval = 600
baseline = 1
disabled = 0
index=idx_xxxx_windows

[WinPrintMon://driver]
type = driver
interval = 600
baseline = 1
disabled = 0
index=idx_xxxx_windows

[WinPrintMon://port]
type = port
interval = 600
baseline = 1
disabled = 0
index=idx_xxxx_windows

Thanks a lot,
Edaordo
Hello Splunkers! I've encountered challenges while attempting to connect Notion logs to our Splunk instance. Here's what I've tried:

- Inserting the HEC URL with a public IP on our Splunk on-premise setup.
- Activating the HEC URL and applying it to our Splunk Cloud trial.

Unfortunately, both methods failed to establish the connection. Has anyone successfully connected Notion logs with Splunk, either on-premise or through the Splunk Cloud trial? Thank you for your attention and support.
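Not Notion-specific, but it can help to isolate which side is failing; a sketch of the standard HEC connectivity test, assuming the default HEC port (replace the host and token placeholders):

curl -k https://your-splunk-host:8088/services/collector/event -H "Authorization: Splunk your-hec-token" -d '{"event": "hec connectivity test"}'

A reply of {"text":"Success","code":0} means HEC itself is reachable, which narrows the problem down to the Notion side or the network path in between.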
Hello! Since 7.3.0 I'm seeing the reload process for assets and identities failing frequently. Any ideas?

ERROR pid=20559 tid=MainThread file=base_modinput.py:execute:820 | Execution failed: 'SplunkdConnectionException' object has no attribute 'get_message_text'
Traceback (most recent call last):
  File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 601, in simpleRequest
    serverResponse, serverContent = h.request(uri, method, headers=headers, body=payload)
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1710, in request
    conn, authority, uri, request_uri, method, body, headers, redirections, cachekey,
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1425, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1377, in _conn_request
    response = conn.getresponse()
  File "/app/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/app/splunk/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/app/splunk/lib/python3.7/http/client.py", line 280, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/app/splunk/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "/app/splunk/lib/python3.7/ssl.py", line 1079, in recv_into
    return self.read(nbytes, buffer)
  File "/app/splunk/lib/python3.7/ssl.py", line 937, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 483, in reload_settings
    raiseAllErrors=True
  File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 613, in simpleRequest
    raise splunk.SplunkdConnectionException('Error connecting to %s: %s' % (path, str(e)))
splunk.SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /services/identity_correlation/identity_manager/_reload: The read operation timed out',)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 811, in execute
    log_exception_and_continue=True
  File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 380, in do_run
    self.run(stanzas)
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 586, in run
    reload_success = self.reload_settings()
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 486, in reload_settings
    logger.error('status="Failed to reload settings" error="%s"', e.get_message_text())
AttributeError: 'SplunkdConnectionException' object has no attribute 'get_message_text'
Hi,

I am trying to find my Splunk ID for the purpose of purchasing additional user licenses. I am unable to find any useful info on my dashboard, and the support portal seems to be down.

Regards,
Alvin
@phanTom yeah, I got confused with the escaped "{}". You are the best!
@meshorer just add more keys & values to the JSON string  {{"new_field1": "{0}", "new_field2": "{1}"}}    
@phanTom thank you so much! Could you also tell me how to add two fields in the same action?