All Posts



How about a really slow response... maybe someone else will stumble across it.

Many Amazon EC2 instances support simultaneous multithreading, which enables multiple threads to run concurrently on a single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. For example, an m5.xlarge instance type has two CPU cores and two threads per core by default—four vCPUs in total. (Taken from the AWS docs at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html)

The AWS instance-type page states that the c6i.4xlarge you were looking at uses an Ice Lake Xeon Platinum 8375C processor. What you are getting is 16 threads from a 32-core/64-thread processor: https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors_(Ice_Lake-based)
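The arithmetic behind the vCPU count is just cores times threads per core; a toy check, with the instance specs hard-coded from the AWS figures quoted above:

```python
# vCPUs = physical cores * threads per core (simultaneous multithreading)
def vcpus(cores, threads_per_core):
    return cores * threads_per_core

print(vcpus(2, 2))   # m5.xlarge: 2 cores x 2 threads = 4 vCPUs
print(vcpus(8, 2))   # c6i.4xlarge: 16 vCPUs backed by 8 physical cores
```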
Yes I thought about using the old dashboard builder as an alternative, but I wanted to see if it would be possible to use the new one.
Hi All, AppDynamics has announced that it will stop accepting network connections using the TLS 1.0 and 1.1 protocols from April 1 onwards. Is there an option to get an extension from AppDynamics for particular servers that are still using TLS 1.0 and 1.1? We still have some servers that use the older TLS versions. Regards, Fadil
I have a Splunk universal forwarder which is sending a 1 GB log file to a Splunk indexer. The problem I am facing is that ingestion is very slow (about 100K log entries per minute). I have tried setting

    parallelIngestionPipelines = 2

on both the indexer and the forwarder, but to no avail. Below are the stats for the containers running the indexer and forwarder:

    CONTAINER_ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
    ecb272b9ca6b tracing-splunk-1 12.15% 260.8MiB / 7.674GiB 3.32% 366MB / 1.85MB 0B / 1.01GB 239
    0ac17f935889 tracing-splunkforwarder-1 0.70% 68.22MiB / 7.674GiB 0.87% 986kB / 312MB 0B / 18.2MB 65

We are running these in Docker containers. My team and I are pretty new to the Splunk ecosystem. Can someone please help us optimize the ingestion of logs?
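For reference, parallelIngestionPipelines is a server.conf setting, and the universal forwarder's default throughput cap in limits.conf is another common bottleneck for slow ingestion; a hedged sketch of both (values illustrative, not a tested recommendation):

```ini
# server.conf on both forwarder and indexer
[general]
parallelIngestionPipelines = 2

# limits.conf on the universal forwarder: the default maxKBps of 256
# throttles forwarder output; 0 removes the cap (use with care)
[thruput]
maxKBps = 0
```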
Have you considered using a Classic / SimpleXML dashboard, as you can probably achieve this with SimpleXML?
Thank you for the quick take. I am confident that I searched for and invoked deletion of my jobs across all apps. I'm thinking it takes Splunk a while to act on or confirm the deletion, and until it does, the jobs remain visible in the job activity manager. I have experienced this sort of problem in multiple clustered on-prem Splunk implementations over the years, and I am frustrated on behalf of users at their non-deterministic experience recovering from queuing through the prescribed actions in the job activity manager.
I am building a dashboard with the new dashboard builder and I have a dynamic dropdown which returns these values: timerange, rangeStart, rangeEnd, date

    2024-03-07T09:10:23/2024-03-07T23:34:39 2024-03-07T09:10:23 2024-03-07T23:34:39 07/03/24-07/03/24
    2024-03-08T19:41:25/2024-03-08T23:28:54 2024-03-08T19:41:25 2024-03-08T23:28:54 08/03/24-08/03/24
    2024-03-11T19:36:52/2024-03-11T23:19:36 2024-03-11T19:36:52 2024-03-11T23:19:36 11/03/24-11/03/24

These ranges can span multiple days. I use the date column as the label in the dropdown, which works fine. My problem now is that I want to use rangeStart and rangeEnd as the earliest and latest times for my graphs. My dropdown config looks like this:

    {
        "options": {
            "items": ">frame(label, value, additional_value) | prepend(formattedStatics) | objects()",
            "token": "testrun",
            "selectFirstSearchResult": true
        },
        "title": "Testrun",
        "type": "input.dropdown",
        "dataSources": {
            "primary": "ds_w86GnMtx"
        },
        "context": {
            "formattedConfig": {
                "number": {
                    "prefix": ""
                }
            },
            "formattedStatics": ">statics | formatByType(formattedConfig)",
            "statics": [],
            "label": ">primary | seriesByName(\"date\") | renameSeries(\"label\") | formatByType(formattedConfig)",
            "value": ">primary | seriesByName(\"rangeStart\") | renameSeries(\"value\") | formatByType(formattedConfig)",
            "additional_value": ">primary | seriesByName(\"rangeEnd\") | renameSeries(\"additional_value\") | formatByType(formattedConfig)"
        }
    }

The token name for the dropdown is testrun. My query config for the graph looks like this:

    {
        "type": "ds.search",
        "options": {
            "query": "QUERY",
            "queryParameters": {
                "earliest": "$testrun$rangeStart$",
                "latest": "$testrun$rangeEnd$"
            },
            "enableSmartSources": true
        },
        "name": "cool graph"
    }

It seems like the token $testrun$ itself returns the rangeStart, but $testrun$rangeStart$ and $testrun$rangeEnd$ don't work. Is it even possible for the dropdown to return multiple values like that? If not, is there a way to take the timerange value from above and split it in the middle to get earliest and latest?

    "earliest": "$testrun.timerange.split(\"/\")[0].strptime('%Y-%m-%dT%H:%M:%S')$",
    "latest": "$testrun.timerange.split(\"/\")[1].strptime('%Y-%m-%dT%H:%M:%S')$"

I also tried this in different ways but couldn't get it to work. The error I am getting is always "invalid earliest_time".
Hello, We have around 10K events per hour in the _internal index from a Splunk UF 9.1.2 installed on a Windows 10 22H2 machine (build 19045.3930). I know the problem is that the Printer service is disabled; the question is why the Splunk UF is not checking WinPrintMon every 600 seconds as per inputs.conf.

Here seems to be the reason. Why is it ignoring the interval parameter?

    03-14-2024 11:02:02.900 +0100 INFO ExecProcessor [4212 ExecProcessor] - Ignoring parameter "interval" for modular input "WinPrintMon" when scheduling the runtime for script=""C:\Program Files\SplunkUniversalForwarder\bin\scripts\splunk-winprintmon.path"". This means potentially Splunk won't be restarting it in case it gets terminated.

Here are the logs in the _internal index (around 180 per minute, so 3 per second):

    03-14-2024 10:30:23.470 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
    03-14-2024 10:30:23.470 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
    03-14-2024 10:30:22.932 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
    03-14-2024 10:30:22.707 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
    03-14-2024 10:30:22.407 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.

Here is inputs.conf:

    ###### Print monitoring ######
    [WinPrintMon://printer]
    type = printer
    interval = 600
    baseline = 1
    disabled = 0
    index=idx_xxxx_windows

    [WinPrintMon://job]
    type = job
    interval = 600
    baseline = 1
    disabled = 0
    index=idx_xxxx_windows

    [WinPrintMon://driver]
    type = driver
    interval = 600
    baseline = 1
    disabled = 0
    index=idx_xxxx_windows

    [WinPrintMon://port]
    type = port
    interval = 600
    baseline = 1
    disabled = 0
    index=idx_xxxx_windows

Thanks a lot, Edaordo
Hello Splunkers! I've encountered challenges while attempting to connect Notion logs to our Splunk instance. Here's what I've tried:

- Inserting the HEC URL with a public IP on our Splunk on-premise setup.
- Activating the HEC URL and applying it to our Splunk Cloud trial.

Unfortunately, both methods failed to establish the connection. Has anyone successfully connected Notion logs to Splunk, either on-premise or through the Splunk Cloud trial? Thank you for your attention and support.
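When a third-party integration fails like this, it can help to first confirm that HEC itself accepts events from outside before involving Notion; a sketch using Splunk's documented collector endpoint (host and token are placeholders, not values from this thread):

```shell
# Send a test event to HEC; a working endpoint answers {"text":"Success","code":0}
curl -k "https://YOUR_SPLUNK_HOST:8088/services/collector/event" \
  -H "Authorization: Splunk YOUR_HEC_TOKEN" \
  -d '{"event": "hec connectivity test", "sourcetype": "manual"}'
```

If this fails from the public internet, the problem is reachability of port 8088 (firewall/NAT), not the Notion configuration.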
Hello! Since 7.3.0 I'm seeing the reload process for assets and identities failing frequently. Any ideas?

    ERROR pid=20559 tid=MainThread file=base_modinput.py:execute:820 | Execution failed: 'SplunkdConnectionException' object has no attribute 'get_message_text'
    Traceback (most recent call last):
      File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 601, in simpleRequest
        serverResponse, serverContent = h.request(uri, method, headers=headers, body=payload)
      File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1710, in request
        conn, authority, uri, request_uri, method, body, headers, redirections, cachekey,
      File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1425, in _request
        (response, content) = self._conn_request(conn, request_uri, method, body, headers)
      File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1377, in _conn_request
        response = conn.getresponse()
      File "/app/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
        response.begin()
      File "/app/splunk/lib/python3.7/http/client.py", line 319, in begin
        version, status, reason = self._read_status()
      File "/app/splunk/lib/python3.7/http/client.py", line 280, in _read_status
        line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
      File "/app/splunk/lib/python3.7/socket.py", line 589, in readinto
        return self._sock.recv_into(b)
      File "/app/splunk/lib/python3.7/ssl.py", line 1079, in recv_into
        return self.read(nbytes, buffer)
      File "/app/splunk/lib/python3.7/ssl.py", line 937, in read
        return self._sslobj.read(len, buffer)
    socket.timeout: The read operation timed out

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 483, in reload_settings
        raiseAllErrors=True
      File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 613, in simpleRequest
        raise splunk.SplunkdConnectionException('Error connecting to %s: %s' % (path, str(e)))
    splunk.SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /services/identity_correlation/identity_manager/_reload: The read operation timed out',)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 811, in execute
        log_exception_and_continue=True
      File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 380, in do_run
        self.run(stanzas)
      File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 586, in run
        reload_success = self.reload_settings()
      File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 486, in reload_settings
        logger.error('status="Failed to reload settings" error="%s"', e.get_message_text())
    AttributeError: 'SplunkdConnectionException' object has no attribute 'get_message_text'
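Note that the final AttributeError in the trace is a secondary bug that masks the real problem (a splunkd read timeout): the error handler calls e.get_message_text() on an exception class that does not define it. A minimal sketch of a defensive logging pattern (the exception class below is a stand-in, not Splunk's actual code):

```python
# Stand-in for splunk.SplunkdConnectionException, which lacks get_message_text()
class SplunkdConnectionException(Exception):
    pass

def error_text(exc):
    # Prefer get_message_text() when the exception provides it;
    # otherwise fall back to str(exc) so logging never raises AttributeError
    getter = getattr(exc, "get_message_text", None)
    return getter() if callable(getter) else str(exc)

try:
    raise SplunkdConnectionException("Splunkd daemon is not responding")
except SplunkdConnectionException as e:
    msg = error_text(e)  # -> "Splunkd daemon is not responding"
```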
Hi, I am trying to find my Splunk ID for the purpose of purchasing additional user licenses. I am unable to find any useful info on my dashboard, and the support portal seems to be down. Regards, Alvin
@phanTom yeah, I got confused with the escaped "{}". You are the best!
@meshorer just add more keys & values to the JSON string  {{"new_field1": "{0}", "new_field2": "{1}"}}    
@phanTom thank you so much! Could you also tell me how to add two fields in the same action?
I encountered the same problem after the ES (7.3.0) installation, and what Giuseppe says about the RAM is correct. To avoid the issue, edit the alert "Audit - ES System Requirements" on the ES SH and adjust the RAM value. Splunk expects 32000 MB of RAM in the check, but your system may report 31750 MB for 32 GB of RAM. Regards, Antonio
Hi Marnell, yes, this error is happening on my 2 HFs and deployment servers. All have 12 GB of RAM with 10 GB available.
@meshorer when using the format input or format block for JSON, you need to double the { and } and enclose the value in quotes, such as the below (which I just tested):

    {{"new_field": "{0}"}}

Note that you don't need to double the braces on {0}, as it's a replacement element, but the actual JSON braces need escaping in this way, even with nested JSON like the below:

    {{"new_field": {{"sub_field": "{0}"}}}}

-- Hope this helps! If so please mark as a solution for future SOARers. Happy SOARing! --
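The doubling rule matches Python's str.format escaping, which (as an assumption here) is what the SOAR format block applies under the hood; a quick sketch of how the doubled braces survive formatting while {0} is substituted:

```python
# Doubled braces are emitted as literal braces by str.format; {0} is replaced
template = '{{"new_field": "{0}"}}'
print(template.format("value1"))   # {"new_field": "value1"}

# The same rule applies with nested JSON: every literal brace is doubled
nested = '{{"new_field": {{"sub_field": "{0}"}}}}'
print(nested.format("value2"))     # {"new_field": {"sub_field": "value2"}}
```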
Hello all! I am trying to add a field to an artifact with the "update artifact" action (phantom app). I am trying to add a message parameter as the value of the cef_json field, for example: {"new_field": {0}} but unfortunately I get "key_error" and the action fails. How can I solve it?
@ryanaa Line breaking is not possible with the universal forwarder; the indexer or HF is responsible for that. The EVENT_BREAKER setting is the only one that functions on a UF; however, it only instructs the UF to identify event boundaries so that it delivers whole events to indexers. Try applying these settings on a heavy forwarder or indexer. If this reply helps you, an upvote and "Accept as Solution" is appreciated.
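As a sketch of how that split might look (sourcetype name and regex are placeholders, not taken from this thread), line breaking goes in props.conf on the indexer or heavy forwarder, with only the event-boundary hint on the UF:

```ini
# props.conf on the indexer / heavy forwarder (hypothetical sourcetype)
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
# break before a timestamp like 2024-03-14 10:30:22
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} )

# props.conf on the universal forwarder: only event-boundary hints are honored
[my:custom:sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} )
```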
Where do we have to run the above query?