All Topics
Hi All, We are rolling out the Splunk Universal Forwarder at the moment, which is going well, but I'm now looking at getting it installed on our Citrix infrastructure. In our environment we have "Golden Images" where we make changes. Once published (using PVS), the new image is deployed to the Citrix servers in that specific delivery group. When the (non-persistent) servers in that group perform their nightly reboots, they pick up the golden image via PVS.

Using the clone-prep command works, and the new client comes through in Forwarder Management without any issues, which I'm happy with. However, because these servers reboot every night, I'm finding that duplicate entries for the same servers are created when the reboot completes and Splunk connects to the deployment server. I presume this is because the GUID changes every time these servers reboot. Is there a way to ensure the same GUID is reused for a given hostname, so that duplicate records are avoided in the Forwarder Management console? Or is there an option somewhere for Splunk to identify a duplicate hostname and remove it automatically?

For example, this is how it works:
SERVERMAIN01 - Citrix maintenance server where golden images are attached and changes can be made.
SERVERAPP01 - Application server which picks up the golden image (non-persistent) and is rebooted nightly.
SERVERAPP02 - Application server which picks up the golden image (non-persistent) and is rebooted nightly.
SERVERAPP03 - Application server which picks up the golden image (non-persistent) and is rebooted nightly.

So essentially I'm getting duplicate clients in Forwarder Management for SERVERAPP01/02/03 every night, which will just build up over time unless I manually intervene, which takes up my time. Hope this all makes sense and someone can point me in the right direction; I've searched around for a while and can't find any posts about this. Cheers,
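A possible direction, hedged since it depends on how the PVS write cache is set up: the forwarder's GUID lives in $SPLUNK_HOME/etc/instance.cfg, so if that file is regenerated from a freshly streamed golden image on every boot, each reboot produces a new GUID and a new deployment client record. A minimal sketch of the file in question (the guid value below is a placeholder):

    # $SPLUNK_HOME/etc/instance.cfg -- identifies this client to the deployment server
    [general]
    guid = 01234567-89ab-cdef-0123-456789abcdef

If that file can be preserved per machine across reboots (for example on a persistent drive), the hostname should keep the same GUID, and Forwarder Management should stop accumulating duplicates.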
Currently, we are using the ITSI Module along with the Splunk_TA_snow add-on to create incidents in ServiceNow, and this is working as expected. We now have a new requirement to create TASKs along with the incidents. We went through the ServiceNow scripts and the documentation and couldn't find anything that could help us. My questions are: 1. Does this feature exist within the current scope of the add-on? 2. If not, can it be customized?
Hi Guys, in this case statement I am getting JobType values, but I am not getting a Status value. I already mentioned the keyword above in the query, so why am I not getting it?

index="mulesoft" applicationName="s-concur-api" environment=DEV timestamp ("onDemand Flow for concur Expense Report file with FileID Started" OR "Exchange Rates Scheduler process started" OR "Exchange Rates Process Completed. File successfully sent to Concur")
| transaction correlationId
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as TracePoint content.payload.TargetFileName as TargetFileName
| eval JobType=case(like('message',"%onDemand Flow for concur Expense Report file with FileID Started%"),"OnDemand", like('message',"%Exchange Rates Scheduler process started%"),"Scheduled", true(),"Unknown")
| eval Status=case(like('message',"Exchange Rates Process Completed. File sucessfully sent to Concur"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR")
| table JobType Status
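A guess at the cause, not verified against the data: like() matches literally unless the pattern contains % wildcards, and the Status branch both omits them and spells "sucessfully" differently from the event text quoted at the top of the query. A sketch of that eval with the pattern wildcarded and the spelling aligned:

    | eval Status=case(like('message',"%Exchange Rates Process Completed. File successfully sent to Concur%"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR", true(),"UNKNOWN")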
We had PS create a report, but I can't seem to figure out what setting they used to show a time-based chart without a time-based command. They didn't use a dashboard; the graphic only shows on the report. I want the ability to do a similar type of visualization, but I can't figure out what setting causes the visual output.
How can I tell whether the SDK has been initialized in React Native?
<row>
  <panel depends="$tok_tab_1$">
    <table>
      <title>Alerts Fired</title>
      <search>
        <query>
          index=_audit action=alert_fired
          | rename ss_name AS Alert
          | stats latest(_time) AS "Event_Time" sparkline AS "Alerts Per Day" count AS "Times Fired" first(sid) AS sid by Alert
          | eval Event_Time=strftime(Event_Time,"%m/%d/%y %I:%M:%S %P")
          | rename Event_Time AS "Last Fired"
          | sort -"Times Fired"
        </query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <fields>Alert, "Last Fired", "Times Fired", "Alerts Per Day"</fields>
      <option name="count">10</option>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">cell</option>
      <option name="rowNumbers">false</option>
      <option name="wrap">true</option>
      <drilldown>
        <set token="sid">$row.sid$</set>
        <unset token="tok_tab_1"></unset>
        <set token="tok_tab_2">active</set>
        <set token="tok_display_dd"></set>
        <set token="Alert">$row.Alert$</set>
        <link target="_blank">search?sid=$row.sid$</link>
      </drilldown>
    </table>
  </panel>
</row>
<row>
  <panel depends="$tok_tab_2$">
    <table>
      <title>$Alert$</title>
      <search>
        <query>| search?sid=$sid$</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>

In the code above, the line below works correctly, opening a new search tab with the alert's search query:

<link target="_blank">search?sid=$row.sid$</link>

I would like to know how to get this same functionality, but within a token, so I can keep it on the same page within another table.
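One possible way to get the same result inside the page, hedged since it assumes the fired alert's search artifact is still available on the search head: search?sid=... is only meaningful as a link URL, but the loadjob command can pull a completed job's results into a panel by its sid token. A sketch of the second panel's query:

    <query>| loadjob $sid$</query>

With this, the earliest/latest elements can likely be dropped, since loadjob returns the artifact's stored results rather than re-running the search.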
Hi @richgalloway, good day! How do we fix the vulnerabilities in Splunk? Please guide me with an example. Thanks
How to Use Logs from Splunk Platform in Splunk Observability

Logs play a critical role in identifying why there is a problem and, when combined with metrics and traces, help build strongly observable systems. For that reason, ITOps and engineering teams need a robust, well-rounded logging tool to perform troubleshooting and monitoring successfully across any environment. However, they often have to deal with tool sprawl that leads to a broken troubleshooting workflow and longer time to resolution, or overly complicated solutions that require coding or infrastructure overhead. In this webinar, find out how to combine the Splunk Platform's power with Splunk Observability Cloud. You'll learn how easily you can extend logs from Splunk Cloud Platform or Splunk Enterprise into Splunk Observability Cloud, a solution purpose-built for hybrid troubleshooting. You'll also discover how seamless it is to correlate logs with traces and metrics in a single point-and-click interface for improved visibility of your environment.

The Convergence and Power of Observability in Modern Software Development: Understand the convergence of Observability as a powerful group of different parts that work together to improve business resiliency and solve operational inefficiency.
Maximizing the Power of Splunk Cloud and Observability Cloud Integration: Learn how to seamlessly connect and leverage the combined benefits of Splunk Cloud and Observability Cloud for streamlined log analytics and real-time monitoring.
Understanding Log Observer Connect in Splunk Cloud: Explore the process of setting up Log Observer Connect between Splunk Observability Cloud and Splunk Cloud, including prerequisite steps and role-based access control.
Automating Network-Level Connection for Observability Cloud and Splunk Cloud: Learn how to automate the secure network-level connection between Observability Cloud and Splunk Cloud for seamless data access and troubleshooting.

Want to see more? Watch the Full Webinar

Key Takeaways:
How to centralize your log management across ITOps and engineering use cases
Enjoy a no-code, intuitive interface to easily explore log data
Combine logs with Infrastructure Monitoring (metrics) and APM (traces) for faster root cause analysis

Speakers:
Joanna Zouhour, Product Marketing Manager Observability, Splunk
Subu Baskaran, Principal Product Manager, Splunk
Hello, looking for some real guidance here. We just implemented Splunk with an implementation team. We are pulling notables out to send to our case management product and then closing the notable (this way we only search for open notables to send, and if for some reason one doesn't send, it doesn't close, so it can be attempted again).

We are having to add a | head 1 to this search so that the updatenotable command knows which notable to update and set to closed (not having the head command caused issues updating the notable to closed; seeing, say, 5 notables and then trying to update became too confusing for Splunk). This has forced us to make this a real-time search (if we get 10 notables at the same time, we don't want to wait 10 minutes for those events to get over to us). I am going to provide some of the SPL and see if anyone knows a better way; we have been waiting 4 months for an answer from Splunk on this.

`notable`
| where (status==1 AND notable_xref_id!="")
Some eval commands and table
| head 1
| sendalert XXXX param.X_instance_id=X param.alert_mode="X" param.unique_id_field="" param.case_template="X" param.type="alert" param.source="splunk" param.timestamp_field="" param.title=X param.description=X param.tags="X" param.scope=0 param.severity=X param.tlp=X param.pap=X
| table status event_id
| eval status=5
| updatenotable

Has anyone attempted to search the notable index, pull multiple events, and update the notables in that same search, with successful results for multiple entries?
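A sketch of one commonly suggested workaround, untested against this particular sendalert integration: wrap the per-notable action in a map command so each result row is handled by its own subsearch, which removes the need for | head 1 (and therefore the real-time schedule). The XXXX parameters are the same placeholders as above:

    `notable`
    | where (status==1 AND notable_xref_id!="")
    | table event_id
    | map maxsearches=50 search="| makeresults | eval event_id=\"$event_id$\" | sendalert XXXX param.type=\"alert\" param.source=\"splunk\" | eval status=5 | updatenotable"

Note that map re-runs the quoted search once per row, so throughput and search-quota implications are worth checking before scheduling it.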
I have two sourcetypes containing login information and user information.
Sourcetype1: login information (useful parameters: UserId, status); here Id = accountId.
Sourcetype2: user information (useful parameters: username, Id); here Id = userId.
Both sourcetypes contain the parameter Id, but it refers to different information. I want to get a list/table with the number of logins and the result for each user, mapping login data to user data: UserId (Sourcetype1) = Id (Sourcetype2).
Example:
username     status      count
aa@aa.aa     success     3
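A minimal sketch of one way to do this, with the sourcetype and field names assumed from the description above: normalize the two different meanings of Id into one join key, then aggregate per user:

    (sourcetype=sourcetype1) OR (sourcetype=sourcetype2)
    | eval joinkey=if(sourcetype="sourcetype1", UserId, Id)
    | stats values(username) AS username values(status) AS status count(eval(sourcetype="sourcetype1")) AS count BY joinkey
    | table username status count

The count(eval(...)) counts only the login events, so the user-record events don't inflate the login count.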
Hi All, as AppDynamics announced that it will stop accepting network connections using the TLS 1.0 and 1.1 protocols from April 1 onwards: is there an option to get an extension from AppDynamics for particular servers that are still using TLS 1.0 and 1.1? We still have some servers that use the older TLS versions. Regards, Fadil
I have a Splunk Universal Forwarder which is forwarding a 1 GB log file to a Splunk indexer. The problem I am facing is that ingestion is very slow (about 100K log entries per minute). I have tried setting parallelIngestionPipelines = 2 for both the indexer and the forwarder, but to no avail.

Below are the stats for the containers running the indexer and the forwarder:

CONTAINER_ID  NAME                       CPU %   MEM USAGE / LIMIT    MEM %  NET I/O         BLOCK I/O    PIDS
ecb272b9ca6b  tracing-splunk-1           12.15%  260.8MiB / 7.674GiB  3.32%  366MB / 1.85MB  0B / 1.01GB  239
0ac17f935889  tracing-splunkforwarder-1  0.70%   68.22MiB / 7.674GiB  0.87%  986kB / 312MB   0B / 18.2MB  65

We are running these in Docker containers. My team and I are pretty new to the Splunk ecosystem. Can someone please help us optimize the ingestion of logs?
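One common cause worth ruling out first, an assumption since the forwarder's limits.conf isn't shown: the Universal Forwarder ships with a default throughput cap of 256 KBps, which throttles exactly this kind of bulk file ingestion. A minimal sketch for $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder (restart required):

    [thruput]
    # 0 removes the cap entirely; a concrete KBps value also works
    maxKBps = 0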
I am building a dashboard with the new dashboard builder, and I have a dynamic dropdown which returns these values:

timerange, rangeStart, rangeEnd, date
2024-03-07T09:10:23/2024-03-07T23:34:39  2024-03-07T09:10:23  2024-03-07T23:34:39  07/03/24-07/03/24
2024-03-08T19:41:25/2024-03-08T23:28:54  2024-03-08T19:41:25  2024-03-08T23:28:54  08/03/24-08/03/24
2024-03-11T19:36:52/2024-03-11T23:19:36  2024-03-11T19:36:52  2024-03-11T23:19:36  11/03/24-11/03/24

These ranges can span multiple days. I use the date column as the label in the dropdown, which works fine. My problem is that I want to use rangeStart and rangeEnd as the earliest and latest times for my graphs. My dropdown config looks like this:

{
    "options": {
        "items": ">frame(label, value, additional_value) | prepend(formattedStatics) | objects()",
        "token": "testrun",
        "selectFirstSearchResult": true
    },
    "title": "Testrun",
    "type": "input.dropdown",
    "dataSources": {
        "primary": "ds_w86GnMtx"
    },
    "context": {
        "formattedConfig": {
            "number": {
                "prefix": ""
            }
        },
        "formattedStatics": ">statics | formatByType(formattedConfig)",
        "statics": [],
        "label": ">primary | seriesByName(\"date\") | renameSeries(\"label\") | formatByType(formattedConfig)",
        "value": ">primary | seriesByName(\"rangeStart\") | renameSeries(\"value\") | formatByType(formattedConfig)",
        "additional_value": ">primary | seriesByName(\"rangeEnd\") | renameSeries(\"additional_value\") | formatByType(formattedConfig)"
    }
}

The token name for the dropdown is testrun. My query config for the graph looks like this:

{
    "type": "ds.search",
    "options": {
        "query": "QUERY",
        "queryParameters": {
            "earliest": "$testrun$rangeStart$",
            "latest": "$testrun$rangeEnd$"
        },
        "enableSmartSources": true
    },
    "name": "cool graph"
}

It seems the token $testrun$ itself returns the rangeStart, but $testrun$rangeStart$ and $testrun$rangeEnd$ don't work. Is it even possible for the dropdown to return multiple values like that? If not, is there a way to use the timerange from above and split it in the middle to get earliest and latest?

"earliest": "$testrun.timerange.split(\"/\")[0].strptime('%Y-%m-%dT%H:%M:%S')$",
"latest": "$testrun.timerange.split(\"/\")[1].strptime('%Y-%m-%dT%H:%M:%S')$"

I also tried this in different ways, which I couldn't get to work either. The error I am getting is always "invalid earliest_time".
Hello, we have around 10K events per hour in the _internal index from a Splunk UF 9.1.2 installed on a Windows 10 22H2 machine (build 19045.3930). I know the problem is that the Printer service is disabled; the question is why the Splunk UF is not checking WinPrintMon every 600 seconds as per inputs.conf.

Here seems to be the reason. Why is it ignoring the interval parameter?

03-14-2024 11:02:02.900 +0100 INFO ExecProcessor [4212 ExecProcessor] - Ignoring parameter "interval" for modular input "WinPrintMon" when scheduling the runtime for script=""C:\Program Files\SplunkUniversalForwarder\bin\scripts\splunk-winprintmon.path"". This means potentially Splunk won't be restarting it in case it gets terminated.

Here are the logs in the _internal index (around 180 per minute, so 3 per second):

03-14-2024 10:30:23.470 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:23.470 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:22.932 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:22.707 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.
03-14-2024 10:30:22.407 +0100 ERROR ExecProcessor [7088 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winprintmon.exe"" splunk-winPrintMon - monitorHost::ProcessRefresh: Failed ProcessRefresh: error = '0x800706ba'. Restart.

Here is inputs.conf:

###### Print monitoring ######
[WinPrintMon://printer]
type = printer
interval = 600
baseline = 1
disabled = 0
index = idx_xxxx_windows

[WinPrintMon://job]
type = job
interval = 600
baseline = 1
disabled = 0
index = idx_xxxx_windows

[WinPrintMon://driver]
type = driver
interval = 600
baseline = 1
disabled = 0
index = idx_xxxx_windows

[WinPrintMon://port]
type = port
interval = 600
baseline = 1
disabled = 0
index = idx_xxxx_windows

Thanks a lot,
Edaordo
Hello Splunkers! I've encountered challenges while attempting to connect Notion logs to our Splunk instance. Here's what I've tried: inserting the HEC URL with a public IP on our Splunk on-premise setup, and activating the HEC URL and applying it to our Splunk Cloud trial. Unfortunately, both methods failed to establish the connection. Has anyone successfully connected Notion logs with Splunk, either on-premise or through the Splunk Cloud trial? Thank you for your attention and support.
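One way to narrow down where the connection breaks, sketched with placeholder host and token and assuming HEC listens on the default port 8088: confirm the HEC endpoint accepts events directly before pointing Notion at it.

    curl -k "https://<your-splunk-host>:8088/services/collector/event" \
      -H "Authorization: Splunk <your-hec-token>" \
      -d '{"event": "hec connectivity test", "sourcetype": "manual"}'

A {"text":"Success","code":0} response means HEC itself is fine and the problem sits between Notion and the endpoint (reachability of the public IP, TLS, or the URL format Notion expects).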
Hello! Since 7.3.0 I'm seeing the reload process for assets and identities failing frequently. Any ideas?

ERROR pid=20559 tid=MainThread file=base_modinput.py:execute:820 | Execution failed: 'SplunkdConnectionException' object has no attribute 'get_message_text'
Traceback (most recent call last):
  File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 601, in simpleRequest
    serverResponse, serverContent = h.request(uri, method, headers=headers, body=payload)
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1710, in request
    conn, authority, uri, request_uri, method, body, headers, redirections, cachekey,
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1425, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/app/splunk/lib/python3.7/site-packages/httplib2/__init__.py", line 1377, in _conn_request
    response = conn.getresponse()
  File "/app/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
    response.begin()
  File "/app/splunk/lib/python3.7/http/client.py", line 319, in begin
    version, status, reason = self._read_status()
  File "/app/splunk/lib/python3.7/http/client.py", line 280, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/app/splunk/lib/python3.7/socket.py", line 589, in readinto
    return self._sock.recv_into(b)
  File "/app/splunk/lib/python3.7/ssl.py", line 1079, in recv_into
    return self.read(nbytes, buffer)
  File "/app/splunk/lib/python3.7/ssl.py", line 937, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 483, in reload_settings
    raiseAllErrors=True
  File "/app/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 613, in simpleRequest
    raise splunk.SplunkdConnectionException('Error connecting to %s: %s' % (path, str(e)))
splunk.SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /services/identity_correlation/identity_manager/_reload: The read operation timed out',)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 811, in execute
    log_exception_and_continue=True
  File "/app/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 380, in do_run
    self.run(stanzas)
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 586, in run
    reload_success = self.reload_settings()
  File "/app/splunk/etc/apps/SA-IdentityManagement/bin/identity_manager.py", line 486, in reload_settings
    logger.error('status="Failed to reload settings" error="%s"', e.get_message_text())
AttributeError: 'SplunkdConnectionException' object has no attribute 'get_message_text'
Hi, I am trying to find my Splunk ID for the purpose of purchasing additional user licenses. I can't find any useful info on my dashboard, and the support portal seems to be down. Regards, Alvin
Hello all! I am trying to add a field to an artifact with the "update artifact" action (Phantom app). I am trying to add a message parameter in the 'value' of the cef_json field, for example: {"new_field": {0}}. Unfortunately, I get a "key_error" and the action fails. How can I solve it?
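A guess at the cause, since the playbook itself isn't shown: if the message parameter expands to a plain string, the unquoted {0} leaves the cef_json value as invalid JSON after substitution. A minimal sketch of the quoted form:

    {"new_field": "{0}"}

If the parameter already expands to a JSON object rather than a string, the unquoted form would be the right one instead.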
After I set up props.conf on the SH, I checked my UF side and restarted it, and the following error appeared:

Checking conf files for problems...
Invalid key in stanza [web:access] in /opt/splunkforwarder/etc/apps/dynasafe_course_demo_ta/local/props.conf, line 3: ENVENT_BREAKER (value: ([\r\n]+)).
Invalid key in stanza [web:secure] in /opt/splunkforwarder/etc/apps/dynasafe_course_demo_ta/local/props.conf, line 6: ENVENT_BREAKER (value: ([\r\n]+)).
Your indexes and inputs configurations are not internally consistent.

What is going on here?
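For comparison, and only a guess at the intent: the key name in the error reads ENVENT_BREAKER, while the props.conf setting for breaking events on a forwarder is spelled EVENT_BREAKER and is normally paired with EVENT_BREAKER_ENABLE. A minimal sketch of the stanzas as usually written:

    [web:access]
    EVENT_BREAKER_ENABLE = true
    EVENT_BREAKER = ([\r\n]+)

    [web:secure]
    EVENT_BREAKER_ENABLE = true
    EVENT_BREAKER = ([\r\n]+)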
I've been running into an issue with a Splunk query we have been using for a long time: I'm seeing the error message "Please select a shorter time duration for your query," even when I'm using a 5-minute time range. I noticed that this error seems to pop up when we use latest=now() in our queries to get the most recent data. However, when I tried the same query with a specific time range, like earliest=-xxh@h latest=-xxh@h, it worked just fine. Any ideas on why latest=now() might not be fetching results as expected? And is there any resolution for working with latest=now()?
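One variant worth trying, a sketch assuming the restriction is triggered by the unsnapped now() endpoint rather than by the actual range length (index name is a placeholder): snap both boundaries so the query runs between two fixed instants:

    index=<your_index> earliest=-5m@m latest=@m

If that works where latest=now() fails, the issue is likely how the platform validates an open-ended endpoint rather than the query itself.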