I have around 10 alerts set up that send to Slack, and I'm trying to find the total number of times each alert was triggered in the previous month. I'm using the following:
index="_internal" sourcetype="scheduler" thread_id="AlertNotifier*" NOT (alert_actions="summary_index" OR alert_actions="") | search savedsearch_name IN.....
| stats count by savedsearch_name | sort -count
This works and brings up figures for all 10 alerts, but for some reason they don't seem to be accurate. For example, I know we receive multiple alerts a day from one particular search (which is set to fire every 15 minutes), so a count of 23 for the previous month just isn't correct. What am I doing wrong? P.S. I'm a complete newbie here. Thanks in advance!
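One way to cross-check those numbers is to count from the scheduler's own status fields and pin the time range to the previous calendar month. This is only a sketch; the scheduler's status and alert_actions values can vary with how the Slack action is configured, so verify the field values against a few raw events first:
index=_internal sourcetype=scheduler status=success alert_actions=* earliest=-1mon@mon latest=@mon
| stats count by savedsearch_name, alert_actions
| sort - count
If the two searches disagree, the difference is often the time range (the picker has to actually cover the previous month) or scheduler runs whose alert_actions value doesn't match the original filter.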
Hi
Now and again we get an extremely high system load average on the search head.
I can't figure out why it is happening, and I have to do a kill -9 -1 and restart to fix it.
This means we can't log into the Splunk GUI.
When I kill Splunk, I see a lot of processes.
After it is dead, I can still see splunkd processes on the box and the load average is still high.
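If the _introspection index is enabled on that search head, a per-process resource usage search along these lines can show what is driving the load before resorting to kill -9; this is only a sketch, and <your_search_head> is a placeholder:
index=_introspection host=<your_search_head> sourcetype=splunk_resource_usage component=PerProcess
| timechart span=5m limit=10 sum(data.pct_cpu) AS cpu_pct by data.process
The Monitoring Console's Resource Usage dashboards are built on the same data, if a UI view is easier to reach.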
Regards
Robert
Hi All, We have JSON logs where a few events are not parsing properly. When I check the internal logs, they show that the default truncate limit of 10000 bytes is being exceeded, so I tried increasing TRUNCATE to 40000, but the logs are still not parsing correctly.
The event length is around 26000 bytes.
props used:
[app:json:logs]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
CHARSET=UTF-8
TIME_PREFIX=\{"timestamp":"
KV_MODE=json
TRUNCATE=40000
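One way to check whether the higher limit is actually being applied, and on which instance, is to look at the truncation warnings themselves. This is just a sketch, assuming the warning events carry the usual data_sourcetype field:
index=_internal sourcetype=splunkd component=LineBreakingProcessor "Truncating line because limit of"
| rex "limit of (?<limit>\d+) bytes"
| stats count by host, data_sourcetype, limit
If the reported limit is still 10000, the stanza isn't being read where the data is parsed (typically the indexers or a heavy forwarder), which would explain why raising TRUNCATE had no effect.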
Appreciate it, but... Have you actually used this? I can't get it to work (it's in beta now, with zero reviews or ratings). Even its own demos and samples throw errors. Running on RHEL 8, Splunk 9.2.2.
The S3SPL Add-On for Splunk makes data stored in S3 available for immediate insight using custom Splunk commands. The source of the data does not matter, as long as it is stored in S3 and can be queried using S3 Select. This includes JSON, CSV, Parquet, and even files written by Splunk Ingest Actions. S3SPL provides the following functionality to Splunk users:
- Query S3 using S3 Select in an ad-hoc fashion using WHERE statements
- Save queries and share them with other users
- Configure queries to manage timestamps automatically based on defined field names
- Configure queries with replacements that adapt them to the current requirement on the fly
- Create queries and preview results using an interactive workbench
In addition, S3SPL provides an admin section that allows the management of multiple buckets and saved queries. Finally, a comprehensive access control system based on Splunk capabilities and roles allows granular access control from Splunk to buckets and the prefixes within them.
Hello, I have this:
results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)
dict = json.loads(oneshotsearch_results) # to get dict to send data outside splunk selectively
Error:
TypeError: the JSON object must be str, bytes or bytearray, not ResponseReader
How do I fix this? Thanks
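For what it's worth, the TypeError comes from handing the raw ResponseReader to json.loads; the documented splunk-sdk pattern is to iterate JSONResultsReader instead. A minimal sketch with placeholder connection details and an example query (names mirror the snippet above where possible, and output_mode="json" is required for JSONResultsReader):
import json
from splunklib import client, results

# Placeholder connection details - substitute your own
service = client.connect(host="localhost", port=8089, username="admin", password="changeme")

searchquery_oneshot = "search index=_internal | head 5"   # example query
kwargs_oneshot = {"output_mode": "json"}                  # JSONResultsReader expects JSON output

# A different variable name keeps the splunklib 'results' module from being shadowed
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)

# Iterate the reader instead of calling json.loads() on the ResponseReader
rows = []
for item in results.JSONResultsReader(oneshotsearch_results):
    if isinstance(item, dict):        # result rows arrive as dicts
        rows.append(item)
    # results.Message objects (diagnostics) are skipped here

payload = json.dumps(rows)            # a JSON string you can send outside Splunk
print(payload)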
But can you give me a bit more on the Rebuild Forwarder Asset Table option in the DMC? And do you maybe have an idea of how that search would look? I have basically only run general searches for specific users in the Search & Reporting app, so any further pointers in the right direction would help. In the interim, I will start looking into this as a solution and work towards it. Appreciate it.
We have configured a health rule in AppDynamics to monitor storage usage across all servers (Hardware Resources|Volumes|/|Used (%)). The rule is set to trigger a Slack notification when root storage exceeds the 80% warning and 90% critical thresholds. The rule violation is correctly detected for all nodes, and two of the VMs are above 90%, but an alert is sent for only one of them. We need assistance in ensuring that alerts are triggered and sent for all affected nodes. Please also see the attached screenshots.
You have two options:
1. Rebuild the Forwarder Asset Table in the DMC.
2. Create a custom search to identify duplicate hostnames and remove the entries for the missing forwarders from the lookup file dmc_forwarder_assets.csv, which is located in the splunk_monitoring_console app.
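For option 2, something along these lines could work. This is only a sketch, and the field names (hostname, last_connected) are assumptions, so run the inputlookup line on its own first to confirm the fields and preview what would be removed:
| inputlookup dmc_forwarder_assets.csv
| sort 0 hostname, -last_connected
| dedup hostname
| outputlookup dmc_forwarder_assets.csv
This keeps only the most recently connected row per hostname and overwrites the lookup in place, so take a copy of the CSV first and run the search from the splunk_monitoring_console app context so the correct copy is updated.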
Hi @bowesmana , I meant to ask which part of the JS file determines the JS error in the UI. I have other files as well that provide different functionality and do not include the util/console part, yet they still throw the same error. How do I identify those parts in the JS files? Regards, Pravin
Here is an old post from 2019 that was unanswered: https://community.splunk.com/t5/Deployment-Architecture/Remove-missing-duplicate-forwarders-from-forwarder-managment/m-p/492211 I am running into the same issue on Splunk Enterprise 9.2.2. Basically we had maybe 400+ machines on version 9.0.10. After upgrading to the newer splunkforwarder 9.2.2, there are duplicate instances of the computers under Forwarder Management, pushing our client count to above 800. How can you remove the duplicates without going through each duplicate and clicking Delete Record? Thanks