All Posts


Hi @bawan, good for you, see you next time! Let us know if we can help you more, or, please, accept one answer (even if it's your own) for the other people of the Community. Ciao and happy splunking. Giuseppe

P.S.: Karma Points are appreciated by all the contributors.
Hi @Ryan.Paredez, thanks for the reply. I've configured the async transactions configuration as per the documentation (screenshot attached). However, I still don't see any drilldown option for end-to-end latency for the business transaction. If you look at the image above, the end-to-end latency is 52 sec, but I don't see an option to drill down to investigate why it took 52 seconds and where exactly the time is being spent.
The system has a Splunk forwarder installed.
Hi @mahesh27, try the spath command (https://docs.splunk.com/Documentation/Splunk/9.3.0/SearchReference/Spath). Ciao. Giuseppe
Hi @gcusello, sorry, we have a limitation that prevents us from using that. Is there any other way?
I have around 10 alerts set up in Slack, and I'm trying to find the total number of times each alert triggered in the previous month. I'm using the following:

index="_internal" sourcetype="scheduler" thread_id="AlertNotifier*" NOT (alert_actions="summary_index" OR alert_actions="")
| search savedsearch_name IN.....
| stats count by savedsearch_name
| sort -count

This works and brings up figures for all 10 alerts; however, for some reason they don't seem to be accurate. For example, I know we receive multiple alerts in a day for one particular search query (which is set to fire every 15 mins), so a count of 23 for the previous month just isn't correct. What am I doing wrong?

P.S. I'm a complete newbie here. Thanks in advance!
Hi @elend, yes, you have to rebuild the data model, otherwise the change is applied only to new events. Ciao. Giuseppe
Hi @mahesh27, try adding INDEXED_EXTRACTIONS = JSON to your props.conf. Ciao. Giuseppe
Hi,

Now and again we get an extremely high system load average on the search head. I can't figure out why it is happening, and I have to do a kill -9 -1 and restart to fix it. This means we can't log into the Splunk GUI. When I kill Splunk I see a lot of processes, and after it is dead I can still see splunkd processes on the box and the load average is still high.

Regards,
Robert
Please try:

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)
Hi All,

We have JSON logs where a few events are not parsing properly. When I check the internal logs, they show that the event length exceeded the default TRUNCATE value of 10000 bytes, so I tried increasing TRUNCATE to 40000, but the logs are still not parsing correctly. The event length is around 26000. Props used:

[app:json:logs]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
CHARSET=UTF-8
TIMEPREFIX=\{\"timestamp"\:\"
KV_MODE=json
TRUNCATE=40000
This question has been answered here: Solved: Re: Unanswered question about duplicate forwarders... - Splunk Community
Appreciate it, but... have you actually used this? I can't get it to work (it's in beta now, with zero reviews or ratings). Even its own demos and samples throw errors. Running on RHEL 8, Splunk 9.2.2.
| inputlookup dmc_forwarder_assets.csv
| sort - last_connected hostname
| streamstats count by hostname
| search status=active OR (status=missing AND count=1)
| fields - count
| outputlookup dmc_forwarder_assets.csv
S3SPL Add-On for Splunk enables immediate insight into your data stored in S3 using custom Splunk commands. The source of the data does not matter, as long as it is stored in S3 and can be queried using S3 Select. This includes JSON, CSV, Parquet, and even files written by Splunk Ingest Actions. S3SPL provides the following functionality to Splunk users:

- Query S3 using S3 Select in an ad-hoc fashion using WHERE statements
- Save queries and share them with other users
- Configure queries to manage timestamps automatically based on defined field names
- Configure queries with replacements to adapt queries to the current requirement on the fly
- Create queries and preview results using an interactive workbench

In addition, S3SPL provides an admin section that allows the management of multiple buckets and saved queries. Finally, a comprehensive access control system based on Splunk capabilities and roles allows for granular access control from Splunk to buckets and prefixes within them.
Hello,

I have this:

results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)

dict = json.loads(oneshotsearch_results)  # to get a dict to send data outside Splunk selectively

Error:

TypeError: the JSON object must be str, bytes or bytearray, not ResponseReader

How do I fix this?

Thanks
Use this query to find out which indexes are used by a data model:

| tstats count from datamodel=foo by index
But can you give me a bit more on the Rebuild Forwarder Asset table in the DMC? And do you have an example of how that search would look? I have basically only searched for specific users in Search & Reporting, so any more pointing in the right direction would help. In the interim, I will start looking into this as a solution and work towards it. Appreciate it.
We have configured a health rule in AppDynamics to monitor storage usage across all servers (Hardware Resources|Volumes|/|Used (%)). The rule is set to trigger a Slack notification when root storage exceeds the 80% warning and 90% critical thresholds. The rule violation is correctly detected for all nodes, and two of the VMs are above 90%, but alerts are sent for only one of them. We need assistance in ensuring that alerts are triggered and sent for all affected nodes. Please also see the attached screenshots.