Hello, I am struggling to figure out how this can be achieved. I need to report on events from an API call in Splunk; however, that API call requires variables from another API call. I have been testing with the Add-On Builder and can make the initial request. I'm seeing the resulting events in Splunk Search, but I can't figure out how to create a secondary API call that uses those fields as variables in the secondary call's args or parameters. I was trying to use the API module because I'm not fluent with scripting. Thanks for any help on this, it is greatly appreciated. Tom
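For context, the chaining itself is straightforward in a scripted/modular input outside the Add-On Builder UI. The sketch below is a minimal, hedged illustration using only the standard library; the endpoint URLs, the `items`/`id` field names, and the query parameter are all made-up placeholders for whatever the real API returns:

```python
import json
import urllib.request

# Hypothetical endpoints -- substitute your API's real URLs.
PRIMARY_URL = "https://api.example.com/v1/items"
SECONDARY_URL = "https://api.example.com/v1/details"

def extract_ids(primary_response):
    """Pull the values the secondary call needs out of the first call's
    JSON response. Assumes a top-level 'items' list whose entries each
    carry an 'id' field -- adjust to your payload."""
    return [item["id"] for item in primary_response.get("items", [])]

def fetch_json(url):
    """GET a URL and parse the body as JSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def collect_events():
    """Make the primary call, then one secondary call per extracted id."""
    primary = fetch_json(PRIMARY_URL)
    events = []
    for item_id in extract_ids(primary):
        # Feed each id from the first response into the second
        # call's query string as a parameter.
        events.append(fetch_json(f"{SECONDARY_URL}?id={item_id}"))
    return events
```

Each dict in `collect_events()`'s result would then be emitted as one Splunk event from the input.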
Hi, some TAs support some kind of HA, e.g. DB Connect, but I think most don't. With DB Connect you can use SHC configuration to manage HA. I'm not sure how well this currently works with TAs in general; it needs some mechanism for distributed checkpoint state, e.g. the KV store. r. Ismo
It’s good to know that all those nodes are independent with respect to buckets. There can be situations where the primary bucket has already been removed, for example, while secondary copies of it still exist on other sites and/or on other nodes within the primary site.
In our current Splunk deployment we have two HFs: one used for DB Connect, the other for the HEC connector and other inputs. The requirement is that if one HF goes down, the other HF can take over all of its functions. So is there a high-availability option for heavy forwarders, or for the DB Connect app?
Usually those underscore indexes are restricted to admin access only. As @PickleRick said, they are reserved for Splunk's own usage, not for regular data. If you need regular users to access them, you must grant that access separately.
Hi @Nraj87, Replication tasks will queue if remote indexers are unavailable, but the cluster generally assumes they are always on and reliably connected. Indexers at all sites remain active participants in the cluster, subject to your replication, search, and forwarding settings.
Is it possible to get each day's first logon event (EventCode=4624) as "logon" and last logoff event (EventCode=4634) as "logoff", and calculate the total duration?
index=win sourcetype="wineventlog" EventCode=4624 OR EventCode=4634
| eval action=case(EventCode=4624, "LOGON", EventCode=4634, "LOGOFF", true(), "ERROR")
| bin _time span=1d
| stats count by _time action user
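One way to get the duration directly is a sketch like the following, which keeps the raw event times, takes per user per day the earliest 4624 and the latest 4634, and subtracts them (the field names first_logon/last_logoff are made up here; verify against your data):
index=win sourcetype="wineventlog" EventCode=4624 OR EventCode=4634
| eval day=strftime(_time, "%Y-%m-%d")
| stats min(eval(if(EventCode=4624, _time, null()))) as first_logon max(eval(if(EventCode=4634, _time, null()))) as last_logoff by day user
| eval duration=tostring(last_logoff - first_logon, "duration")
Note that binning _time before the stats would destroy the actual event timestamps, which is why this sketch derives a separate day field instead.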
Thanks for your response! It seems the workaround proposed in the link is for the file provided by CyberArk, because it does not match the content of the SplunkCIM.xsl file provided by the Splunk TA. Do you know how to apply it to the Splunk application?
Hi @tscroggins / @PickleRick, Thanks for the valuable feedback. One quick question: since Splunk indexer clustering isn't active-passive, how does the data replicate through the bucket life cycle (hot > warm > cold) from site1 to site2 in case of any delay in logs or latency in the network?
Dear All, I would like to introduce a DR site alongside active log ingestion (SH cluster + indexer cluster). Is there a formula or calculator to estimate the bandwidth needed to forward the data from Site1 to Site2?
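A rough back-of-envelope estimate can be sketched as below. Every number in it is an assumption chosen to illustrate the arithmetic; plug in your own ingest rate, the number of bucket copies your replication factor sends to the DR site, and a compression ratio measured from your own buckets:

```python
# Back-of-envelope DR replication bandwidth estimate (all inputs hypothetical).
daily_ingest_gb = 500    # raw data indexed per day
remote_copies = 1        # bucket copies replicated to the DR site
compression = 0.15       # compressed rawdata as a fraction of raw ingest
                         # (budget more if remote copies are searchable,
                         # since index files replicate too)

daily_transfer_gb = daily_ingest_gb * remote_copies * compression
# Average rate if the transfer is spread evenly over 24 hours.
avg_mbit_per_s = daily_transfer_gb * 8 * 1024 / 86400

print(round(daily_transfer_gb, 1), "GB/day,", round(avg_mbit_per_s, 2), "Mbit/s")
```

Real traffic is bursty, so the peak link requirement is well above this average; the sketch only bounds the sustained rate.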
Hello, Could anyone please tell me how I can disable SSL verification for the Add-On Builder? I can't figure out where the parameter is located. Thank you for any help on this one, Tom
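For reference, outside the Add-On Builder UI, disabling verification in plain Python looks like the sketch below (standard library only). This is not the Add-On Builder's own setting, just the underlying mechanism; in AOB-generated code the HTTP helper calls generally accept a verify-style argument instead, and with the requests library the equivalent is `verify=False` per call:

```python
import ssl

# Build an SSL context that skips certificate verification --
# the stdlib equivalent of "disable SSL verification" for one request.
ctx = ssl.create_default_context()
ctx.check_hostname = False        # skip hostname matching
ctx.verify_mode = ssl.CERT_NONE   # skip certificate chain verification

# urllib.request.urlopen(url, context=ctx) would then accept
# self-signed or otherwise unverifiable certificates.
```

Note that skipping verification removes protection against man-in-the-middle attacks, so it belongs in test environments only.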
Using classic dashboards I'm able to have a simple script run on load of the dashboard by adding something like:
<dashboard script="App_Name:script_name.js" version="1.1">
But when I add this to a dashboard created with Dashboard Studio, the script does not run. How do you get a script to run on load of a dashboard that was created with Dashboard Studio?
I believe I have a fix, and I'm curious whether it resolves your issue as well. I'm in close contact with Splunk Support about this, so I'm sure documentation will be coming out shortly. Follow this documentation to enable cgroups v2, reboot, and then disable/re-enable boot-start: https://access.redhat.com/solutions/6898151
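On a RHEL-family host, the sequence described above looks roughly like this (the kernel argument comes from the linked Red Hat solution; adjust $SPLUNK_HOME and the service user to your environment before running anything):
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"
sudo reboot
$SPLUNK_HOME/bin/splunk disable boot-start
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunk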