All Posts



Hi @Nraj87, Replication tasks will queue if remote indexers are unavailable, but it's generally assumed they are always on and reliably connected. Indexers in all sites remain active participants in the cluster, subject to your replication, search, and forwarding settings.
Is it possible to get each day's first logon event (EventCode=4624) as "logon" and the last logoff event (EventCode=4634) as "logoff", and calculate the total duration? index=win sourcetype="wineventlog" EventCode=4624 OR EventCode=4634 | eval action=case(EventCode=4624, "LOGON", EventCode=4634, "LOGOFF", true(), "ERROR") | bin _time span=1d | stats count by _time action user
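One way to get the duration is to separate the logon and logoff timestamps before aggregating. A minimal sketch, assuming the same index, sourcetype, and user field as the search above:

```
index=win sourcetype="wineventlog" (EventCode=4624 OR EventCode=4634)
| eval day=strftime(_time, "%Y-%m-%d")
| eval logon_time=if(EventCode=4624, _time, null()), logoff_time=if(EventCode=4634, _time, null())
| stats min(logon_time) AS first_logon, max(logoff_time) AS last_logoff by day, user
| eval duration=tostring(last_logoff - first_logon, "duration")
```

min() on the logon timestamps gives the first logon of the day, max() on the logoff timestamps gives the last logoff, and the final eval renders the difference as a human-readable duration.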
Thanks for your response! It seems that the workaround proposed in the link is for the file provided by CyberArk, because it does not match the content of the SplunkCIM.xsl file provided by the Splunk TA. Do you know how to apply it to the Splunk application?
Hi @tscroggins / @PickleRick, thanks for the valuable feedback. One quick question: if Splunk indexer clustering isn't active-passive, then how does data replicate through the bucket life cycle (hot > warm > cold) from site1 to site2 in case of any delay in logs or latency in the network?
Dear All, I would like to introduce a DR site along with active log ingestion (SH cluster + indexer cluster). Is there any formula or calculator to estimate the bandwidth needed to forward the data from Site1 to Site2?
Okay, I'll see if removing the _ helps. Thank you.
This still gives me results for only one year (2023).
Hello, sorry, I forgot to mention that I am using the API portion of the Add-On Builder: no scripting, just a direct API connection. Thanks again, Tom
Hello, Could anyone please tell me how I can disable SSL verification for the Add-On Builder? I can't figure out where the parameter is located. Thank you for any help with this one, Tom
Using classic dashboards, I'm able to have a simple script run on load of the dashboard by adding something like: <dashboard script="App_Name:script_name.js" version="1.1"> But when I add this to a dashboard created using Dashboard Studio, the script does not run. How do you get a script to run on load of a dashboard that was created with Dashboard Studio?
Hi AndrewBurnett, Thank you for keeping me updated. I have sent the link to our Linux colleagues and will hear what they think of it. Harry
Restore the $SPLUNK_HOME/etc/system/local/authorize.conf file from your most recent backup and restart Splunk.
I believe I have a fix, and I'm curious whether it resolves your issue as well. I'm in close contact with Splunk Support about this, so I'm sure documentation will be coming out shortly. Follow this documentation to enable cgroups v2, reboot, and then disable/re-enable boot-start: https://access.redhat.com/solutions/6898151
Also, you could consider putting the search in the init block so it isn't in a panel at all.
Panels can be hidden by using the depends attribute with a token that is never set: <panel depends="$neverset$">
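A minimal Simple XML sketch of the pattern above; the token $neverset$ is never defined anywhere in the dashboard, so the panel stays hidden while its search remains available to the dashboard (query and token name are illustrative):

```xml
<row>
  <panel depends="$neverset$">
    <table>
      <search id="hidden_base">
        <query>index=_internal | stats count</query>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </search>
    </table>
  </panel>
</row>
```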
Hi, In splunkd.log I am seeing: TailReader [260668 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log' and in Splunk I am seeing the logs as well. Basically, I want to know what is happening here. This tracker.log file should be under index=_internal, but somehow it is present under index=linux, and in the Linux TA I can see the [linux_audit] sourcetype config under props.conf. What is calling this, as I am not seeing any related input parameter for it? Kind Regards, Rashid
A string in single quotes is treated by Splunk as a field name: substr('message.processingDuration', 1, len('message.processingDuration')-2)
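A small self-contained illustration of the quoting rule; the field name and value here are made up with makeresults, and the substr call strips the trailing two-character "ms" unit:

```
| makeresults
| eval 'message.processingDuration'="123ms"
| eval duration_value=substr('message.processingDuration', 1, len('message.processingDuration')-2)
```

Single quotes reference the value of the field message.processingDuration (the dots would otherwise be parsed as part of an expression), while double quotes would produce the literal string "message.processingDuration".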
Try something like this: index=cls_prod_app appname=Lacerte message="featureperfmetrics" NOT(isinternal="*") taxmodule=$taxmodule$ $hostingprovider$ datapath=* operation=createclient $concurrentusers$ [| makeresults | eval latest=relative_time(now(),"@d") | eval row=mvrange(0,2) | mvexpand row | eval latest=relative_time(latest,"@d-".row."y") | eval earliest=relative_time(latest,"-30d") | eval applicationversion=$applicationversion$-row | table earliest latest applicationversion]
Hi @gcusello With the updated query, I am not able to fetch the data for the current date. Can you please help me add the current date's data as well? Query: index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P) | rex field=TEXT "NIDF=(?<file>[^\\s]+)" | transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True | bin span=1d _time | stats sum(eventcount) AS eventcount BY _time file | append [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P) | rex field=TEXT "NIDF=(?<file>[^\\s]+)" | transaction startswith="PIDZJEA" endswith="IDJO20P" keeporphans=True | bin span=1d _time | stats sum(eventcount) AS eventcount BY _time | eval file="count after PIDZJEA" | table file eventcount _time] | chart sum(eventcount) AS eventcount OVER _time BY file Also, is it possible to have a visual graph showing the following details: IN_per_24h = count of RPWARDA between IDJO20P and PIDZJEA for the day. Out_per_24h = count of SPWARAA + SPWARRA between IDJO20P and PIDZJEA for the day. Backlog = count after PIDZJEA for the day.
@ITWhisperer: Can you please check my last query and help provide a solution?