

All Posts

Here's what's in my props.conf. I cannot share logs.

[SUMS]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = (At\s[0-2][0-9]:[0-6][0-9]:[0-6][0-9]\s-\d{4}\s-)
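If it helps to sanity-check the pattern itself, here is a minimal, synthetic test in search (the sample line below is invented purely to match the regex, it is not taken from any real log, and the sourcetype still has to be SUMS on the instance where the stanza is deployed):

| makeresults
| eval _raw="At 13:45:07 -0400 - synthetic sample line"
| rex "(?<breaker>At\s[0-2][0-9]:[0-6][0-9]:[0-6][0-9]\s-\d{4}\s-)"
| table _raw breaker

If breaker comes back populated, the regex matches; whether event breaking actually happens then depends on the stanza reaching the forwarder that reads the data.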
Hi, you could try https://github.com/ryanadler/downloadSplunk to generate a suitable download link. At least it knows the 7.0.x versions. r. Ismo
We use Splunk, and I do know that our SystemOut logs are forwarded to the Splunk indexer. Does anyone have example SPL for searching our indexes for WebSphere SystemOut warnings ("W") and errors ("E")? Thanks. For your reference, here is a link to IBM's WebSphere log interpretation: ibm.com/docs/en/was/8.5.5?topic=SSEQTP_8.5.5/…
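A starting point might look like the following sketch; the index, sourcetype, and rex below are assumptions, since they depend on how your WebSphere data is onboarded (the rex assumes the default SystemOut basic format of [timestamp] threadId shortName eventType message):

index=<your_websphere_index> sourcetype=<your_systemout_sourcetype>
| rex "^\[[^\]]+\]\s+\S+\s+\S+\s+(?<severity>[WE])\s"
| search severity=W OR severity=E
| stats count by severity, source, host

If your add-on already extracts a severity or loglevel field, drop the rex and filter on that field directly.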
Hi, when you look at the source field, it says that this event comes from the OS's auditd, not from Splunk's internal logs. That is why it is in your Linux index. r. Ismo
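If you want to double-check where the events in that index come from, a quick look could be something like this (the index name is a placeholder for whatever your Linux index is called):

index=<your_linux_index> earliest=-24h
| stats count by index, sourcetype, source

Splunk's own internal logs live in the underscore indexes (_internal, _audit, and so on), so an auditd source in a regular index means the data was sent there by your inputs.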
And if you don't have backups (you really should), just add the admin role to your admin user in the authorize.conf file with any text editor. See the authorize.conf spec for how it should be done.
I created a support ticket, and they confirmed that this is a bug that will be fixed in the next release of SSE. However, they could not provide a date for the update and recommended that I downgrade back to 3.7.1. I did so and that worked. I've asked that they update the "Known Issues" list with this bug info.
Try something like this:

index=win sourcetype="wineventlog" (EventCode=4624 OR EventCode=4634)
| bin span=1d _time as day
| stats count min(eval(if(EventCode=4624,_time,null()))) as first_logon max(eval(if(EventCode=4634,_time,null()))) as last_logout by day, user
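To also get the total duration the original question asks about, the same search could be extended roughly like this (same assumed index and sourcetype; the last three lines are only there to make the output readable):

index=win sourcetype="wineventlog" (EventCode=4624 OR EventCode=4634)
| bin span=1d _time as day
| stats min(eval(if(EventCode=4624,_time,null()))) as first_logon max(eval(if(EventCode=4634,_time,null()))) as last_logout by day, user
| eval duration=last_logout-first_logon
| eval duration_readable=tostring(duration, "duration")
| convert ctime(day) ctime(first_logon) ctime(last_logout)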
Hi, I'm not sure if there are any official calculations available. You could make some estimates based on your replication factor and the amount of ingestion and searching. Splunk replicates buckets between sites as soon as events are written to the primary bucket; in fact, with indexer acknowledgment (indexAck) configured, Splunk acknowledges the sending HF/UF only after replication has completed, not before. Searches also use bandwidth depending on your queries: bad queries use more bandwidth, better ones less. r. Ismo
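As a rough, purely hypothetical illustration of the kind of estimate this gives (your ingest volume and site_replication_factor will differ):

500 GB/day ingested, one replicated copy kept on the DR site
500 GB/day ≈ 500 × 8 / 86,400 ≈ 46 Mbit/s sustained, for replication alone

Peak-hour bursts, cross-site search traffic, and retransmissions come on top of that average.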
Hello, I am struggling to figure out how this request can be achieved. I need to report on events from an API call in Splunk; however, the API call requires variables from another API call. I have been testing with the Add-on Builder and can make the initial request. I'm seeing the resulting events in Splunk Search, but I can't figure out how to create a secondary API call that could use those fields as variables in the secondary args or parameters fields. I was trying to use the API module because I'm not fluent at all with scripting. Thanks for any help on this, it is greatly appreciated, Tom
Hi, some TAs support some kind of HA, e.g. DB Connect, but I think most don't. With DB Connect you could use an SHC configuration for managing HA. I'm not sure how well this currently works for TAs in general; it needs some mechanism for keeping a distributed checkpoint state, e.g. the KV store. r. Ismo
It's good to know that all those nodes operate independently as far as buckets are concerned. There could be situations where the primary bucket has already been removed, for example, while secondary copies of it still exist on other sites and/or on other nodes at the primary site.
In our current Splunk deployment we have 2 HFs: one used for DB Connect, the other used for the HEC connector and other inputs. The requirement is that if one HF goes down, the other HF can handle all the functions. So, is there a High Availability option available for the heavy forwarder or for the DB Connect app?
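For the forwarding part, one pattern that is sometimes used (a sketch only; hostnames and port are hypothetical, and it does not by itself cover DB Connect inputs or HEC clients, which need their own failover, e.g. a load balancer in front of HEC) is to let the sending forwarders auto-load-balance across both HFs in outputs.conf, so either HF can take the traffic if the other is down:

[tcpout:hf_group]
server = hf1.example.com:9997, hf2.example.com:9997
autoLBFrequency = 30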
Usually those underscore indexes are restricted to admin access only. As @PickleRick said, they are reserved for Splunk's own usage, not for regular data. If you need to use them as a regular user, you must grant access to them separately.
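For example (a sketch only; the role and index names are hypothetical), per-role index access is granted in authorize.conf:

[role_app_readers]
srchIndexesAllowed = main;_introspection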
What does the Job Inspector say about the time period of your search?
Hi @Nraj87, Replication tasks will queue if remote indexers are unavailable, but it's generally assumed they are always on and reliably connected. Indexers in all sites remain active participants in the cluster subject to your replication, search, and forwarding settings.
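For reference, those cross-site replication and search settings live on the cluster manager, along the lines of this sketch (site names and factors are hypothetical values, not a recommendation):

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2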
Is it possible to get each day's first logon event (EventCode=4624) as "logon" and the last logoff event (EventCode=4634) as "logoff", and calculate the total duration?

index=win sourcetype="wineventlog" EventCode=4624 OR EventCode=4634
| eval action=case(EventCode=4624, "LOGON", EventCode=4634, "LOGOFF", true(), "ERROR")
| bin _time span=1d
| stats count by _time, action, user
Thanks for your response! It seems that the workaround proposed in the link is for the file provided by CyberArk, because it does not match the content of the SplunkCIM.xsl file provided by the Splunk TA. Do you know how to apply it to the Splunk application?
Hi @tscroggins / @PickleRick, thanks for the valuable feedback. One quick question: if Splunk indexer clustering isn't active-passive, then how does the data get replicated through the bucket life cycle (hot > warm > cold) from site1 to site2 in case of any delay in logs or latency in the network?
Dear All, I would like to introduce a DR site alongside active log ingestion (SH cluster + indexer cluster). Is there any formula or calculator to estimate the bandwidth needed to forward the data from Site1 to Site2?
Okay I'll see if removing the _ helps. Thank you.