All Posts


Hello, do you see any corrupt buckets? Restarting the CM has sometimes cleared stuck bucket-fixup tasks in the past. Refer: https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/Anomalousbuckets
Thanks for the suggestion, but I don't see a magnifying glass on any of the panels on the overview screen like on normal dashboard panels. I'm logged in as admin.
I have the same issue. If I enforce the HttpOnly setting for all cookies, Splunk no longer works correctly.
I have an Enterprise deployment with multiple servers. All licensing is handled by a license manager. One of my indexers gives the warning "Your license is expired. Please login as an administrator to update the license." When I log in, licensing looks fine: it points to the correct address for the license manager, the last successful contact was less than a minute ago, and under Messages it says "No licensing alerts". Under "Show all configuration details" it lists recent successful contacts and the license keys in use. That's about as far as I can go, because after 30 seconds my session gets kicked back to the login prompt with a message that my session has expired.

So I have one server out of a larger deployment that seems to think it doesn't have a license. All indications are that it does, but it still behaves like it doesn't.
I'm creating a Splunk multisite cluster. The configuration is done as the documentation shows, including the cluster node. All peers show up, report that they are up, and are happily replicating. But for whatever reason the search factor and replication factor are not met, and the notification about the unhealthy system tells me it's the cluster node. Why is that? How can I check what is wrong with it? If I look up the cluster status (via CLI), it all seems fine.
Thanks a lot !
Thawed path is the directory in which you'd have to manually put the data to be thawed (or where Splunk puts it after thawing; I don't remember, since I don't generally thaw buckets). It doesn't have anything to do with the freezing process. If you don't define a frozen path (and freeze script), the data will get deleted when rolled to frozen. And be aware of what @gcusello said: data is rolled on a per-bucket basis, which means that the "resolution" of the bucket rolling process depends on the contents of the buckets. Data is rolled to frozen when the _newest_ event in a bucket is older than the retention period. That can be important, especially in the case of quarantine buckets.
Your input data is definitely _not_ in the same order as shown in the opening post.
Hi @avoelk, you don't need to allocate any disk space: the thawed path is only a mount point that you can use to recover frozen buckets. Even if you don't need it, you must still define the mount point (thawedPath) in indexes.conf, but no disk space has to be allocated for it. Ciao. Giuseppe
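A minimal indexes.conf sketch of that setup (the index name and paths here are hypothetical, adjust to your environment):

```
# Hypothetical stanza: delete (don't archive) data after 180 days.
# thawedPath is required by validation but consumes no space unless you thaw buckets.
[someidx]
homePath   = $SPLUNK_DB/someidx/db
coldPath   = $SPLUNK_DB/someidx/colddb
thawedPath = $SPLUNK_DB/someidx/thaweddb
# 180 days in seconds; with no coldToFrozenDir/coldToFrozenScript set,
# buckets are deleted when they roll to frozen.
frozenTimePeriodInSecs = 15552000
```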
Unfortunately your script does not provide the correct overview.

I want to know how long a machine has had a "Full" status. I can calculate that by taking the first "Full" status and the first "Ready" status together; the difference is the duration. For example:

Full --> this one
Full --> skip
Ready --> this one
Full --> this one
Ready --> this one
Full --> this one
Full --> skip
Ready --> this one
Ready --> skip
Hi Giuseppe, and thanks for the swift answer! But how does it behave if I don't want to allocate specific disk space for thawed/frozen files? So there is no way to just have a retention of 180 days after which the data is deleted, or did I misunderstand part of your answer?

Kind regards
Hi @avoelk, yes, it's a required parameter even if you don't want to restore thawed buckets. Remember that in Splunk the retention period is managed at the bucket level; in other words, a bucket is deleted (or frozen) only when its latest event is older than the retention period. This means you will surely have events in your buckets that are older than the retention period, because they sit in a bucket together with younger events. Ciao. Giuseppe
Hi @msarkaus, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
I'm trying to configure indexes.conf in such a way that data retention is exactly 180 days, after which the data does NOT get frozen but gets deleted.

I've tried to set it with frozenTimePeriodInSecs = 15552000 but now I get the following error:

Validation errors are present in the bundle. Errors=peer=XXX, stanza=someidx Required parameter=thawedPath not configured;

So I HAVE TO put a thawed path in it even though I don't want to freeze anything? How does that make sense?

Kind regards, and thanks for a clarification!
The most significant memory saving could come from doing | fields - _raw. If you already have your fields parsed, there's no need to drag the whole huge raw event along.
OK. So you want to have a "transaction" consisting of any sequence of Full events ending with a single Ready event, and any Ready events not preceded by a Full event are not part of any transaction and should be discarded?

| streamstats current=f window=1 values(ReasonCode) as LastReasonCode
| where ReasonCode="Full" OR LastReasonCode="Full" OR isnull(LastReasonCode)

This should filter out the events which are Ready and preceded by Ready. Now we can mark the beginning of each of those "streaks":

| eval bump=if(ReasonCode="Full" AND LastReasonCode="Ready",1,0)

And we can find which transaction is which:

| streamstats current=t sum(bump) as tran_id

Now you have a unique transaction ID which you can use to find the first and last timestamps:

| stats min(_time) as earliest max(_time) as latest by tran_id
| eval duration=latest-earliest
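To illustrate with a hypothetical event stream (oldest first), the pipeline above would assign transaction IDs roughly like this:

```
ReasonCode  LastReasonCode  kept?  bump  tran_id
Full        (null)          yes    0     0
Full        Full            yes    0     0
Ready       Full            yes    0     0
Full        Ready           yes    1     1
Ready       Full            yes    0     1
Ready       Ready           no     -     -
```

Each tran_id then groups one Full...Ready streak, and the final stats gives that streak's duration.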
Try removing/reducing unneeded fields before doing the mvexpand to reduce the memory requirement.
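A sketch of that ordering (the multivalue field name here is hypothetical):

```
<your search>
| fields - _raw
| mvexpand testcase
```

Dropping _raw and any other heavy, unneeded fields first means mvexpand only has to copy the small fields onto each expanded row.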
I believe something was wrong with the way I installed Splunk Enterprise, since I have a MacBook M1 Pro. Initially I used the .dmg installation, but after I switched to the .tgz installation by following this tutorial, it is working just fine.
That's why spath has both input and output options. And yes, you need to mvexpand your results to make each testcase a separate row.
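For example, something along these lines (the json_payload field and testcases path are hypothetical, adjust to your data):

```
| spath input=json_payload path=testcases{} output=testcase
| mvexpand testcase
| spath input=testcase
```

The first spath pulls the array into a multivalue field, mvexpand turns each element into its own row, and the second spath parses each element's fields.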
Thanks, this search gives only 3 rows, but I want to have an overview like (TS: timestamp of the event):