All Posts

Unfortunately your script does not provide the correct overview. I want to know how long a machine has had a "Full" status. I can calculate that by pairing the first "Full" status with the first "Ready" status that follows it; the difference between their timestamps is the duration. For example:

Full --> This one
Full --> Skip
Ready --> This one
Full --> This one
Ready --> This one
Full --> This one
Full --> Skip
Ready --> This one
Ready --> Skip
Hi Giuseppe and thanks for the swift answer! But how does it behave if I don't want to allocate specific disk space for thawed/frozen files? So there is no way to just have a retention of 180 days after which the data gets deleted, or did I misunderstand something in your answer? Kind regards
Hi @avoelk, yes, it's a required parameter even if you don't want to restore thawed buckets. Remember that in Splunk the retention period is managed at bucket level; in other words, a bucket is deleted (or frozen) only when its latest event is older than the retention period. This means you'll surely have events in your buckets that are older than the retention period, because they sit in a bucket together with younger events. Ciao. Giuseppe
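For reference, a minimal indexes.conf stanza matching this setup might look like the sketch below. Only the index name and frozenTimePeriodInSecs come from this thread; the paths are illustrative assumptions.

[someidx]
homePath = $SPLUNK_DB/someidx/db
coldPath = $SPLUNK_DB/someidx/colddb
# thawedPath is mandatory even if you never restore frozen buckets
thawedPath = $SPLUNK_DB/someidx/thaweddb
# 180 days * 86400 s; with no coldToFrozenDir or coldToFrozenScript set,
# expired buckets are deleted rather than archived
frozenTimePeriodInSecs = 15552000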
Hi @msarkaus, good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
I'm trying to configure indexes.conf in such a way that its data retention is exactly 180 days, after which the data does NOT get frozen but gets deleted. I've tried setting frozenTimePeriodInSecs = 15552000 but now I get the following error:

Validation errors are present in the bundle. Errors=peer=XXX, stanza=someidx Required parameter=thawedPath not configured;

So I HAVE TO put a thawed path in it even though I don't want to freeze anything? How does that make sense? Kind regards, and thanks for any clarification!
The most significant memory saving could come from doing

| fields - _raw

If you already have your fields parsed, there's no need to drag the whole huge raw event along.
OK. So you want to have a "transaction" consisting of any sequence of Full events ending with a single Ready event. Any Ready events not preceded by a Full event are not part of any transaction and should be discarded?

| streamstats current=f window=1 values(ReasonCode) as LastReasonCode
| where ReasonCode="Full" OR LastReasonCode="Full" OR isnull(LastReasonCode)

This should filter out the events which are Ready and are preceded by Ready. Now we can mark the beginning of each of those "streaks":

| eval bump=if(ReasonCode="Full" AND LastReasonCode="Ready",1,0)

And we can find which transaction is which:

| streamstats current=t sum(bump) as tran_id

Now you have your unique transaction ID which you can use to find the first and last timestamps:

| stats min(_time) as earliest max(_time) as latest by tran_id
| eval duration=latest-earliest
Try removing / reducing unneeded fields before doing the mvexpand to reduce the memory requirement, as in the sketch below.
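A sketch of how that could look with the JSON structure discussed in this thread (the path and field names are taken from the other replies, so adjust them to your data):

| spath suite{}.testcase{} output=testcase
| fields - _raw
| mvexpand testcase
| spath input=testcase
| table name status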
I believe something was wrong with the way I installed Splunk Enterprise, since I have a MacBook M1 Pro. Initially I used the .dmg installation, but after I tried the .tgz installation by following this tutorial, it is working just fine.
That's why spath has both input and output options. And yes, you need to mvexpand your results to make each testcase a separate row.
Thanks, this script gives only 3 rows. But I want to have an overview like this (TS: Timestamp of the event):
Thanks. It worked
Why did you do that? It's not what I suggested in my reply. I'm not surprised you received no results since the syntax is rubbish. like is a function, not an operator:

| where like(hostname, hostname_pattern)

Be aware that like uses "%" as a wildcard rather than "*".
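A quick run-anywhere illustration of the function form and the % wildcard (the hostname values here are made up):

| makeresults
| eval hostname="web01.example.com", hostname_pattern="web%"
| where like(hostname, hostname_pattern)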
Hi @PickleRick,

| spath output=suite path=suite{}.name
| spath output=Testcase path=suite{}.testcase{}.name
| spath output=Status path=suite{}.testcase{}.status
| table suite Testcase Status

I wrote a query like this, but the problem here is that multiple values end up in a single row. I want to break these values out and print them in different rows. Any option other than mvexpand?
As has already been said, you must escape all special characters!

... | rex "(?P<POH>[^\"]+)"

should fix this one. Just do the rest the same way.
Hi Have you read this https://conf.splunk.com/files/2022/slides/PLA1122B.pdf ? I suppose that you can contact Mary in Splunk UG Slack if you need some help? r. Ismo
Hi @ITWhisperer, if mvexpand is used, the results are truncated and I get a warning message. Is any alternative to the mvexpand command available?
| spath suite{}.testcase{} output=testcase
| mvexpand testcase
| spath input=testcase
| table name status
Transaction seems to have a mind of its own (there are some not well documented nuances to how it works). Try something like this before your transaction command (to give it a hand!):

| streamstats count(eval(ReasonCode="Full")) as fullCount count(eval(ReasonCode="Ready")) as readyCount by EquipmentName
| where fullCount=1 OR readyCount=1
Splunk functions should _not_ truncate any data on their own (unless you explicitly use some text-manipulation function, of course). There might be some visualization issue on the displaying end. Anyway, you're doing one thing which, in the case of your data, might give proper results but in general is bad practice. If you have multivalued fields (like your two fields Testcase and Status), you have no guarantee that their entries will match 1-1 with each other. A simple run-anywhere example to demonstrate:

| makeresults
| eval _raw="[ { \"a\":\"a\",\"b\":\"b\"},{\"a\":\"b\",\"c\":\"c\"},{\"b\":\"d\",\"c\":\"e\"}]"
| spath {}.a output=a
| spath {}.b output=b
| spath {}.c output=c
| spath {} output=pairs

As you can see, the output in fields a, b and c would be completely different if zipped together than what you get as pairs in the array. That's why you should rather parse out whole separate testcases as JSON objects with | spath testcase (or whatever path you have there to your test cases) and then parse each of them separately, so you don't lose the connection between separate fields within a single testcase.
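To show the safer pattern on the same toy data, here is a minimal sketch: parse each object out whole, expand to one row per object, then extract fields per row, so related values stay together.

| makeresults
| eval _raw="[ { \"a\":\"a\",\"b\":\"b\"},{\"a\":\"b\",\"c\":\"c\"},{\"b\":\"d\",\"c\":\"e\"}]"
| spath {} output=pairs
| mvexpand pairs
| spath input=pairs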