All Posts


Hi ITWhisperer, downtime represents every value starting with 0,00, no matter how many decimals. BR
And did you add your components as search peers to your MC? (for the indexer cluster you only need to add the CM)
OK. But what is your issue here? You have a timestamp but don't know how to render it into text with a given format? For that you use either eval or fieldformat with a strftime function. Or you already have a string value but have some problems with putting it on a dashboard? (what problems exactly?)
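For example, a minimal sketch using makeresults (the format string is just an illustration, adjust to taste):

| makeresults
| eval rendered = strftime(_time, "%d/%m/%Y %H:%M:%S")
| table rendered

With fieldformat instead of eval, the underlying value stays numeric and only the display changes.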
OK. This is indeed interesting. The search behind this panel uses the /services/server/status/partitions-space REST endpoint. This endpoint, according to the docs, returns four values:

- capacity
- free
- fs_type
- mount_point

(along with some "standard" fields like title, author, id and eai stuff)

But the actual data returned by the call also includes a field called "available". And in my case the "available" field indeed shows the free space on the filesystem. The "free" field (again - in my case) contains some value completely unrelated to anything. But the search behind the MC panel uses the field "available" if it's included in the data. If it's not included, it uses the "free" field.

Check the results of

| rest splunk_server=<your indexer> /services/server/status/partitions-space
| fields - eai* id author published updated title

and see if the data makes sense. I suspect you're not getting the "available" field and your "free" field contains some bonkers value.

EDIT: Posted feedback to the docs page describing this REST endpoint.
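For reference, the fallback logic described above can be approximated with coalesce. A minimal sketch of the idea, not the actual MC panel search:

| rest splunk_server=<your indexer> /services/server/status/partitions-space
| eval free_space = coalesce(available, free)
| table mount_point fs_type capacity free_space

If "available" is missing from the results, free_space falls back to "free", which would explain a bogus value showing up in the panel.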
There is no single good answer to this question. Generally, indexed fields cause additional overhead in terms of storage size, can - if bloated - counterintuitively have a negative impact on performance, and for straight event searches do not give that much of a performance gain versus a well-written raw-events search.

Having said that, there are some scenarios where adding indexed fields helps. One is when you do a lot of summarizing on some fields. Not searching but summarizing. Then tstats is indeed lightning fast compared to search | stats. (OTOH you can usually get similar results with report acceleration or summary indexing, so indexed fields might not be needed.)

Another case is when you have values which can appear often in multiple fields. Splunk searches by finding values first and then parsing the events containing those values to find out whether the value parses out to the given field. So if you have 10k events of which only 10 contain the string "whatever", and out of those ten, nine are values of a field named comment, a search for comment=whatever will only need to check 10 events out of those 10k, and 90% of the considered events will match. That search will be quite effective. But if your data contained the word "whatever" in 3k events of which only 9 had it in the comment field, Splunk would have to fetch all 3k events, parse them, and check whether the comment field indeed contained that word. Since only 9 of those 3k events contain the word in the right spot, this search would be very ineffective. An indexed comment field would let Splunk go straight to those 9 events.

So there is no one-size-fits-all. The general rule is that adding indexed fields can sometimes help; it's not a thing that should never be used at all, but it should only be done when actually needed. Don't just add them blindly for all possible fields in all your data, because then you're effectively transforming Splunk into something it is not - a document database with schema on index. And for that you don't need Splunk.

And if your SH is already overloaded, that usually means (again - as always, it depends on the particular case; yours might be completely different, but I'm speaking from general experience) that either you simply have too many concurrently running searches - and creating indexed fields won't help much there - or you have badly written searches (which is nothing to be ashamed of; Splunk is easy to start working with but can be tricky to master; writing effective searches requires quite a significant bit of knowledge).
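To illustrate the summarizing case, here is a minimal sketch using host, which is always an indexed field (the index name is made up):

| tstats count where index=web by host

versus

index=web | stats count by host

Both return the same counts, but tstats reads only the index-time data in the tsidx files, while stats has to retrieve and parse the raw events. For a custom field, the tstats variant only works if that field is actually indexed (or accessed through an accelerated datamodel).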
Hi @hazem , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
My tests yesterday seemed to confirm it. I have a test index. I run

| eventcount index=test2
| eval type="eventcount"
| append
    [| tstats count where earliest=1 latest=+10y index=test2
     | eval type="tstats"]

and get:

count   type
35172   eventcount
31077   tstats

(Yesterday I had already removed some events.) So I run

index=test2 earliest=-2y@y latest=@y | delete

Splunk says it deleted 27549 events. So I rerun my counting search and this time I get:

count   type
35172   eventcount
3528    tstats

So you can see - deleting events changes tstats but doesn't touch eventcount.
Hi @Ananya_23 , the only way is adding JavaScript that does the same thing, but I cannot help you there because I'm not so strong in JS development. A simpler way is to add an option to display _raw, which adds the _raw field to the table command and displays it. Ciao. Giuseppe
Hi @gcusello , agreed - _raw gives me all the details of that particular event. But here I want the "Show Source" link to be displayed on the dashboard.
Hi @krishna1 , you only have to remove the filter (the where command). Optionally, you could add a calculation (an eval command) to indicate whether an event is matching or not, but that probably isn't relevant because the matching ones have a value in the work_queue field. Ciao. Giuseppe
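If you do want that indicator anyway, a minimal sketch (assuming the field is called work_queue, as above):

<your_search>
| eval matched = if(isnotnull(work_queue), "yes", "no")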
Hi @jaibalaraman , you have to use the table command with the field containing the date you want to display:

<your_search> | table timestamp

There's only one problem I can see: with a long string like the one you shared, the characters in the Single Value will be very small. Maybe you could use more than one Single Value, each displaying part of the timestamp. Ciao. Giuseppe
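For the multiple-Single-Value idea, a rough sketch, assuming the value lives in _time (otherwise convert it with strptime first; the format strings are only illustrative):

<your_search>
| eval date_part = strftime(_time, "%d/%m/%Y")
| eval time_part = strftime(_time, "%H:%M:%S")
| table date_part time_part

Each column can then feed its own Single Value panel.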
Wineventlog inputs have been known to have performance problems above a certain EPS threshold. It usually doesn't manifest itself in local event ingestion but shows up when pulling WEF-ed logs. Adding additional pipelines doesn't help. The way around it (other than setting up more WEC hosts and splitting WEF subscriptions among them) is to create more eventlog channels and split your subscription across several channels. The performance problem for eventlog inputs seems to be at the single-input level, so if you're getting stuck around 10k EPS with a single input, you should be able to get up to 40k EPS if you split your ForwardedEvents into 4 channels. Unfortunately, it's a bit of work to set up and you need to create a custom DLL for it.

https://learn.microsoft.com/en-gb/archive/blogs/russellt/creating-custom-windows-event-forwarding-logs
https://github.com/palantir/windows-event-forwarding/blob/master/windows-event-channels/README.md
I have set up a cluster master, an indexer cluster & a search head cluster. I have a new environment for the monitoring console. When I go to Settings > Monitoring Console > Settings > General Setup and switch to Distributed mode, the servers are not showing up under remote instances. Can someone help me with this?
This explanation is the simplest of all. Thank you.  
Hello @zksvc , thanks again! I'm facing an "unbalanced quotes" error on this line:

| eval lengths = mvmap(code_list, len(trim('code_list', '"')))

So I have modified it as:

| eval lengths = mvmap(code_list, len(trim('code_list', "\"")))

though eval is not accepting "*" as a token value in the code. Thanks!
Hi, it was added in the following way and it did not work; it does not show results.

index="cdr_cfs_index"
| search Call.OrigParty.TrunkGroup.TrunkGroupId=2601
| lookup ClientesSymSipdfntion1 Call.OrigParty.CallingPartyAddr OUTPUT cliente
| lookup ClientesSymSipdfntion2 Call.OrigParty.CallingPartyAddr OUTPUT cliente1
| fillnull value=null Call.CallForwardInfo.OriginalCalledAddr
| where isnull(cliente) AND isnull(cliente1) AND Call.CallForwardInfo.OriginalCalledAddr="null"
| stats count by Call.OrigParty.CallingPartyAddr Call.CallForwardInfo.OriginalCalledAddr
| sort - Call.CallForwardInfo.OriginalCalledAddr
Hi @smanojkumar , based on your information, what if we create a new field, let's say max_length, put that field in the condition, and then run the query like this:

index=03_f123456 sourcetype=logs* (CODE IN ($code$))
| eval code_list = split(trim("($code$)", "()"), ",")
| eval lengths = mvmap(code_list, len(trim('code_list', '"')))
| eval max_length = if(mvfind(lengths, 1) >= 0, "value_1", "value_2")
| table code_list max_length

Let me know if it works. Thanks!
Hi, how can I display the day / month / time / year like the format below using a simple format? Ex: | makeresults
As far as I know, the result is the same.