All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thanks for your answer, but it does not solve the problem. I already tried refreshing the page, restarting Splunk, and using different browsers (Edge, Firefox, and Chrome), but nothing helps. So I think this version of the app will not work in Splunk 9.1.1. Unfortunately, the app is not supported, so it looks like I must look for an alternative (if there is one). There is a Tag Cloud visualization, but it is not as graphical as Wordcloud.
My question is very simple. This returns nothing:

sourcetype=my_sourcetype

This returns X events (the same amount as index=my_index):

index=my_index AND sourcetype=my_sourcetype

The search is in Verbose Mode. What am I missing? How come adding another filter returns more events?
Hi, I have been struggling to fix this blacklist in the Windows TA app's inputs.conf on the deployment server. I deployed it to the clients, but it is not working as expected; logs are still being ingested. Please help me fix this issue. Thanks, eagerly waiting for your answers.
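For reference, a Windows event log blacklist in inputs.conf generally takes one of the following shapes. This is only an illustrative sketch: the stanza name and event codes below are placeholders, not the poster's actual configuration.

```ini
# Hypothetical example of event log blacklisting in inputs.conf.
# Stanza name and event codes are placeholders.
[WinEventLog://Security]
disabled = 0
# Simple form: comma-separated EventCodes to drop
blacklist = 4662,5156
# Advanced form (blacklist1 through blacklist9): key="regex" pairs,
# where every pair must match for the event to be dropped
blacklist1 = EventCode="4688" Message="(?i)splunkd\.exe"
```

Note that after the deployment server pushes the app, the change only takes effect once splunkd restarts on the client (the serverclass's "restart splunkd" setting controls this), which is a common reason a deployed blacklist appears not to work.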
You can do it by creating a stacked bar, setting the limit to be the gap (limit - spend), and creating the rows needed for the 4 groups. Here's an example, but I suspect there is a better way:

| makeresults
| eval _raw="Country,old_limit,old_spend_limit,new_limit,new_spend_limit
USA,84000,37000,121000,43000
Canada,149000,103000,214000,128000"
| multikv forceheader=1
| table Country old_limit old_spend_limit new_limit new_spend_limit
| foreach *_spend_limit
    [ eval "<<MATCHSEG1>>_gap"='<<MATCHSEG1>>_limit'-<<FIELD>>,
      type=if("<<MATCHSEG1>>"="old", "Pre", "Post"),
      MV=mvappend(mvzip(mvzip('<<MATCHSEG1>>_gap', '<<FIELD>>', ";"), type, ";"), MV) ]
| fields Country MV
| mvexpand MV
| rex field=MV "(?<gap>[^\;]*);(?<spend>[^\;]*);(?<type>.*)"
| eval Country=Country." ".type
| fields Country gap spend

This just creates a gap;spend field for each type (pre/post) and then expands the pair for each country.
Hello Splunk lovers! I got stuck while setting up Kafka Connect from Splunk to a Kafka broker, with the error "LZ4 compression not implemented". Maybe someone has already had and solved this problem. How can I solve it? Please help.
Hello @yafei, there is no explicit limit on the amount of data a Splunk server can access. For example, if you have a distributed clustered/non-clustered environment, it can access any number of events. If your question is specifically about capacity planning, here are two Splunk docs that can help:
- https://docs.splunk.com/Documentation/Splunk/9.1.1/Capacity/IntroductiontocapacityplanningforSplunkEnterprise
- https://docs.splunk.com/Documentation/Splunk/9.1.1/Capacity/Referencehardware
Also, here is a similar question: https://community.splunk.com/t5/Splunk-Enterprise/Capacity-planning-best-practices-for-Splunk-Enterprise/m-p/476931
If this helps, please accept the solution and hit Karma, or let me know if you have any questions about it!
I want to alert if a result changes. There are probably dozens of ways to do this, but I think I'm missing the really simple, obvious solution. I've been looking at diff, and I can get this to work in search results, producing a single "event" result containing either "Results are the same" or some stats if a difference is found. The expression looks like this:

index=testapp ErrorlogTotalCount | diff

I could add an attribute, but it's not really needed because the result is static except for the log count. The default position values of 1 and 2, comparing the newest result to the prior one, are also perfect because we want to catch when the number changes. My difficulty is setting up an alert to catch this. Since I always get 1 event back, I can't alert on a count of events. Maybe I can use a custom trigger condition, but I'm not finding a document that explains how to use that field. This is probably possible with other search commands such as delta or streamstats, but to me those appear to be overkill. Let me know what I am missing, please. Thanks for the help.
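As a sketch of the streamstats alternative mentioned above (assuming the count lives in a field named ErrorlogTotalCount; adjust to the actual field name), the two most recent results could be compared like this:

```spl
index=testapp ErrorlogTotalCount
| head 2
| streamstats current=f last(ErrorlogTotalCount) as prev_count
| eval changed=if(isnotnull(prev_count) AND prev_count!=ErrorlogTotalCount, 1, 0)
| where changed=1
```

This returns zero results when the two counts match and one result when they differ, so the standard "Number of Results is greater than 0" trigger works without needing a custom trigger condition.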
Please provide the data specifications.
I'd like to know the maximum volume of data a single Splunk server can ingest. Could you give ...
Hello, I have the below values in a lookup and am trying to achieve the bar chart view below.

Country | old_limit | old_spend_limit | new_limit | new_spend_limit
USA     | 84000     | 37000           | 121000    | 43000
Canada  | 149000    | 103000          | 214000    | 128000

old_limit = PRE
new_limit = POST
I assume that my-index is a metrics index, but it is still unclear what is being asked. Generally, only you will know what data you get back from AWS/EBS and which metrics are of interest to your use case. Once you know which metrics you are interested in and what kind of stats (e.g., avg) you want to perform, mstats is your friend. If you have difficulty figuring out which metrics are available, mstats is also your friend:

| mstats count(*) as * where index=my-index ``` my-index must be a metrics index ```
| transpose column_name=metric_name

Hope this helps.
I'm not aware of any such option.  Perhaps one of the DEBUG log settings will help. Failure to apply a regex is not an error - it just means the data doesn't match the regex, which is perfectly normal.
Hi @richgalloway @gcusello , Is there any option where we can see the errors for the blacklisted regex  if it's not getting applied? Thanks..
@_JP is correct that single quotes have special meaning in SPL. Have you tested

index=_internal status=*
| rename status AS HTTPStatus
| stats count(eval(HTTPStatus >= 400 AND HTTPStatus < 500)) AS fourxxErrors,
        count(eval(HTTPStatus >= 500 AND HTTPStatus < 600)) AS fivexxErrors,
        count AS TotalRequests

Can you share sample output from this?
Yes, I also had that idea: calculate log(analog_value) and plot that on a linear Y axis? While that produces a proper visual, you can't read the value of analog_value any more (only its log). The illegibility of the true values still bothers me, which is why I was hoping for an even better solution, somehow... Maybe there is none.
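For completeness, the log-transform workaround discussed here would look roughly like the following (the field name analog_value is carried over from the discussion; the source search is assumed):

```spl
... | eval log_value=log(analog_value, 10)
    | timechart avg(log_value) as log_analog_value
```

One partial mitigation for the legibility problem is to keep both fields, chart the log value, and expose the raw analog_value in an accompanying table or drilldown so the true magnitudes remain readable.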
You can avoid the empty panel warnings by just adding a dummy html panel, i.e. <panel> <html/> </panel>  
If the urls are consistent, that is a great idea.  Unfortunately, the urls have between 1 and 8 parts

This is rather confusing. @ITWhisperer's command concatenates from the end, and it shouldn't matter whether there is 1 part in between or 8. Have you tested in a full search?

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3)
    [ | inputlookup my_list_of_urls.csv ]
| eval my_url = mvjoin(mvindex(split(url, "."), -2,-1), ".")
| stats count by my_url

between . and I don't know where to start.  Alternatively, if there is a way of adding what is in the csv to the results, that would work.

Another piece of confusion comes from the original search:

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3)
    [ | inputlookup my_list_of_urls.csv ]
| rename url AS my_url
| stats count by my_url
| table my_url

If you table my_url in the end, the result is no different from using inputlookup alone. Why bother with the index search? If you only want microsoft.com and office.com from the CSV, you could do

| inputlookup my_list_of_urls.csv
| eval my_url = mvjoin(mvindex(split(url, "."), -2,-1), ".")
| stats values(my_url) as my_url
| mvexpand my_url

Maybe you can describe more variety of data (anonymize as needed) and print the output from the proposed searches, then illustrate the desired outcome and explain why the actual output is not desired (in case it is not obvious enough)?
In this case, if you just care about the max TotalScore, you can reverse-sort your data by TotalScore and use head to grab the first (aka the max) one:

| makeresults format=csv data="Class,Name,Subject,TotalScore,Score1,Score2,Score3
ClassA,Name1, Math, 170, 60 ,40 ,70
ClassA,Name1, English ,195, 85, 60, 50
ClassA,Name2, Math, 175, 50, 60, 65
ClassA,Name2, English ,240, 80, 90, 70
ClassA,Name3, Math, 170, 40, 60 ,70
ClassA,Name3, English ,230, 55, 95, 80"
| sort -TotalScore
| head 1
| table Class Name, Subject, TotalScore, Score1, Score2, Score3
Hello, I only need 1 row displaying all fields that have the max TotalScore of 240:

Class  | Name  | Subject | TotalScore | Score1 | Score2 | Score3
ClassA | Name2 | English | 240        | 80     | 90     | 70

Thank you
If only it was supported in Dashboard Studio! Will use Classic for the time being. Thank you!