All Posts

Thanks for the quick reply. Yes, I've seen this filter switch in the Trace Analyzer, but I also want to create an alert to get notified in case of traces with an error span. That's not possible with the present fields. Actually, I have a dashboard where I use the metric traces.count and the auto-generated filter field sf_error:true. I can see the results there, but when I create an alert based on the same metric and filter, it is not triggered. I use a static threshold condition with the following settings: [screenshot of the threshold settings] P.S. You're right, "error" is not a tag. I also tried to index the tag "otel.status_code", but that wasn't possible either.
Hi @dhiraj , I assume you have already extracted the REQ field, so you could try something like this: index=your_index ("Error occurred during message exchange" OR REQ="INI") earliest=-3600s | eval type=if(REQ="INI","INI","Message") | stats dc(type) AS type_count values(type) AS type | where type_count=1 AND type="Message" You can define the time period for the search (e.g. the last hour). If you later have more servers, you can group the results by host in the stats command. Ciao. Giuseppe
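To run a search like Giuseppe's as a scheduled alert, a minimal savedsearches.conf sketch could look like the following. This is a sketch, not a definitive configuration: the stanza name and index are placeholders, and the schedule and trigger attributes should be checked against the savedsearches.conf spec for your version.

```ini
# savedsearches.conf (on the search head)
[Alert - message exchange error without INI]
search = index=your_index ("Error occurred during message exchange" OR REQ="INI") earliest=-3600s \
| eval type=if(REQ="INI","INI","Message") \
| stats dc(type) AS type_count values(type) AS type \
| where type_count=1 AND type="Message"
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
# Trigger when the search returns at least one row
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
```

The same options can of course be set through the Save As > Alert dialog in the UI instead of editing the .conf file directly.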
Please help me with SPL for the following: whenever the keyword "Error occurred during message exchange" occurs and REQ=INI does not occur within a few minutes, raise an alert.
I am unable to search my custom fields in Splunk after migrating an index from standard to federated. Do I have to change something in the field extractions, or did something go wrong in the migration?
Hi @arunkuriakose , I don't know if this covers your use case, but there's a feature to perform backup and restore of the KV store. We used it for DR of DB Connect. For more info, see https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/BackupKVstore Ciao. Giuseppe
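For reference, the backup/restore Giuseppe mentions is driven from the Splunk CLI. A sketch, assuming a recent Splunk Enterprise version (the archive name is a placeholder, and exact flags should be verified against the docs page above):

```shell
# On the source (HO) instance: archive the KV store
# (the archive is written under $SPLUNK_HOME/var/lib/splunk/kvstorebackup)
splunk backup kvstore -archiveName kvdump_ho

# Copy the archive to the same path on the DR instance, then restore it there
splunk restore kvstore -archiveName kvdump_ho
```

Restoring the ES notable-status collections this way should carry the Closed/New state across, which plain /etc/apps copies do not.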
We have two separate Splunk instances with ES (standalone, not clustered). Consider it an HO/DR setup. When I try to move to the DR instance of Splunk and copy /etc/apps, after restarting the DR instance all the notables are in New status. Notables which are closed on the HO Splunk also show as New. What could be the reason? I do know that this is managed in a KV store. If we have to migrate the KV store related to this, what are the best practices in this case?
Y-axis values are scalar / non-dimensional, so the abbreviations follow that convention; that is, the fact that you are counting bytes is lost once the value becomes a scalar quantity.
Perhaps if you can isolate the event or events which are generating the error, you might be able to determine this. However, my guess is that you sometimes end up with one or more nulls from the rex, and that is what mvzip doesn't like. Doing it this way round avoids mvzip: because the mvexpand is done before the fields are split up, the association across the row is maintained and doesn't need to be rebuilt with mvzip.
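To make the difference concrete, here is a sketch with hypothetical field names (row, name, val), expanding the raw repeated group first and only then splitting it into fields, so the name/val pairing survives automatically:

```
... | rex max_match=0 "(?<row>name=\w+\s+val=\d+)"
    | mvexpand row
    | rex field=row "name=(?<name>\w+)\s+val=(?<val>\d+)"
```

Compare with the mvzip variant, which splits first and then rebuilds the pairing: `... | rex max_match=0 "name=(?<name>\w+)" | rex max_match=0 "val=(?<val>\d+)" | eval pair=mvzip(name,val) | mvexpand pair | eval name=mvindex(split(pair,","),0), val=mvindex(split(pair,","),1)`. If either rex yields a null for some row, the two multivalue fields have different lengths and mvzip cannot pair them cleanly, which is where the warning comes from.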
Hi @TTAL , to have a status dashboard, you first need a list of the systems to monitor. You can put this list in a lookup (called e.g. perimeter.csv) containing at least one field (host). Then you can run a search like the following:   | tstats count WHERE index=* BY host | append [ | inputlookup perimeter.csv | eval count=0 | fields host count ] | stats sum(count) AS total BY host | eval status=if(total=0,"Missing","Present") | table host status   Then you could also consider the case where some hosts are not present in the lookup; in this case, you have to use a slightly more complicated search:   | tstats count WHERE index=* BY host | eval type="index" | append [ | inputlookup perimeter.csv | eval count=0, type="lookup" | fields host count type ] | stats dc(type) AS type_count values(type) AS type sum(count) AS total BY host | eval status=case(total=0,"Missing",type_count=1 AND type="index","new host",true(),"Present") | table host status   Finally, if you don't want to manage the list of hosts to monitor, you can use a different search to find the hosts that sent logs in the last 7 days but didn't send logs in the last hour (obviously you can change these parameters):   | tstats count latest(_time) AS _time WHERE index=* earliest=-7d@d BY host | eval status=if(_time<now()-3600,"Missing","Present") | table host status   I don't like this last solution because, even though it requires less effort to manage, it gives you less control than the others. Ciao. Giuseppe
I have an indexer cluster set up with a load-balancer in front of it. I want the syslog to be ingested into the indexers. My plan is to install the Universal Forwarder on the Linux servers and send the syslog to the indexer cluster. Now the problem is: how can I configure the Universal Forwarder to go through the load-balancer to ingest the data?
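As a sketch (hostnames and ports are placeholders): the Universal Forwarder's destination is set in outputs.conf, and since the UF already load-balances across a server list itself, you can usually skip the external load-balancer and point it at the indexers directly.

```ini
# outputs.conf on each Universal Forwarder
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
# Option A: let the UF load-balance across the peers itself (usually preferred)
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997

# Option B: point at the load-balancer VIP instead (replace the list above)
# server = lb.example.com:9997
```

Note that an external load-balancer in front of indexers can interfere with the forwarder's own distribution and acknowledgements, which is why the direct server list is the common recommendation.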
I think you are mistaken. According to the message from weiss_h below, this issue is fixed only in the newer version, 9.3.1.
I upgraded to version 9.3.0.0, but the issue still remains.
This solution is working and I'm not seeing any warning message now. How is this different from mvzip? May I know why mvzip gives a warning if the data is empty?
Hi everyone, I’m trying to visualize the network traffic of an interface in Splunk using an area chart. However, the Y-axis scale is currently displaying as K, M, B (for thousand, million, billion), but I would like it to show K, M, G, T (for kilobytes, megabytes, gigabytes, terabytes). Is there a way to adjust this? Thanks!
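Since the chart's K/M/B suffixes are decimal scalar abbreviations, one workaround is to convert the byte counts to the target unit in SPL before charting, so the Y-axis is already in, say, megabytes and can be labeled as such. A sketch, assuming a hypothetical bytes field from your interface data:

```
... | eval mbytes = round(bytes / pow(1024, 2), 2)
    | timechart span=5m avg(mbytes) AS "Traffic (MB)"
```

Picking the unit yourself (MB, GB, ...) keeps the axis numbers small enough that the automatic K/M/B abbreviation never kicks in.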
It is good that you try to illustrate input and desired output. But you forgot to tell us what you are trying to count that should be either 4 or 2. In other words, you need to explain the logic between input and desired output fully and explicitly. If I take a wild guess at mind-reading, you want to count the unique number of e-mails related to each type of event. You want to use distinct count (dc), not count.   | stats dc(email) as count by event   Here's an emulation of your mock input:   | makeresults format=csv data="_raw abc xyz@email.com abc xyz@email.com abc. test@email.com abc. test@email.com xyz xyz@email.com" | rex "(?<event>\w+)\W+(?<email>\S+)" ``` data emulation above ```   The output is:

event  count
abc    2
xyz    1
Thank you for this, it solves the dynamic size issue. Another question arises from this sample: for example on OS Description, the value just runs beyond the panel. Is it possible, or is there an option, to wrap it?
When displaying the Choropleth Map on a dashboard, the painted area is collapsed. When I display the visualization in the Search app, there is no problem. I was wondering if anyone has experienced the same problem or has any ideas on how to solve it. On Splunk Enterprise 9.3.1 it renders fine in Dashboard Classic, but the issue occurs in Dashboard Studio. [Screenshots: Choropleth Map in Search app, in Dashboard Studio, and in Dashboard Classic] Thanks,
Hi, "error" is actually a case where you don't need to index a tag to be able to filter on it. Here is a screenshot of filtering spans where error=true. And here is an example of filtering traces that contain errors: PS - The reason it won't allow you to index "error" as an APM MetricSet is that "error" isn't actually a span tag, so there is nothing to index.
Thank you for letting me know! Unfortunately the workaround didn't fix it; hopefully the next update will.
You should ensure the tokens always have a value by setting them in an init block - just using an initial / default value is not enough to set the input token to a value.
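For example, in Simple XML (token names here are placeholders), an init block guarantees the tokens are set before any panel search runs, regardless of whether the user has touched the inputs:

```xml
<form>
  <init>
    <set token="env_tok">production</set>
    <set token="host_tok">*</set>
  </init>
  <fieldset>
    <input type="text" token="host_tok">
      <label>Host</label>
      <default>*</default>
    </input>
  </fieldset>
  <!-- panel searches can now safely reference $env_tok$ and $host_tok$ -->
</form>
```

The input's own `<default>` only takes effect once the input initializes, which is why the init block is the reliable way to avoid searches firing with unset tokens.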