All Posts

How can I verify this? How do I grant a specific user permission for all jobs? Does the user require particular capabilities or roles to search for a job? I noticed that sometimes the user has successfully accessed the "/services/search/jobs" endpoint, but encountered issues when using the "/services/search/jobs/{searchid}" endpoint. Sometimes I got Unauthorized on "/services/search/jobs" and sometimes Unauthorized on "/services/search/jobs/{searchid}".
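One way to start checking is to compare what the working and the failing roles are allowed to do, by querying the authorization endpoint from a search. This is only a sketch of the idea; "power" below is a placeholder role name:

| rest /services/authorization/roles splunk_server=local
| search title="power"
| table title capabilities imported_capabilities srchIndexesAllowed

Comparing the capabilities of a role that can read the job with one that gets Unauthorized should show which capability or inherited role makes the difference.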
Can you please elaborate a bit? It's not working.
Hi @gcusello  Still I am getting the same output in both situations. If I have only "Error occurred during message exchange" I still get type_count=1 and type=message, and when I have both keywords "Error occurred during message exchange" and "REQ=INI" I also get type_count=1 and type=message. FYI, I have not extracted any fields; I am just monitoring the raw data logs.
Hi @dhiraj , are you sure that the REQ field is already extracted? Otherwise you have to search with a different condition: index=your_index ("Error occurred during message exchange" OR "REQ=INI") earliest=-420s | eval type=if(searchmatch("REQ=INI"),"INI","Message") | stats dc(type) AS type_count values(type) AS type | where type_count=1 AND type="Message" Ciao. Giuseppe
Try This: index="yourindex"  latest=-7m | transaction startswith="Error occurred during message exchange" endswith="REQ\=INI" keepevicted=true | search closed_txn=0
Hi @gcusello  I am checking over the whole day for testing and it's giving me a count of 1 with only type=message. But in reality we have both keywords in the data, which means no alert should be required.
Hi @fahimeh , are you using XML or classic format? If XML, try using the classic format by adding renderXml=0 to inputs.conf. Ciao. Giuseppe
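In case it helps, a minimal sketch of what that might look like in inputs.conf, assuming a classic Windows Event Log input (the stanza name here is just an example):

[WinEventLog://Application]
disabled = 0
# send events in classic (non-XML) format
renderXml = 0

A restart of the forwarder is needed before new events arrive in the classic format.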
Hi @dhiraj , you only have to change the time period (7 minutes), then the search should be correct: index=your_index ("Error occurred during message exchange" OR REQ="INI") earliest=-420s | eval type=if(REQ="INI","INI","Message") | stats dc(type) AS type_count values(type) AS type | where type_count=1 AND type="Message" Using this search you select only the events matching your two conditions, and with the eval and the stats you identify whether one or both conditions are present. In your use case you want to fire the alert when the error message is there but the REQ=INI condition isn't; the other combinations are excluded. Ciao. Giuseppe
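For reference, a rough sketch of how a search like this could be saved as a scheduled alert in savedsearches.conf; the stanza name, index and schedule are placeholders and should be adapted:

[Message exchange error without REQ=INI]
enableSched = 1
cron_schedule = */7 * * * *
search = index=your_index ("Error occurred during message exchange" OR REQ="INI") earliest=-420s | eval type=if(REQ="INI","INI","Message") | stats dc(type) AS type_count values(type) AS type | where type_count=1 AND type="Message"
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1

With these settings the search runs every 7 minutes and triggers whenever it returns at least one result, i.e. when the error was seen but REQ=INI was not.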
Hi @gcusello  It's not working. We are monitoring a log, and whenever the line "Error occurred during message exchange" appears and the "REQ=INI" line didn't occur in the last 7 minutes, it should trigger an alert. With the above search I am getting type_count=1 in both conditions, i.e. whether "REQ=INI" is present or not.
Hello, Some of the logs coming from the Windows Universal Forwarder to Splunk show the following error in the message field for certain events: "Splunk could not get the description for this event." I have reviewed https://community.splunk.com/t5/Getting-Data-In/Why-quot-FormatMessage-error-quot-appears-in-indexed-message-for/td-p/139980 , but it doesn't solve the issue, as this problem only occurs for a few specific events at specific times. I am using Splunk version 9.2. What could be the issue?
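A search along these lines might help narrow down which events are affected; the index and sourcetype are placeholders:

index=your_windows_index sourcetype=WinEventLog "Splunk could not get the description for this event"
| stats count BY host source SourceName EventCode
| sort - count

If the messages cluster around one particular event source or host, that would usually point to the message resources for that source not being available on the machine where the event is read.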
Thanks for the quick reply. Yes, I've seen this filter switch in the Trace Analyzer, but I also want to create an alert to get notified in case of traces with an error span. That's not possible with the present fields. Actually, I have a dashboard where I use the metric traces.count and the auto-generated filter field sf_error:true. I can see the results there, but when I create an alert based on the same metric and filter, it is not triggered. I use a static threshold condition with the settings shown in the attached screenshot. P.S. You're right, "error" is not a tag. I also tried to index on the tag "otel.status_code", but that wasn't possible either.
Hi @dhiraj , I suppose that you have already extracted the REQ field, so you could try something like this: index=your_index ("Error occurred during message exchange" OR REQ="INI") earliest=-3600s | eval type=if(REQ="INI","INI","Message") | stats dc(type) AS type_count values(type) AS type | where type_count=1 AND type="Message" You can define the time period for the search (e.g. the last hour). If you have more than one server, you can group the results by host in the stats command. Ciao. Giuseppe
Please help me with an SPL search: whenever the keyword "Error occurred during message exchange" occurs and REQ=INI didn't occur within a few minutes, raise an alert.
I am unable to search my custom fields in Splunk after migrating an index from a normal to a federated setup. Do I have to change something in the field extractions? Or did something go wrong in the migration?
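As a quick check, a search like the following could show whether any fields are coming back from the federated index at all; the index and sourcetype names are placeholders and this assumes standard-mode federated search:

index=federated:my_index sourcetype=my_sourcetype earliest=-1h
| fieldsummary
| table field count distinct_count

If the custom fields are missing here but appear when the same search is run directly on the remote instance, that would suggest the field extraction knowledge objects are not being applied on the federated search path, and their app and permission scope is worth checking.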
Hi @arunkuriakose , I don't know if this covers your use case, but there's a feature to perform backup and restore of the KV store. We used it for DR of DB Connect. For more info see https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/BackupKVstore Ciao. Giuseppe
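For reference, the CLI for that looks roughly like this (run on the instance hosting the KV store; the archive name is just an example):

splunk backup kvstore -archiveName es_kvstore_backup
splunk restore kvstore -archiveName es_kvstore_backup

The backup can also be limited with -appName / -collectionName if only specific collections (for example, the ES incident review data) need to be moved; the exact collection and app names should be verified for your ES version.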
We have two separate Splunk instances with ES (standalone, not clustered). Consider it a HO/DR setup. When I move to the DR instance of Splunk and copy /etc/apps, after restarting the DR instance all the notables are in "New" status. Notables that are closed on the HO Splunk are also showing as New. What could be the reason? I do know that this is managed in a KV store. If we have to migrate the KV store related to this, what are the best practices in this case?
Y-axis values are scalar / non-dimensional so the abbreviations follow this - that is, the fact that you are counting bytes is lost when it becomes a scalar quantity.
Perhaps if you can isolate the event or events which are generating the error, you might be able to determine this. However, my guess is that sometimes you end up with one or more nulls from the rex, and this is what mvzip doesn't like. Doing it this way around avoids using mvzip, because the mvexpand is done before the fields are split up, so the association across the row is maintained and doesn't need to be rebuilt with mvzip.
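As a generic illustration of that pattern (the field names and the key=value shape of the data are made up for the example):

index=your_index sourcetype=your_sourcetype
| rex max_match=0 "(?<pair>\w+=\S+)"
| mvexpand pair
| rex field=pair "^(?<key>\w+)=(?<value>\S+)$"
| table _time key value

Here each raw event is first expanded into one row per extracted chunk, and only then is each chunk split into its parts, so there is nothing to re-associate with mvzip.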
Hi @TTAL , to have a status dashboard, you first need a list of the systems to monitor. You can put this list in a lookup (called e.g. perimeter.csv) containing at least one field (host). Then you can run a search like the following:

| tstats count WHERE index=* BY host | append [ | inputlookup perimeter.csv | eval count=0 | fields host count ] | stats sum(count) AS total BY host | eval status=if(total=0,"Missing","Present") | table host status

Then you could also consider the case where some hosts are not present in the lookup; in that case you have to use a slightly more complicated search:

| tstats count WHERE index=* BY host | eval type="index" | append [ | inputlookup perimeter.csv | eval count=0, type="lookup" | fields host count type ] | stats dc(type) AS type_count values(type) AS type sum(count) AS total BY host | eval status=case(total=0,"Missing",type_count=1 AND type="index","new host",true(),"Present") | table host status

Finally, if you don't want to manage the list of hosts to monitor, you can use a different search to find the hosts that sent logs in the last 30 days but didn't send logs in the last hour (obviously you can change these parameters):

| tstats count latest(_time) AS _time WHERE index=* earliest=-30d@d BY host | eval status=if(_time<now()-3600,"Missing","Present") | table host status

I don't like this last solution because, even if it requires less effort to manage, it gives you less control than the others. Ciao. Giuseppe
I have an indexer cluster set up with a load balancer in front of it. I want syslog to be ingested into the indexers. My plan is to install the Universal Forwarder on the Linux servers and send the syslog to the indexer cluster. Now the problem is: how can I configure the Universal Forwarder to send its data through the load balancer?
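For reference, this is roughly what the forwarder side looks like in outputs.conf; the host names and port are placeholders:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Option A: send everything to the load balancer's address
server = lb.example.com:9997

# Option B: list the indexers and let the forwarder load-balance across them itself
# server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997

Note that Universal Forwarders already perform automatic load balancing across the servers listed in the output group, so an external load balancer in front of the indexers is often unnecessary for forwarder traffic.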