All Topics

I have a search to identify extensions that stopped sending logs to Splunk. The specific event is below. I saw some examples using hosts; would it be possible to use certain fields in the log instead of hosts? In this specific example the extension appears in the log, but I need to know if it has not appeared within the last 7 days.

index=raw_ramal EXTENSION: 11111111 (that number varies; there are thousands of extensions). The query below shows me the number of events per extension, but it does not help me find which of them stopped logging in the last 7 days:

index=raw_ramal "EXTENSION:" | rex field=_raw "EXTENSION:(?<EXTENSION>\+?\d+)" | stats count by EXTENSION
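One possible approach, sketched here as a suggestion: search a window longer than 7 days (say 30 days) so extensions that recently stopped still appear at least once, take the latest event time per extension, and keep only those whose last event is older than 7 days:

```
index=raw_ramal "EXTENSION:" earliest=-30d
| rex field=_raw "EXTENSION:(?<EXTENSION>\+?\d+)"
| stats latest(_time) as last_seen by EXTENSION
| where last_seen < relative_time(now(), "-7d")
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
```

Note that an extension with no events at all in the whole 30-day window cannot show up this way; catching those would require a lookup listing every known extension to compare against.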
Hello Splunkers, I'm in a weird requirement/situation: we need to validate whether events of a particular source and sourcetype are being forwarded by a UF or not. For example: our application XYZ logs events to the Windows event log, and we need to make sure those events are being forwarded by the UF to the indexers for that particular source, "XYZ". The application writes to the Windows event log very rarely, say once every 7 days. Can we validate this using only the UF's internal logs? Yes, we have this big restriction of validating the above with the UF's internal logs only; we simply cannot query the source and check its earliest event, because we don't have access to the indexers containing the actual logs. I checked metrics.log, but it only reports the top 10 sources, so if we have 35 sources on a single server I guess we won't be able to do it. Can anyone help me with this?
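One hedged idea: the number of series reported in metrics.log is capped by limits.conf on the forwarder, so raising that cap may make all 35 sources visible in the per_source_thruput group. A sketch, with the stanza and attribute as I understand them:

```
# limits.conf on the UF
[metrics]
maxseries = 50
```

Then, from wherever the UF's _internal data is searchable:

```
index=_internal host=<uf_host> source=*metrics.log group=per_source_thruput
| stats latest(_time) as last_seen by series
```

<uf_host> is a placeholder for your server name.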
My indexers are sending far too much data to my search heads (close to 500 GB a day), which is impacting bandwidth utilisation. From initial investigation it seemed some dashboards were running long-running searches, which I killed manually, but that only helped partially. Are there any other aspects I need to look into?
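As a hedged starting point for finding the heaviest searches, assuming the _audit index is populated on the search head:

```
index=_audit action=search info=completed
| stats count sum(total_run_time) as total_runtime by user savedsearch_name
| sort - total_runtime
```

Scheduled searches and dashboards that pull large result sets will usually float to the top; dashboard auto-refresh intervals are also worth checking.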
I have a table that I am returning from the search below:

host=sre* | search "[*._reprovision()] - [WARNING]" | dedup CORP_ACCT_NBR | rex field=_raw "(?<BROKEN_ACCOUNT>Failed to Provision)" | table CORP_ACCT_NBR, BROKEN_ACCOUNT, _raw, _time

I am using the JIRA Service Desk integration, and I want to know whether there are any tokens similar to $name$ that would let me send my table inline when the extension doesn't offer that option. Also, is there a reference list of all available $name$-style variables somewhere? I cannot seem to find one anywhere.
Please help me out. I want to route logs containing the specific keywords below to a destination on a heavy forwarder:

"logged out"
"Rejected password for user"
"Cannot login"
"logged in as"
"Accepted user for user"
"was updated on host"
"Password was changed for account"
"Destroy VM called" or "destory"

Can someone please help with this?
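One common pattern for this is selective routing with props.conf and transforms.conf on the heavy forwarder; a sketch with hypothetical stanza and group names:

```
# props.conf
[your_sourcetype]
TRANSFORMS-route_keywords = route_auth_events

# transforms.conf
[route_auth_events]
REGEX = logged out|Rejected password for user|Cannot login|logged in as|Accepted user for user|was updated on host|Password was changed for account|Destroy VM called|destory
DEST_KEY = _TCP_ROUTING
FORMAT = keyword_dest_group

# outputs.conf
[tcpout:keyword_dest_group]
server = dest-host.example.com:9997
```

your_sourcetype, keyword_dest_group, and the server address are assumptions to adapt to your environment.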
Hi, MS will be blocking legacy auth soon, which will break this add-on. Is this being looked at for the next release of the add-on?
I need to extract a value from this field and add it to my table.

Details.Context = "dgfhgjj <Property Name="Name" VariantType="8">TRIMWorkgroup</Property>" field_name irrelevant data

When Name="Name", I want the result value (TRIMWorkgroup in this case) as my field value in a new field called "Service". Help me with a rex command for this.
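A rex along these lines may work, assuming the value always sits between a <Property Name="Name" ...> opening tag and its </Property> close:

```
... | rex field=Details.Context "Name=\"Name\"[^>]*>(?<Service>[^<]+)</Property>"
```

If the field really is extracted with the literal dot in its name, rex accepts field=Details.Context as written; otherwise try extracting from _raw instead.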
Hey everyone, I'm currently trying to disable Splunk notifications (the little popups under "Messages") for specific roles. In other words: I only want the admin role to see notifications. I'm aware of the per-stanza "roles" option in messages.conf, but I don't want to set the specific roles for every message. Another reason I don't want to do it per stanza is that I don't want to check every new add-on and app for a messages.conf. I tried [default] roles = admin in system/local just to see, and for some reason my test user didn't get any new messages while my admin user continued to get new ones. But that seemed to be a lucky coincidence, because the MessagesManager logs stated:

WARN MessagesManager - MessagesManager: Skipping component 'default': No name field loaded

That was somewhat expected, because the [default] stanza is not mentioned in the messages.conf documentation. Is there a way to limit the scope on a global scale? Best regards, Eric
My CSV file has a "month" field with the values below:

2020-10
2020-09
2020-08
2020-07
2020-06
2020-05
2020-04
2020-03
2020-02
2020-01

I want to convert the month numbers to Jan, Feb, Mar, Apr and so on, showing them as 2020-Jan, 2020-Feb, 2020-Mar... I tried using strptime, but that is not working. Please help; below is my query:

index=azure sourcetype=azure_total_cost source="C:\total_cost\\azure_total_cost_by_year.csv" month!="month" | eval mymonth=strftime(month, "%Y-%b")
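The eval is likely failing because month is a string, while strftime expects an epoch timestamp; parsing with strptime first and then re-formatting with strftime should work:

```
index=azure sourcetype=azure_total_cost source="C:\total_cost\\azure_total_cost_by_year.csv" month!="month"
| eval mymonth=strftime(strptime(month, "%Y-%m"), "%Y-%b")
```

"%Y-%m" matches values like 2020-10, and "%Y-%b" renders them as 2020-Oct, 2020-Sep, and so on.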
Hi, is there a way to dynamically show/hide sets of panels over a specific period of time? For example, after 15 seconds a different set of panels appears on the dashboard and the previous set is hidden for a while, looping continuously. I have only done hiding and showing panels with depends="$showPanel$", but that depends on a form input's value, which is set manually. Maybe there's a way to automate changing the values of a dropdown input?
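One way this is commonly done is a small piece of dashboard JavaScript that cycles tokens on a timer, with each panel set declaring depends on one of those tokens. A sketch, assuming hypothetical token names showA/showB/showC and a dashboard that loads this script:

```javascript
// rotate_panels.js - a sketch; assumes the SplunkJS stack is available to the dashboard
require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    var tokens = mvc.Components.get('default');
    var sets = ['showA', 'showB', 'showC'];  // hypothetical token names
    var i = 0;
    tokens.set(sets[0], 'true');
    setInterval(function () {
        tokens.unset(sets[i]);        // hide the current set of panels
        i = (i + 1) % sets.length;
        tokens.set(sets[i], 'true');  // show the next set
    }, 15000);                        // rotate every 15 seconds
});
```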
Hi everyone. I have the result of my search in the table below. Is there a way to transform the table, separating the 2 rows (successful & unsuccessful) into columns per merchand_id field? Thanks!
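The screenshot is not visible here, but if the goal is to pivot the successful/unsuccessful rows into columns per merchand_id, chart (or xyseries) is the usual tool; a sketch assuming hypothetical field names status and count:

```
... | chart sum(count) over merchand_id by status
```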
Hi, I am creating dashboards using the REST API and I want to manage them in a more automated way. Is there a way to update the dashboard contents (JSON) via the API?
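The view objects behind dashboards can be read and updated through the data/ui/views REST endpoint; a hedged sketch using curl against a hypothetical server, where dashboard.xml holds the new definition (for Dashboard Studio, the JSON sits inside the view XML's definition element):

```
# fetch the current definition of the view "my_dashboard" in app "search"
curl -k -u admin:changeme \
     https://splunk.example.com:8089/servicesNS/nobody/search/data/ui/views/my_dashboard

# replace the definition with the contents of a local file
curl -k -u admin:changeme \
     https://splunk.example.com:8089/servicesNS/nobody/search/data/ui/views/my_dashboard \
     --data-urlencode eai:data@dashboard.xml
```

The host, credentials, app, and view name are placeholders to adapt.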
Hi all, I have to use Eventgen to populate a demo I prepared. I am able to generate events from a template, changing some values in a token from a file. My question: is it possible to populate two correlated tokens? In other words, my events contain two tokens, token1 and token2, and my file contains the following table:

value1 message1
value2 message2
value3 message3

Is it possible to keep them paired in the same event? If value1 goes into token1, message1 goes into token2; if value2 goes into token1, message2 goes into token2; if value3 goes into token1, message3 goes into token2. To do this, should I put the values and messages in the same file or in two different files? And if in two different files, how can I be sure they always stay in the correct position?

Cheers,
Giuseppe
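As I understand it, Eventgen's mvfile replacement type covers exactly this case: all tokens that point at the same file draw from the same randomly chosen line, with :N selecting the column, so value and message stay paired. A sketch with hypothetical paths and token markers:

```
[mysample.template]
token.0.token = ##TOKEN1##
token.0.replacementType = mvfile
token.0.replacement = $SPLUNK_HOME/etc/apps/myapp/samples/pairs.csv:1

token.1.token = ##TOKEN2##
token.1.replacementType = mvfile
token.1.replacement = $SPLUNK_HOME/etc/apps/myapp/samples/pairs.csv:2
```

So a single file with one value,message pair per line should keep them in the correct position; two separate files would not guarantee that.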
Hi Splunk Admins, hi users,

Some background on our application: it is a C# console application running in a distributed computing environment that uses an IBM Platform Symphony grid as the hosting platform. To better understand what happens on each grid node, the application's logs are sent to Splunk. The application takes one request at a time and does not support asynchronous execution. Each request is called a RunId in our world, and Splunk assigns one session per request, so the logs can be queried by either RunId or SessionId.

We normally use the Chrome browser to query the logs in Splunk, but recently, to analyse the huge amount of data written to the logs, we came up with a requirement to fetch query results via the Splunk REST API and use them for analytical purposes: how long a run took, memory consumed, errors, critical data points, etc. A simple query works fine via the REST API and gives us the desired real-time output. But when we try to run the grid session query, it just hangs on the GetSearchPreviewAsync call. Moreover, if we put a breakpoint on the GetSearchPreviewAsync call, wait a minute, and then continue execution, we do get results when they are few in number. The same does not work when the result set is very large and the query takes a while to execute.

Could you please advise what we are doing wrong, and whether we need to make changes in the code?

Thanks & regards,
Shobhit Bansal
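If the preview call is the thing that hangs, one workaround worth trying is the blocking export endpoint, which streams complete results instead of previews; a hedged curl sketch against a hypothetical server, with placeholder index and field names:

```
curl -k -u admin:changeme \
     https://splunk.example.com:8089/services/search/jobs/export \
     --data-urlencode search='search index=grid_logs RunId=12345' \
     -d output_mode=json
```

The same endpoint can be called from C# with a plain HTTP client and the response consumed as a stream, which sidesteps the preview mechanism entirely.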
If _raw is the same as above, I want to search with the query below:

index=_internal sourcetype=splunkd

I want to use the map command with the contents of the C column, without putting the A and B columns in the map command.

Not this: | map search="search earliest=-1d@d latest=now index=$a$ sourcetype=$b$"

I used this SPL, but it did not produce the desired result:

| map search="search earliest=-1d@d latest=now $c$"

I expect this query to mean: search earliest=-1d@d latest=now index=$a$ sourcetype=$b$

Please help with processing a variable string inside another variable string in the map command.
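map does substitute a field that contains a whole search fragment, but the fragment has to exist as a single field value before map runs; a sketch that builds such a field (here called c) with eval and passes it through:

```
index=_internal sourcetype=splunkd
| stats count by index, sourcetype
| eval c="index=".index." sourcetype=".sourcetype
| map maxsearches=10 search="search earliest=-1d@d latest=now $c$"
```

The field names and maxsearches value are assumptions; the point is that c must already hold the full "index=... sourcetype=..." string when map expands $c$.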
Hi Splunk experts, I just want to ask if any of you has experience creating an auto-loading dashboard: say dashboard A changes into dashboard B and then into dashboard C every minute, without human intervention. It would look like a carousel, but it isn't one. What I want to accomplish is a complete view without clicking anything. Another possible solution I can think of is hiding/showing panels, but I have no idea where to set the timer to trigger the hide/show capability. Hope you guys can help me with this. Thank you.
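For rotating whole dashboards rather than panels, one common trick is a small script, attached to each dashboard in the rotation, that redirects the browser on a timer; a sketch with hypothetical app and dashboard names:

```javascript
// cycle_dashboards.js - a sketch; load it from every dashboard in the rotation
var dashboards = [
    '/app/myapp/dashboard_a',
    '/app/myapp/dashboard_b',
    '/app/myapp/dashboard_c'
];
// find where we are in the rotation and move to the next entry after 60 seconds
var i = dashboards.indexOf(window.location.pathname);
var next = dashboards[(i + 1) % dashboards.length];
setTimeout(function () { window.location.href = next; }, 60000);
```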
Hi, I use a scheduled search to generate a CSV lookup:

| inputlookup fo_all where TYPE="PC" | rename HOSTNAME as host | table host | outputlookup industrial_host.csv

As you can see, I build a list of hosts to copy into the lookup, but at the beginning of my lookup I need the header name "host", followed by the list of hosts. How can I do this, please?
Hello, in the example below, "fo_all" is a KV store. In this KV store I identify the HOSTNAME values matching my where condition and cross the results with my macro:

[| inputlookup fo_all where TYPE="IndPC" (DOMAIN=I OR DOMAIN=IN OR DOMAIN=AW) (CATEGORY="LAPTOP" OR CATEGORY="TABLET" OR CATEGORY="DESKTOP") (STATUS="Production") | rename HOSTNAME as host ] `diskspace`

Which is better for performance: querying the KV store directly, or running a scheduled search against the KV store to generate a CSV lookup first?

[| inputlookup ind.csv ] `diskspace`

Thanks
Hi all, I'm stuck on a couple of questions while working on securing communication between Splunk nodes. I have 4 forwarders sending data to 3 clustered indexers, 1 deployment server to manage the forwarders, and 1 cluster master.

Question 1 (securing forwarders -> indexers): the Splunk documentation says we need certificates on both forwarders and indexers, but I don't understand why they are required on the forwarders as well. Having certificates only on the indexers does the job, right? Is it the Splunk configuration or the code that demands certificates on the forwarders too?

Question 2 (cluster master -> indexers): since we are already in production, is there any impact, or are there precautions to take, when securing communication between the CM and the indexers? Please share a link to the documentation or list the high-level configuration steps.

Thanks in advance
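On Question 1: the indexer's server certificate is what encrypts the channel, and a certificate on the forwarder side is only mandatory when the indexer enforces mutual TLS with requireClientCert = true; otherwise server-side certificates alone do the job. A sketch of both sides (paths are placeholders, and attribute names vary somewhat across Splunk versions):

```
# inputs.conf on each indexer
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
sslPassword = <cert_password>
requireClientCert = false

# outputs.conf on each forwarder
[tcpout:primary_indexers]
server = idx1:9997,idx2:9997,idx3:9997
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
sslVerifyServerCert = true
```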