All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a non-clustered Splunk Enterprise deployment (1), where one of the three indexers is the license master. I have another, clustered Splunk Enterprise deployment (2) that I must configure to contact the LM of deployment (1). I tried connecting the LM of deployment 2 to the LM of deployment 1, and that did not work. The connection itself was successful and LM 2 appeared to be checking in, but the license usage was not correct. Eventually deployment 2 showed license warnings because the clustered indexers could not use the daisy-chained LM 2 to LM 1 connection. Now I am looking for advice on connecting each of the individual (clustered) indexers in deployment 2 directly to the LM of deployment 1. Is there anything special that needs to be done when connecting clustered indexers individually to an LM in a non-clustered deployment? I would not think so, but any insight or previous experience with this is much appreciated. Thank you.
Hi guys, can you help me find the active rules in Splunk and determine how many log sources are integrated with Splunk? Thanks in advance, Kishore
Hi all, I'm trying to configure a tcpinput endpoint in a way that lets me route incoming traffic to a few different indexes. I'm trying to use tokens to do this. From the Splunk web GUI I can see that it is possible to bind an HEC token to a certain index; it seems I'm not able to do the same with TCP tokens. I've been following this documentation to define my token: https://docs.splunk.com/Documentation/Forwarder/8.0.5/Forwarder/Controlforwarderaccess and then defining this in my indexer instance's inputs.conf file:

[splunktcp://9997]
disabled = 0
index = minikube_on_hs9

[splunktcptoken://my_token]
disabled = 0
index = minikube_on_hs9
token = $7$qlaEJcxHynjqXZHqCddO61xXxB/FUh/aooVPVFvjBEde9OnUZPx6Oz/Te8ye0lJKR/3tkNuCXjK8ccPLsKARgNAIkSg=

After a restart of Splunk, the status of the token was as shown in the attached screenshot. At the moment of creation, though, the token seemed associated with the default index. On my test client, in the outputs.conf file, I have something like this:

[tcpout]
defaultGroup = tcpin-sre-tools
token = $7$2ygFLiflfLjPs/n/jXxuOBI/aSgTK/Hwf+IcSSMkAtt6V+ATWCbOm4+95VpVPag05bco0qjlMuEckfcxtZDBa7h1fu0=

[tcpout:tcpin-sre-tools]
server = my_server_name:9997

There is no error about a missing or mismatched token, but I am still unable to route logs from my client into the specified index on the Splunk side; the logs get ingested into the main index. Any idea how to fix this? Best regards, Giuseppe
Hello everyone, I need some help with a search. I have a lookup that contains phone country area codes associated with countries, and I want to use these area codes (+33, +32, ...) to extract, from the "calling" field, the code at the beginning of the number. For example, lookup.csv:

num    country
+33    France
+32    Belgium
+44    England
+351   Portugal
+216   Tunisia

Field "calling":

+33xxxxxxxxx
+351xxxxxxxxx
+216xxxxxxxx
+32xxxxxxxx

I want to use a rex that points to the "num" field in the lookup file and automatically extracts the area code from the "calling" field thanks to the lookup. Can we use the "num" field of the lookup file as a variable and put this variable in a rex? I think that writing a rex that identifies the first 2 or 3 digits is difficult, because you would have to do it code by code. Thanks!
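The matching the poster describes is a longest-prefix lookup: each calling number should be matched against the longest area code that prefixes it (so +351 wins over +35). A minimal Python sketch of that logic, using the lookup rows from the question (the function name is illustrative, not part of Splunk):

```python
# Area-code lookup from the question's lookup.csv.
AREA_CODES = {
    "+33": "France",
    "+32": "Belgium",
    "+44": "England",
    "+351": "Portugal",
    "+216": "Tunisia",
}

def match_area_code(calling):
    """Return (code, country) for the longest area code prefixing `calling`."""
    # Try longer codes first so that "+351..." matches +351, not a shorter code.
    for code in sorted(AREA_CODES, key=len, reverse=True):
        if calling.startswith(code):
            return code, AREA_CODES[code]
    return None, None

print(match_area_code("+351123456789"))  # ('+351', 'Portugal')
print(match_area_code("+33123456789"))   # ('+33', 'France')
```

In SPL the same idea is often implemented by building one alternation regex out of the lookup's num column and feeding it to rex, rather than writing a pattern per code.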
Does anyone know why tag-based search is not working in metrics commands? Is there a restriction, or is there an alternative approach? This works:

index=_internal tag=windows_lab_iis

No results for this command:

| mcatalog values(metric_name) WHERE tag=windows_lab_iis index=metrics* BY index, host
Hi, could someone please clarify whether the free trial of AppDynamics (SaaS) supports only .NET applications, or whether Java applications can also be configured and monitored in the 15-day free trial? Thanks, Kiran
Hi Splunkers, we have a pie chart with two slices for deployments: one for successful logs and one for failure logs, and we are charting the count of each. Say we have 10 successful logs and 2 failure logs. The two failures were analysed and redeployed, and the redeployments show up in the successful logs, so the overall count becomes 12 successful logs and 2 failure logs. Even though the failed deployments have since succeeded and appear in the success slice, the same failure logs still appear in the failure slice, which makes the counts misleading. Is there any way to show only the latest data in the pie chart? Please note we have a pie chart with two slices: deployment success and deployment failure.
When I run the following query:

.... | bin _time span=5m | timechart avg(responseTime)

(responseTime is an extracted field.) My understanding of this query is: divide the timeline into a series of buckets of 5 minutes each, find the average of responseTime for each bucket, and plot the graph (average responseTime on the Y axis; for timechart the X axis is always time). The graph I see is not continuous, as there may be time slots with no records and hence no data point. Now, if I change my query to:

.... | bin _time span=5m | chart avg(responseTime) by _time

my understanding is that this query should behave the same as the first one. But in contrast to the first graph, the one I see is continuous, without any break. I am not able to understand why the two queries behave differently.
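Both queries perform the same bucketing step; the visual difference the poster describes comes down to how buckets with no events are rendered: one rendering only plots the buckets that actually appear in the results, while the other lays the results out on a full, evenly spaced time axis. A small illustrative sketch (not Splunk internals; timestamps are made up) of the two views of the same bucketed data:

```python
SPAN = 300  # 5 minutes, in seconds

def bucket(ts):
    """Floor a timestamp to its 5-minute bucket, as `bin _time span=5m` does."""
    return ts - ts % SPAN

events = [0, 10, 650, 1500]  # epoch seconds; note there are no events in 300-600

# Buckets that actually contain events -- a gap between 0 and 600:
present = sorted({bucket(t) for t in events})
print(present)  # [0, 600, 1500]

# A full evenly spaced axis over the same range -- every bucket appears,
# whether or not it holds data:
start, end = present[0], present[-1]
axis = list(range(start, end + SPAN, SPAN))
print(axis)  # [0, 300, 600, 900, 1200, 1500]
```

Which rendering a given chart uses depends on the command and the chart settings (e.g. how null series values are treated), which is the usual source of this discrepancy.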
Hello World. I have a Splunk search that results in the table below:

      Col1  Col2  Col3  Col4
Row1  X     X     X     X
Row2  X     X     X     X
Row3  X     X     X     X

My need now is to compute Col2 - Col1, Col3 - Col2, and Col4 - Col3. Please note the column names are not static; they differ depending on the search and could take around 40 different values.
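Since the column names are dynamic, the differencing has to iterate over whatever columns the search happened to produce (in SPL this is the kind of job `foreach` is typically used for). A minimal Python sketch of the per-row logic, with made-up column names and values:

```python
def successive_diffs(row, columns):
    """Return {ColN: ColN - ColN-1} for each adjacent pair of columns,
    whatever the columns happen to be named."""
    return {
        nxt: row[nxt] - row[prev]
        for prev, nxt in zip(columns, columns[1:])
    }

row = {"Col1": 1, "Col2": 4, "Col3": 9, "Col4": 16}
cols = ["Col1", "Col2", "Col3", "Col4"]  # discovered at runtime, not hard-coded
print(successive_diffs(row, cols))  # {'Col2': 3, 'Col3': 5, 'Col4': 7}
```

The key design point is that the column list is data, not code: with ~40 possible names, nothing is written per column.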
Hi, I want a single graph to show both sets of values. One search is:

index="cumu_open_csv" Assignee="ram"
| eval open_field=if(in(Status,"Open","Reopened","Waiting","In Progress"), 1,0)
| stats count(eval(open_field=1)) AS Open, count(eval(open_field=0)) AS closed by CW_Created

which gives me one table. Similarly, I have another search:

index="cumu_open_csv" Assignee="ram"
| eval open_field=if(in(Status,"Open","Reopened","Waiting","In Progress"), 1,0)
| stats count(eval(open_field=1)) AS DueOpen by CW_DueDate

which gives me a second table. I tried to combine these two using appendcols, but the X axis has only CW_Created, and the second table's details are shown against the wrong CW. I want CW_Created and CW_DueDate to be combined into a single table like CW, Open, Close, DueCount; wherever DueCount does not exist for a particular CW, fill it with 0, and display the data for the others, like so:

      Open  Close  DueCount
CW27  7     0      0
CW28  2     0      0
CW29  0     0      4
CW30  0     7      3
CW31  0     0      1
CW32  0     0      1

Kindly help me with this.
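The problem with appendcols is that it pastes rows side by side by position rather than aligning them on a shared key; what's wanted is an outer join on the CW value with gaps filled by 0 (in SPL this is often done with `append` plus `stats sum(...) by CW` instead of `appendcols`). A Python sketch of the combining logic, with illustrative data:

```python
# Results of the two searches, keyed by their respective CW fields
# (values here are made up for illustration).
open_close = {"CW27": (7, 0), "CW28": (2, 0), "CW30": (0, 7)}  # by CW_Created
due = {"CW29": 4, "CW30": 3, "CW31": 1, "CW32": 1}             # by CW_DueDate

# Outer-join on the union of CW keys, filling missing values with 0.
combined = {}
for cw in sorted(set(open_close) | set(due)):
    o, c = open_close.get(cw, (0, 0))
    combined[cw] = {"Open": o, "Close": c, "DueCount": due.get(cw, 0)}

print(combined["CW30"])  # {'Open': 0, 'Close': 7, 'DueCount': 3}
print(combined["CW29"])  # {'Open': 0, 'Close': 0, 'DueCount': 4}
```

Because the key set is the union of both tables' keys, a CW appearing in only one search still gets a complete row.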
Windows event security logs are consuming most of the license size limit. I tried reconfiguring the forwarder after unchecking the Windows event logs, but it is still the same. I also tried configuring the inputs.conf below, but still the same:

[WinEventLog://Application]
disabled = true

[WinEventLog://Security]
disabled = true

[WinEventLog://System]
disabled = true
Hello Splunkers, I have a problem with selecting the latest time in the time range picker on dashboards: I open the time range picker and select the start time and the end time, but after I click Apply the end time does not change; only the start time is modified, and I have to reopen the time range picker and set the end time again. Do you have the same problem? Thank you. Splunk version: 8
I am working with Linux auditd data. The first search, below, pulls together all of the applications executed by a user during their session:

index=os sourcetype=auditd NOT exe=/usr/sbin/crond
| transaction ses startswith=USER_START endswith=USER_END
| rename hostname AS src
| eval in_time=_time
| eval login_time=strftime(in_time,"%d-%b-%Y %H:%M:%S.%3N")
| eval out_time=_time + duration
| eval logout_time=strftime(out_time,"%d-%b-%Y %H:%M:%S.%3N")
| search src=$field2$ auid=$field3$ host=$field4$
| table login_time,logout_time,duration,src,host,uid,auid,exe,key

The drilldown looks like this; it takes the host and originating user name from the first search and finds all command-line executions that user performed:

index=os sourcetype=auditd host=$field4$
| `find_commands`
| transaction timestamp
| search auid=$field2$ type=EXECVE
| table timestamp,host,ppid,pid,auid,uid,command,proc_command,success
| sort timestamp

Where I am struggling is getting the timestamps from the login_time and logout_time fields of the first search to populate the time picker of the drilldown.
Dashboard Source:

<form>
  <label>Linux Auditd</label>
  <description>User session monitoring and the applications they ran</description>
  <fieldset submitButton="true">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="text" token="field2">
      <label>Source System</label>
      <default>*</default>
    </input>
    <input type="text" token="field4">
      <label>Target System</label>
      <default>*</default>
    </input>
    <input type="text" token="field3">
      <label>Source User</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Session Monitoring</title>
      <table>
        <search>
          <query>index=os sourcetype=auditd NOT exe=/usr/sbin/crond | transaction ses startswith=USER_START endswith=USER_END | rename hostname AS src | search src=$field2$ | eval in_time=_time | eval login_time=strftime(in_time,"%d-%b-%Y %H:%M:%S.%3N") | eval out_time=_time + duration | eval logout_time=strftime(out_time,"%d-%b-%Y %H:%M:%S.%3N") | search auid=$field3$ host=$field4$ | table login_time,logout_time,duration,src,host,uid,auid,exe,key</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <link target="_blank">search?q=index%3Dos%20sourcetype%3Dauditd%20host%3D$field4$%20%7C%20%60find_commands%60%20%7C%20transaction%20timestamp%20%7C%20search%20auid%3D$field2$%20type%3DEXECVE%20%7C%20table%20timestamp%2Chost%2Cppid%2Cpid%2Cauid%2Cuid%2Ccommand%2Cproc_command%2Csuccess%20%7C%20sort%20timestamp&amp;earliest=$row.login_time$&amp;latest=$row.logout_time$</link>
        </drilldown>
      </table>
    </panel>
  </row>
</form>
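One thing worth noting about the drilldown above: $row.login_time$ and $row.logout_time$ carry the formatted display strings, while earliest/latest generally expect epoch times, so a common approach is to also table the epoch copies the search already computes (in_time/out_time) and pass those in the drilldown instead. A Python sketch of the conversion involved, with an invented timestamp (Python's %f stands in for Splunk's %3N milliseconds):

```python
from datetime import datetime

def to_epoch(ts):
    """Parse a '01-Jan-2020 00:00:10.500'-style string to epoch seconds."""
    return datetime.strptime(ts, "%d-%b-%Y %H:%M:%S.%f").timestamp()

# The difference between two parsed timestamps is timezone-independent:
delta = to_epoch("01-Jan-2020 00:00:10.500") - to_epoch("01-Jan-2020 00:00:00.000")
print(delta)  # 10.5
```

Carrying the numeric epoch alongside the pretty string avoids any round-trip parsing in the drilldown at all.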
I need to mask cs_cookie, cs_Referer, and cs_uri_path, but the headers are still showing values after using SEDCMD. I need to mask the header values as well.
Hi, I am trying to onboard the DNS application logs from Windows Server 2012 event logs -> "Application and Services Logs" -> "DNS Server". I have added the stanza below to inputs.conf on the forwarder, but the data is not being ingested:

[WinEventLog:DNS-Server]
disabled = 0

Do I need to change anything else in inputs.conf?
Hello! I am trying to use as few static colors as possible in a dashboard so that users can switch between dark and light mode without the dashboard looking odd. It would be useful to have the current theme in a token so that I could switch static colors based on the chosen theme. Too bad it is not in the env: tokens.

<form theme="light">
  <label>Dashboard</label>
  ...

Any ideas?
What CSS or XML property needs to be changed to avoid greying out the rest of the data when hovering over a bar graph? Also, is there a way to rotate the tooltip on hover?
Hi everyone! I'm working on a report to find the hosts that are not reporting logs. Since it's a huge data set, I'm using the metadata command. This command does not let me set up an alert based on time range values. Can anyone suggest how to pass dynamic time values to the metadata command? Below is my query:

| metadata type=hosts index=_internal splunk_server_group=*
| fields host
| join type=left host [| metadata index=os* type=hosts splunk_server_group=*]
| table host lastTime
| eval reporting=case(isnull(lastTime), "no", 1=1, "yes")
| eval time=strftime(lastTime,"%b %d %T %Y %Z")
| dedup host
| where reporting="no"

Thanks in advance.
Is there a way to automatically close all of the notables associated with an investigation when you close the investigation itself? Currently Splunk just gives me a warning that "x number of notables are still open in this investigation." The only way I have found to close notables is to go back to the Incident Review interface, manually filter them, and change their state to closed. This seems like an unnecessary step if I'm already closing the investigation. Am I missing something?
Hi, I have the following kinds of URLs:

https://abc.com/loc/country/123/iss
https://abc.com/a1/v1/country/456.json?returnFields=attr,add
https://abc.com/a1/countries/456/av/int/orig/curr

I want to extract the numbers (123, 456): whatever digits come after country / countries in the URL. Thanks in advance!
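One way to phrase the extraction is a single pattern that captures the digits following either "country/" or "countries/". A hedged Python sketch using the URLs from the question (in SPL, the same pattern would go into a rex with a named capture group):

```python
import re

# Capture the digit run that immediately follows /country/ or /countries/.
PATTERN = re.compile(r"/countr(?:y|ies)/(\d+)")

urls = [
    "https://abc.com/loc/country/123/iss",
    "https://abc.com/a1/v1/country/456.json?returnFields=attr,add",
    "https://abc.com/a1/countries/456/av/int/orig/curr",
]
for url in urls:
    m = PATTERN.search(url)
    print(m.group(1) if m else None)  # 123, 456, 456
```

Because \d+ stops at the first non-digit, the "456.json" case yields 456 without the extension leaking into the capture.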