All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, One of our clients hosts the application UI on their system via WCF services, through which they access the application. As an EUM experience, they want AppD to gather and report metrics such as how many users logged in to the application, how many encountered errors, session details, etc., and perhaps their system health as well. How does AppD come in handy here? Please suggest. Thanks, Deepika
Hello, We have a dashboard panel that contains an iframe. We realize that the iframe has been deprecated, but it still works. Our helpdesk needs this to view "live" what our users see when they access our website. We are on Splunk Enterprise v8.x. What command(s), in XML, HTML, CSS, or JS, can we use to refresh this panel? This panel is part of a larger dashboard, so refreshing the entire dashboard is not an option. XML <row> <panel> <html> <title>iframe css</title> <div class="h_iframe"> <iframe src="https://somesite.com" /> </div> <h1>Live view of our website.</h1> </html> </panel> </row> CSS .h_iframe { position:left; padding-top: 0%; } .h_iframe iframe { position:left; top:0; left:0; width:25%; height:25%; } Stay safe and healthy, you and yours. Thanks and God bless, Genesius
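One possible approach, sketched below under assumptions: Splunk typically strips inline <script> tags from <html> panels, so this would need to live in a dashboard JavaScript file referenced by the dashboard. Reassigning the iframe's src on a timer forces a reload of just that panel; the ".h_iframe iframe" selector matches the CSS class shown above, and the 60-second interval is illustrative.

```
// Sketch: reload only the iframe, not the whole dashboard.
setInterval(function () {
    var frame = document.querySelector(".h_iframe iframe");
    if (frame) {
        // Reassigning src triggers the browser to reload the frame.
        frame.src = frame.src;
    }
}, 60000); // every 60 seconds (adjust as needed)
```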
Windows 2016 / Splunk 8.0.4.1 Today I installed Splunk and configured it as a heavy forwarder, ref. https://docs.splunk.com/Documentation/Splunk/8.0.4/AddMcafeeCloud/InstallHWF Currently I'm able to search the _internal index and see the splunkd.log events from the host, so forwarding and receiving should be just fine. On the heavy forwarder I have defined a TCP port 514 without host limitations. Sourcetype and index are also defined.  [tcp://514] connection_host = dns index = network sourcetype = bluecoat:proxysg:access:syslog But when searching the index from the search head I'm not able to see any syslog events. I assume that our network administrator has configured the proxy to send syslog to the correct servername/port, but just in case I also used the Kiwi Syslog Message Generator to test sending messages as well but ... nothing. Searching for the message text: nothing; source IP: nothing.  I'm on Windows, so using netstat | findstr 514 I can see that there is a connection from the server which I use to send the test message from.  A bit lost right now....
Hello, I would like to create a table for the past 14 days of events. 13 of the table cells will contain output from a lookup table (the last 13 days). The 14th cell will be "live" data from a realtime search. The table needs to be 8 rows by 3 columns (see below). The cell listed below as Friday will be the current day of the week. *LUP# represents a value from the lookup table; RT represents a value from the realtime search. Stay safe and healthy, you and yours. God bless, Genesius
Hi All, I'm trying to combine a number of fields using: | stats values(task_name) as task_name by idnumber This works great when it comes to timestamps associated with the idnumber, but for the tasks associated with it, Splunk sorts them alphabetically. This leads to problems down the line when we try to see which task was executed first. Part of the problem is that the number of timestamps can differ from the number of tasks, so creating a new field with timestamp and task combined does not work. #original data: sysmodtime,task_name,idnumber 05/01/20 12:00 PM,one,1 05/01/20 12:01 AM,two,1 05/01/20 12:02 AM,two,1 05/01/20 12:02 AM,two,1 05/02/20 12:00 PM,one,2 04/02/20 12:00 AM,one,2 04/02/20 01:00 AM,one,2 04/02/20 02:00 AM,one,3 05/04/20 12:00 PM,one,4 05/03/20 12:00 PM,two,4 05/03/20 12:01 PM,three,4 05/03/20 12:02 PM,four,4 05/03/20 12:40 PM,five,4 05/03/20 12:50 PM,six,4 #the conflicting results after the stats command (see attachment) Any advice would be welcome. Cheers, Roelof
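One possible approach, sketched below: unlike values(), the list() aggregation preserves the order in which rows arrive at stats, so sorting chronologically first should keep the tasks in execution order. Since sysmodtime here looks like a string, it is parsed with strptime before sorting; the base search and the _sorttime helper field are illustrative.

```
index=your_index sourcetype=your_sourcetype
| eval _sorttime=strptime(sysmodtime, "%m/%d/%y %I:%M %p")
| sort 0 _sorttime
| stats list(sysmodtime) as sysmodtime, list(task_name) as task_name by idnumber
```

Note that list() also keeps duplicates (e.g. the repeated "two" tasks for idnumber 1), which values() would collapse.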
Hi All, I have a requirement to display a custom label like "maintenance" during planned activity, when there is no data. Currently the chart shows empty space, but it needs to display "maintenance" as a custom word. I tried fillnull; it shows in a table but not in the chart.  index=myindex source=mysource status=* | timechart count as total, count(eval(status=="PASS")) as Success,count(eval(status=="Fail")) as Failure | eval PF=round((Failure/total)*100), "Success%"=round((100-PF)) Please let me know of any suggestions to achieve this. Thanks! Pavan
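One workaround sketch, under the assumption that a chart can only plot numbers rather than words: fill the empty buckets with an explicit zero and add a flag field so the gaps are at least identifiable; the "state" field name below is illustrative, and in a chart it could drive an annotation or a separate series rather than text.

```
index=myindex source=mysource status=*
| timechart count as total, count(eval(status=="PASS")) as Success, count(eval(status=="Fail")) as Failure
| fillnull value=0 total Success Failure
| eval PF=round((Failure/total)*100), "Success%"=round((100-PF))
| eval state=if(total=0, "maintenance", "normal")
```

In the table view the "maintenance" label then appears directly; for the chart itself, a possible compromise is to plot the filled zeros and use the state field in a chart overlay or annotation.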
I wanted to implement a date picker calendar as an input. I followed this link: https://community.splunk.com/t5/Dashboards-Visualizations/Jquery-datepicker-in-splunk/td-p/361049 And I was able to implement one time range. But if I have to implement multiple date pickers, then every time I have to modify my JavaScript. Is there a way to have one generalized JavaScript file so we can implement N date pickers, each independent of the others? @niketn 
Hello, This is a difficult one to explain. Best to show the code and the intended outcomes. Note, there are 7+ possible values, but I will use only 2 for clarity.   index=mf SYSNAME=mf MFSOURCETYPE=SMF110 TRAN !="C*" OAPPLID=* | eval TransactionSpeed=(SUSPTIME_MICROSEC + USRCPUT_MICROSEC) | eval avgTransactionSpeedSec=round(avgTransactionSpeed/1000000,5) | eval OAPPLID=if( like(OAPPLID,"CI0%"),OAPPLID,"ALL") | eval upper_limit=case(OAPPLID="CI04JPAD",.1,OAPPLID="CI04JPAF",.2,OAPPLID="ALL",1.5) | eval lower_limit1=upper_limit/3 | eval lower_limit2=(upper_limit/3)*2 | stats avg(TransactionSpeed) AS avgTransactionSpeed, values(upper_limit) AS upper_limit, values(lower_limit1) AS lower_limit1, values(lower_limit2) AS lower_limit2 | gauge avgTransactionSpeedSec 0 lower_limit1 lower_limit2 upper_limit    When OAPPLID is a single value ("CI04JPAD" or "CI04JPAF"), the upper_limit is set to .1 or .2 respectively.   When OAPPLID is set to * (all possible values), the upper_limit should be set to 1.5. Instead, upper_limit is set to 100. Stay safe and healthy, you and yours. God bless, Genesius
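A guess at what is happening, with a sketch: when OAPPLID=*, the eval keeps CI04JPAD and CI04JPAF as-is and maps the rest to "ALL", so values(upper_limit) becomes a multivalue field (.1 .2 1.5) and gauge appears to fall back to its 0 to 100 default range. Collapsing to a single number, for example with max(), avoids the multivalue. Note also that avgTransactionSpeedSec would need to be computed after stats, since avgTransactionSpeed only exists then.

```
| stats avg(TransactionSpeed) AS avgTransactionSpeed,
        max(upper_limit) AS upper_limit,
        max(lower_limit1) AS lower_limit1,
        max(lower_limit2) AS lower_limit2
| eval avgTransactionSpeedSec=round(avgTransactionSpeed/1000000,5)
| gauge avgTransactionSpeedSec 0 lower_limit1 lower_limit2 upper_limit
```

This is a sketch assuming max() gives the intended limit when several OAPPLIDs are in play; latest() or a different aggregation may fit the business rule better.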
Hi, I am trying to join 2 searches which produce some results, but I am getting this error: "subsearch produced 50000 results truncating to 50000". I can't change limits.conf, so is there any other way to get the stats without using join?  This is my search:   index=test_index ip="83.136.24.154" sourcetype=audit_log event=Attempt NOT messagetype=Request NOT status=failure | rex field=idDetails "id\:(?<id>.*)" | eval successful_login=if(status == "success", "Yes", "No") | rename subject AS username | join type=left id username [ search index=test_index sourcetype=server_log "validator.Credential" | rex field=_raw "id\:(?<id>[^\s]+)" | rex field=_raw "mytemp\s(?<message>.*)$" | rex field=_raw "user\s\[?(?<username>[^\]]+)" | fields id,message,username] | table _time,username,successful_login,message   Let me know if someone can advise. 
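One common join-free pattern, sketched with the fields from the search above: pull both sourcetypes in a single base search and roll them up with stats by the shared id, which avoids subsearch limits entirely. The rex patterns are taken from the question and may need adjusting since the two sourcetypes format the id differently.

```
index=test_index ((ip="83.136.24.154" sourcetype=audit_log event=Attempt NOT messagetype=Request NOT status=failure) OR (sourcetype=server_log "validator.Credential"))
| rex field=_raw "id\:(?<id>[^\s]+)"
| rex field=_raw "mytemp\s(?<message>.*)$"
| rex field=_raw "user\s\[?(?<username>[^\]]+)"
| eval username=coalesce(username, subject)
| eval successful_login=if(status == "success", "Yes", "No")
| stats min(_time) as _time, values(username) as username, values(successful_login) as successful_login, values(message) as message by id
| table _time, username, successful_login, message
```

The stats-by-id step plays the role of the left join: rows from either sourcetype with the same id end up merged into one result row.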
Hi Team, We have multiple log files which are regularly updated, and they are ingested into Splunk. For example, as mentioned in the query below, when I search the data for the last 15 minutes I can see "n" events ingested for each and every minute, and sometimes there are multiple events in a single minute as well. So if I run the specific query (index=abc host="efg" machinedata OR xxxx-) and there are no events for the next 3 minutes, it should trigger an email alert to the concerned team. The search query looks like this: index=abc host="efg" machinedata OR xxxx- Events returned by the search would be as below: 2020-06-19 05:15:53,083 INFO xxxx- splunk machinedata - Content Type : text/plain; charset=us-ascii xxxxxxx-xxxxx-xxxxx 2020-06-19 05:15:53,083 INFO xxxx- splunk machinedata - Body type: .net.lang.String xxxxx-xxxx-xxxxxxx 2020-06-19 05:15:52,881 DEBUG xxxx- splunk machinedata - [AccessMessage]: Matched: xxxx-xxxx-xxxx-xxxxx 2020-06-19 05:15:52,881 DEBUG xxxx- splunk machinedata - [abc (accept)]: Subject: [sample] abc def ijk XXXXXXXXXX So kindly help with a query that triggers an email if there are no new events for the last 3 minutes.
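A common pattern for this, sketched below: schedule the search every few minutes over the last 3 minutes (e.g. earliest=-3m@m latest=@m) and make the "no data" case produce a row, so the alert can use the standard "number of results greater than 0" trigger with an email action. The parentheses around the OR are added to make the intent explicit; verify they match your original query.

```
index=abc host="efg" (machinedata OR xxxx-)
| stats count
| where count=0
```

stats count always returns exactly one row, so the where clause leaves a result only when zero events arrived in the window, which is exactly when the email should fire.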
Hi All, We are running four jobs; each runs individually. I have to consolidate all four keywords and mark the result as success, otherwise as failure. Can anyone help with creating an alert? Example: A completed B Completed C completed D Completed  
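A sketch under assumptions, since the question does not show the raw events: if each job logs a line containing its name and "completed", you could count the distinct completed jobs in the alert's time window and mark success only when all four are present. The index name, the rex pattern, and the job letters are illustrative.

```
index=your_index ("A completed" OR "B Completed" OR "C completed" OR "D Completed")
| rex field=_raw "(?<job>[ABCD])\s+[Cc]ompleted"
| stats dc(job) as completed_jobs
| eval result=if(completed_jobs=4, "success", "failure")
```

The alert could then trigger on result="failure" (e.g. scheduled once per run window), or simply on "number of results less than 4" against the pre-stats event count.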
Hi, I need your support, Splunkers. I want to search for a user created and deleted within 10 minutes. So I am starting the search like this: index=windows_auth EventID=4720 AND EventID=4726 Then I just got stuck, no matter what I tried. I need to find when these 2 events occur within 10 minutes for the same username. Thanks for your help!
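One way to sketch this: a single event never carries both EventIDs, so an AND between them matches nothing; search them with OR instead, group by the target account, and compare the creation (4720) and deletion (4726) timestamps. TargetUserName is the usual field name in Windows security logs, but verify it against your data; 600 seconds is the 10-minute window.

```
index=windows_auth (EventID=4720 OR EventID=4726)
| stats min(eval(if(EventID=4720,_time,null()))) as created,
        min(eval(if(EventID=4726,_time,null()))) as deleted
        by TargetUserName
| where isnotnull(created) AND isnotnull(deleted) AND deleted>=created AND deleted-created<=600
```

The transaction command (transaction TargetUserName maxspan=10m) is another option, but the stats form is generally cheaper and easier to alert on.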
I have one main dashboard created from a 'Single Value' chart, which displays a word like 'APP'. I have used the 'Link to dashboard' drilldown method to link this 'Single Value' chart to another dashboard. My target dashboard has 3 pie charts, and my requirement is that the aggregate of the values in the 3 charts should be reflected on my main dashboard in color format. For example, in my target dashboard: Pie chart1: Red color - 35% Green color - 65% Pie chart2: Red color - 5% Green color - 95% Pie chart3: Red color - 0% Green color - 100% So the aggregate of these 3 pie chart values should be reflected in my single value chart dashboard in color format. For example, if my aggregate of green is 80%, then 80% of my main dashboard surface should be green and the rest red. Is this possible to achieve in Splunk?
Hi all, I'm trying to pull data from Azure Log Analytics workspace to Splunk. I have installed the add-on Microsoft Log Analytics Add-on (https://splunkbase.splunk.com/app/4127/) . When I checked the log, this is what I see 2020-06-19 11:23:36,446 INFO pid=85670 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1 2020-06-19 11:23:37,263 INFO pid=85670 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1 2020-06-19 11:23:38,694 INFO pid=85670 tid=MainThread file=splunk_rest_client.py:_request_handler:100 | Use HTTP connection pooling 2020-06-19 11:23:38,694 DEBUG pid=85670 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-ms-loganalytics/s torage/collections/config/TA_ms_loganalytics_checkpointer (body: {}) 2020-06-19 11:23:38,695 INFO pid=85670 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1 2020-06-19 11:23:38,699 DEBUG pid=85670 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-ms-loganalytics/storage/collecti ons/config/TA_ms_loganalytics_checkpointer HTTP/1.1" 200 5632 2020-06-19 11:23:38,699 DEBUG pid=85670 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.005300 2020-06-19 11:23:38,700 DEBUG pid=85670 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-ms-loganalytics/s torage/collections/config/ (body: {'offset': 0, 'count': -1, 'search': 'TA_ms_loganalytics_checkpointer'}) 2020-06-19 11:23:38,702 DEBUG pid=85670 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-ms-loganalytics/storage/collecti ons/config/?offset=0&count=-1&search=TA_ms_loganalytics_checkpointer HTTP/1.1" 200 4830 2020-06-19 11:23:38,702 DEBUG pid=85670 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.002460 2020-06-19 11:23:38,704 DEBUG pid=85670 
tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-ms-loganalytics/s torage/collections/data/TA_ms_loganalytics_checkpointer/AzureLogAnalytic (body: {}) 2020-06-19 11:23:38,706 DEBUG pid=85670 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA-ms-loganalytics/storage/collecti ons/data/TA_ms_loganalytics_checkpointer/AzureLogAnalytic HTTP/1.1" 404 140 2020-06-19 11:23:38,708 DEBUG pid=85670 tid=MainThread file=log.py:debug:108 | 16e5bba7-a023-431f-9813-396e814eabc9 - Authority:Performing instance discovery : https://login.microsoftonline.com/0ae51e19-07c8-4e4b-bb6d-648ee58410f4 2020-06-19 11:23:38,708 DEBUG pid=85670 tid=MainThread file=log.py:debug:108 | 16e5bba7-a023-431f-9813-396e814eabc9 - Authority:Performing static instance di scovery 2020-06-19 11:23:38,708 DEBUG pid=85670 tid=MainThread file=log.py:debug:108 | 16e5bba7-a023-431f-9813-396e814eabc9 - Authority:Authority validated via stati c instance discovery 2020-06-19 11:23:38,709 INFO pid=85670 tid=MainThread file=log.py:info:103 | 16e5bba7-a023-431f-9813-396e814eabc9 - TokenRequest:Getting token with client cr edentials. 
2020-06-19 11:23:38,709 DEBUG pid=85670 tid=MainThread file=log.py:debug:108 | 16e5bba7-a023-431f-9813-396e814eabc9 - TokenRequest:No user_id passed for cach e query 2020-06-19 11:23:38,709 DEBUG pid=85670 tid=MainThread file=log.py:debug:108 | 16e5bba7-a023-431f-9813-396e814eabc9 - OAuth2Client:finding with query: {"_clientId": "dce1fe27-225d-4615-bcee-d22ff8071a0f"} 2020-06-19 11:23:38,709 DEBUG pid=85670 tid=MainThread file=log.py:debug:108 | 16e5bba7-a023-431f-9813-396e814eabc9 - OAuth2Client:Looking for potential cache entries: 2020-06-19 11:23:38,709 DEBUG pid=85670 tid=MainThread file=log.py:debug:108 | 16e5bba7-a023-431f-9813-396e814eabc9 - OAuth2Client:{"_clientId": "dce1fe27-225d-4615-bcee-d22ff8071a0f"} 2020-06-19 11:23:38,709 DEBUG pid=85670 tid=MainThread file=log.py:debug:108 | 16e5bba7-a023-431f-9813-396e814eabc9 - OAuth2Client:Found 0 potential entries. 2020-06-19 11:23:38,713 DEBUG pid=85670 tid=MainThread file=connectionpool.py:_new_conn:809 | Starting new HTTPS connection (1): login.microsoftonline.com 2020-06-19 11:23:38,716 INFO pid=85670 tid=MainThread file=log.py:info:103 | 16e5bba7-a023-431f-9813-396e814eabc9 - OAuth2Client:Get Token request failed 2020-06-19 11:23:38,718 ERROR pid=85670 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events. 
Traceback (most recent call last): File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 127, in stream_events self.collect_events(ew) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py", line 96, in collect_events input_module.collect_events(self, ew) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/input_module_log_analytics.py", line 49, in collect_events token_response = context.acquire_token_with_client_credentials('https://api.loganalytics.us/', application_id, application_key) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/adal/authentication_context.py", line 160, in acquire_token_with_client_credentials return self._acquire_token(token_func) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/adal/authentication_context.py", line 109, in _acquire_token return token_func(self) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/adal/authentication_context.py", line 158, in token_func return token_request.get_token_with_client_credentials(client_secret) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/adal/token_request.py", line 316, in get_token_with_client_credentials token = self._oauth_get_token(oauth_parameters) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/adal/token_request.py", line 113, in _oauth_get_token return client.get_token(oauth_parameters) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/adal/oauth2_client.py", line 262, in get_token verify=self._call_context.get('verify_ssl', None)) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/api.py", line 110, in post return request('post', url, data=data, json=json, **kwargs) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/api.py", line 56, in request return session.request(method=method, url=url, **kwargs) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/sessions.py", line 488, in request resp = self.send(prep, **send_kwargs) File 
"/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/sessions.py", line 609, in send r = adapter.send(request, **kwargs) File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/requests/adapters.py", line 487, in send raise ConnectionError(e, request=request) ConnectionError: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /0ae51e19-07c8-4e4b-bb6d-648ee58410f4/oauth2/token?api-version=1.0 (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f6e3a7aaa10>: Failed to establish a new connection: [Errno -2] Name or service not known',)) Anyone has any idea how to solve this issue? Thanks  
Hi all, We have 3 search heads in a cluster; search head 1 is the captain. Recently we upgraded from 7.2.3 to 8.0.3. After the upgrade I see that my search head captain is in red and shows a "searches delayed" error. None of the users have complained about anything, but I'm not sure why the search head captain is in red and saying searches are delayed. I am sure something is wrong; can someone please help with this? We tried restarting and the resync option.
Hi, I've had this issue before, but now I have to visit this again. I don't know how to manipulate the token value in such a way that I can use it in a dashboard panel search when there are multipart values in the token. The setup: 1. I have a dashboard with multiple panels, with different searches 2. There are dropdown inputs for time, and one to filter the data to a subset. The problem is when the data filter input uses a search to populate the token AND the values are multipart, like "Active Directory Domain Services", or anything that's not a single string. Then when that token, like "Role_Name", is used to filter the panel, like: index=* Type=Role Name=$Role_Name$ it does not get any results, as the token values should be inside quotes, single or double. So how do I concatenate quotes into the search, or should I preprocess the token value inside the search with some evals?
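One documented way out, sketched below: Simple XML token filters can do the quoting for you. The |s filter wraps the token value in double quotes (and escapes any embedded quotes), so multiword values like "Active Directory Domain Services" survive intact:

```
index=* Type=Role Name=$Role_Name|s$
```

An alternative, depending on the dashboard version, is to set a quoting prefix and suffix on the dropdown's choice values in the input definition, but the |s filter keeps the fix local to the panel search.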
I have the source code below. My aim is to pass 2 click values (click.value & click.value2) to my search by using the 'Link to search' drilldown method. I have used a 'time' dropdown filter here, and below is my drilldown source code. <option name="drilldown">cell</option> <drilldown> <link target="_blank">search?q=index=index_foo%20source=*$click.value2$*restart.log&amp;earliest=$time.earliest$&amp;latest=$time.latest$</link> </drilldown> Output of my main dashboard search: Req                   JVM                     status 1732                 App1                 started 1732                 App2                 stopped 2747                 App1                 stopped The actual source name is something like 1732/App1_restart.log. So if I click App1 in the table after setting up the drilldown, it will give me all the log files that have App1 in them. This will be a problem when I set the time to the last 30 days or 7 days. So I want to pass my Req as well to my search to get the exact log. My Req number is unique here. Note: JVM names will be the same all the time, but the Req number will differ.
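A sketch of one approach: in a table drilldown, $row.fieldname$ exposes the other column values of the clicked row, so both Req and the clicked JVM can go into the search without relying on click.value2 alone. The wildcard pattern below assumes a source path shaped like 1732/App1_restart.log and may need adjusting:

```
<option name="drilldown">cell</option>
<drilldown>
  <link target="_blank">search?q=index=index_foo%20source=*$row.Req$/$click.value2$_restart.log&amp;earliest=$time.earliest$&amp;latest=$time.latest$</link>
</drilldown>
```

Since Req is unique per restart, anchoring the source on $row.Req$ should isolate the exact log file even over a 30-day window.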
Hey guys, What do I need to get data in and output to TIBCO JMS? Python or Java? Libs? I couldn't find any reasonable Python module to work with TIBCO queues over the JMS protocol. Something is wrong with pyactivemq (a conflict of versions). So I looked at the Java SDK way to work with TIBCO.
db_183236610_1832273414_19315 What does this mean? It's part of an index's data. 
I want to extract the client IP and the user "DELTA\Kelly" from the Windows event messages. Message=The following client performed a SASL (Negotiate/Kerberos/NTLM/Digest) LDAP bind without requesting signing (integrity verification), or performed a simple bind over a cleartext (non-SSL/TLS-encrypted) LDAP connection. Client IP address: 172.4.5.6:57157 Identity the client attempted to authenticate as: DELTA\Kelly Binding Type: Fixed..... Please close
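A sketch of two rex extractions against the message text quoted above, assuming the text lives in a field named Message (use field=_raw if it does not):

```
| rex field=Message "Client IP address:\s+(?<client_ip>\d{1,3}(?:\.\d{1,3}){3}):(?<client_port>\d+)"
| rex field=Message "authenticate as:\s+(?<client_user>\S+)"
```

Against the sample event this should yield client_ip=172.4.5.6, client_port=57157, and client_user=DELTA\Kelly; adjust the patterns if your events wrap lines differently.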