All Topics

Good day all. We have enabled our scheduled searches as durable searches. In our environment, due to one activity or another, a scheduled search may be skipped, go into delegated_remote_error, or go into delegated_remote_completion with success=0. In those cases I want to take the last status of that scheduled search plus its scheduled time and check whether that time period is accommodated by an upcoming durable_cursor. How can I achieve this? I tried the search below, but it only covers the successful runs. How do I capture the failed ones too? I am running the subsearch to find the saved searches whose scheduled run did not succeed in the last 7 hours, then using those in the outer search to check whether that durable_cursor was picked up by the next run and whether it succeeded. Is this the right approach, or is there a better alternative?

index=_internal sourcetype=scheduler
    [search index=_internal sourcetype=scheduler earliest=-7h@h latest=now
    | stats latest(status) as FirstStatus by scheduled_time savedsearch_name
    | search NOT FirstStatus IN ("success","delegated_remote")
    | eval Flag=if(FirstStatus="delegated_remote_completion" OR FirstStatus="delegated_remote_error",scheduled_time,"NO VALUE")
    | fields Flag savedsearch_name
    | rename Flag as durable_cursor]
| stats values(status) as FinalStatus values(durable_cursor) as durable_cursor by savedsearch_name scheduled_time
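One alternative sketch, without the subsearch. This is hedged: it assumes the scheduler events expose scheduled_time, durable_cursor, and success as numeric (epoch) fields, which you should verify against your own scheduler.log before relying on it.

index=_internal sourcetype=scheduler earliest=-7h@h
| eval failed_time=if(in(status,"skipped","delegated_remote_error") OR (status="delegated_remote_completion" AND success=0), scheduled_time, null())
| eval success_cursor=if(status="success", durable_cursor, null())
| stats max(failed_time) as last_failed_time max(success_cursor) as last_success_cursor by savedsearch_name
| where isnotnull(last_failed_time)
| eval recovered=if(last_success_cursor >= last_failed_time, "yes", "no")

Each saved search ends up with its most recent failed scheduled_time and the most recent durable_cursor from a successful run, and recovered="yes" when the durable run has caught up past the failure.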
I have a dashboard named dashboard1 created using Splunk Dashboard Studio. It has a dropdown, and based on the selection it displays a line graph. Example: when AWSAccount is selected, the line graph contains the different account IDs (xyseries, $Account_Selection$, costing). There is another dashboard (dashboard2) which has details of the AWS accounts. It again has two dropdowns: awsaccountid and accountname. My requirement: when I click on any line in dashboard1 (where each line is a different AWSAccount), it should direct to dashboard2 with that AWSAccount selected in the dashboard2 dropdown. Similarly, if AWSAccountName is selected in the dropdown and any account is clicked in the panel, it should direct to dashboard2 with the same AWSAccountName in the dashboard2 dropdown. If AWSAccount is selected and clicked, accountname in dashboard2 should have the default value "All", i.e. *, and vice versa if AWSAccountName is selected. What have I done? Under Interactions, I selected "Link to dashboard" with dashboard2 and gave Token name = account_id (this is the token set in dashboard2), Value = Name.

"eventHandlers": [
  {
    "type": "drilldown.linkToDashboard",
    "options": {
      "app": "Search",
      "dashboard": "dashboard2",
      "newTab": true,
      "tokens": [
        {
          "token": "awsaccountid",
          "value": "name"
        }
      ]
    }
  }
]

"eventHandlers": [
  {
    "type": "drilldown.linkToDashboard",
    "options": {
      "app": "search",
      "dashboard": "dashboard2",
      "tokens": [
        {
          "token": "awsaccountid",
          "value": "row.AWSAccount.value"
        }
      ],
      "newTab": true
    }
  }
]

This is working fine. But if I add AWSAccountName under tokens and pass a value to it in the same way, the AWSAccount value ends up displayed in both "awsaccountid" and "accountname" of dashboard2. Can anyone please help me implement this? Regards, PNV
I am trying to call a 3rd-party API which supports certificate- and key-based authentication. I have an on-prem instance of Splunk (version 9.0.2) running on a VM. I have verified the API response on the VM via a curl command (command used: curl --cert <"path to .crt file"> --key <"path to .key file"> --header "Authorization: <token>" --request GET <"url">), which returns a response for a normal user. However, when running the same curl command using shell in Splunk Add-on Builder's Modular Data Inputs, the command only works with "sudo"; otherwise it gives Error 403. When checked with "whoami", the user is reported as root. Question 1: Why does the curl command not work without sudo even when the user is root? Is there any configuration I need to modify to make it work without sudo? Question 2: How do I make the same API call using Python code in the Modular Data Inputs of Splunk Add-on Builder?
Hi guys, I don't know if you have already done this, but could you please help? I'm trying to create a new and simple datepicker where you just choose a date, then click a "Submit" button and it shows the results. I have already created the datepicker, but it doesn't do anything. I tried to follow a similar example here, but it isn't quite the same.
I have 10 indexes that start with "ep_winevt_ms", so I am using a wildcard: index=ep_winevt_ms*. When I run | stats count, I want a single total count for the entire "ep_winevt_ms*" set, not 10 separate counts, one per index. Please help.
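If the count is being split per index, it is usually because the stats has a by index clause; dropping it gives one total. A minimal sketch (the index_group label is only illustrative):

index=ep_winevt_ms*
| stats count as total_events
| eval index_group="ep_winevt_ms*"

If you still want the per-index breakdown plus a grand total, | stats count by index followed by | addcoltotals labelfield=index label=Total count is another option.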
Hello, I am trying to display only the required strings. This is a description field, and I would like to omit the rest and display only the meaningful content. Actual description: .UnknownEngineIDException:  parsing failed. Unknown engineId 80 00 00 09 03 10 05 CA 57 23 80 for address 1.5.6.3 . Dropping bind from /1.5.6.3. Required description: engineId 80 00 00 09 03 10 05 CA 57 23 80 for address 1.5.6.3
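A hedged rex sketch. It assumes the text lives in a field named description (use field=_raw if it is not extracted) and that the engineId is a run of two-digit hex bytes; index and sourcetype below are placeholders.

index=your_index sourcetype=your_sourcetype
| rex field=description "(?<engine_info>engineId\s+(?:[0-9A-Fa-f]{2}\s+)+for address \d+(?:\.\d+)+)"
| table engine_info

The named capture group keeps only the "engineId ... for address x.x.x.x" span and drops the exception text around it.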
Hi, I have a business use case for creating an alert that searches and triggers if a condition is matched; the alert is cron-scheduled at 1pm, Monday through Friday. The query: index=xyz | head 1 | eval month_year=strftime(now(),"%c") | table month_year. I work in the IST time zone and the Splunk server is in the CST/CDT zone. From the alert mail we can see that the search was executed at 1pm (13:00), but the trigger time is 1:14 am CST, and I received the alert mail at 11:44 am IST. I should actually receive the mail at 11pm IST. Please help me out here. Thanks
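One quick way to see which time zone is actually being applied when the alert fires is to render the timestamp with an explicit zone; a minimal sketch that only adds %Z to your existing eval:

index=xyz
| head 1
| eval month_year=strftime(now(), "%c %Z")
| table month_year

If the rendered zone is CST/CDT, the clock the scheduler and the email timestamps are using is the server's zone, which would account for the offset you are seeing relative to IST.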
Hi, we are running many applications with JAVA_AGENT, and out of those, a few applications are discovering the JDBC calls and a few are not discovering the JDBC backend. I even tried to manually add a custom discovery rule matching the URL. What other factors could affect this, and does anything need to be changed? I have attached the samples here. Note: 1. Both apps are using the same version of the MySQL connector jar, mysql-connector-java 5.1.49. For the application where the JDBC details and queries are captured. For the similar application where JDBC details are not captured. Note: 1. The app where details are captured is a web app where business transactions are present, but the app where this is not captured is a core Java app without transactions (more like a monitoring app). Does this have any impact? Thanks & Regards, Akshay
Hello Splunkers!! Every week my report runs and gathers the results into the summary index=analyst. You can see in the screenshot below that several stash files are being created for this specific report. Conversely, multiple stash files are not created for other reports. Report with multiple stash files. Report with no duplicate stash files. Please provide assistance on this.
How can we ingest MDI logs to Splunk?
After upgrading from 9.1.0 to 9.2.1, my heavy forwarder writes many lines like the following to its log:

04-01-2024 08:56:16.812 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:16.887 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:16.951 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:16.982 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.008 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.013 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.024 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.041 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.079 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.097 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.146 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.170 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.190 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.257 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.292 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.327 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.425 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.522 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.528 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.549 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.
04-01-2024 08:56:17.551 +0700 INFO TcpInputProc [103611 FwdDataReceiverThread] - Input queue has pds 0 after reader thread stopped.

How can I disable this logging? Is this INFO message related to any error?
| Application | Success | Failed | Total | percentage |
| IPL | 15 | 2 | 17 | 11.764 |
| IPL | 10 | 2 | 12 | 16.666 |
| IPL | 4 | 1 | 5 | 20.000 |
| WWV | 3 | 2 | 5 | 40.000 |
| WWV | 1 | 0 | 1 | 0.000 |
| PIP | 20 | 5 | 25 | 20.000 |
| IPL | 1 | 0 | 1 | 0.0000 |
| WWV | 30 | 15 | 45 | 33.333 |
| PIP | 20 | 10 | 30 | 33.333 |

From the above table, we want to calculate the data application-wise. Expected output:

| Application | Success | Failed | Total | percentage |
| IPL | 30 | 5 | 35 | 14.285 |
| WWV | 34 | 17 | 51 | 33.333 |
| PIP | 40 | 15 | 55 | 27.272 |

How can we do this?
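A minimal sketch, assuming the rows above are already search results with numeric fields Application, Success, Failed, and Total, and that percentage = Failed/Total*100 as implied by your expected output (the base search line is a placeholder):

... your base search producing Application, Success, Failed, Total ...
| stats sum(Success) as Success sum(Failed) as Failed sum(Total) as Total by Application
| eval percentage=round((Failed/Total)*100, 3)

Summing each column per Application and then recomputing the percentage on the summed values is important; summing or averaging the per-row percentages directly would give the wrong answer.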
Hi!, This is a contrived example, but could you help me understand why this completes (and functions as expected):   | makeresults format=csv data="filename calc.exe" | lookup isWindowsSystemFile_lookup filename   Whilst this:   index=sandbox | eval filename="calc.exe" | lookup isWindowsSystemFile_lookup filename   throws an error with message:   ... The lookup table 'isWindowsSystemFile_lookup' does not exist or is not available.   The isWindowsSystemFile_lookup is provided by Splunk Security Essentials. Hmm, I'm on splunk cloud. Thanks, Kevin    
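A couple of hedged checks you could run from the same app and user context as the failing search (note that | rest may be restricted on Splunk Cloud, and the field names shown are standard ACL fields rather than anything specific to Security Essentials):

| inputlookup isWindowsSystemFile_lookup
| head 5

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| search title="isWindowsSystemFile_lookup"
| table title eai:acl.app eai:acl.sharing

If inputlookup works but the index=sandbox search fails, the lookup definition may only be shared within the Splunk Security Essentials app, or it may not be reaching the indexers, so checking eai:acl.app and eai:acl.sharing from the app you run the search in is a reasonable first step.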
|mstats sum(Transactions) as Transaction_count where index=metrics-logs application=login services IN(get, put, delete) span=1h by services
|streamstats by services
|timechart span=1h values(Transaction_count) by services

Results:

| _time | get | put | delete |
| 2024-01-22 09:00 | 7654.000000 | 17854.000000 | 9876.000000 |
| 2024-01-22 10:00 | 5643.000000 | 2345.000000 | 1267.000000 |

From the above query we want to calculate the percentage between two values. For example, for the get field we want the percentage between the two hours (09:00 and 10:00): 7654.000000/5643.000000*100. How can we do this?
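A hedged sketch that computes previous-hour divided by current-hour, times 100, per service before the timechart (assuming that ratio is what you want, as in your example):

| mstats sum(Transactions) as Transaction_count where index=metrics-logs application=login services IN (get, put, delete) span=1h by services
| sort 0 services _time
| streamstats current=f window=1 last(Transaction_count) as prev_count by services
| eval pct=round((prev_count / Transaction_count) * 100, 2)
| timechart span=1h values(pct) by services

streamstats with current=f window=1 carries each service's previous hourly count onto the current row, so the first hour of each service simply has no pct value.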
For our final-year project we have been assigned the task of implementing a DDoS attack and detecting it with Splunk. Our issue is that we are not getting any logs from Splunk's Add Data input option for Local Windows Network Monitoring, which seemed to work in the video I was following. Context of the DDoS: we are using a hping3 TCP SYN flood attack, but its traffic is not coming through my newly added data input. All the other network logs are generated, such as the traffic from my GCP instance to the RDP server and back, but those are the only types of logs showing up. If I were to guess the problem: GCP gives us two IPs, an internal and an external IP. I have attacked both, but there is no difference in the incoming logs. I've checked connectivity between the two VMs on GCP (Windows and Ubuntu) using ping and telnet, turned off the Windows RDP host's firewall, and added a firewall rule that allows ingress TCP packets on ports 80 and 21 (which we are attacking). So my guess is that GCP itself is blocking these types of packets. I'm still not sure how all these things work (I'm an AI dev, so this is not my field). Please help if you can and have the time. Thank you for reading my question; if you need any further details to help me, feel free to ask.
Hello, I have a CSV file in Azure. I've created a Storage Blob input in the "Splunk Add-on for Microsoft Cloud Services" app. I've also created the sourcetype, the transforms, and the field extraction shown in the screenshots, but the logs are not parsed and are indexed as one line. Am I missing something?
Query 1:

|mstats sum(transaction) as Total sum(success) as Success where index=metric-index transaction IN(transaction1, transaction2, transaction3) by service transaction
|eval SuccessPerct=round(((Success/Total)*100),2)
|xyseries service transaction Total Success SuccessPerct
|table service "Success: transaction1" "SuccessPerct: transaction1" "SuccessPerct: transaction2" "Total: transaction2" "Success: transaction2"
|join service
    [|mstats sum(error-count) as Error where index=metric-index by service errortype
    |append
        [|search index=app-index sourcetype=appl-logs (TERM(POST) OR TERM(GET) OR TERM(DELETE) OR TERM(PATCH)) OR errorNumber!=0 appls=et
        |lookup app-error.csv code as errorNumber output type as errortype
        |stats count as app.error count by appls errortype
        |rename appls as service error-count as Error]
    |xyseries service errortype Error
    |rename wvv as WVVErrors xxf as nonerrors]
|addtotals "Success: transaction1" WVVErrors nonerrors fieldname="Total: transaction1"
|eval sort_service=case(service="serv1",1,service="serv2",2,service="serv3",3,service="serv4",4,service="serv5",5,service="serv6",6,service="serv7",7,service="serv8",8,service="serv9",9,service="serv10",10)
|sort + sort_service
|table service "Success: transaction1" "SuccessPerct: transaction2" WVVErrors nonerrors
|fillnull value=0

Query 1 output:

| service | Success: transaction1 | SuccessPerct: transaction2 | WVVErrors | nonerrors |
| serv1 | 345678.000000 | 12.33 | 7.000000 | 110.000000 |
| serv2 | 345213.000000 | 22.34 | 8777.000000 | 0 |
| serv3 | 1269.000000 | 12.45 | 7768.000000 | 563 |
| serv4 | 34567.000000 | 11.56 | 124447.000000 | 0 |
| serv5 | 23456.000000 | 67.55 | 10.000000 | 067 |
| serv6 | 67778.000000 | 89.55 | 15.000000 | 32 |
| serv7 | 34421.000000 | 89.00 | 17.000000 | 56 |
| serv8 | 239078.000000 | 53.98 | 37.000000 | 67.0000000 |
| serv9 | 769.000000 | 09.54 | 87.000000 | 8.00000 |
| serv10 | 3467678.000000 | 87.99 | 22.000000 | 27.000000 |
| serv11 | 285678.000000 | 56.44 | 1123.000000 | 90.00000 |
| serv12 | 5123.000000 | 89.66 | 34557.000000 | 34 |
| serv13 | 678.000000 | 90.54 | 37.000000 | 56 |
| serv14 | 345234678.000000 | 89.22 | 897.000000 | 33 |
| serv15 | 12412.33678.000000 | 45.29 | 11237.000000 | 23.000000 |

Query 2:

|mstats sum(error-count) as Error where index=metric-index by service errorNumber errortype

Query 2 output:

| service | errorNumber | errortype | Error |
| serv1 | 0 | wvv | 7.000000 |
| serv1 | 22 | wvv | 8777.000000 |
| serv1 | 22 | wvv | 7768.000000 |
| serv1 | 45 | wvv | 124447.000000 |
| serv2 | 0 | xxf | 10.000000 |
| serv2 | 22 | xxf | 15.000000 |
| serv2 | 22 | xxf | 17.000000 |
| serv2 | 45 | xxf | 37.000000 |
| serv3 | 0 | wvv | 87.000000 |
| serv3 | 22 | wvv | 22.000000 |
| serv3 | 22 | wvv | 1123.000000 |
| serv3 | 45 | wvv | 34557.000000 |
| serv4 | 0 | xxf | 37.000000 |
| serv4 | 26 | xxf | 897.000000 |
| serv4 | 22 | xxf | 11237.000000 |
| serv4 | 40 | xxf | 7768.000000 |
| serv5 | 25 | wvv | 124447.000000 |
| serv5 | 28 | wvv | 10.000000 |
| serv5 | 1000 | wvv | 15.000000 |
| serv5 | 10 | wvv | 17.000000 |
| serv6 | 22 | xxf | 37.000000 |
| serv6 | 34 | xxf | 87.000000 |
| serv6 | 88 | xxf | 22.000000 |
| serv6 | 10 | xxf | 45.000000 |

We want to combine query 1 and query 2 and get both outputs in one table.
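A hedged, simplified sketch of one way to do this: pivot query 2 into one column per errortype/errorNumber pair and join it onto query 1 on service. The column naming (error_col) is illustrative, and your full query 1 would replace the first three lines here.

| mstats sum(transaction) as Total sum(success) as Success where index=metric-index transaction IN (transaction1, transaction2, transaction3) by service transaction
| eval SuccessPerct=round(((Success/Total)*100),2)
| xyseries service transaction Total Success SuccessPerct
| join type=left service
    [| mstats sum(error-count) as Error where index=metric-index by service errorNumber errortype
     | eval error_col=errortype.": ".errorNumber
     | xyseries service error_col Error]

The subsearch turns each (errortype, errorNumber) pair into its own column (for example "wvv: 22"), so each service row from query 1 gains the query 2 numbers without adding extra rows. join is subject to subsearch limits, so at larger scale an append of the two searches followed by stats by service is a common alternative.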
Explain Splunk Enterprise Event Collector, Processor and Console architecture.
Please help with a Splunk query to get pass and fail counts in table format from the JSON array below.

| Group  | Pass | Fail |
| Group1 | 239  | 6    |
| Group2 | 746  | 14   |
| Group3 | 760  | 10   |

[
  { "Group": 1, "Pass": 239, "Fail": 6 },
  { "Group": 2, "Pass": 746, "Fail": 14 },
  { "Group": 3, "Pass": 760, "Fail": 10 }
]
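A minimal spath/mvexpand sketch, assuming each event's _raw is the JSON array shown above (index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| spath path={} output=row
| mvexpand row
| spath input=row
| eval Group="Group".Group
| stats sum(Pass) as Pass sum(Fail) as Fail by Group

spath path={} pulls the array elements into a multivalue field, mvexpand makes one result per element, and the second spath extracts Group, Pass, and Fail from each element; the eval simply prepends "Group" to match the desired labels.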
Hello there, we're in the process of deploying Splunk Cloud. We have installed the Microsoft Office 365 App for Splunk along with all the required add-ons. The app is working as intended except that we're not getting any Message Trace data. We followed the instructions to properly set up the add-on input and assigned the API permissions on the Azure side, but for whatever reason we're still not getting any trace data. It looks like the problem is on the Azure side; we have assigned the appropriate API permissions as stated in the documentation. Is there anything else that needs to be set up on the Azure or Splunk side to get Exchange trace data? We followed these instructions for the Splunk Add-on for Microsoft Office 365 integration: https://docs.splunk.com/Documentation/AddOns/released/MSO365/ConfigureappinAzureAD Any help would be highly appreciated.