All Topics

Hi, I have a few alerts that look at the failure rates of my services, and I have put in a condition which says: if the failure rate is > 10% AND the number of failed requests is > 200, then trigger the alert. This is really not the ideal way to do the monitoring. Is there a way in Splunk to use AI to detect anomalies or outliers over time? So basically, if Splunk can detect a failure pattern and that pattern is consistent, don't trigger an alert, but if it goes beyond the threshold, only then trigger it? Can we do this kind of thing in Splunk using built-in ML or AI?
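A minimal sketch of one approach in plain SPL, before reaching for the ML Toolkit: compute the failure rate per time bucket, then flag buckets that deviate from a rolling baseline by more than a few standard deviations. The index name, status field, window size, and 3-sigma threshold below are all assumptions.

index=my_service_logs
| timechart span=15m count(eval(status="failed")) as failed count as total
| eval failure_rate = round(failed / total * 100, 2)
| streamstats window=96 avg(failure_rate) as baseline stdev(failure_rate) as spread
| eval is_outlier = if(failure_rate > baseline + 3 * spread, 1, 0)
| where is_outlier = 1

The streamstats window of 96 buckets is one day of 15-minute spans, so a consistently high failure rate raises the baseline and stops alerting, while a spike beyond it still triggers. The Splunk Machine Learning Toolkit also ships trained alternatives for the same idea (for example, the DensityFunction algorithm).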
I am performing a lookup in a main search which returns earliest_event and latest_event timestamp values. I would like to use these timestamp values as parameters for a subsearch. The search would be similar to the following:

index=foo ...........
| lookup lookuptable.csv session_id OUTPUTNEW session_id, earliest_event, latest_event ...........
| append [ search index=bar earliest=earliest_event latest=latest_event ...........]

The time parameters for the subsearch are not being accepted, though. Is there a different way that this can be accomplished?
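One common pattern, sketched on the assumption that the lookup can also run inside an inner search and that its values are in a format the earliest/latest modifiers accept (such as epoch time): field values from the outer search cannot be referenced directly as time modifiers, but a subsearch can emit them with the return command.

index=bar
    [ search index=foo
      | lookup lookuptable.csv session_id OUTPUTNEW earliest_event, latest_event
      | return earliest=earliest_event latest=latest_event ]

return renames the fields to earliest and latest, so the subsearch expands to literal time modifiers. If each result row needs its own time range, the map command is another option, at the cost of running one search per row.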
I am trying to create an alert based on a stats count value. I want to alert if the count is less than or greater than 500.
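A minimal sketch, assuming the base search already ends in a stats count: "less than or greater than 500" is the same as "not equal to 500", so filter to that case and set the alert's trigger condition to "number of results is greater than 0".

<your base search>
| stats count
| where count != 500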
Hi All, Some files have been deleted by someone from one of the servers, and I need to investigate that. We only know the host name, but not which file was deleted or by whom. Can anyone tell me the exact query I need to type in the search head to fetch the logs from Splunk and identify whether any files have been deleted from my server? I'm totally new to Splunk, kindly assist.

Regards, Vipin
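A starting point, under two big assumptions: Windows Security auditing (Audit Object Access / Audit File System) was enabled on that host at the time, and its Security log is forwarded to Splunk. EventCode 4660 is Windows' "an object was deleted" event and 4663 records the access attempt; the index name and exact field names vary by environment and add-on version.

index=wineventlog host="<your_host>" sourcetype="WinEventLog:Security" (EventCode=4660 OR EventCode=4663)
| table _time host Account_Name Object_Name Accesses

If auditing was not enabled before the deletion, no such events will exist to search.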
I'm currently building a query that reports the top 10 urls of the top 10 users. Although my current query works, I would like a cleaner look.

Query:

index="zscaler" sourcetype="zscalernss-web" appclass!=Enterprise user!=unknown
| stats count by user, url
| sort 0 user -count
| streamstats count as standings by user
| where standings < 11
| eventstats sum(count) as total by category
| sort 0 -total user -count

The results look like this:

user                   url           count  rank
john.doe@example.com   example.com   100    1
john.doe@example.com   facebook.com  99     2
john.doe@example.com   twitter.com   98     3
john.doe@example.com   google.com    97     4
john.doe@example.com   splunk.com    96     5
jane.doe@example.com   example.com   100    1
jane.doe@example.com   facebook.com  99     2
jane.doe@example.com   twitter.com   98     3
jane.doe@example.com   google.com    97     4
jane.doe@example.com   splunk.com    96     5
and so forth

I would like for it to look like this:

user                   url           count
john.doe@example.com   example.com   100
                       facebook.com  99
                       twitter.com   98
                       google.com    97
                       splunk.com    96
jane.doe@example.com   example.com   100
                       facebook.com  99
                       twitter.com   98
                       google.com    97
                       splunk.com    96
and so forth
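One way to get that grouped look, sketched against the same fields: collapse each user's top rows into multivalue cells with stats list(), so the user appears once per row with the urls and counts stacked beneath.

index="zscaler" sourcetype="zscalernss-web" appclass!=Enterprise user!=unknown
| stats count by user, url
| sort 0 user -count
| streamstats count as rank by user
| where rank <= 10
| stats list(url) as url, list(count) as count by user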
Hi - I want to list APIs and their latencies / response times, and compare the latencies in a table like the one below. Explanation: the test is executed for 1 hour and each ramp is 15 min (1X to 4X).

API    1X load response time (avg or p95)    2X load response time (avg or p95)    3X load response time (avg or p95)    4X load response time (avg or p95)
API1
API2

Current query:

host=somehost sourcetype=somesourcetype endpoint=* latency=* received
| search *SOMESTRING*
| timechart p95(latency) span=15m by endpoint
| foreach * [| eval "<<FIELD>>"=round('<<FIELD>>',0)]

This query works fine without any issue and displays results like the table below, but the results are not accurate, because the response times of 2022-05-09 00:00:00 and 2022-05-09 00:15:00 overlap and together become the 1X data. How can I exactly separate 1X to 4X if I have executed a test from 2022-05-09 13:00:00 to 14:00:00?

_time                 API1    API2    API3
2022-05-09 00:00:00
2022-05-09 00:15:00
2022-05-09 00:30:00
2022-05-09 00:45:00
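A sketch of one way to bucket by ramp rather than by wall-clock span: anchor every event on the known test start time and label each 15-minute window 1X through 4X, then chart by that label. The hard-coded start/end times are taken from the question and would need adjusting per test run.

host=somehost sourcetype=somesourcetype endpoint=* latency=* received earliest="05/09/2022:13:00:00" latest="05/09/2022:14:00:00"
| search *SOMESTRING*
| eval ramp = floor((_time - strptime("2022-05-09 13:00:00", "%Y-%m-%d %H:%M:%S")) / 900) + 1
| eval ramp = ramp . "X"
| chart p95(latency) over endpoint by ramp
| foreach *X [| eval "<<FIELD>>"=round('<<FIELD>>',0)]

This yields one row per endpoint with columns 1X through 4X, so the ramps never bleed into each other.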
Hi, how do I query ingestion in GB for each index instead of just the top 10?
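A common approach, assuming you can search _internal on (or against) the license manager; usage is logged in slices, so run it over at least a full day.

index=_internal source=*license_usage.log type="Usage"
| stats sum(b) as bytes by idx
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
| sort 0 - GB
| rename idx as index

Swap the stats for timechart span=1d sum(b) by idx if a per-day trend is needed instead of a total.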
I'm completely stuck here. I'm trying to extract the "Path" from a logfile with this format:

Time: 05/10/2022 11:26:53
Event: Traffic
IP Address: xxxxxxxxxx
Description: HOST PROCESS FOR WINDOWS SERVICES
Path: C:\Windows\System32\svchost.exe
Message: Blocked Incoming UDP - Source xxxxxxxxxx : (xxxx) Destination xxxxxxxxxx : (xxxxx)
Matched Rule: Block all traffic

using this regex:

((Path:\s{1,2})(?<fwpath>.+))

It does exactly what I want when I use rex: it extracts the path as "fwpath". However, when I do it as a field extraction, it matches the rest of the log entry. Why is it behaving differently for these two?
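One defensive rewrite, offered as a guess at the cause: with a multiline event, `.+` behaves differently depending on whether the extraction engine treats the event as a single line, so excluding line breaks explicitly makes the result the same either way. As a props.conf extraction (the sourcetype name is a placeholder):

[your_sourcetype]
# stop the capture at the end of the Path: line regardless of multiline handling
EXTRACT-fwpath = Path:\s{1,2}(?<fwpath>[^\r\n]+)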
Hey, I recently made a bar graph in Splunk that adds new data for the duration of each test. My only problem with this graph is that after 13 entries, it starts adding data at the beginning of the graph. So, for example, you can see that after 28th April, new data starts getting added at the beginning of the bar graph (please refer to the diagram below). I want the new data to be continuously added at the end of the graph, with the old data at the beginning of the graph erased. How can I accomplish this? The query I am using for the existing search is below:

index="aws_dev"
| eval st=strptime(startTime, "%Y-%m-%dT%H:%M:%S.%3N%Z"), et=strptime(endTime, "%Y-%m-%dT%H:%M:%S.%3N%Z")
| eval st=mvindex(st,0)
| eval et=mvindex(et,0)
| eval diff = et - st
| eval date_wday=lower(strftime(_time,"%A"))
| eval date_w=strftime(_time,"%d-%b-%y %a %H:%M:%S")
| where NOT (date_wday = "sunday" OR date_wday = "saturday")
| chart values(diff) by date_w
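A sketch of one likely fix: chart sorts its category axis lexically, and a %d-%b-%y label such as 28-Apr-22 does not sort chronologically, so newer dates can land before older ones. A year-first label sorts correctly, and a bounded time range ages old bars out; the 14-day window here is an assumption.

index="aws_dev" earliest=-14d@d
| eval st=mvindex(strptime(startTime, "%Y-%m-%dT%H:%M:%S.%3N%Z"), 0)
| eval et=mvindex(strptime(endTime, "%Y-%m-%dT%H:%M:%S.%3N%Z"), 0)
| eval diff = et - st
| eval date_wday=lower(strftime(_time,"%A"))
| where NOT (date_wday = "sunday" OR date_wday = "saturday")
| eval date_w=strftime(_time,"%Y-%m-%d %a %H:%M:%S")
| chart values(diff) by date_w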
Hi Team, I have two log sources, say x and y. From x we need to extract a field x1, and then for each x1 we need to take the last six digits and search the logs from source y, where we need to extract a field y1. After this, we need to plot x1 vs y1, and we need to find out the x1 values for which y1 is present and the x1 values for which y1 is not present.

Logically, we need to showcase the end-to-end journey of a transaction, where we have two different sources on the same server.
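A sketch under heavy assumptions (the index and source names are placeholders, and x1/y1 are taken to be already-extracted fields): search both sources at once, derive the shared six-digit key on each side, and roll the two sides together with stats, which scales better than join.

index=myindex (source="x" OR source="y")
| eval key = case(source="x", substr(x1, len(x1) - 5), source="y", substr(y1, len(y1) - 5))
| stats values(x1) as x1, values(y1) as y1 by key
| eval y1_present = if(isnull(y1), "no", "yes")

Rows with y1_present="no" are the x1 transactions that never appear in source y.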
I have an HTTP Event Collector input collecting JSON data via a syslog forwarder. The syslog-ng message template looks like:

body("{ \"source\": \"${.splunk.source}\", \"event\": ${MSG} }")

I can see the message and the proper source in my indexer, but time extraction is the problem. Because there is often a delay between the log and the time syslog receives it, I want to use a field in the message to grab the timestamp. A message looks like this:

{"data":"stuff","time_stamp":"2022-05-10 17:14:23Z","value1":"more_stuff"}

So, I created a props.conf on my indexer cluster that looks like:

[my_sourcetype]
DATETIME_CONFIG =
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_PREFIX = time_stamp\":\"
TIME_FORMAT = %Y-%m-%d %h:%M:%S%Z
TZ = UTC

I've confirmed the sourcetype is correct, as I define it in the inputs for HEC and they match. But for the life of me, I can't seem to get Splunk to find the time. I've tried looking for errors using this search:

index=_internal (log_level=WARN OR log_level=ERROR) "timestamp"

(found in an older community post, so thanks to the author) but I find nothing. I tried playing in the UI and creating a new data type (the Add Data widget). In that UI, my props.conf should work, but for some reason on the cluster it doesn't. Are there any other troubleshooting steps I can follow? Am I missing something that might help this work better?
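Two things stand out in that props.conf, offered as a hedged correction rather than a guaranteed fix: in strptime-style formats, %h means an abbreviated month name (like %b), not hours, which should be %H; and a DATETIME_CONFIG line left empty is best removed entirely. A corrected sketch:

[my_sourcetype]
TIME_PREFIX = time_stamp\":\"
MAX_TIMESTAMP_LOOKAHEAD = 30
# %H = 24-hour clock; %h would try to match a month abbreviation
TIME_FORMAT = %Y-%m-%d %H:%M:%S%Z
TZ = UTC

One more thing to check: timestamp extraction happens in the parsing pipeline, which generally applies to data sent to HEC's raw endpoint; events posted to the JSON /services/collector/event endpoint usually take their timestamp from the envelope's time field instead, so the endpoint in use matters here.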
Hi,

I mistakenly moved the Splunk installation folder into another folder, and Splunk stopped working. I tried to copy the folder back to its original location, but the copy failed and ended up partial. So I upgraded to the latest version of Splunk and tried to start it.

When I try to start Splunk, I get the following error:

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Text decryption - error in finalizing: No errors in queue
AES-GCM Decryption failed!
Decryption operation failed: AES-GCM Decryption failed!

It is also asking for a PEM pass phrase:

All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Enter PEM pass phrase:

Even if I don't enter anything, the Splunk server starts running, but the GUI is not available and gives me a 500 Internal Server Error. I'm not sure which configuration file was changed during the process.

Regards, Pravin
I had a Windows admin create a PowerShell script for me (requires code-signing, plus app-whitelisting complexity), and I have configured it as a Splunk input. It works fine from a PowerShell prompt, and I can see from _internal that Splunk is executing it, but I'm receiving no output.

Script:

#main
$command = {
    try {
        $Response = Invoke-WebRequest -Uri 'www.google.com'
        $StatusCode = $Response.StatusCode
    } catch {
        $StatusCode = $_.Exception.Response.StatusCode.value__
        if ($StatusCode -eq $null) {
            $StatusCode = '000'
        }
    }
    return $StatusCode
}
$StatusCode = invoke-command -ScriptBlock $command
Switch ($StatusCode) {
    '000' { write-warning ('Web_Connectivity url=www.google.com status=failure status_code={0}' -f $statuscode) -Verbose }
    default { write-host ('Web_Connectivity url=www.google.com status=success status_code={0}' -f $statuscode) -ForegroundColor Green }
}

With this inputs.conf:

[powershell://test-internetaccessSplunk]
script = . "$SplunkHome\etc\apps\test_Windows_Scripts\bin\test-internetaccessSplunk.ps1"
schedule = */5 * 9-16 * 1-5
sourcetype = Script:Web_Connectivity
source = Script:Web_Connectivity
index = win_test

*note: schedule to be updated to once/day once it works.

_internal log events:

05-10-2022 09:45:00.0001576-7 INFO Start executing script=. "$SplunkHome\etc\apps\test_Windows_Scripts\bin\test-internetaccessSplunk.ps1" for stanza=test-internetaccessSplunk
05-10-2022 09:45:00.8595184-7 INFO End of executing script=. "$SplunkHome\etc\apps\test_Windows_Scripts\bin\test-internetaccessSplunk.ps1" for stanza=test-internetaccessSplunk, execution_time=0.8593608 seconds
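A possible cause, offered as a guess: Splunk's PowerShell input captures the script's output stream, while Write-Host writes to the host console and Write-Warning writes to the warning stream, so neither may ever reach the input. A minimal change that routes both branches through the output stream:

# Emit the event on the output stream so the Splunk PowerShell input can capture it
Switch ($StatusCode) {
    '000'   { Write-Output ('Web_Connectivity url=www.google.com status=failure status_code={0}' -f $StatusCode) }
    default { Write-Output ('Web_Connectivity url=www.google.com status=success status_code={0}' -f $StatusCode) }
}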
Hello Experts, I have a transaction query that I am displaying in a table. I am able to get results in a table; however, the events tied together in a single transaction appear as a single row. I would like to have them displayed in separate rows, as if they were individual search results. Here's an example:

Log data:
Transaction Id=1, step=1, data_x=dataX1, data_y=dataY1
Transaction Id=1, step=2, data_x=dataX2, data_y=dataY2

How the results look:

Transaction Id   data_x           data_y
1                dataX1 dataX2    dataY1 dataY2
2                ...              ...

How I need them to look:

Transaction Id   data_x   data_y
1                dataX1   dataY1
1                dataX2   dataY2
2                ...      ...
2                ...      ...

Any help appreciated. Thanks!
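One standard pattern, assuming the field names below stand in for whatever the real extractions are and that the multivalue fields line up one-to-one: zip the paired values together, expand to one row per pair, then split them back out.

... | transaction transaction_id
| eval zipped = mvzip(data_x, data_y)
| mvexpand zipped
| eval data_x = mvindex(split(zipped, ","), 0)
| eval data_y = mvindex(split(zipped, ","), 1)
| table transaction_id data_x data_y

If the grouping is only needed for display, it may be simpler to skip transaction entirely and run table transaction_id data_x data_y over the raw events, which already yields one row per event.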
I am getting this message from the Salesforce Splunk app:

Cannot expand lookup field 'UserType' due to a reference cycle in the lookup configuration. Check search.log for details and update the lookup configuration to remove the reference cycle.
Hello, I must create a table visualization. I have several similar tables located side by side on a dashboard; the only difference between them is the "Base Level". The client demands that the visualization look aesthetically consistent: the values in the "Priority difference" column should always run from -4 to 4 (-4, -3, -2, -1, 0, 1, 2, 3, 4), and when there are no logs for a value, the "Number" column on the right should show "0", as in the first screenshot, where I added the rows manually to illustrate the case. [Screenshot: the current final dashboard.] On all the dashboards, the values of Priority_difference should always be from -4 to 4; currently, when there are no logs, the rows are simply missing. [Screenshot: sample logs.] Could you kindly please advise?
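One common trick, sketched with a placeholder base search: append a zero row for every value from -4 to 4, then sum per value, so real counts win over the padding and missing values show 0.

<your base search>
| stats count as Number by Priority_difference
| append
    [| makeresults count=9
     | streamstats count as row
     | eval Priority_difference = row - 5, Number = 0
     | table Priority_difference Number]
| stats sum(Number) as Number by Priority_difference
| sort 0 Priority_difference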
Hello, I need to display a bar chart with the site field on the x-axis. For each site, I need to display 2 bars: the first bar is the avg of retrans_bytes per site, and the second bar is the avg of retrans_bytes per user (meaning the users corresponding to that site). That's why I use a subsearch, but I don't succeed in crossing the results between the 2 searches. Could you help, please?

`index` sourcetype="netp_tcp"
| chart avg(retrans_bytes) as retrans_bytes by site user
| append
    [| search `index` sourcetype="netp_tcp"
     | chart avg(retrans_bytes) as retrans_bytes by site ]
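A sketch that avoids the append entirely by computing both averages in one pass. Note that the two numbers genuinely differ: the per-site average weights every event equally, while the per-user average first averages each user's own events.

`index` sourcetype="netp_tcp"
| stats sum(retrans_bytes) as total, count as n, avg(retrans_bytes) as user_avg by site, user
| stats sum(total) as total, sum(n) as n, avg(user_avg) as "avg per user" by site
| eval "avg per site" = total / n
| fields site, "avg per site", "avg per user"

Charting this as a bar chart with site on the x-axis gives the two bars per site.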
Hi Team, could you please help me with this request? I have a correlation search working fine, and I need to exclude these 3 filenames (setup64.exe, eggplantaiagent.exe and eggplantaiagentcmd.exe) using the custom trigger condition. I have tried several combinations, and it is not reflected in the notables triggered. Please find below the main query and the custom trigger condition for your reference, and guide me on how to suppress these filenames in the custom trigger condition.

Main query:

index=sec_cb category=WARNING
| search type=CB_ANALYTICS
| search ((threat_cause_threat_category IN("KNOWN_MALWARE", "NEW_MALWARE") OR threat_cause_reputation IN("KNOWN_MALWARE", "PUP", "SUSPECT_MALWARE")) OR reason_code IN("T_CANARY"))
| search NOT(reason_code IN(T_POL_TERM*) AND NOT(threat_cause_threat_category IN("KNOWN_MALWARE") OR threat_cause_reputation IN("KNOWN_MALWARE")))
| search NOT(reason_code="T_RUN_SYS" AND threat_cause_threat_category="NEW_MALWARE")
| search reason_code IN(T_*, R_*)
| search NOT [| inputlookup scanner_ip.csv | rename ScannerIP as threat_cause_actor_sha256 ]
| search NOT(device_name IN("AZWPRDWVDIMT*", "AZWDRWVDIMT*") AND threat_cause_actor_name="automation controller.xlsm")
| stats count values(severity) as severity values(process_name) as process_name values(reason_code) as reason_code values(device_os_version) as os_version values(device_external_ip) as external_ip values(device_internal_ip) as internal_ip values(device_location) as location values(device_username) as username values(threat_cause_threat_category) as threat_category values(threat_cause_reputation) as threat_reputation values(category) as category values(reason) as reason dc(id) as id_dc values(threat_cause_actor_name) as threat_actor values(create_time) as time by device_name id sourcetype
| rename device_name as hostname
| table hostname threat_actor id severity process_name reason_code reason os_version external_ip internal_ip location username threat_category threat_reputation category sourcetype time id_dc count

Custom trigger condition:

search threat_actor != "setup64.exe"
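A hedged suggestion: threat_actor is built with values() in the stats, so it can be multivalue, and a trigger condition like threat_actor != "setup64.exe" only excludes results whose value is exactly that single string. Filtering with an explicit NOT list, either appended to the main query after the final table or used as the custom trigger condition, is more reliable:

| search NOT threat_actor IN("setup64.exe", "eggplantaiagent.exe", "eggplantaiagentcmd.exe")

Note that a result whose multivalue threat_actor contains one of these names alongside other names will also be dropped by this filter, which may or may not be the desired behavior.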
I would like to make a pie chart which shows the top 10 tenants by number of hosts and then puts everything else under the label "other". Currently, I am doing this:

| stats sum(hostsCount) as hostsCount by TenantName
| sort hostsCount desc

The issue with this is that it truncates the TenantNames to 10000, as shown in the screenshot, which makes the "other" category's hostsCount inaccurate. There are over 30000 TenantNames/hostsCounts. I would like to change this pie chart to: 1. Display the top 10 tenants by hostsCount. 2. Make a label called "other" and put all the remaining hostsCounts in it, so that it displays the accurate percentage/amount. What would be the best way to do this?
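One way, sketched against the same fields: rank the tenants, relabel everything past the top 10 as "other", and re-aggregate so the pie chart only ever sees 11 rows. The sort 0 keeps all rows instead of truncating at 10000.

| stats sum(hostsCount) as hostsCount by TenantName
| sort 0 - hostsCount
| streamstats count as rank
| eval TenantName = if(rank <= 10, TenantName, "other")
| stats sum(hostsCount) as hostsCount by TenantName
| sort - hostsCount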
Hi, I would like to execute a Windows script when a certain event is triggered. I am wondering how to upload a custom action script to a SaaS controller, NOT an on-prem controller. All the documentation only talks about the on-prem controller. Thanks.