All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all. I'm fairly new to Splunk. I have response-time data for each API call, by day. I want to run a search automatically every day and collect the results into a summary index (I cannot run this monthly, since it is too much data). Then, every month, I want to use the summary index to calculate the 95th percentile, average, and standard deviation of all the response times for each API call. The summary index should let me do that faster, although I am not sure of the mechanics. For instance, do I need to re-add my filters for the monthly pull? Does the search below look correct for pulling in all the events? I want to understand if I am doing this correctly. My daily SPL:

index=virt [other search parameters]
| rename msg.sessionId as sessionId
| rename msg.apiName as apiName
| rename msg.processingTime as processingTime
| rename msg.responseCode as responseCode
| eval session_id=coalesce(a_session_id, sessionId)
| fields …
| stats values(a_api_responsetime) as responsetime, values(processingTime) as BackRT by session_id
| eval PlatformProcessingTime=(responsetime - BackRT)
| where PlatformProcessingTime>0
| collect index=virt_summary

Then my monthly SPL:

index=virt_summary
| bucket _time span=1mon
| stats count as Events, avg(PlatformProcessingTime), stdev(PlatformProcessingTime), perc95(PlatformProcessingTime) by _time

Any assistance is much appreciated! Let me know if you need more clarification. I have attached my results; it looks like it is not working properly. I tested the results by day.
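One possible fix, sketched on the assumption that the field names above are accurate: the daily stats groups only by session_id, so apiName never reaches the summary index and the monthly search cannot break results out per API call; also, after stats there is no _time left, so collect stamps its own time. Carrying apiName and a _time through stats addresses both, and no daily filters need repeating, because only already-filtered results are collected:

index=virt [other search parameters]
| rename msg.sessionId as sessionId, msg.apiName as apiName, msg.processingTime as processingTime
| eval session_id=coalesce(a_session_id, sessionId)
| stats earliest(_time) as _time, values(a_api_responsetime) as responsetime, values(processingTime) as BackRT by session_id, apiName
| eval PlatformProcessingTime=responsetime-BackRT
| where PlatformProcessingTime>0
| collect index=virt_summary

The monthly search can then aggregate per API call:

index=virt_summary
| bucket _time span=1mon
| stats count as Events, avg(PlatformProcessingTime), stdev(PlatformProcessingTime), perc95(PlatformProcessingTime) by _time, apiName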
How can I get the total volume-size utilization of each indexer in a 10-indexer cluster? I have a cluster manager with 10 indexers, and I would like to know whether there is a way to query, from the CM or from a dashboard, the volume utilization of each indexer. We don't have the Distributed Monitoring Console set up yet. Our deployment is 1 SH cluster with 5 search heads, 1 CM, 10 indexers, and 1 deployer to manage the SH cluster.
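One approach, sketched on the assumption that the indexers are configured as search peers of the instance running the search: the server/status/partitions-space REST endpoint reports disk capacity and free space per instance, so a dashboard panel can compute utilization per indexer.

| rest /services/server/status/partitions-space
| eval pct_used=round((capacity-free)/capacity*100,1)
| eval capacity_gb=round(capacity/1024,1), free_gb=round(free/1024,1)
| table splunk_server mount_point capacity_gb free_gb pct_used

Each indexer shows up under splunk_server; capacity and free are reported in MB.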
Hi folks, is there any way to bulk-edit many alerts, reports, or dashboards in Splunk Cloud? For example, I'm planning to change the index referenced by many alerts at the same time, but I cannot find an option to do this in bulk. Any insight will be appreciated. Thanks in advance!
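There is no bulk-edit button in the UI as far as I know, but here is a sketch of a workaround, with old_index as a placeholder: inventory the affected alerts from the saved-searches REST endpoint first, then update each one (via the REST API or by editing savedsearches.conf in an app you deploy).

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*index=old_index*"
| table title eai:acl.app eai:acl.owner search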
Below is my search query:

index="inm_inventory"
| table inventory_date, region, vm_name, version
| dedup vm_name
| search vm_name="*old*" OR vm_name="*restore*"

The output is as attached. The challenge here is that each vm_name has a different suffix added, and it's not standard, since users append arbitrary comments, so it could be anything. How do I perform a lookup for the VM names when the lookup file only has hostnames with no suffixes? I have a lookup file named itso.csv with details like hostname (all in lower case), tier, owner, country. I want to use the lookup in my main search for the fields tier, owner, country. The end requirement is to look up the vm_name in the itso.csv file and add details like tier, countrycode, owner to the main search output.
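A sketch, assuming the suffix is separated from the base hostname by a delimiter such as - or _: derive a normalized key with replace(), then run the lookup against it.

index="inm_inventory"
| dedup vm_name
| search vm_name="*old*" OR vm_name="*restore*"
| eval host_key=lower(replace(vm_name, "[-_](old|restore).*$", ""))
| lookup itso.csv hostname AS host_key OUTPUT tier owner country
| table inventory_date, region, vm_name, version, tier, owner, country

If there is no reliable delimiter, an alternative is a lookup definition with match_type WILDCARD(hostname) and the CSV hostnames stored with a trailing *.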
Hi all, I have two similar queries, shown below, to get the total host count and the count of affected hosts.

Query 1 (total host count):

.... | rex field=_raw "(?ms)]\|(?P<host>\w+\-\w+)\|"
| rex field=_raw "(?ms)]\|(?P<host>\w+)\|"
| rex field=_raw "\]\,(?P<host>[^\,]+)\,"
| rex field=_raw "\]\|(?P<host>[^\|]+)\|"
| rex field=_raw "(?ms)\|(?P<File_System>(\/\w+){1,5})\|"
| rex field=_raw "(?ms)\|(?P<Disk_Usage>\d+)"
| rex field=_raw "(?ms)\s(?<Disk_Usage>\d+)%"
| rex field=_raw "(?ms)\%\s(?<File_System>\/\w+)"
| regex _raw!="^\d+(\.\d+){0,2}\w"
| regex _raw!="/apps/tibco/datastore"
| rex field=_raw "(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\s\d"
| rex field=_raw "\[(?P<Time>\w+\s\w+\s\d+\s\d+\:\d+\:\d+\s\w+\s\d+)\]"
| rex field=_raw "(?ms)\d\s(?<Total>\d+(\.\d+){0,2})\w\s\d"
| rex field=_raw "(?ms)G\s(?<Used>\d+(\.\d+){0,2})\w\s\d"
| eval Available=(Total-Used)
| eval Time_Stamp=strftime(_time, "%b %d, %Y %I:%M:%S %p")
| lookup Master_List.csv "host"
| search "Tech Stack"=* Region="GC" Environment=* host=* File_System=* Disk_Usage=*
| stats count by host
| stats count as Total

Query 2 (affected host count) is identical except that the final filter is Disk_Usage>=80 instead of Disk_Usage=*.

I am able to get each host count into its own single-value panel in a trellis layout. But I want both host counts in one panel in a trellis layout (something like the attached sample). Please help me modify the query to get both host counts in one panel on the dashboard. Your kind consideration is highly appreciated! Thank you!
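A sketch of a way to get both counts from one search, reusing the extraction pipeline from Query 1: take the worst Disk_Usage per host, then count all hosts and the hosts at or above 80 in a single stats.

.... (same rex, lookup, and filter pipeline as Query 1, ending with Disk_Usage=*)
| stats max(Disk_Usage) as max_usage by host
| stats count as Total, count(eval(max_usage>=80)) as Affected

Both values land in one result row, and a single-value visualization with the trellis split set to fields can then show them side by side in one panel.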
I have an mstats query that was working fine until last week, but suddenly the success count is not showing up correctly. How do I troubleshoot this issue?
Hi, I want to create an alert that triggers when a user_name exists in a lookup table (e.g. group_names.csv), but I'm not sure how to create the search string for this. The field I'm using in the group_names.csv lookup table is group_names, with values as shown. If the user_name matches a group_names value listed in the table, the alert should trigger. Any help on how to do this is much appreciated. Thanks!
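A sketch, assuming the events carry a user_name field and the lookup column is named group_names, with index=your_index as a placeholder: a subsearch against the lookup keeps only matching events, and the alert triggers when the result count is greater than zero.

index=your_index user_name=*
    [| inputlookup group_names.csv
     | rename group_names as user_name
     | fields user_name ]
| stats count by user_name

The subsearch expands into (user_name="value1" OR user_name="value2" OR ...), so only events whose user_name appears in the lookup survive.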
Hi all, I have a question. My search is:

index=app-data "cgth14678ghj" host=http:jbossserver source=application_data_http:jbossserver-20210102-10.log

When I search with this query, I get events in Splunk. But when I check on the host side, there are no events containing the term cgth14678ghj in the source file. How can these events display in Splunk without being on the server? Where is Splunk taking this data from, if it is not there on the server? Can anyone help me with this?
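One way to narrow this down, assuming the events really are indexed: compare the index time with the event time, since the file on the host may have been rotated or truncated after Splunk ingested it.

index=app-data "cgth14678ghj" host=http:jbossserver source=application_data_http:jbossserver-20210102-10.log
| eval indexed_at=strftime(_indextime, "%F %T")
| table _time indexed_at host source sourcetype splunk_server

If indexed_at is well in the past, the events came from an earlier version of the file that has since changed on disk.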
Using the "virustotal" cmd and it appears that if there are multiple events that have the same file_hash that only one of the events will "populate" the field/values from the virustotal cmd.  I can't... See more...
Using the "virustotal" cmd and it appears that if there are multiple events that have the same file_hash that only one of the events will "populate" the field/values from the virustotal cmd.  I can't post events. Example would be: event 1: _time=08/06/2023 07:00:00 dest=abc1 file_hash=45vv678 file_name=badguy.dll file_path=my_path vt_* will be populated event 2: _time=08/06/2023 07:150:00 dest=abc2 file_hash=45vv678 file_name=badguy.dll file_path=my_path vt_* - nothing will be populated event 3: _time=08/06/2023 07:30:00 dest=abc3 file_hash=45vv678 file_name=badguy.dll file_path=my_path vt_* - nothing will be populated I know the spl is fine as if I were to change the time picker to that of just the 2nd or 3rd event, all the vt_ fields would be populated.  It looks like this is the expected behavior.  Thanks in advance.
Hello, I have the following query, which generates a table with counts for various ports at 15-minute intervals:

index=abc source=xyz SMF119HDSubType=2
| timechart span=15m count by SMF119AP_TTLPort_0001 usenull=f useother=f
| stats values(*) as * by _time
| table _time Port1 Port2

The result is the following table. I only want to display rows with counts of more than 5000. I am trying to use a where Port2>5000 command, but it does not work. I am only displaying 2 port columns here; however, I have several other ports to monitor as well.

_time                 Port1   Port2
2023-08-09 09:30:00     800    2700
2023-08-09 09:45:00    1200    4800
2023-08-09 10:00:00    1300    5300
2023-08-09 10:15:00     600    8000
2023-08-09 10:30:00     400   13500

I would appreciate your input. Thank you, Chinmay.
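A sketch that scales to any number of port columns, assuming every port field should be checked against the same 5000 threshold: flag rows where any column exceeds it with foreach, then filter.

index=abc source=xyz SMF119HDSubType=2
| timechart span=15m count by SMF119AP_TTLPort_0001 usenull=f useother=f
| eval keep=0
| foreach * [ eval keep=if('<<FIELD>>'>5000, 1, keep) ]
| where keep=1
| fields - keep

As a side note, a field name containing a space or starting with a digit needs single quotes in where/eval, e.g. where 'Port 2'>5000; that is a common reason a bare where clause fails.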
Hey y'all - I am attempting to create an efficient search to detect password compromises within some environments. The map command is very intensive and clicking to pass tokens is not automated, so I was wondering if there are other solutions. Below are my two macros: the first detects failed logons where the Account_Name field is longer than 13 characters; the second pulls all of the successful keyboard logins. Using a saved search would also be acceptable. My goal is a result from these two macros that provides the host where the compromise occurred, the time of the initial password compromise, and which user, if any, successfully logged on to that same machine within 120 seconds. This seems like it should be simple, but I can't seem to put two and two together. Appreciate the support!

Macro 1 - password_compromise:

index=wineventlog source="wineventlog:security" "logname=security" (EventCode=4625 Logon_Type=2)
| eval newtime=_time+120
| eval oldtime=_time
| eval Account_Name=mvfilter(Account_Name!="-" AND NOT match(Account_Name,"\$$"))
| eval number=len(Account_Name)
| eval status=case(number>13,"Potential Password Compromise", number<=9,"OK")
| where status="Potential Password Compromise"
| rename Account_Name as "Potential Password?"
| sort -_time
| table _time host "Potential Password?" oldtime newtime status
| outputcsv passwordcompromise_initial.csv

Macro 2 - password_logon:

index=wineventlog source="wineventlog:security" "logname=security" (EventCode=4624 LogonType=2)
| eval AccountName=mvfilter(NOT match(AccountName, "-") AND NOT match(AccountName, "DWM*") AND NOT match(AccountName,"\$$$$"))
| eval logontime=_time
| dedup host, AccountName, logontime
| where AccountName!=""
| rename AccountName as "AuthenticatedUser"
| table _time host "AuthenticatedUser" logontime
| outputcsv password_successfullogon.csv
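A sketch of a single-search correlation that avoids map, assuming the field names from the two macros (note they differ: Logon_Type vs LogonType): pull both event codes, carry the time of the last oversized failed logon forward per host with streamstats, and keep the successes that occur within 120 seconds.

index=wineventlog source="wineventlog:security" ((EventCode=4625 Logon_Type=2) OR (EventCode=4624 LogonType=2))
| eval acct=coalesce(Account_Name, AccountName)
| eval acct=mvfilter(acct!="-" AND NOT match(acct,"\$$"))
| eval compromise_time=if(EventCode=4625 AND len(acct)>13, _time, null())
| sort 0 host _time
| streamstats last(compromise_time) as last_compromise by host
| where EventCode=4624 AND _time-last_compromise<=120
| eval compromised_at=strftime(last_compromise, "%F %T")
| table compromised_at _time host acct

streamstats ignores null values, so last_compromise always holds the most recent potential compromise seen on that host.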
Hi, I wrote a report that merges the results with a lookup table to add fields (like machineName). The lookup table contains the key field source. Then I run sistats as follows:

index=... search query ...
| lookup lk_table_name.csv source AS source
| sistats values(*) as * by TimeStamp,source

If I write the sistats command after the lookup command, the new fields from the lookup table disappear. If I write sistats before the lookup command, everything is OK, but then I have another problem when I try to parse the summary index:

index=summary search_name="query_SummaryIndex_Main"
| stats values(*) as * by TimeStamp,source

What should I do? Why doesn't sistats work after lookup? Thanks, Maayan
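A sketch of the usual workaround, assuming the lookup key (source) survives into the summary data: let sistats run without the lookup, and apply the lookup when reading the summary index, after stats has decoded the si* data.

index=summary search_name="query_SummaryIndex_Main"
| stats values(*) as * by TimeStamp,source
| lookup lk_table_name.csv source

Since the lookup is repeated at reporting time, the enrichment also stays current if the CSV changes.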
While browsing the documentation for the Splunk Add-on for Microsoft Office 365, I see the statement: "The MessageTrace API for the Splunk Add-on for Microsoft Office 365 does not support data collection for USGovGCC and USGovGCCHigh endpoints." (https://docs.splunk.com/Documentation/AddOns/released/MSO365/Configureinputmessagetrace) Is this a limitation of the Splunk add-on, or of Microsoft? Being on USGovGCC, it would certainly be nice to have message traces...
Hi, can anyone tell me how to replace a client secret with the new value I received? I am new to Splunk and badly need help here!
Greetings, we started seeing OpenSSL vulnerabilities flagged on all of our Splunk forwarders and the main engine this week. The advisory tells us we must use OpenSSL 3.0.8 or newer. Since OpenSSL is now on 3.1.2, I really thought the latest Splunk updates would fix the problem. I have just updated all forwarders to 9.1.0.1 and the main engine to 9.1.0.2, and it still shows OpenSSL at 3.0.7. When will Splunk issue an update that brings OpenSSL to at least 3.0.8?
I have a lookup test_lookup with 2 fields, a1 and a2. The field a1 is common with a field in the raw data. The values of fields a1 and a2 are as follows:

a1  a2
a   1
a   2
b   3
b   4

What would be the output of the command ... | lookup test_lookup a1 OUTPUT a2?
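For reference, assuming a standard CSV lookup with default settings: lookup returns every matching row, so a2 comes back as a multivalue field (up to the lookup's max_matches limit, which defaults to 100 for file-based lookups).

a1=a  ->  a2 = (1, 2)
a1=b  ->  a2 = (3, 4)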
Dear team, I am setting up the machine agent and trying to send data to the server, but I am getting this error (Windows machine agent):

Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to *****.appdynamics.com:443 [****-dev.saas.appdynamics.com/***.139, *****-saas.appdynamics.com/18.***.***.23, ****-dev.saas.appdynamics.com/5*.2*.**.***] failed: connect timed out

Things we tried and checked:
- Enabled debug mode and captured the log
- Updated the proxy in the conf with port and URL
- Confirmed the URL can be reached via browser and via the curl command
- Imported certs into the machine agent's security lib

Note: via browser I can reach the server. Please throw some light on this. Thanks in advance, Sathish
Hi team, I was trying to find workstation clock-out-of-sync logs in Splunk using the query below, but I am not getting the expected logs. Also, I cannot see the Previous Time field among the interesting fields, even though it exists in the raw message. Can someone help me make this query work?

index=*_win sourcetype=wineventlog EventCode=4616 category="Security State Change"
| stats max(Previous Time) by asset_id
| where isnull(lastTime)
| addinfo
| eval hourDiff=floor((info_max_time-info_min_time)/3600)
| fields dest,should_timesync,hourDiff

Thanks in advance. Sample raw value: Previous Time: 2023-08-09T09:18:00.490316500Z
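A sketch of a working version, assuming the event text contains "Previous Time:" and "New Time:" values in ISO-8601 with nanoseconds, as in the sample above: extract both timestamps, parse them, and report hosts whose clock jumped by more than a threshold (60 seconds here, as a placeholder).

index=*_win sourcetype=wineventlog EventCode=4616 category="Security State Change"
| rex "Previous Time:\s*(?<previous_time>\S+)"
| rex "New Time:\s*(?<new_time>\S+)"
| eval prev_epoch=strptime(previous_time, "%Y-%m-%dT%H:%M:%S.%9QZ")
| eval new_epoch=strptime(new_time, "%Y-%m-%dT%H:%M:%S.%9QZ")
| eval drift_sec=abs(new_epoch-prev_epoch)
| stats max(drift_sec) as max_drift_sec by host
| where max_drift_sec>60

Note that a field name with a space, like Previous Time, also needs quoting in stats, e.g. max("Previous Time"), which is one reason the original query returns nothing.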
Hi team, I am setting up an alert in Splunk where my data is in the format below. I am writing a query that should return only those rows where CertExpiry is within 15 days; basically, the alert should trigger if the cert expires in the next 15 days.

Component   Server   CertExpiry
Zone.jar    sample   September 13, 2023 9:49:49 AM CDT
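A sketch, assuming CertExpiry is a string field in exactly the format shown: parse it with strptime and compare against now().

... | eval expiry_epoch=strptime(CertExpiry, "%B %d, %Y %I:%M:%S %p %Z")
| eval days_left=floor((expiry_epoch-now())/86400)
| where days_left>=0 AND days_left<=15
| table Component Server CertExpiry days_left

Set the alert to trigger when the number of results is greater than 0.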