All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I want to add a dropdown menu to a table value. Each value in a row should be a collapsible dropdown giving the description of the value. For example, if my column entry has the value R_5, clicking it should expand and show radius=5. I am able to use a tooltip for this, but I want a dropdown instead.
Hi Community! I'm hoping someone can set my head straight. I have two app inputs: one that I push to all *NIX servers (Splunk_TA_nix), and an additional app that I want to push to one specific server, serverXX (Splunk_TA_nix_serverXX_inputs). For serverXX, I want an additional blacklist entry to exclude all files named /var/log/syslog/XYZ.*

Splunk_TA_nix/local/inputs.conf (other stanzas exist but have been removed for this example):

[monitor:///var/log]
whitelist = kern*|syslog$
blacklist = (lastlog|cron|FILES.*$)
disabled = 0
index = nix
sourcetype = syslog

Splunk_TA_nix_serverXX_inputs/local/inputs.conf (the app just contains this stanza):

[monitor:///var/log]
whitelist = kern*|syslog$
blacklist = (lastlog|cron|FILES.*$|XYZ\.)
disabled = 0
index = nix
sourcetype = syslog

I tried pushing the two apps to serverXX, and btool shows that it's picking up the blacklist from Splunk_TA_nix (not the one with the XYZ), so I guess I'm doing this all wrong! What is the correct way to exclude XYZ files for only serverXX while deploying to all *NIX hosts?
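If btool prefers Splunk_TA_nix, the likely culprit is app-name precedence: for inputs.conf in the global context, conflicting attributes in the same stanza are resolved by the ASCII sort order of the app directory names, and (as I understand the precedence rules) the app that sorts first wins, so Splunk_TA_nix beats Splunk_TA_nix_serverXX_inputs. A minimal sketch of a workaround, using a hypothetical app name that sorts ahead of the base TA:

A_TA_nix_serverXX_inputs/local/inputs.conf:

[monitor:///var/log]
blacklist = (lastlog|cron|FILES.*$|XYZ\.)

Only the conflicting attribute needs to be repeated; the remaining settings still merge in from Splunk_TA_nix.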
Hi there,

We have one of our event logs set to archive, but there were some files already there before we started ingesting this log. If I want to bring these logs into Splunk, how do I do it? I understand in this case UF and WI are the only options.

I deployed the below with the deployment server and restarted deploy-server, but this log did not make it to Splunk. Any ideas what could be the problem? Or is there any other way I can bring exported/archived event logs into Splunk?

[monitor://C:\windows\system32\winent\logs\Archive_log.evtx]
disabled = 0
index = idx
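For reference, a minimal monitor sketch for an archived .evtx file, assuming a Windows universal forwarder (evtx decoding only works on Windows) and the default archive location; the path below uses winevt, so the winent segment in the stanza above is worth double-checking:

[monitor://C:\Windows\System32\winevt\Logs\Archive_log.evtx]
disabled = 0
index = idx
# optional: force re-reading a file whose beginning matches something already indexed
crcSalt = <SOURCE>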
Average response time with a 10% additional buffer (single number)
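A minimal SPL sketch, assuming a numeric field named response_time (the index and field names below are placeholders):

index=your_index
| stats avg(response_time) as avg_rt
| eval avg_rt_with_buffer = round(avg_rt * 1.10, 2)
| fields avg_rt_with_buffer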
I have a simple lookup table that contains a list of IPs.  I'd like to take this list and search across all of my indexes, which don't all use the same fields for source/destination IPs.  What would be the best/most efficient way to search all of these indexes for IP matches?
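One common pattern, sketched here assuming the lookup is named ip_list.csv with a field called ip: rename the field to query in a subsearch, which makes the values match against raw event text regardless of which field holds the IP:

index=*
    [ | inputlookup ip_list.csv
      | rename ip as query
      | fields query ]

This matches no matter what the field is called, but scanning every index is expensive, so constrain the index list and time range where possible.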
Hi, I'm having trouble seeing the "Advanced Hunting Results" dashboard section of the "Microsoft 365 App for Splunk" app. I have the "Splunk Add-on for Microsoft Security" installed, but I can't get the sourcetype m365:defender:incident:advanced_hunting. I already validated the permissions within the application in AAD and they are granted. Any ideas?
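A quick sanity check, sketched as a plain search (the sourcetype is taken from the post; the index wildcard and time range are placeholders):

index=* sourcetype="m365:defender:incident:advanced_hunting" earliest=-7d
| stats count by index, source

If this returns nothing, the dashboard has no data to render and the problem is on the input/ingestion side rather than in the app.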
So I have a search I run for an alert which looks for a missing event. It's a simple tstats that shows stuff within the last 30 days. I would like to compare against the 90-day variant in the same search and determine the missing events. Any ideas?
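A sketch of one way to compare the two windows in a single search, assuming host is the entity being tracked and your_index is a placeholder:

| tstats count where index=your_index earliest=-90d by host
| append
    [ | tstats count where index=your_index earliest=-30d by host
      | eval recent=1 ]
| stats sum(count) as total_90d, max(recent) as recent by host
| where isnull(recent)

Whatever survives the final where was seen in the last 90 days but not in the last 30, i.e. the missing entities.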
Hello, I have the below SPL with the two mvindex functions. mvindex position 6 in the array is supposed to apply HTTP statuses for /developers, and mvindex position 10 is supposed to apply HTTP statuses for /apps. Currently positions 6 and 10 are crossing events, applying to both APIs. Is there any way I can have each mvindex apply to only one API?

(index=wf_pvsi_virt OR index=wf_pvsi_tmps) (sourcetype="wf:wca:access:txt" OR sourcetype="wf:devp1:access:txt") wf_env=PROD
| eval temp=split(_raw," ")
| eval API=mvindex(temp,4,8)
| eval http_status=mvindex(temp,6,10)
| search ( "/services/protected/v1/developers" OR "/wcaapi/userReg/wgt/apps" )
| search NOT "Mozilla"
| eval API = if(match(API,"/services/protected/v1/developers"), "DEVP1: Developers", API)
| eval API = if(match(API,"/wcaapi/userReg/wgt/apps"), "User Registration Enhanced Login", API)
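Worth noting: mvindex(temp,6,10) returns the whole range of elements 6 through 10 as a multivalue field, not "position 6 or position 10". A sketch of a per-event choice instead, assuming the /developers status really sits at position 6 and the /apps status at position 10:

| eval temp=split(_raw," ")
| eval http_status=case(
    match(_raw, "/services/protected/v1/developers"), mvindex(temp,6),
    match(_raw, "/wcaapi/userReg/wgt/apps"), mvindex(temp,10))

case() picks exactly one position per event, based on which API the event contains.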
Field = 1.123456789
Field = 14.123456
Field = 3.1234567

I need to run a query that will return the number of decimals for each record in Field.

Expected result:
9
6
7
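A minimal sketch: split the value on the decimal point and take the length of the fractional part (coalesce covers values with no decimal point at all):

| eval decimals = coalesce(len(mvindex(split(tostring(Field), "."), 1)), 0)

For Field=1.123456789 this returns 9, matching the expected result.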
Hello all. I have a .csv report that gets generated regularly and that I'm monitoring; that part is working fine. I'm trying to figure out how to display it, because the data (events?) are in columns. Is this possible? Example data here:

Hosts     server1   server2
IPLevel   median    median
Tip1662   N/A       N/A
Tip1663   PASSED    PASSED
Tip1664   FAILED    FAILED
Tip1666   PASSED    PASSED
Tip1667   PASSED    PASSED
Tip1668   PASSED    PASSED
Tip1669   N/A       N/A
Tip1671   PASSED    PASSED
Tip1674   SKIPPED   SKIPPED
Tip1675   FAILED    FAILED
Tip1676   PASSED    PASSED
Tip1677   PASSED    PASSED
Tip1680   PASSED    PASSED
Tip1685   PASSED    PASSED
Tip1687   PASSED    PASSED
Tip1688   SKIPPED   SKIPPED
Tip1689   SKIPPED   SKIPPED
Tip1690   FAILED    FAILED
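If the goal is one result row per tip/server pair, a sketch using untable (inputlookup is used purely for illustration; the field names are assumed to match the header row above, and you may want to filter out the IPLevel row):

| inputlookup tips_report.csv
| untable Hosts host result

untable pivots the wide server1/server2 columns into a long layout with one row per tip and host, which charts and filters more naturally.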
I have an alert configured: the search finds an error in a Windows event log, and the alert is set up to trigger a notification email. Is there a way to have the alert run a PowerShell script when the error is found?
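One route is the legacy "Run a script" alert action (deprecated in favor of custom alert actions, but workable). A sketch with hypothetical file names: place a batch wrapper in $SPLUNK_HOME\bin\scripts and reference it from the saved search in savedsearches.conf:

[My Windows Error Alert]
action.script = 1
action.script.filename = run_fix.bat

run_fix.bat then just launches PowerShell:

powershell.exe -ExecutionPolicy Bypass -File "C:\scripts\fix.ps1"

A custom alert action is the more modern option if the script needs fields from the alert results.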
I am looking to create a simple pie chart that contrasts the total number of users during any given timeframe vs. how many logged into a specific app. I am probably overthinking this, but what I did is a search for the distinct_count of users during a period, then joined another search that calculates the distinct_count of users that logged into a specific app over that same period. For example:

index="okta" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
| stats dc(actor.alternateId) as "Total Logins"
| join
    [ | search index="okta" "target{}.displayName"="Palo Alto Networks - Prisma Access" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
      | stats dc(actor.alternateId) as "Total Palo Logins" ]
| table "Total Palo Logins" "Total Logins"

The only issue is I can't get a proper pie graph of the percentage of Palo logins vs. total logins. Any help would be appreciated. I am sure I am missing something simple here.
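A pie chart wants one row per slice, which the joined single-row result can't give it. A sketch without the join, using a conditional distinct count and transposing into slice rows (field names taken from the post):

index="okta" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
| stats dc(actor.alternateId) as total,
        dc(eval(if('target{}.displayName'="Palo Alto Networks - Prisma Access", 'actor.alternateId', null()))) as palo
| eval other = total - palo
| fields palo other
| transpose
| rename column as app_group, "row 1" as users

The resulting app_group/users rows chart as two slices: Palo logins and everyone else.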
Is there any way to easily get HTTP request logs from Splunk-created apps? There is a failure in communicating with Zscaler. The error message seems to be generated on their side, but they are pushing hard for the body of the message that was sent to their API. Since the app was created by Splunk, I'm disinclined to hack logging into it just to get this intermittent data for Zscaler. Any suggestions from the community?
I have two lookups that contain lists of subnets and the names of the subnets. One lookup (subnet1.csv) has fields called name and subnet, and the other (subnet2.csv) has fields named Name and Range. I would like to combine the two. So far I have this:

| inputlookup subnet1.csv
| lookup subnet2.csv Name Range OUTPUT Range AS Subnet
| table Name Subnet

This doesn't seem to work. When I run it, I only get the results from subnet1.csv and I can't seem to figure out why.
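If the intent is to union the two lists rather than join them, a sketch (assuming subnet1.csv really does use lowercase name/subnet as described):

| inputlookup subnet1.csv
| rename name as Name, subnet as Subnet
| append
    [ | inputlookup subnet2.csv
      | rename Range as Subnet ]
| table Name Subnet

The lookup command in the original only enriches rows that already match on both Name and Range, and the rows coming from subnet1.csv carry neither of those fields, which is why only subnet1.csv results appear.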
Without the ability to remove testing errors from the uptime calculation when reporting monthly numbers, I spend a lot of time doing it manually (multiple teams). To alleviate this, I plan on writing a Pandas script to automate the process, but I need to export a CSV with a column that includes the success or failure of each run (HTTP Check). I don't see CSV as an export option aside from the comparison reports, and the comparison reports only allow me to use RB tests. Can anyone direct me to a mechanism to export run data (success/failure) for HTTP checks via CSV?

Legacy Synthetics (Rigor)
Hi, we have a new implementation of Splunk ITSI, running on Splunk Cloud, in a new search head. Since the day the search head was installed, every search that we run is followed by a warning message about a missing eventtype, similar to:

"[idx-1.my-company.splunkcloud.com,idx-2.my-company.splunkcloud.com] Eventtype 'wineventlog-ds' does not exist or is disabled."

Has anyone experienced this behavior with Splunk ITSI? Or does anyone know which source app/add-on contains this eventtype that ITSI is referencing? Thanks!
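To check where (or whether) the eventtype is defined on the search head, a REST sketch (standard endpoint; run with admin rights):

| rest /servicesNS/-/-/saved/eventtypes splunk_server=local
| search title="wineventlog-ds"
| table title eai:acl.app eai:acl.sharing disabled

An empty result means no app on the search head defines it, and the warning comes from something (a tag, saved search, or knowledge bundle) that references it anyway.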
I have a Min Host alert that was deleted but is still triggering and spamming our support systems. How can I stop this from occurring? The alert does not appear in the Active Alerts or Detectors lists. I recreated the alert with the same name, but the old code is still triggering. Is there a way to disable a deleted alert or flush it from the SignalFx system?

Thanks,
-Sean
I have logs that get generated every 5 minutes:

time=2023-02-06 00:01:00, app=bema, currentUseCount=7
time=2023-02-06 00:06:00, app=bema, currentUseCount=7
time=2023-02-06 00:11:00, app=bema, currentUseCount=10
time=2023-02-06 00:16:00, app=bema, currentUseCount=8
time=2023-02-06 00:21:00, app=ash, currentUseCount=12
time=2023-02-06 00:26:00, app=ash, currentUseCount=10
time=2023-02-06 00:31:00, app=ash, currentUseCount=8
time=2023-02-06 00:36:00, app=ash, currentUseCount=9

How can I calculate the hours spent on each app based on the above logs?
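If each event represents one 5-minute sample of the app being in use (an assumption based on the logging interval), a minimal sketch counts samples per app and converts to hours:

<base search>
| stats count by app
| eval hours = round(count * 5 / 60, 2)

With the sample above, each app has four events, i.e. 4 * 5 / 60 ≈ 0.33 hours.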
I have the following search query that I've been using so far to display the unique values in lists of Ids:

<search>
| eval ids=if(group_id >= 4, id, '')
| eval type_x_ids=if((group_id >= 4 AND is_type_x="true"), id, '')
| eval non_type_x_ids=if((group_id >= 4 AND is_type_x="false"), id, '')
| stats count as total_count, values(type_x_ids) as list_of_x_ids, values(non_type_x_ids) as list_of_non_x_ids, values(ids) as list_of_all_ids by some_characteristic

Now that I've seen which Ids are in the lists, I would like to change the query to count the number of unique Ids in the lists, split up by some characteristic. mvcount doesn't seem to work in the stats command the way I tried it:

Attempt 1:
| stats count as total_count, mvcount(type_x_ids) as num_of_x_ids, mvcount(non_type_x_ids) as num_of_non_x_ids, mvcount(ids) as num_of_all_ids by some_characteristic

Attempt 2:
| stats count as total_count, mvcount(values(type_x_ids)) as num_of_x_ids, mvcount(values(non_type_x_ids)) as num_of_non_x_ids, mvcount(values(ids)) as num_of_all_ids by some_characteristic

How should I write the stats line so I get a table that shows the number of unique Ids in each list, split by some characteristic? I would like the following fields in my resulting table:

| some_characteristic | total_count | num_of_x_ids | num_of_non_x_ids | num_of_all_ids |

I would appreciate any help you can give!!
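mvcount is an eval function, not a stats aggregation, which is why neither attempt parses. A sketch using dc (distinct count) directly, with null() instead of '' so non-matching events stay out of the counts:

<search>
| eval ids=if(group_id >= 4, id, null())
| eval type_x_ids=if(group_id >= 4 AND is_type_x="true", id, null())
| eval non_type_x_ids=if(group_id >= 4 AND is_type_x="false", id, null())
| stats count as total_count,
        dc(type_x_ids) as num_of_x_ids,
        dc(non_type_x_ids) as num_of_non_x_ids,
        dc(ids) as num_of_all_ids
    by some_characteristic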
Hi, I am working on a playbook which will check for any new artifact that is added during playbook execution. It must repeatedly check for new artifacts. I am looking to add custom code that will be triggered by the addition of any new artifact.

Regards,
Sujoy