All Topics

I have an alert configured: the search finds an error in a Windows event log, and the alert is set up to trigger a notification email. Is there a way to have the alert run a PowerShell script when the error is found?
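A minimal sketch of one common approach, using a legacy scripted alert action in savedsearches.conf; the stanza name and file names are hypothetical, and note that Splunk now steers toward custom alert actions instead:

[Windows Error Alert]
action.script = 1
action.script.filename = run_remediation.bat

run_remediation.bat would live in $SPLUNK_HOME\bin\scripts\ and wrap the PowerShell call, e.g.:

powershell.exe -ExecutionPolicy Bypass -File "%SPLUNK_HOME%\bin\scripts\remediate.ps1"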
I am looking to create a simple pie chart that contrasts the total number of users during any given timeframe vs. how many logged into a specific app. I am probably overthinking this, but what I did is a search for the distinct count of users during a period, then joined another search that calculates the distinct count of users who logged into a specific app over that same period. For example:

index="okta" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
| stats dc(actor.alternateId) as "Total Logins"
| join [ search index="okta" "target{}.displayName"="Palo Alto Networks - Prisma Access" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
    | stats dc(actor.alternateId) as "Total Palo Logins"]
| table "Total Palo Logins" "Total Logins"

The only issue is I can't get a proper pie chart of the percentage of Palo logins vs. total logins. Any help would be appreciated. I am sure I am missing something simple here.
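A minimal join-free sketch, assuming the field names from the search above; a pie chart wants one category column and one value column, which transpose produces:

index="okta" "outcome.result"=SUCCESS displayMessage="User single sign on to app"
| eval palo_user=if('target{}.displayName'="Palo Alto Networks - Prisma Access", 'actor.alternateId', null())
| stats dc(actor.alternateId) as total_logins, dc(palo_user) as palo_logins
| eval other_logins=total_logins - palo_logins
| fields palo_logins other_logins
| transpose column_name="segment"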
Is there any way to get HTTP request logs easily from Splunk-created apps? There is a failure in communicating with Zscaler. The error message seems to be generated on their side, but they are pushing hard for the body of the message that was sent to their API. Since the app was created by Splunk, I'm disinclined to hack logging into their app just to capture this intermittent data for Zscaler. Any suggestions from the community?
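One low-effort starting point, hedged: most Splunk add-on inputs write their own logs to the _internal index, so a search along these lines may surface request/response detail, possibly after raising the add-on's log level to DEBUG in its configuration page; the source pattern is an assumption, adjust it to the add-on's actual log file name:

index=_internal source=*zscaler* (log_level=ERROR OR log_level=DEBUG)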
I have two lookups that contain lists of subnets and the names of the subnets. One lookup (subnet1.csv) has fields called name and subnet, and the other (subnet2.csv) has fields named Name and Range. I would like to combine the two. So far I have this:

| inputlookup subnet1.csv
| lookup subnet2.csv Name Range OUTPUT Range AS Subnet
| table Name Subnet

This doesn't seem to work. When I run it, I only get the results from subnet1.csv, and I can't figure out why.
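A minimal sketch, assuming the goal is to stack the rows of both files rather than join them on a key; inputlookup's append=true option adds the second file's rows to the first, and coalesce merges the differently named columns:

| inputlookup subnet1.csv
| rename name AS Name, subnet AS Subnet
| inputlookup append=true subnet2.csv
| eval Subnet=coalesce(Subnet, Range)
| table Name Subnet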
Without the ability to remove testing errors from the uptime calculation when reporting monthly numbers, I spend a lot of time doing it manually (multiple teams). To alleviate this, I plan on writing a pandas script to automate the process, but I need to export a CSV with a column that records the success or failure of each run (HTTP check). I don't see CSV as an export option aside from the comparison reports, and the comparison reports only allow me to use RB (real browser) tests. Can anyone direct me to a mechanism to export run data (success/failure) for HTTP checks as CSV? Legacy Synthetics (Rigor)
Hi, we have a new implementation of Splunk ITSI, running on Splunk Cloud, on a new search head. Since the day the search head was installed, every search that we run is followed by a warning message about a missing eventtype, similar to the one below:

"[idx-1.my-company.splunkcloud.com,idx-2.my-company.splunkcloud.com] Eventtype 'wineventlog-ds' does not exist or is disabled."

Has anyone experienced this behavior with Splunk ITSI? Or does anyone know which source app/add-on contains this eventtype that ITSI is referencing? Thanks!
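A quick way to hunt for the object, sketched under the assumption that the eventtype is defined (or should be) in some app visible to the search head; if this returns nothing, the eventtype is only referenced, typically by a tag or data model from a missing add-on:

| rest /services/saved/eventtypes splunk_server=local
| search title="wineventlog-ds"
| table title eai:acl.app disabled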
I have a Min Host alert that was deleted but is still triggering and spamming our support systems. How can I stop this from occurring? The alert does not appear in the Active Alerts or Detectors lists. I recreated the alert with the same name, but the old code is still triggering. Is there a way to disable a deleted alert or flush it from the SignalFx system? Thanks, Sean
I have logs that get generated every 5 minutes:

time=2023-02-06 00:01:00, app=bema, currentUseCount=7
time=2023-02-06 00:06:00, app=bema, currentUseCount=7
time=2023-02-06 00:11:00, app=bema, currentUseCount=10
time=2023-02-06 00:16:00, app=bema, currentUseCount=8
time=2023-02-06 00:21:00, app=ash, currentUseCount=12
time=2023-02-06 00:26:00, app=ash, currentUseCount=10
time=2023-02-06 00:31:00, app=ash, currentUseCount=8
time=2023-02-06 00:36:00, app=ash, currentUseCount=9

How can I calculate the hours spent on each app based on the above logs?
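A minimal sketch, under the assumption that each event represents five minutes of activity on that app (so hours ≈ event count × 5 / 60); the index and sourcetype names are placeholders:

index=your_index sourcetype=your_sourcetype
| stats count BY app
| eval hours_spent=round(count * 5 / 60, 2)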
I have the following search query that I've been using so far to display the unique values in lists of IDs:

<search>
| eval ids=if(group_id >= 4, id, '')
| eval type_x_ids=if((group_id >= 4 AND is_type_x="true"), id, '')
| eval non_type_x_ids=if((group_id >= 4 AND is_type_x="false"), id, '')
| stats count as total_count, values(type_x_ids) as list_of_x_ids, values(non_type_x_ids) as list_of_non_x_ids, values(ids) as list_of_all_ids by some_characteristic

Now that I've seen which IDs are in the lists, I would like to change the query to count the number of unique IDs in the lists, split by some characteristic. mvcount doesn't seem to work in the stats command the way I tried it:

Attempt 1:
| stats count as total_count, mvcount(type_x_ids) as num_of_x_ids, mvcount(non_type_x_ids) as num_of_non_x_ids, mvcount(ids) as num_of_all_ids by some_characteristic

Attempt 2:
| stats count as total_count, mvcount(values(type_x_ids)) as num_of_x_ids, mvcount(values(non_type_x_ids)) as num_of_non_x_ids, mvcount(values(ids)) as num_of_all_ids by some_characteristic

How should I write the stats line so I get a table that shows the number of unique IDs in each list, split by some characteristic? I would like the following fields in my resulting table:

| some_characteristic | total_count | num_of_x_ids | num_of_non_x_ids | num_of_all_ids |

I would appreciate any help you can give!
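A minimal sketch: mvcount is an eval function, not a stats function, but stats has dc() for exactly this distinct count; using null() instead of '' keeps non-matching rows out of the counts (the eval lines are otherwise assumed unchanged from above):

| eval ids=if(group_id >= 4, id, null())
| eval type_x_ids=if(group_id >= 4 AND is_type_x="true", id, null())
| eval non_type_x_ids=if(group_id >= 4 AND is_type_x="false", id, null())
| stats count as total_count, dc(type_x_ids) as num_of_x_ids, dc(non_type_x_ids) as num_of_non_x_ids, dc(ids) as num_of_all_ids by some_characteristic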
Hi, I am working on a playbook that checks for any new artifact added during playbook execution. It must repeatedly check for new artifacts. I am looking to add custom code that will be triggered by the addition of any new artifact. Regards, Sujoy
index=akamai "httpMessage.host"="*" "httpMessage.path"="/auth/realms/user/login-actions/authenticate" "*User-Agent:*"
| spath "attackData.clientIP"
| rename "attackData.clientIP" as ipAddress, httpMessage.host as Host, httpMessage.path as Path, User-Agent as "User Agent"
| where [search index=keycloak type=LOGIN* [ inputlookup fraud_accounts.csv | rename "Account Login" as customerReferenceAccountId, "Input IP" as ipAddress | return 1000 customerReferenceAccountId ] | return 10000 ipAddress ]
| table ipAddress, Host, Path, _time, "Account ID", "User Agent"
Hello Splunkees, I have a requirement to calculate the availability (uptime percentage) of some critical APIs. We ingest those API logs into Splunk, and they tell us the throughput, latency, and HTTP status codes. Is there a way to calculate the availability of an API using these metrics? I mean something like calculating the success and failure rates and, based on those, coming up with a number that says how available my API is. Does anyone have a basic query that can calculate that? I have created something like the below to calculate the success and failure rates:

index=myapp_prod sourcetype="service_log" MyCriticalAPI Status=200
| timechart span=15m count as SuccessRequest
| appendcols [ search index=myapp_prod sourcetype="service_log" MyCriticalAPI NOT Status=200
    | timechart span=15m count as FailedRequest]
| eval Total = SuccessRequest + FailedRequest
| eval successRate = round(((SuccessRequest/Total) * 100),2)
| eval failureRate = round(((FailedRequest/Total) * 100),2)
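A minimal single-pass variant, assuming Status=200 is the only success code; counting both outcomes in one timechart avoids the bucket-alignment and empty-result pitfalls of appendcols:

index=myapp_prod sourcetype="service_log" MyCriticalAPI
| timechart span=15m count(eval(Status=200)) as SuccessRequest, count(eval(Status!=200)) as FailedRequest
| eval Total = SuccessRequest + FailedRequest
| eval availability = round((SuccessRequest / Total) * 100, 2)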
Hello, would you copy the app's full folder to another location as a backup, extract the new app from the .tgz into master-apps or shcluster/apps, and then copy your local folder from the backup into the new one? Thanks.
Hello, I find it difficult to stop the search once I get the first result in multisearch. I tried | head 1, but it can't be used in multisearch. Is there any way to stop it, to improve my search efficiency? I have over 10 indexes, each with over 10 million entries, to search.

| multisearch
    [index=A | search ....]
    [index=B | search ....]
    [index=C | search ....]
    [index=D | search ....]
    ....

Thank you so much.
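A minimal sketch of one workaround: multisearch only accepts streaming commands, which is why head is rejected, but append runs each subsearch independently and does allow head, at the cost of running them sequentially:

index=A .... | head 1
| append [search index=B .... | head 1]
| append [search index=C .... | head 1]
| append [search index=D .... | head 1]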
Hello everyone, I got the following table after a search:

ip         subnets
10.0.0.2   10.0.0.0/24
10.0.0.3   10.0.0.0/24
           172.24.23.23/24

I want to check whether the ip belongs to the subnets, using this comparison:

| eval match=if(cidrmatch(subnets, ip), "match", "nomatch")

It works correctly if there is one subnet, but not if there are more. How can I correct my search query?
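A minimal sketch using mvmap (available from Splunk 8.0), which evaluates cidrmatch against each value of the multivalue subnets field in turn:

| eval hits=mvmap(subnets, if(cidrmatch(subnets, ip), "match", null()))
| eval match=if(isnotnull(hits), "match", "nomatch")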
Hi team, we have some questions about uploading data.
1) Can we upload sample JSON/CSV data that is CIM-compatible, and can we see a demo?
2) How do we ingest network/network-traffic-related sample data into Splunk Enterprise?
3) Similarly, we are looking for more sample data related to email, MAC addresses, etc., on Splunk Enterprise (trial account).
Regards, Anand
Hello everyone, our requirement is to fetch/download the service health score via the REST API. We are on Splunk Cloud at the moment. Thank you.
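A minimal sketch, under the assumption that ITSI's summary index is in play: service health scores are typically written to itsi_summary as the ServiceHealthScore KPI, so a search like the one below can be executed through the standard search-job REST endpoints; verify the index and field names in your deployment:

index=itsi_summary kpi=ServiceHealthScore
| stats latest(alert_value) as health_score by serviceid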
Hi everyone, I want to deploy standard inputs for ca. 50 Linux UFs via custom apps. Since there is a difference between standard log paths on Debian and RedHat flavored systems, I want to know whether it is possible to differentiate these systems on the Forwarder Management side, e.g.:

- Server class DEB-based -> App A
- Server class RPM-based -> App B

I'm aware that I can tell Windows and Linux machine types apart, but is it possible to distinguish the latter in more detail? I would rather not use Ansible to deploy all the inputs.conf files, because I think that will be a mess when updates to the configs are pending. Thank you
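A minimal sketch of one common workaround, since machineTypesFilter only distinguishes OS/architecture, not distribution: set a distro-specific clientName in deploymentclient.conf at install time, then whitelist on it in serverclass.conf (all names below are hypothetical):

deploymentclient.conf on each Debian-based UF:
[deployment-client]
clientName = uf-deb-linux01

serverclass.conf on the deployment server:
[serverClass:DEB-based]
whitelist.0 = uf-deb-*
[serverClass:DEB-based:app:AppA]
restartSplunkd = true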
Hello, for some time now we have been experiencing high CPU and/or memory usage related to the splunk_ta_o365 input add-on. The processes impacted are the Python ones:

%CPU   PID      USER    COMMAND
98.9   317938   splunk  /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365_graph_api.py
97.7   317203   splunk  /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365_graph_api.py
8.8    3058237  splunk  splunkd -p 8089 restart

Updating didn't resolve the issue. Is anyone else experiencing this? How can we manage it? Thanks!!
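One way to gauge what those inputs are doing, hedged: add-on modular inputs typically log to _internal, so a search like this (the source pattern is an assumption, adjust to your environment) can show errors, throttling, or retry loops from the Graph API input:

index=_internal source=*splunk_ta_o365* (log_level=ERROR OR log_level=WARNING)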
Hello, I have a deployment app that monitors a log file from an external server; it worked fine since last year. But suddenly, since 26/1/2023 until now, it hasn't indexed anything. Nothing changed on the server side or on my side, and the host still produces log files on a daily basis. I also requested a connection check and restarted the deployment client, but there was no improvement. My inputs.conf is:

[monitor:///u01/pv/log-1/data/trafficmanager/enriched/access/*.log]
disabled = 0
index = my index
sourcetype = my sourcetype

An example log file name is: access_worker_6_2023_01_26.log. I would like to resolve this problem, even redoing every step if I have to, because this is urgent. And I would like to know how to troubleshoot step by step to find where the problem is, and how to prevent it in the future.
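A first troubleshooting step, sketched: the forwarder reports file-tailing problems to _internal, so filtering on the tailing components for that host (the hostname placeholder is hypothetical, and the trailing path keyword is optional) often reveals permission, rotation, or file-too-old issues:

index=_internal host=<your_forwarder> (component=TailingProcessor OR component=TailReader OR component=WatchedFile) trafficmanager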