All Topics

Can you create a query that searches all the logs written to an index over the last 24 hours and groups them by index? Something similar to a table showing the number of events added per index over the time period you select. It would be much appreciated, thank you so much for your help :)
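A minimal sketch of one way to do this, using tstats for speed (run it over "Last 24 hours", or whatever time range you pick):

```
| tstats count AS event_count WHERE index=* BY index
| sort - event_count
```

Because tstats reads index-time metadata rather than raw events, this stays fast even across large indexes.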
I'm trying to count the number of sessions (identified by sessionId) that have more than 2 intents (an intent is a field value), and also include the total number of sessions, including sessions with 0 or 1 intents. I can't figure out the concept for this query.

index=conversation botId=ccb
| eval intent_total=if(intent=*, 1, 0)
| stats sum(intent_total) by sessionId
| where intent_total > 2
| table sessionId intent_count
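One possible sketch, assuming each event carries at most one intent value (field names taken from the question): count events that have an intent per session, capture the total session count before filtering, then keep only sessions with more than 2 intents.

```
index=conversation botId=ccb
| stats count(eval(isnotnull(intent))) AS intent_count BY sessionId
| eventstats count AS total_sessions
| where intent_count > 2
| table sessionId intent_count total_sessions
```

The eventstats runs before the where clause, so total_sessions still includes sessions with 0 or 1 intents.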
Hi, I am using the Cisco WSA proxy and I need help creating a use case for proxy avoidance/bypass detection. Can you please help me with the query?
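A hedged starting point only: the index, sourcetype, the x_webcat_code_abbr field, and the category codes below are assumptions that depend on your Cisco WSA add-on and log format, so verify them against your own data first.

```
index=proxy sourcetype="cisco:wsa:squid"
    x_webcat_code_abbr IN ("filt-avoid", "prox-anon")
| stats count BY src_ip, user, url
| sort - count
```

The idea is to key off the WSA URL-category codes for "Filter Avoidance" and anonymizer sites; substitute whatever category field and values your deployment actually emits.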
I want to check whether a user is detected in other areas based on IP. The detection criteria are pre-written scenarios, numbered rule_1, rule_2, and so on. The purpose of this task is to spot a user who should be in a specified location being detected by a scenario in another location. This is my log table:

country | title | Area       | detail              | userlist | ip
USA     | WA    | Washington | Washington Monument | user1    | 192.168.0.100
USA     | WA    | Washington | Washington Monument | user2    | 192.168.0.101
USA     | VA    | Virginia   |                     | user3    | 192.168.0.102
USA     | NJ    | New Jersey |                     | user4    | 192.168.0.103

This is the view I want:
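As a sketch of the detection idea (the index name is a placeholder; field names come from the table above), you could count distinct areas per user and flag anyone seen in more than one:

```
index=your_detection_index
| stats dc(Area) AS area_count, values(Area) AS areas, values(ip) AS ips BY userlist
| where area_count > 1
```

Any user appearing in the output has been detected in multiple areas and may warrant a closer look against the expected-location rules.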
Please, I will be glad to get an answer to this query:

| eval InT = if(((lastpickupdate + DaysOfARVRefil + 28) > IIT), "Interrupted", "Active")

The "lastpickupdate" and "IIT" columns are in date format, whereas "DaysOfARVRefil" is in days (int). Please, how do I successfully run this query? Thanks, Osita
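Date strings cannot be added to numbers directly; they must first be converted to epoch seconds with strptime, and the day count multiplied by 86400. A sketch, assuming a %Y-%m-%d date format (adjust the format strings to match your actual data):

```
| eval lastpickup_epoch = strptime(lastpickupdate, "%Y-%m-%d")
| eval iit_epoch = strptime(IIT, "%Y-%m-%d")
| eval InT = if(lastpickup_epoch + ((DaysOfARVRefil + 28) * 86400) > iit_epoch, "Interrupted", "Active")
```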
Is there an easy way to implement a recovery alert in the same query as the alert query? For example, I have a system that creates a log file every 10 minutes if everything is working. I built a query that runs every half hour and tells me if there is something new in the log location. That part is easy enough, but I would also like the same query to be able to send a recovery notification. Or is this not going to be possible because I want to trigger two different actions, since from what I can tell you can only configure one email or Slack action per alert? I did see that there is a Splunk add-on for VictorOps that has this functionality, but I wanted to check here first before I went down that route.
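One common workaround is to make the query always return exactly one row with a status field, then fire the alert on every result and include the status in the message. A sketch (index and source are placeholders):

```
index=app_logs source="/path/to/logs/*" earliest=-30m
| stats count AS new_events
| eval status = if(new_events > 0, "OK", "ALERT: no new log files in 30 minutes")
```

Note that a saved search still supports only one set of trigger actions, so a true alert/recovery pair usually ends up as two saved searches with opposite conditions (or an external tool such as VictorOps); this sketch just shows how a single query can expose both states.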
Hi, I am having no luck with a dashboard input restriction. I have a dashboard textbox input that queries a lookup. For instance, the input could be "hostname". I want the user to be able to enter the exact value, or a partial value with a wildcard "*". So if hostname = 12345ABCD, they could enter the exact value or 12345A* and return all matches. BUT I don't want them to just enter hostname = "*" and pull everything back. Any ideas how to sanitize the input so a user cannot just use "*"? Thank you
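One way to sanitize the value inside the search itself is to require a minimum number of literal (non-wildcard) characters. Here host_tok is a placeholder for your input token, hostnames.csv for your lookup, and 3 is an arbitrary threshold:

```
| inputlookup hostnames.csv
| search hostname="$host_tok$"
| where len(replace("$host_tok$", "\*", "")) >= 3
```

The where clause strips every "*" from the token and rejects the whole search when fewer than 3 real characters remain, so a bare "*" returns nothing.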
I need help appending this rest command to my query. The problem is that the rest command adds its value to the first row, and I need it added to the row that was entered last.

| rest /services/authentication/current-context/context
| fields + username
| search username!=*splunk*
| append [| inputlookup test.csv]
| append [| makeresults | eval user="test", description="test", manager="test", revisit=(now() + 7776000), user_added=now(), token_confirm="$confirm_addition$"]
| table username, user, description, user_added, revisit, category, department, description, manager
| outputlookup test.csv

Example: I go to my dashboard and enter the user "tom". When I do, the rest command should display my username, since I entered the user "tom". Now I need to write this to the lookup table so that my name is on the same row as the "tom" entry.
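A hedged rework of the idea (the $..._tok$ token names are placeholders for your dashboard's inputs): build the new row first, attach the current username to that same row with appendcols, and only then append the existing lookup contents.

```
| makeresults
| eval user="$user_tok$", description="$desc_tok$", manager="$mgr_tok$",
       revisit=now() + 7776000, user_added=now()
| appendcols
    [| rest /services/authentication/current-context/context
     | fields username
     | search username!=*splunk*]
| append [| inputlookup test.csv]
| table username, user, description, user_added, revisit, category, department, manager
| outputlookup test.csv
```

Because appendcols merges columns row-by-row rather than adding rows, the username lands on the newly created entry instead of the first row of the lookup.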
Hi, I am following this documentation from GCP [1], which mentions that YOUR_SPLUNK_HEC_URL must not include the HEC endpoint path, for example /services/collector. My question relates more specifically to this section [2], which says the format should be <protocol>://http-inputs.<host>.splunkcloud.com:<port>/<endpoint> and that you must add http-inputs- before the <host>. Which one would be the correct URL: for example, https://http-inputs.xxxx.splunkcloud.com:443 or https://http-inputs-xxxx.splunkcloud.com:443? Send data to HTTP Event Collector on Splunk Cloud Platform [1] https://cloud.google.com/architecture/deploying-production-ready-log-exports-to-splunk-using-dataflow#deploy_the_dataflow_pipeline [2] https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Configure_HTTP_Event_Collector_on_Splunk_Cloud_Platform
Hello, I am setting up this application (Microsoft 365 App for Splunk, https://splunkbase.splunk.com/app/3786/) with our Office tenant so that the dashboards work. I have followed the instructions in the documentation, but some things are missing from the Exchange and Defender dashboards. I installed the security add-on and set up the inputs, and granted the necessary permissions through an application in Azure, but no data comes through. Same thing for Exchange: I followed the account creation procedure and assigned the rights, and still nothing. Security add-on: https://splunkbase.splunk.com/app/6207/ Exchange add-on: https://splunkbase.splunk.com/app/3720/ Has anyone had the same problem? Or are the add-ons I am using no longer adequate for the Microsoft 365 application? Thanks in advance for your help.
We are using the Splunk Operator for Kubernetes for some standalone instances, and it works so far. But now we want to use DB Connect on a standalone deployment. The installation of DB Connect and the JDBC drivers works, but we get the error "Cannot communicate with task server, please check your settings." The errors are related to the missing JRE in the Docker image. How are we supposed to use DB Connect without a JRE? Does anyone have a solution?
We are trying to upgrade a search head from 8.0.1 to 8.2.6, but we are getting the error "Splunk setup wizard ended prematurely." We tried the solution mentioned in the community post below, but it didn't work either: "Splunk Enterprise Setup Wizard ended prematurely"... - Splunk Community. Please find below a snapshot of the same.
We want the alert type to be real-time, sending an alert only when the search query meets the condition, rather than running every minute even when there are no results (to avoid spam alerts). How do we get the "Real-time" alert type instead of only the scheduled option? On our end there is no such option; the alert type is automatically tagged as "scheduled".
Hi, I am trying to calculate a duration. I have extracted 2 fields, start_time and end_time. I believe both times should be in exactly the same format in order to successfully calculate the duration: start_time = 2022-06-03T02_11_50, end_time = 2022-06-03T03:48:06. I have been puzzling over this for some time now: how do I get start_time into the same format as end_time? Thanks for the help in advance! Edwin
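Since the two timestamps only differ in the separator used in the time part, one sketch is to parse each with its own strptime format string and take the difference of the resulting epoch values:

```
| eval start_epoch = strptime(start_time, "%Y-%m-%dT%H_%M_%S")
| eval end_epoch   = strptime(end_time,   "%Y-%m-%dT%H:%M:%S")
| eval duration    = tostring(end_epoch - start_epoch, "duration")
```

The tostring(..., "duration") call renders the difference as HH:MM:SS rather than raw seconds.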
Greetings, I recently added a new calculated field to a data model by stopping the accelerated data model and entering an eval calculated field at the bottom of the table. After accelerating the data model again, all the related dest field values, even though they are included in the CIM index, now appear as "unknown", and the field no longer even shows up in the original index that was first fed into the data model. I am baffled as to why this would interfere with the original index parsing, let alone show up as "unknown" in the data model as well. Thanks, Regards,
Is it at all possible to remove/uninstall UFs by pushing some script(s) from the deployment server? I do not have OS access on these endpoints and servers, and OS access is not an option, hence the need for some alternative way to achieve this (if possible). I can always disable the inputs on the UF, but the requirement is to remove the UF installation itself; if not the installation, then all configs such as inputs.conf/outputs.conf/deploymentclient.conf and other apps (essentially everything in $SPLUNK_HOME/etc/system/local). Splunk deployment server version 8.1.x, UF version >7.1, OS: Windows endpoints and servers, Linux servers.
Hi all, I have been working with the Luhn algorithm to validate credit card numbers. For that, I used the query from the link below. The query runs, but it does not validate card numbers reliably; sometimes it returns invalid cards as well. Please refer to the link below and help me find the valid cards. https://gosplunk.com/detect-credit-card-numbers-using-luhn-algorithm/ @ITWhisperer @aasabatini 
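For comparison, here is an independent sketch of the Luhn check in SPL. The card number is hardcoded via makeresults just to illustrate; in practice you would start from your extracted field instead. Note that mvexpand-per-digit can be slow on large result sets.

```
| makeresults
| eval cc="4539578763621486"
| eval idx=mvrange(0, len(cc))
| mvexpand idx
| eval d = tonumber(substr(cc, idx + 1, 1))
| eval pos_from_right = len(cc) - idx
| eval d = if(pos_from_right % 2 == 0, d * 2, d)
| eval d = if(d > 9, d - 9, d)
| stats sum(d) AS luhn_sum BY cc
| eval luhn_valid = if(luhn_sum % 10 == 0, "valid", "invalid")
```

This doubles every second digit counting from the right, subtracts 9 from any doubled digit above 9, and accepts the number when the digit sum is divisible by 10, which is the standard Luhn rule.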
Hi, I am getting the error "web interface does not seem to be available!" when I try to start Splunk on a Hyper-V Ubuntu VM. Can anyone help?
I recently inherited this Splunk system, and I am gradually working out how it is set up. When running a search yesterday, I noticed something. We have 10 indexers, 5 at site1 and 5 at site2. We have 4 search heads, all assigned to site0. When inspecting my search job, I saw that my results were pulled only from a single site's peers, not from both. Here are some pictures to explain: my replication factor tells me I should have 2 copies at each site, and my search factor tells me I should have 2 searchable copies at each site. This would imply that when I run a search across my 10 indexers, it pulls data from both sites. So then I run a search on a specific index, and I see this: I expected to see data pulled equally from both sites, but site k is left completely alone. Even if a single indexer was the ingest point for all the data, it would still be scattered across the 10 indexers as the cluster worked to meet the replication/search factors. There is no reason everything should be stuck on one site. Am I way off base here, or is something configured wrong?
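To see how events for a given index are actually distributed across peers, a quick diagnostic (your_index is a placeholder) is to break the event count out by indexer:

```
| tstats count WHERE index=your_index BY splunk_server
```

Keep in mind that each bucket is searched on only one peer per search even when copies exist at both sites, so a search never pulls the same data twice; comparing this breakdown against your replication settings helps tell normal bucket-primary behavior apart from a genuine distribution problem.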
Initially MLTK was working fine, but now I have started getting this error: "Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found." Can anyone please suggest a solution?