All Topics

In Splunk DB Connect, some specific database inputs are not indexing properly into Splunk, meaning their data is not being forwarded from the databases to the Splunk search head, whereas those databases themselves are executing fine. What could the issue be? Is it on the server or on Splunk?
Trying to find the Time Taken over the last 7 days for a batch job using a Splunk search: first computing the average of the time taken, then finding the jobs whose time taken is greater than that average. Splunk search:

| eval sTime=strptime(StartTime, "%B %d, %Y %I:%M:%S %p")
| eval eTime=strptime(EndTime, "%B %d, %Y %I:%M:%S %p")
| eval TimeTaken = ceil((eTime-sTime)/60)
| stats avg(TimeTaken) as avgtime by JobbName
| where TimeTaken > avgtime

Once I use the stats average command, the TimeTaken values no longer come through. I tried using streamstats, but the average-time calculation is not right.
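A minimal sketch of the usual fix, assuming the field names from the post (StartTime, EndTime, JobbName): stats collapses the rows, so TimeTaken no longer exists after it, while eventstats attaches the per-job average to every row and keeps TimeTaken available for the comparison:

...
| eval sTime=strptime(StartTime, "%B %d, %Y %I:%M:%S %p")
| eval eTime=strptime(EndTime, "%B %d, %Y %I:%M:%S %p")
| eval TimeTaken=ceil((eTime-sTime)/60)
| eventstats avg(TimeTaken) as avgtime by JobbName
| where TimeTaken > avgtime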
We have searches where 4740 account lockouts are not showing as action=lockout but instead as action=modified. This is important to us because we are trying to configure ES, and that is one dashboard where we aren't getting any results. Where do we go to fix this?

Also, whenever you get a source or field that shows as "unknown", what's the best way to go about fixing it?
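As a hedged interim check (the index and source names below are assumptions based on a typical Windows TA setup), searching on the raw event code sidesteps whatever the action field is being mapped to; the longer-term fix usually lives in the eventtype/tag mappings of the Splunk Add-on for Microsoft Windows:

index=wineventlog source="WinEventLog:Security" EventCode=4740
| stats count by user, ComputerName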
Hi Splunkers, I have an issue with a search that uses a lookup. I know there are lots of posts on this topic here on the community, but even after reading them I'm still stuck.

My search simply has to verify, from firewall logs, that the destination IP matches an address contained in a lookup file and that the traffic is accepted/permitted. The search is:

index=* sourcetype=cp_log direction=outbound action="Accept"
| lookup tor_node tor_node_address as dst_ip output exclude
| where isnull(exclude)
| stats count by src_ip, dst_ip

Where: the table name is tor_node; this table has 2 columns: tor_node_address, which contains IP addresses, and exclude, added so that some IPs can be temporarily excluded from matching if needed.

So, the query logic is: check whether, in the events, the dst_ip field values match the values of the lookup field named tor_node_address and, with the use of isnull, that those IPs are not marked for exclusion. How do I decide whether a value in the lookup should be excluded from matching? If the cell in the exclude column is empty, the row must be included in the check; if it is populated, it must not. So, if the exclude column is entirely empty, all tor_node_address values must be matched against dst_ip. That means that, if no dst_ip matches tor_node_address, the search result must be empty.

The table was created with https://splunkbase.splunk.com/app/1724. After saving the file I verified with the inputlookup command that the lookup is well populated, changed the lookup file permissions correctly, created the related lookup definition and set its permissions correctly, and set no particular advanced settings except disabling case-sensitive matching (screenshots omitted).

So, what's the problem? The search does not perform the check. Even if no dst_ip matches a tor_node_address, the search result is not empty. If I run it, I see the same results as with:

index=* sourcetype=cp_log direction=outbound action="Accept"
| stats count by src_ip, dst_ip

It seems like the lookup command is totally ignored.
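A hedged sketch of one likely explanation: events whose dst_ip is not in the lookup at all also end up with a null exclude, so where isnull(exclude) keeps every event and the search looks as if the lookup were ignored. Outputting the matched key as well (the field name matched is invented here) lets the search keep only rows that actually hit the lookup and are not flagged for exclusion:

index=* sourcetype=cp_log direction=outbound action="Accept"
| lookup tor_node tor_node_address AS dst_ip OUTPUT tor_node_address AS matched, exclude
| where isnotnull(matched) AND isnull(exclude)
| stats count by src_ip, dst_ip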
Hello, we ingest data from a database using rising columns; however, a small number of events are missing from the index, although I can see them in DBConnect. The field that we use as a rising column is set as an identity column, so I'm expecting each new value to be generated based on the current seed and increment. The query timeout is set to 30 seconds, max rows to retrieve is 0 (maximum), fetch size is 300 and frequency is 60 seconds. From what I've observed, this should be sufficient for our requirements. Any assistance would be greatly appreciated. Many thanks.
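A minimal sketch for confirming the gaps on the Splunk side (the index name and the field id are placeholders; substitute your actual index and rising column): with an identity column that increments by 1, any consecutive difference greater than 1 marks a skipped value:

index=my_db_index
| eval id=tonumber(id)
| sort 0 id
| delta id as gap
| where gap > 1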
I am not able to properly configure this feature. I have looked online as well and cannot find documentation or videos on how to properly configure it. Please provide me with some direction and/or help. Thank you.
Where can I find a list of all available options per visualization type? I've looked through the documentation but haven't found an all-encompassing list of the "options" available for each visualization type. I found this for Dashboard Studio, but I need the Classic version: Object options and defaults reference - Splunk Documentation
We have a requirement to send audit logs from Splunk to another tool for security purposes. We have been asked to install a UF on the host where the audit logs are stored, to forward the logs to their end. I would like to know if it is possible to install a UF on one of our indexers (where the audit logs are stored under /opt/splunk/var/log/splunk). Please let me know if anyone has experience working on a similar requirement/task. Thanks in advance!
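For reference, a hedged sketch of a common alternative: rather than installing a second Splunk instance (a UF) on an indexer, the indexer's existing instance can route the monitored files to the external tool itself. The output-group name, host, and port below are placeholders:

# inputs.conf on the indexer: route the audit log to the extra output group
[monitor:///opt/splunk/var/log/splunk/audit.log]
_TCP_ROUTING = security_tool

# outputs.conf: raw TCP to the external tool
[tcpout:security_tool]
server = security-tool.example.com:514
sendCookedData = false

Depending on requirements, the indexAndForward setting in outputs.conf keeps a local indexed copy while forwarding.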
Hi, I am getting the below-mentioned error while executing a Splunk query (error screenshot not included here).

QUERY:

index=lsc_exacta_index source="L:\\ProgramData\\Bastian Software\\Logs\\ExactaImport\\ExactaImport.txt"
| rex field=_raw ".* Order \[(?<imWho>[\d-]+) - .*\] successfully assigned.*"
| rex field=_raw "\.* Bastian\.Exacta\.Interface\.Processes\.ExactaProductTranslatorBase - Validation of Message Successfull, Prepare to Insert\n.*ROWS ONLY;\@p0 = \'(?<imWho>[\d-]+)\'.*\[.*"
| rex field=_raw ".*\/line id \[(?<imWho>[\d-]+) -.* was cancelled successfully.\n.*"
| rex field=_raw ".*\[Import Pick Orders\].*ROWS ONLY;@p0 = \'(?<imWho>[\d-]+)\' \[.*(\n|.)*- Messages processed successfully.*"
| eval exactaDocTime = strftime(_time, "%Y-%m-%d %H:%M:%S")
| search imWho !=""
| eval exactaDocStatus = if(exactaDocTime != "","Created",NA)
| table imWho exactaDocTime exactaDocStatus

Please help me optimize the regexes used in the above query to avoid the error shown in the screenshot. Thanks, Abhineet Kumar
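A hedged sketch of the usual optimization: the (\n|.)* in the last rex is a classic catastrophic-backtracking construct, since every character can be consumed by either branch. Turning on single-line mode with (?s), so that . also matches newlines, or using [\s\S]* in its place removes the alternation entirely, and dropping the unanchored leading/trailing .* plus making the inner quantifiers lazy cuts the backtracking work further. For example, the last rex could become:

| rex field=_raw "(?s)\[Import Pick Orders\].*?ROWS ONLY;@p0 = '(?<imWho>[\d-]+)' \[.*?- Messages processed successfully"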
If I need to display data in a stacked bar chart, where the values come from multiple numeric fields, how can I build the chart command so that the x-axis represents the year and the y-axis shows the numeric data?
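A minimal sketch under assumed field names (year, plus numeric fields metricA and metricB); each aggregated column becomes one segment of the bar once stacking is enabled in the chart's format options:

... | chart sum(metricA) as MetricA, sum(metricB) as MetricB over year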
DeviceID   Completed   Crashed
1          17          1
2          13          4
3          12          3

How do I create a donut chart like the snippet below in Splunk (screenshot not included here)? Here, instead of 15884, the total of Completed and Crashed should appear in the center, and similarly the Completed and Crashed counts should form the segments.
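A hedged sketch, assuming the table above is already produced by a search: summing each column and transposing turns the two totals into label/value rows, which is the shape the pie/donut visualization expects (the center total is then rendered by the donut visualization itself):

... | stats sum(Completed) as Completed, sum(Crashed) as Crashed
| transpose
| rename column as status, "row 1" as count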
Let's suppose I have a set of logs from Windows authentication, and I want to check whether the user field does not match a specific pattern. Can we use regex to do that in Splunk?
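A minimal sketch (the index, sourcetype, and pattern are placeholders): the regex command filters events on a field and supports negation directly:

index=wineventlog sourcetype="WinEventLog:Security"
| regex user!="^svc_[A-Za-z0-9]+$"

An equivalent form is | where NOT match(user, "^svc_[A-Za-z0-9]+$").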
Hi Splunkers, we are setting up a Splunk Cloud environment for a customer and we are working on trigger actions for alerts. For now we don't need any particular custom actions: after an alert triggers, sending an email to our SOC is enough. We know that fields in the events/alerts are easily usable thanks to the $<field_name>$ notation, so how to customize the email action is not a problem. What we don't know is: if we have a custom template we want to use for our emails, with some logos and HTML code, is it possible to simply put the HTML code in the message box (screenshot omitted), or do we have to follow another approach? And which one?
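For what it's worth, a hedged sketch of the per-alert settings in savedsearches.conf that the UI message box writes; whether a full HTML template with logos renders depends on the content type and the receiving mail client, so treat this as a starting point rather than a confirmed recipe (the address and image URL are placeholders):

# savedsearches.conf, email action for one alert
action.email = 1
action.email.to = soc@example.com
# send the body as HTML instead of plain text
action.email.content_type = html
action.email.message.alert = <html><body><img src="https://example.com/logo.png"/><p>Alert $name$ fired with $job.resultCount$ results.</p></body></html>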
Hi, We recently updated the technology add-on for Armis in Splunk IDM, but after the update, it's no longer generating any alerts. Could you please provide guidance on troubleshooting this issue?
Hello! I would like to generate a drop in the data during a specific hour, let's say 2pm. Here is my eventgen.conf:

## Example: replace {"pointName": "nb", "value": "25.33"}
token.1.token = "pointName":\s"nb".{1,60}"value":\s"(.{1,10})"}
token.1.replacementType = random
token.1.replacement = integer[5:30]

And at 2pm I would like the number to change like this:

token.1.token = "pointName":\s"nb".{1,60}"value":\s"(.{1,10})"}
token.1.replacementType = random
token.1.replacement = integer[0:4]

Does anybody know how to create a fluctuation for a specific time of day in an eventgen.conf?
Hey guys, I'm stuck on a macro problem where I cannot run a saved search with a macro inside it.

1. I have a saved search like this:

.... | eval param1="777" | `myMacro("$param1$")`

2. myMacro is configured like this:

eval mySqlQuery="select * from myTable where someField like ".$param1$." and otherField=='abc' "

3. It doesn't work. The main error I get is this:

Error in 'savedsearch' command: Encountered the following error while building a search for saved search 'mySavedSearch': Error while replacing variable name='param1'. Could not find variable in the argument map.

The closest info I've found is this (which works perfectly in the example shown, but not in my case, and I don't understand why): https://community.splunk.com/t5/Knowledge-Management/How-do-I-make-macro-arguments-get-parsed-as-fields-instead-of/m-p/416938

I have tried many options in the macro and saved-search configuration (with $-s and "-s), unsuccessfully so far.

P.S. Maybe this is important: I am trying to run a saved search, and the people in the link above just run an ad-hoc search (which I tried as well, and it's OK). Anyway, I don't know how to fix my saved-search scenario...
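A hedged sketch of how a one-argument macro is normally declared in macros.conf: the stanza name must carry the argument count, args names the parameter, and the $param1$ tokens inside the definition refer to the macro's own argument, not to a field in the events:

# macros.conf, hypothetical one-argument macro
[myMacro(1)]
args = param1
definition = eval mySqlQuery="select * from myTable where someField like '".$param1$."' and otherField='abc'"

One caveat worth checking: macro expansion and $...$ substitution in a saved search happen when the search string is built, before any events (and hence any eval'd field values) exist, which is one plausible reading of the "Could not find variable in the argument map" error; passing a literal value to the macro avoids that substitution step.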
Dear Team, we have configured the Splunk OTel Collector to collect logs from OpenShift namespaces and Pods and send them to Splunk Enterprise using HEC (HTTP Event Collector). However, we are experiencing unusual behavior with the values.yaml configuration when it comes to collecting audit logs:

logsCollection:
  extraFileLogs:
    filelog/audit-log-kube-apiserver:
      include: [/var/log/kube-apiserver/audit.log]
      start_at: beginning
      include_file_path: true
      include_file_name: false
      resource:
        com.splunk.source: /var/log/kube-apiserver/audit.log
        host.name: 'EXPR(env("K8S_NODE_NAME"))'
        com.splunk.sourcetype: kube:apiserver-audit

I'm having an issue with the OTel Collector Pod: whenever I restart it, it starts ingesting data from the beginning instead of resuming where it left off. I've tried modifying the "start_at" option by setting it to "current", but that didn't work. I also attempted removing the key-value pair, but it didn't solve the problem. I would greatly appreciate any assistance in resolving this matter.
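A hedged sketch of one likely cause, assuming the splunk-otel-collector Helm chart: start_at is only honored when the receiver has no saved offset, so re-ingestion from the beginning usually means the filelog receiver has no persistent checkpoint. Pointing the receiver at the chart's file_storage extension (and making sure the checkpoint directory is a persisted hostPath) is the usual remedy; the storage key below is the assumption to verify against your chart version:

logsCollection:
  extraFileLogs:
    filelog/audit-log-kube-apiserver:
      include: [/var/log/kube-apiserver/audit.log]
      start_at: beginning        # only applies on first read, when no checkpoint exists
      storage: file_storage      # assumption: persist read offsets via the chart's file_storage extension
      include_file_path: true
      include_file_name: false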
Could you please provide detailed migration steps for moving a Splunk on-premises environment to Splunk Cloud? Please also advise how to add team-effort estimates to the plan. We are using the SCMA application for assessment.
We have Java agents installed in our environment but discovered that data stopped populating because of the HTTP/2 framework. Is this supported by AppDynamics?
Hi all, we have an index (say log_index) where the log retention is only 7 days. We cannot have this increased due to disk space restrictions. Now we have a requirement to retain small parts of the logs in log_index for future reference, such as the result of a search like "index=log_index level=ERROR" over a 10-minute window. Is it possible to copy a search result to another index that has a longer log retention? I know we could export the events, but it would be better to have them in a separate index so everyone can use the same Splunk log-analytics tools on them. Also, I don't want to re-index the logs, since that would again use up the available license.
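For reference, a hedged sketch of the usual pattern: the collect command writes search results into another index (which must already exist), and with its default stash source type the collected events do not normally count against the ingest license. The destination index name below is a placeholder for one created with the longer retention:

index=log_index level=ERROR earliest=-10m
| collect index=log_archive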