
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello, I am setting up the Microsoft 365 App for Splunk (https://splunkbase.splunk.com/app/3786/) with our Office tenant so that the dashboards work. I have followed the instructions in the documentation, but some things are missing on the Exchange and Defender dashboards. I installed the Security add-on, set up the inputs, and granted the necessary permissions through an application in Azure, but no data comes through. Same thing for Exchange: I followed the account creation procedure and assigned the rights, but still nothing. Security add-on: https://splunkbase.splunk.com/app/6207/  Exchange add-on: https://splunkbase.splunk.com/app/3720/  Has anyone had the same problem? Or are the add-ons I am using no longer adequate for the Microsoft 365 app? Thanks in advance for your help.
We are using the Splunk Operator for Kubernetes for some standalone instances, and so far it works. Now we want to use DB Connect on a standalone deployment. The installation of DB Connect and the JDBC drivers works, but we get the error "Cannot communicate with task server, please check your settings." The error is caused by the missing JRE on the Docker image. How are we supposed to use DB Connect without a JRE? Does anyone have a solution?
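One possible workaround (a sketch only, not an officially supported layout) is to build a custom image on top of splunk/splunk that adds a JRE, and point the Operator at it. The tag, package manager, package name, and final user below are all assumptions that depend on the base image you actually run:

    # Hypothetical Dockerfile: add a headless JRE on top of the official image.
    FROM splunk/splunk:9.0.0
    USER root
    # Recent splunk/splunk images are Red Hat UBI-based; swap microdnf for
    # yum/dnf and adjust the package name to match your base image.
    RUN microdnf install -y java-11-openjdk-headless
    # Restore the image's expected unprivileged user (check the base image;
    # some tags use a different user or let the entrypoint drop privileges).
    USER splunk

The custom image can then be referenced from the Standalone custom resource via its spec image field so the Operator deploys it instead of the stock image.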
We are trying to upgrade a search head from 8.0.1 to 8.2.6, but we are getting the error "Splunk setup wizard ended prematurely". We tried the solution mentioned in the community post "Splunk Enterprise Setup Wizard ended prematurely"... - Splunk Community, but it didn't work either. Please find a snapshot of the error below (screenshot not included).
We want the alert to be real-time and to send a notification only when the search meets the trigger condition, rather than running every minute even when there are no results (to avoid spam alerts). How do we get the "Real-time" alert type to appear instead of only the scheduled option? On our end there is no such option; the alert type is automatically tagged as "scheduled".
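For reference, a minimal sketch of what a real-time alert looks like in savedsearches.conf (the stanza name, search, and e-mail address are made up). Note that the Real-time option in Splunk Web is generally only offered to roles that hold the rtsearch capability, which may be why only Scheduled appears:

    [Hypothetical real-time alert]
    enableSched = 1
    search = index=main sourcetype=app_logs log_level=ERROR
    # Real-time window instead of a cron schedule
    dispatch.earliest_time = rt-5m
    dispatch.latest_time = rt
    # Fire only when the condition is met (more than 0 events)
    counttype = number of events
    relation = greater than
    quantity = 0
    alert.track = 1
    action.email = 1
    action.email.to = oncall@example.com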
Hi, I am trying to calculate a duration. I have extracted 2 fields, start_time and end_time, and I believe both times should be in exactly the same format in order to successfully calculate the duration: start_time = 2022-06-03T02_11_50, end_time = 2022-06-03T03:48:06. I have been puzzling over this for some time now: how do I get start_time into the same format as end_time? Thanks for your help in advance! Edwin
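The two strings don't actually need to share a format: strptime can parse each field with its own format string and return epoch seconds, which subtract cleanly. A minimal sketch (only the underscore/colon difference is assumed):

    ...
    | eval start_epoch = strptime(start_time, "%Y-%m-%dT%H_%M_%S")
    | eval end_epoch = strptime(end_time, "%Y-%m-%dT%H:%M:%S")
    | eval duration_sec = end_epoch - start_epoch
    | eval duration = tostring(duration_sec, "duration")

tostring(..., "duration") renders the difference as HH:MM:SS; for the sample values above it yields 01:36:16.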
Greetings, I recently added a new calculated field to a data model by stopping the accelerated data model and entering an eval calculated field at the bottom of the table (screenshot of the eval expression not included). After accelerating the data model again, all the related dest field values now appear as "unknown", even though they are included in the CIM index, and the field no longer even shows up on the original index that was first fed into the data model. I am baffled as to why this would interfere with the original index's parsing, let alone appear as "unknown" in the data model as well. Thanks, Regards,
Is it at all possible to remove/uninstall UFs by pushing a script from the deployment server? I do not have OS access on these endpoints and servers, and getting it is not an option, so I need an alternative way to achieve this (if possible). I can always disable the inputs on the UF, but the requirement is to remove the UF installation itself; failing that, at least all configs such as inputs.conf, outputs.conf, deploymentclient.conf and other apps (essentially everything in $SPLUNK_HOME/etc/system/local). Splunk deployment server version: 8.1.x. UF version: >7.1. OS: Windows endpoints and servers, Linux servers.
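One commonly used workaround, sketched here with hypothetical names and only for the Linux case, is to deploy an app whose scripted input runs once and removes the forwarder; Windows hosts would need an equivalent .bat driving msiexec /x. Treat this as a sketch, not a supported uninstall path:

    # inputs.conf inside a hypothetical "remove_uf" deployment app
    [script://./bin/uninstall_uf.sh]
    disabled = 0
    # interval = -1 runs the script once per splunkd start
    interval = -1
    sourcetype = uf_uninstall

    # bin/uninstall_uf.sh (assumes the default /opt/splunkforwarder path)
    #!/bin/sh
    # Detach, give splunkd time to finish starting, then stop and delete it.
    (sleep 60; /opt/splunkforwarder/bin/splunk stop; rm -rf /opt/splunkforwarder) &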
Hi All, I have been working with the Luhn algorithm to validate credit card numbers, using the query from the link below. The query runs, but it does not validate card numbers reliably: sometimes it also returns invalid cards as valid. Please refer to the link below and help me find only the valid cards. https://gosplunk.com/detect-credit-card-numbers-using-luhn-algorithm/ @ITWhisperer @aasabatini
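For comparison, here is a self-contained Luhn check written directly in SPL. It needs Splunk 9.0+ for foreach mode=multivalue, and the cc_number field name and sample value (the standard Luhn test number) are assumptions:

    | makeresults
    | eval cc_number = "79927398713"
    | eval digits = replace(cc_number, "[^0-9]", "")
    | rex field=digits max_match=0 "(?<d>\d)"
    | eval n = mvcount(d), pos = 0, total = 0
    | foreach mode=multivalue d
        [ eval pos = pos + 1,
               dd = tonumber(<<ITEM>>),
               dd = if((n - pos) % 2 == 1, dd * 2, dd),
               dd = if(dd > 9, dd - 9, dd),
               total = total + dd ]
    | eval luhn_valid = if(total % 10 == 0, "valid", "invalid")

Doubling every second digit counted from the right, and subtracting 9 whenever a doubled digit exceeds 9, is the Luhn rule itself; total % 10 == 0 marks a valid number.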
Hi, I am getting the error "web interface does not seem to be available!" when I try to start Splunk on a Hyper-V Ubuntu VM. Can anyone help?
I recently inherited this Splunk system, and I am gradually working out how it is set up. When running a search yesterday, I noticed something. We have 10 indexers, 5 at site1 and 5 at site2, and 4 search heads, all assigned to site0. When inspecting my search job, I saw that my results were pulled only from a single site's peers, not from both. Here are some pictures to explain (screenshots not included): my replication factor tells me I should have 2 copies at each site, and my search factor tells me I should have 2 searchable copies at each site. This would imply that when I run a search across my 10 indexers, it pulls data from both sites. So then I run a search on a specific index, and I see this: I expected data to be pulled equally from both sites, but one site is left completely alone. Even if a single indexer were the ingest point for all the data, it would still be scattered across the 10 indexers as the cluster worked to meet the replication/search factors. There is no reason everything should be stuck on one site. Am I way off base here, or is something configured wrong?
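As a quick diagnostic (a sketch, with a placeholder index name), dbinspect shows which peers actually hold buckets for an index, which distinguishes "the data only exists on one site" from "the search only returned results from one site":

    | dbinspect index=your_index
    | stats dc(bucketId) as buckets by splunk_server

If both sites' indexers report buckets here but search results still come from one site, the issue is on the search side rather than the replication side.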
MLTK was working fine initially, but now I am getting the error "Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found." Can anyone please suggest a solution?
Hi All, I have around 30 hosts forwarding logs to Splunk, and the same paths exist on all the servers:

/data/abc/vault.logs
/data/abc/vault_audit.logs
/data/xyz/proxy.logs

So I created an app whose inputs include all of the above stanzas and pushed the app to all hosts; by default, all those hosts now send the above logs to Splunk. But I want 5 of the servers to send only /data/xyz/proxy.logs and none of the other logs. How do I achieve this?
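A common way to handle this on the deployment server (app and host names below are hypothetical) is to split the inputs into two apps and scope them with server classes in serverclass.conf: one class that sends the proxy input everywhere, and one that sends the vault inputs everywhere except the 5 proxy-only servers:

    # serverclass.conf on the deployment server
    [serverClass:proxy_inputs]
    whitelist.0 = *

    [serverClass:proxy_inputs:app:ta_proxy_inputs]
    restartSplunkd = true

    [serverClass:vault_inputs]
    whitelist.0 = *
    # The 5 servers that should only send proxy.logs
    blacklist.0 = proxyhost1
    blacklist.1 = proxyhost2
    blacklist.2 = proxyhost3
    blacklist.3 = proxyhost4
    blacklist.4 = proxyhost5

    [serverClass:vault_inputs:app:ta_vault_inputs]
    restartSplunkd = true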
Good morning, when will RHEL 9 be supported by Splunk Free/Enterprise?
I have a field called query that looks like this: (index="abc" OR index="def") (host="ghi" OR host="jkl") (sourcetype="mno" sourcetype="pqr") (source="stu" source="vwx") "*yz*" I am trying to leverage it in a map search: <search that gets me the above field> | map search="search $query$" It doesn't seem to work. How do I go about doing this, if another way is possible? Just to clarify, map doesn't have to be the only solution; I simply need a way to use the query field to perform a search per row (in addition to stats count) to find the number of results returned for each search.
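One hedged sketch: the embedded double quotes in query are a frequent reason map substitution fails, since $query$ is expanded inside map's own quoted search string. Stripping them first (simple values don't need quotes in SPL) and echoing the query back into each result may work; the maxsearches value and the replace step are assumptions:

    <search that gets the query field>
    | eval query = replace(query, "\"", "")
    | map maxsearches=20 search="search $query$ | stats count | eval query=\"$query$\""

The trailing eval tags each per-row count with the query that produced it, so the final table shows one row per search with its result count.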
I have an existing add-on app. I need to add a new field to the configuration section under Add Account (please check the attached screenshot; not included here). The field should be a radio select. Based on the selected option, I need to make different API calls to pull the data. I tried adding the new field to globalConfig.json, deleted the existing app, created a tar.gz of the updated app, installed it, and restarted Splunk, but I don't see the new field in the UI. Could you please suggest what's wrong here? Also, if there is a better way to modify the UI and Python code, kindly suggest it.
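If the add-on's UI is generated by the UCC framework (Add-on Builder / ucc-gen), repackaging the tarball after hand-editing globalConfig.json is usually not enough: the front-end bundle has to be regenerated (for example with ucc-gen; the exact invocation varies by version) before the new entity is rendered. For reference, a radio-style entity in globalConfig.json looks roughly like this, with hypothetical field names and values:

    {
        "type": "radio",
        "label": "Data collection mode",
        "field": "collection_mode",
        "defaultValue": "incremental",
        "required": true,
        "options": {
            "items": [
                { "value": "incremental", "label": "Incremental" },
                { "value": "full", "label": "Full" }
            ]
        }
    }

The modular input's Python code can then read collection_mode from the saved account configuration and branch to the appropriate API call.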
Hello, I have a search (timechart) with a dynamic span (minspan=1h). Is there a way (a token?) to get the span that was actually used, so that I can use it in a drilldown? Thanks
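As far as I know there is no built-in token that exposes the span timechart chose, but it can be recovered from the results themselves, since consecutive _time values differ by exactly one span. A sketch (the final eventstats just fills the first row, which has no predecessor):

    ... | timechart minspan=1h count
    | streamstats current=f window=1 last(_time) as prev_time
    | eval span_sec = _time - prev_time
    | eventstats min(span_sec) as span_sec

In a Simple XML drilldown the value is then available per row as $row.span_sec$.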
Hi Splunkers, I was wondering if this is possible with the tstats command: can it get a dynamic value from a saved search result or a lookup? The saved search/lookup is updated every hour. | tstats max(_time) as last_updated WHERE index=* BY index, host To avoid the wildcard, I was thinking it would be more efficient if that were possible. I appreciate any response. Thanks!
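tstats accepts a subsearch in its WHERE clause, so the hourly lookup (or a saved search) can supply the index list instead of a wildcard; the lookup name and field below are assumptions:

    | tstats max(_time) as last_updated
        where [| inputlookup monitored_indexes.csv | fields index | format]
        by index, host

The trailing | format renders the lookup rows as ( ( index="a" ) OR ( index="b" ) ), which tstats can push down; for a saved search, replace the inputlookup with | savedsearch <name>.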
Hi All, I'm trying to extract the username from the _raw field using regex; how do I extract it? The username comes after some parameters, which look like (\"requestParameters\": {\"userName\": <username>).
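A minimal sketch that sidesteps escaping the \" sequences entirely: match the userName key and skip whatever punctuation follows it. It assumes usernames consist only of word characters, dots, @ and hyphens:

    | rex field=_raw "userName\W+(?<userName>[\w.@-]+)"

The \W+ absorbs the literal \": \" characters between the key and the value, so the backslash-escaped quotes never need to be matched explicitly.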
Hello, I'm trying to use ldapfilter to add some info to events I collect from MS Exchange, but as soon as my ldapfilter query is dynamic (makes use of $field$) it does not return anything. My initial search looks like this:

    index=Exchange Mailboxes=*
    | rex "'?S:Mailboxes=(?<SMailboxes>[^']+)'?;'?S:StoreObjectIds"
    | makemv SMailboxes delim=";"
    | mvexpand SMailboxes
    | top SMailboxes limit=50
    | rex field=SMailboxes "(?<m1>..)(?<m2>..)(?<m3>..)(?<m4>..)-(?<m5>..)(?<m6>..)-(?<m7>..)(?<m8>..)-(?<m9>..)(?<m10>..)-(?<m11>..)(?<m12>..)(?<m13>..)(?<m14>..)(?<m15>..)(?<m16>..)"
    | eval conv="\\\\" . m4 . "\\\\" . m3 . "\\\\" . m2 . "\\\\" . m1 . "\\\\" . m6 . "\\\\" . m5 . "\\\\" . m8 . "\\\\" . m7 . "\\\\" . m9 . "\\\\" . m10 . "\\\\" . m11 . "\\\\" . m12 . "\\\\" . m13 . "\\\\" . m14 . "\\\\" . m15 . "\\\\" . m16
    | table SMailboxes,conv

And the result looks like this:

    SMailboxes = 7409c768-ed1b-45dd-8d5d-d36e65af77c1
    conv = \\68\\c7\\09\\74\\1b\\ed\\dd\\45\\8d\\5d\\d3\\6e\\65\\af\\77\\c1

All good. Things go wrong when I append ldapfilter to the same search:

    ... (same search as above)
    | ldapfilter debug=true domain="default" basedn="..." search="(msExchMailboxGuid=$conv$)" attrs="name"

-> the result is empty. In the ldapfilter, if I replace "$conv$" with "\\68\\c7\\09\\74\\1b\\ed\\dd\\45\\8d\\5d\\d3\\6e\\65\\af\\77\\c1", then the query works and the attribute "name" is properly returned and added to the table. How can I make this work with $conv$, so that I get proper results (and not the same static "name" for every event)? I've tried so many combinations: from 1 to 4 backslashes, with or without quotes/single quotes, making $conv$ the whole "search" value, etc. Nothing works.