All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Does anybody know where failures of sendemail are logged? I am wondering about cases where the e-mail address no longer exists: what type of error is generated, and where? _internal and _audit don't seem to have this data.
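A hedged starting point: the sendemail command writes to python.log, which is indexed into _internal, so errors raised at send time may show up there. Note that if the SMTP server accepts the message and only bounces it later, no error may ever reach Splunk at all.

index=_internal source=*python.log* sendemail (ERROR OR WARNING)
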
I have an event id 4674 that I would like to block from being indexed. I have the following in my inputs.conf, in local, not default.

[WinEventLog://Security]
disabled = true
blacklist1 = EventCode = "4764" Message="*"
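Assuming the intent is to keep the Security input enabled and drop only event 4674, a minimal sketch. Note three likely issues in the stanza above: disabled = true turns the whole input off, the blacklist targets 4764 rather than 4674, and blacklist values are regular expressions, so ".*" rather than "*":

[WinEventLog://Security]
disabled = 0
blacklist1 = EventCode="4674" Message=".*"
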
My subject may not be worded correctly, but I need some help. I have the raw data below, and I would like to group the entries together, each onto its own line, for reporting.

Redundancy group: 0 , Failover count: 0
node0 200 primary no no None
node1 2 secondary no no None
Redundancy group: 1 , Failover count: 0
node0 200 primary no no None
node1 20 secondary no no None
Redundancy group: 2 , Failover count: 0
node0 200 primary no no None
node1 2 secondary no no None

How can I get the following output? I would like to group them based on Redundancy Group # and Node #:

Redundancy group: 0 , Failover count: 0,node0 200 primary no no None
Redundancy group: 0 , Failover count: 0,node1 2 secondary no no None
Redundancy group: 1 , Failover count: 0,node0 200 primary no no None
Redundancy group: 1 , Failover count: 0,node1 2 secondary no no None
Redundancy group: 2 , Failover count: 0,node0 200 primary no no None
Redundancy group: 2 , Failover count: 0,node1 2 secondary no no None
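One hedged approach, assuming the three blocks arrive as a single multiline event (the index and sourcetype below are placeholders): extract each redundancy-group block, then pair its header with each of its node lines.

index=network sourcetype=chassis_cluster
| rex max_match=0 "(?s)(?<block>Redundancy group: \d+ , Failover count: \d+.*?)(?=Redundancy group:|\Z)"
| mvexpand block
| rex field=block "(?<header>Redundancy group: \d+ , Failover count: \d+)"
| rex field=block max_match=0 "(?m)^(?<node>node\d+\s+.*?)$"
| mvexpand node
| eval row=header . "," . node
| table row
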
Hi All, is there any way to populate a fixed date range in a dropdown in Dashboard Studio? I have done it in an XML dashboard, but I am not finding a way to do it in Studio (JSON). Can anyone suggest?

XML dashboard example (looking for the equivalent in JSON):

<input type="dropdown" token="simple">
  <label>Simple Time Picker</label>
  <choice value="last_24h">Last 24 Hours</choice>
  <choice value="last_7d">Last 7 days</choice>
  <choice value="last_30d">Last 30 days</choice>
  <default>last_24h</default>
  <change>
    <condition value="last_24h">
      <set token="simple.label">$label$</set>
      <set token="simple.earliest">-24h</set>
      <set token="simple.latest">now</set>
    </condition>
    <condition value="last_7d">
      <set token="simple.label">$label$</set>
      <set token="simple.earliest">-7d</set>
      <set token="simple.latest">now</set>
    </condition>
    <condition value="last_30d">
      <set token="simple.label">$label$</set>
      <set token="simple.earliest">-30d</set>
      <set token="simple.latest">now</set>
    </condition>
  </change>
</input>
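A hedged sketch of one Studio equivalent: put the earliest-time string directly in each item's value, expose it as a single token, and reference that token in a data source's queryParameters. The input key name is arbitrary, and the schema below reflects recent Dashboard Studio versions, so it may differ in older ones:

"inputs": {
  "input_simple": {
    "type": "input.dropdown",
    "title": "Simple Time Picker",
    "options": {
      "items": [
        { "label": "Last 24 Hours", "value": "-24h" },
        { "label": "Last 7 days", "value": "-7d" },
        { "label": "Last 30 days", "value": "-30d" }
      ],
      "defaultValue": "-24h",
      "token": "simple_earliest"
    }
  }
}

A data source can then use "queryParameters": { "earliest": "$simple_earliest$", "latest": "now" }.
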
Is anyone aware of a way, other than doing it manually, to create a MITRE ATT&CK Navigator layer based on the rules enabled in Splunk Enterprise Security? https://mitre-attack.github.io/attack-navigator/
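One hedged approach: ES correlation searches carry their ATT&CK mappings in the action.correlationsearch.annotations JSON, so a REST search can list technique IDs for enabled rules, and that output could then be scripted into a Navigator layer file. Field names can vary by ES version:

| rest /services/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1 disabled=0
| spath input=action.correlationsearch.annotations path=mitre_attack{} output=technique_id
| mvexpand technique_id
| stats values(title) AS rules BY technique_id
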
Hello, each time I upgrade DB Connect to v3.7 I systematically get the error below. I have tried on multiple Splunk instances, all the same version, v9.0.2, with the same behaviour using the CLI or the web interface. Any ideas? Thanks.

In handler 'localapps': Error installing application: Failed to copy: C:\Program Files\Splunk\var\run\splunk\bundle_tmp\3767c00e4cddc2c3\splunk_app_db_connect to C:\Program Files\Splunk\etc\apps\splunk_app_db_connect. 4 errors occurred. Description for first 4:
[
  {
    "operation": "renaming .tmp file to destination file",
    "error": "The process cannot access the file because it is being used by another process.",
    "src": "C:\\Program Files\\Splunk\\var\\run\\splunk\\bundle_tmp\\3767c00e4cddc2c3\\splunk_app_db_connect\\jars\\dbxquery.jar",
    "dest": "C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_db_connect\\jars\\dbxquery.jar"
  },
  {
    "operation": "renaming .tmp file to destination file",
    "error": "The process cannot access the file because it is being used by another process.",
    "src": "C:\\Program Files\\Splunk\\var\\run\\splunk\\bundle_tmp\\3767c00e4cddc2c3\\splunk_app_db_connect\\jars\\server.jar",
    "dest": "C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_db_connect\\jars\\server.jar"
  },
  {
    "operation": "copying contents from source to destination",
    "error": "There are no more files.",
    "src": "C:\\Program Files\\Splunk\\var\\run\\splunk\\bundle_tmp\\3767c00e4cddc2c3\\splunk_app_db_connect\\jars",
    "dest": "C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_db_connect\\jars"
  },
  {
    "operation": "copying contents from source to destination",
    "error": "There are no more files.",
    "src": "C:\\Program Files\\Splunk\\var\\run\\splunk\\bundle_tmp\\3767c00e4cddc2c3\\splunk_app_db_connect",
    "dest": "C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_db_connect"
  }
]
Hey, I'm trying to use pandas in the backend Python script for an alert. I copied the module into the /bin folder. Trying to import it, I get the error "module 'os' has no attribute 'add_dll_directory'". Does somebody know how to solve or work around that? I'm running Splunk + Python on a Windows 10 machine. Regards
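A hedged pointer: os.add_dll_directory only exists on Python 3.8+ (Windows), while recent pandas builds call it at import time, so this error usually means the interpreter running the script is older than pandas expects. A quick check, plus one common workaround of handing the pandas work to a newer system Python; both paths below are placeholders:

import os
import subprocess
import sys

print(sys.version)  # Splunk-bundled interpreters may be 3.7, which lacks add_dll_directory

if not hasattr(os, "add_dll_directory"):
    # Delegate the pandas-dependent work to a separately installed Python 3.8+
    subprocess.run([r"C:\Python39\python.exe", r"C:\scripts\pandas_work.py"], check=True)
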
Hello, is there a way to try out ITSI with a trial license or similar, or is it mandatory to buy a premium app license? Thanks
Hello Splunkers, I have a Splunk HF that will receive multiple logs coming from different machines, all sending via UDP. I am wondering whether I need to configure the external sources to send the logs via UDP on different ports (one port per source), or if I can simply tell all my sources to send over UDP port 514, for instance. I am wondering if UDP port 514 could become a "network bottleneck" because of too many logs coming from multiple sources on the same port. Thanks for your help, GaetanVP
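For reference, separate ports are less about throughput than about attribution: one stanza per port lets each source family get its own sourcetype. A hedged inputs.conf sketch (the sourcetypes are placeholders; note also that UDP has no flow control, so under heavy load a syslog relay such as rsyslog or syslog-ng in front of the HF is a common design regardless of port layout):

[udp://514]
sourcetype = syslog
connection_host = ip

[udp://5514]
sourcetype = cisco:asa
connection_host = ip
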
index="redis" sourcetype="csv" total_commands_processed="*"
| timechart span=5m total_commands_processed

In the search command above, I want to display the value of the field "total_commands_processed". Can anyone help?
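timechart needs an aggregation function around the field, which the query above is missing. A hedged fix (avg is an assumption; max or latest may fit the data better):

index="redis" sourcetype="csv" total_commands_processed="*"
| timechart span=5m avg(total_commands_processed) AS total_commands_processed
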
I'm trying to get Oracle DB data using the DB Connect app, and I have successfully scheduled my job and set up my connection. When executing the query in the Preview Data window, my results are as expected. Now comes the problem: the job executes without error and on time from what I can see in the DB Connect Input Health tab; however, none of my data is being ingested. When I try to ingest using the rising input type, I get the error "invalid column index". Please let me know the settings for selecting the rising input type, and why no data arrives when selecting batch.
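For the rising case, a hedged sketch of the expected query shape: the checkpoint placeholder ? must appear in the WHERE clause, the rising column must be in the SELECT list and sorted ascending, and the input's rising-column setting must point at that column. "Invalid column index" frequently means the configured column position no longer matches the SELECT list. Table and column names below are placeholders:

SELECT id, total_commands_processed, last_updated
FROM redis_stats
WHERE last_updated > ?
ORDER BY last_updated ASC
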
Hi Splunk community, I have an Excel file that sorts a field in a certain order, which may change over time. The Excel file looks something like this:

field1
AS
AC
RO
BE
...

Is it possible for me to sort by that particular field order in a search? Thanks for your help.
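One hedged approach: export the Excel sheet as a CSV lookup with an added numeric column giving each value's position, then join it in and sort on it (the file, field, and column names below are assumptions):

your_base_search
| lookup field_order.csv field1 OUTPUT sort_order
| sort 0 sort_order

Re-uploading the CSV whenever the order changes keeps the search itself untouched.
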
I used the Upgrade Readiness app to determine the outdated Splunkbase apps; its results showed that I had to upgrade the Splunk Add-on for AWS, which I did. Now, when I run the scan again to check that everything's fine, the Upgrade Readiness app is not able to scan the app properly. From the attached screenshot, we see that the readiness app cannot determine the add-on's Python 3 compatibility, as it is not able to find the app's version. Can anyone help me out with this? What's wrong here?
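As a hedged first check, confirm what version string Splunk itself sees for the add-on; if app.conf lost its version stanza during the upgrade, the readiness scan would fail in exactly this way. The app directory name below assumes the standard Splunk_TA_aws:

| rest /services/apps/local splunk_server=local
| search title="Splunk_TA_aws"
| table title label version
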
Hi folks, here's our setup for Windows logging with Splunk: Splunk UF --> Splunk HWF --> Splunk Cloud. Is there a way to identify which UFs are pushing logs to which HWFs?
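A hedged sketch, assuming the HWFs' own _internal logs are searchable from your search head: each receiver logs its inbound forwarder connections in metrics.log, so grouping tcpin_connections by forwarder and receiving host gives the mapping:

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen latest(version) AS fwd_version BY hostname host
| rename hostname AS forwarder, host AS heavy_forwarder
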
I have two saved searches:
1) Metrics-Location-Client -- gives LocationId, Client_Name as output
2) Matched-Locations -- gives LocationId

I want to get the count of LocationIds by Client_Name which are present in Metrics-Location-Client but not in Matched-Locations. Below is what I am trying, but it does not seem to work.

| savedsearch "Metrics-Location-Client"
| join Left=L Right=R where L.LocationId!=R.LocationId [ savedsearch "Matched-Locations" ]
| stats count(L.LocationId) as MonitoringCalls by L.Client_Name

I have tried set diff as well, but it is not working either:

| set diff [ savedsearch "Metrics-Location-Client" ] [ savedsearch "Matched-Locations" ]
| stats count(LocationId) by Client_Name

Any help would be appreciated!
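A hedged alternative that avoids join entirely: filter the first result set with a NOT subsearch on the second (subject to subsearch result limits for very large location lists):

| savedsearch "Metrics-Location-Client"
| search NOT [ savedsearch "Matched-Locations" | fields LocationId ]
| stats dc(LocationId) AS MonitoringCalls BY Client_Name
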
Hi, we have a heavy forwarder with the Splunk_TA_cisco-esa app and a props.conf as below:

TIME_FORMAT = %y>%b %d %H:%M:%S
TIME_PREFIX = ^<

We are finding that sometimes the events are created in Splunk with incorrect dates (see examples below). Any ideas why it's putting the events in the past? Thanks.
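A hedged observation: if these are RFC 3164 syslog events such as <166>Sep 14 10:02:33, the config above parses the syslog priority value as a two-digit year (%y), which may explain events landing decades in the past whenever the priority happens to parse as an old year. A sketch that skips the priority instead of parsing it:

TIME_PREFIX = ^<\d+>
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
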
status = ON
name = log4j
monitorInterval = 30
rootLogger.level = OFF
property.defaultUrl = http://localhost:8000
property.defaultToken = 321d1411-04a4-4179-8f49-6ce502f1236a
property.defaultPattern = %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d [%t] %-5p %c - %m%n

appender.lightconsole.type = Console
appender.lightconsole.name = lightconsole
appender.lightconsole.layout.type = PatternLayout
appender.lightconsole.layout.pattern = %m%n

loggers = DATABASE
logger.DATABASE.name = DATABASE
logger.DATABASE.level = INFO
logger.DATABASE.additivity = false
logger.DATABASE.appenderRefs = database
logger.DATABASE.appenderRef.database.ref = database

appenders = database
appender.database.type = http
appender.database.name = database
appender.database.url = ${defaultUrl}
appender.database.token = ${defaultToken}
appender.database.layout.type = PatternLayout
appender.database.layout.pattern = ${defaultPattern}

With the above setup, when I try to post the logs into Splunk I get the following error:

main ERROR Unable to send HTTP in appender [updater] java.io.IOException

The program I used to post the message to the Splunk dashboard:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class TestSplunk {
    public static final Logger direct = LogManager.getLogger("DATABASE");

    public static void main(String[] args) {
        // Log messages which should be visible in the Splunk dashboard
        direct.info("The following message should be displayed");
        direct.info("!!!!!!!!!!!!!!!!!!!!!!");
    }
}
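A hedged note on the config itself: port 8000 is Splunk Web's default, while the HTTP Event Collector listens on 8088 by default, and log4j2's plain http appender has no token property. If the splunk-library-javalogging jar is on the classpath, a sketch of its HEC appender instead (plugin and property names per that library; the URL, token, and index are placeholders):

appenders = splunkhttp
appender.splunkhttp.type = SplunkHttp
appender.splunkhttp.name = splunkhttp
appender.splunkhttp.url = https://localhost:8088
appender.splunkhttp.token = 321d1411-04a4-4179-8f49-6ce502f1236a
appender.splunkhttp.index = main
appender.splunkhttp.layout.type = PatternLayout
appender.splunkhttp.layout.pattern = %m%n
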
I am learning Splunk Enterprise Security and the SPL of Splunk Enterprise. Although the official tutorials are detailed, they lack actual cases and are difficult to understand. Is there any website explaining Splunk use cases in an easy-to-understand way? Thank you.
Hi, can anyone help me with a Splunk query for the requirement below?

index="abc"
| search "message"="Exit"
| search "stepName"=NotifyFlowEnd
| search stepName!=""
| search "message"="PAST_TIME_FRAME"
| chart count by "message"

For message="Exit", these two conditions need to be included:

| search "stepName"=NotifyFlowEnd
| search stepName!=""

The output should be the sum of the "message"="Exit" and "PAST_TIME_FRAME" counts.
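As written, the chained search commands AND everything together, so an event would need message to equal both "Exit" and "PAST_TIME_FRAME" and can never match. A hedged rewrite using an OR, so the step-name conditions apply only to the Exit events, ending with a single summed count:

index="abc" ((message="Exit" stepName="NotifyFlowEnd" stepName!="") OR message="PAST_TIME_FRAME")
| stats count AS total
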
Has anyone else had problems connecting SOAR to CrowdStrike to ingest detections? Our test connection is fine. We set the ingest to poll on a ten-minute interval. We can see a successful outbound call get made through the proxy, but no data is ingested from CrowdStrike. Other apps hit the proxy at the defined interval period, but with CrowdStrike it's completely ad hoc, no matter whether we try interval or scheduled polling. It will do nothing for hours, then hit it a couple of times and go quiet again. Every couple of days it might bizarrely ingest something, but then it stops again for days. I can't find anything of relevance in the debug log ingestd.log, and the SOAR console isn't indicating any ingestion errors. I have checked CrowdStrike's API rate limiting with a manual request, and we aren't anywhere near reaching any limits. Has anyone experienced anything like this? I'm not sure where to go from here; it's as if it's failing to schedule correctly. However, I can see the scheduled ingestion under Ingestion Summary in the console.