All Topics

I used the Upgrade Readiness app to identify outdated Splunkbase apps, and its results showed that I had to upgrade the Splunk Add-on for AWS, which I did. But when I re-ran the scan to check that everything was fine, the Upgrade Readiness app could no longer scan the app properly. As the attached screenshot shows, it cannot determine the add-on's Python 3 compatibility because it is not able to find the app's version. Can anyone help me out with this? What's wrong here?
Hi folks, here's our setup for Windows logging with Splunk: Splunk UF --> Splunk HWF --> Splunk Cloud. Is there a way to identify which UFs are pushing logs to which HWFs?
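One approach that may help, sketched under the assumption that the UFs forward their `_internal` logs: each HWF records incoming forwarder connections in metrics.log under `group=tcpin_connections`, so those events can map forwarders to receivers. The field names (`hostname` for the connecting forwarder, `host` for the receiving HWF, `sourceIp`) are assumptions based on typical splunkd metrics events.

```spl
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(_time) as last_seen by hostname, sourceIp, host
| rename hostname as forwarder, host as receiving_hwf
| convert ctime(last_seen)
```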
I have two saved searches:
1) Metrics-Location-Client -- gives LocationId and Client_Name as output
2) Matched-Locations -- gives LocationId
I want to get the count of LocationIds by Client_Name that are present in Metrics-Location-Client but not in Matched-Locations. Below is what I am trying, but it doesn't seem to work:
| savedsearch "Metrics-Location-Client" | join Left=L Right=R where L.LocationId!=R.LocationId [ savedsearch "Matched-Locations" ] | stats count(L.LocationId) as MonitoringCalls by L.Client_Name
I have tried set diff as well, but that isn't working either:
| set diff [ savedsearch "Metrics-Location-Client" ] [ savedsearch "Matched-Locations" ] | stats count(LocationId) by Client_Name
Any help would be appreciated!
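A sketch of one way to get "in A but not in B" counts without `set diff`, using a left join and a marker field (the saved-search and field names are taken from the question):

```spl
| savedsearch "Metrics-Location-Client"
| join type=left LocationId
    [| savedsearch "Matched-Locations" | eval matched=1 ]
| where isnull(matched)
| stats dc(LocationId) as MonitoringCalls by Client_Name
```

An `append`-plus-`stats` pattern would also work and avoids `join`'s subsearch limits, but this is the more direct translation of the original attempt.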
Hi, we have a heavy forwarder with the Splunk_TA_cisco-esa app and a props.conf as below:
TIME_FORMAT = %y>%b %d %H:%M:%S
TIME_PREFIX = ^<
We are finding that some events are being created in Splunk with incorrect dates (see examples below). Any ideas why it's putting the events in the past? Thanks
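One possible cause, offered as a guess from the config shown: with `TIME_PREFIX = ^<` and `TIME_FORMAT = %y>%b %d %H:%M:%S`, the number inside the leading `<...>` (the syslog priority, which varies per event) is parsed as a two-digit year, so an event starting `<13>` lands in 2013. A sketch that skips the priority instead of parsing it, assuming the events are RFC 3164-style with no year in the timestamp (the stanza name is a placeholder for your actual sourcetype):

```
# props.conf on the heavy forwarder (sketch; stanza name is an assumption)
[cisco:esa:textmail]
TIME_PREFIX = ^<\d+>
TIME_FORMAT = %b %d %H:%M:%S
```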
status = ON
name = log4j
monitorInterval=30
rootLogger.level = OFF
property.defaultUrl = http://localhost:8000
property.defaultToken = 321d1411-04a4-4179-8f49-6ce502f1236a
property.defaultPattern = %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d [%t] %-5p %c - %m%n
appender.lightconsole.type = Console
appender.lightconsole.name = lightconsole
appender.lightconsole.layout.type = PatternLayout
appender.lightconsole.layout.pattern = %m%n
loggers = DATABASE
logger.DATABASE.name = DATABASE
logger.DATABASE.level = INFO
logger.DATABASE.additivity = false
logger.DATABASE.appenderRefs = database
logger.DATABASE.appenderRef.database.ref = database
appenders = database
appender.database.type = http
appender.database.name = database
appender.database.url = ${defaultUrl}
appender.database.token = ${defaultToken}
appender.database.layout.type = PatternLayout
appender.database.layout.pattern = ${defaultPattern}

With the above setup, when I try to post the logs into Splunk I get the following error:
main ERROR Unable to send HTTP in appender [updater] java.io.IOException

This is the program I used to post the message to the Splunk dashboard:

public class TestSplunk {
    public static final Logger direct = LogManager.getLogger("DATABASE");

    public static void main(String[] args) {
        // Log messages which should be visible in the Splunk dashboard
        direct.info("The following message should be displayed");
        direct.info("!!!!!!!!!!!!!!!!!!!!!!");
    }
}
I am learning Splunk Enterprise Security and the SPL of Splunk Enterprise. Although the official tutorials are detailed, they lack real-world cases and are difficult to understand. Is there any website that explains Splunk use cases in an easy-to-understand way? Thank you.
Hi, can anyone help me build a Splunk query for the requirement below? Here is what I have:
index="abc" | search "message"="Exit" | search "stepName"=NotifyFlowEnd | search stepName!="" | search "message"="PAST_TIME_FRAME" | chart count by "message"
For message="Exit", these two conditions need to be included: stepName=NotifyFlowEnd and stepName!="". The output should be the sum of the message="Exit" count and the message="PAST_TIME_FRAME" count.
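If the requirement is the combined count of `message="Exit"` events (restricted to `stepName=NotifyFlowEnd` with a non-empty stepName) plus `message="PAST_TIME_FRAME"` events, one sketch (all field names taken from the question):

```spl
index="abc" ((message="Exit" AND stepName="NotifyFlowEnd" AND stepName!="") OR message="PAST_TIME_FRAME")
| stats count AS total_count
```

Replacing `stats count` with `chart count by message | addcoltotals` would give the per-message breakdown as well as the sum.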
Has anyone else had problems connecting SOAR to CrowdStrike to ingest detections? Our test connection is fine. We set the ingest to poll on a ten-minute interval. We can see a successful outbound call get made through the proxy, but no data is ingested from CrowdStrike. Other apps we see hit the proxy at the defined interval, but with CrowdStrike it's completely ad hoc, no matter whether we try interval or scheduled polling. It will do nothing for hours, then hit it a couple of times and go quiet again. Every couple of days it might bizarrely ingest something, but then stops again for days. I can't find anything of relevance in the debug log ingestd.log, and the SOAR console isn't indicating any ingestion errors. I have checked CrowdStrike's API rate limiting with a manual request, and we aren't anywhere near any limits. Has anyone experienced anything like this? I'm not sure where to go from here; it's as if it's failing to schedule correctly, yet I can see the scheduled ingestion under Ingestion Summary in the console.
Hi, I have a dashboard where the panels in one row appear overlapping (see attached screenshot). The current HTML code for this row is attached as well. How can I fix this? Thanks as always!
I have a unique requirement to forward Splunk alerts to an external syslog server. I have only seen use cases of forwarding data from a Splunk heavy forwarder to syslog, which is straightforward. However, our use case is that Splunk should forward an alert to the syslog server any time the alert is triggered. I tried some syslog mod-alert apps from Splunkbase, but none worked. Any suggestions?
I have fields for user and URL parsed into Splunk from a proxy log, and I am trying to build a table that shows deduplicated users who have visited at least two of four or five URLs, e.g.:
user1 - URL1, URL2, URL3
user2 - URL2, URL5
etc. What would be the best way of accomplishing this? I am not sure whether I should be using a transforming command, just formatting the results, or something else entirely.
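A sketch of the usual pattern for "visited at least N distinct URLs", assuming the fields are literally named `user` and `url` and the URLs of interest are known in advance:

```spl
index=proxy url IN ("URL1", "URL2", "URL3", "URL4", "URL5")
| stats dc(url) AS url_count, values(url) AS urls_visited BY user
| where url_count >= 2
```

`dc()` gives the distinct count, so repeat visits to the same URL don't count twice, and `values()` lists which URLs each remaining user hit.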
Hi, I have a couple of rex commands in my search query whose extracted fields aren't used anywhere. Does keeping them have a negative impact on performance? Any ideas? Thanks
Hi, what is the quickest way to find the 100 largest values of "Q" in a huge log file?
Here is my query:
index="myindex" | rex "Q\[(?<Q>\d+)\]" | stats max(Q)
Here is a sample log line:
13:58:34.999  Q[16]
Any ideas? Thanks
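`stats max(Q)` returns only the single largest value. For the top 100, `sort` with a result limit keeps the work bounded; a sketch reusing the extraction from the question:

```spl
index="myindex"
| rex "Q\[(?<Q>\d+)\]"
| sort 100 -num(Q)
| table _time, Q
```

The `num()` wrapper forces a numeric rather than lexicographic sort, and the leading `100` stops the sort after the top 100 results.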
Noob question: can someone please assist with how to get an alert when any of the inputs under any TA (add-on) stops sending logs for the last 24 hours? Let's take the Splunk Add-on for AWS as an example. I have around 60 inputs configured. How do I write a search that can alert when any of these 60 inputs stops working? If I search "index=aws", it shows me various sources and sourcetypes, but there isn't any field that holds the names of these inputs. We have run into issues where we had to manually enable/disable inputs quite frequently when they stop logging.
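A common sketch for "input went quiet" alerting: track the latest event time per source/sourcetype with `tstats` and alert on anything older than 24 hours. Whether `source` maps one-to-one onto your AWS inputs is an assumption; for some AWS input types the input name appears in `source`, for others `sourcetype` or a combination may be needed.

```spl
| tstats latest(_time) as last_seen where index=aws by source, sourcetype
| eval hours_silent = round((now() - last_seen) / 3600, 1)
| where last_seen < relative_time(now(), "-24h")
```

Note this only catches sources that have logged at least once in the search window; an input that has never logged produces no row, so a lookup of expected inputs is the usual companion to this search.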
I am trying to increase the "Network Socket timeout" in the LDAP group configuration. I tried modifying the parameters as described in this post: https://community.splunk.com/t5/Splunk-Search/LDAP-Group-Configuration-Can-I-increase-the-search-request-time/m-p/137841. Though the value is reflected fine in the UI, when I edit anything in the LDAP settings UI it errors out with the message "Invalid network timeout". I am unable to figure out which parameter the set value is validated against. Any help would be appreciated. TIA
Our Splunk alerts were integrated with ServiceNow via email ingestion, but this suddenly stopped, and we are no longer receiving tickets in ServiceNow even though alerts are triggered in Splunk. What could cause this, and how can we troubleshoot the issue? Thank you
1) Which 'splunkd' is this referring to, the Universal Forwarder's or Splunk Enterprise's (the Deployment Server's)?
2) 'After installation' of what -- the deployment app?
3) Does this tick box cause the Universal Forwarder to restart each time there's a modification to the deployment app, e.g. a change to inputs.conf?
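For reference, the behaviour behind that tick box corresponds to a serverclass.conf setting on the deployment server; a sketch with placeholder server-class and app names:

```
# serverclass.conf on the deployment server (class and app names are placeholders)
[serverClass:all_ufs:app:my_deployment_app]
restartSplunkd = true    # deployment clients restart their own splunkd after (re)installing this app
```

It is the client (the Universal Forwarder) that restarts, not the deployment server, and the restart is triggered whenever the app is installed or updated on the client.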
I need to find the IPs hitting the index waf, by client IP. Hits from IPs present in lookup table 2 have to be excluded; in place of the PolicyID we need the PolicyName from lookup table 1; and only events whose Rules value is "alert" should be displayed in the search.

Sample search results:
ClientIP: 194.38.20.161, 199.249.230.183
PolicyID: xxxx, yyyy, zzzz
Rules: alert, deny
details: xxxx, xxxx, xxxx

lookup 1:
PolicyID  PolicyName
xxxx      prod
yyyy      ops
zzzz      xps

lookup 2:
description  IP
xyz          3.13.1561.11/16
abc          6.18.293.133/32
sdfdh        9.18.53.54/8
aftiml       2.57.344.66/64

Expected output:
Client_IP        PolicyName  Rules  details
194.38.20.161    prod        alert  xydihflaf
199.249.230.183  ops         alert  hdkafhfh
192.456.46.92    xps         alert  yedukak

Ciao
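A sketch of one way to combine the pieces (the lookup names `lookup1`/`lookup2` and exact field names are assumptions; note that because lookup 2 holds CIDR ranges, its lookup definition must set the IP field's match type to CIDR for the match to work):

```spl
index=waf Rules=alert
| lookup lookup1 PolicyID OUTPUT PolicyName
| lookup lookup2 IP as ClientIP OUTPUT description as excluded
| where isnull(excluded)
| table ClientIP, PolicyName, Rules, details
```

The second lookup tags any event whose ClientIP falls in a listed range, and `where isnull(excluded)` drops those tagged events.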
Hi, from a Splunk search, how do I convert the "msDS-UserPasswordExpiryTimeComputed" value retrieved from AD into a date? I would like to do the conversion with a Splunk command. Thank you
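That attribute is a Windows FILETIME: the number of 100-nanosecond intervals since 1601-01-01 UTC. Converting to Unix epoch means dividing by 10,000,000 and subtracting 11,644,473,600 (the seconds between 1601 and 1970). A sketch, assuming the attribute arrives in Splunk under that exact field name:

```spl
... | eval expiry_epoch = 'msDS-UserPasswordExpiryTimeComputed' / 10000000 - 11644473600
| eval expiry_date = strftime(expiry_epoch, "%Y-%m-%d %H:%M:%S")
```

The single quotes around the field name are required in `eval` because the name contains a hyphen.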
Hi, I need to subtract 30 days from earliest, where earliest is computed from a token. I tried converting the token result to Unix time and subtracting 2628000 from it, but that doesn't work. The token will use the day before today with the time 14:30 or 23:59, so I need that exact time as latest, but with earliest 30 days before that exact date and time. These are the forms I tried:
index="*" sourcetype="*" earliest=1669296600.000000-2628000.000000 latest=1669296600.000000
or
index="*" sourcetype="*" earliest="11/24/2022 14:30:00"-30d latest="11/24/2022 14:30:00"
Is this possible? Could someone please help? Thank you in advance.
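`earliest`/`latest` don't evaluate inline arithmetic, but both bounds can be computed in a subsearch and handed back with `return`. A sketch using the epoch value from the question; in practice the literal would be replaced by your token:

```spl
index="*" sourcetype="*"
    [| makeresults
     | eval latest = 1669296600
     | eval earliest = relative_time(latest, "-30d")
     | return earliest latest ]
```

As an aside, exactly 30 days is 2,592,000 seconds; the 2,628,000 in the question is closer to an average month, which may itself explain unexpected boundaries.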