All Posts



It appears that only one of my hosts is sending in security logs: the Splunk search head. I verified that all other hosts have received the inputs.conf and are running with the required level of permissions, and I don't see any Windows firewall events blocking the outbound connection.
Updating here for anyone who may hit this error now. When running the command on a Linux host and your admin password contains !, enclose the password in single quotes, e.g. 'password!', when typing the command.
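For context: in an interactive bash shell, ! triggers history expansion inside double quotes but stays literal inside single quotes, which is why single-quoting fixes it. A minimal illustration (the password value and Splunk path below are made up, not from the original post):

```shell
# Single quotes keep '!' literal; bash performs no history expansion here.
PASS='Secret!123'
echo "$PASS"
# Quote the whole credential the same way when calling the Splunk CLI, e.g.:
# /opt/splunk/bin/splunk list user -auth 'admin:Secret!123'
```

Note that history expansion only applies to interactive shells; scripts are unaffected, which is why the problem tends to appear only when typing the command by hand.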
Last question on this. The existing Splunk TA already defines the info I'm interested in as an event type in /default/eventtypes.conf. Is it possible to configure this so that only a specific event type is indexed and all others are discarded?
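Worth noting that eventtypes are applied at search time, so they cannot drive index-time filtering directly. A common workaround is to replicate the eventtype's matching condition as a regex and route everything else to the null queue via props/transforms on the indexer or heavy forwarder. A sketch under that assumption (stanza names and the regex are placeholders, not taken from the TA; the transforms run in order, so the catch-all drop comes first):

```
# props.conf
[your_sourcetype]
TRANSFORMS-filter = drop_all, keep_wanted

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_wanted]
REGEX = <pattern matching the events your eventtype selects>
DEST_KEY = queue
FORMAT = indexQueue
```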
Splunk Enterprise 9.14, Security Essentials 3.80, Security Content updates at 4.32. After updating Security Essentials to 3.80 I can't load the security content page; the error "Cannot read properties of undefined (reading 'count')" is returned. We've uninstalled and reinstalled the app. Any suggestions?
Why is it that when I set the threat type's Security Domain to endpoint, it is always categorized as Threat, and it always gives me low in the alert? What is the problem? I hope for an answer.
OK, why is it that when I set the threat type to endpoint and high, it considers it a threat and low? What is the problem? I hope for an answer.
My inputs.conf looks like this:

index = wineventlog
sourcetype = WinEventLog:Security
disabled = 0
whitelist = 1, 2, 3, 4, 5
blacklist1 = $XmlRegex="(?ms)<EventID>5156<\/EventID>.*<Data\sName='Application'>\\device\\harddiskvolume\d+\\program\sfiles\\splunkuniversalforwarder\\(bin\\splunkd\.exe|etc\\apps\\splunk_ta_stream\\windows_x86_64\\bin\\streamfwd\.exe)<.*<Data\sName='DestPort'>(9997|443|8000)<"
blacklist2 = $XmlRegex="(?ms)<EventID>5156<\/EventID>.*<Data\sName='DestAddress'>(127.0.0.1|::1|0:0:0:0:0:0:0:1|169.254.*?|fe80:.*?)<"
blacklist3 = $XmlRegex="(?ms)<EventID>4688<\/EventID>.*<Data\sName='NewProcessName'>C:\\Program Files\\SplunkUniversalForwarder\\(etc\\apps\\Splunk_TA_stream\\windows_x86_64\\bin\\streamfwd.exe|bin\\(splunk-powershell.exe|splunk-MonitorNoHandle.exe|splunk-netmon.exe|splunk-regmon.exe|splunkd.exe|btool.exe|splunk.exe|splunk-winevtlog.exe|splunk-admon.exe|splunk-perfmon.exe|splunk-winprintmon.exe|splunk-wmi.exe))<"

I confirmed that this config has been pushed to all forwarders, the forwarders are using the Local System account, and the firewall is not blocking anything. Despite this, the logs I am ingesting are unrelated to my explicit whitelist and are ~5% of what I am expecting to see. Any ideas?
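When a pushed inputs.conf doesn't behave as expected, one useful check is what the forwarder actually merged at runtime, since a stanza from another app can override or duplicate yours. A btool invocation along these lines shows the effective settings and which file each one came from (the install path assumes a default Windows universal forwarder install):

```
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list WinEventLog://Security --debug
```

The --debug flag prefixes each line with the source file, which makes precedence conflicts easy to spot.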
My Splunk specs are:
- Search head & Monitoring Console on one server
- 3 indexers on separate servers
- Cluster Manager on a separate server
- License Manager & Deployment Server on one server

What is the proper process to shut everything down and bring everything back up for a power outage?
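One commonly used ordering for a planned outage in a clustered deployment looks like the sketch below; this is a hedged outline, not official guidance, so verify it against the documentation for your Splunk version before relying on it:

```
# Shutdown (run `splunk stop` on each host, in roughly this order)
# 1. Search head / Monitoring Console:     splunk stop
# 2. Cluster Manager:                      splunk enable maintenance-mode
#    (avoids unnecessary bucket fixup while peers go down)
# 3. Each indexer peer:                    splunk stop
# 4. Cluster Manager:                      splunk stop
# 5. License Manager / Deployment Server:  splunk stop

# Startup (roughly the reverse)
# 1. License Manager / Deployment Server:  splunk start
# 2. Cluster Manager:                      splunk start
# 3. Each indexer peer:                    splunk start
# 4. Cluster Manager:                      splunk disable maintenance-mode
# 5. Search head / Monitoring Console:     splunk start
```

The key ideas are that the cluster manager should be up before its peers rejoin, and maintenance mode suppresses replication churn while peers stop one by one.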
Here is a diagram of what I am trying to accomplish. Not able to get the last 2 columns of the end goal to match.

Index:
src_user  EventCode  user
service   4725       device1
service   4725       device2
service   4725       device3
service   4725       device4
service   4725       device5

Inputlookup:
nt_host   distinguishedName
device1   CN=device1,OUComputers,OU,Agency
device2   CN=device2,OUComputers,OU,Agency
device3   CN=device3,OUComputers,OU,Agency
device4   CN=device4,OUComputers,OU,Agency
device5   CN=device5,OUComputers,OU,Agency

End Goal:
src_user  EventCode  user     nt_host  distinguishedName
service   4725       device1  device1  CN=device1,OUComputers,OU,Agency
service   4725       device2  device2  CN=device2,OUComputers,OU,Agency
service   4725       device3  device3  CN=device3,OUComputers,OU,Agency
service   4725       device4  device4  CN=device4,OUComputers,OU,Agency
service   4725       device5  device5  CN=device5,OUComputers,OU,Agency
Please can I get a query to set up an alert for when a scheduled job fails to run?
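Assuming "scheduled job" means a Splunk scheduled search, the scheduler writes its outcomes to _internal, so a starting point could be something like the sketch below. The exact status values vary by version (e.g. skipped, continued), so treat the filter as an assumption to adjust:

```
index=_internal sourcetype=scheduler status!=success
| stats count latest(_time) as last_seen by savedsearch_name, app, status
```

Saving that as an alert with a schedule and a "number of results > 0" trigger condition would fire whenever a scheduled search did not complete successfully in the chosen time range.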
Double-check the calculation for the Day field.  Events less than a day old will have *greater* timestamps than older events. eval Day=if(_time>=relative_time(now(),"-1d@d"),"yesterday","Last30Day")
We have created what we call apps, which are just an inputs.conf file in a directory structure. We push that out to our clients. However, these do not appear in the Apps drop-down on the main screen. How can we convert them so that they do show in the drop-down menu under Apps? Our client wants that and I'm not certain how to take what I already have and wrap it into a "real" application. For instance:

ourcompany_thisapp_forwarder
    local
        inputs.conf

That is our current structure on the deployment client servers. It's small and does not do much but handle the inputs. Our new client wants it to really be an app with multiple types of folders for adding other data into Splunk. Does anyone know how to convert the "app" to a true app?
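An app appears in the Apps menu when it carries an app.conf declaring it visible. A minimal sketch, reusing the directory name from the post (the label, author, and version values are placeholders):

```
# ourcompany_thisapp_forwarder/default/app.conf
[ui]
is_visible = 1
label = ThisApp Forwarder

[launcher]
author = Our Company
description = Inputs and future data onboarding for ThisApp
version = 1.0.0
```

With is_visible = 1 the app shows in the drop-down; forwarder-only input apps usually set it to 0, which is likely why the current packages don't appear. Adding default/ and metadata/ folders alongside local/ gives the conventional structure for growing it into a fuller app.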
Hi, I am working in a distributed environment with a SHC of 3 search heads, and I am mapping VPN logs to fill certain datasets of my custom version of the Authentication data model (not accelerated for the moment). The datasets I added to the default Authentication data model are "Failed_Authentication", "Successful_Authentication" and "Login_Attempt".

Then I created an eventtype (with some associated tags) to match specific conditions for an authentication success:

sourcetype=XX action=success signature IN ("Agent login","Login","Secondary authentication","Primary authentication") OR (signature="Session" AND action="success")

Then I used the eventtype as a constraint for the dataset "Authentication.Successful_Authentication".

To test whether the constraint is working: I used the pivot button offered by the GUI and it returns some results. I also ran the following SPL in the search app and it returns some results:

index=vpn* tag=authentication eventtype=auth_vpn_success

However, if I try to retrieve the same information with tstats, it returns no results:

|tstats summariesonly=f count from datamodel=Authentication where nodename=Authentication.Successful_Authentication

Even running another tstats search to retrieve the eventtypes of the Authentication data model returns no results:

| tstats count from datamodel=Authentication by eventtype

I tried to troubleshoot the issue with 2 different tests:
1. Not using the eventtype field as a dataset constraint.
2. Creating another eventtype and using a different data model (Change).

1) I created a dataset constraint for "Authentication.Failed_Authentication" which uses neither tags nor eventtypes:

action=failure

And both of the aforementioned tstats searches work now!
2) I created another eventtype related to a change log type:

index=vpn* sourcetype=XX AND "User Accounts modified."

And I added it as a constraint for the dataset "All_Changes.Account_Change". Running the 2 aforementioned tstats searches returns some results!

In conclusion, I suspect there is an issue related to either tag=authentication (maybe some conflict with other default apps?) or the Authentication data model (related to the custom datasets I added?). Do you have any clue what I could have done wrong?

Kind regards, Z
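One common cause of exactly this symptom is knowledge-object sharing: for tstats against a data model to resolve an eventtype or tag constraint, the eventtype and its tags must be shared globally (exported to all apps), not kept private or app-scoped. A quick check, using the eventtype name from the post:

```
index=vpn* eventtype=auth_vpn_success
| stats count by eventtype, tag
```

If that returns the expected tags in the search app but tstats still returns nothing, reviewing the sharing/permissions of the eventtype and tags (e.g. export = system in the owning app's metadata/local.meta) would be the next place to look. This is a suggestion based on the symptoms described, not a confirmed diagnosis.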
Facing the same error with Machine Agent 23.6. Server Insight is already enabled at the Machine Agent end.
I am working on a Linux server; does anyone have a few SPL suggestions? Which logs to look for? What text to look for in the logs? That would be a great help. Thanks for the responses, but I need something more specific, like what logs/text to look for. I have the Linux app installed and the os index is indexing most of the important log files in /var/log, but I don't know what to look for or which log to specifically target. Help would be appreciated.
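Assuming the goal is spotting authentication problems, the usual starting point is the auth log (/var/log/secure on RHEL-family, /var/log/auth.log on Debian-family). A sketch, where the index and sourcetype names are assumptions that depend on how the Linux TA was configured:

```
index=os (source="/var/log/secure" OR source="/var/log/auth.log")
  ("Failed password" OR "Invalid user" OR "authentication failure" OR "session opened")
| stats count by host, source, user
```

From there, /var/log/messages (or the journal) covers general system errors, and narrowing by the literal strings above is usually enough to surface brute-force attempts and login activity.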
Here are the latest props.conf settings at 9.2.1 on the universal forwarder (JSON file parsing works great with this option):

EVENT_BREAKER_ENABLE = <boolean>
EVENT_BREAKER = <regular expression>
LB_CHUNK_BREAKER = <regular expression>
force_local_processing = <boolean>  * new *
* Forces a universal forwarder to process all data tagged with this sourcetype locally before forwarding it to the indexers.
* Data with this sourcetype is processed by the linebreaker, aggregator, and the regex replacement processors in addition to the existing utf8 processor.
* Note that switching this property potentially increases the CPU and memory consumption of the forwarder.
* Applicable only on a universal forwarder.
* Default: false
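As a concrete illustration of the first two settings, a stanza like the following tells the forwarder where complete events end for newline-delimited JSON, so events aren't split mid-object across forwarding chunks (the sourcetype name is a placeholder; the breaker regex assumes one JSON object per line):

```
# props.conf on the universal forwarder
[my_json_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)(?=\{)
```

The capture group marks the boundary text to discard, and the lookahead anchors each new event at an opening brace.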
Hello, I have created server-down and server-up alerts separately: the down alert triggers when percentile80>5 and the up alert when percentile80<5. But I want one combined alert that triggers every time the server is down, and only one up alert (recovery alert) once the server is up again; it should not trigger multiple up alerts until the server goes down again. Any way to get this done?

Below is the query. The time range is the last 15 minutes and the cron schedule is */2 * * * * (every 2 minutes):

index=xyz sourcetype=xyz host=*
| eval RespTime=time_taken/1000
| eval RespTime = round(RespTime,2)
| bucket _time span=2m
| stats avg(RespTime) as Average perc80(RespTime) as "Percentile_80" by _time
| eval Server_Status=if(Percentile_80>=5, "Server Down", "Server UP")

So the alert above should trigger when the server is down, every 2 minutes until it is up. Then it should trigger only once when the server is up again, and not trigger every 2 minutes until the server is down again.
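One way to get "every down, but only the first up" from a single search is to compare each 2-minute bucket with the previous one and keep UP rows only on the down-to-up transition. A sketch building on the query above (the final where clause is the only addition of substance):

```
index=xyz sourcetype=xyz host=*
| eval RespTime=round(time_taken/1000,2)
| bucket _time span=2m
| stats perc80(RespTime) as Percentile_80 by _time
| eval Server_Status=if(Percentile_80>=5, "Server Down", "Server UP")
| sort 0 _time
| streamstats current=f window=1 last(Server_Status) as Prev_Status
| where Server_Status="Server Down"
    OR (Server_Status="Server UP" AND Prev_Status="Server Down")
```

With a "number of results > 0" trigger, every down bucket fires, while an UP bucket fires only when the immediately preceding bucket was down, which acts as the one-time recovery alert.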
Newbie here. Trying to get the results from the index to match results in the inputlookup, and to return only results from the index. I've been playing around with join, append, and appendcols but cannot seem to get the results matching the index search. Need some guidance.

| inputlookup Assets
| appendcols
    [ search nt_host distinguishedName dns ]
    [ search index=win EventCode=4725 src_user="*"
    | eval user=replace(user,"[^[:word:]]","") ]
| eval user=nt_host
| stats count by src_user, EventCode, signature, user, nt_host, distinguishedName
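For enriching index results with lookup fields, the lookup command is usually simpler and more reliable than appendcols, which just pastes rows side by side with no matching. A sketch, assuming the Assets lookup has nt_host and distinguishedName columns and that nt_host should match the cleaned user value:

```
index=win EventCode=4725 src_user="*"
| eval user=replace(user,"[^[:word:]]","")
| lookup Assets nt_host AS user OUTPUT nt_host, distinguishedName
| where isnotnull(distinguishedName)
| stats count by src_user, EventCode, user, nt_host, distinguishedName
```

This keeps only index events that matched an asset row, which matches the "End Goal" of returning index results annotated with the lookup's columns.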
Whatever do you mean? A Splunk admin manages and maintains Splunk environments and the platform overall, among other things. What are you trying to achieve? For this community to help you, please detail what objectives you want to meet.
Thank you  @deepakc