All Posts


OK, why is it that when I set the threat type to "endpoint" with severity "high", it registers the threat with severity "low"? What is the problem? I hope for an answer.
My inputs.conf looks like this:

index = wineventlog
sourcetype = WinEventLog:Security
disabled = 0
whitelist = 1, 2, 3, 4, 5
blacklist1 = $XmlRegex="(?ms)<EventID>5156<\/EventID>.*<Data\sName='Application'>\\device\\harddiskvolume\d+\\program\sfiles\\splunkuniversalforwarder\\(bin\\splunkd\.exe|etc\\apps\\splunk_ta_stream\\windows_x86_64\\bin\\streamfwd\.exe)<.*<Data\sName='DestPort'>(9997|443|8000)<"
blacklist2 = $XmlRegex="(?ms)<EventID>5156<\/EventID>.*<Data\sName='DestAddress'>(127.0.0.1|::1|0:0:0:0:0:0:0:1|169.254.*?|fe80:.*?)<"
blacklist3 = $XmlRegex="(?ms)<EventID>4688<\/EventID>.*<Data\sName='NewProcessName'>C:\\Program Files\\SplunkUniversalForwarder\\(etc\\apps\\Splunk_TA_stream\\windows_x86_64\\bin\\streamfwd.exe|bin\\(splunk-powershell.exe|splunk-MonitorNoHandle.exe|splunk-netmon.exe|splunk-regmon.exe|splunkd.exe|btool.exe|splunk.exe|splunk-winevtlog.exe|splunk-admon.exe|splunk-perfmon.exe|splunk-winprintmon.exe|splunk-wmi.exe))<"

I confirmed that this config has been pushed to all forwarders, that the forwarders are using the Local System account, and that the firewall is not blocking anything. Despite this, the logs I am ingesting are unrelated to my explicit whitelist and are ~5% of what I am expecting to see. Any ideas?
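For comparison, here is a hedged sketch of the usual shape of such a monitor stanza. The stanza header below is an assumption, since it is not shown in the question, and whether it matches your deployed config is exactly the kind of thing worth double-checking:

```ini
# inputs.conf sketch - the stanza header is assumed, not taken from the question
[WinEventLog://Security]
index = wineventlog
disabled = 0
# whitelist/blacklist of Event IDs; a comma-separated list is valid syntax
whitelist = 1,2,3,4,5
```

If the settings in the question are not under a `[WinEventLog://...]` stanza (or sit under the wrong one), the whitelist would not apply, which could explain ingesting unrelated events.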
My Splunk specs are:
- Search head & Monitoring Console on one server
- 3 indexers on separate servers
- Cluster Manager on a separate server
- License Manager & Deployment Server on one server

What is the proper process to shut everything down and bring everything back up for a power outage?
Here is a diagram of what I am trying to accomplish. I am not able to get the last 2 columns of the end goal to match.

Index:
src_user  EventCode  user
service   4725       device1
service   4725       device2
service   4725       device3
service   4725       device4
service   4725       device5

Inputlookup:
nt_host  distinguishedName
device1  CN=device1,OUComputers,OU,Agency
device2  CN=device2,OUComputers,OU,Agency
device3  CN=device3,OUComputers,OU,Agency
device4  CN=device4,OUComputers,OU,Agency
device5  CN=device5,OUComputers,OU,Agency

End Goal:
src_user  EventCode  user     nt_host  distinguishedName
service   4725       device1  device1  CN=device1,OUComputers,OU,Agency
service   4725       device2  device2  CN=device2,OUComputers,OU,Agency
service   4725       device3  device3  CN=device3,OUComputers,OU,Agency
service   4725       device4  device4  CN=device4,OUComputers,OU,Agency
service   4725       device5  device5  CN=device5,OUComputers,OU,Agency
Please can I get a query to set up an alert for when a scheduled job fails to run?
Double-check the calculation for the Day field. Events less than a day old will have *greater* timestamps than older events.

eval Day=if(_time>=relative_time(now(),"-1d@d"),"yesterday","Last30Day")
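As a minimal sketch of the corrected comparison in context (the index name is a placeholder, not from the question):

```spl
index=your_index earliest=-30d@d
| eval Day=if(_time>=relative_time(now(),"-1d@d"),"yesterday","Last30Day")
| stats count by Day
```

The key point is the direction of the comparison: `relative_time(now(),"-1d@d")` snaps to the start of yesterday, so recent events satisfy `_time>=...`, not `<=`.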
We have created what we call apps, which is just an inputs.conf file in a directory structure. We push that out to our clients. However, these do not create an entry in the Apps drop-down on the main screen. How can we convert them so that they do show in the drop-down menu under Apps? Our client wants that, and I'm not certain how to take what I already have and wrap it into a "real" application.

For instance, this is our current structure on the deployment client servers:

ourcompany_thisapp_forwarder
    local
        inputs.conf

It's small and does not do much but handle the inputs. Our new client wants it to really be an app with multiple types of folders for adding other data into Splunk. Does anyone know how to convert the "app" to a true app?
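For what it's worth, a minimal sketch of what makes a directory show up in the Apps menu: add a default/app.conf next to your local directory. The label, description, and version below are placeholders; only the `is_visible`/`label` settings are what make the app appear in the UI:

```ini
# ourcompany_thisapp_forwarder/default/app.conf (values are examples)
[ui]
is_visible = true
label = Our Company Forwarder App

[launcher]
author = Your Team
description = Forwarder inputs for our company
version = 1.0.0

[package]
id = ourcompany_thisapp_forwarder
```

Note that an app that only delivers inputs to forwarders is often deliberately kept invisible; making it visible mainly matters on search heads where users browse the Apps menu.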
Hi, I am working in a distributed environment with a SHC of 3 search heads, and I am mapping VPN logs to fill certain datasets of my custom version of the Authentication data model (not accelerated for the moment). The datasets I added to the default Authentication data model are "Failed_Authentication", "Successful_Authentication" and "Login_Attempt" (screenshot omitted).

Then, I created an eventtype (with some associated tags) to match specific conditions for an authentication success, as shown below:

sourcetype=XX action=success signature IN ("Agent login","Login","Secondary authentication","Primary authentication") OR (signature="Session" AND action="success")

Then, I used the eventtype as a constraint for the dataset "Authentication.Successful_Authentication".

To test whether the constraint is working:
- I used the pivot button offered by the GUI and it returns some results.
- I ran the following SPL in the search app and it also returns some results:

index=vpn* tag=authentication eventtype=auth_vpn_success

However, if I try to retrieve the same information using the following SPL (based on tstats), it returns no results:

|tstats summariesonly=f count from datamodel=Authentication where nodename=Authentication.Successful_Authentication

Even running another tstats-based SPL to retrieve the eventtypes of the Authentication data model returns no results:

| tstats count from datamodel=Authentication by eventtype

I tried to troubleshoot the issue with 2 different tests:
1. Not using the eventtype field as a dataset constraint.
2. Creating another eventtype and using a different data model (Change).

1) I created a dataset constraint for "Authentication.Failed_Authentication" which does not use either tags or eventtypes, as follows:

action=failure

And both of the aforementioned tstats SPLs are working now!
2) I created another eventtype related to a change log type, as follows:

index=vpn* sourcetype=XX AND "User Accounts modified."

And I added it as a constraint for the dataset "All_Changes.Account_Change". Running the 2 aforementioned tstats SPLs returns some results!

In conclusion, I suspect there is an issue related either to tag=authentication (maybe some conflict with other default apps?) or to the Authentication data model (related to the custom datasets I added?). Do you have any clue what I could have done wrong?

Kind Regards, Z
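Not part of the original troubleshooting, but one additional cross-check worth sketching: the `datamodel` command evaluates the dataset constraints at search time without acceleration, so comparing its results with the tstats results can help isolate whether the tag/eventtype lookup is the problem:

```spl
| datamodel Authentication Successful_Authentication search
| stats count
```

If this returns events while the tstats query returns nothing, the constraint itself is fine and the issue is more likely in how tstats resolves the tagged fields.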
Facing the same error with Machine agent 23.6. Server Insight is already enabled at Machine agent end.
I am working on a Linux server, does anyone have a few SPL suggestions? Which logs to look in? What text to look for in the logs? That would be a great help. Thanks for the responses, but I need something more specific, like what logs/text to look for. I have the Linux app installed and the os index is basically indexing most of the important log files in /var/log, but I do not know what to look for or which log to specifically target. Help would be appreciated.
Here are the latest props.conf settings at 9.2.1 on the universal forwarder (JSON file parsing works great with this option):

EVENT_BREAKER_ENABLE = <boolean>
EVENT_BREAKER = <regular expression>
LB_CHUNK_BREAKER = <regular expression>

force_local_processing = <boolean>  * new *
* Forces a universal forwarder to process all data tagged with this sourcetype locally before forwarding it to the indexers.
* Data with this sourcetype is processed by the linebreaker, aggregator, and the regex replacement processors in addition to the existing utf8 processor.
* Note that switching this property potentially increases the CPU and memory consumption of the forwarder.
* Applicable only on a universal forwarder.
* Default: false
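As an illustration of the event-breaker settings above, a UF-side stanza for newline-delimited JSON might look like this. The sourcetype name is a placeholder, and the breaker regex assumes one JSON object per line:

```ini
# props.conf on the universal forwarder (sourcetype name is an example)
[my_json_sourcetype]
EVENT_BREAKER_ENABLE = true
# break events at the newline(s) between JSON objects;
# the capture group marks where the break occurs
EVENT_BREAKER = ([\r\n]+)
```

With EVENT_BREAKER in place, the forwarder distributes complete events across indexers instead of splitting a load-balanced chunk mid-event.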
Hello, I have created server down and server up alerts separately: the down alert triggers when Percentile_80 > 5 and the up alert when Percentile_80 < 5. But I want to create one combined alert that triggers every time while the server is down, and only one up (recovery) alert once the server is up again, meaning it should not trigger multiple up alerts until the server goes down again. Any way to get this done?

Below is the query. The time range is the last 15 minutes and the cron schedule is */2 * * * * (every 2 minutes).

index=xyz sourcetype=xyz host=*
| eval RespTime=time_taken/1000
| eval RespTime=round(RespTime,2)
| bucket _time span=2m
| stats avg(RespTime) as Average perc80(RespTime) as "Percentile_80" by _time
| eval Server_Status=if(Percentile_80>=5, "Server Down", "Server UP")

So the above alert should trigger when the server is down, every 2 minutes, until it is up. Then the alert should trigger only once when the server comes back up, and it should not trigger every 2 minutes until the server goes down again.
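Not the poster's solution, just a sketch of one possible approach: detect the down-to-up transition by comparing each bucket's status with the previous bucket's (field names are taken from the query above), and fire the recovery alert only when the status flips to UP:

```spl
index=xyz sourcetype=xyz host=*
| eval RespTime=round(time_taken/1000,2)
| bucket _time span=2m
| stats perc80(RespTime) as Percentile_80 by _time
| eval Server_Status=if(Percentile_80>=5,"DOWN","UP")
| streamstats current=f window=1 last(Server_Status) as Prev_Status
| where Server_Status="UP" AND Prev_Status="DOWN"
```

This would be a separate recovery alert; the existing down alert can keep firing on every run while Server_Status stays DOWN. Alert throttling on the saved search is another way to suppress repeated UP notifications.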
Newbie here. Trying to get the results from the index to match results in the inputlookup, and to only return results from the index. I have been playing around with join, append, and appendcols, but cannot seem to get the results matching the index search. Need some guidance.

| inputlookup Assets
| appendcols
    [ search nt_host distinguishedName dns ]
    [ search index=win EventCode=4725 src_user="*"
      | eval user=replace(user,"[^[:word:]]","") ]
| eval user=nt_host
| stats count by src_user, EventCode, signature, user, nt_host, distinguishedName
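A sketch of an alternative approach, assuming the Assets lookup has an nt_host column whose values match the cleaned user field: start from the index search and enrich each event with the lookup, which avoids the row-alignment problems of appendcols:

```spl
index=win EventCode=4725 src_user="*"
| eval user=replace(user,"[^[:word:]]","")
| eval nt_host=user
| lookup Assets nt_host OUTPUT distinguishedName
| stats count by src_user, EventCode, user, nt_host, distinguishedName
```

Because the pipeline starts from the index, only events from the index appear in the output, and the lookup simply adds distinguishedName where nt_host matches.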
Whatever do you mean? A Splunk admin manages and maintains Splunk environments and the platform overall, among other things. What are you trying to achieve? For this community to help you, please detail what objectives you want to meet.
Thank you  @deepakc 
That app is not Splunk Cloud supported, hence you can't install it. It might be worth contacting the developers of this app (it is not Splunk-developed) to see what options they can offer you.
You will need to use the Splunk DB Connect application; its documentation mentions that MongoDB Atlas is supported, and you will need to install the drivers etc.

https://docs.splunk.com/Documentation/DBX/3.17.1/DeployDBX/AboutSplunkDBConnect

There are many steps, so work through them slowly and don't miss any. This is a good link to get an overview of DB Connect, but you will have to adapt the setup for MongoDB as you go along:

https://lantern.splunk.com/Splunk_Platform/Product_Tips/Extending_the_Platform/Configuring_Splunk_DB_Connect
My Splunk instance (Splunk Enterprise Server 9.0.8) is a standalone for demo purposes, hence it has only demo data. It should only show data for 'All time', as all the data are from 2022-2023. I saved the time input with 'All time' as the default, but on page load no data is shown anywhere in the dashboard. If I then select any other time option and switch back to 'All time', the whole dashboard starts showing all the expected data. Please help me.
Why do I keep getting this error on ACS?

information on securing this, see https://docs.ansible.com/ansible-core/2.11/user_guide/become.html#risks-of-becoming-an-unprivileged-user
FAILED - RETRYING: Restart the splunkd service - Via CLI (60 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (59 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (58 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (57 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (56 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (55 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (54 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (53 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (52 retries left).
Hi, for a personal project I am using MongoDB Atlas and Splunk. I would like to ingest my logs from MongoDB Atlas into Splunk. Is there any documentation or method to achieve this? Thank you!