All Topics

Hello All, Perhaps I have the $64K question. I am trying to understand (better) the IOWAIT warnings and errors, the yellow and red icons, etc. I know that IOWAIT can be an issue, and only on Linux-based servers. I will guess that running Splunk Enterprise on a virtual Linux machine makes things harder. I have revised the Health Report Manager settings per a Splunk forum posting, and the issue is resolved for the most part. I can run an "unreasonable" search and get the warning icon, and then, as the search progresses, the red error icon. I have run some Linux commands like iostat and iotop while the search is running, but do not see any useful data. I am just curious how Splunk determines the IOWAIT values as part of the health monitoring. I was also wondering, if I reset the health reporting values back to the default, how I might go about reducing the "IOWAIT" characteristic on the Splunk server. Thanks for any hints or tips. ewholz
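A note for anyone comparing Splunk's numbers against the OS: on Linux, the health monitor derives iowait from periodic /proc/stat samples taken by splunkd's introspection, and the yellow/red transitions are governed by thresholds in health.conf. A hedged sketch of the relevant stanza — the indicator names follow health.conf.spec, but the numeric values here are illustrative, not the shipped defaults:

```
# $SPLUNK_HOME/etc/system/local/health.conf
[feature:iowait]
# Illustrative thresholds -- check health.conf.spec for your version's defaults
indicator:avg_cpu__max_perc_last_3mins:yellow = 10
indicator:avg_cpu__max_perc_last_3mins:red = 25
```

Because the indicator averages over a trailing window, a single heavy search can hold the icon red for a few minutes even after iostat looks quiet again.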
Splunk Enterprise 9.14, Security Essentials 3.80, Security Content updates at 4.32. After updating Security Essentials to 3.80 I can't load the security content page. The error "Cannot read properties of undefined (reading 'count')" is returned. We've uninstalled and reinstalled the app. Any suggestions?
Why is it that when I use the threat type with the Security Domain set to endpoint, it is always categorized as Threat, and the alert severity is always low? What is the problem? I hope for an answer.
My inputs.conf looks like this:

index = wineventlog
sourcetype = WinEventLog:Security
disabled = 0
whitelist = 1, 2, 3, 4, 5
blacklist1 = $XmlRegex="(?ms)<EventID>5156<\/EventID>.*<Data\sName='Application'>\\device\\harddiskvolume\d+\\program\sfiles\\splunkuniversalforwarder\\(bin\\splunkd\.exe|etc\\apps\\splunk_ta_stream\\windows_x86_64\\bin\\streamfwd\.exe)<.*<Data\sName='DestPort'>(9997|443|8000)<"
blacklist2 = $XmlRegex="(?ms)<EventID>5156<\/EventID>.*<Data\sName='DestAddress'>(127.0.0.1|::1|0:0:0:0:0:0:0:1|169.254.*?|fe80:.*?)<"
blacklist3 = $XmlRegex="(?ms)<EventID>4688<\/EventID>.*<Data\sName='NewProcessName'>C:\\Program Files\\SplunkUniversalForwarder\\(etc\\apps\\Splunk_TA_stream\\windows_x86_64\\bin\\streamfwd.exe|bin\\(splunk-powershell.exe|splunk-MonitorNoHandle.exe|splunk-netmon.exe|splunk-regmon.exe|splunkd.exe|btool.exe|splunk.exe|splunk-winevtlog.exe|splunk-admon.exe|splunk-perfmon.exe|splunk-winprintmon.exe|splunk-wmi.exe))<"

I confirmed that this config has been pushed to all forwarders, that the forwarders are using the Local System account, and that the firewall is not blocking anything. Despite this, the logs I am ingesting are unrelated to my explicit whitelist and are ~5% of what I am expecting to see. Any ideas?
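One thing worth double-checking on configs like the one above: $XmlRegex-style blacklists only match when the events are actually collected as XML. A hedged sketch of the stanza framing, assuming the (omitted) stanza header is the standard Security channel:

```
[WinEventLog://Security]
index = wineventlog
sourcetype = WinEventLog:Security
disabled = 0
renderXml = true       # without this, the (?ms)<EventID>... XML regexes have nothing to match
whitelist = 1-5        # range form; equivalent to listing 1,2,3,4,5
```

If renderXml is false (the default), the blacklists silently never fire, and whitelist/blacklist interaction can leave you with an unexpected subset of events. Verify the effective config on a forwarder with `splunk btool inputs list --debug`.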
My Splunk specs are:
- Search head & Monitoring Console on one server
- 3 indexers on separate servers
- Cluster Manager on a separate server
- License Manager & Deployment Server on one server

What is the proper process to shut everything down and bring everything back up for a power outage?
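A hedged sketch of one common ordering for an environment shaped like the one above — treat it as a checklist to adapt, not a definitive runbook:

```
# Shutdown (roughly reverse order of dependency):
#   1. Search head / Monitoring Console
#   2. License Manager / Deployment Server
#   3. On the Cluster Manager, pause fix-up activity before stopping peers:
splunk enable maintenance-mode
#   4. Stop each indexer, then the Cluster Manager itself:
splunk stop          # run on each host, in the order above

# Startup, in reverse:
#   Cluster Manager first, then indexers, then the utility hosts, then the SH:
splunk start         # run on each host
#   Once all peers have rejoined, on the Cluster Manager:
splunk disable maintenance-mode
```

The key points are putting the cluster into maintenance mode before stopping peers (so the manager does not start bucket fix-up mid-shutdown) and bringing the Cluster Manager up before its peers.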
Please can I get a query to set up an alert for when a scheduled job fails to run?
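A hedged starting point: the scheduler writes its outcomes to _internal, so a search along these lines can drive the alert (the exact status values vary a little by version, so inspect `| stats count by status` first):

```
index=_internal sourcetype=scheduler (status=skipped OR status=failure)
| stats count AS failures latest(_time) AS last_seen BY app, savedsearch_name, status
| convert ctime(last_seen)
```

Save it as an alert over a window matching your schedule cadence and trigger when the number of results is greater than zero.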
We have created what we call apps, which is just an inputs.conf file in a directory structure. We push that out to our clients. However, these do not create an entry in the Apps drop-down on the main screen. How can we convert them so that they do show on the drop-down menu under Apps? Our client wants that, and I'm not certain how to take what I already have and wrap it into a "real" application. For instance:

ourcompany_thisapp_forwarder
    local
        inputs.conf

That is our current structure on the deployment-client servers. It's small and does not do much but handle the inputs. Our new client wants it to really be an app with multiple types of folders for adding other data into Splunk. Does anyone know how to convert the "app" to a true app?
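A hedged sketch of the minimal additions that make a bare inputs directory show up in the Apps menu — the label and description are placeholders to adapt:

```
# ourcompany_thisapp_forwarder/
#     default/
#         app.conf      <-- new
#     local/
#         inputs.conf   <-- existing
#     metadata/
#         default.meta  <-- optional, for permissions/export

# default/app.conf
[ui]
is_visible = 1
label = OurCompany This App

[launcher]
description = Inputs and configuration for OurCompany data onboarding
version = 1.0.0
```

With is_visible = 1 the app appears in the drop-down; forwarder-only apps usually set is_visible = 0 instead, so you may want a visible search-head copy and an invisible deployed copy.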
Hi, I am working in a distributed environment with a SHC of 3 search heads, and I am mapping VPN logs to fill certain datasets of my custom version of the Authentication data model (not accelerated for the moment). The datasets I added to the default Authentication data model are "Failed_Authentication", "Successful_Authentication" and "Login_Attempt" (screenshot omitted).

Then, I created an eventtype (with some associated tags) to match specific conditions for an authentication success:

sourcetype=XX action=success signature IN ("Agent login","Login","Secondary authentication","Primary authentication") OR (signature="Session" AND action="success")

Then, I used the eventtype as a constraint for the dataset "Authentication.Successful_Authentication". To test whether the constraint is working: I used the pivot button offered by the GUI, and it returns some results. I also ran the following SPL in the search app, and it too returns some results:

index=vpn* tag=authentication eventtype=auth_vpn_success

However, if I try to retrieve the same information using the following tstats-based SPL, it returns no results:

| tstats summariesonly=f count from datamodel=Authentication where nodename=Authentication.Successful_Authentication

Even running another tstats-based SPL to retrieve the eventtypes of the Authentication data model returns no results:

| tstats count from datamodel=Authentication by eventtype

I tried to troubleshoot the issue with 2 different tests:
1. Not using the eventtype field as a dataset constraint.
2. Creating another eventtype and using a different data model (Change).

1) I created a dataset constraint for "Authentication.Failed_Authentication" which uses neither tags nor eventtypes:

action=failure

And both of the aforementioned tstats SPLs work now!
2) I created another eventtype related to a change log type:

index=vpn* sourcetype=XX AND "User Accounts modified."

And I added it as a constraint for the dataset "All_Changes.Account_Change". Running the 2 aforementioned tstats SPLs returns some results!

In conclusion, I suspect there is an issue related to either the tag=authentication (maybe some conflict with other default apps?) or the Authentication data model (related to the custom datasets I added?). Do you have any clue what I could have done wrong?

Kind regards, Z
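One thing worth checking for a tstats-only failure like the one above: eventtypes and tags referenced in data model constraints must be visible in the app context where the data model search runs. If the eventtype or its tags are private or scoped to a single app, ad-hoc searches can still match while `| tstats` silently returns nothing. A hedged metadata fragment, assuming the eventtype is named auth_vpn_success and lives in a custom app:

```
# $SPLUNK_HOME/etc/apps/<your_app>/metadata/local.meta
[eventtypes/auth_vpn_success]
export = system

[tags]
export = system
```

The same effect can be achieved in the UI by setting the eventtype's and tags' sharing to "Global".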
Hello, I have created separate server-down and server-up alerts: the down alert triggers when Percentile_80>=5 and the up alert when Percentile_80<5. But I want one combined alert that triggers every time the server is down, and only one up alert (a recovery alert) once the server is up again; it should not trigger multiple up alerts until the server goes down again. Any way to get this done? Below is the query. The time range is the last 15 minutes and the cron schedule is */2 * * * * (every 2 minutes):

index=xyz sourcetype=xyz host=*
| eval RespTime=time_taken/1000
| eval RespTime = round(RespTime,2)
| bucket _time span=2m
| stats avg(RespTime) as Average perc80(RespTime) as "Percentile_80" by _time
| eval Server_Status=if(Percentile_80>=5, "Server Down", "Server UP")

So the alert should trigger when the server is down, every 2 minutes, until it is up again. Then it should trigger only once when the server comes back up, and not every 2 minutes, until the server goes down again.
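A hedged sketch of one way to get that behaviour in a single alert: keep the per-bucket status, then use streamstats to see the previous bucket's status, and emit a row for every down bucket but only for the first up bucket after a down one. This assumes the 15-minute window always contains the preceding 2-minute bucket:

```
index=xyz sourcetype=xyz host=*
| eval RespTime=round(time_taken/1000,2)
| bucket _time span=2m
| stats perc80(RespTime) as Percentile_80 by _time
| eval Server_Status=if(Percentile_80>=5, "Server Down", "Server UP")
| streamstats current=f window=1 last(Server_Status) as Prev_Status
| where Server_Status="Server Down"
      OR (Server_Status="Server UP" AND Prev_Status="Server Down")
```

With the alert set to trigger when the result count is greater than zero, down buckets fire every run, while an up bucket fires only on the transition (the recovery), since subsequent up buckets have Prev_Status="Server UP" and are filtered out.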
Newbie here. Trying to get the results from the index to match results in the inputlookup, and to return only results from the index. I've been playing around with join, append, and appendcols, but cannot seem to get the results matching the index search. Need some guidance.

| inputlookup Assets
| appendcols [ search nt_host distinguishedName dns ]
  [ search index=win EventCode=4725 src_user="*" | eval user=replace(user,"[^[:word:]]","") ]
| eval user=nt_host
| stats count by src_user, EventCode, signature, user, nt_host, distinguishedName
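A hedged alternative shape that often fits this "filter index events by a lookup" pattern: search the index first, enrich each event from the lookup, and keep only events that matched. This sketch assumes the lookup's nt_host column should match the cleaned-up user field; adjust the join field to whatever actually links the two:

```
index=win EventCode=4725 src_user="*"
| eval user=replace(user,"[^[:word:]]","")
| lookup Assets nt_host AS user OUTPUT nt_host distinguishedName dns
| where isnotnull(distinguishedName)
| stats count by src_user, EventCode, signature, user, nt_host, distinguishedName
```

Unlike appendcols, which pastes rows together positionally, the lookup matches per event, so rows are never misaligned.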
My Splunk instance (Splunk Enterprise Server 9.0.8) is a standalone for demo purposes, hence it has only demo data. It should only show data for 'All time', as all the data are from 2022-2023. I saved the time input with 'All time' as the default, but on page load the whole dashboard shows no data. If I then select any other time option and select 'All time' again, the whole dashboard shows all the expected data. Please help me.
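One pattern that sometimes helps when an 'All time' default only takes effect after reselecting it: set the default explicitly in the dashboard source rather than relying on the saved preset, so the token is populated on first load. A hedged Simple XML sketch (the token name is illustrative):

```
<input type="time" token="time_tok" searchWhenChanged="true">
  <label>Time Range</label>
  <default>
    <earliest>0</earliest>
    <latest>now</latest>
  </default>
</input>
```

earliest=0 is the literal form of 'All time'; if the panels use $time_tok.earliest$/$time_tok.latest$, they should render immediately on page load.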
Why do I keep getting this error on ACS?

information on securing this, see https://docs.ansible.com/ansible-core/2.11/user_guide/become.html#risks-of-becoming-an-unprivileged-user
FAILED - RETRYING: Restart the splunkd service - Via CLI (60 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (59 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (58 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (57 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (56 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (55 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (54 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (53 retries left).
FAILED - RETRYING: Restart the splunkd service - Via CLI (52 retries left).
Hi, For a personal project I am using MongoDB Atlas and Splunk. I would like to ingest my logs from MongoDB Atlas into Splunk. Is there any documentation or method to achieve this? Thank you!
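For a personal project, a hedged do-it-yourself sketch: pull a log file through the Atlas Administration API (digest auth with a programmatic API key), then push it into a Splunk HTTP Event Collector (HEC) token. The group ID, hostname, Splunk host, and token below are all placeholders:

```
# Download a compressed mongod log from the Atlas Admin API
curl --user "<public-key>:<private-key>" --digest \
  -H "Accept: application/vnd.atlas.2023-02-01+gzip" \
  -o mongodb.gz \
  "https://cloud.mongodb.com/api/atlas/v2/groups/<group-id>/clusters/<hostname>/logs/mongodb.gz"
gunzip mongodb.gz

# Send the lines to Splunk's raw HEC endpoint
curl -k "https://<splunk-host>:8088/services/collector/raw?sourcetype=mongod" \
  -H "Authorization: Splunk <hec-token>" \
  --data-binary @mongodb
```

For something ongoing rather than one-off, it's also worth searching Splunkbase for a maintained MongoDB Atlas add-on before scripting it yourself.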
Hello, How can I get all the pod names with a query where the value is between 1.5 and 2.5? I can share a sample SignalFx query for better understanding (screenshot omitted). How can I write an equivalent Splunk query for this?
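A hedged sketch, assuming the data lives in a metrics index and the metric and dimension names below are placeholders for your actual ones:

```
| mstats latest(<metric_name>) AS value WHERE index=<metrics_index> BY pod_name
| where value >= 1.5 AND value <= 2.5
| table pod_name, value
```

If the data is in an event index instead, replace the mstats line with a regular search plus `stats latest(value) AS value by pod_name` and keep the same where clause.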
Can anyone suggest use cases for the Admin role?
Hello, I need to install the REST API Modular Input app, but I get this error (screenshot omitted).
Hi All, Hopefully someone can help with this. We have logs that contain JSON where one of the fields can have multiple groups/entries. I would like to unwind/expand the groups to have a separate output per line. I think I have to use the mvzip command, but I'm having issues with the syntax. Example data/query below:

| makeresults format=json Data="[ { \"event\": \"AGREEMENT_ACTION_COMPLETED\", \"participantUserEmail\": \"123456789@test.com\", \"agreement\": { \"id\": \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\", \"status\": \"OUT_FOR_SIGNATURE\", \"participantSetsInfo\": { \"participantSets\": [ { \"memberInfos\": [ { \"id\": \"abcdefg\", \"email\": \"abcdefg@test.com\", \"company\": null, \"name\": \"test o'test\", \"privateMessage\": null, \"status\": \"ACTIVE\" } ], \"order\": \"1\", \"role\": \"SIGNER\", \"status\": \"WAITING_FOR_OTHERS\", \"id\": \"abcdefg1234\", \"name\": null, \"privateMessage\": null }, { \"memberInfos\": [ { \"id\": \"hijklmno\", \"email\": \"hijklmno@test.com\", \"company\": null, \"name\": null, \"privateMessage\": null, \"status\": \"ACTIVE\" } ], \"order\": \"1\", \"role\": \"SIGNER\", \"status\": \"WAITING_FOR_MY_SIGNATURE\", \"id\": \"hijklmno1234\", \"name\": null, \"privateMessage\": null } ] }, \"documentsInfo\": null, \"agreementViewRequest\": null } }]"
| spath output=eventType path=event
| spath output=agreementId path=agreement.id
| spath output=agreementStatus path=agreement.status
| spath output=participantUserEmail path=participantUserEmail
| rename participantSets{}.membersInfos{}.email as memberEmail, participantSets{}.status as memberStatus
| table _time, agreementId, eventType, agreementStatus, participantUserEmail, memberEmail, memberStatus

I still see only one line of output, and the 'memberEmail' and 'memberStatus' fields show as blank, whereas I want to see one output line for every entry under the 'participantSets' field. Any help appreciated.
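A hedged sketch of the mvzip/mvexpand step for the sample JSON above. Two things differ from the query as posted: the rename references a path that does not exist in the data (the full path starts at agreement.participantSetsInfo, and the array is memberInfos, not membersInfos), and rename does not extract fields, so explicit spath calls are needed first. Replacing the rename onward with something like:

```
| spath output=memberEmail path=agreement.participantSetsInfo.participantSets{}.memberInfos{}.email
| spath output=memberStatus path=agreement.participantSetsInfo.participantSets{}.status
| eval zipped=mvzip(memberEmail, memberStatus, "|")
| mvexpand zipped
| eval memberEmail=mvindex(split(zipped, "|"), 0),
       memberStatus=mvindex(split(zipped, "|"), 1)
| table _time, agreementId, eventType, agreementStatus, participantUserEmail, memberEmail, memberStatus
```

mvzip pairs the two multivalue fields positionally, mvexpand gives one row per pair, and the split/mvindex evals unpack each pair back into separate fields, yielding one output line per participant set.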
Hi, I am running a search to get a count of IPs from yesterday and the last month.

index=<> source="/****" IP!="10.*" [| inputlookup ip_tracking.csv | rename MIDS AS MID | format ] earliest=-30d@d latest=now
| eval ReportKey="Last30Day"
| append [search index=<> source="/****" IP!="10.*" [| inputlookup ip_tracking.csv | rename MIDS AS MID | format ] earliest=-1d@d latest=@d | eval ReportKey="yesterday"]
| eval Day=if(_time<=relative_time(now(),"-30d@d"),"yesterday","Last30Day")
| stats count(eval(Day="yesterday")) AS yesterday count(eval(Day="Last30Day")) AS Last30Day BY IP

This search gives me all results for the month but nothing for yesterday. Can you help me correct the query?
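A hedged observation on the query above: the eval labels an event "yesterday" only when _time<=relative_time(now(),"-30d@d"), i.e. when it is older than 30 days, which no event in the window is, so the yesterday count is always zero. Since the 30-day window already contains yesterday, the append is unnecessary; one search with a corrected condition can produce both counts:

```
index=<> source="/****" IP!="10.*"
    [| inputlookup ip_tracking.csv | rename MIDS AS MID | format ]
    earliest=-30d@d latest=now
| eval is_yesterday=if(_time>=relative_time(now(),"-1d@d") AND _time<relative_time(now(),"@d"), 1, 0)
| stats count AS Last30Day sum(is_yesterday) AS yesterday BY IP
```

Every event counts toward Last30Day, and only events whose _time falls inside yesterday's midnight-to-midnight span count toward yesterday.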
I have a problem with QRadar and transformation of logs (Trend Micro). I forwarded the logs in raw format from a Splunk HF to QRadar. I'm facing a problem with the header of the events on QRadar: they have a double hostname and timestamp (date). I tried to define syslogSourceType = sourcetype::<sourcetype>, but the same occurs; they are still doubled. Is there a way to solve this problem, please? I've been trying for a week to solve this issue. Thanks.
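A hedged guess at the cause: a Splunk [syslog] output prepends its own syslog header (priority, timestamp, host) to _raw, so if the Trend Micro events already carry a syslog header, QRadar sees two. One sketch of a fix is to strip the original header on the HF before the syslog output adds its own; the stanza name, host, and regex below are illustrative and need adapting to your actual event format:

```
# outputs.conf on the HF
[syslog:qradar]
server = <qradar-host>:514
type = udp
syslogSourceType = sourcetype::<your_trendmicro_sourcetype>

# props.conf on the HF -- strip the pre-existing syslog header from _raw
[<your_trendmicro_sourcetype>]
SEDCMD-strip_syslog_header = s/^<\d+>\S+\s+\S+\s+//
```

Check a few raw events first to confirm what the existing header actually looks like before writing the SEDCMD.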