All Topics

Before creating my own set of knowledge objects to get information on user activity, especially around searches, I decided to see what else was out there. I stumbled across the Search Activity app, which seemingly has pretty much everything I am looking for. However, it isn't working in my Splunk Enterprise 8.2 environment: most dashboards don't populate data, and it will not use the SA-ldapsearch add-on that is installed, configured, and working properly. My guess is that the app is no longer supported (no updates since 2019). Is anyone successfully using the app in Splunk 8.x? Any other recommendations for a similar app that may exist? The only other thing I found was the User Monitoring for Splunk app, which has some of the things I am looking for, but the data it reports doesn't seem to be complete, which may just require some tweaking. Curious what others may be using, if anything, to gain insight into Splunk user activity, especially as it pertains to user search behavior.
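If it helps anyone looking at the same problem: even without an app, the _audit index can answer a lot of the search-behaviour questions directly. A minimal sketch, assuming your role is allowed to read _audit:

```spl
index=_audit action=search info=completed
| stats count AS searches, avg(total_run_time) AS avg_runtime_secs BY user
| sort - searches
```

info=completed restricts this to finished searches; drop it to also see granted/cancelled search events.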
Hello All, I just inherited a Splunk deployment at a new job and have no prior knowledge of Splunk; you could say I'm in the "Splunk for Dummies" category. What I know is that we have the Splunk Universal Forwarder installed on our domain controllers and other important servers. Could someone assist me in creating alerts for the following?

- Excessive Login Failures
- Account Added to Security-Enabled Group
- Event Logs Cleared
- Detect Excessive Account Lockouts from Endpoint
- Short-Lived Windows Accounts
- Windows User Account Created/Deleted
- Unclean Malware Detected
- Disk Utilization Over 95%

Thank you very much in advance.
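As a hedged starting point for the first item: excessive login failures on Windows correspond to Security EventCode 4625. The index and sourcetype names below are assumptions; adjust them to whatever your Universal Forwarders actually write to:

```spl
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625 earliest=-15m
| stats count BY user, src
| where count > 5
```

Saved as an alert on a 15-minute schedule, this fires when any user/source pair exceeds five failures. Several of the other items follow the same pattern with their respective EventCodes (e.g. 1102 for Event Logs Cleared, 4720/4726 for account created/deleted).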
I am trying to get our add-on, which was developed for standalone Splunk, to work in a SHC environment. The add-on takes input from the user in a setup view and saves the configuration values via a custom endpoint using the Splunk JS SDK. When Setup is run on a standalone instance, we get custom fields from the system we are connecting to and create the modular alert HTML using the custom REST endpoint (also stored in /data/ui/alert/sa_myapp.html).

Is there a way to replicate the modular alert HTML across the search head cluster members when running Setup from the deployer? As far as I can tell, Setup needs to be run on each search head member to generate the HTML for that node, and this conflicts with SHC best practices, where Setup is run only on the deployer, which pushes the conf files to the SHC members. Setup may need to be rerun for the add-on if custom fields are added or deleted in the system we are connecting to, in order to change the HTML used for mapping the fields between Splunk and our system.

Is there a solution so that Setup only needs to be run on the deployer? How can I replicate the HTML across the cluster members? In my investigation, the file /data/ui/alert/sa_myapp.html is not replicated across the search heads. If Setup is run on each search head cluster member, the HTML is generated. It is my understanding that Setup should not be run on the SHC members but only on the deployer. Can Setup run on the deployer and post to the custom endpoint on each SHC member?
Please share the process of adding an .xlsx file to a Lookup list in Splunk Enterprise. Thank you a bunch.
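Splunk lookup table files must be CSV (or gzipped CSV / KMZ), so the usual process is: open the .xlsx in Excel, Save As > CSV (Comma delimited), then upload the CSV via Settings > Lookups > Lookup table files > Add new, and create a lookup definition pointing at it. You can then verify the upload (my_lookup.csv is a placeholder name):

```spl
| inputlookup my_lookup.csv
```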
I have already set up new data sources in Splunk that bring in CIM-compliant data (from Sophos and Cisco Meraki). Is there a way that I can link them to the InfoSec App? I didn't originally set up the InfoSec app, so I am unsure how data gets tied to it. Most documentation online is unhelpful for adding new data sources to the InfoSec App unless it's one of the very few listed in the data onboarding guides in the Splunk Security Essentials app. Any help or references to actually helpful documentation on adding a new data source would be greatly appreciated.
When mean & avg are both present on a "stats" search, the first one in order will be missing. So:

| makeresults count=100 | eval Value=random() % 100 | stats count(Value) AS Count avg(Value) AS Avg mean(Value) AS Mean

results in:

Count  Avg    Mean
100    49.49

While:

| makeresults count=100 | eval Value=random() % 100 | stats count(Value) AS Count mean(Value) AS Mean avg(Value) AS Avg

results in:

Count  Mean   Avg
100    43.78

So why is one of the values missing?

John W.
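For what it's worth, the Splunk stats-functions documentation treats mean() as a synonym of avg(), which would explain why the two aggregations collide and only one column is populated. If both column names are genuinely needed, one workaround is to compute the value once and copy it with eval:

```spl
| makeresults count=100
| eval Value=random() % 100
| stats count(Value) AS Count avg(Value) AS Avg
| eval Mean=Avg
```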
I'm configuring our Jira add-on to connect with our Jira Software Cloud instance. I'm using an API Token instead of a Personal Access Token, yet it is still not communicating with our Jira Cloud. Would the URL configured in the add-on be the same URL we use to access our Jira Cloud?
| makeresults | eval _raw="!!! --- HUB ctxsdc1cvdi013.za.sbicdirectory.com:443 is unavailable --- !!! user='molefe_user' password='molefe' quota='user' host='002329bvpc123cw.branches.sbicdirectory.com' port='443' count='1' !!! --- HUB 002329bvpc123cw.branches.sbicdirectory.com:443 is unavailable --- !!! host='005558bvpc5ce4w.za.sbicdirectory.com' port='443' count='1' !!! --- HUB 005558bvpc5ce4w.za.sbicdirectory.com:443 is unavailable --- !!! host='41360jnbpbb758w.za.sbicdirectory.com' port='443' count='1' !!! --- HUB 41360jnbpbb758w.za.sbicdirectory.com:443 is unavailable --- !!! host='48149jnbpbb041w.za.sbicdirectory.com' port='443' count='1' !!! --- HUB 48149jnbpbb041w.za.sbicdirectory.com:443 is unavailable --- !!! user='pips_lvl_one_user' password='pips_lvl_one' quota='user'" | rex "!!! --- HUB (?[^:]*):\d+\s(?[^-]*).*?password='(?[^']*)"   I'm running the above rex command on Splunk; it works when I use it with the "makeresults" command, but when I use it in my search it doesn't bring back any results.
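The named capture groups appear to have been stripped from the rex above (the angle brackets were likely eaten when posting). A sketch with hypothetical group names, with max_match=0 added so every HUB block in the event is captured rather than just the first:

```spl
| rex max_match=0 "!!! --- HUB (?<hub_host>[^:]*):\d+\s(?<hub_detail>[^-]*).*?password='(?<hub_password>[^']*)'"
```

If the real events differ from the makeresults sample (different line breaking, or rex running against a field other than _raw), the same pattern can match the sample yet fail on the indexed data, which would explain the empty results.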
I have knowledge objects in my custom apps which are created & managed in /default by manually uploading to Splunk Cloud and installing. This causes me a couple of problems:

1. Even though they have write perms in default.meta for sc_admin only, users with other roles can change the knowledge objects through the UI - for example, they can disable a savedsearch. Presumably this creates a new copy in /local, which means that my perms from default.meta no longer apply because new perms are written in local.meta. Am I correct in my assessment, and if so, what is the point of write perms?

2. Once the user has created a /local copy of the savedsearch by changing or disabling it, there is a lock or conflict situation: the UI /local version always gets precedence, and because there is also a version in /default, I can no longer see a delete option for the UI version. So I am stuck with the UI version forever. In other words, the person with zero perms wins over the sc_admin.

The only ways I have found to get out of this situation are (a) ask Splunk CloudOps to delete the files from /local, which takes 3 days, or (b) rename all of the savedsearches in /default, upload and install the app, manually delete the versions that the user created in the UI, rename the /default versions back again, and upload/install the app a second time.

Am I missing something in terms of a better way to rectify things when this happens, and why might this be the correct Splunk behaviour?

Thanks in advance
Ian
When I test the regexes in both regex101 and with the rex command in the search bar, they parse out the fields correctly. Now that I have added them to props.conf on the search head, they are capturing extra information.

The Result field is the one that is mainly capturing the SessionID, when the capture should be Verified or Failed.

Thank you all for your help with this.

props.conf:

[exp_test]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
CHECK_FOR_HEADER = false
CHARSET = AUTO
EXTRACT-SessionID = (?<=SessionID:)(?P<SessionID>.+)
EXTRACT-Result = \VerificationResult:(?P<Result>.+)
EXTRACT-UserName = (?<=User:)(?P<UserName>.+)
EXTRACT-Response_1 = (?<=Response_1:)(?P<Response_1>.+)
EXTRACT-Response_2 = (?<=Response_2:)(?P<Response_1>.+)

Sample Log:

Time: 13-09-2021 10:08:19
VerificationResult: Failed
SessionID: K3K2N2G3JPSOZNOWJFOMFPBP.pidd1v-210913090809460797217
User: LAST, FIRST
13-09-2021 10:10:18 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:10:19 Response_1: 1st response received! for User: LAST, FIRST
Time: 13-09-2021 10:10:19
SessionID and User Mapping:
SessionID: 3EV6PLCHK795Z8FQBKKYS3Z3.pidd2v-210913091018537820706
User: LAST, FIRST
13-09-2021 10:15:13 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:15:14 Response_1: 1st response received! for User: LAST, FIRST
Time: 13-09-2021 10:15:14
SessionID and User Mapping:
SessionID: GAWJ1C7ZWNAWCVTEEIWGE3LL.pidd2v-210913091513558630064
User: LAST, FIRST
13-09-2021 10:15:33 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:15:33 Response_1: 1st response received! for User: LAST, FIRST
13-09-2021 10:15:38 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:15:39 Response_1: 1st response received! for User: LAST, FIRST
Time: 13-09-2021 10:15:39
SessionID and User Mapping:
SessionID: 2SYZV3QHCZKYM2YTYIJLVL3E.pidd2v-210913091538460803649
User: LAST, FIRST
13-09-2021 10:15:47 Response_1: 2nd request sent! for the user verification SessionID: 2SYZV3QHCZKYM2YTYIJLVL3E.pidd2v-210913091538460803649
13-09-2021 10:15:48 Response_1: 2nd response received! for user verification SessionID: 2SYZV3QHCZKYM2YTYIJLVL3E.pidd2v-210913091538460803649
Time: 13-09-2021 10:15:48
VerificationResult: Verified
SessionID: 2SYZV3QHCZKYM2YTYIJLVL3E.pidd2v-210913091538460803649
User: LAST, FIRST
13-09-2021 10:16:47 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:16:48 Response_1: 1st response received! for User: LAST, FIRST
Time: 13-09-2021 10:16:48
SessionID and User Mapping:
SessionID: D5JVVUR3AAKFURITHCI993H9.pidd2v-210913091647448944771
User: LAST, FIRST
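Two likely causes stand out in the stanza above: the greedy .+ captures run to the end of the matched region (so Result can swallow everything after VerificationResult:, including the SessionID, when events break differently than expected), and EXTRACT-Response_2 names its capture group Response_1, overwriting the other extraction. A tightened sketch, keeping the same field names but bounding each pattern (test against your real events before deploying):

```
EXTRACT-Result     = VerificationResult:\s*(?<Result>\w+)
EXTRACT-SessionID  = SessionID:\s*(?<SessionID>\S+)
EXTRACT-UserName   = User:\s*(?<UserName>[^\r\n]+)
EXTRACT-Response_1 = Response_1:\s*(?<Response_1>[^\r\n]+)
EXTRACT-Response_2 = Response_2:\s*(?<Response_2>[^\r\n]+)
```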
I need an SPL query to review the time zone on my Splunk instances, please. Is it important for these TZs to be consistent with the time zones on all the FWs? Should I really care whether time zones are right on the FWs? Thank you in advance.
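Splunk stores _time internally in UTC, so what usually matters is that each source's time zone is interpreted correctly at parse time; otherwise events land minutes or hours away from their index time. One common sanity check is to compare event time with index time per host, since persistent offsets near whole-hour multiples usually indicate a TZ misconfiguration:

```spl
index=_internal earliest=-1h
| eval lag_secs = _indextime - _time
| stats avg(lag_secs) AS avg_lag_secs BY host
| where abs(avg_lag_secs) > 1800
```

The same search run against the indexes your FWs feed into will flag sources whose TZ parsing needs attention.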
I used the Azure/Splunk Enterprise deployment to set up Splunk on my Azure instance. I then did this:

1. Settings > Show All Settings
2. Created an index via Settings > Indexes (Type: Events, ensured it is enabled)
3. Created an HTTP Event Collector via Settings > Data Inputs > HTTP Event Collector
4. Attempted to run a curl against the HEC Public IP Address Azure resource that was created

I get: {"text":"Invalid token","code":4}

Based on what I was reading, I need to push the change out to the indexers. So here are my questions:

1. Can I do that through the UI?
2. Do I need to update each of the indexers manually?
3. Is there an alternative location for setting this up that I am missing?

Thanks for helping a newbie!
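"Invalid token" from HEC usually means the instance that received the request does not know the token, which fits a distributed Azure deployment where the token was created on the search head but the load balancer forwards traffic to the indexers. As far as I know, the token has to exist on whichever instances actually receive HEC traffic (via an [http://<name>] stanza in inputs.conf, pushed by hand or by a deployment server); the search head UI does not propagate it. To see which tokens a given instance knows about, run this locally on that instance:

```spl
| rest /services/data/inputs/http
| table title, token, disabled, index
```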
Hi Everybody, my requirement here is to create alerts for JVM logs. We are trying to create alerts for "Heap Memory Usage" and "Deadlock Threads", but we are unable to find the events. What type of events should we be getting for "Heap Memory Usage" and "Deadlock Threads", and is there any particular app to monitor JVM logs?
INFO [monki_HMCatalogSyncJob::de.hybris.platform.servicelayer.internal.jalo.ServicelayerJob] -[J= U= C=] (monki) (0000VVDK) [CatalogVersionSyncJob] Finished synchronization in 0d 00h:01m:33s:630ms. No errors.
Hi, please tell me how to write the query for a range of IP addresses, such as: src!=10.0.0.0/8 to src!=10.24.1.3
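If the goal is to exclude (or match) a CIDR range, cidrmatch is the reliable way; a plain src!="10.0.0.0/8" comparison treats the CIDR as a literal string rather than a network range. A sketch (the index name is a placeholder):

```spl
index=main
| where NOT cidrmatch("10.0.0.0/8", src)
```

For an arbitrary start/end range rather than a single CIDR block, convert the addresses to numbers and compare, or combine several cidrmatch calls that cover the range.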
I have a field timeofevent which contains the time at which the event was logged, in 24-hour format. Format of timeofevent: HH:MM. I want only the events which were logged between 18:30 and 08:30 CST.
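Since timeofevent is an HH:MM string, one way is to convert it to minutes past midnight and keep the overnight window with an OR. This assumes timeofevent is already expressed in CST; 18:30 is 1110 minutes and 08:30 is 510:

```spl
| eval mins = tonumber(mvindex(split(timeofevent, ":"), 0)) * 60 + tonumber(mvindex(split(timeofevent, ":"), 1))
| where mins >= 1110 OR mins <= 510
```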
Hello all,

I am trying to extract only the highlighted portion from the below events; however, I am failing to extract it. Can you please help me here?

"Error","","/Example/JP1/NTEVENT_LOGTRAP/Oracle.persona","LOGFILE","NTEVENTLOG","LOGFILE","NTEVENTLOG","","","","","",9,"A0","1630500097","A1","PSD067","A2","Application","A3","Error","A4","None","A5","20","A6","N/A" "Error","","/Example/JP1/NTEVENT_LOGTRAP/Microsoft-Windows-Kerberos-Key-Distribution-Center","LOGFILE","NTEVENTLOG","LOGFILE","NTEVENTLOG"

Thank you
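The highlighting didn't survive posting, so this is a guess: assuming the part wanted is the segment after NTEVENT_LOGTRAP/ (e.g. Oracle.persona), a rex like this would pull it into a hypothetical field named component:

```spl
| rex "NTEVENT_LOGTRAP/(?<component>[^\"]+)"
```

If a different column is the target, the same approach applies: anchor on the nearest literal text before it and capture up to the closing quote.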
Hi, how can we send ES notable events from a clustered setup to a standalone indexer?
So my log lines look something like this:

<<METRIC-START>>{"A":332,"B":45,"C":67,"D":23,"E":234,"F":435,"G":43,"H":66,"I":32,"J":67,"K":21,"L":678,"M":45,"N":56}<<METRIC-END>>

It is in the form of JSON, and I am able to extract the fields along with time using this:

| rex field=line "(?<=<<METRIC-START>>)(?<importMetrics>.*)(?=<<METRIC-END>>)" | spath input=importMetrics

Now I wish to plot A, B, C, D as timecharts, so I have to give this command:

| timechart span=1h max(A) as A, max(B) as B....till Z

The whole query works fine, but I wanted to know if there is any shorter way of doing it:

| rex field=line "(?<=<<METRIC-START>>)(?<importMetrics>.*)(?=<<METRIC-END>>)" | spath input=importMetrics | timechart span=1h max(A) as A, max(B) as B....till Z
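There is a shorter way: timechart (like stats) accepts wildcards in both the function argument and the rename, so all the extracted metric fields can be aggregated in one clause:

```spl
| rex field=line "(?<=<<METRIC-START>>)(?<importMetrics>.*)(?=<<METRIC-END>>)"
| spath input=importMetrics
| timechart span=1h max(*) AS *
```

Note that max(*) will aggregate every numeric field present at that point in the pipeline, so narrow with | fields first if other fields exist.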
We have a requirement to collect logs using client certificate (mTLS) authentication, and we are using the Splunk HTTP Event Collector endpoint along with a token and client certs to achieve this. To extend this TLS support, we would like to know whether there is any way to update the .conf files to support multiple server-side certificates that can be used for Server Name Indication (SNI), by which a client indicates which hostname it is attempting to connect to. Has anyone tried a similar approach before? Any other suggestions for our solution would be much appreciated! Thank you. Amit R. S