All Topics


I have knowledge objects in my custom apps which are created and managed in /default, by manually uploading the apps to Splunk Cloud and installing them. This causes me a couple of problems:

1. Even though the objects have write permissions in default.meta for sc_admin only, users with other roles can change the knowledge objects through the UI - for example, they can disable a saved search. Presumably this creates a new copy in /local, which means that my permissions from default.meta no longer apply because new permissions are written in local.meta. Am I correct in my assessment, and if so, what is the point of write permissions?

2. Once the user has created a /local copy of the saved search by changing or disabling it, there is a lock or conflict situation: the /local version created through the UI always gets precedence, and because there is also a version in /default I can no longer see a delete option for the UI version. So I am stuck with the UI version forever. In other words, the person with zero permissions wins over the sc_admin. The only ways I have found to get out of this situation are (a) to ask Splunk CloudOps to delete the files from /local, which takes 3 days, or (b) to rename all of the saved searches in /default, upload and install the app, manually delete the versions that the user created in the UI, rename the /default versions back again, and upload and install the app a second time.

Am I missing something in terms of a better way to rectify things when this happens, and why might this be the correct Splunk behaviour? Thanks in advance, Ian
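A hedged sketch for auditing the current state from the search bar; "your_app_name" is a placeholder. Note that REST returns the merged (effective) configuration, so this shows who owns each saved search and what the effective write permissions are, but not which layer (/default or /local) each setting came from:

| rest /servicesNS/-/-/saved/searches
| where 'eai:acl.app'="your_app_name"
| table title disabled eai:acl.owner eai:acl.sharing eai:acl.perms.write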
When I tested the regexes, both in regex101 and using the rex command in the search bar, they parsed out the fields correctly. Now that I have added them to props.conf on the search head, they are capturing extra information.

The Result field is the main offender: it is capturing the SessionID as well, when the capture should just be Verified or Failed.

Thank you all for your help with this.

props.conf:

[exp_test]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
CHECK_FOR_HEADER = false
CHARSET = AUTO
EXTRACT-SessionID = (?<=SessionID:)(?P<SessionID>.+)
EXTRACT-Result = \VerificationResult:(?P<Result>.+)
EXTRACT-UserName = (?<=User:)(?P<UserName>.+)
EXTRACT-Response_1 = (?<=Response_1:)(?P<Response_1>.+)
EXTRACT-Response_2 = (?<=Response_2:)(?P<Response_1>.+)

Sample log:

Time: 13-09-2021 10:08:19 VerificationResult: Failed SessionID: K3K2N2G3JPSOZNOWJFOMFPBP.pidd1v-210913090809460797217 User: LAST, FIRST
13-09-2021 10:10:18 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:10:19 Response_1: 1st response received! for User: LAST, FIRST
Time: 13-09-2021 10:10:19 SessionID and User Mapping: SessionID: 3EV6PLCHK795Z8FQBKKYS3Z3.pidd2v-210913091018537820706 User: LAST, FIRST
13-09-2021 10:15:13 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:15:14 Response_1: 1st response received! for User: LAST, FIRST
Time: 13-09-2021 10:15:14 SessionID and User Mapping: SessionID: GAWJ1C7ZWNAWCVTEEIWGE3LL.pidd2v-210913091513558630064 User: LAST, FIRST
13-09-2021 10:15:33 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:15:33 Response_1: 1st response received! for User: LAST, FIRST
13-09-2021 10:15:38 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:15:39 Response_1: 1st response received! for User: LAST, FIRST
Time: 13-09-2021 10:15:39 SessionID and User Mapping: SessionID: 2SYZV3QHCZKYM2YTYIJLVL3E.pidd2v-210913091538460803649 User: LAST, FIRST
13-09-2021 10:15:47 Response_1: 2nd request sent! for the user verification SessionID: 2SYZV3QHCZKYM2YTYIJLVL3E.pidd2v-210913091538460803649
13-09-2021 10:15:48 Response_1: 2nd response received! for user verification SessionID: 2SYZV3QHCZKYM2YTYIJLVL3E.pidd2v-210913091538460803649
Time: 13-09-2021 10:15:48 VerificationResult: Verified SessionID: 2SYZV3QHCZKYM2YTYIJLVL3E.pidd2v-210913091538460803649 User: LAST, FIRST
13-09-2021 10:16:47 Response_1: 1st reqest Sent! for User: LAST, FIRST
13-09-2021 10:16:48 Response_1: 1st response received! for User: LAST, FIRST
Time: 13-09-2021 10:16:48 SessionID and User Mapping: SessionID: D5JVVUR3AAKFURITHCI993H9.pidd2v-210913091647448944771 User: LAST, FIRST
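Since the greedy .+ in each extraction runs to the end of the event, and the Response_2 extraction above reuses the group name Response_1, a hedged sketch of tighter patterns that could be verified with rex in the search bar before moving them into props.conf (the assumptions are that Result is always a single word, SessionID tokens contain no whitespace, and user names are two comma-separated words):

sourcetype=exp_test
| rex "VerificationResult:\s*(?<Result>\w+)"
| rex "SessionID:\s*(?<SessionID>\S+)"
| rex "User:\s*(?<UserName>\w+, \w+)"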
I need an SPL query to review the time zone on my Splunk instances, please. Is it important for these time zones to be consistent with the time zones on all the FWs? Should I really care about the time zones being right on the FWs? Thank you in advance.
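A hedged sketch of one common check, assuming the sources send data in near-real-time: compare each event's parsed timestamp with the time it was indexed. Hosts whose average lag is close to a whole number of hours usually have a time zone misconfiguration on the sending side (the 1800-second threshold is just an illustration value):

index=_internal earliest=-60m
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) as avg_lag by host
| where abs(avg_lag) > 1800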
I used the Azure/Splunk Enterprise deployment to set up Splunk on my Azure instance. I then did this:

1. Settings > Show All Settings
2. Created an index via Settings > Indexes (Type: Events, ensured it is enabled)
3. Created an HTTP Event Collector via Settings > Data Inputs > HTTP Event Collector
4. Attempted to run a curl against the HEC public IP address Azure resource that was created

I get: {"text":"Invalid token","code":4}

Based on what I was reading, I need to push the change out to the indexers. So here are my questions:

1. Can I do that through the UI?
2. Do I need to update each of the indexers manually?
3. Is there an alternative location for setting this up that I am missing?

Thanks for helping a newbie!
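A hedged sketch for checking, from the search UI, which instances actually know about the HEC token ("Invalid token" typically means the instance answering the request has no matching token; this assumes your role is allowed to query the REST endpoints on all peers):

| rest splunk_server=* /services/data/inputs/http
| table splunk_server title token disabled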
Hi everybody, my requirement here is to create alerts for JVM logs. We are trying to create alerts for "Heap Memory Usage" and "Deadlock Threads", but we are unable to find the events. What type of events should we be getting for "Heap Memory Usage" and "Deadlock Threads", and is there any particular app to monitor JVM logs?
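A hedged sketch of what such an alert search could look like once JVM metrics are being indexed; the sourcetype jmx and the fields heap_used, heap_max, and deadlocked_threads are hypothetical placeholders here, not field names guaranteed by any particular add-on:

sourcetype=jmx
| eval heap_pct = round(heap_used / heap_max * 100, 1)
| where heap_pct > 90 OR deadlocked_threads > 0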
INFO [monki_HMCatalogSyncJob::de.hybris.platform.servicelayer.internal.jalo.ServicelayerJob] -[J= U= C=] (monki) (0000VVDK) [CatalogVersionSyncJob] Finished synchronization in 0d 00h:01m:33s:630ms. No errors.
Hi, please tell me how to write a query that filters on a range of IP addresses, for example excluding src addresses in 10.0.0.0/8, or in a range such as 10.0.0.0 to 10.24.1.3.
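A hedged sketch, assuming src is an already-extracted field and "your_search" stands for your base search. cidrmatch handles the CIDR case; for an arbitrary start/end range, one option is converting the address to a number (167772160 is 10.0.0.0 and 169345283 is 10.24.1.3 in numeric form):

your_search
| where NOT cidrmatch("10.0.0.0/8", src)

your_search
| eval parts = split(src, ".")
| eval ipnum = tonumber(mvindex(parts,0))*16777216 + tonumber(mvindex(parts,1))*65536 + tonumber(mvindex(parts,2))*256 + tonumber(mvindex(parts,3))
| where NOT (ipnum >= 167772160 AND ipnum <= 169345283)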
I have a field timeofevent which contains the time at which the event was logged in 24 hour format. Format of timeofevent: HH:MM I want only the events which were logged between 18:30 to 08:30 CST.
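A hedged sketch, assuming timeofevent is always zero-padded HH:MM (so plain string comparison is safe) and that "between 18:30 and 08:30" means the overnight window wrapping midnight; note this compares the field exactly as logged and does no time zone conversion:

your_search
| where timeofevent >= "18:30" OR timeofevent <= "08:30"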
Hello all,

I am trying to extract only the highlighted part from the below events; however, I am failing to extract it. Can you please help me here?

"Error","","/Example/JP1/NTEVENT_LOGTRAP/Oracle.persona","LOGFILE","NTEVENTLOG","LOGFILE","NTEVENTLOG","","","","","",9,"A0","1630500097","A1","PSD067","A2","Application","A3","Error","A4","None","A5","20","A6","N/A"
"Error","","/Example/JP1/NTEVENT_LOGTRAP/Microsoft-Windows-Kerberos-Key-Distribution-Center","LOGFILE","NTEVENTLOG","LOGFILE","NTEVENTLOG"

Thank you
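A hedged sketch, assuming the highlighted value is the path segment that follows NTEVENT_LOGTRAP/ (Oracle.persona and Microsoft-Windows-Kerberos-Key-Distribution-Center in the samples); the field name component is just an illustration:

your_search
| rex "NTEVENT_LOGTRAP/(?<component>[^\"]+)\""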
Hi, how can we send ES notable events from a cluster setup to a standalone indexer?
So my log lines look something like this:

<<METRIC-START>>{"A":332,"B":45,"C":67,"D":23,"E":234,"F":435,"G":43,"H":66,"I":32,"J":67,"K":21,"L":678,"M":45,"N":56}<<METRIC-END>>

It is in the form of JSON, and I am able to extract the fields along with time using this:

| rex field=line "(?<=<<METRIC-START>>)(?<importMetrics>.*)(?=<<METRIC-END>>)"
| spath input=importMetrics

Now I wish to plot A, B, C, D as timecharts, so I have to give this command:

| timechart span=1h max(A) as A, max(B) as B....till Z

The whole query works fine, but I wanted to know if there is any shorter way of doing it:

| rex field=line "(?<=<<METRIC-START>>)(?<importMetrics>.*)(?=<<METRIC-END>>)"
| spath input=importMetrics
| timechart span=1h max(A) as A, max(B) as B....till Z
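A hedged sketch using the wildcard form of aggregate functions; the fields command restricts the event to the metric keys so that max(*) only aggregates those (internal fields such as _time are kept automatically):

| rex field=line "(?<=<<METRIC-START>>)(?<importMetrics>.*)(?=<<METRIC-END>>)"
| spath input=importMetrics
| fields A B C D E F G H I J K L M N
| timechart span=1h max(*) as *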
We have a requirement to collect logs using client certificate (mTLS) authentication, and we are using the Splunk HTTP Event Collector endpoint along with a token and client certs to achieve this. As an extension to this TLS support, we would like to know whether there is any way to update the .conf files to support multiple server-side certificates, which could be used for Server Name Indication (SNI), by which a client indicates which hostname it is attempting to connect to. Has someone tried a similar approach before? Any other suggestions for our solution would also be much appreciated! Thank you. Amit R. S
Hello, I am after a way of testing connectivity and reliability between my search heads and indexers in a cluster, as I am seeing some errors in the remote searches log for scheduled searches: some nodes show 'does not exist' errors.
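A hedged sketch of two quick checks that can be run from a search head; the first confirms the search head can reach each indexer's REST interface, and the second confirms each indexer is actually returning results for distributed searches:

| rest splunk_server=* /services/server/info
| table splunk_server version server_roles

| tstats count where index=_internal by splunk_server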
I would have to move my custom correlation rules to a custom TA-foo app. My correlation searches comprise: custom rules created from scratch (all across the apps estate - yup, it's a mess), and a few of the OOB CRs from the DA-ESS-, SA-, TA-, Splunk_SA_, Splunk_TA_, and Splunk_DA-ESS_ apps that were modified as per my requirements.

Are there any best practices/recommendations that I need to consider other than:

1. Add import = TA-foo in local.meta in <Splunk_HOME>/etc/apps/SplunkEnterpriseSecuritySuite/metadata
2. Add request.ui_dispatch_app = SplunkEnterpriseSecuritySuite in savedsearches.conf for each of the correlation searches that I migrate

PS: I will also migrate the dependent KOs (macros/lookups etc.) in a similar fashion to the TA-foo add-on. Is there any other better way to go about it, just to be future-safe for upgrades, so that I have a single touchpoint rather than running after optimisations in each app after any activity such as a version upgrade?

Splunk version 7.3.0
ES version 5.3.1
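Before migrating, a hedged sketch for inventorying which correlation searches exist and which app each currently lives in (action.correlationsearch.enabled is how ES flags correlation searches in savedsearches.conf on ES 4.6 and later; worth verifying against your ES 5.3.1 install):

| rest /servicesNS/-/-/saved/searches
| search action.correlationsearch.enabled=1
| table title eai:acl.app eai:acl.owner eai:acl.sharing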
I am parsing SFTP logs of file downloads and want to count how many bytes a specific user downloaded at what time. The logs look like this (I am abbreviating the standard rfc5424 syslog prefix):

session opened for local user XXX from [10.#.#.#]
received client version #
open "/some/file/name" flags READ mode 0666
close "/some/file/name" bytes read ### written #
open "/some/other/file/name" flags READ mode 0666
close "/some/other/file/name" bytes read ### written #
open "/and/another/filename" flags READ mode 0666
close "/and/another/filename" bytes read ### written #
session closed for local user XXX from [10.#.#.#]

I want to somehow show how many bytes a specific user downloaded at what time. I start by inline extraction of a few extra helper fields, like the username and the file sizes:

appname=sftp-server
| rex field=_raw "session (opened|closed) for local user (?<sftp_user>[^ ]+) from"
| rex field=_raw "close \"(?<sftp_filename>.*)\" bytes read (?<sftp_bytes_read>\d+)"

If I wanted to see how much data was downloaded (without caring about which user), I would just do a timechart, which does the trick:

appname=sftp-server
| rex field=_raw "session (opened|closed) for local user (?<sftp_user>[^ ]+) from"
| rex field=_raw "close \"(?<sftp_filename>.*)\" bytes read (?<sftp_bytes_read>\d+)"
| timechart sum(sftp_bytes_read)

However, the event which has the file size does not have the user, so I cannot filter or chart by the username. If I want to filter by sftp_user, the only way I have found is by making a transaction for the user session and then filtering on sftp_user (in the example below, host, appname, and procid are extracted by the rfc5424 syslog addon):

appname=sftp-server
| rex field=_raw "session (opened|closed) for local user (?<sftp_user>[^ ]+) from"
| rex field=_raw "close \"(?<sftp_filename>.*)\" bytes read (?<sftp_bytes_read>\d+)"
| transaction host appname procid sftp_user startswith="session opened for" endswith="session closed for"

This does give me an event for every single SFTP session by the user, but I cannot figure out how to get the details of each file download (the individual "close" lines) out of it. What would be the way to do that, or just... "explode" it back into individual events?
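A hedged alternative sketch that avoids transaction altogether: sort the events into ascending time order, then carry the username forward from each "session opened" event onto the later events sharing the same host/appname/procid, so each "close" line stays an individual event but gains the user ("someuser" is a placeholder):

appname=sftp-server
| rex field=_raw "session (opened|closed) for local user (?<sftp_user>[^ ]+) from"
| rex field=_raw "close \"(?<sftp_filename>.*)\" bytes read (?<sftp_bytes_read>\d+)"
| sort 0 _time
| streamstats last(sftp_user) as session_user by host appname procid
| where session_user="someuser" AND isnotnull(sftp_bytes_read)
| timechart sum(sftp_bytes_read)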
For the past week, I have been trying to create a CSV using the Splunk Add-on Builder, but I still haven't figured out a solution. If anybody has a solution, please share. Thanks in advance.
I am using the SmartStore function of Splunk. The S3 protocol is used to tier data to Ceph storage. Can Splunk SmartStore delete buckets from the object storage using the S3 protocol?
Hello, in order to make syslog communication through TLS work, I followed this procedure https://docs.splunk.com/Documentation/Splunk/8.0.2/Security/Howtoself-signcertificates on one node. I backed up the original cacert.pem and copied the newly created root certificate to $SPLUNK_HOME/etc/auth/cacert.pem. I also copied the server certificate to $SPLUNK_HOME/etc/auth/server.pem and changed the configuration in $SPLUNK_HOME/etc/apps/launcher/local/inputs.conf and $SPLUNK_HOME/etc/system/local/server.conf. Since then, I get this error log:

ERROR LMTracker - failed to send rows, reason='Unable to connect to license master=https://xxx:8089 Error connecting: SSL not configured on client'

(xxx corresponds to the license master server.) So I tried to restore the original cacert.pem and server.pem, but I still get the error. I tried to connect to the license master through TLS with curl, but I get an error (Peer's Certificate has expired). I checked the license master certificate and it appears to have expired a month ago. But license verification is working from the other Splunk nodes (on which I did not change the root certificate), and curl works from them too. Also, I am not able to renew this certificate, as it is signed by the default root CA and I do not have the passphrase of the private key. The connection to the web interface of this node does not work; I get an internal server error. Could you please help me figure out what is blocking the license verification? Do not hesitate to tell me if you need more details. Thank you
I have used CSS to increase the size of pie chart legend labels:

.highcharts-data-label text tspan {
  font-size: 14px !important;
  word-break: break-all !important;
}

On smaller panels and with long text, however, the text overflows out of the panel. I've attempted to use word-break to stop this from happening, without any result. Please refer to the screenshot attached. Is there any way to break the text so it stays within the panel?
Hi Splunkers, I heard some rumors that the Microsoft 365 App and other Microsoft-related apps are planning to change their free versions to paid versions. Please let me know the truth about this. Kindly please help. Best regards.