So I am representing endpoint URL (y-axis) and HTTP status code (x-axis). I can show the count of each URL and status code using chart, like so: <base search> | chart count by url_path, http_status_code  Now I need to extend the chart command to show the percentage alongside each count, so that I get both together, e.g. 48 (72%). I also know how to calculate the percentage: eventstats sum(count) as total | eval percent=100*count/total | strcat percent "%" percent. Can you please tell me how to construct the chart command to combine the count and percentage? 
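One possible approach (a sketch, not from the original post; it assumes the field names above and computes the percentage within each url_path row) is to untable the chart output, build the combined string, then pivot back with xyseries:

```
<base search>
| chart count by url_path, http_status_code
| untable url_path http_status_code count
| eventstats sum(count) as total by url_path
| eval cell=count." (".round(100*count/total)."%)"
| xyseries url_path http_status_code cell
```

If the percentage should instead be relative to the grand total, drop `by url_path` from the eventstats.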
I would like to run a timechart query that ends with `| timechart span=1h distinct_count(thing) by other_thing` The problem is that there are a huge number of events being counted so the query takes a long time. Is there a way that I can run the same query but sample only the first 5 minutes of every hour so that I can speed up the query?
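Not from the original post, but one hedged sketch: the default date_minute field (derived from the event's own raw timestamp, so beware timezone quirks) can restrict the search to the first five minutes of each hour before the timechart runs:

```
<base search> date_minute<5
| timechart span=1h dc(thing) by other_thing
```

The distinct count then reflects only the sampled window, so treat it as an estimate of the full hourly value rather than an exact figure.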
How do I create a props.conf for the data below? I need to break events at the separator line of '#' characters.

########################################
20220815.011001: =========================
20220815.011001: Cron dummy started by dummy1.
20220815.011001: =========================
20220815.011002:
20220815.011001: Checking processes on Prod SEATTLE server seat1
20220815.011001: 3 critical processes run on seat1
20220815.011001: 1 non-critical processes run on seat1
20220815.011001: 208 processes are now running on seat1
20220815.011001: 10 processes owned by dummy1
20220815.011001: SEATTLE Authentication_Process is running (581).
20220815.011001: SEATTLE is running (1709).
20220815.011001: PS Pmeter_Server is running (1886).
20220815.011002: PS Pmeter_Server is running (2000).
20220815.011002: All critical processes are running.
20220815.011002: =========================
20220815.011002: dummy complete.
20220815.011002: =========================
20220815.011501:
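As a starting point (an untested sketch, not from the original thread; the sourcetype name is made up), breaking before each run of '#' characters and parsing the YYYYMMDD.HHMMSS timestamp might look like:

```
[cron_dummy_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^#{20,}
TIME_PREFIX = #+[\r\n]+
TIME_FORMAT = %Y%m%d.%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 15
```

For higher throughput, the usual alternative is SHOULD_LINEMERGE = false with a LINE_BREAKER such as ([\r\n]+)(?=#{20,}), which breaks before the separator without line merging; test either variant against a sample file first.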
Hi Splunkers! We have an issue where, when upgrading to a newer version of the Splunk Universal Forwarder (we are currently on 6.2.4, old but working fine), the newer forwarders stop sending the logs of our specified files after a random amount of time. (This is in a Kubernetes environment; we have verified there are no memory/CPU/disk issues, and the forwarder sends its own splunkd.log and metrics.log files without issue.) We tried rolling back to 6.2.4 from 8.2.7, and things work fine. We are now trying to roll forward to 7.3.9 from 6.2.4 (versus the jump from 6.2.4 to 8.2.7). With the above stated, it seems very strange that, even with low-output logs (maybe one transaction every 15-20 minutes), it just "works"... for maybe a few hours, or even up to a day. Then the logs stop being forwarded. The only "error" we have noticed is the following entries after enabling debug mode. Thanks for any assistance!
08-04-2022 19:01:45.983 +0000 DEBUG FilesystemFilter [104 tailreader0] - Testing path=/data/logs/gos-transactions.log(real=/data/logs/gos-transactions.log) with global blacklisted paths
08-04-2022 19:01:45.983 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - updateReliabilityScore: fs=0x10304, worked=Y, score=1->2
08-04-2022 19:01:45.983 +0000 DEBUG FileClassifierManager [104 tailreader0] - Finding type for file: /data/logs/gos-transactions.log
08-04-2022 19:01:45.983 +0000 DEBUG FileClassifierManager [104 tailreader0] - filename="/data/logs/gos-transactions.log" invalidCharCount="0" TotalCharCount="0" PercentInvalid="0.000000"
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Storing pending metadata for file=/data/logs/gos-transactions.log, sourcetype=log4j, charset=UTF-8
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - setting trailing nulls to false via 'true' or 'false' from conf'
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-transactions.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|53 ...
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - /data/logs/gos-transactions.log is a small file (size=0b).
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - initcrc has changed to: 0x720891e9581b5428.
08-04-2022 19:01:45.983 +0000 INFO FileTracker [104 tailreader0] - Locked key=0x720891e9581b5428 to state=0x7f14a1493000
08-04-2022 19:01:45.983 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x720891e9581b5428
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Normal record was not found for initCrc=0x720891e9581b5428. 
08-04-2022 19:01:45.983 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x720891e9581b5428
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Creating new pipeline input channel with channel id: 54.
08-04-2022 19:01:45.984 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-transactions.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|54 ...
08-04-2022 19:01:45.984 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /data/logs/gos-transactions.log to off=0
08-04-2022 19:01:45.984 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: /data/logs/gos-transactions.log (read 0 bytes)
08-04-2022 19:01:45.984 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x720891e9581b5428 locked to state=0x7f14a1493000
08-04-2022 19:01:45.984 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/data/logs/gos-transactions.log"
08-04-2022 19:01:45.985 +0000 DEBUG FilesystemFilter [104 tailreader0] - Testing path=/data/logs/gos-error.log(real=/data/logs/gos-error.log) with global blacklisted paths
08-04-2022 19:01:45.985 +0000 DEBUG FileClassifierManager [104 tailreader0] - Finding type for file: /data/logs/gos-error.log
08-04-2022 19:01:45.985 +0000 DEBUG FileClassifierManager [104 tailreader0] - filename="/data/logs/gos-error.log" invalidCharCount="0" TotalCharCount="0" PercentInvalid="0.000000"
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Storing pending metadata for file=/data/logs/gos-error.log, sourcetype=log4j, charset=UTF-8
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - setting trailing nulls to false via 'true' or 'false' from conf'
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket. 
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-error.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|55 ...
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - /data/logs/gos-error.log is a small file (size=0b).
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - initcrc has changed to: 0x117409bca1aa15ee.
08-04-2022 19:01:45.985 +0000 INFO FileTracker [104 tailreader0] - Locked key=0x117409bca1aa15ee to state=0x7f14a1493400
08-04-2022 19:01:45.985 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x117409bca1aa15ee
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Normal record was not found for initCrc=0x117409bca1aa15ee.
08-04-2022 19:01:45.985 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x117409bca1aa15ee
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Creating new pipeline input channel with channel id: 56.
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-error.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|56 ... 
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /data/logs/gos-error.log to off=0
08-04-2022 19:01:45.986 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: /data/logs/gos-error.log (read 0 bytes)
08-04-2022 19:01:45.986 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x117409bca1aa15ee locked to state=0x7f14a1493400
08-04-2022 19:01:45.986 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/data/logs/gos-error.log"
08-04-2022 19:01:45.988 +0000 DEBUG FilesystemFilter [104 tailreader0] - Testing path=/data/logs/gos-reqresp.log(real=/data/logs/gos-reqresp.log) with global blacklisted paths
08-04-2022 19:01:45.988 +0000 DEBUG FileClassifierManager [104 tailreader0] - Finding type for file: /data/logs/gos-reqresp.log
08-04-2022 19:01:45.988 +0000 DEBUG FileClassifierManager [104 tailreader0] - filename="/data/logs/gos-reqresp.log" invalidCharCount="0" TotalCharCount="0" PercentInvalid="0.000000"
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Storing pending metadata for file=/data/logs/gos-reqresp.log, sourcetype=log4j, charset=UTF-8
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - setting trailing nulls to false via 'true' or 'false' from conf'
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-reqresp.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|57 ...
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - /data/logs/gos-reqresp.log is a small file (size=0b).
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - initcrc has changed to: 0x5f2cc808b9ff9884. 
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Locked key=0x5f2cc808b9ff9884 to state=0x7f14a1493800
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x5f2cc808b9ff9884
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Normal record was not found for initCrc=0x5f2cc808b9ff9884.
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x5f2cc808b9ff9884
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Creating new pipeline input channel with channel id: 58.
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-reqresp.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|58 ...
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /data/logs/gos-reqresp.log to off=0
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: /data/logs/gos-reqresp.log (read 0 bytes)
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x5f2cc808b9ff9884 locked to state=0x7f14a1493800
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.988 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/data/logs/gos-reqresp.log"
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - /opt/splunk/etc/splunk.version is a small file (size=70b).
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x88bb06af0f1e7032
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Record found, will advance file by offset=70 initcrc=0x88bb06af0f1e7032.
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Preserving seekptr and initcrc. 
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /opt/splunk/etc/splunk.version to off=70
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: fname=/opt/splunk/etc/splunk.version initcrclen=1048576 fishstate=key=0x88bb06af0f1e7032 sptr=70 scrc=0x4ec910cde69cfaaa fnamecrc=0x88bb06af0f1e7032 modtime=1659639678
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x88bb06af0f1e7032 locked to state=0x7f14a140d400
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - /opt/splunk/var/log/splunk/first_install.log is a small file (size=70b).
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x2deca923e7cb5a06
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Record found, will advance file by offset=70 initcrc=0x2deca923e7cb5a06.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Preserving seekptr and initcrc.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /opt/splunk/var/log/splunk/first_install.log to off=70
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: fname=/opt/splunk/var/log/splunk/first_install.log initcrclen=1048576 fishstate=key=0x2deca923e7cb5a06 sptr=70 scrc=0x4ec910cde69cfaaa fnamecrc=0x2deca923e7cb5a06 modtime=1659639679
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x2deca923e7cb5a06 locked to state=0x7f14a140ec00
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket. 
08-04-2022 19:01:45.989 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/log/splunk/first_install.log"
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - /opt/splunk/var/log/splunk/splunkd-utility.log is a small file (size=560b).
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x33674102d8ed48d7
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Record found, will advance file by offset=560 initcrc=0x33674102d8ed48d7.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Preserving seekptr and initcrc.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /opt/splunk/var/log/splunk/splunkd-utility.log to off=560
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: fname=/opt/splunk/var/log/splunk/splunkd-utility.log initcrclen=1048576 fishstate=key=0x33674102d8ed48d7 sptr=560 scrc=0x484f37b61ca44a94 fnamecrc=0x33674102d8ed48d7 modtime=1659639693
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x33674102d8ed48d7 locked to state=0x7f14a140f000
08-04-2022 19:01:45.989 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/log/splunk/splunkd-utility.log"
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x3a51dcd384f999d2
08-04-2022 19:01:45.990 +0000 DEBUG WatchedFile [104 tailreader0] - Preserving seekptr and initcrc. 
08-04-2022 19:01:46.082 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/log/splunk"
08-04-2022 19:01:46.082 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/log/watchdog"
08-04-2022 19:01:46.286 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/run/splunk/search_telemetry"
08-04-2022 19:01:46.286 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/spool/splunk"
08-04-2022 19:01:46.286 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/data/logs"
Hi community, I need to calculate the previous week's result and, based on that, the percent difference from this week's results. I have the following code, but I am not able to get the previous week's result right.

My code:

| bucket _time span=1w
| lookup table_1 LicenseKey OUTPUT CustomerName
| eval CustomerName=coalesce(CustomerName,LicenseKey)
| stats count as Result by CustomerName, ErrorCode
| eventstats sum(Result) as Total by CustomerName
| eval PercentOfTotal = round((Result/Total)*100,3)
| sort - _time
| streamstats current=f latest(Result) as Result_Prev by CustomerName
| eval PercentDifference = round(((Result/Result_Prev)-1)*100,2)
| fillnull value="0"
| append [ search index=abc= xyz:123 ErrorCode!=0
    | `DedupDHI`
    | lookup Table_1 LicenseKey OUTPUT CustomerName
    | eval CustomerName=coalesce(CustomerName,LicenseKey)
    | stats count as Result by CustomerName
    | eval ErrorCode="Total", PercentOfTotal=100]
| fillnull value="0"
| lookup Table_2 ErrorCode OUTPUT Description
| lookup Table_1 LicenseKey OUTPUT CustomerName
| eval CustomerName=coalesce(CustomerName,LicenseKey)
| eval Error=if(ErrorCode!="Total", ErrorCode+" ("+coalesce(Description,"Description Missing - Update Table_2")+")", ErrorCode)
| rename Result_Prev as "Previous Week Results", PercentDifference as "Percent Difference", PercentOfTotal as "Percent of Total"
| fields CustomerName, Error, Result, "Previous Week Results", "Percent Difference", "Percent of Total"
| sort CustomerName, Error, PercentDifference

Output:

CustomerName               Error                  Result  Previous Week Results  Percent Difference  Percent of Total
AIG Private Client Group   1002 (abc)             4       0                      0                   3.252
AIG Private Client Group   1003 (cxz)             2       4                      -50                 1.626
AIG Private Client Group   1013 (Invalid Format)  12      4                      200                 9.756
AIG Private Client Group   1023 (Invalid Name)    3       4                      -25                 2.439
AIG Private Client Group   1027 (Invalid )        102     4                      2450                82.927
AIG Private Client Group   Total                  123     0                      0                   100
AIICO                      1023 (Invalid Name)    8       0                      0                   38.095
AIICO                      1201                   1       8                      -87.5               4.762
AIICO                      1305                   12      8                      50                  57.143
AIICO                      Total                  21      0                      0                   100
Acceptance                 1023 (Invalid Name)    3       0                      0                   27.273
Acceptance                 1027                   8       3                      166.67              72.727
Acceptance                 Total                  11      0                      0                   100

If you look closely, the Previous Week Results column keeps picking up 4, which is wrong. Any suggestions to solve this? 
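A hedged observation, not from the original thread: the streamstats in the query groups only by CustomerName (and _time has already been dropped by the preceding stats), so Result_Prev latches onto the last Result seen for that customer regardless of ErrorCode. A sketch of a possible fix, keeping the week bucket in the grouping and splitting the running comparison by ErrorCode as well:

```
| bucket _time span=1w
| stats count as Result by _time, CustomerName, ErrorCode
| sort 0 CustomerName ErrorCode _time
| streamstats current=f window=1 last(Result) as Result_Prev by CustomerName, ErrorCode
| eval PercentDifference=round(((Result/Result_Prev)-1)*100,2)
```

With _time retained and ErrorCode in the by-clause, each row's Result_Prev should come from the same customer/error combination in the prior week rather than from an unrelated row.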
After some troubleshooting we noticed that when we configure the SSL cert and key on the webhook input, it turns off the port being used for the webhook. If the input config has a path to the cert and key, then a netstat shows the port is NOT listening. Remove the cert/key config from the webhook input, and the port starts listening again. What would be causing this?
First of all, these questions may be very basic compared to other community questions, so please bear with me.

Baseline:
- Win2019 Server (Server A): Splunk Enterprise installed; will be used as the main SEARCH HEAD and INDEXER.
- Win2019 Server (Server B): Universal Forwarder installed and connected to Server A; will forward data that I will manually feed.
- RedHat (Server X, syslog server): Universal Forwarder installed and connected to Server A; I want this to send the syslogs to Server A.

Problem and Question 1: Server B is forwarding data to the 'main' index by default. How can I change this so that Server B forwards its data into a 'test' index on Server A? Also, Server B is by default sending logs from var/log/splunk; is there a way to change the location, or to choose which items to send? I understand it's probably in the .conf files, but I just cannot find it.

P&Q 2: Server X, on the other hand, is sending its logs to the "_internal" index on Server A. When installing both Server B and Server X, I used the same destination IP address, so I'm not sure why they end up in different indexes. How can I change the destination index for Server X? It also seems Server X is sending the logs from /var/log/splunk; how can I change this, and how can I select which logs to send and not send?
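Not from the original post: by default a Universal Forwarder only monitors its own internal logs under $SPLUNK_HOME/var/log/splunk, which land in the _internal index; that is why both forwarders look like they only send that data. Anything else must be added as a monitor stanza in inputs.conf, and that stanza is also where the destination index is chosen. A hedged sketch, with a placeholder path and the 'test' index assumed to already exist on Server A:

```
# outputs.conf on Server B / Server X (send everything to Server A)
[tcpout]
defaultGroup = server_a

[tcpout:server_a]
server = <ServerA-IP>:9997

# inputs.conf on the forwarder (choose files and destination index)
[monitor:///var/log/messages]
index = test
sourcetype = syslog
disabled = false
```

Server A must also have a receiving port enabled (Settings > Forwarding and receiving > Receive data, or a [splunktcp://9997] stanza in its inputs.conf), and the test index must be created there before data arrives.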
Hi, is there a way to determine Splunk license usage for a specific event type? I used index=_internal source=*license_usage.log* st=abcd to determine the license usage for the entire sourcetype. To dig deeper into the specific event type, I found articles pointing to len(_raw), which gives the byte-size length of the raw event. I used the below to check whether it returns the same as license_usage.log:

index="x" sourcetype=abcd
| bin _time span=1d
| eval size=len(_raw)
| stats sum(size) as sizeInBytes by _time
| eval GB = sizeInBytes/1024/1024/1024

The numbers do not match. The numbers from len(_raw) are very high when compared to the actual license usage.
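For comparison, a sketch (not from the original post) of the per-sourcetype view taken straight from license_usage.log, where b is the byte count actually charged against the license. Note two likely sources of drift between the two methods: license usage is recorded at index time while the len(_raw) search sums over event time, and len() counts characters rather than bytes, so the totals are not expected to match exactly:

```
index=_internal source=*license_usage.log* type=Usage st=abcd
| bin _time span=1d
| stats sum(b) as bytes by _time
| eval GB=round(bytes/1024/1024/1024,3)
```

If the gap is large, also check that both searches cover the same days and that the len(_raw) search is not picking up duplicate or additional sources.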
We're summary indexing events from one index into another.  The original index contains JSON events e.g. {"field1": "value1", "field2": "value2"} The summary events all have a hostname value appended onto them e.g. {"field1": "value1", "field2": "value2"}, hostname=abc I found abc configured in alert_actions.conf under the email stanza; [email] hostname = abc I can't figure out why this is happening and how to remove it from future summary events.  In advanced edit I can see a field called action.summary_index.hostname which contains "abc".  Is it simply inheriting the hostname parameter from alert_actions.conf? I tried removing the value from this parameter and saving but when I go back into advanced edit the value is back.  This search cluster is quite mature (old...) so I wouldn't be happy simply removing that hostname parameter from alert_actions.conf and "hoping for the best".  I'm not sure where else it might be being used and what impact it could have on other users. Any thoughts at all would be welcome here.
Hi All, we are planning to upgrade Splunk ES from 6.2 to 7.0.1. In the Release Notes for 7.0.1, under deprecated features, it is mentioned:

No support for Malware Domains threatlist: The Malware Domains threatlist is not supported in Enterprise Security version 6.5.0 or higher.

Is it a kind of lookup definition, as mentioned in the link below? https://community.splunk.com/t5/Security/Add-domains-to-threat-lists/td-p/116392 Or is it related to the dashboard below in the Enterprise Security Suite? SplunkEnterpriseSecuritySuite/Security Intelligence/Threat Intelligence/Threat Activity/Threat Group/Threat Group (malwaretriage) Basically, I am not able to find out which feature is going to be deprecated or removed. Please suggest.
Hi Splunkers, I'm trying to set an alert condition to block traffic for IP addresses from 13.108.0.0 to 13.111.255.255 and from 66.231.70.0 to 66.231.85.255, but I'm really stuck. Can anybody help, please? My query below:

| tstats count values(All_Traffic.app) AS app values(All_Traffic.dvc) AS devicename values(All_Traffic.src_zone) AS src_zone values(All_Traffic.dest_zone) AS dest_zone from datamodel=Network_Traffic
    where All_Traffic.action=blocked All_Traffic.src_ip IN (*) All_Traffic.dest IN (13.108.0.0 13.111.255.255 OR 66.231.80.0 66.231.95.255) All_Traffic.dest_port IN (*)
    by _time, All_Traffic.action, All_Traffic.src_ip, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.transport, All_Traffic.rule, sourcetype
| rename All_Traffic.* AS *
| sort - _time limit=0
| fields - count
| rename rule as policy, src_ip AS src
| eval action=case(action="teardown","drop",1=1,action)
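Not from the original post, just a sketch: IN () matches literal values and does not expand address ranges, so one option is to drop the dest clause from the tstats where and filter after the rename with cidrmatch(). The CIDR blocks below are a best-effort translation of the two ranges quoted above (13.108.0.0/14 covers 13.108.0.0-13.111.255.255 exactly; 66.231.70.0-66.231.85.255 is not a single CIDR block, so it takes several):

```
| tstats count from datamodel=Network_Traffic
    where All_Traffic.action=blocked
    by _time, All_Traffic.src_ip, All_Traffic.dest, All_Traffic.dest_port
| rename All_Traffic.* AS *
| where cidrmatch("13.108.0.0/14", dest)
    OR cidrmatch("66.231.70.0/23", dest)
    OR cidrmatch("66.231.72.0/21", dest)
    OR cidrmatch("66.231.80.0/22", dest)
    OR cidrmatch("66.231.84.0/23", dest)
```

Filtering after tstats scans more rows than filtering inside it, so verify performance on your data volume; a lookup with CIDR match_type is another common way to keep the range list maintainable.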
Our Splunk rep walked us through setting up SSL for our Splunk server communication with each other and for our Universal Forwarders to connect to our Indexer. However, we still get the warning X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA) In addition, Nessus scans find the default Splunk certificate on all of the systems with Universal Forwarders. We have SSL certificates created by our government agency's CA. I have verified that our indexer's server.conf is pointing sslRootCAPath to our CA's pem. I have verified that our indexer's inputs.conf is pointing serverCert at our server's pem. I have verified that our universal forwarders' outputs.conf have clientCert pointing at our server's pem, which is located on each system in C:\Program Files\SplunkUniversalForwarder\etc\auth. I have verified that our universal forwarders' outputs.conf have sslRootCAPath pointing at our CA's pem, which is located on each system in C:\Program Files\SplunkUniversalForwarder\etc\auth. Why do we still get this warning? Are we missing a setting somewhere?
Hi, I am getting errors after upgrading the Splunk version. It is a custom ServiceLink app. How do I fix this issue? I suspect the app is not compatible with Python 3.7.

08-15-2022 18:26:33.763 +0400 ERROR ScriptRunner [10252 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': The script at path=/data/splunk/etc/apps/TA-servicelink/bin/TA_servicelink_rh_settings.py has thrown an exception=Traceback (most recent call last):
08-15-2022 18:26:33.763 +0400 ERROR ScriptRunner [10252 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': File "/data/splunk/etc/apps/TA-servicelink/bin/ta_servicelink/splunktaucclib/rest_handler/endpoint/validator.py", line 388
08-15-2022 18:26:33.764 +0400 ERROR ScriptRunner [10252 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': File "/data/splunk/etc/apps/TA-servicelink/bin/ta_servicelink/splunktaucclib/rest_handler/endpoint/validator.py", line 388
08-15-2022 18:34:47.307 +0400 ERROR ScriptRunner [21184 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': The script at path=/data/splunk/etc/apps/TA-servicelink/bin/TA_servicelink_rh_settings.py has thrown an exception=Traceback (most recent call last):
08-15-2022 18:34:47.307 +0400 ERROR ScriptRunner [21184 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': File "/data/splunk/etc/apps/TA-servicelink/bin/ta_servicelink/splunktaucclib/rest_handler/endpoint/validator.py", line 388
08-15-2022 18:34:47.309 +0400 ERROR ScriptRunner [21184 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': File "/data/splunk/etc/apps/TA-servicelink/bin/ta_servicelink/splunktaucclib/rest_handler/endpoint/validator.py", line 388
08-15-2022 18:40:39.629 +0400 INFO sendmodalert [18314 AlertNotifierWorker-0] - Invoking modular alert action=servicelink for search="Threat - NYUAD - Exfiltration of Valuable Data - Rule" sid="scheduler__admin__SplunkEnterpriseSecuritySuite__RMD5ecd0c23d8fa296d1_at_1660574400_145_92DB7169-4DFE-4D47-AE11-FB9899BE27C5" in app="SplunkEnterpriseSecuritySuite" owner="admin" type="saved"
08-15-2022 18:40:39.683 +0400 ERROR sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink STDERR - File "/data/splunk/etc/apps/TA-servicelink/bin/servicelink.py", line 57
08-15-2022 18:40:39.683 +0400 ERROR sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink STDERR - results_url=self.settings.get('results_link')
08-15-2022 18:40:39.683 +0400 ERROR sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink STDERR - ^
08-15-2022 18:40:39.683 +0400 ERROR sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink STDERR - TabError: inconsistent use of tabs and spaces in indentation
08-15-2022 18:40:39.686 +0400 INFO sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink - Alert action script completed in duration=45 ms with exit code=1
08-15-2022 18:40:39.686 +0400 WARN sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink - Alert action script returned error code=1
Hello, I have data being gathered once per minute. FYI, it's disk usage percent. Is it possible to create an SPL query that outputs a simple time from _time and UsePct every time UsePct changes? Not dedup... well, yes, but only when it (UsePct) changes. So if on a given date/hour/minute it goes up or down, I can track the change. I.e.:

2022-08-15 07:54:29 100%
2022-08-15 07:55:29 100%
2022-08-15 07:56:29 100%
2022-08-15 07:57:29 100%
2022-08-15 07:58:29 99%
2022-08-15 08:00:29 100%
2022-08-15 08:01:29 100%
2022-08-15 08:02:29 100%

For this I would want to see:

2022-08-15 07:57:29 100%
2022-08-15 07:58:29 99%
2022-08-15 08:00:29 100%

As close as I can get is this:

((index=windows OR index=perfmon OR index=os*) tag=oshost tag=performance tag=storage) host=by0saq Filesystem="/dev/mapper/vgappl-_u01_app"
| eval date=strftime(_time,"%x")
| sort _time
| table date UsePct
| dedup date

Thanks.
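A sketch (not from the original post) using streamstats to remember the previous sample and keep only the rows where the value changes:

```
(index=windows OR index=perfmon OR index=os*) tag=oshost tag=performance tag=storage host=by0saq Filesystem="/dev/mapper/vgappl-_u01_app"
| sort 0 _time
| streamstats current=f window=1 last(UsePct) as prev
| where isnull(prev) OR UsePct!=prev
| eval date=strftime(_time,"%Y-%m-%d %H:%M:%S")
| table date UsePct
```

This emits the first sample and every sample whose UsePct differs from the one before it. If you also want the last reading before each change (the 07:57:29 row in the example), you can run the same streamstats trick again on the reversed stream (| reverse) to flag the row that precedes a change.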
Does a CSV import connector or an XML import connector exist in current Splunk versions? :)
Hi, I want to get the rules in Splunk Security Essentials as a list. I tried searching in the app, but I can't get a rule list; there is a lot of content in this app: https://docs.splunksecurityessentials.com/content-detail/ . I want to export these rules and correlation searches as XML. Could you help me with this search?   Thanks.
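Not an official export, but one sketch worth trying: saved searches shipped by an app can usually be listed from the search bar with the REST API. The app name Splunk_Security_Essentials is an assumption here, and note that much of the Security Essentials content catalog lives in the app's internal lookups rather than as saved searches, so this will only show what has actually been enabled/saved:

```
| rest /servicesNS/-/-/saved/searches
| search eai:acl.app="Splunk_Security_Essentials"
| table title search description
```

The result table can then be exported from the UI (CSV/XML/JSON) via the export button.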
Hello, I want to extract 4 fields using regex with their respective names in bold and their respective values as per below: Hashes="SHA1=27EFA81247501EBA6603842F476C899B5DAAB8C7,MD5=49E93FA14D4E09AAFD418AB616AD1BB1,SHA256=35E3F44C587DE8BFF62095E768C77E12E2C522FB7EFD038FFFCC0DD2AE960A57,IMPHASH=B7A4477FA36E2E5287EE76AC4AFCB05B" The actual field name is "Hashes", I want to extract one field named SHA1 with the value "27EFA81247501EBA6603842F476C899B5DAAB8C7", one field named MD5 with the value "49E93FA14D4E09AAFD418AB616AD1BB1" etc. Thank you in advance.
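A sketch of one way to do this with rex, assuming the key order SHA1, MD5, SHA256, IMPHASH is fixed as in the sample value:

```
... | rex field=Hashes "SHA1=(?<SHA1>[0-9A-F]+),MD5=(?<MD5>[0-9A-F]+),SHA256=(?<SHA256>[0-9A-F]+),IMPHASH=(?<IMPHASH>[0-9A-F]+)"
    | table SHA1 MD5 SHA256 IMPHASH
```

If the order of the keys can vary between events, one rex per key is safer, e.g. `| rex field=Hashes "MD5=(?<MD5>[0-9A-F]+)"` repeated for each hash type.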
I used CyberArk and created 3 servers via CyberArk, and installed Splunk on the server machine 192.0.0.1, which is accessed via a CyberArk remote desktop connection. Currently, this machine collects the Windows event log. If someone accesses this machine, a login event is received in Splunk, but it doesn't show where the login event occurred from. The Windows event log has src ip, but does that mean where the login event occurred from? If the client PC logs into the Windows server directly, that is the IP address of the client PC. If the client PC logs into the Windows server through CyberArk, that is the CyberArk IP address. I want the Windows event log to send the IP. In the two cases above, I want to get the IP address of the Windows server. Flowchart
Hello, our Splunk system just got an increase in disk size, as in the image below (we have a master and a 1:1 indexer cluster structure). Meaning we have an increase for hot from 500GB -> 1T and cold from 1.5T -> 3T. I have changed the stanzas in splunk/etc/master-apps/_cluster/local/indexes.conf (where we put our individual index config like maxTotalDataSizeMB, homePath.maxDataSizeMB, coldPath.maxDataSizeMB) to match the newly provided disk space. But after I restarted services on both our indexers and the master, it won't apply the newly assigned disk space and still uses the old one. I suspect I missed something here. Can anyone point me to where I can configure the overall setting? (Because I'm not familiar with the Splunk structure.)
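One thing worth checking, sketched below with a placeholder index name and sizes: in an indexer cluster, changes under $SPLUNK_HOME/etc/master-apps are only distributed to the peers after pushing the configuration bundle from the master; restarting the services alone does not apply them.

```
# splunk/etc/master-apps/_cluster/local/indexes.conf (example stanza, sizes in MB)
[my_index]
homePath.maxDataSizeMB = 1000000    # ~1 TB hot/warm
coldPath.maxDataSizeMB = 3000000    # ~3 TB cold
maxTotalDataSizeMB     = 4000000

# Then, on the master node:
#   splunk validate cluster-bundle --check-restart
#   splunk apply cluster-bundle
```

After the apply, the merged settings on a peer can be verified with `splunk btool indexes list my_index --debug`.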
Hi, we would like to set allow_skew=15% globally for all of our searches, except for searches which reside in one specific app b. How do I do that?

We tried to set a global value in apps/a/default/savedsearches.conf:

[default]
allow_skew = 15%

and then add a specific configuration in app b to override the global default (apps/b/local/savedsearches.conf):

[default]
allow_skew = 0

But it doesn't work. btool shows that the setting in b/local/savedsearches.conf wins over apps/a/default/savedsearches.conf.

According to Configuration file precedence - Splunk Documentation, savedsearches.conf is an app/user configuration file. Adding a default.meta for app b with

[savedsearches]
export = none

also didn't help.

Is there a bug, or am I missing something? For reference, the link to the official documentation: Offset scheduled search start times - Splunk Documentation

Thanks! - Lorenz
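One workaround sketch (the stanza names are placeholders): since [default] stanzas merge into the global layered configuration rather than staying scoped to one app, setting allow_skew explicitly on each saved search in app b avoids the bleed-through, because an explicit per-stanza setting always beats any merged [default] value:

```
# apps/b/local/savedsearches.conf
[My Search One]
allow_skew = 0

[My Search Two]
allow_skew = 0
```

This is more maintenance than a single [default] line, but it keeps the 15% global default in apps/a/default/savedsearches.conf intact for every other app.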