All Topics


Hello, I have a URL with a proper business transaction definition; let's call it '/foo'. I'm able to monitor it well. I want to monitor another URL, '/foo/bared', but when I try to create a business transaction and check the live preview, it is masked by the parent URL (/foo), so the traffic doesn't hit the separate BT. Is there a way to achieve this? I also tried creating a custom service endpoint, but I don't see it on the service endpoints home page; again, only the parent URL (/foo) appears. Regards.
How do I forward app logs from Splunk to a third-party application (e.g., a log insights tool)? Is this feasible, and can you please provide the steps for implementation?
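For context, one common route is syslog output from a heavy forwarder via outputs.conf; a minimal sketch, assuming the third-party tool accepts syslog over TCP (the hostname, port, and group name below are placeholders):

# outputs.conf on a heavy forwarder
[syslog:loginsights]
server = loginsights.example.com:514
type = tcp

Selective routing of only certain sourcetypes is also possible with props/transforms and the _SYSLOG_ROUTING key, which is worth reading up on before implementation.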
Dear Splunkers, I am having an issue with the process of squashing fields. When searching for events with no host or source, I don't get any results:

index=<my_index> | where isnull(source)

Does Splunk drop events after they are squashed? Logically, there should be events in my index that are missing the host and source fields.
Hello, when I extract fields from structured XML files using props.conf, no key/value pairs are extracted, and the header info also comes in as an event. How can I eliminate the header info so it won't show up as an event, and is there anything I'm missing that prevents the key/value extraction? I used:

[sourcename]
BREAK_ONLY_BEFORE=<DSMODEL>
CHARSET=UTF-8
KV_MODE=xml
LINE_BREAKER=([\r\n]*)<DSMODEL>
MAX_TIMESTAMP_LOOKAHEAD=24
MUST_BREAK_AFTER=\/DSMODEL>
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=false
TIME_FORMAT=%Y%m%d%H%M%S
TIME_PREFIX=<TIMESTAMP>
TRUNCATE=2500
category=Custom
disabled=false
pulldown_type=true

Any help will be highly appreciated. Thank you so much.
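For reference, one way to drop header events is to route them to the null queue with an index-time transform; a minimal sketch, assuming the header lines can be matched by a regex (the pattern below is a placeholder):

# props.conf
[sourcename]
TRANSFORMS-drop_header = drop_xml_header

# transforms.conf
[drop_xml_header]
REGEX = ^<\?xml
DEST_KEY = queue
FORMAT = nullQueue

Also note that KV_MODE=xml is a search-time setting, so it needs to be in props.conf on the search head; if the stanza only exists on a forwarder or indexer, that may explain the missing key/value pairs.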
Hello everyone, I have 1 search head and 3 indexers in an indexer cluster. It worked fine until yesterday; today I can't search events. I found this event in splunkd.log:

08-16-2022 13:23:10.727 +0800 ERROR HttpListener [175497 TcpChannelThread] - Exception while processing request from 10.20.5.10:38210 for /services/streams/search?sh_sid=rt_scheduler__admin_U0EtQWNjZXNzUHJvdGVjdGlvbg__RMD53730174ad49bc45c_at_1660627323_2982: Connection closed by peer

10.20.5.10 is the search head server. What should I do? Thanks
How do I access Splunk using a Python script? When I run this code I get an error:

import splunklib.client as client
service = client.connect(host='192.0.0.1', port=8000, username='username', password='password', verify=False)
print(service)

It shows this error:

ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1129)
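For what it's worth, WRONG_VERSION_NUMBER usually means an HTTPS handshake against a non-HTTPS port: the SDK talks to splunkd's management port (8089 by default), not the Splunk Web port (8000). A minimal sketch, with the host and credentials as placeholders:

import splunklib.client as client

# Connect to the management port (default 8089), not the web UI port (8000).
service = client.connect(
    host='192.0.0.1',
    port=8089,
    username='username',
    password='password',
    verify=False,  # skip TLS verification, e.g. for self-signed certs
)
print(service.info)  # basic server info confirms the connection works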
Dear Splunk community: I am using the following chart command:

<base search> | chart count by url_path, http_status_code | addtotals col=true

to get the following search result, with URL on the y-axis and HTTP status code on the x-axis:

           200    400    500    502    Total
url1        15      5      5      5       30
url2        10      3      6      2       21
            25      8     11      7       51

Now I need to add the percentage of each count based on the total and display the count and percentage together, like so:

           200        400       500        502       Total
url1       15 (50%)   5 (16%)   5 (16%)    5 (16%)     30
url2       10 (47%)   3 (14%)   6 (28%)    2 (9%)      21
           25 (49%)   8 (15%)   11 (21%)   7 (13%)     51

Can someone show me how to achieve this? Greatly appreciate your help in advance!
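One possible approach (a sketch, not tested against this data): compute a row total first, then rewrite each status-code column with foreach; <base search> stands in for the real search:

<base search>
| chart count by url_path, http_status_code
| addtotals row=true fieldname=Total
| foreach * [ eval <<FIELD>> = if("<<FIELD>>"=="url_path" OR "<<FIELD>>"=="Total", '<<FIELD>>', '<<FIELD>>'." (".round(100*'<<FIELD>>'/Total)."%)") ]

The column-totals row from addtotals col=true would need to be generated before the foreach so its cells get the same treatment.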
Hey all, working on creating some access control based on indices and running into a weird issue. When I create a custom role and grant it all capabilities (with no role inheritance) plus access to the specified index, I'm not able to search data inside that index. However, if I create the same custom role inheriting the user role, with the exact same capabilities, it then lets me search. I've also cloned the user role and appended the index permissions to suit my needs, but I experience the exact same issue: the cloned role has no access to the allowed indices, but the second I inherit the user role it works again. This behaviour only occurs on our dedicated search heads; when I enable the web UI and replicate this on the indexers, it works as expected and the custom role can search its assigned indices. Splunk Enterprise version: 9.0.0.1. Any help would be appreciated! Thanks guys
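For comparison, a minimal non-inheriting role in authorize.conf might look like this (role and index names are placeholders; a sketch for discussion, not a fix):

[role_myrole]
srchIndexesAllowed = my_index
srchIndexesDefault = my_index
srchJobsQuota = 4

One thing worth checking is whether the role definition actually exists on the search heads themselves, since search-time authorization is evaluated where the search runs.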
So I am representing endpoint URL (y-axis) and HTTP status code (x-axis). I can show the count of each URL and status code using chart like so:

<base search> | chart count by url_path, http_status_code

Now I need to add another item to the chart command to show the percentage of each count in addition to the count, so that I get something like this together: 48 (72%). I also know how to calculate the percentage as such:

| eventstats sum(count) as total | eval percent=100*count/total | strcat percent "%" percent

Can you please tell me how to construct the chart command to encapsulate the count and percentage together?
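One way to sketch this (untested; field names taken from the question): build the combined cell as a string first, then pivot with xyseries instead of chart:

<base search>
| stats count by url_path, http_status_code
| eventstats sum(count) as total by url_path
| eval cell = count." (".round(100*count/total)."%)"
| xyseries url_path http_status_code cell

Here the percentage is relative to each URL's row total; dropping the by clause on eventstats would make it relative to the grand total instead.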
I would like to run a timechart query that ends with `| timechart span=1h distinct_count(thing) by other_thing`. The problem is that a huge number of events are being counted, so the query takes a long time. Is there a way to run the same query but sample only the first 5 minutes of every hour, so that I can speed up the query?
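One cheap way to sketch this uses the default date_minute field (assuming your events have the standard date_* fields), with the caveat that a 5-minute sample will understate each hour's true distinct count:

<base search> date_minute<5
| timechart span=1h dc(thing) by other_thing

The Event Sampling option in the search UI is another knob, though it samples randomly rather than by time window.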
How do I create a props.conf for the below data? Events need to break at the '###...' separator line:

########################################
20220815.011001: =========================
20220815.011001: Cron dummy started by dummy1.
20220815.011001: =========================
20220815.011002:
20220815.011001: Checking processes on Prod SEATTLE server seat1
20220815.011001: 3 critical processes run on seat1
20220815.011001: 1 non-critical processes run on seat1
20220815.011001: 208 processes are now running on seat1
20220815.011001: 10 processes owned by dummy1
20220815.011001: SEATTLE Authentication_Process is running (581).
20220815.011001: SEATTLE is running (1709).
20220815.011001: PS Pmeter_Server is running (1886).
20220815.011002: PS Pmeter_Server is running (2000).
20220815.011002: All critical processes are running.
20220815.011002: =========================
20220815.011002: dummy complete.
20220815.011002: =========================
20220815.011501:
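A starting point might be the following (a sketch; the sourcetype name is a placeholder, and the regexes assume every event begins with a long run of '#'):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)#{20,}
TIME_PREFIX = #+[\r\n]+
TIME_FORMAT = %Y%m%d.%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 20
TRUNCATE = 10000

LINE_BREAKER's first capture group marks the event boundary, so each new event starts at the '#' run, and TIME_PREFIX then skips past it to the first timestamp on the next line.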
Hi Splunkers! We have an issue where, when upgrading to a newer version of the Splunk Universal Forwarder (we are currently on 6.2.4, old but working fine), the newer forwarders stop sending the logs of our specified files after a random amount of time. (This is in a Kubernetes environment; we have verified there are no memory/CPU/disk issues, and that the forwarder sends the splunkd.log and metrics.log files without issue.) We have tried rolling back to 6.2.4 from 8.2.7, and things work fine. We are now trying to roll forward to 7.3.9 from 6.2.4 (versus the jump from 6.2.4 to 8.2.7). With the above stated, it seems very strange that, even with low-output logs (maybe 1 transaction every 15-20 minutes), it just "works"... for maybe a few hours, or even up to 1 day. After that, the logs stop being recorded by the forwarder. The only "errors" we have noticed are the following entries after enabling debug mode. Thanks for any assistance!

08-04-2022 19:01:45.983 +0000 DEBUG FilesystemFilter [104 tailreader0] - Testing path=/data/logs/gos-transactions.log(real=/data/logs/gos-transactions.log) with global blacklisted paths
08-04-2022 19:01:45.983 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - updateReliabilityScore: fs=0x10304, worked=Y, score=1->2
08-04-2022 19:01:45.983 +0000 DEBUG FileClassifierManager [104 tailreader0] - Finding type for file: /data/logs/gos-transactions.log
08-04-2022 19:01:45.983 +0000 DEBUG FileClassifierManager [104 tailreader0] - filename="/data/logs/gos-transactions.log" invalidCharCount="0" TotalCharCount="0" PercentInvalid="0.000000"
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Storing pending metadata for file=/data/logs/gos-transactions.log, sourcetype=log4j, charset=UTF-8
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - setting trailing nulls to false via 'true' or 'false' from conf'
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-transactions.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|53 ...
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - /data/logs/gos-transactions.log is a small file (size=0b).
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - initcrc has changed to: 0x720891e9581b5428.
08-04-2022 19:01:45.983 +0000 INFO FileTracker [104 tailreader0] - Locked key=0x720891e9581b5428 to state=0x7f14a1493000
08-04-2022 19:01:45.983 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x720891e9581b5428
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Normal record was not found for initCrc=0x720891e9581b5428.
08-04-2022 19:01:45.983 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x720891e9581b5428
08-04-2022 19:01:45.983 +0000 DEBUG WatchedFile [104 tailreader0] - Creating new pipeline input channel with channel id: 54.
08-04-2022 19:01:45.984 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-transactions.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|54 ...
08-04-2022 19:01:45.984 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /data/logs/gos-transactions.log to off=0
08-04-2022 19:01:45.984 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: /data/logs/gos-transactions.log (read 0 bytes)
08-04-2022 19:01:45.984 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x720891e9581b5428 locked to state=0x7f14a1493000
08-04-2022 19:01:45.984 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/data/logs/gos-transactions.log"
08-04-2022 19:01:45.985 +0000 DEBUG FilesystemFilter [104 tailreader0] - Testing path=/data/logs/gos-error.log(real=/data/logs/gos-error.log) with global blacklisted paths
08-04-2022 19:01:45.985 +0000 DEBUG FileClassifierManager [104 tailreader0] - Finding type for file: /data/logs/gos-error.log
08-04-2022 19:01:45.985 +0000 DEBUG FileClassifierManager [104 tailreader0] - filename="/data/logs/gos-error.log" invalidCharCount="0" TotalCharCount="0" PercentInvalid="0.000000"
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Storing pending metadata for file=/data/logs/gos-error.log, sourcetype=log4j, charset=UTF-8
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - setting trailing nulls to false via 'true' or 'false' from conf'
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-error.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|55 ...
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - /data/logs/gos-error.log is a small file (size=0b).
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - initcrc has changed to: 0x117409bca1aa15ee.
08-04-2022 19:01:45.985 +0000 INFO FileTracker [104 tailreader0] - Locked key=0x117409bca1aa15ee to state=0x7f14a1493400
08-04-2022 19:01:45.985 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x117409bca1aa15ee
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Normal record was not found for initCrc=0x117409bca1aa15ee.
08-04-2022 19:01:45.985 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x117409bca1aa15ee
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Creating new pipeline input channel with channel id: 56.
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-error.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|56 ...
08-04-2022 19:01:45.985 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /data/logs/gos-error.log to off=0
08-04-2022 19:01:45.986 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: /data/logs/gos-error.log (read 0 bytes)
08-04-2022 19:01:45.986 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x117409bca1aa15ee locked to state=0x7f14a1493400
08-04-2022 19:01:45.986 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/data/logs/gos-error.log"
08-04-2022 19:01:45.988 +0000 DEBUG FilesystemFilter [104 tailreader0] - Testing path=/data/logs/gos-reqresp.log(real=/data/logs/gos-reqresp.log) with global blacklisted paths
08-04-2022 19:01:45.988 +0000 DEBUG FileClassifierManager [104 tailreader0] - Finding type for file: /data/logs/gos-reqresp.log
08-04-2022 19:01:45.988 +0000 DEBUG FileClassifierManager [104 tailreader0] - filename="/data/logs/gos-reqresp.log" invalidCharCount="0" TotalCharCount="0" PercentInvalid="0.000000"
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Storing pending metadata for file=/data/logs/gos-reqresp.log, sourcetype=log4j, charset=UTF-8
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - setting trailing nulls to false via 'true' or 'false' from conf'
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-reqresp.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|57 ...
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - /data/logs/gos-reqresp.log is a small file (size=0b).
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - initcrc has changed to: 0x5f2cc808b9ff9884.
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Locked key=0x5f2cc808b9ff9884 to state=0x7f14a1493800
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x5f2cc808b9ff9884
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Normal record was not found for initCrc=0x5f2cc808b9ff9884.
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x5f2cc808b9ff9884
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Creating new pipeline input channel with channel id: 58.
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Attempting to load indexed extractions config from conf=source::/data/logs/gos-reqresp.log|host::cbs-global-outbound-services-systest-v1-0-0-deployment-7b9r9xc7|log4j|58 ...
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /data/logs/gos-reqresp.log to off=0
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: /data/logs/gos-reqresp.log (read 0 bytes)
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x5f2cc808b9ff9884 locked to state=0x7f14a1493800
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.988 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/data/logs/gos-reqresp.log"
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - /opt/splunk/etc/splunk.version is a small file (size=70b).
08-04-2022 19:01:45.988 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x88bb06af0f1e7032
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Record found, will advance file by offset=70 initcrc=0x88bb06af0f1e7032.
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - Preserving seekptr and initcrc.
08-04-2022 19:01:45.988 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /opt/splunk/etc/splunk.version to off=70
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: fname=/opt/splunk/etc/splunk.version initcrclen=1048576 fishstate=key=0x88bb06af0f1e7032 sptr=70 scrc=0x4ec910cde69cfaaa fnamecrc=0x88bb06af0f1e7032 modtime=1659639678
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x88bb06af0f1e7032 locked to state=0x7f14a140d400
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - /opt/splunk/var/log/splunk/first_install.log is a small file (size=70b).
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x2deca923e7cb5a06
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Record found, will advance file by offset=70 initcrc=0x2deca923e7cb5a06.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Preserving seekptr and initcrc.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /opt/splunk/var/log/splunk/first_install.log to off=70
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: fname=/opt/splunk/var/log/splunk/first_install.log initcrclen=1048576 fishstate=key=0x2deca923e7cb5a06 sptr=70 scrc=0x4ec910cde69cfaaa fnamecrc=0x2deca923e7cb5a06 modtime=1659639679
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x2deca923e7cb5a06 locked to state=0x7f14a140ec00
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Loading state from fishbucket.
08-04-2022 19:01:45.989 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/log/splunk/first_install.log"
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - /opt/splunk/var/log/splunk/splunkd-utility.log is a small file (size=560b).
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x33674102d8ed48d7
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Record found, will advance file by offset=560 initcrc=0x33674102d8ed48d7.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Preserving seekptr and initcrc.
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - seeking /opt/splunk/var/log/splunk/splunkd-utility.log to off=560
08-04-2022 19:01:45.989 +0000 DEBUG WatchedFile [104 tailreader0] - Reached EOF: fname=/opt/splunk/var/log/splunk/splunkd-utility.log initcrclen=1048576 fishstate=key=0x33674102d8ed48d7 sptr=560 scrc=0x484f37b61ca44a94 fnamecrc=0x33674102d8ed48d7 modtime=1659639693
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Unlocked key=0x33674102d8ed48d7 locked to state=0x7f14a140f000
08-04-2022 19:01:45.989 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/log/splunk/splunkd-utility.log"
08-04-2022 19:01:45.989 +0000 INFO FileTracker [104 tailreader0] - Retrieving record for key=0x3a51dcd384f999d2
08-04-2022 19:01:45.990 +0000 DEBUG WatchedFile [104 tailreader0] - Preserving seekptr and initcrc.
08-04-2022 19:01:46.082 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/log/splunk"
08-04-2022 19:01:46.082 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/log/watchdog"
08-04-2022 19:01:46.286 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/run/splunk/search_telemetry"
08-04-2022 19:01:46.286 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/opt/splunk/var/spool/splunk"
08-04-2022 19:01:46.286 +0000 DEBUG FilesystemChangeWatcher [100 MainTailingThread] - inotify doing infrequent backup polling for healthy path="/data/logs"
Hi community, I have to calculate the previous week's result and, based on that, the percent difference from this week's results. I have the following code, but I am not able to get the previous week's result right. My code:

| bucket _time span=1w
| lookup table_1 LicenseKey OUTPUT CustomerName
| eval CustomerName=coalesce(CustomerName,LicenseKey)
| stats count as Result by CustomerName, ErrorCode
| eventstats sum(Result) as Total by CustomerName
| eval PercentOfTotal = round((Result/Total)*100,3)
| sort - _time
| streamstats current=f latest(Result) as Result_Prev by CustomerName
| eval PercentDifference = round(((Result/Result_Prev)-1)*100,2)
| fillnull value="0"
| append [ search index=abc= xyz:123 ErrorCode!=0
    | `DedupDHI`
    | lookup Table_1 LicenseKey OUTPUT CustomerName
    | eval CustomerName=coalesce(CustomerName,LicenseKey)
    | stats count as Result by CustomerName
    | eval ErrorCode="Total", PercentOfTotal=100]
| fillnull value="0"
| lookup Table_2 ErrorCode OUTPUT Description
| lookup Table_1 LicenseKey OUTPUT CustomerName
| eval CustomerName=coalesce(CustomerName,LicenseKey)
| eval Error=if(ErrorCode!="Total", ErrorCode+" ("+coalesce(Description,"Description Missing - Update Table_2")+")", ErrorCode)
| rename Result_Prev as "Previous Week Results", PercentDifference as " Percent Difference", PercentOfTotal as "Percent of Total"
| fields CustomerName, Error, Result, "Previous Week Results", " Percent Difference", "Percent of Total"
| sort CustomerName, Error, PercentDifference

Output:

CustomerName | Error | Result | Previous Week Results | Percent Difference | Percent of Total
AIG Private Client Group | 1002 (abc) | 4 | 0 | 0 | 3.252
AIG Private Client Group | 1003 (cxz) | 2 | 4 | -50 | 1.626
AIG Private Client Group | 1013 (Invalid Format) | 12 | 4 | 200 | 9.756
AIG Private Client Group | 1023 (Invalid Name) | 3 | 4 | -25 | 2.439
AIG Private Client Group | 1027 (Invalid ) | 102 | 4 | 2450 | 82.927
AIG Private Client Group | Total | 123 | 0 | 0 | 100
AIICO | 1023 (Invalid Name) | 8 | 0 | 0 | 38.095
AIICO | 1201 | 1 | 8 | -87.5 | 4.762
AIICO | 1305 | 12 | 8 | 50 | 57.143
AIICO | Total | 21 | 0 | 0 | 100
Acceptance | 1023 (Invalid Name) | 3 | 0 | 0 | 27.273
Acceptance | 1027 | 8 | 3 | 166.67 | 72.727
Acceptance | Total | 11 | 0 | 0 | 100

Notice that the Previous Week Results column keeps appending 4, which is wrong. Any suggestions to solve this?
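One possible cause (an untested sketch of a fix): the stats does not group by _time, so the weekly bucketing is lost before streamstats runs, and streamstats is also not split by ErrorCode, so "previous" means the previous row rather than the previous week of the same error. Keeping _time through the aggregation might look like:

| bin _time span=1w
| stats count as Result by _time, CustomerName, ErrorCode
| sort CustomerName, ErrorCode, _time
| streamstats current=f window=1 last(Result) as Result_Prev by CustomerName, ErrorCode
| eval PercentDifference = round(((Result/Result_Prev)-1)*100,2)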
After some troubleshooting, we noticed that when we configure the SSL cert and key on the webhook input, it turns off the port being used by the webhook. If the input config has a path to the cert and key and we then run netstat, the port is NOT listening. Remove the cert/key config from the webhook input, and the port starts listening again. What would be causing this?
First of all, these questions may be very basic compared to other community questions, so please don't make any bad comments; I understand.

Baseline:
- Win2019 Server (Server A): Splunk Enterprise installed; will be used as the main SEARCH HEAD and INDEXER.
- Win2019 Server (Server B): Universal Forwarder installed and connected to Server A; will forward data that I will manually feed.
- RedHat (Server X, syslog server): Universal Forwarder installed and connected to Server A; I want this to send the syslogs to Server A.

Problem and Question 1: Server B is forwarding data to the 'main' index by default. How can I change this so that Server B's data lands in a 'test' index on Server A? Also, Server B is by default sending logs from /var/log/splunk; is there a way to change the location, or to choose which items to send? I understand it's probably in the .conf files, but I just cannot find it.

Problem and Question 2: Server X, on the other hand, is sending its logs to the "_internal" index on Server A. When installing both forwarders (Server B and X) I used the same IP address, so I'm not sure why they send to different indexes. How can I change the destination index for Server X's data? Server X also seems to be sending logs from /var/log/splunk; how can I change this, and how can I select which logs to send and not send?
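A minimal sketch of the relevant forwarder configs (paths, IPs, and index names below are placeholders, and the 'test' index must already exist on Server A):

# outputs.conf on Server B and Server X -- where to send data
[tcpout]
defaultGroup = serverA

[tcpout:serverA]
server = <serverA_ip>:9997

# inputs.conf on Server B -- what to monitor and which index it lands in
[monitor://C:\path\to\my\app\logs]
index = test
sourcetype = my_app_logs

The entries from /var/log/splunk (splunkd.log, metrics.log, etc.) are the forwarder's own internal logs, which is why they land in _internal; your own data only flows once you add [monitor://...] stanzas in inputs.conf.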
Hi, is there a way to determine Splunk license usage for a specific eventtype? I used

index=_internal source=*license_usage.log* st=abcd

to determine the license usage for the entire sourcetype. To dig deeper into a specific eventtype, I found articles pointing to len(_raw), which gives the byte length of the raw event. I used the below to check whether it returns the same numbers as license_usage.log:

index="x" sourcetype=abcd
| bin _time span=1d
| eval size=len(_raw)
| stats sum(size) as sizeInBytes by _time
| eval GB = sizeInBytes/1024/1024/1024

The numbers do not match; the numbers from len(_raw) are much higher than the actual license usage.
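For the eventtype-level estimate itself, the same search can at least be filtered down (a sketch; the eventtype name is a placeholder, and len(_raw) only approximates licensed volume, so some mismatch is expected):

index="x" sourcetype=abcd eventtype=my_eventtype
| bin _time span=1d
| eval size=len(_raw)
| stats sum(size) as sizeInBytes by _time
| eval GB = round(sizeInBytes/1024/1024/1024, 3)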
We're summary indexing events from one index into another. The original index contains JSON events, e.g.:

{"field1": "value1", "field2": "value2"}

The summary events all have a hostname value appended to them, e.g.:

{"field1": "value1", "field2": "value2"}, hostname=abc

I found abc configured in alert_actions.conf under the email stanza:

[email]
hostname = abc

I can't figure out why this is happening and how to remove it from future summary events. In advanced edit I can see a field called action.summary_index.hostname which contains "abc". Is it simply inheriting the hostname parameter from alert_actions.conf? I tried removing the value from this parameter and saving, but when I go back into advanced edit the value is back. This search cluster is quite mature (old...), so I wouldn't be happy simply removing that hostname parameter from alert_actions.conf and "hoping for the best"; I'm not sure where else it might be used and what impact it could have on other users. Any thoughts at all would be welcome here.
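In case it helps frame the question: alert-action parameters can generally be overridden per saved search via action.<action>.<param> in savedsearches.conf, so a sketch of pinning the value at the search level (stanza name and value are placeholders) would be:

[My Summary Search]
action.summary_index = 1
action.summary_index.hostname = desired_value

Whether clearing it there wins over the hostname in alert_actions.conf is exactly the inheritance question being asked.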
Hi All, we are planning to upgrade Splunk ES from 6.2 to 7.0.1. The release notes for 7.0.1 list this deprecated feature:

No support for Malware Domains threatlist: The Malware Domains threatlist is not supported in Enterprise Security version 6.5.0 or higher.

Is it some kind of lookup definition, as mentioned in the link below?
https://community.splunk.com/t5/Security/Add-domains-to-threat-lists/td-p/116392
Or is it related to the dashboard below in the Enterprise Security Suite?
SplunkEnterpriseSecuritySuite/Security Intelligence/Threat Intelligence/Threat Activity/Threat Group/Threat Group (malwaretriage)
Basically, I am not able to find out which feature is going to be deprecated or removed. Please suggest.
Hi Splunkers, I'm trying to set an alert condition on blocked traffic for destination IP addresses from 13.108.0.0 to 13.111.255.255 and from 66.231.70.0 to 66.231.85.255, but I'm really stuck; can anybody help, please? My query below:

| tstats count values(All_Traffic.app) AS app values(All_Traffic.dvc) AS devicename values(All_Traffic.src_zone) AS src_zone values(All_Traffic.dest_zone) AS dest_zone from datamodel=Network_Traffic where All_Traffic.action=blocked All_Traffic.src_ip IN (*) All_Traffic.dest IN (13.108.0.0 13.111.255.255 OR 66.231.80.0 66.231.95.255) All_Traffic.dest_port IN (*) by _time, All_Traffic.action, All_Traffic.src_ip, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.transport, All_Traffic.rule, sourcetype
| rename All_Traffic.* AS *
| sort - _time limit=0
| fields - count
| rename rule as policy, src_ip AS src
| eval action=case(action="teardown","drop",1=1,action)
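One issue worth noting: IN () matches exact values, not ranges, so listing the two boundary addresses will never match anything in between. A sketch of an alternative (untested; using the ranges from the question text, decomposed into CIDR blocks) is to filter after the tstats with cidrmatch:

| tstats count from datamodel=Network_Traffic where All_Traffic.action=blocked by _time, All_Traffic.src_ip, All_Traffic.dest, All_Traffic.dest_port
| rename All_Traffic.* AS *
| where cidrmatch("13.108.0.0/14", dest)
    OR cidrmatch("66.231.70.0/23", dest)
    OR cidrmatch("66.231.72.0/21", dest)
    OR cidrmatch("66.231.80.0/22", dest)
    OR cidrmatch("66.231.84.0/23", dest)

13.108.0.0/14 covers 13.108.0.0-13.111.255.255, and the four 66.231.x blocks together cover 66.231.70.0-66.231.85.255.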
Our Splunk rep walked us through setting up SSL for our Splunk servers' communication with each other and for our Universal Forwarders to connect to our indexer. However, we still get the warning:

X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA)

In addition, Nessus scans find the default Splunk certificate on all of the systems with Universal Forwarders. We have SSL certificates created by our government agency's CA. I have verified that our indexer's server.conf points sslRootCAPath to our CA's pem, and that our indexer's inputs.conf points serverCert at our server's pem. I have verified that our Universal Forwarders' outputs.conf have clientCert pointing at our server's pem, which is located on each system in C:\Program Files\SplunkUniversalForwarder\etc\auth, and that the same outputs.conf have sslRootCAPath pointing at our CA's pem in the same directory. Why do we still get this warning? Are we missing a setting somewhere?
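One commonly overlooked piece (offered as a guess, with placeholder paths): the management port (8089) has its own certificate, configured under [sslConfig] in server.conf, separate from the forwarding certs in inputs.conf/outputs.conf, and it stays on SplunkServerDefaultCert unless replaced on every instance, including each Universal Forwarder:

# server.conf on each Splunk instance, including the UFs
[sslConfig]
serverCert = C:\Program Files\SplunkUniversalForwarder\etc\auth\myServerCert.pem
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\myCACert.pem

Since Nessus flags the default cert on the forwarders themselves, the scan is most likely hitting that management port rather than the forwarding connection.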