All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all, I am finding the average per day for some of my data. My search looks like this:

| bucket _time span=1d
| stats distinct_count(Task) as counts by _time Group
| stats avg(counts) as AverageCountPerDay by Group

I was able to get results, but the problem is that the stats avg does not consider the days on which there is no "Task". I want my search to consider all the days. How can I achieve this?

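One possible approach (a minimal sketch built on the fields in the question, not tested against this data): let timechart build the daily buckets, because it fills days with no events with zero, then flip the result back into rows with untable before averaging.

| timechart span=1d dc(Task) by Group
| untable _time Group counts
| fillnull value=0 counts
| stats avg(counts) as AverageCountPerDay by Group

One caveat: a Group value that never appears anywhere in the time range still will not show up at all.
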
Hello everyone, I need help with this, please. I would like to see the percentage of IPs that log in from different countries, with the total percentage of all the IPs from one country grouped into one column. I am not quite sure how to achieve this.

IP       Country  Percentage
2.3.1.2  BR       22%
2.4.3.1  BR       27%
                  Total=49%
1.2.3.4  CA       11%
1.1.3.2  CA       10%
1.2.3.2  CA       8%
                  Total=29%
6.5.3.2  IN       5%
6.4.2.1  IN       7%
6.2.3.1  IN       8%
5.7.9.8  IN       2%
                  Total=22%

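One way this is sometimes approached (a sketch only; the IP and Country field names are assumptions, since the question does not show the events): count logins per IP, compute each IP's share of the overall total with eventstats, and carry the per-country subtotal as an extra column.

| stats count by Country IP
| eventstats sum(count) as grand_total
| eventstats sum(count) as country_count by Country
| eval Percentage = round(100*count/grand_total, 0)."%"
| eval "Country Total" = round(100*country_count/grand_total, 0)."%"
| sort Country IP
| table IP Country Percentage "Country Total"
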
I have the following that should display the current free memory on a Windows system. However, it appears that I am missing something.

index="perfmonmemory"
| eval mem_free=mem_free/1024
| eval mem_free=round(mem_free,0)
| timechart count span=1min
| bin _time span=1min
| stats avg(mem_free) as rpm
| gauge rpm 10 20 30 40 50 60

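For comparison, a possible restructuring (a sketch only; it assumes mem_free is already an extracted field in these events): the timechart count step outputs only _time and count, so mem_free is gone before the later stats can average it, and the gauge only needs a single value anyway. Swap avg() for latest() if the gauge should show the most recent reading rather than the average.

index="perfmonmemory"
| eval mem_free=round(mem_free/1024, 0)
| stats avg(mem_free) as rpm
| gauge rpm 10 20 30 40 50 60
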
Hi there, I've been attempting to create a dashboard with metrics from the itsi_im_metrics index, but am struggling with "instances" and the LogicalDisk.%_Free_Space metric. Using the following search, I can see the "instances" dimension being used for each logical volume:

| mcatalog values(_dims) WHERE "index"="*" GROUPBY metric_name index instance
| rename values(_dims) AS dimensions
| table metric_name dimensions index instance

I can get a visualisation for each of the instances with the following, changing the C: to d: or E: respectively:

| mstats prestats=true avg(LogicalDisk.Free_Megabytes) WHERE (`itsi_entity_type_windows_metrics_indexes`) span=1m AND instance=C:
| timechart span=1m avg(LogicalDisk.Free_Megabytes) as "Megabytes Free"

...but I can't get all three of them (C:, d: and E:) into the same table like this:

_time | C: % free | d: % free | E: % free

Any tips or advice would be greatly appreciated! Cheers

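A sketch that might get all instances into one table (assuming the % Free Space metric is reported under the name LogicalDisk.%_Free_Space, which is taken from the question): split the mstats and the timechart by the instance dimension instead of filtering to a single value.

| mstats avg("LogicalDisk.%_Free_Space") AS pct_free WHERE (`itsi_entity_type_windows_metrics_indexes`) AND (instance="C:" OR instance="d:" OR instance="E:") span=1m BY instance
| timechart span=1m avg(pct_free) BY instance

This should produce one _time column plus one column per instance value, which matches the desired table layout.
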
I inherited this Splunk instance that uses SAML, but when I add a "new" user, it is configured with "Splunk" as the Authentication Method. All the other users have SAML as the Authentication Method. Can anyone share some help with adding users under the SAML authentication method?

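In case it helps, my understanding is that SAML users are not created under Settings > Users at all (anything created there always gets the native "Splunk" authentication method); they show up with the SAML method once they log in through the IdP, and their access comes from mapping their IdP group to a Splunk role. A minimal sketch of such a mapping in authentication.conf, where the stanza name and group names are placeholders:

# authentication.conf on the search head handling SAML
[authentication]
authType = SAML
authSettings = my_saml_idp

[roleMap_SAML]
# <splunk_role> = <IdP group name(s)>
admin = Splunk-Admins
user  = Splunk-Users
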
Two independent playbooks performing different automation tasks are triggered by the same event. The expectation is that both playbooks will start at approximately the same time; however, it was observed that in some cases they start anywhere between 10 and 50 seconds apart. Is there some way to configure SOAR to run these two playbooks synchronously?

First playbook start time:
2022-10-12T15:07:40.773325Z: Starting playbook 'core/SGs Link Verification (id: 121, version: 14, pyversion: 3, scm id: 10)' on event '1811' with playbook run id: 513, running as user '2' with scope 'new'

Second playbook start time:
2022-10-12T15:08:32.483185Z: Starting playbook 'core/Limit SGs Run Time (id: 122, version: 10, pyversion: 3, scm id: 10)' on event '1811' with playbook run id: 514, running as user '2' with scope 'new'

We have implemented the Splunk Add-On for Google Workspace (https://splunkbase.splunk.com/app/5556) in our Splunk environment, using this documentation: https://docs.splunk.com/Documentation/AddOns/released/GoogleWorkspace/Configureinputs1

Currently, we are successfully getting logs with the sourcetype "gws:gmail", which is good. However, we are not getting logs for the other sourcetypes:

gws:reports:admin
gws:reports:calendar
gws:reports:context_aware_access
gws:reports:drive
gws:reports:gcp
gws:reports:login
gws:reports:oauthtoken
gws:reports:saml

Looking at the _internal index, we see the following error:

2022-10-06 18:45:36,130 ERROR pid=32667 tid=MainThread file=activity_report.py:stream_events:140 | Exception raised while ingesting data for activity report: . Traceback:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/activity_report.py", line 133, in stream_events
    service,
  File "/opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/gws_runner.py", line 97, in run_ingest
    proxies,
  File "/opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/gws_request.py", line 116, in fetch_report
    rand=random.random,
  File "/opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/gws_request.py", line 52, in _retry_request
    raise CouldNotAuthenticateException()
gws_request.CouldNotAuthenticateException

This appears to be a permissions/authentication issue. We have recreated the accounts and applied them to the inputs in the app; however, this has not resolved the issue. At this stage we are trying to determine the appropriate permissions for the account needed to access the above sourcetypes.

To clarify: we have two service accounts for this implementation, one for Gmail (which is working) and one for the other activity reports (which is not). At this stage, I just need permissions/role/scope info for the non-working service account. The troubleshooting documentation is somewhat confusing as to what is needed, specifically steps 1 and 6, which seem to contradict one another:

1. Log into your Google Cloud service account. This service account cannot be an organization admin account.
2. Copy the Client ID of this service account.
3. Navigate to https://admin.google.com/ac/owl/domainwidedelegation.
4. Check if the Client ID for your service account contains the https://www.googleapis.com/auth/admin.reports.audit.readonly scope. If it is not there, add your Client ID, and specify the https://www.googleapis.com/auth/admin.reports.audit.readonly scope.
5. Navigate to https://console.cloud.google.com/iam-admin/iam.
6. Check if the account you are using for the Username field contains the Organization Administrator role.
7. Navigate to the Certificate field. Verify that you added the entire JSON file that you downloaded as a key for your service account.
8. Save your changes.

It seems like they have merged the instructions for two service accounts into one? Apologies in advance if I am missing something simple - I think I may have gotten too far into the weeds on this one.

I am a student and I want to use Splunk for educational purposes. I believe that 14 days of trial (in the case of Cloud) or 60 days of trial (in the case of Enterprise) is not enough for me, so I'd like to use the Free license mentioned here. So, the question: is the Splunk Free license available only for the Enterprise version, or can it be used with Splunk Cloud too?

We need to know how to monitor lookups created inside Splunk, checking whether they are empty or contain errors. We use REST to bring up all the lookups:

| rest /services/data/transforms/lookups
| table eai:acl.app eai:appName filename title fields_list updated id

But how can we check whether the lookups are wrong or empty?

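A sketch of one way to check whether a given lookup is empty (the lookup names here are placeholders): inputlookup returns the rows of the file, so a zero count flags an empty lookup, and append lets several lookups be checked in a single search.

| inputlookup my_lookup.csv
| stats count
| eval lookup="my_lookup.csv", status=if(count==0, "empty", "ok")
| append
    [| inputlookup my_other_lookup.csv
     | stats count
     | eval lookup="my_other_lookup.csv", status=if(count==0, "empty", "ok")]
| table lookup count status
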
For those of you who have installed SC4S in a Docker for Windows environment, what differences were there in the install (as opposed to any of the other environments in the documentation)?  Did you run into any Docker for Windows roadblocks?  Any particular challenges?
Hi, I have a search head cluster of 3 members and a scheduled search which basically does an outputcsv at the end. My question is: is there a way I can run the scheduled search on only one node, not on all the nodes?

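For what it's worth, my understanding is that in a search head cluster each scheduled run is already dispatched on only one member (the captain delegates it), but the outputcsv file is written only to that member's local disk, so it can land on a different node each run. A common workaround is to write to a lookup instead, since lookup files are replicated across the cluster; a sketch, with the lookup name as a placeholder:

... your scheduled search ...
| outputlookup scheduled_results.csv
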
Hi guys, I need help with a Splunk query. The boss wants me to have a total of all the different types of errors. When I run this query:

index=css-dev error="*"

it gives the logs where an error field is present. The error field has 5 values: access_denied, invalid_request, invalid_token, server_error, unauthorised_client.

In addition to this "error" field, there are some other errors I also want to capture, but they are added by developers through log statements. These errors are:

1. runtime error: attempt to get length of a boolean value
2. Authentication error : WRONGPASS invalid username-password pair
3. Error while sending 2 (size = 1KB) traces to the DD agent

These 3 errors are not included in the "error" field, so when I run the query index=css-dev error="*", I cannot find them. What I want is a query that includes the errors already present in the "error" field (access_denied, invalid_request, invalid_token, server_error, unauthorised_client) and also dynamically picks up any new error added by the developers. Is that possible?

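A possible sketch (assuming the free-text errors live in _raw and their wording is stable): keep the structured error field where it exists, fall back to a regex capture of the logged error strings, and total by the combined field. Truly dynamic capture of brand-new messages depends on the developers logging errors in a consistent format, so the alternation list below would need extending over time.

index=css-dev (error="*" OR "*error*")
| rex field=_raw "(?<raw_error>(runtime error|Authentication error|Error while sending)[^\r\n]*)"
| eval error_type=coalesce(error, raw_error)
| stats count by error_type
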
I am planning a migration of Splunk Enterprise to a new instance. The old instance consists of a single standalone server. The new one has a search head, an indexer cluster master, and 3 indexer cluster peers. My original plan was this:

1. Add the old standalone server to the new search head as a search peer.
2. Instruct users to search from the new search head instead of the old standalone server.
3. Reconfigure my 300+ universal forwarders to send data to the new indexer cluster instead of the old standalone instance.
4. Retain the old standalone server for 1 year until we no longer need the data, then decommission it.

But based on the following documentation, I would also need to deactivate the search role on the old standalone server before performing step 1: https://docs.splunk.com/Documentation/Splunk/9.0.1/DistSearch/Configuredistributedsearch

Am I interpreting this correctly? Thanks in advance.

I have been working on this issue for several days now and cannot seem to pull message trace data over to Splunk via OAuth using the Splunk Add-on for Microsoft Office 365 Reporting Web Service 2.0.1 on Splunk 8.1.1. The following is the error that is thrown:

ERROR pid=42990 tid=MainThread file=base_modinput.py:log_error:316 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 140, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 355, in collect_events
    get_events_continuous(helper, ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 97, in get_events_continuous
    messages = message_response['value'] or None
TypeError: 'NoneType' object is not subscriptable

Any help on this issue is greatly appreciated.

I have an action that I need a response from before the playbook can proceed, but the app is prone to occasionally time out or return an invalid result. To handle this, I want to try the action 3 times; if I still don't get a valid response, then the playbook should proceed with handling it as a failure and alert as such. I'm having trouble finding a good way to build the loop, though. It doesn't appear that there's a way to declare a variable (i.e. the loop counter) outside the action block, so I have no way to tell where I am in the loop. How can I declare a variable with global scope (or at least scope it outside the action block) in 5.2.1.78411? Alternately, is there a "retry this action n times if it fails" option that I can apply?
I am trying to create an alert and send the alert details to a summary index. Below is the search I am using. I have scheduled it to run every day at 2 AM, look at yesterday's data, send the alert, and then send the same data to the summary index. I am trying to create another alert that compares the data with the summary index and alerts only if there is a difference in the results. I am trying to compare the combination of the host, gpu, and VBIOS_Version fields; if all of these are different, then send an alert.

Query for the alert:

index=preo host IN(*)
| rex field=_raw "log-inventory.sh\[(?<id>[^\]]+)\]\:\s*(?<gpu>[^\:]+)\:\s*(?<Hardware_Details>.*)"
| rex field=_raw "GPU.*PCISLOT.*VBIOS\:\s(?<ios>[^\,]+)"
| search gpu=GPU*
| eval gpu_ios=gpu." : ".ios
| stats latest(_time) AS _time latest(*) AS * BY host gpu
| bucket _time span=1m
| bucket _time span=1m
| appendpipe [| top 1 ios BY _time host | rename ios AS common_ios | table _time common_ios host]
| eventstats max(common_ios) AS common_ios values(gpu_ios) AS gpu_ios BY _time host
| table _time host gpu ios common_ios gpu_ios
| rename _time as time
| eval time=strftime(time,"%Y-%m-%d %H:%M:%S")
| rename ios AS VBIOS_Version common_ios as Common_VBIOS_Version gpu_ios as GPU_VBIOS
| where LEN(gpu)>1 AND VBIOS_Version!=Common_VBIOS_Version
| collect index=summary marker="summary_type=test"
| eval details= "preos Splunk: ".host. " node VBIOS mismatch " .gpu. " " .VBIOS_Version. " Common:" .Common_VBIOS_Version." date:" .time
| table details

Below is the query I tried for comparing with the summary index and alerting if there is a change:

index=preos host IN(*) *GPU*: PCISLOT*
| rex field=_raw "log-inventory.sh\[(?<id>[^\]]+)\]\:\s*(?<gpu>[^\:]+)\:\s*(?<Hardware_Details>.*)"
| rex field=_raw "GPU.*PCISLOT.*VBIOS\:\s(?<ios>[^\,]+)"
| search gpu=GPU*
| eval gpu_ios=gpu." : ".ios
| stats latest(_time) AS _time latest(*) AS * BY host gpu
| bucket _time span=1m
| bucket _time span=1m
| appendpipe [| top 1 ios BY _time host | rename ios AS common_ios | table _time common_ios host]
| eventstats max(common_ios) AS common_ios values(gpu_ios) AS gpu_ios BY _time host
| table _time host gpu ios common_ios gpu_ios
| rename _time as time
| eval time=strftime(time,"%Y-%m-%d %H:%M:%S")
| rename ios AS VBIOS_Version common_ios as Common_VBIOS_Version gpu_ios as GPU_VBIOS
| where LEN(gpu)>1 AND VBIOS_Version!=Common_VBIOS_Version
| join host gpu VBIOS_Version
    [search index=summary summary_type=test
    | table gpu orig_host VBIOS_Version
    | rename orig_host as host ]

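As an alternative to the join (just a sketch built on the fields already in the searches; join can silently drop rows when subsearch limits are hit): append the summary-index copy to the live results, then count how many distinct VBIOS versions each host/gpu pair has across the two sources. More than one distinct value means something changed. The first line is a placeholder for the existing search up to its where clause.

<current search producing host gpu VBIOS_Version>
| eval source_type="live"
| append
    [ search index=summary summary_type=test
    | rename orig_host as host
    | eval source_type="summary"
    | table host gpu VBIOS_Version source_type ]
| stats dc(VBIOS_Version) as versions values(VBIOS_Version) as VBIOS_Version by host gpu
| where versions > 1
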
Hi forum! Getting a bit muddled here. I want to statistically demonstrate a recurring weekly trend, so timewrap sounds great. Then again, I want to work out a 95% variation of this, so predict sounds awesome. I want to do this so that I can hopefully create an action (alert) condition based on overlaying this variance on a real-time data series, enabling me (hopefully) to answer the question "is this normal or not?" When I look at what the two commands do, they seem to want to do different things - I mean, how can you predict a timewrap that circles back by design? So Splunk - understandably - errors, and I ask forgiveness for my bad logic. Can anyone give me any advice?

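One pattern that might fit (a sketch; the index and the measured value are placeholders): instead of combining timewrap and predict, build the weekly baseline directly - average and standard deviation per hour-of-week over the past weeks - and then flag the current values that fall outside baseline ± 2 standard deviations, which is roughly the 95% band.

index=my_index earliest=-28d@d
| timechart span=1h count as value
| eval hour_of_week=strftime(_time, "%w-%H")
| eventstats avg(value) as baseline stdev(value) as sd by hour_of_week
| eval upper=baseline+2*sd, lower=baseline-2*sd
| where _time >= relative_time(now(), "-1d@d")
| eval is_abnormal=if(value>upper OR value<lower, 1, 0)

The where clause keeps only the most recent day for alerting, while the eventstats baseline is still computed from the full four-week window.
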
How can I set a report to run hourly for the time frame between the 26th and the 5th of each month?

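One way this is often handled (a sketch; the search name is a placeholder): a cron day-of-month range cannot wrap around the month boundary, but two comma-separated ranges in the report's cron schedule cover it.

# savedsearches.conf - run at the top of every hour, but only on the 26th-31st and 1st-5th
[my_monthly_window_report]
enableSched = 1
cron_schedule = 0 * 26-31,1-5 * *

The same cron expression can be entered in the UI by choosing the "Run on Cron Schedule" option for the report.
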
Getting errors: "Failed to start KV Store process. See mongod.log and splunkd.log for details." I tried a few steps - rm -rf /data1/kvstore/mongo/mongod.lock followed by a restart - but the status still shows as failed. Please advise on how to resolve this.

/opt/splunk/bin #systemctl status Splunkd
● Splunkd.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
Loaded: loaded (/etc/systemd/system/Splunkd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-10-12 11:02:22 CDT; 9s ago
Main PID: 28477 (splunkd)
Tasks: 7
Memory: 111.6M (limit: 100.0G)
CGroup: /system.slice/Splunkd.service
├─28477 splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd
├─28530 [splunkd pid=28477] splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd [process-runner]
└─28558 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_billing_cur.py --scheme
Oct 12 11:02:24 usappslnglp100 splunk[28477]: Checking critical directories... Done
Oct 12 11:02:24 usappslnglp100 splunk[28477]: Checking indexes...
Oct 12 11:02:24 usappslnglp100 splunk[28477]: Validated: _audit _internal _introspection _metrics _telemetry _thefishbucket add_on_builder_index ...rts ivz_o
Oct 12 11:02:24 usappslnglp100 splunk[28477]: Done
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Checking filesystem compatibility... Done
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Checking conf files for problems...
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Done
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Checking default conf files for edits...
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Validating installed files against hashes from '/opt/splunk/splunk-8.0.1-6db836e2fb9e-linux-2.6-x86...manifest'
Oct 12 11:02:25 usappslnglp100 splunk[28477]: 2022-10-12 11:02:25.875 -0500 splunkd started (build 6db836e2fb9e) pid=28477
Hint: Some lines were ellipsized, use -l to show in full.

root@usappslnglp100:/opt/splunk/bin #./splunk show kvstore-status
Your session is invalid. Please login.
Splunk username: admin
Password:
This member:
backupRestoreStatus : Ready
disabled : 0
guid : 45F2DDC2-C57A-4A47-B6C9-3523E0D936E6
port : 8191
standalone : 1
status : failed

Hi all! I feel as if I'm overcomplicating an issue, but I haven't gotten any built-in Splunk tools to work. Here's the situation: I have a field that I extract from my logs using rex. I want to take an average AND a standard deviation of each field value's daily occurrence count, so I can detect any new abnormalities in this field. Here's the field extraction:

earliest=-7d@d latest=-0d@d index=prod "<b>ERROR:</b>"
| rex "foo:\ (?<my_foo>[^\ ]*)"
| rex "bar:\ (?<my_bar>[^\<]*)"
| eval my_foo = coalesce(my_foo,"-")
| eval my_bar = coalesce(my_bar, "-")
| rex mode=sed field=my_bar "s/[\d]{2,50}/*/g"
| strcat my_foo " - " my_bar my_foobar

I can use stats to get a total count by my_foobar, and I can use timechart to get a count by day for my_foobar. However, if I try to average by day after timechart, I get no output unless I give up my my_foobar discretion:

| timechart span=1d@d count as my_count by my_foobar
| stats avg(my_count)

No output.

| bin span=1d@d my_chunk
| stats count(my_script_message) by my_chunk

No output.

I did come up with a solution, but it's hideous. I basically made my own bins using joins:

<initial search above>
| chart count as my_count1 by my_foobar
| join my_foobar [search <initial search above with my_count iterated>]
<x5 more joins>
| eval my_avg = SUM(my_count1 + my_count2 + my_count3 + my_count4 + my_count5 + my_count6 + my_count7)/7
| eval my_std = (POW((my_count1 - my_avg),2) + POW((my_count2 - my_avg),2) + POW((my_count3 - my_avg),2) + POW((my_count4 - my_avg),2) + POW((my_count5 - my_avg),2) + POW((my_count6 - my_avg),2) + POW((my_count7 - my_avg),2))/7
| eval my_last_day_dev = ABS(my_count1 - my_mess_avg)
| table my_foobar my_avg my_std my_last_day_dev
| search my_last_day_dev > my_std

I hate it, and I need to use this methodology for many of my monitoring plans. Any ideas on how to make this sleeker?

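A sketch of a simpler shape (built on the extraction from the question): bin by day first, count per day per my_foobar, then let eventstats compute the average and standard deviation across the days for each my_foobar value - the joins should not be needed.

<initial search above>
| bin _time span=1d@d
| stats count as daily_count by _time my_foobar
| eventstats avg(daily_count) as my_avg stdev(daily_count) as my_std by my_foobar
| eval my_last_day_dev=abs(daily_count - my_avg)
| where _time >= relative_time(now(), "-1d@d") AND my_last_day_dev > my_std

One caveat, same as with any count-based baseline: days on which a given my_foobar value never occurs produce no row at all, so they are not counted as zeros unless they are filled in first.
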