All Topics


Hi guys, I need help with a Splunk query. The boss wants me to have a total of all the different types of errors. When I run this query:

index=css-dev error="*"

it returns the logs where an "error" field is present in each log. The error field has 5 values: access_denied, invalid_request, invalid_token, server_error, unauthorised_client.

In addition to this "error" field, there are some other errors I also want to capture, but they are written by developers directly into the log text. These errors are:
1. runtime error: attempt to get length of a boolean value
2. Authentication error : WRONGPASS invalid username-password pair
3. Error while sending 2 (size = 1KB) traces to the DD agent

Because these 3 errors are not in the "error" field, the query index=css-dev error="*" cannot find them. What I want is a query that includes the values already present in the "error" field (access_denied, invalid_request, invalid_token, server_error, unauthorised_client) and also dynamically picks up any new error a developer adds. Is it possible?
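A minimal sketch of one approach: fall back to a raw-text extraction when the structured "error" field is absent, then count by the combined value. The regex and the raw_error/error_type field names are illustrative assumptions, not from the original logs:

index=css-dev (error=* OR "error")
| rex field=_raw "(?i)(?<raw_error>(?:[\w-]+ )?error\b[^\r\n]{0,60})"
| eval error_type=coalesce(error, raw_error)
| where isnotnull(error_type)
| stats count by error_type

The coalesce keeps the structured values where they exist, so new developer-written errors show up as new error_type rows without maintaining a fixed list.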
I am planning a migration of Splunk Enterprise to a new instance. The old instance consists of a single standalone server. The new one has a search head, an indexer cluster master, and 3 indexer cluster peers. My original plan was this:
1. Add the old standalone server to the new search head as a search peer
2. Instruct users to search from the new search head instead of the old standalone server
3. Reconfigure my 300+ universal forwarders to send data to the new indexer cluster instead of the old standalone instance
4. Retain the old standalone server for 1 year until we no longer need the data, then decommission it
But based on the following documentation, I would also need to deactivate the search role on the old standalone server before performing step 1. https://docs.splunk.com/Documentation/Splunk/9.0.1/DistSearch/Configuredistributedsearch
Am I interpreting this correctly? Thanks in advance.
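For step 1, the CLI form run on the new search head would be something like this minimal sketch (hostname and credentials are placeholders):

splunk add search-server https://old-standalone:8089 -auth admin:changeme -remoteUsername admin -remotePassword remotepass

The remoteUsername/remotePassword pair is an admin account on the old standalone server; the peer can also be added from the UI under Settings > Distributed search.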
I have been working on this issue for several days now and cannot seem to pull message trace data over to Splunk via OAuth using the Splunk Add-on for Microsoft Office 365 Reporting Web Service 2.0.1 on Splunk 8.1.1. The following error is thrown:

ERROR pid=42990 tid=MainThread file=base_modinput.py:log_error:316 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 140, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 355, in collect_events
    get_events_continuous(helper, ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 97, in get_events_continuous
    messages = message_response['value'] or None
TypeError: 'NoneType' object is not subscriptable

Any help on this issue is greatly appreciated.
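The traceback shows the add-on subscripting a response that came back as None, so the root cause is usually an empty or failed API reply (often OAuth scope or permission related). A minimal illustrative guard for the pattern at line 97, not the official add-on patch:

# Illustrative only: the O365 API can return None or a body without "value",
# so don't subscript the response blindly.
def extract_messages(message_response):
    """Return the list of message trace records, or [] when the response is empty."""
    if not message_response:
        return []
    return message_response.get('value') or []

Guarding like this turns the crash into an empty interval, which at least lets the input keep running while the underlying API/permission issue is investigated.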
I have an action that I need a response from before the playbook can proceed, but the app is prone to occasionally timing out or returning an invalid result. To handle this, I want to try the action 3 times; if I still don't get a valid response, the playbook should proceed to handle it as a failure and alert as such. I'm having trouble finding a good way to build the loop, though. There doesn't appear to be a way to declare a variable (i.e. the loop counter) outside the action block, so I have no way to tell where I am in the loop. How can I declare a variable with global scope (or at least scope it outside the action block) in 5.2.1.78411? Alternatively, is there a "retry this action n times if it fails" option that I can apply?
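One pattern is to persist a counter in the playbook's run data, which survives across action callbacks within a run. A minimal sketch, assuming the phantom.get_run_data/save_run_data helpers behave as in current SOAR releases (the key name and function are made up for illustration):

import phantom.rules as phantom  # available inside playbook code blocks

def should_retry(max_tries=3):
    # run data persists across action callbacks in the same playbook run,
    # so it can stand in for a "global" loop counter
    raw = phantom.get_run_data(key='my_action_retry_count')  # '' on the first pass
    count = int(raw) if raw else 0
    if count >= max_tries:
        return False  # exhausted: route to the failure/alerting branch
    phantom.save_run_data(key='my_action_retry_count', value=str(count + 1))
    return True

The action's failure callback can call this and either re-run the action block or branch to the alert path.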
I am trying to create an alert and send the alert details to a summary index. Below is the search I am using. I have scheduled it to run every day at 2 AM over yesterday's data, send the alert, and then write the same data to the summary index. I am now trying to create a second alert that compares the current data with the summary index and alerts only if the results differ. I want to compare the combination of the host, gpu, and VBIOS_Version fields; if this combination changes, send an alert.

Query for the alert:

index=preos host IN(*)
| rex field=_raw "log-inventory.sh\[(?<id>[^\]]+)\]\:\s*(?<gpu>[^\:]+)\:\s*(?<Hardware_Details>.*)"
| rex field=_raw "GPU.*PCISLOT.*VBIOS\:\s(?<ios>[^\,]+)"
| search gpu=GPU*
| eval gpu_ios=gpu." : ".ios
| stats latest(_time) AS _time latest(*) AS * BY host gpu
| bucket _time span=1m
| appendpipe [| top 1 ios BY _time host | rename ios AS common_ios | table _time common_ios host]
| eventstats max(common_ios) AS common_ios values(gpu_ios) AS gpu_ios BY _time host
| table _time host gpu ios common_ios gpu_ios
| rename _time as time
| eval time=strftime(time,"%Y-%m-%d %H:%M:%S")
| rename ios AS VBIOS_Version common_ios as Common_VBIOS_Version gpu_ios as GPU_VBIOS
| where LEN(gpu)>1 AND VBIOS_Version!=Common_VBIOS_Version
| collect index=summary marker="summary_type=test"
| eval details="preos Splunk: ".host." node VBIOS mismatch ".gpu." ".VBIOS_Version." Common:".Common_VBIOS_Version." date:".time
| table details

Below is the query I tried for comparing against the summary index and alerting on a change:

index=preos host IN(*) *GPU*: PCISLOT*
| rex field=_raw "log-inventory.sh\[(?<id>[^\]]+)\]\:\s*(?<gpu>[^\:]+)\:\s*(?<Hardware_Details>.*)"
| rex field=_raw "GPU.*PCISLOT.*VBIOS\:\s(?<ios>[^\,]+)"
| search gpu=GPU*
| eval gpu_ios=gpu." : ".ios
| stats latest(_time) AS _time latest(*) AS * BY host gpu
| bucket _time span=1m
| appendpipe [| top 1 ios BY _time host | rename ios AS common_ios | table _time common_ios host]
| eventstats max(common_ios) AS common_ios values(gpu_ios) AS gpu_ios BY _time host
| table _time host gpu ios common_ios gpu_ios
| rename _time as time
| eval time=strftime(time,"%Y-%m-%d %H:%M:%S")
| rename ios AS VBIOS_Version common_ios as Common_VBIOS_Version gpu_ios as GPU_VBIOS
| where LEN(gpu)>1 AND VBIOS_Version!=Common_VBIOS_Version
| join host gpu VBIOS_Version [search index=summary summary_type=test | table gpu orig_host VBIOS_Version | rename orig_host as host]
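A join-free way to do the comparison is to tag each side and keep combinations that exist only in the live data. A minimal sketch, reusing the pipeline above and assuming the collected summary events carry summary_type=test plus gpu, orig_host, and VBIOS_Version:

<live search and extraction pipeline from above, ending before the collect>
| eval side="live"
| append [search index=summary summary_type=test | rename orig_host as host | table host gpu VBIOS_Version | eval side="summary"]
| stats values(side) as side by host gpu VBIOS_Version
| where mvcount(side)=1 AND mvindex(side,0)="live"

Rows surviving the where are host/gpu/VBIOS_Version combinations seen today but absent from the summary index, which is exactly the "changed" set to alert on.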
Hi forum! Getting a bit muddled here. I want to statistically demonstrate a recurring weekly trend, so timewrap sounds great. Then again, I want to work out a 95% variation band around it, so predict sounds awesome. I want to do this so that I can create an alert condition based on overlaying this variance on a real-time data series, enabling me (hopefully) to answer the question "is this normal or not?" When I look at what the two commands do, they seem to want to do different things. I mean, how can you predict a timewrap that circles back by design? So Splunk understandably errors, and I ask forgiveness for my bad logic. Can anyone give me any advice?
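One way to get a weekly baseline with a variance band, without predict at all, is to bucket by hour-of-week and compute avg/stdev across past weeks. A minimal sketch (index and span are placeholders; avg ± 2*stdev is roughly a 95% band under a normality assumption):

index=your_index earliest=-5w@w latest=@h
| timechart span=1h count
| eval how=strftime(_time, "%a-%H")
| stats avg(count) as avg stdev(count) as stdev by how
| eval upper=avg+2*stdev, lower=avg-2*stdev

An alerting search can then compute the current hour's count, look up the band for the matching hour-of-week (e.g. via a lookup refreshed by this baseline search), and fire when the live value falls outside lower/upper.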
How can I schedule a report to run hourly only during the time frame between the 26th and the 5th of each month?
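Splunk's scheduler accepts standard cron, and the day-of-month field can carry a split range, so a single schedule can cover a window that spans the month boundary. A minimal sketch of the cron expression:

0 * 1-5,26-31 * *

This runs the report at the top of every hour on the 1st through 5th and the 26th through 31st. If you ever need a window cron can't express, the fallback is scheduling hourly and gating inside the search, for example:

| where tonumber(strftime(now(), "%d"))>=26 OR tonumber(strftime(now(), "%d"))<=5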
Getting the error "Failed to start KV Store process. See mongod.log and splunkd.log for details." I tried a few steps: rm -rf /data1/kvstore/mongo/mongod.lock followed by a restart, but the status still shows as failed. Please advise on how to resolve this.

/opt/splunk/bin # systemctl status Splunkd
● Splunkd.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
   Loaded: loaded (/etc/systemd/system/Splunkd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-10-12 11:02:22 CDT; 9s ago
 Main PID: 28477 (splunkd)
    Tasks: 7
   Memory: 111.6M (limit: 100.0G)
   CGroup: /system.slice/Splunkd.service
           ├─28477 splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd
           ├─28530 [splunkd pid=28477] splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd [process-runner]
           └─28558 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_billing_cur.py --scheme

Oct 12 11:02:24 usappslnglp100 splunk[28477]: Checking critical directories... Done
Oct 12 11:02:24 usappslnglp100 splunk[28477]: Checking indexes...
Oct 12 11:02:24 usappslnglp100 splunk[28477]: Validated: _audit _internal _introspection _metrics _telemetry _thefishbucket add_on_builder_index ...rts ivz_o
Oct 12 11:02:24 usappslnglp100 splunk[28477]: Done
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Checking filesystem compatibility... Done
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Checking conf files for problems...
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Done
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Checking default conf files for edits...
Oct 12 11:02:25 usappslnglp100 splunk[28477]: Validating installed files against hashes from '/opt/splunk/splunk-8.0.1-6db836e2fb9e-linux-2.6-x86...manifest'
Oct 12 11:02:25 usappslnglp100 splunk[28477]: 2022-10-12 11:02:25.875 -0500 splunkd started (build 6db836e2fb9e) pid=28477
Hint: Some lines were ellipsized, use -l to show in full.

root@usappslnglp100:/opt/splunk/bin # ./splunk show kvstore-status
Your session is invalid. Please login.
Splunk username: admin
Password:

This member:
  backupRestoreStatus : Ready
  disabled            : 0
  guid                : 45F2DDC2-C57A-4A47-B6C9-3523E0D936E6
  port                : 8191
  standalone          : 1
  status              : failed
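A few checks worth running, followed by a resync, as a minimal sketch (paths follow the output above; note that splunk clean kvstore --local wipes this node's local KV store data, so back it up first):

grep -i error /opt/splunk/var/log/splunk/mongod.log | tail -20
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clean kvstore --local
/opt/splunk/bin/splunk start
/opt/splunk/bin/splunk show kvstore-status

The mongod.log errors usually name the real cause (certificate expiry, file permissions after the manual rm as root, or an unsupported MongoDB upgrade path), which is worth fixing before the clean/restart.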
Hi all! I feel as if I'm overcomplicating an issue, but I haven't gotten any built-in Splunk tools to work. Here's the situation: I have a field that I extract from my logs using rex. I want to take both an average AND a standard deviation of each field value's daily occurrence count, to be able to detect any new abnormalities in this field. Here's the field extraction:

earliest=-7d@d latest=-0d@d index=prod "<b>ERROR:</b>"
| rex "foo:\ (?<my_foo>[^\ ]*)"
| rex "bar:\ (?<my_bar>[^\<]*)"
| eval my_foo = coalesce(my_foo,"-")
| eval my_bar = coalesce(my_bar, "-")
| rex mode=sed field=my_bar "s/[\d]{2,50}/*/g"
| strcat my_foo " - " my_bar my_foobar

I can use stats to get a total count by my_foobar, and I can use timechart to get a count by day for my_foobar. However, if I try to average by day after timechart, I get no output unless I give up my my_foobar distinction:

| timechart span=1d@d count as my_count by my_foobar
| stats avg(my_count)

No output.

| bin span=1d@d my_chunk
| stats count(my_script_message) by my_chunk

No output.

I did come up with a solution, but it's hideous. I basically made my own bins using joins:

<initial search above>
| chart count as my_count1 by my_foobar
| join my_foobar [search <initial search above with my_count iterated>]
<x5 more joins>
| eval my_avg = SUM(my_count1 + my_count2 + my_count3 + my_count4 + my_count5 + my_count6 + my_count7)/7
| eval my_std = (POW((my_count1 - my_avg),2) + POW((my_count2 - my_avg),2) + POW((my_count3 - my_avg),2) + POW((my_count4 - my_avg),2) + POW((my_count5 - my_avg),2) + POW((my_count6 - my_avg),2) + POW((my_count7 - my_avg),2))/7
| eval my_last_day_dev = ABS(my_count1 - my_mess_avg)
| table my_foobar my_avg my_std my_last_day_dev
| search my_last_day_dev > my_std

I hate it and need to use this methodology for many of my monitoring plans. Any ideas on how to make this sleeker?
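The join chain can usually be replaced by binning into days and letting eventstats compute the per-value statistics across those days. A minimal sketch (field names follow the extraction above):

<initial search above>
| bin _time span=1d@d
| stats count AS daily_count BY _time my_foobar
| eventstats avg(daily_count) AS my_avg stdev(daily_count) AS my_std BY my_foobar
| eval my_last_day_dev=abs(daily_count - my_avg)
| where _time >= relative_time(now(), "-1d@d") AND my_last_day_dev > my_std

The stats builds one row per day per my_foobar value, eventstats attaches the week's average and standard deviation to every row, and the final where keeps only yesterday's rows that deviate beyond one standard deviation.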
Hi everyone,

New Splunker here. I want to use WMI to collect Windows event logs from different Windows servers instead of using a Splunk forwarder. Is it doable? If yes, please provide the steps for collecting the logs remotely and sending them to Splunk. Which pull method do I need to use from the Splunk UI, or do I need to use a push method?
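If you go the pull route, remote WMI collection is configured in wmi.conf on a Windows-based heavy forwarder or indexer rather than pushed from the targets. A minimal sketch with placeholder server names (the Splunk service account needs WMI permissions on each target):

[WMI:RemoteEventLogs]
interval = 10
server = SERVER1, SERVER2
event_log_file = Application, System, Security
disabled = 0

The interval is in seconds, and each listed server is polled for the named event logs; note that WMI pulls are generally considered less reliable and less scalable than universal forwarders.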
I'm trying to get our syslogs forwarded via a UF to Splunk Cloud. I've got the UF listening on port 514 and added

[udp://514]
connection_host = network
sourcetype = syslog

to the inputs.conf file, but I'm not seeing anything in search. Is there a way to make sure the UF is seeing anything on that port? Am I missing a step?
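A few checks worth trying on the UF host, as a sketch (note that binding to a port below 1024 requires root, and the stanza may also need an explicit index that exists in your Splunk Cloud stack):

# is anything actually arriving on UDP 514?
ss -ulnp | grep 514
tcpdump -n udp port 514
# did the UF load the stanza?
$SPLUNK_HOME/bin/splunk btool inputs list udp://514 --debug
# any input errors?
grep -i udp $SPLUNK_HOME/var/log/splunk/splunkd.log | tail

If traffic arrives but nothing indexes, the usual culprits are a missing index setting, the UF running as a non-root user that can't bind 514, or outputs to Splunk Cloud not yet configured.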
On my Splunk master node, I can check the status showing "All Data is Searchable, Search Factor is Met, Replication Factor is Met". Is there a way to check this status using the CLI? Also, what's the best way to set up an alert so that it emails me if there is any issue?
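From the CLI on the master node, a minimal sketch:

splunk show cluster-status --verbose

For alerting, the same information is exposed over REST and can drive a scheduled search; the endpoint and field names below are from memory, so verify them on your version:

| rest splunk_server=local /services/cluster/master/generation
| fields search_factor_met replication_factor_met

Schedule that search with an email alert action that fires whenever either field is 0.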
What file and registry paths are required by the Windows Splunk Universal Forwarder? We're looking to deploy Unified Write Filter (UWF) to harden kiosks and shared Windows workstations. UWF works by redirecting all non-approved file and registry writes to temporary memory, which is wiped by a reboot. We need to identify the file and registry locations that the Splunk Universal Forwarder (UF) requires so they can be excluded from UWF.
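As a starting point, a hedged sketch of UWF exclusions for a stock install (the service key name SplunkForwarder is my assumption; confirm the actual service name and any custom install path on your hosts):

uwfmgr file add-exclusion "C:\Program Files\SplunkUniversalForwarder"
uwfmgr registry add-exclusion "HKLM\SYSTEM\CurrentControlSet\Services\SplunkForwarder"

The install directory matters most, since the UF persists its state (fishbucket, checkpoints, logs) under its own var tree; without that exclusion every reboot would re-send previously read data.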
Hello, is it possible to change the color of the Single Value visualization based on a time value in the search result? I get a timestamp as a search result and would like to make the text of the visualization red if the timestamp is more than 3 days old. Thanks for your help!
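One approach is to return a numeric age rather than the raw timestamp, and let the single value color by range. A minimal sketch, assuming an epoch-format field called last_seen:

<your search>
| eval age_days=round((now()-last_seen)/86400, 1)
| table age_days

Then, in the Simple XML for the single value, values above 3 turn red:

<option name="colorBy">value</option>
<option name="useColors">1</option>
<option name="rangeValues">[3]</option>
<option name="rangeColors">["0x53a051","0xdc4e41"]</option>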
My customer wants a count of calls coming into their call center during their business hours (M, Tu, Th, F: 8:00 a.m. - 4:30 p.m. and W: 9:00 a.m. - 4:30 p.m.) and a count of calls that come in outside these hours and on weekends. This is what I have for the time element of the after-hours search so far, but I am getting no results:

| eval date_hour=strftime(_time, "%H")
| eval date_wday = strftime(_time, "%w")
| search (date_wday=1 OR date_wday=2 OR date_wday=4 OR day_wday=5 date_hour<=7 date_hour>=17.5) OR (date_wday=3 date_hour<=8 date_hour>=17.5) OR (date_wday=6 OR date_wday=7)
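A corrected sketch of the time test: date_hour from strftime is a string and never compares cleanly against 17.5, the implicit ANDs need explicit grouping, 4:30 p.m. is 16.5 in decimal hours, and with %w the weekend days are 0 and 6:

| eval hm=tonumber(strftime(_time, "%H")) + tonumber(strftime(_time, "%M"))/60
| eval wday=strftime(_time, "%w")
| eval after_hours=case(
    wday="0" OR wday="6", 1,
    wday="3" AND (hm<9 OR hm>=16.5), 1,
    (wday="1" OR wday="2" OR wday="4" OR wday="5") AND (hm<8 OR hm>=16.5), 1,
    true(), 0)
| stats count(eval(after_hours=0)) AS business_hours_calls count(eval(after_hours=1)) AS after_hours_calls

Converting hours and minutes into a single decimal number makes the half-hour boundary at 16.5 straightforward to express.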
I've stumbled today on a strange thing. It started out with a case about a user hitting quota limits. But when I dug into it deeper, it turned out that the user doesn't show up in the system. It's not displayed in the UI, and it doesn't show in the REST output of /services/authentication/users. But it is defined in etc/passwd, so it can log in. And it has KOs created and assigned to it (most importantly in my case, scheduled searches). It has a role assigned in etc/passwd, and it seems that the role is properly enforced (hence the quota limitations). Has anyone encountered such a thing? How could such a user have been "lost"?
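To pin down exactly where the two views diverge, a quick comparison sketch (credentials are placeholders, and the cut assumes the usual leading-colon format of Splunk's etc/passwd):

# users splunkd exposes over REST
curl -sk -u admin:changeme "https://localhost:8089/services/authentication/users?count=0" | grep "<title>"
# users defined in the flat file
cut -d: -f2 $SPLUNK_HOME/etc/passwd

Diffing the two lists confirms whether this is a one-off orphan (e.g. left behind by a partial deletion or a restore) or a broader mismatch.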
Hi everyone,

I am experiencing some issues with the ServiceNow add-on not creating incidents in ServiceNow. I was able to successfully add the ServiceNow account in Splunk and confirmed that the correct permissions have been granted to the account in ServiceNow.

When I try to create an incident for an episode in Splunk ITSI, I receive the error: "Unable to run the action snow_incident. Make sure the action is configured correctly and has all required fields. See the Activity tab of the episode for more information."

I checked the Activity log and found the following errors:

"Action="snow_incident" failed with the following error: None search failed for actionId=search..."
"Search command "snowincidentalert" failed to return an incident ID or URL. Check the add-on configuration and input parameters."

I also ran the following search as per the Splunk documentation:

eventtype=snow_ticket_error

And I see the error:

"ERROR pid=1 tid=MainThread file=snow_ticket.py:_do_event:182 | Failed to connect to https://companydev.service-now.com/https://companydev.service-now.com, error=Traceback (most recent call last):..."

I'm not sure why the URL is listed twice in the error. I am able to connect and log in to the URL with the account used in Splunk.

Has anyone else run into an issue like this before?

Thanks.
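The doubled URL suggests the account's URL field already contains the full https:// address, which the add-on then prepends to again, so re-entering it as just the bare instance hostname is worth a try. To rule out connectivity and permissions separately, a direct probe of the ServiceNow Table API (a sketch; credentials are placeholders, instance name follows the error above):

curl -u 'user:pass' "https://companydev.service-now.com/api/now/table/incident?sysparm_limit=1"

A 200 response with one incident record confirms the account and network path work, leaving the add-on's URL configuration as the prime suspect.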
Is there a way to query ES investigations for artifacts? For example, suppose that I have a current notable with a hostile foreign IP address. I would like to query Splunk and find all previous investigations with that IP address so that analysts can review them.
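ES keeps investigation data in KV store collections, so one hedged starting point is discovering those collections and then searching the relevant ones (the name filter below is an assumption; collection names differ across ES versions):

| rest splunk_server=local /services/storage/collections/config
| search title=*investigat*
| table title eai:acl.app

Once the collection holding investigation entries/artifacts is identified, it can be exposed via a lookup definition and searched for the IP in question.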
Hi Splunkers, are there any best practices for field extraction and line breaking? I want to know which stanzas to put in props.conf and transforms.conf so that line breaking and field extraction consume fewer resources, as a matter of good optimization.
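For line breaking, the commonly cited practice is to set the main props.conf timestamp and line-breaking attributes explicitly, so Splunk never falls back to guessing, and to prefer search-time extractions (EXTRACT/REPORT) over index-time ones. A minimal sketch with a placeholder sourcetype and values:

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

SHOULD_LINEMERGE=false with an explicit LINE_BREAKER avoids the expensive line-merge pass, and a tight TIME_PREFIX/MAX_TIMESTAMP_LOOKAHEAD keeps timestamp recognition from scanning whole events.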
I would like to onboard data from an Oracle 19c database to Splunk, so I would like to know whether Oracle 19c is a compatible/supported version for use with Splunk DB Connect.
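For what it's worth, DB Connect reaches Oracle over JDBC, so regardless of version-support specifics the connection string takes the standard Oracle thin-driver form (host, port, and service name below are placeholders):

jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1

Checking the DB Connect documentation's supported-database matrix for your exact DB Connect release, plus the Oracle JDBC driver version it bundles or requires, is the authoritative way to confirm 19c support.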