All Topics

Hello, are there any internal logs in Splunk that show changes made to a query, who made them, and what the changes were?
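A heavily hedged starting point rather than a confirmed answer: Splunk's _audit index (sourcetype audittrail) records who did what, and edits to knowledge objects such as saved searches generally show up there. The exact action values vary by version and object type, so start broad:

index=_audit sourcetype=audittrail action=edit*
| table _time user action info
| sort - _time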
I have an issue, and I found a posting here that I thought would fix me up, but something is wrong and I am not sure what it is. I want to create a stacked bar chart showing a date from a datestamp field we have, an error code, and the number of devices that get that error code on that day. If I run my current search just using | timechart dc(field1), it works just fine, but uses the _time field. My datestamp field is a string with the format "2021-07-30". I tried using this code to assign the datestamp field to _time:

| eval NewTime=strptime(datestamp,"%Y-%m-%d %H:%M:%S") | eval _time=NewTime | timechart dc(field1) by field2

The search runs, but returns no values. Any suggestions would be helpful.
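The strptime format here is a likely culprit: it expects a time-of-day component ("%H:%M:%S") that the "2021-07-30" string does not have, so strptime returns null and every event loses its _time. A minimal sketch of the fix (the leading base search stands in for yours):

... | eval _time=strptime(datestamp, "%Y-%m-%d")
| timechart span=1d dc(field1) by field2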
I have an issue with the connectivity between the heavy forwarder and the deployment server. What is a search that I could use in the GUI to diagnose the issue?
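A hedged diagnostic sketch, assuming the heavy forwarder ships its own _internal logs to the indexers (my_hf is a placeholder for the forwarder's host name): deployment-client activity in splunkd.log uses component names starting with DC:, and handshake or phonehome failures surface there:

index=_internal sourcetype=splunkd host=my_hf component=DC* (log_level=WARN OR log_level=ERROR)
| sort - _time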
I have Splunk set up on an air-gapped network (no internet connection). The search head is a single instance running 8.1.1. There are about 320 machines on the network; it's mostly Windows 10 clients, and the servers are 2016 & 2019. The clients are running the universal forwarder 7.1.1. I've created a couple of reports to see different things about the computers on the network (OS, whether the Print log is turned on, etc.). I get very inconsistent results from these reports (and other searches I'll do). E.g., if I do a search on the Security event log for EventCode 4608 (restart event) I'll only get about 80 results (clients reboot nightly) when I should be getting closer to about 300. I've searched the event logs of machines that aren't on the report and they have the event code logged, but it's not being reported to Splunk. I've checked everything I can think of: uninstalled/reinstalled the forwarder, installed different versions of the forwarder, etc. I can't figure out why one machine will report all events and another only reports some events (4624, 4627, 4634). Has anyone else had this issue? Thank you
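A hedged cross-check (the sourcetype is an assumption; yours may be XmlWinEventLog or similar): count which hosts actually delivered the reboot event in the last day, then compare that list against the machines you know restarted, to separate forwarding gaps from search problems:

index=* sourcetype=WinEventLog:Security EventCode=4608 earliest=-24h
| stats count by host
| sort - count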
operationName    urls                                              avg_time   max_time   count
MethodUsingGET   https://www.google.com/api/v1/571114808/CAR.202  3255       3255       2
                 https://www.google.com/api/v1/571114899
UsingGET         https://www.googleA.com/api/v1/571114888/api/    1316.889   5345       18
                 https://www.googleB.com/api/v1/571114877/api/

I would only want one URL per row, but it should still count the others as well. Is there a way?
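If urls is a multivalue field (as the two stacked links per row suggest), one hedged sketch is to keep only the first value for display while the aggregates still cover all of them:

... | eval url=mvindex(urls, 0)
| table operationName url avg_time max_time count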
Kindly help on the below scenario, where I need to compare two different columns created using different sourcetypes.

For example:

|appendcols [search index="X" sourcetype="xy" |table ID,CASE_ID|] [search index="X" sourcetype="YZ" OR sourcetype="ABC"|table Role,Name,NewID|

Now here, I need to match ID and NewID, which have similar values but not in the same row.

ID    NewID
123   789
456   123
789   987
987   456

Now, the result should come up as a match for the data. I have tried many ways, like |foreach ID [eval status=if(match(ID, NewID), "YES", "NO")], but nothing worked. Please provide your suggestions.
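A hedged sketch of one way to flag every ID that appears anywhere in the NewID column, regardless of row: eventstats copies all NewID values onto each row, and mvfind returns null when nothing matches:

... | eventstats values(NewID) as all_newids
| eval status=if(isnotnull(mvfind(all_newids, "^".ID."$")), "YES", "NO")
| fields - all_newids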
This is affecting one of our HFs that we use to ingest external data via scripts, vendor-provided apps, and REST API polls. For the REST API part we use the REST API Modular Input app (https://splunkbase.splunk.com/app/1546/). The REST inputs worked without any issues when we were at Splunk Enterprise 7.1.3. After upgrading SE to 8.1.1 and the rest_ta app to 2.0.1 last weekend, none of the scheduled REST inputs work. The problem is, this only happens on this server. The REST inputs still work on a separate dev server that was also upgraded to SE 8.1.1 and rest_ta 2.0.1.

I see the following set of error events in splunkd.log, but they only show up when I make a change to any of the REST inputs, like changing the cron schedule to force it to run at the next minute.

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/opt/splunk/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/splunk/etc/apps/rest_ta/bin/rest.py", line 447, in do_run
    endpoint_list[i] = endpoint.replace(replace_key,c['clear_password'])
  File "/opt/splunk/lib/python3.7/site-packages/splunk/entity.py", line 574, in __getitem__
    return self.properties[key]
KeyError: 'clear_password'

I do not see any errors at the times when the cron schedule is supposed to execute the API calls, so it feels like the rest_ta app itself just quit working. Honestly, I'm a bit lost trying to interpret the errors. Has anyone seen something similar, or have any tips on how to resolve this? I tried removing the app completely, restarting splunkd, then reinstalling and reconfiguring rest_ta 2.0.1 from scratch. Still none of the scheduled jobs run, and the same errors still only show up after I modify one of the REST inputs.

Here's one of the several REST inputs configured. They're all identical in that I'm only using the bundled "JSONArrayHandler" response_handler to process the returned JSON data from Infoblox. It's not customized in any way.

[rest://InfoBlox_Networks]
activation_key = --snip--
auth_password = {encrypted:splunk_svc_user}
auth_type = basic
auth_user = splunk_svc_user
delimiter = :
endpoint = https://a.b.c.d/wapi/v2.6.1/network?_max_results=15000
host = a.b.c.d
http_method = GET
index = infoblox
index_error_response_codes = 1
log_level = INFO
polling_interval = 3 * * * *
request_timeout = 60
response_handler = JSONArrayHandler
response_type = json
sequential_mode = 0
sourcetype = infoblox:api:network
streaming_request = 0
Hello, I used the following search to convert the Date field in the CSV so Splunk could read it. I would like to create a chart using the Date and Amount fields but am having no luck.

source="graph info-csv.csv" host="Tom1-PC" sourcetype="csv" | convert timeformat="%m/%d/%Y" mktime(Date) as numdate | reverse | table Date Amount

Any help would be appreciated.
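A minimal sketch, assuming Date really is in %m/%d/%Y format and Amount is numeric: set _time from Date with strptime, then let timechart do the charting:

source="graph info-csv.csv" host="Tom1-PC" sourcetype="csv"
| eval _time=strptime(Date, "%m/%d/%Y")
| timechart span=1d sum(Amount) as Amount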
I track the overall CPU usage on a server with:

index=mcadth_metrics host=IS20_DB sourcetype=PerfmonMk:CPU instance=_Total

It works well for all other servers, and it worked for this server until it went down for a reboot about 10 days ago. The server's average CPU load (%_Processor_Time) is usually around 40%, but has been reporting in Splunk at about 5% since the reboot. No config was changed around the time of the reboot, and the Splunk forwarder has been restarted with no change. [Screenshot: Splunk search for %_Processor_Time returning an average of 5.35%.] [Screenshot: actual server metrics reporting 41% utilization.]
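A hedged first diagnostic: check whether the _Total instance is still sending the expected event volume after the reboot; a change in collection interval or in which instances report would show up as a step in the counts:

index=mcadth_metrics host=IS20_DB sourcetype=PerfmonMk:CPU
| timechart span=1h count by instance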
Hello, I am pretty new to Splunk and just feel lost at times. I have a question that I can't seem to find an answer for. I have data that looks like [screenshot of the data]. The above is like one row, and then there are multiple rows with the same type of list of entries for timestamp and Total. Now I want to turn each row into a line on a line chart, where the x-axis is the timestamp and the y-axis is the "Total"; sort of like overlapping line charts based on all the rows. Anyone have ideas?
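A heavily hedged sketch, assuming each row carries parallel multivalue fields timestamp and Total plus some identifier field (row_id here is hypothetical, and the timestamp format is a guess): pair the values with mvzip, expand them into individual results, then chart by the row identifier:

... | eval pair=mvzip(timestamp, Total)
| mvexpand pair
| eval _time=strptime(mvindex(split(pair, ","), 0), "%Y-%m-%d %H:%M:%S")
| eval Total=tonumber(mvindex(split(pair, ","), 1))
| timechart avg(Total) by row_id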
Hi, I am trying to check whether a date stored in a field is within the last 24h of the moment the search is run. I do NOT mean the time range of the search itself; that is set to 30 days in my case and I can't change it. I want to check the value of only a specific field. For example, I receive the following date: 2021-05-13T12:02:44.000+0000, and I need to know whether it's a date from the last 24h or not. So far I am out of luck; any ideas?
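A minimal sketch, with your_date_field as a placeholder for the field holding the date and assuming it always uses that exact format: convert it to epoch seconds with strptime, then compare against now():

... | eval field_epoch=strptime(your_date_field, "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval within_24h=if(now() - field_epoch <= 86400, "yes", "no")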
We are using the latest version of the Splunk App for Jenkins and we have configured it to use our own index. The drop-down filters are all populating correctly, however, the search panels are all using the default jenkins indexes. Has anyone encountered this before? I've looked at the configuration files and I see where the indexes we set have been added to macros in local/macros.conf, but I don't see anywhere else that they have been set so I would assume that the panels should be using the same macros. If this app were using standard dashboards and panels we could just override the search that they are using, but it uses javascript to build and execute these searches so I'm at a loss for what we could do to resolve this on our own.
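A hedged way to confirm which macro definitions the panels will actually resolve (the app directory name splunk_app_jenkins is an assumption; adjust it to match your installation):

| rest /servicesNS/-/-/configs/conf-macros count=0
| where 'eai:acl.app'="splunk_app_jenkins"
| table title definition eai:acl.sharing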
Hi everyone, I have some questions about skipped searches. With the following search, I have found that on my SH I have quite a few (2800 in the last 7 days) skipped searches.

index = _internal skipped sourcetype=scheduler status=skipped | stats count by app search_type reason savedsearch_name | sort -count

I have made other searches which show me all saved searches and their scheduled cron jobs. I have found that I have more than 70 searches that run every 5 minutes, and a few that run every minute. Would that be my issue with the skipped searches, even though they each run for just a few seconds (max 5 seconds)? On all 70 scheduled searches, the parameter schedule_window is set to 0.
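Even short searches can be skipped if too many fire on the same minute boundary and the scheduler hits its concurrency limit, so a hedged sketch for inventorying all schedules in one place can help spot those pile-ups:

| rest /servicesNS/-/-/saved/searches count=0
| where is_scheduled=1
| table title cron_schedule schedule_window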
Hi all, I have a Correlation Search that generates notable events ignoring the throttling configuration. The search is "Excessive Logins Failed" and is set with the current parameters:

Cron schedule: */20 * * * *
Time range: from '-65m' to 'now'
Scheduling: continuous
Schedule Window: No Window
Scheduling priority: Default
Trigger condition: number of results > 0
Throttling window: 86400 seconds
Throttling fields to group by: src

The search is the following:

| tstats summariesonly=true allow_old_summaries=true dc(Authentication.user) as "user_count",dc(Authentication.dest) as "dest_count",count from datamodel="Authentication"."Authentication" where Authentication.user!=*$ nodename="Authentication.Failed_Authentication" by "Authentication.app","Authentication.src"
| `drop_dm_object_name(Authentication)`
| replace "::ffff:*" with "*" in src
| where count>=500

The search runtime is very short (a few seconds), so I'm sure there are no overlapping searches at the same time. Nevertheless, I often find notable events generated for the same 'src' in the last 24 hours. I also have another Correlation Search (Brute Force Attacks detection) which has a similar configuration/scheduling, but in that case the throttling is working fine. Can anyone help me with this? Anybody else having the same issue? Thanks in advance
Hello, I would like to exclude just one user from forwarding logs, and I am wondering whether my solution will work. In inputs.conf I would like to define:

[monitor:///home/nessus/.bash_history]
disabled = true

[monitor:///home/*/.bash_history]
disabled = false

The goal is to exclude logging data from the user nessus but to log everybody else. I am not sure if it's a good solution; maybe someone has a better idea?
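A hedged alternative sketch: monitor stanzas in inputs.conf support a blacklist regex applied to the matched file paths, so a single stanza may be cleaner than trying to override one path with another (the behavior of overlapping monitor stanzas can be surprising):

[monitor:///home/*/.bash_history]
blacklist = ^/home/nessus/
disabled = false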
Hi Team, I have the following requirement: I have a report that needs to be scheduled to run every 10 minutes. The catch is, I want the first search of the day to run at 00:10 AM, and after that it should run every 10 minutes. I am implementing the report in the 'Search and Reporting' app. Thanks in advance.
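A hedged sketch: the standard cron expression */10 * * * * also fires at 00:00, and a single cron line can't skip just that one slot. One workaround, assuming you can live with two scheduled copies of the report:

Copy 1, hour 0 only (skips the 00:00 slot): 10-50/10 0 * * *
Copy 2, hours 01-23, every 10 minutes: */10 1-23 * * *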
Hi team, how can I get the value of 'status' from the below payload in a Splunk search?

{"log":" \"status\" : \"END\",","payload":"stdout","time":"2021-08-13T11:54:17.255787345Z"}

Thanks in advance.
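A minimal sketch: the outer event is JSON, but the log field's value is only a fragment of JSON (a bare key/value with a trailing comma), so after the automatic JSON extraction a regex is the simple way to pull status out:

... | spath
| rex field=log "\"status\"\s*:\s*\"(?<status>[^\"]+)\""
| table status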
Hi Experts, I have created a search query to fetch details from a Linux log, extracted a timestamp field, and converted it with the strftime command.

Timestamp from Linux log: 1628674387976621

| eval CT_time=strftime(Start_Time/pow(10,6),"%d/%m/%Y %H:%M:%S")

Now I would like to filter the events based on the converted time, like from CT_time to CT_time. Please help with a query to filter with the converted timestamp. Regards, Karthikeyan.SV
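A minimal sketch, with the boundary dates as hypothetical placeholders: keep the epoch value for the comparison (strftime output is a string, which compares lexically rather than chronologically) and only format it for display:

... | eval CT_epoch=Start_Time/pow(10,6)
| where CT_epoch >= strptime("11/08/2021 00:00:00", "%d/%m/%Y %H:%M:%S") AND CT_epoch < strptime("12/08/2021 00:00:00", "%d/%m/%Y %H:%M:%S")
| eval CT_time=strftime(CT_epoch, "%d/%m/%Y %H:%M:%S")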
I'm reading the docs about sharing summaries between search heads and I'm a bit puzzled. https://docs.splunk.com/Documentation/Splunk/8.2.1/Knowledge/Sharedatamodelsummaries The article states: "You can find the GUID for a search head cluster in the [shclustering] stanza of server.conf. If you are running a single instance you can find the GUID in etc/instance.cfg." But in my case the only GUIDs I can find are those of individual cluster members in etc/instance.cfg, and of course each one is different. I cannot seem to find a "search head cluster GUID" anywhere. What am I doing wrong?
Hi, I am using the below query to calculate the percentage value for the available and total columns.

index=nextgen mango_trace="SyntheticTitan*" | where status = "200" OR status = "204"
| stats count as available by service
| appendcols [search index=nextgen mango_trace="SyntheticTitan*" | stats count as total by service]
| eval percentage = round((available/total)*100,2)
| table service, percentage, available, total

I want to trigger an alert when a percentage value is less than 100.00. [Screenshot of the search results for this query.] Can you please help me with the trigger conditions to set an alert if any of the service percentages is less than 100.00?

Thanks, SG
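A minimal sketch of one common approach: save the search as an alert, set "Trigger alert when" to Custom, and use a condition that only leaves rows when something is below 100:

search percentage < 100

With that custom condition, the alert fires whenever at least one service's percentage drops below 100.00; choosing "Once" versus "For each result" controls whether you get a single alert or one per failing service.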