All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Quick question about KV store - wondering what the best way to update multiple records at once via search may be? Example - let's say I have the most recent logon for users for the past week:

user1 - last_logon_time
user2 - last_logon_time
etc.

I would like to query last_logon_time for all users for the past day, then update the KV store with the most recent info. The goal would be to set this up as a scheduled search running daily to keep the KV store updated. Any thoughts?
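A hedged sketch of one common approach: define a KV store lookup over the collection (the lookup name user_logons_lookup, the source index, and the field names below are all assumptions), then schedule a search that merges the past day's logons with the existing records and writes the result back:

```
index=auth earliest=-1d
| stats max(_time) as last_logon_time by user
| inputlookup append=true user_logons_lookup
| stats max(last_logon_time) as last_logon_time by user
| outputlookup user_logons_lookup
```

outputlookup replaces the collection's contents with the merged result, so records for users not seen in the past day are preserved by the inputlookup append=true step.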
Is it possible to "expand" a single variable with comma-separated values into a "list" and then use it inside an IN condition? For example, I have a field "days-off" and want to filter events where "days-off" includes "Sat". So I would want something like:

search "Sat" IN (days-off)

Maybe with mvexpand? Or should I just use match or like against the string with a regular expression (if required)?
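One hedged option, assuming "days-off" is a single string of comma-separated values: split it into a multivalue field and test for membership with mvfind (the field and value names follow the question; allowing for spaces around the values is an assumption):

```
... | eval days_off_mv = split('days-off', ",")
| where mvfind(days_off_mv, "^\s*Sat\s*$") >= 0
```

Note the single quotes around 'days-off' in eval - they are required because the field name contains a hyphen. IN expects a literal list of values on its right-hand side, which is why "Sat" IN (days-off) doesn't behave as hoped.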
Need a report that:

1. Lists volumes with significant disk usage spikes over a given timeframe.
2. Plots those disk usage spikes over time.

P.S. Not interested in volumes with a high percentage of used disk space - only in those that had a spike of, say, more than 20%.

I am assuming I'd need to:

1. List volumes that had such a spike by calculating max and average values of e.g. UsePct for a volume, then keeping only those with a delta > 20;
2. Run a timechart or something similar on those volumes.

Blanking out on how to do that and would appreciate your help - thanks!

P.P.S. This is as far as I've gotten - and it seems to correctly ID volumes with usage spikes (updated May 5):

sourcetype=WinHostMon source=disk FileSystem!="SNFS"
| stats min(storage_used_percent) as min, avg(storage_used_percent) as avg, max(storage_used_percent) as max by host, Name, FileSystem, DriveType
| eval delta = max - avg
| where delta>20
| sort - max delta avg

The above produces the full stats table for all hosts and their volumes that had a spike; adding | fields host Name to it would produce just the hosts and volume names. The question remains: what is the best way to plot storage_used_percent for those volumes over the timeframe of the search?

P.P.P.S. Bonus points for streamlining the above search and making it faster - and, more generally, for a streamlined mechanism for pinpointing anomalies (spikes, unusual deviations, or volatility) in any available metrics, such as CPU, memory, disk, and network utilization. (I have yet to properly configure the Splunk infrastructure apps - perhaps such mechanisms are included in those.)
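One way to get the plot in a single pass is to compute the delta inline with eventstats (which, unlike stats, keeps the raw events) and then timechart only the volumes that spiked - a sketch, assuming host plus Name uniquely identifies a volume:

```
sourcetype=WinHostMon source=disk FileSystem!="SNFS"
| eval volume = host . ":" . Name
| eventstats avg(storage_used_percent) as avg_pct, max(storage_used_percent) as max_pct by volume
| where max_pct - avg_pct > 20
| timechart span=1h max(storage_used_percent) by volume
```

timechart caps the number of plotted series by default (limit=10), so adding limit=0 may be needed if many volumes spike.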
Hi,

We installed a new heavy forwarder, Splunk v7.3.5 on Red Hat 7.7, and added the Mimecast app v3.1.5. Splunk runs as user "splunk", and all files under /opt/splunk are owned by that user. Also double checked that /opt/splunk/etc/apps/TA-mimecast-for-splunk and everything below it are all splunk:splunk.

When I navigate to the Mimecast app, the page loads as expected: https://server.name:8000/en-US/app/TA-mimecast-for-splunk/mc_home

When I click Configuration, it loads the 3 sub tabs (Proxy/Logging/Caching) with Proxy selected, but there's just a never-ending loading icon under there. The Logging and Caching sub tabs load OK, and making changes there updates the conf files in the local dir as expected. Going back to the Proxy tab again results in the never-ending loading message.

Similarly, the main Inputs tab, https://server.name:8000/en-US/app/TA-mimecast-for-splunk/inputs, just gives the never-ending loading issue. Other tabs seem to load OK and display some things.

Any ideas? Thanks, Mark
Hi. When I search on the '_time' field, there are two result values like '2020/04/30 18:00' and '2020/04/30 18:03'. I want to delete results that fall within 5 minutes of the previous one.

For example, this is OK:

_time
2020/04/30 18:00
2020/04/30 18:06

but I do not want the following result:

_time
2020/04/30 18:00
2020/04/30 18:03

Is it possible to delete values in the '_time' field that are within 5 minutes of each other? My goal is:

_time
2020/04/30 18:00
2020/04/30 18:03 **(deleted automatically in the search)**

I would appreciate it if you could give me some tips. Thanks.
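A hedged sketch of one way to do this: sort ascending, carry the previous event's _time forward with streamstats, and drop events that arrive less than 300 seconds after the one before them:

```
... | sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time
| eval gap = _time - prev_time
| where isnull(gap) OR gap >= 300
```

The isnull(gap) clause keeps the first event, which has no predecessor. One caveat: this compares each event to the immediately preceding event, not to the last kept event, so a chain of closely spaced events is filtered differently than a gap measured from the first survivor.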
Don't hate me @to4kawa, but can you help me one last time! I've been stuck for a few hours trying to figure out how to do this; my Splunk forum searches are getting me close, but I'm not sure how to go about it. I know the below search is incorrect, but I need to come up with the "avg_kWhU" value and the "avg_kWhP" value in one search, and then find the percentage difference between kWh used and kWh produced. For example, if kWhP was 50 and kWhU was 50, percent_powered would be 100%. I think I can't have two bins grouping by _time? I have tried many things and seem to be stuck:

| where 'usage_info.d_w'>=0 or 'usage_info.solar_w'>=0
| bin _time span=1h
| stats count as samplesU sum(usage_info.d_w) as watt_sumU by _time
| eval kW_SumU=watt_sumU/1000
| eval avg_kWhU=kW_SumU/samplesU
| stats count as samplesP sum(usage_info.solar_w) as watt_sumP by _time
| eval kW_SumP=watt_sumP/1000
| eval avg_kWhP=kW_SumP/samplesP
| eval percent_powered=((avg_kWhP/avg_kWhU)100)
| table percent_powered
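The second stats call discards everything the first one produced, which is why searches like the above break. A hedged fix is to compute both aggregates in a single stats per hourly bin (the * missing from the final eval is also restored here):

```
| where 'usage_info.d_w'>=0 OR 'usage_info.solar_w'>=0
| bin _time span=1h
| stats avg('usage_info.d_w') as avg_wU, avg('usage_info.solar_w') as avg_wP by _time
| eval percent_powered = round((avg_wP / avg_wU) * 100, 1)
| table _time percent_powered
```

avg() here replaces the sum/count/divide sequence, under the assumption that each sample reports both fields; if the two fields arrive in separate events, separate counts per field would be needed.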
Hello, Splunkers. We want to deploy Splunk in our organization, and I have a question about the operating system to use for the Splunk instances. Have you used CentOS 8 for Splunk in a production environment? Are there any dependencies or considerations (like Python 3, etc.)?
Health Check: msg="A script exited abnormally with exit status: 1" is popping up for the below inputs:

input="/opt/splunk/etc/apps/SA-Utils/bin/dm_accel_settings.py"
input="/opt/splunk/etc/apps/SA-Utils/bin/configuration_check.py"

The internal log for both of the above shows "Client is not authenticated". From dm_accel_settings.log:

ERROR pid=23024 tid=MainThread file=dm_accel_settings.py:run:182 | status="REST exception encountered when updating acceleration settings" model=Splunk_Audit, exc=[HTTP 401] Client is not authenitcated
ERROR pid=23024 tid=MainThread file=dm_accel_settings.py:run:182 | status="REST exception encountered when updating acceleration settings" model=Risk, exc=[HTTP 401] Client is not authenitcated
ERROR pid=23024 tid=MainThread file=dm_accel_settings.py:run:182 | status="REST exception encountered when updating acceleration settings" model=Incident_Management, exc=[HTTP 401] Client is not authenitcated
ERROR pid=23024 tid=MainThread file=dm_accel_settings.py:run:182 | status="REST exception encountered when updating acceleration settings" model=Endpoint, exc=[HTTP 401] Client is not authenitcated
ERROR pid=23024 tid=MainThread file=dm_accel_settings.py:run:182 | status="REST exception encountered when updating acceleration settings" model=Domain_Analysis, exc=[HTTP 401] Client is not authenitcated
ERROR pid=23024 tid=MainThread file=dm_accel_settings.py:run:182 | status="REST exception encountered when updating acceleration settings" model=Change, exc=[HTTP 401] Client is not authenitcated
I have installed the Palo Alto App and Add-on on our search head, and have installed the Add-on on 1 of our three indexers for happy-path testing. I set up an inputs.conf file to send the data to the pan_logs index. With regards to the dashboards under Operations, the firewall System and Configuration dashboards are working well. For the realtime event feed I actually had to edit the base search query to include index=pan_logs (changed 'pan_logs' to index=pan_logs) to get the FWs to show up as reporting and generate the live events. I know the timestamps are good because it's (almost) realtime in the System and Configuration dashboards.

I guess my questions are:

- Is the app expecting everything to be in the default index?
- Why would I need to update the base search query to see data (i.e. even if I search for 'pan_logs' I don't see anything; with index=pan_logs I get everything)?

Software versions:

- Splunk 8.0.2
- Palo Alto for Splunk App 6.2.0 (on search head)
- Palo Alto for Splunk Add-on 6.2.- (on search head and indexer)

inputs.conf from the indexer:

[udp://5514]
index = pan_logs
sourcetype = pan:firewall
connection_host = ip
no_appending_timestamp = true

Any help would be greatly appreciated. We are working through the issues (but not sure it's the right approach) and just need to figure out if I need to consider templating out eventtypes.conf, etc. as part of our install to account for changes up front.
It seems as though support for edge labels in the Force Directed Graph Viz is currently half-baked. The code in visualizations.js seems to use a line_label field emitted from the search results to label the edges, but fails to add this data to the entries in linksArray (which contains only target, source and, optionally, count). Thus, when trying to label the edges, the lookup returns undefined and the label isn't used. Could someone let me know if I am reading the code correctly? Is there a work-around for this issue? If not, what is the process to get this fixed? I'm happy to make the change myself if someone provides me with instructions.

datum.forEach(function (link) {
    group_id = 0;
    // Create a link object to push the target and source to the linksArray array.
    object = {};
    object.target = nodeByName(link[0], group_id);
    object.source = nodeByName(link[1], group_id);
    object.count = link[2];
    // Push the nodes to the nodesByName array including a group id of 0.
    // Push the object dictionary item from lines above to the linksArray array
    linksArray.push(object);
});

// If there is a field named line_label in the Splunk results
if (headers.line_label) {
    // Set the line_label variable to the number in the field
    line_label = Number(headers.line_label);
} else {
    line_label = "False";
}

// Create edge labels for labels on paths to exist
edgelabels = svg.selectAll(".edgelabel")
    .data(linksArray)
    .enter()
    .append('text')
    .style("pointer-events", "none")
    .attr('class', 'edgelabel')
    .attr('id', function (d, i) { return 'edgelabel' + i })
    .attr('font-size', 10)
    .attr('fill', stringFill)
    .on("mouseover", fade(.1)).on("mouseout", fade(1));

// Append text to the edge labels
edgelabels.append('textPath')
    .attr('xlink:href', function (d, i) { return '#edgepath' + i })
    .style("text-anchor", "middle")
    .style("pointer-events", "none")
    .attr("startOffset", "50%")
    .text(function (d) { return d[line_label] });
Hi guys,

We want to onboard some data from the Cloud Storage Bucket of our GCP platform. When adding a new input, we get this error:

Unexpected error "" from python handler: "(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:741)'),)". See splunkd.log for more details.

I searched the splunkd log, and we have these error messages:

ERROR Failed to execute function=handleList, error=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunktalib/common/pattern.py", line 44, in __call__
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/resthandlers/projects.py", line 38, in handleList
    res_mgr = grm.GoogleResourceManager(logger, config)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/legacy/resource_manager.py", line 51, in __init__
    self._client = gwc.create_google_client(self._config)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/legacy/common.py", line 210, in create_google_client
    client = discovery.build(config["service_name"], config["version"], http=http, cache_discovery=False)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/oauth2client/util.py", line 137, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/googleapiclient/discovery.py", line 229, in build
    requested_url, discovery_http, cache_discovery, cache)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/googleapiclient/discovery.py", line 276, in _retrieve_discovery_doc
    resp, content = http.request(actual_url)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2shim/google_auth.py", line 201, in request
    uri, method, body=body, headers=request_headers, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2_helper/httplib2_py2/httplib2/__init__.py", line 2135, in request
    cachekey,
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2_helper/httplib2_py2/httplib2/__init__.py", line 1796, in _request
    conn, request_uri, method, body, headers
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2shim/__init__.py", line 171, in _conn_request
    raise _map_exception(e)
SSLError: (SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:741)'),)

And this:

ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 131, in init
    hand.execute(info)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 595, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunktalib/common/pattern.py", line 44, in __call__
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/resthandlers/projects.py", line 38, in handleList
    res_mgr = grm.GoogleResourceManager(logger, config)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/legacy/resource_manager.py", line 51, in __init__
    self._client = gwc.create_google_client(self._config)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/legacy/common.py", line 210, in create_google_client
    client = discovery.build(config["service_name"], config["version"], http=http, cache_discovery=False)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/oauth2client/util.py", line 137, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/googleapiclient/discovery.py", line 229, in build
    requested_url, discovery_http, cache_discovery, cache)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/3rdparty/googleapiclient/discovery.py", line 276, in _retrieve_discovery_doc
    resp, content = http.request(actual_url)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2shim/google_auth.py", line 201, in request
    uri, method, body=body, headers=request_headers, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2_helper/httplib2_py2/httplib2/__init__.py", line 2135, in request
    cachekey,
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2_helper/httplib2_py2/httplib2/__init__.py", line 1796, in _request
    conn, request_uri, method, body, headers
  File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/httplib2shim/__init__.py", line 171, in _conn_request
    raise _map_exception(e)
SSLError: (SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:741)'),)

So is there a way to add our own CA cert to avoid the SSL error, or is there a way to turn off SSL verification?

Many thanks, S
Hello, on one of a couple of HFs, I received the error message "HTTP Input data endpoint server cannot be started." when creating a new HTTP Event Collector. Even though the HTTP input record was created with a pre-assigned token value, the token doesn't allow connections where it's used (an F5-VPN appliance in this case). Has anyone resolved this issue? Thanks,
Hello everyone, I need help with a search. I have a table with the following fields:

VISITDATE            USERNUMBER   WEBSITE
4/19/2020 7:15:11    001          www.google.com
4/26/2020 10:24:03   001          www.google.com
4/26/2020 10:33:03   001          www.google.com
4/26/2020 11:15:12   001          www.google.com
4/26/2020 11:30:12   001          www.google.com
4/28/2020 14:30:12   001          www.google.com

I want to count the number of visits (by usernumber and website) that occurred within 60 minutes of each other, and return the earliest time in that 60-minute bucket:

VISITDATE            USERNUMBER   WEBSITE          COUNT
4/19/2020 7:15:11    001          www.google.com   1
4/26/2020 10:24:03   001          www.google.com   3
4/26/2020 11:30:12   001          www.google.com   1
4/28/2020 14:30:12   001          www.google.com   1

For example, using bins or buckets with a 60-min/1-hour time span that snaps to the hour (10:00-11:00 is one hour) would give me this result, which I don't want:

VISITDATE            USERNUMBER   WEBSITE          COUNT
4/19/2020 7:15:11    001          www.google.com   1
4/26/2020 10:24:03   001          www.google.com   2
4/26/2020 11:15:12   001          www.google.com   2
4/28/2020 14:30:12   001          www.google.com   1

Thank you in advance for your help.
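A hedged sketch using gap-based sessions: flag a new session whenever a visit arrives more than 3600 seconds after the previous one for that user/website pair, number the sessions with a running sum, then take the earliest time and count per session:

```
... | sort 0 USERNUMBER WEBSITE _time
| streamstats current=f window=1 last(_time) as prev_time by USERNUMBER, WEBSITE
| eval new_session = if(isnull(prev_time) OR _time - prev_time > 3600, 1, 0)
| streamstats sum(new_session) as session_id by USERNUMBER, WEBSITE
| stats min(_time) as VISITDATE, count as COUNT by USERNUMBER, WEBSITE, session_id
| fieldformat VISITDATE = strftime(VISITDATE, "%m/%d/%Y %H:%M:%S")
```

Note this approximates the desired output: a chain of visits each under 60 minutes apart is merged into one session even when the chain spans more than an hour from its first visit, so against the sample data it would merge 10:24 through 11:30 into one session of 4. Anchoring each bucket at its first visit (so 11:30 starts a new one) needs different machinery; transaction with maxspan=60m may get closer, though its grouping direction should be verified against the data.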
Hi guys, I am just trying to write a Splunk query to bucket data into ranges: 1-32 days, >32 days, >42 days, >72 days, <365 days, and <720 days. I tried multiple queries, and I believe it's possible with a case statement. Kindly advise. Note: I just have one custom field, acd_date, which I should use in my case statement. Thanks in advance.
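A hedged sketch, assuming acd_date can be parsed into an epoch time (the strptime format below is an assumption - adjust it to match the real field), and noting that the ranges in the question overlap, so the bucket boundaries chosen here are assumptions too:

```
... | eval age_days = floor((now() - strptime(acd_date, "%Y-%m-%d")) / 86400)
| eval age_bucket = case(
    age_days <= 32,  "1-32 days",
    age_days <= 42,  "33-42 days",
    age_days <= 72,  "43-72 days",
    age_days < 365,  "73-364 days",
    age_days < 720,  "365-719 days",
    true(),          "720+ days")
| stats count by age_bucket
```

case() returns the first matching clause, so ordering the conditions from smallest to largest puts each row in exactly one bucket.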
I've got logs with a time field being sent to a syslog server - the syslog server is also putting a time on each line to track when the logs hit it. I want Splunk to parse the original time in the log, and I've tried configuring props.conf, but it seems that Splunk is still picking up the syslog-prepended time. This is running on a HF and then being sent to Splunk Cloud. The inputs sourcetype matches what I have in props.conf. I've run this through a local instance of Splunk to generate the props.conf, and it looks correct in the data preview - local is Windows, prod is Linux, but I wouldn't think that would matter for this. Any suggestions on what to change would be greatly appreciated.

Example log line:

Apr 29 19:44:33 text SysLog[425355]: time="[29/Apr/2020:19:44:33 +0000]"

Current props:

[<sourcetype_name_here>]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
TIME_PREFIX = time\=\"\[
TZ = GMT
MAX_TIMESTAMP_LOOKAHEAD = 27
I have a RHEL 6.10 Splunk server and currently have the following configuration in /etc/security/limits.d/ for open file descriptors:

root soft nofile 64512
root hard nofile 80896

But when I run the Monitoring Console Health Check, it reports that the current ulimit.open_files = 4096. I tried using * instead of root, but that didn't change anything. I have seen this issue brought up, but only for RHEL 7.x, which uses systemd instead of SysV.
What's the API command to get the current logged-in user without specifying the user id? I want to type phantom.get_user() and have the attributes of the logged-in user returned.
I have an Enclave server that already forwards logs to my indexer. We installed a network interface that should remain turned off unless we are upgrading/patching the server. Is there a way to see if the Network interface was left ON?
Hi, I am trying to create a dashboard like this, which has a text box to search for title (index name). How do I put this title token in my search?

======================================================================================================

Index List
Index Search

<input type="text" token="title">
  <label>Search</label>
</input>
<panel>
  <table>
    <search>
      <query>|rest /services/data/indexes | dedup title | table title updated splunk_server</query>
      <earliest>@d</earliest>
      <latest>now</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="count">100</option>
    <option name="dataOverlayMode">none</option>
    <option name="drilldown">cell</option>
    <option name="percentagesRow">false</option>
    <option name="rowNumbers">true</option>
    <option name="totalsRow">false</option>
    <option name="wrap">true</option>
  </table>
</panel>

======================================================================================================
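Assuming the text input above sets a token named $title$, one hedged approach is to filter after the rest call inside the panel's query:

```
| rest /services/data/indexes
| search title="*$title$*"
| dedup title
| table title updated splunk_server
```

The panel won't render until the token has a value, so giving the input a default (e.g. <default>*</default>, so an empty search box matches everything) is worth considering.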
Having trouble finding documentation on these two parameters. But, say column 3 is an epoch timestamp - would input_timestamp_format be a single 's'? i.e.:

input_timestamp_format = s

Thank you.