All Topics


Hello, I am upgrading our Splunk instance from 8.0 to 8.2. After the upgrade to 8.2 I followed the KV store upgrade procedure on our search head cluster members, and they all report WiredTiger now. However, logging into our cluster master, I get an error saying I need to upgrade the KV store; it shows up 30+ minutes after the upgrade. When I run the same KV store status command on the cluster master, it says it is still on the old KV store engine. Am I supposed to go onto each box and upgrade the KV store there as well? The instructions only mentioned the search heads, so that's what I followed.
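For reference, these are the commands I have been running on the search heads to check and migrate the KV store engine (the exact command names and flags are from my own notes, so please treat them as an assumption rather than gospel):

./splunk show kvstore-status
./splunk migrate kvstore-storage-engine --target-engine wiredTiger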
I don't want the graph to show 105.
I have a DB Connect input that I want to programmatically activate and deactivate. Following some docs, I came up with this:

curl -k -H "authorization: Splunk XXXX" https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/configs/conf-db_inputs/SIP_CAMIO_AUDIT_IN -d "disabled=1"

However, DB Connect seems to ignore the change and keeps indexing data, unless I access the /en-US/debug/refresh URL and manually refresh the whole server. As a test, the following cURL does work:

curl -k -H "authorization: Splunk XXXX" https://localhost:8089/servicesNS/nobody/Admin_Tools/configs/conf-macros/test_rest -d "disabled=1"

How can I disable/enable DB Connect inputs through REST? Why does DB Connect ignore conf updates made through REST?

Edit: I've also tried the following endpoint, without luck:

curl -k -H "authorization: Splunk XXXX" -X POST https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/configs/conf-db_inputs/_reload
Dear all, is it possible to track, from the controller, who restarted my application server and how? Thanks, Rathna SR.
I am trying to add a dropdown to a dashboard in Splunk Cloud using the host field from a metrics dataset. I want the dropdown to be dynamically populated from the host values available in the dataset at any one time. The search I am using as the data source for the dropdown is '| mcatalog values(host) where index=em_metrics | mvexpand values(host)'. All other settings on the dropdown are the defaults. I have tried changing the search several times and have been unable to get it to work. Any suggestions would be greatly appreciated.
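For reference, the variant I was about to try next is below (an untested sketch; the "as host" rename is my own assumption, added because mvexpand expects a plain field name rather than values(host)):

| mcatalog values(host) as host where index=em_metrics
| mvexpand host
| dedup host
| table host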
Hi all! I have an issue with the dot separator after field extraction from a JSON file. This is the raw JSON:

"customfield_26202" : { "self" : "link", "value" : "Software: Softwareentwicklung und Product Security", "id" : "30705", "disabled" : false, "child" : { "self" : "link", "value" : "Software-Projektleiter", "id" : "30771", "disabled" : false

Splunk extracts the field customfield_26202.child.value="Software-Projektleiter" (done with the _json sourcetype). Now I want to merge two such fields like this:

| eval output = mvappend(customfield_26202.child.value, customfield_26204.child.value) | mvexpand output | table output

When I do exactly the same thing without .child.value, everything works fine. I have tried several kinds of quotation marks (", ', etc.), but nothing helps. Any idea what I am doing wrong? Thank you! Timo
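For reference, the next variant on my list is to put single quotes around the dotted field names inside eval (as I understand it, double quotes would turn them into string literals, while single quotes mark them as field names) — an untested sketch:

| eval output = mvappend('customfield_26202.child.value', 'customfield_26204.child.value')
| mvexpand output
| table output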
My search query:

source="http:product_inv_rest" | spath message | search message="Request: GET */product-inventory/product 123456"

In the above query, I want to find records that have any number (only a number) in place of 123456.
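A hedged sketch of what I think should match (the regex command and the pattern, including the trailing anchor, are my own untested assumptions):

source="http:product_inv_rest"
| spath message
| regex message="Request: GET .*/product-inventory/product \d+$"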
Has anyone run into an issue where a Splunk HF is not monitoring files being written to it? This HF is also a syslog server, so files are being written to it and the monitored inputs are defined on the same server. The file ingestion only happens after a restart. Any pointers?
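In case it helps narrow things down, this is the kind of internal search I have been running against the HF to see what the tailing processor thinks it is doing (a hedged sketch; the component names I filter on are my assumption of which ones matter here, and <your_hf> is a placeholder):

index=_internal host=<your_hf> sourcetype=splunkd (component=TailingProcessor OR component=TailReader OR component=WatchedFile)
| stats count by component log_level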
I am somewhat puzzled by the performance of this search. When I leave the wildcards off, the search is WAY faster than with the wildcards. In essence, shouldn't I get the same results from both searches?

index="myindex" sourcetype="mysourcetype" "my term"

vs

index="myindex" sourcetype="mysourcetype" "*my term*"

In another answer I saw a Splunk employee state that ... "my term" was essentially the same as ... _raw="*my term*". The performance difference on my system is undeniable, so I guess my question is: is there a reason I would want/need to put the wildcards in? Would I potentially get different results? Thanks.
After installing a new Enterprise 9.0 instance, there is a lot of new logging appearing in _internal. Notably, this log line is being generated every 15 seconds and there is no clear indication in the documentation of how to disable it:

2022-06-23 09:25:05,957 INFO [assist::supervisor_modular_input.py] [context] [build_supervisor_secrets] [4932] Secret load failed, key=tenant_id, error=[HTTP 404] https://127.0.0.1:8090/servicesNS/nobody/splunk_assist/storage/passwords/tenant_id?output_mode=json

source = D:\Splunk\var\log\splunk\splunk_assist_supervisor_modular_input.log
sourcetype = splunk_assist_uiassets_modular_input.log*

This is a substantial increase in the overall volume of logs with "error" in them, not to mention the rest of the logging related to these new "assist supervisor" processes. splunkd.log is flooded with messages from instance_id_modular_input.py executing.

The Splunk Assist documentation (https://docs.splunk.com/Documentation/Splunk/9.0.0/DMC/AssistIntro) has no information on how to adjust the log level or disable specific components. This is on an instance *without* a Splunk Assist activation code installed, meaning this is generated at this volume out of the box.

It's incredibly frustrating that searching for this log file name "splunk_assist_uiassets_modular_input.log" returns 0 results in all of Splunk Docs. How is this useful if there's no information on what to do with it, and why am I paying more for Cloud Compute to ingest all this additional volume without any instruction for how to configure it? Any assistance in finding relevant documentation would be appreciated.

Edit: There's a new .conf file for this - assist.conf - that is completely undocumented. Nothing on the configuration file reference doc page. https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/assistconf

The inputs generating all this extra logging are located in $SPLUNK_HOME/etc/apps/splunk_assist. Until more information becomes available, I've disabled them:

[supervisor_modular_input://default]
disabled = 1
[instance_id_modular_input://default]
disabled = 1
[uiassets_modular_input://default]
disabled = 1
[selfupdate_modular_input://default]
disabled = 1
Hi all, my team needs to clear an alert with a totally different department before we consider it "published" for the purposes of audit etc. I need a SIMPLE way to mark an alert as "in review" that makes the distinction between "published" and "in review" clear on dashboards.

Requirements:
1. something simple that a non-tech team won't mess up
2. readable by dashboards

Thanks in advance!
We recently deployed 5 new indexers into site 2 of our 2-site clustered environment to replace 5 old ones in the same site. We have offlined the old indexers and I am now attempting to rebalance the cluster. I will note that a large amount of bucket-fixing activity is currently taking place, as the new indexers in site 2 are copying buckets from site 1 to re-establish data redundancy.

The problem: when attempting to run a rebalance operation from the GUI on the cluster master, it begins the rebalance successfully. A couple of minutes to an hour go by while the completion percentage slowly climbs, as shown in splunkd.log:

06-23-2022 10:19:32.148 -0400 INFO CMMaster - data rebalance started, initial_work=900897
06-23-2022 10:19:32.148 -0400 INFO CMMaster - data rebalance completion percent=0.00
06-23-2022 10:20:02.534 -0400 INFO CMMaster - data rebalance completion percent=1.90
06-23-2022 10:20:32.893 -0400 INFO CMMaster - data rebalance completion percent=1.90
06-23-2022 09:51:49.099 -0400 INFO CMMaster - data rebalance completion percent=3.05
06-23-2022 09:52:21.558 -0400 INFO CMMaster - data rebalance completion percent=3.06

Then, seemingly at random, I get this message in the logs and the rebalance suddenly stops:

06-23-2022 10:04:58.657 -0400 INFO FixupStrategy - rebalance skipped all buckets, forcing a stop
06-23-2022 10:04:59.189 -0400 INFO CMMaster - data rebalance complete! percent=100.00

Searching the internet did not yield any results for this message. Does anyone know what could be causing my rebalance to skip all buckets?
Hello, I'm having trouble with metric data not being sent from our HF to our Enterprise deployment. I'll add a diagram later to better explain myself. For now, our deployment looks a bit like this:

Monitored File -----> UF -----> HF ------> Indexer (WORKS!)

We decided to send our collectd metric data through the UF, but for some reason the metric data got lost while the monitored file data reached our indexer:

Monitored File ------┐
                     ├------> UF --------> HF ------> Indexer (only the file data works)
Metric Data ---------┘

As part of the debugging I realized that sending the metric data straight to the indexer, while having the file data pass through the HF, works. That assures me the problem is not with the UF or the metric data, but I still need the HF to act as a proxy in our network. Since we have the Forwarder license on the HF we can't run searches on it, and splunkd.log and metrics.log are not showing any errors. Can anyone point me to a setting on the HF that I might have missed? I've been trying to solve this for a couple of days but can't seem to make any progress.

Edit: Let me know what .conf files or other configurations you need me to share.
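For what it's worth, this is the internal search I plan to run on the indexer to confirm what the HF is actually forwarding (a hedged sketch; the field names are my assumption based on the standard tcpin_connections events in metrics.log):

index=_internal source=*metrics.log group=tcpin_connections
| stats sum(kb) as kb by sourceHost sourceIp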
Hi everyone, I have this query which compares last week's file to this week's. I'm doing this to bring in new events by date, but when there are no results found it does not show me the Date and a 0, and I need that line so I can append it to another lookup.

| inputlookup append=t NEW.csv
| lookup OLD.csv UniqueID OUTPUTNEW UniqueID as NEW
| where like(ISSUE,"%Wrong%")
| where isnull(NEW)
| stats count as New_event by DATE_REPORT
| eval Date=strftime(strptime(DATE_REPORT, "%Y-%m-%d %H:%M:%S"), "%m-%d-%Y")
| fields Date New_event

I would like to get something like this:

Date                New_event
6-23-2022           0
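For reference, a hedged sketch of what I am thinking of adding after the stats so that a zero row is emitted when nothing matches (the appendpipe clause and the use of now() for the date are my own untested assumptions):

...
| stats count as New_event by DATE_REPORT
| appendpipe
    [ stats count as c
    | where c=0
    | eval New_event=0, DATE_REPORT=strftime(now(), "%Y-%m-%d %H:%M:%S")
    | fields - c ]
| eval Date=strftime(strptime(DATE_REPORT, "%Y-%m-%d %H:%M:%S"), "%m-%d-%Y")
| fields Date New_event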
Hi, can I combine the two mstats searches below into one SPL query? Both pull from the same metric; one of them just has an extra "where" condition.

| mstats sum("mx.http.server.requests") as total WHERE "index"="murex_metrics" mx.env="feature/monitoring/nfr/new-log-metrics-19"
| appendcols
    [ mstats sum("mx.http.server.requests") as declined WHERE "index"="murex_metrics" mx.env="feature/monitoring/nfr/new-log-metrics-19" status.code>=400
    | appendpipe
        [ stats count
        | eval declined=0
        | where count=0
        | table declined ]]
| eval percent_declined=100 * declined / total
| table percent_declined

I looked at this question and I don't think I can use stats-style eval aggregation functions, e.g.:

| stats count(eval(ip=="37.25.139.34"))

https://community.splunk.com/t5/Splunk-Search/How-to-use-eval-with-mstats/td-p/563751

So perhaps I need to run 2 mstats; I was hoping I could just run 1, but I think it is not possible for what I am trying to do.

Rob
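For reference, the single-mstats variant I have been sketching (untested; splitting by status.code and doing the conditional sum in a follow-up stats is my own assumption):

| mstats sum("mx.http.server.requests") as requests WHERE "index"="murex_metrics" mx.env="feature/monitoring/nfr/new-log-metrics-19" BY status.code
| stats sum(requests) as total, sum(eval(if('status.code' >= 400, requests, 0))) as declined
| eval percent_declined=100 * declined / total
| table percent_declined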
Hi folks, I'm trying to see how I can truncate an entire event to a maximum number of characters. So basically, if this is my test event (including new lines), and I wanted to capture, say, the first 10 characters ("Mary had a"), I can't seem to do it.

Mary had a little lamb, little lamb, little lamb. Mary had a little lamb, its fleece was white as snow. And everywhere that Mary went, Mary went, Mary went, and everywhere that Mary went, the lamb was sure to go.

I don't seem to be able to use TRUNCATE, because it evaluates each line rather than the event as a whole. And MAX_EVENTS would not work either, because it would roll over to the next event. (I would be OK with MAX_EVENTS if the behavior were to discard the extra.) I have tried this transform, but it seems to match each line, and it even breaks the event into single-line events, since I can't seem to pattern-match the newline character:

[truncate_raw_10]
SOURCE_KEY = _raw
REGEX = ^(.{0,10})
DEST_KEY = _raw
FORMAT = $1

Does anyone have any insight? Thanks!
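For what it's worth, the next variant on my list is to add the inline DOTALL flag so the dot can also cross newlines (a hedged, untested sketch; whether index-time transforms honour the (?s) flag here is an assumption on my part):

[truncate_raw_10]
SOURCE_KEY = _raw
REGEX = (?s)^(.{0,10})
DEST_KEY = _raw
FORMAT = $1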
Hello, is it possible to relate the issues displayed in the Splunk UI (listed below) to OS data or Splunk logs? In other words, given the OS metrics (RAM, CPU, swap, ...) of the servers hosting Splunk and the Splunk logs, can we relate trends in that data to the issues below?

The percentage of high priority searches lagged (33%) over the last 24 hours is very high and exceeded the yellow thresholds (10%) on this Splunk instance. Total Searches that were part of this percentage=18. Total lagged Searches=6
The percentage of small buckets created over the last hour is high and exceeded the red thresholds for index=..., and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=13, small buckets=11
The percentage of non high priority searches delayed (54%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=574144. Total delayed Searches=314696
Search peer down
Disk space/file system under this mount point ... is exceeding the limits 80%

Would really appreciate your response.
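As a starting point for correlating the search-lag messages with Splunk's own logs, this is the kind of internal search I had in mind (a hedged sketch; the scheduler sourcetype and its status field are my assumption of where the lag/skip information lives):

index=_internal sourcetype=scheduler
| timechart span=1h count by status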
I want to populate the Operation field on the basis of the API and METHOD field values. My code:

<input type="dropdown" token="tkn_ OPERATION">
  <label>Select Operation:</label>
  <fieldForLabel>OPERATION</fieldForLabel>
  <fieldForValue>OPERATION</fieldForValue>
  <search>
    <query>| makeresults | eval API="party_interaction_rest" AND METHOD="GET",OPERATION="Alle,LIST_PARTY_INTERACTIONS" | append [| makeresults | eval API="ticket_mgmt_rest" AND METHOD="GET",OPERATION="Alle,LIST_TROUBLE_TICKETS"] | eval OPERATION=split(OPERATION,",") |mvexpand OPERATION| table API METHOD OPERATION | search API="$token_service$" METHOD="$token_method$"</query>
  </search>

The above code is not working.
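For reference, this is the variant of the makeresults pipeline I am about to try, with each field set as its own eval assignment instead of the AND expression (an untested sketch; the assignments are just my reading of the intended values):

| makeresults
| eval API="party_interaction_rest", METHOD="GET", OPERATION="Alle,LIST_PARTY_INTERACTIONS"
| append
    [| makeresults
    | eval API="ticket_mgmt_rest", METHOD="GET", OPERATION="Alle,LIST_TROUBLE_TICKETS"]
| eval OPERATION=split(OPERATION,",")
| mvexpand OPERATION
| table API METHOD OPERATION
| search API="$token_service$" METHOD="$token_method$"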
Hi Team, I have two panels. For my 1st panel the query is:

<title>DataGraphNodes Exceptions Details</title>
<table>
  <search>
    <query>index=abc ns=sidh-datagraph3-c2 OR sidh-datagraph3 nodeException node="*" |rex field=_raw "message=(?P&lt;datetime&gt;\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d+)\s"|stats count by ns app_name node nodeMethodName nodeException datetime |rename node as "Node"| rename nodeMethodName as "NodeMethod"|rename nodeException as "Node-Exception" | rename datetime as "Log_Time"|fields - count</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <drilldown>
    <set token="show_panel">true</set>
    <set token="selected_value">$click.value$</set>
  </drilldown>
</table>

And I am getting results like this:

ns                 app-name             Node     Node method         Exception    Log-Time
sidh-datagraph3    data-graph-acct-b    https    getDetailsBySENo    Invalid Id   2022-06-21
sidh2              data-acct-b          https    invalid             InvalidId    2022-06-22

For the 2nd panel, I want that when I click on a row in the 1st panel, the details shown are based on the row I selected. My 2nd panel query is:

<panel depends="$show_panel$">
  <table>
    <title> Events</title>
    <search>
      <query>index=abc ns=sidh-datagraph3-c2 OR sidh-datagraph3 nodeException $node$ $selected_value$ </query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
  </table>
</panel>

But the 2nd panel is not filtering properly; all the results are coming back. I want only the exception from the row I select in the 1st panel to be shown. Can someone guide me?
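For reference, this is the drilldown variant I am considering instead (a hedged, untested sketch; the $row.<fieldname>$ tokens are my assumption about how to capture the clicked row's columns, and the names must match the renamed headers of the 1st panel):

<drilldown>
  <set token="show_panel">true</set>
  <set token="selected_node">$row.Node$</set>
  <set token="selected_exception">$row.Node-Exception$</set>
</drilldown>

and then, in the 2nd panel's query, filter on those tokens, for example:

index=abc ns=sidh-datagraph3-c2 OR sidh-datagraph3 node="$selected_node$" nodeException="$selected_exception$"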
Using the Splunk Python SDK, is there any way to know the size of the CSV file that will be generated after streaming and writing all the results? I've managed to achieve this with the approach below, but it makes me download and iterate the CSV lines twice.

splunk_job = service.jobs.create(query, **kwargs)  # waits for the job to be done
splunk_job_result_args = {"output_mode": "csv"}
splunk_job_results = splunk_job.results(**splunk_job_result_args)

results_length = 0
for bytes_csv_line in splunk_job_results:
    results_length += len(bytes_csv_line)
splunk_job_results.close()

# checks if the results_length exceeds the limit
# if not, executes the following:
splunk_job_results = splunk_job.results(**splunk_job_result_args)
for bytes_csv_line in splunk_job_results:
    ...  # writes the bytes to a file
splunk_job_results.close()
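For comparison, a hedged sketch of the single-pass variant I have in mind: count and buffer in one loop, then decide whether to persist. The MAX_CSV_BYTES limit and the in-memory BytesIO staging are my own assumptions, not part of the original approach.

import io

splunk_job = service.jobs.create(query, **kwargs)  # waits for the job to be done, as before
splunk_job_results = splunk_job.results(output_mode="csv")

results_length = 0
buffer = io.BytesIO()  # stage the CSV so nothing is written to disk if the limit is exceeded
for bytes_csv_line in splunk_job_results:
    results_length += len(bytes_csv_line)
    buffer.write(bytes_csv_line)
splunk_job_results.close()

if results_length <= MAX_CSV_BYTES:  # MAX_CSV_BYTES is a hypothetical limit of my own
    with open("results.csv", "wb") as f:
        f.write(buffer.getvalue())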