All Topics

After installing a new Enterprise 9.0 instance, there's a lot of new logging appearing in _internal. Notably, this log line is generated every 15 seconds, and there's no clear indication in the documentation of how to disable it:

2022-06-23 09:25:05,957 INFO [assist::supervisor_modular_input.py] [context] [build_supervisor_secrets] [4932] Secret load failed, key=tenant_id, error=[HTTP 404] https://127.0.0.1:8090/servicesNS/nobody/splunk_assist/storage/passwords/tenant_id?output_mode=json
source = D:\Splunk\var\log\splunk\splunk_assist_supervisor_modular_input.log
sourcetype = splunk_assist_uiassets_modular_input.log*

This is a substantial increase in the overall volume of logs containing "error", not to mention the rest of the logging related to these new "assist supervisor" processes. splunkd.log is flooded with messages from instance_id_modular_input.py executing.

The Splunk Assist documentation (https://docs.splunk.com/Documentation/Splunk/9.0.0/DMC/AssistIntro) has no information on how to adjust the log level or disable specific components. This is on an instance *without* a Splunk Assist activation code installed, meaning this volume is generated out of the box.

It's incredibly frustrating that searching for the log file name "splunk_assist_uiassets_modular_input.log" returns 0 results in all of Splunk Docs. How is this useful if there's no information on what to do with it, and why am I paying more for Cloud Compute to ingest all this additional volume without any instruction on how to configure it? Any assistance in finding relevant documentation would be appreciated.

Edit: There's a new .conf file for this - assist.conf - that is completely undocumented. There is nothing about it on the configuration file reference doc page: https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/assistconf

The inputs generating all this extra logging are located in $SPLUNK_HOME/etc/apps/splunk_assist. Until more information becomes available, I've disabled them:

[supervisor_modular_input://default]
disabled = 1
[instance_id_modular_input://default]
disabled = 1
[uiassets_modular_input://default]
disabled = 1
[selfupdate_modular_input://default]
disabled = 1

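For anyone copying these stanzas: the usual place for overrides like this is the app's local directory, which takes precedence over default and is not overwritten on upgrade. A minimal sketch, assuming the four stanza names above are the complete set:

# $SPLUNK_HOME/etc/apps/splunk_assist/local/inputs.conf
# local/ settings override the app's default/inputs.conf
[supervisor_modular_input://default]
disabled = 1

[instance_id_modular_input://default]
disabled = 1

[uiassets_modular_input://default]
disabled = 1

[selfupdate_modular_input://default]
disabled = 1
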
Hi all,

My team needs to clear an alert with a totally different department before we consider it "published" for the purposes of audit, etc. I need a SIMPLE way to mark an alert as "in review" that makes the distinction between "published" and "in review" clear on dashboards.

Requirements:
1. Something simple that a non-tech team won't mess up
2. Readable by dashboards

Thanks in advance!

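For illustration, the sort of thing I have in mind is a shared status lookup that the other team updates and every dashboard reads - a rough sketch, with the lookup name and fields being hypothetical. The reviewing team marks an alert:

| makeresults
| eval alert_name="My Alert", status="in review", reviewer="other-department"
| outputlookup append=true alert_review_status.csv

and dashboards read the current status:

| inputlookup alert_review_status.csv
| table alert_name status reviewer
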
We recently deployed 5 new indexers into site 2 of our 2-site clustered environment to replace 5 old ones in the same site. We have offlined the old indexers and I am now attempting to rebalance the cluster. I will note that a large amount of bucket-fixing activity is currently taking place, as the new indexers in site 2 are copying buckets from site 1 to reestablish data redundancy.

The problem: when I run a rebalance operation from the GUI on the cluster master, the rebalance begins successfully. A couple of minutes to an hour go by while the completion percentage slowly climbs, as shown in splunkd.log:

06-23-2022 10:19:32.148 -0400 INFO CMMaster - data rebalance started, initial_work=900897
06-23-2022 10:19:32.148 -0400 INFO CMMaster - data rebalance completion percent=0.00
06-23-2022 10:20:02.534 -0400 INFO CMMaster - data rebalance completion percent=1.90
06-23-2022 10:20:32.893 -0400 INFO CMMaster - data rebalance completion percent=1.90
06-23-2022 09:51:49.099 -0400 INFO CMMaster - data rebalance completion percent=3.05
06-23-2022 09:52:21.558 -0400 INFO CMMaster - data rebalance completion percent=3.06

Then, seemingly at random, I get this message in the logs and the rebalance suddenly stops:

06-23-2022 10:04:58.657 -0400 INFO FixupStrategy - rebalance skipped all buckets, forcing a stop
06-23-2022 10:04:59.189 -0400 INFO CMMaster - data rebalance complete! percent=100.00

Searching the internet did not yield any results for this message. Does anyone know what could be causing my rebalance to skip all buckets?

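For anyone trying to see the same thing on their cluster master, the progress lines above can be followed with a quick search - a sketch, assuming splunkd.log is indexed into _internal as usual:

index=_internal sourcetype=splunkd (component=CMMaster OR component=FixupStrategy) rebalance
| table _time component log_level _raw
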
Hello,

I'm having trouble with metric data not being sent from our HF to our Enterprise deployment. I'll add a diagram later to better explain myself; for now, our deployment looks a bit like this:

Monitored File -----> UF -----> HF ------> Indexer (WORKS!)

We decided to send our collectd metric data through the UF, but for some reason the metric data gets lost while the monitored file data reaches our indexer:

Monitored File ------┐
                     ├------> UF --------> HF ------> Indexer (Only file works)
Metric Data ---------┘

As part of the debugging tests I ran to solve this, I realized that sending the metric data straight to the indexer, while having the file data pass through the HF, works! While this assures me that the problem is not with the UF or the metric data, I still need the HF to act as a proxy in our network. Since we have the Forwarder license on the HF, we can't run searches on it, and splunkd.log and metrics.log are not showing any errors.

Can anyone point me to some setting on the HF that I might have missed? I've been trying to solve this for a couple of days but can't seem to make any progress.

Edit: Let me know what .conf files or other configurations you need me to share.

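For reference when suggesting configs to check, a minimal HF forwarding setup would look roughly like this - the hostnames, port, and group name here are hypothetical, not our actual config:

# outputs.conf on the HF - forward everything to the indexer
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer01:9997

# inputs.conf on the HF - listener for traffic arriving from the UF
[splunktcp://9997]
disabled = 0
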
Hi everyone,

I have this query, which compares last week's file to this week's. I'm doing this to bring in new events by date, but when no results are found, it does not show me the date with a 0, and I need that line so I can append it to another lookup.

| inputlookup append=t NEW.csv
| lookup OLD.csv UniqueID OUTPUTNEW UniqueID as NEW
| where like(ISSUE,"%Wrong%")
| where isnull(NEW)
| stats count as New_event by DATE_REPORT
| eval Date=strftime(strptime(DATE_REPORT, "%Y-%m-%d %H:%M:%S"), "%m-%d-%Y")
| fields Date New_event

I would like to get something like this:

Date                  New_event
6-23-2022             0

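A sketch of the kind of fallback I'm after: appendpipe only emits its row when the pipeline above it produced no results, and the date it fills in (today's) is my assumption about what the zero row should carry:

| inputlookup append=t NEW.csv
| lookup OLD.csv UniqueID OUTPUTNEW UniqueID as NEW
| where like(ISSUE,"%Wrong%")
| where isnull(NEW)
| stats count as New_event by DATE_REPORT
| eval Date=strftime(strptime(DATE_REPORT, "%Y-%m-%d %H:%M:%S"), "%m-%d-%Y")
| fields Date New_event
| appendpipe [ stats count | where count=0 | eval Date=strftime(now(), "%m-%d-%Y"), New_event=0 | fields Date New_event ]
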
Hi,

Can I combine the two mstats searches below into one SPL query? There are two of them, both pulling from the same metric; one has an extra "where" condition.

| mstats sum("mx.http.server.requests") as total WHERE "index"="murex_metrics" mx.env="feature/monitoring/nfr/new-log-metrics-19"
| appendcols
    [ mstats sum("mx.http.server.requests") as declined WHERE "index"="murex_metrics" mx.env="feature/monitoring/nfr/new-log-metrics-19" status.code>=400
    | appendpipe [ stats count | eval declined=0 | where count=0 | table declined ]]
| eval percent_declined=100 * declined / total
| table percent_declined

I looked at this question, and I don't think I can use stats-style eval aggregation functions like this:

| stats count(eval(ip=="37.25.139.34"))

https://community.splunk.com/t5/Splunk-Search/How-to-use-eval-with-mstats/td-p/563751

So perhaps I need to run 2 mstats. I was hoping I could run just 1, but I think it is not possible for what I am trying to do.

Rob

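A sketch of the single-mstats shape I was hoping for, assuming status.code is available as a dimension that mstats can split by:

| mstats sum("mx.http.server.requests") as requests WHERE "index"="murex_metrics" mx.env="feature/monitoring/nfr/new-log-metrics-19" BY "status.code"
| eval declined=if(tonumber('status.code') >= 400, requests, 0)
| stats sum(requests) as total sum(declined) as declined
| eval percent_declined=100 * declined / total
| table percent_declined
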
Hi folks,

I'm trying to see how I can truncate an entire event to a maximum number of characters. Basically, if this is my test event (including new lines), and I wanted to capture, say, the first 10 characters ("Mary had a"), I can't seem to do it:

Mary had a little lamb, little lamb, little lamb.
Mary had a little lamb, its fleece was white as snow.
And everywhere that Mary went, Mary went, Mary went,
and everywhere that Mary went, the lamb was sure to go.

I don't seem to be able to use TRUNCATE, because it evaluates *each line* rather than the event as a whole. And MAX_EVENTS would not work either, because it would roll the remainder into the next event. (I would be OK with MAX_EVENTS if its behavior were to discard the extra.)

I have tried this transform, and it seems to want to match each line, and it even breaks the event into single-line events, since I can't seem to pattern-match the newline character:

[truncate_raw_10]
SOURCE_KEY = _raw
REGEX = ^(.{0,10})
DEST_KEY = _raw
FORMAT = $1

Does anyone have any insight? Thanks!

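One variation I plan to try, on the theory that the issue is simply that "." does not match newlines by default - turning on DOTALL mode with an inline (?s) flag so the pattern can see across line boundaries. This is a guess, not a confirmed fix:

[truncate_raw_10]
SOURCE_KEY = _raw
REGEX = (?s)^(.{0,10})
DEST_KEY = _raw
FORMAT = $1
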
Hello,

Is it possible to relate the issues displayed in the Splunk UI (listed below) to OS data or Splunk logs? In other words, given the OS metrics (RAM, CPU, swap, ...) of the servers hosting Splunk, plus Splunk's own logs, can we relate trends in that data to the issues below?

- The percentage of high priority searches lagged (33%) over the last 24 hours is very high and exceeded the yellow thresholds (10%) on this Splunk instance. Total Searches that were part of this percentage=18. Total lagged Searches=6
- The percentage of small buckets created over the last hour is high and exceeded the red thresholds for index=..., and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=13, small buckets=11
- The percentage of non high priority searches delayed (54%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=574144. Total delayed Searches=314696
- Search peer down
- Disk space/file system under this mount point ... is exceeding the limits 80%

Would really appreciate your response.

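As a sketch of the kind of correlation I mean: Splunk records its own resource usage in the _introspection index, so something along these lines could be laid alongside the health alerts. The field names are from the standard resource-usage introspection data; treat them as an assumption for your version:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart span=5m avg(data.cpu_idle_pct) as cpu_idle_pct avg(data.mem_used) as mem_used
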
I want to populate the OPERATION dropdown based on the API and METHOD field values. My code:

<input type="dropdown" token="tkn_ OPERATION">
  <label>Select Operation:</label>
  <fieldForLabel>OPERATION</fieldForLabel>
  <fieldForValue>OPERATION</fieldForValue>
  <search>
    <query>| makeresults
| eval API="party_interaction_rest" AND METHOD="GET",OPERATION="Alle,LIST_PARTY_INTERACTIONS"
| append [| makeresults | eval API="ticket_mgmt_rest" AND METHOD="GET",OPERATION="Alle,LIST_TROUBLE_TICKETS"]
| eval OPERATION=split(OPERATION,",")
| mvexpand OPERATION
| table API METHOD OPERATION
| search API="$token_service$" METHOD="$token_method$"</query>
  </search>
</input>

The above code is not working.

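For comparison, a sketch of the eval syntax I believe is intended: eval cannot set a field to an expression like X="a" AND Y="b" - each field needs its own assignment, separated by commas. Rewritten that way, the query portion would be (not confirmed to fix the dropdown as a whole):

| makeresults
| eval API="party_interaction_rest", METHOD="GET", OPERATION="Alle,LIST_PARTY_INTERACTIONS"
| append [| makeresults | eval API="ticket_mgmt_rest", METHOD="GET", OPERATION="Alle,LIST_TROUBLE_TICKETS"]
| eval OPERATION=split(OPERATION,",")
| mvexpand OPERATION
| table API METHOD OPERATION
| search API="$token_service$" METHOD="$token_method$"
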
Hi Team,

I have two panels. For my 1st panel the query is:

<title>DataGraphNodes Exceptions Details</title>
<table>
  <search>
    <query>index=abc ns=sidh-datagraph3-c2 OR sidh-datagraph3 nodeException node="*"
| rex field=_raw "message=(?P&lt;datetime&gt;\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d+)\s"
| stats count by ns app_name node nodeMethodName nodeException datetime
| rename node as "Node" | rename nodeMethodName as "NodeMethod" | rename nodeException as "Node-Exception" | rename datetime as "Log_Time"
| fields - count</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <drilldown>
    <set token="show_panel">true</set>
    <set token="selected_value">$click.value$</set>
  </drilldown>
</table>

And I am getting a result like this:

ns                 app-name             Node    NodeMethod          Node-Exception   Log_Time
sidh-datagraph3    data-graph-acct-b    https   getDetailsBySENo    Invalid Id       2022-06-21
sidh2              data-acct-b          https   invalid             InvalidId        2022-06-22

For the 2nd panel, I want that when I click on a row in the 1st panel, the details shown are based on the row I selected. My 2nd panel query is:

<panel depends="$show_panel$">
  <table>
    <title>Events</title>
    <search>
      <query>index=abc ns=sidh-datagraph3-c2 OR sidh-datagraph3 nodeException $node$ $selected_value$</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
  </table>
</panel>

But the 2nd panel is not coming out right: all the results are coming back. I want only the exception from the row I selected in the 1st panel. Can someone guide me?

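A sketch of the drilldown direction I'd try (my assumption about the intent): $click.value$ always holds the value in the first column of the clicked row, which would explain why every click produces the same broad search. $click.value2$ holds the value of the specific cell that was clicked, which can then be searched against the real field name:

<drilldown>
  <set token="show_panel">true</set>
  <set token="selected_exception">$click.value2$</set>
</drilldown>

and in the 2nd panel:

<query>index=abc ns=sidh-datagraph3-c2 OR sidh-datagraph3 nodeException="$selected_exception$"</query>
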
Using the Splunk Python SDK, is there any way to know the size of the CSV file that will be generated after streaming and writing all the results? I've managed to achieve this with the approach below, but it forces me to download and iterate the CSV lines twice.

splunk_job = service.jobs.create(query, **kwargs)  # waits for the job to be done

splunk_job_result_args = {
    "output_mode": "csv"
}

# first pass: measure the total size of the results
splunk_job_results = splunk_job.results(**splunk_job_result_args)
results_length = 0
for bytes_csv_line in splunk_job_results:
    results_length += len(bytes_csv_line)
splunk_job_results.close()

# check whether results_length exceeds the limit;
# if not, download everything again and write it out
splunk_job_results = splunk_job.results(**splunk_job_result_args)
with open("results.csv", "wb") as f:  # destination path is illustrative
    for bytes_csv_line in splunk_job_results:
        f.write(bytes_csv_line)  # writes the bytes to a file
splunk_job_results.close()

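A single-pass alternative I've been considering: write the file while counting, then roll it back if the limit is exceeded. MAX_RESULT_BYTES and the file path are placeholders of mine, not SDK names:

import os

MAX_RESULT_BYTES = 100 * 1024 * 1024  # hypothetical limit

splunk_job_results = splunk_job.results(**splunk_job_result_args)
results_length = 0
with open("results.csv", "wb") as f:
    for bytes_csv_line in splunk_job_results:
        results_length += len(bytes_csv_line)
        f.write(bytes_csv_line)
splunk_job_results.close()

# if the results turned out larger than allowed, discard the file
if results_length > MAX_RESULT_BYTES:
    os.remove("results.csv")
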
I tried to install Splunk using the command below in PowerShell; it installs without any issue and the service runs in services.msc:

msiexec.exe /i <location of .msi file> AGREETOLICENSE=Yes SPLUNKUSERNAME=admin SPLUNKPASSWORD=PASSWORD1 LAUNCHSPLUNK=1 /qb

But when I run the same command in AWS, it still picks the Windows AMI, not the Splunk AMI. Can anyone advise?

We have date fields in the format, for example, 12Jun22. I need to format them like 12-06-2022, as shown in the table below:

date        expected format
12Jun22     12-06-2022
13Jun22     13-06-2022

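A sketch of the usual strptime/strftime round trip, assuming the field is literally named date and the values always use this day/abbreviated-month/2-digit-year form:

| eval expected_format=strftime(strptime(date, "%d%b%y"), "%d-%m-%Y")
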
I am trying to compare avg_rt for uWSGI workers over the last 15 minutes and the last 7 days, and then get a percentage out of it. If the difference is more than 50%, I want to trigger an alert. Here is my search:

host="prod-web-02" source="/var/log/uwsgi/app/uwsgi-metrics.log" earliest=-7d latest=now
| stats avg(avg_rt) AS seven_days
| append
    [ search host="prod-web-02" source="/var/log/uwsgi/app/uwsgi-metrics.log" earliest=-15m latest=now
    | stats avg(avg_rt) AS fifteen_mins ]
| eval Result = (( fifteen_mins / seven_days ) * 100 )
| where Result > 50

I am unable to get a Result, whatever number I choose. It is not able to execute this part:

| eval Result = (( fifteen_mins / seven_days ) * 100 )
| where Result > 50

I am getting values for fifteen_mins and seven_days:

seven_days               fifteen_mins
320588.43640873017       360114.4

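A sketch of what I suspect is the fix: append leaves seven_days and fifteen_mins on two separate rows, so the eval sees a null on every row. appendcols lines both aggregates up on the same row before the eval (keeping the threshold logic exactly as written above):

host="prod-web-02" source="/var/log/uwsgi/app/uwsgi-metrics.log" earliest=-7d latest=now
| stats avg(avg_rt) AS seven_days
| appendcols
    [ search host="prod-web-02" source="/var/log/uwsgi/app/uwsgi-metrics.log" earliest=-15m latest=now
    | stats avg(avg_rt) AS fifteen_mins ]
| eval Result = (fifteen_mins / seven_days) * 100
| where Result > 50
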
Hello,

My alert produces a table like this:

Time     ID    FILE_NAME    STATUS
_time1   3     file1.csv    SUCCESS
_time2   5     file2.csv    DATA_ERROR

I want to send an inline table that only contains STATUS=DATA_ERROR. But in the body of the email, I still want to use the tokens $result.Time$ and $result.FILE_NAME$ from the STATUS=SUCCESS row. Email body example:

1. File name success detail:
File name: file1.csv
Effective time: _time1

2. Data error detail:
ID    FILE_NAME    STATUS
5     file2.csv    DATA_ERROR

So basically: hide the STATUS=SUCCESS row, but still use its values in the email tokens. Thank you in advance!

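A sketch of one direction, under my assumption that there is a single SUCCESS row per run: copy the SUCCESS values onto every row with eventstats, then drop the SUCCESS row itself, so the $result.*$ tokens (taken from the first remaining row) still carry them:

| eval success_file=if(STATUS="SUCCESS", FILE_NAME, null()), success_time=if(STATUS="SUCCESS", Time, null())
| eventstats values(success_file) as success_file values(success_time) as success_time
| where STATUS="DATA_ERROR"

The email body could then use $result.success_file$ and $result.success_time$, and the inline table would show only the DATA_ERROR rows.
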
Find large CSV lookups above 400 MB (500 MB limit):

| rest splunk_server=* /servicesNS/-/-/data/transforms/lookups getsize=true f=size f=title f=type f=filename f=eai*
| fields splunk_server filename title type size eai:appName
| where isnotnull(size)
| eval KB = round(size / 1024, 2)
| fields - size
| sort - KB
| search KB>400000

Use this to reduce a CSV lookup (example):

| inputlookup file.csv
| eval time_epoch = strftime(_time,"%s")
| where time_epoch>relative_time(now(),"-100d@d")
| outputlookup file.csv append=false

I have the query below, and I need a scatter-point visualization for it: time on the x-axis and build duration on the y-axis, with the different job URLs as labels. How can I achieve this?

index="maas-01" sourcetype="jenkins_run:pipeline/describe" source=* "content.stages{}.stage_name"="build:execute"
| rename content.stages{}.stage_duration_sec as duration content.stages{}.stage_name as name content.build_id as id
| eval trimed_source = trim(source, "jenkins_run:/job/")
| eval job_url = substr(trimed_source, 1, len(trimed_source)-2)
| search job_url IN ($_job_url$)
| table id _time name duration job_url
| eval res=mvzip(name, duration)
| eval name=mvindex(name, mvfind(res, "^build:execute.+")), duration=mvindex(duration, mvfind(res, "^build:execute.+"))
| eval time=strptime(strftime(_time, "%Y-%m-%d %H:%M:%S.%N"),"%Y-%m-%d %H:%M:%S.%N")
| eval bEx_Duration_minutes=round(duration/60, 2)
| fields job_url time bEx_Duration_minutes

I just need the time in a human-readable format, not an epoch number. Is it possible to use a scatter plot for the above query with the default _time, or is there another way to do this? The visualization currently being generated is what I want; I just need a readable date and time (or date only) instead of the epoch number.

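One sketch I'd experiment with, under my assumption that Splunk's scatter chart reads the columns as series, then x, then y: replace the epoch field with a formatted string for the x-axis:

| eval time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table job_url time bEx_Duration_minutes

If the value needs to stay numeric for correct ordering, fieldformat changes only the rendering (whether the chart honors it is another assumption of mine):

| fieldformat time=strftime(time, "%Y-%m-%d %H:%M:%S")
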
Hi all,

We are upgrading one of our environments from Splunk 8.2.0 to Splunk 9.0. We hit an issue when we tried to upgrade the indexers: the Splunk upgrade process got stuck partway through. I was looking on the internet, but I cannot see anything related to this issue. Can someone help me with this?

Many thanks in advance. Best regards.

Hi,

We face a challenge. We have created an alert that monitors one of our Windows services (the cloud gateway service); if the service is not running or has stopped, Splunk triggers an alert.

I wanted to check whether it is possible that, when Splunk triggers such an alert, it can also resolve it: log in to that server and restart the service. We have identified one possible solution: executing a script as the alert action. Where can we set up the script (the target is host=CSG196)? Can we deploy the script to the host? Can anyone suggest how to resolve this?

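For discussion, a rough sketch of the kind of restart script we have in mind - the service name below is a placeholder, and the WinRM/PowerShell remoting approach is an assumption about the environment:

# restart_gateway.ps1 - candidate script for the alert action
# assumes WinRM / PowerShell remoting is enabled on the target host
Invoke-Command -ComputerName CSG196 -ScriptBlock {
    # service name is hypothetical - replace with the real cloud gateway service name
    Restart-Service -Name "CloudGatewayService" -ErrorAction Stop
}
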
Hello All,

I have a problem with my search. The following search works:

index=test_index sourcetype=test_sourcetype
| search Modulename IN ("Test_One","Test_Two")

However, this search does not work:

index=test_index sourcetype=test_sourcetype
| eval helper_modulename = replace("Test_One&form.Modulename=Test_Two", "&form.Modulename=", "\",\"")
| eval helper_modulename = "\"" . helper_modulename . "\""
| search Modulename IN (helper_modulename)

The result of helper_modulename is the same string I use in the search that works. Can anyone tell me what I am doing wrong and what needs to be adapted to make it work? Thank you all in advance!

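For reference, a workaround sketch I've seen for this class of problem (my understanding: IN inside | search takes literal values at parse time, so a field holding the quoted string is never expanded): build a multivalue field and test membership instead:

index=test_index sourcetype=test_sourcetype
| eval helper_modulename = split(replace("Test_One&form.Modulename=Test_Two", "&form.Modulename=", ","), ",")
| where isnotnull(mvfind(helper_modulename, "^" . Modulename . "$"))

mvfind(mvfield, "regex") returns the index of the first matching value, or null when nothing matches, so the where clause keeps only events whose Modulename appears in the list.
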