All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello All, We are planning to upgrade our Phantom platform from version 4.10.7 to 5.0.1. If anyone has already upgraded to 5.0.1, could you please share any known issues? Any links to go through would really help. Thanks, Sharada
I have two files in CSV format. I want to check whether a unique field from the first file is present in the second file; if it is present, mark the value as true, otherwise false. Kindly help with the command.
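One way to approach this in SPL (a sketch — file1.csv, file2.csv, and the shared field name id are placeholders for your actual lookup files and field):

```
| inputlookup file1.csv
| lookup file2.csv id OUTPUT id AS matched
| eval present=if(isnotnull(matched), "true", "false")
| fields - matched
```

The lookup command returns the matching value only when the id exists in the second file, so present ends up "true" or "false" per row.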
I am attempting to eval a new field from two other fields:

| eval 4XXError=if(metric_name="4XXError", statistic_value, null())
| eval 5XXError=if(metric_name="5XXError", statistic_value, null())
| eval total_errors='4XXError'+'5XXError'

When I come to stats them out:

| stats last(4XXError), last(5XXError), last(total_errors) by api_name, http_method, url

the total_errors column is just blank. Where am I going wrong?

Also, why does 4XXError need to be single-quoted? Is it because it starts with a number?
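Two things appear to be at play here. Field names that start with a digit must be single-quoted inside eval expressions so they are read as field references rather than literal values, so yes, the leading number is the reason. And since a given event is either a 4XXError or a 5XXError, one of the two fields is always null, and null plus anything is null, which blanks total_errors. A sketch of a fix using coalesce:

```
| eval 4XXError=if(metric_name="4XXError", statistic_value, null())
| eval 5XXError=if(metric_name="5XXError", statistic_value, null())
| eval total_errors=coalesce('4XXError',0)+coalesce('5XXError',0)
```

coalesce substitutes 0 for the null side, so the sum survives.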
Hi, I can't move buckets to the Splunk frozen archive; it gives errors:

07-19-2022 12:36:37.249 +0300 INFO DatabaseDirectoryManager - Getting size on disk: Unable to get size on disk for bucket id=kart~37~D99E1DD0-C0D8-4BE9-9A46-D4EDE5EFE178 path="/data/splunk/var/lib/splunk/kart/db/hot_v1_37" (This is usually harmless as we may be racing with a rename in BucketMover or the S2SFileReceiver thread, which should be obvious in log file; the previous WARN message about this path can safely be ignored.) caller=serialize_SizeOnDisk

I set file permissions (chown splunk:) on the frozen path, I increased ulimit to unlimited, and I can create files and folders on the frozen path as the splunk user. But as I said, Splunk can't move buckets to the frozen archive. Thank you.
Hi, We constantly have clients asking us to explain some of the metric names/types seen in the Metric Browser, and we still do not have precise answers. Hope someone can assist with this.

Does anyone have a different explanation for the Obs, Sum and Count metrics seen in the Metric Browser for an App agent? This page from the documentation, for example, describes Obs as "Obs—The observed average of all data points seen for that interval", but if you are looking at one point on the graph for Calls Per Minute, then there can in reality be only one metric value, one actual number. If the agent reports 73 calls per minute as the metric for a point in time, then what would the other data points be reporting for that exact same point in time? This is not clear to me.

I have a similar issue with the Count metric. If we are looking at the graph in 1-minute intervals of data (see graph below), then what does it mean if the Observed value is 73 but Count is 3? To me it would make sense if the count were always 0 or 1, meaning that at that one point in time the agent either reported x calls per minute as a value/metric or it reported nothing, but this is not the case.
Hi All, I have around 100+ lookups, which get updated daily from indexed data using a macro and a saved search. I want to find out if any of these lookups are getting flushed and their row count drops to 0.

I created a lookup with all the lookup names and tried to pass the output to another lookup command and pull the stats, but this is not working. Any suggestion to fulfill this requirement would be appreciated. Thanks, Rajesh
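One pattern that can work here is driving a map subsearch from the lookup that holds the lookup names (a sketch — lookup_names.csv and its lookup_name column are assumed names, and map's maxsearches limit needs to cover all 100+ entries):

```
| inputlookup lookup_names.csv
| map maxsearches=200 search="| inputlookup $lookup_name$ | stats count AS row_count | eval lookup=\"$lookup_name$\""
| where row_count=0
```

Each row of the driving lookup is substituted into the subsearch, so the result lists any lookup whose row count has dropped to zero.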
Hello Team, Every time after a server reboot my machine agent stops reporting and I have to start it manually. Can anyone please suggest the procedure to run it as a service on Linux, so that the machine agent starts automatically after a server restart?
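On systemd-based distributions, one common approach is wrapping the agent in a unit file. A sketch (the install path, service user, and Java location are assumptions to adjust for your environment):

```ini
# /etc/systemd/system/appdynamics-machine-agent.service
[Unit]
Description=AppDynamics Machine Agent
After=network.target

[Service]
Type=simple
User=appdynamics
ExecStart=/usr/bin/java -jar /opt/appdynamics/machine-agent/machineagent.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then register and start it with `systemctl daemon-reload` followed by `systemctl enable --now appdynamics-machine-agent`; the agent should come back automatically on reboot.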
Hi, I am new to AppDynamics and need help creating health rules, policies, alerts, and notifications. Please point me to any documents or PDFs that can help me do this. Thanks
Hi All, I am writing a query with the following:

index=a0_payservutil_generic_app_audit_npd "kubernetes.labels.release"="mms-au" MNDT
| rex field=log "eventName=\"*(?<eventName>[^,\"\s]+)"
| rex field=log "serviceName=\"*(?<serviceName>[^\"]+)"
| rex field=log "elapsed=\"*(?<elapsed>[^,\"\s]+)"
| search eventName="ACCOUNT_DETAIL" AND serviceName="Alias "
| eval newtime=round((elapsed/1000),2)
| stats count(newtime) AS TotalNoOfEvents, count(eval(newtime>=1)) AS SLACrossedEvents
| eval perc=((SLACrossedEvents/TotalNoOfEvents)*100)
| where perc>1
| stats avg(newtime) as avg
| eval calc=if(newtime>=1,avg,0)
| eval eventName="ACCOUNT_DETAIL"
| eval serviceName="Alias"
| fields eventName serviceName TotalNoOfEvents SLACrossedEvents perc calc

I need to calculate the average time of the events which crossed the SLA. For example, if 2 events crossed the SLA (elapsed time greater than 1 sec), and one event took 3 seconds and the other took 2 seconds, then we should display 2.5 as the average for that time period. I am not able to fetch it. Can you please help me with this? Thanks in advance.
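One likely cause: the second stats call only sees the fields produced by the first one, so newtime no longer exists by that point and avg(newtime) comes back empty. Computing the SLA-crossed average inside the same stats avoids that. A sketch of the tail of the query:

```
| stats count AS TotalNoOfEvents,
        count(eval(newtime>=1)) AS SLACrossedEvents,
        avg(eval(if(newtime>=1, newtime, null()))) AS calc
| eval perc=round((SLACrossedEvents/TotalNoOfEvents)*100, 2)
| where perc>1
```

The if(...) inside avg(eval(...)) feeds only the SLA-crossing durations into the average (non-crossing events contribute null, which avg ignores), so two events of 3 s and 2 s would yield calc=2.5.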
Hello, I have the following table: it keeps the _time, server, CPU load and the predicted CPU load. The last column, PREDICTION, tells whether the given value is a real reading or one predicted by ML algorithms (important: no ML Toolkit involved, but external result input).

Now, I would like to present all this in a dashboard in a decent way, kind of like the smartforecastviz macro of the ML Toolkit does: real CPU for a server with a normal line and the prediction with a dotted one, in the same color. Possibly all servers in the same line chart. No additional fancy things needed. I guess what would need to happen is a chart line type change depending on the last column, PREDICTION. Is this doable in any way? Kind Regards, Kamil
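One approach to sketch, assuming the field names _time, server, CPU, and PREDICTION from your table: split each server into two series, then dash the predicted series via the charting.fieldDashStyles option (the series names below are assumptions based on that split):

```
| eval series=server.if(PREDICTION="true", " (predicted)", "")
| timechart span=5m avg(CPU) by series
```

In the panel's Simple XML, the predicted series can then be styled per name, e.g.:

```
<option name="charting.fieldDashStyles">{"serverA (predicted)": "dash", "serverB (predicted)": "dash"}</option>
```

Colors would still need to be paired manually (e.g. via charting.fieldColors) so each server's real and predicted lines match.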
Hello, We want to send and monitor Prometheus metrics in Splunk EE based on our requirements. Monitoring is possible if we use Splunk Observability Cloud with Prometheus exporters sent via OpenTelemetry collectors. But I want the OpenTelemetry data (Prometheus metrics) in Splunk EE, as we have a Splunk Enterprise licensed version. So, can Prometheus metrics be monitored in Splunk Enterprise without Splunk Observability Cloud? If so, what are the plugins and ways to configure them? Any open suggestions are also welcome.
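One route worth evaluating: the OpenTelemetry Collector (contrib distribution) can scrape Prometheus endpoints and forward them to Splunk Enterprise over the HTTP Event Collector using the splunk_hec exporter, with the metrics landing in a metrics-type index. A minimal config sketch — the endpoint, token, index name, and scrape target are all placeholders:

```
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'my-app'
          scrape_interval: 30s
          static_configs:
            - targets: ['localhost:9090']

exporters:
  splunk_hec:
    token: "<your-HEC-token>"
    endpoint: "https://splunk.example.com:8088/services/collector"
    index: "prometheus_metrics"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [splunk_hec]
```

On the Splunk side this needs a metrics index and an enabled HEC token; the metrics are then searchable with mstats/mcatalog.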
Hello, I am experiencing an interesting issue. I am trying to filter for a specific value in a numeric field. The following statement works fine:

index="IndexA"
| eval A.distance=trim('A.distance',"'")
| eval A.distance='A.distance'/100
| search A.distance=1

If I try to replace the search with a where, I get the error "Error in 'where' command: Type checking failed. The '==' operator received different types."

index="IndexA"
| eval A.distance=trim('A.distance',"'")
| eval A.distance='A.distance'/100
| where A.distance=1

Event coverage of this field is 100%, and all the values return "Number" for typeof(). None of the values have digits after the decimal point. We are using Splunk Enterprise 8.2.3.3. Does someone know why the where statement yields an error in this case? Thanks
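A likely explanation: where takes an eval expression, and in eval syntax the dot is the string concatenation operator, so an unquoted A.distance is parsed as field A concatenated with field distance (a string), which then fails the numeric comparison against 1. The search command does not interpret the dot that way, which is why that version works. Single-quoting the field name in where should behave like the search version:

```
index="IndexA"
| eval A.distance=trim('A.distance',"'")
| eval A.distance='A.distance'/100
| where 'A.distance'=1
```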
<form script="search:tabs.js,custom_vizs:autodiscover.js, custom_vizs:dendrogram.js" stylesheet="search:da_service.css,search:tabs.css,custom_vizs:dendrogram.css" hideSplunkBar="true">

common.js:1798 SyntaxError: Unexpected token , in JSON at position 49
    at JSON.parse (<anonymous>)
    at _decodeOptions (dashboard_1.1.js:277)
    at dashboard_1.1.js:277
    at Function._.each._.forEach (common.js:1798)
    at Function.ready._enableComponentDivs (dashboard_1.1.js:277)
    at dashboard_1.1.js:277
    at Object.execCb (eval at module.exports (common.js:1117), <anonymous>:1658:33)
    at Module.check (eval at module.exports (common.js:1117), <anonymous>:869:55)
    at Module.eval (eval at module.exports (common.js:1117), <anonymous>:1121:34)
    at eval (eval at module.exports (common.js:1117), <anonymous>:132:23)

Failed to load resource: net::ERR_CONNECTION_TIMED_OUT https://e1345286.api.splkmobile.com/1.0/e1345286//0/1

Refused to apply style from 'https://*************:8000/en-US/static/@2e1fca123028.225/app/custom_vizs/dendrogram.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
Hello, As you can see, the 2 single-value panels are not correctly aligned. Is there a way to avoid this without changing the length of the title? Thanks
Hi all, I am wondering if it is possible to store a hashed value for the passwords that are required by some of the receivers. Some receivers, especially database-related ones, require a username and password to connect and collect measurement data, and I do not feel safe saving passwords in plaintext in agent_config.yaml.
I use the nlp-text-analytics app for similarity between two strings, but I get the above error.

When I run lines 1, 2, and 3, the result is about 14,973,764 (3814 × 3926) pairs. Interestingly, if I reduce the data so that the output of the third line is 13,348,400 (3400 × 3926) pairs, it does not give an error and the result of the fourth line is returned. This means the similarity command has problems with more than roughly 13.5 million pairs.

My Splunk is 8.2, nlp-text-analytics 1.1.3, python_upgrade_readiness_app 4.0.2.
I have a lookup with IP addresses (CIDR), and I need to find the intersections between the IP addresses. Splunk has a cidrmatch function. For every row in the same table where there is an intersection, I want to set a flag: set the notes field to 1.
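For reference, the eval function is cidrmatch(cidr, ip): it returns true when the IP falls inside the CIDR range. A minimal sketch, assuming your lookup has columns named cidr and ip and a notes field to flag (all assumed names):

```
| inputlookup ip_ranges.csv
| eval notes=if(cidrmatch(cidr, ip), 1, 0)
```

Note that cidrmatch compares one IP against one range; checking two CIDR ranges against each other for overlap would first require expanding or cross-joining the rows.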
We want to create and update incidents with a specific Severity level using the custom command "snowincident". The available options do not include a severity field we can use (a severity field is available in the snowevent custom command).
Hello everyone, I want to have a dynamic timewrap option on my dashboard. Based on the user input (a specific time range and a timewrap variable), I want some graphs on the dashboard to plot the events from that time range as well as the events from the day or week before, depending on the timewrap variable. Is this doable? I have attached some messy code. Thank you for your advice!
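This should be doable in Simple XML by exposing the wrap span as a dropdown token and feeding it to timewrap. A sketch — the index, span, and token names are placeholders:

```
<input type="dropdown" token="wrap_span">
  <label>Compare with</label>
  <choice value="1d">Previous day</choice>
  <choice value="1w">Previous week</choice>
  <default>1d</default>
</input>

<search>
  <query>index=my_index | timechart span=1h count | timewrap $wrap_span$</query>
  <earliest>$time_tok.earliest$</earliest>
  <latest>$time_tok.latest$</latest>
</search>
```

timewrap accepts a timespan argument (1d, 1w, ...), so the chart overlays the selected range against the previous day or week as the user picks.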
I need assistance with whitelisting as I can't make it work. I'm running the free trial version 9.0.0 of Splunk Enterprise. I have 1 receiver (on a CentOS VM) and some Windows and CentOS systems (VMs and physical devices) with the Universal Forwarder installed. I'm getting data in from all my systems.

On the Windows systems I only need to see data from select Windows Security Log events and would like to exclude all other log data/events. I've read Splunk's documentation about whitelisting and I guess I just don't understand what I'm reading. It doesn't seem to be working, as my license usage hasn't decreased, and/or I don't know how to verify whether it's working.

I created an inputs.conf file in /etc/system/local/ on the Universal Forwarders, and its content is:

[WinEventLog://Security]
whitelist=1100,1101,1102,4616,4624,4625,4634,4647,4648,4657,4704,4705,4719,4720,4722,4723,4724,4725,4726,4740,4767,4776,4777,4616

Is this correct?
Do I have to put the statement disabled = 0, or is it implied?
I haven't configured anything through Splunk Web; do I need to do that?
Where do I save the inputs.conf file: on the receiver only, on the Universal Forwarders only, or on both?
Do I need to include all the statements from the default inputs.conf file in my new one?
Besides decreased license usage, is there a way to know if my whitelist is working?

Thank you for any and all help.
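For reference, a sketch of what the stanza might look like with disabled = 0 made explicit (the event list is from the post above, with the duplicate 4616 removed; this belongs on each Universal Forwarder, not the receiver):

```
# $SPLUNK_HOME/etc/system/local/inputs.conf on each Universal Forwarder
[WinEventLog://Security]
disabled = 0
whitelist = 1100,1101,1102,4616,4624,4625,4634,4647,4648,4657,4704,4705,4719,4720,4722,4723,4724,4725,4726,4740,4767,4776,4777
```

One way to verify the filter is working (the index name is an assumption; adjust to wherever your Windows events land) is a search like `index=* source="WinEventLog:Security" | stats count by EventCode` over a recent window, which should list only the whitelisted event codes after the forwarder is restarted.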