All Topics


Hello, I am trying to find the timings between multiple calls that share the same extracted field, InterchangeId. When using streamstats range(_time) I get the timing between the calls; however, the first call in time order carries the total time and the last call has a 0 value. I am trying to determine how long it takes between each call, in the correct order, without one of the calls aggregating to the total timing value. Below is a screenshot of the results as well as the search. I appreciate any help with this!
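By default streamstats uses a cumulative window, so range(_time) spans every event seen so far for the group, which matches the symptom described. A sketch of one common fix, with the base search elided and assuming events can be sorted ascending by time:

```
... your base search ...
| sort 0 _time
| streamstats window=2 global=false range(_time) as secs_since_prev by InterchangeId
```

With window=2 and global=false, each event's window holds only itself and the previous event for the same InterchangeId, so secs_since_prev is the gap to the prior call (and 0 for the first call of each InterchangeId).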
Hi, is it possible to change the default priority of  data source(s) in the TrackMe app using a tag which is defined in TrackMe?   Thanks
I have an LM, a CM, and a few Unix servers hosting my indexers. I would like to patch the Unix servers and reboot them. In what order should the Splunk servers (Unix servers) be patched and rebooted? Some have to be put into maintenance mode first, right?
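For the indexer tier specifically, the usual (hedged) approach is to patch the supporting roles first (LM, then CM) and put the cluster into maintenance mode before rebooting peers, so the CM doesn't trigger bucket fix-up for every restart. A sketch of the CLI steps, assuming a clustered environment:

```
# On the cluster manager (CM), before touching any indexer peer:
splunk enable maintenance-mode

# On each indexer peer in turn: stop cleanly, patch the OS, reboot
splunk offline

# After all peers have rejoined the cluster, back on the CM:
splunk disable maintenance-mode
splunk show maintenance-mode   # confirm it is off again
```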
Hi everybody, we are seeing poor performance in metrics index searches, in particular when a "group by" clause is used on dimensions with many values. Naturally, performance degrades as the searched time interval grows. We set up the metric rollup mechanism to aggregate raw values into 1-hour summaries, expecting better performance. Hard to believe: search performance is worse on the aggregated index than on the original one. It seems that the internals of how metrics indexes are built heavily impact our searches. Does anyone have any ideas, or specific information on metrics indexes beyond what's written in the documentation? Thanks
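One thing worth verifying (hedged, since the setup isn't shown): rollup summaries only get used when the search's aggregation function and span line up with the rollup policy. Querying the rollup index directly isolates whether the summaries are being hit at all; the index and metric names below are placeholders:

```
| mstats avg(_value) WHERE metric_name="cpu.user.pct" index=my_rollup_index span=1h BY host
```

If that is still slow, high-cardinality dimensions in the BY clause are the likely cost, since each distinct dimension combination is stored as a separate metric series.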
I'm sure this is a simple fix, but I have been a little stuck. I'm trying to take a text input and use it to populate the corresponding dashboard. I have the input panel added and a simple dashboard added, but the searches are not completing, even though I know it is a valid search. (I had originally defined the input with token="$Name_tok$", dollar signs included.) Any help is appreciated.

<fieldset submitButton="false">
  <input type="text" token="Name_tok" searchWhenChanged="true">
    <label>Enter Name</label>
    <default>*</default>
  </input>
</fieldset>
<row>
  <panel>
    <table>
      <search>
        <query>index=prod_devices Name=$Name_tok$ | table date, Name, version | dedup Name</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
Hi, can anyone help? I want to get an alert if the volume drops or the processing time increases for a specific server, compared with the previous 5 minutes. The query should use the volume and average response time of the current 5 minutes and the last 5 minutes, and alert if the volume drops by more than 50% or the processing time increases by more than 60%.
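One possible shape for such an alert (a sketch; the index, host, and response-time field names are placeholders, and the thresholds assume "volume falls below 50% of the previous window" and "response time grows by more than 60%"):

```
index=app_logs host=myserver earliest=-10m@m latest=now
| eval window=if(_time >= relative_time(now(), "-5m"), "current", "previous")
| stats count as volume avg(response_time) as avg_resp by window
| transpose 0 column_name=metric header_field=window
| where (metric="volume" AND current < previous*0.5)
     OR (metric="avg_resp" AND current > previous*1.6)
```

Saved as an alert that triggers when the result count is greater than zero.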
Hello. We are currently utilizing TA-ms-loganalytics from Splunkbase. Although we are able to ingest data from Log Analytics (Azure), we do encounter situations where the feed stops. Has anyone else faced similar issues? Below are some of the error messages we receive (as logged, newest line first):

message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py" ERROR local variable 'data' referenced before assignment
message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py" UnboundLocalError: local variable 'data' referenced before assignment
message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py"     for i in range(len(data["tables"][0]["rows"])):
message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py"   File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/input_module_log_analytics.py", line 86, in collect_events
message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py"     input_module.collect_events(self, ew)
message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py"   File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py", line 96, in collect_events
message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py"     self.collect_events(ew)
message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py"   File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/modinput_wrapper/base_modinput.py", line 127, in stream_events
message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py" Traceback (most recent call last):

If you encountered similar issues/errors, it would be great to learn how you resolved them. Thanks. Regards, Max
This is the result of a query that reflects license consumption by day:

Index 3/2/2021 3/3/2021 3/4/2021 3/5/2021 3/6/2021 3/7/2021 3/8/2021
index01 0.018006 0.018128 0.018065 0.018035 0.017944 0.017985 0.018042
index02 0.014985 0.009444 0.054803 0.010401 0.006807 0.005035 0.008998
index03 3.468919 3.674277 3.786565 3.133193 2.151094 2.548173 4.531934
index04 0.084911 0.082637 0.090785 0.062404 0.012795 0.031198 0.084129

I'm trying to compute a daily difference so we can easily spot variance/trends, with the result looking something like this:

Index 3/2/2021 3/3/2021 dif day 1 3/4/2021 dif day 2 3/5/2021 dif day 3 3/6/2021 dif day 4 3/7/2021 dif day 5 3/8/2021 dif day 6
index01 0.018006 0.018128 -0.00012 0.018065 0.00006 0.018035 0.00003 0.017944 0.00009 0.017985 -0.00004 0.018042 -0.00006
index02 0.014985 0.009444 0.005541 0.054803 -0.04536 0.010401 0.04440 0.006807 0.00359 0.005035 0.00177 0.008998 -0.00396
index03 3.468919 3.674277 -0.20536 3.786565 -0.11229 3.133193 0.65337 2.151094 0.98210 2.548173 -0.39708 4.531934 -1.98376
index04 0.084911 0.082637 0.002274 0.090785 -0.00815 0.062404 0.02838 0.012795 0.04961 0.031198 -0.01840 0.084129 -0.05293

The query I started from is below; I've tried twenty ways to Sunday to get a difference column, but no joy:

index=_internal source=*license_usage.log* type=Usage earliest=-7d@d latest=@d host=licenseserver
| eval GB=round(b/1024/1024/1024,6)
| bucket span=1d _time
| eval Time=strftime(_time,"%m/%d/%y")
| chart sum(GB) AS volume_GB over Time by idx limit=0
| transpose 0 column_name=Index header_field=Time

I'm not married to chart or transpose, it's just where it all started. Any suggestions?
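One possible approach (a sketch, not tested against real data): compute the day-over-day difference per index before pivoting, using untable and a per-index streamstats window, then pack "value (diff)" into each cell:

```
index=_internal source=*license_usage.log* type=Usage earliest=-7d@d latest=@d host=licenseserver
| eval GB=round(b/1024/1024/1024,6)
| timechart span=1d sum(GB) as GB by idx limit=0
| untable _time idx GB
| streamstats current=f window=1 global=false last(GB) as prev_GB by idx
| eval diff=round(GB-prev_GB,6)
| eval day=strftime(_time,"%m/%d/%y")
| eval cell=GB.if(isnull(diff),""," (".diff.")")
| xyseries idx day cell
```

If separate "dif day N" columns are preferred, keep GB and diff as distinct fields before the final xyseries instead of concatenating them into one cell.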
Hi Community. I have this SPL:

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity
| rename "IDS_Attacks.*" as "*"
| eval temp=""
| chart useother=true first(count) over temp by severity
| rename temp as count

and it's working fine. However, I have values for IDS_Attacks.severity in the form of both "high" and "High", apart from other values which I would like to keep intact. The SPL is counting the two as different values, and I would like them merged into one count as "High". I tried this:

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity
| rename IDS_Attacks.severity as severity2
| eval temp=""
| eval severity3 = if(severity2="high","High", severity2)
| chart useother=true first(count) over temp by severidad2
| rename temp as count

and it's not working. Note I need the SPL to drive a report shown on a dashboard. Thanks in advance.
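Since "high" and "High" arrive as distinct rows from tstats, they need to be normalized and then re-aggregated before charting; otherwise two rows with the same label still chart separately. A sketch along those lines:

```
| tstats summariesonly=true allow_old_summaries=true count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.severity
| rename IDS_Attacks.severity as severity
| eval severity=if(severity="high","High",severity)
| stats sum(count) as count by severity
| eval temp=""
| chart useother=true first(count) over temp by severity
| rename temp as count
```

The stats sum(count) step merges the two normalized rows into a single "High" count while leaving all other severity values intact.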
All, I've been trying to find a solution for this for a few days. We have multiple tools sending in data on their coverage, and we would like a search that shows hosts which exist in one but not the other (in SQL terms, an OUTER JOIN). I have found that Splunk doesn't support a true outer join, so I'm still searching for a solution. Edit: spelling
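A common join-free pattern for "in one but not the other" in Splunk is to bring both sources into one search and count distinct sources per host. The index names below are placeholders for the two tools' data:

```
(index=toolA) OR (index=toolB)
| eval tool=if(index="toolA","A","B")
| stats values(tool) as tools dc(tool) as tool_count by host
| where tool_count=1
```

Each remaining row is a host seen by exactly one tool, and the tools field tells you which one; dropping the where clause gives the full outer-join-style picture.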
Hello! I see that a new version of the Splunk Cloud Gateway (https://splunkbase.splunk.com/app/4250) was released on March 4th.  Is it compatible with Python3?  If not, when? Thanks! Andrew
I have this in the logs for the K8s cluster agent. [INFO]: 2021-03-09 15:23:14 - agentregistrationmodule.go:119 - Initial Agent registration [ERROR]: 2021-03-09 15:23:14 - agentregistrationmodule.go:131 - Failed to send agent registration request: Status: 404 Not Found, Body: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>AppDynamics - Error report</title><style type="text/css"><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 404 - Not Found</h1><hr/><p><b>type</b> Status report</p><p><b>message</b>Not Found</p><p><b>description</b>The requested resource is not available.</p><hr/><h3>AppDynamics</h3></body></html> [ERROR]: 2021-03-09 15:23:14 - agentregistrationmodule.go:132 - clusterId: -1 [ERROR]: 2021-03-09 15:23:14 - agentregistrationmodule.go:134 - Registration properties: {}
I have two separate deployment apps going to separate server classes. One, call it A, is working great with full event filtering and blacklisting via $XmlRegex in inputs.conf; see below:

whitelist1 = $XmlRegex="<EventID>(1|12|13|6|1100|4624|4625|4634|4648|4663|4672|4688|4719|4722|4724|4732|4733|4735|4737|4739|4778|4779|4946|5140)<\/EventID>"
blacklist1 = $XmlRegex="<EventID>4688<\/EventID>.*<Data Name='NewProcessName'>[C-F]:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe"

This is under the WinEventLog stanza. I've now copied this over to class B, which has more hosts, but the whitelist/blacklist isn't working there. Our license is getting hit hard by the 4688 events, so we need to trim them down considerably. I've read other similar questions, but we're using XML Windows logs, so the classic answers don't work. I have also tried outright blocking it with props and transforms on our intermediate heavy forwarder, with no success there either, so help with either inputs or props and transforms would be great.
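If inputs-side filtering can't be made to work on class B, a nullQueue drop on the first parsing tier (the intermediate heavy forwarder, assuming events are not already parsed upstream) is a common fallback. The sourcetype stanza below is an assumption; match whatever the XML events actually carry:

```
# props.conf
[XmlWinEventLog]
TRANSFORMS-drop4688 = drop_xml_4688

# transforms.conf
[drop_xml_4688]
REGEX = <EventID>4688</EventID>
DEST_KEY = queue
FORMAT = nullQueue
```

Note this still spends forwarder bandwidth on the events before discarding them, unlike the inputs.conf blacklist, so fixing the class B inputs remains the better end state.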
Hello All, I am trying to ingest some Azure data from our DCs. I have the following two stanzas added to our Splunk_TA_windows inputs.conf, and we still do not see any data, nor any errors, from any of the hosts that have the Azure data.

[WinEventlog://Microsoft-AzureADPasswordProtection-DCAgent/Admin]
index = wineventlog
disabled = 0
renderXml=true

[WinEventlog://Microsoft-AzureADPasswordProtection-DCAgent/Operational]
index = wineventlog
disabled = 0
renderXml=true

Not sure why we are not seeing any data in Splunk. The AD admin says he sees logs on the host but not in Splunk, so it seems Splunk is not ingesting the data, and I am lost as to why. Thanks
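One detail worth double-checking (a guess, since the rest of the config looks plausible): the stanza prefix is usually written WinEventLog with a capital "L", and a WinEventlog stanza may simply be ignored without producing an error. A sketch of the corrected stanzas, with the channel names verified on the DC via wevtutil el:

```
[WinEventLog://Microsoft-AzureADPasswordProtection-DCAgent/Admin]
index = wineventlog
disabled = 0
renderXml = true

[WinEventLog://Microsoft-AzureADPasswordProtection-DCAgent/Operational]
index = wineventlog
disabled = 0
renderXml = true
```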
I'm trying to upload an ASCII file (created on an IBM mainframe) into Splunk using the Lookup - Add new lookup table file option. It's not in CSV format, just text with no comma-separated fields (also known as a flat/sequential file). I'm getting the following error: "Encountered the following error while trying to save: File is binary or file encoding is not supported, only utf-8 encoded files are supported". Am I using the correct option to get the data into Splunk? If yes, how do I overcome the error?
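Despite being described as ASCII, a mainframe export often arrives in EBCDIC or an 8-bit codepage, which would trigger exactly this error. A hedged sketch of converting it before upload; the filenames are examples, and the source encoding used here (ISO-8859-1) is a stand-in for whatever `file yourfile` actually reports (e.g. IBM-1047 for EBCDIC):

```shell
# Simulate a non-UTF-8 export: 0xE9 is 'é' in Latin-1, invalid as UTF-8
printf 'ACCT-001 caf\351 42\n' > mainframe_export.txt

# Splunk lookup files must be UTF-8; convert before uploading
iconv -f ISO-8859-1 -t UTF-8 mainframe_export.txt > lookup_ready.txt
```

Separately, the lookup uploader expects CSV, so a fixed-width flat file usually also needs delimiters and a header row added before it is useful as a lookup.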
Hi, I want to add a background color to a table in a panel using CSS, but without using "!important", as I don't want to overwrite the existing configuration of the panel in the dashboard. Actual requirement: https://community.splunk.com/t5/Dashboards-Visualizations/Css-Overwrites-existing-feature-within-dashboard/m-p/541368/highlight/true#M37095

<row>
  <panel depends="$alwaysHideCSS$">
    <html>
      <style>
        #table1 .table th, .table td{
          background-color: #808080 !important;
        }
        #table1 .table th, .table tr{
          background-color: #FFA500 !important;
        }
        #Panel1 .dashboard-panel,#Panel .dashboard-panel {
          background: #808080 !important;
        }
      </style>
    </html>
  </panel>
</row>
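Without !important, a rule only wins if its selector is more specific than the theme's own rule, so one hedged option is to lengthen the selector chain. Whether these exact chains out-rank Splunk's built-in styles depends on the theme, so inspect the rendered DOM to confirm the markup:

```
<style>
  /* Higher specificity than ".table th" without using !important */
  #table1 table.table thead th {
    background-color: #808080;
  }
  #table1 table.table tbody td {
    background-color: #FFA500;
  }
</style>
```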
Android agent plugin version: com.appdynamics:appdynamics-gradle-plugin:20.10.0. Using the Android Gradle plugin and calling an assemble task for a particular Android product flavour and build type puts the mapping files in an output folder that the AppDynamics Android plugin cannot find. For example, where "test" is the product flavour and "prod" is the build type, ./gradlew assembleTestProd gives the output: > Task :app:appDynamicsUploadProguardMappingTestProd <UploadProguardMappingFileTask_Decorated> Proguard is enabled but mapping file was not generated. Please check your build configuration. Gradle puts the mapping file in build/outputs/mapping/testProd/, but the AppDynamics plugin seems to expect the mapping file to be in build/outputs/mapping/test/. Expectation: the AppDynamics plugin should create tasks that upload the mapping file from the same output location as the specific product flavour and build type that was assembled.
I have the below JSON-format data in a Splunk index. We know Splunk supports JSON; fields like event_simpleName are already extracted.

{"FileDeletedCount":"0","DirectoryCreatedCount":"0","ContextThreadId":"0","aip":"1.2.3.34","NetworkConnectCount":"0","NetworkListenCount":"0","event_platform":"Mac","NetworkBindCount":"0","NetworkRecvAcceptCount":"0","id":"31chdshduf-eb-a92adkh","NewExecutableWrittenCount":"0","NetworkCloseCount":"0","SuspectStackCount":"0","timestamp":"161233596129","event_simpleName":"EndOfProcess","RawProcessId":"72363","ContextTimeStamp":"1615298594.566","ConfigStateHash":"123345","ContextProcessId":"34ddf404471","AsepWrittenCount":"0","SuspiciousDnsRequestCount":"0","S6677HashData":"481572c78b13ebecd3f35d223d86e484fghlsjdljfldjfrce","ConfigBuild":"1007.4.0012205.1","NetworkCapableAsepWriteCount":"0","ExecutableDeletedCount":"0","TargetProcessId":"343242632616804471","DnsRequestCount":"0","Entitlements":"15","name":"EndOfProcessMacV15","aid":"gsdehlsahfhsafkskcdnnf","cid":"3sdkfksfjsjfjlfsj4d14ab9e0063774b51f9"}

I want to create a new field, sysmon, holding event_simpleName (keeping the original field as well), and create a new field for its value, so that EndOfProcess shows as process_terminated (currently that value doesn't show in the extracted fields). I tried to use props.conf and it doesn't work; it was deployed on the search heads as well as the HF:

FILEDALIAS -sysmon = event_simpleName as symon

Any suggestions here?
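For reference, the conventional spelling of that kind of config looks like the sketch below (the stanza name is a placeholder for the actual sourcetype, and both settings are search-time, so they belong on the search heads):

```
# props.conf
[your_sourcetype]
FIELDALIAS-sysmon = event_simpleName AS sysmon
EVAL-process_state = if(event_simpleName=="EndOfProcess", "process_terminated", null())
```

FIELDALIAS keeps the original event_simpleName and adds sysmon as a copy; the EVAL line adds a process_state field that reads "process_terminated" for EndOfProcess events and stays empty otherwise.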
Hi, I am trying to post JSON to Splunk through an HTTP requester; after posting to Splunk, I can only see it in raw format. Before, we were using JSON Logger. I need to make the post call to Splunk through the HTTP requester and get the same format I saw with JSON Logger, because some dashboards already rely on it and we don't want to disturb them. Can you please help with this? Regards, Yashwanth.
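If the HTTP requester is posting to the HEC raw endpoint, the payload is stored as plain text; the event endpoint wraps the JSON in an envelope so Splunk extracts the fields, which is typically what JSON Logger produced. A sketch of the request shape (host, token, and field names are placeholders):

```
POST https://splunk.example.com:8088/services/collector/event
Authorization: Splunk <hec-token>

{
  "sourcetype": "_json",
  "event": {
    "correlationId": "abc-123",
    "payload": "your structured JSON goes here as a nested object, not a string"
  }
}
```

The key detail is that "event" holds a JSON object rather than a pre-serialized string; a string value would land in Splunk as raw text again.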
Hey there! We're using the Google Cloud App to ingest logs sent to GCS by Stackdriver. Stackdriver produces logs in GCS in JSON format with multiple events per file, newline separated. Each resulting JSON file can be up to 200MB in size, causing the Splunk input to choke on the data. Example (truncated for brevity):

gs://mybucket/stackdriver-logs/20210309.json:
{"insertId":"c4fc7617-638d-4553-a7c1-861b44b06299","labels":"blah"}
{"insertId":"6c386a11-ebed-42e0-9ceb-6db36c8ea40e","labels":{"blah blah"}

Can we configure the Cloud App plugin or Splunk to split each JSON document from the file into its own event?
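Assuming the events reach the parsing tier as one large blob, newline-delimited JSON can usually be split with line-breaking settings on the sourcetype the input assigns (the stanza name below is a placeholder; check what sourcetype the GCS input actually uses):

```
# props.conf on the indexers / first parsing tier
[google:gcs:json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 0
KV_MODE = json
```

One caveat: if the add-on's modular input submits each file as a single pre-cooked event, these parse-time settings may not apply, in which case the splitting has to happen in the input's own configuration.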