All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


My actual query has all this data, but after I use transpose (| sort - _time | eval mytime=strftime(_time, "%B %d %Y") | fields - _* | transpose header_field=mytime) I only see the result for the first 5 columns. How can I make transpose work for more than 5 days of data? Also, is there a way to format the colors generically, since the dates change?
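A hedged sketch of a likely fix: transpose only outputs 5 columns by default, so passing an explicit row count as its first argument (any number at least as large as the number of days you expect; 30 here is just an assumption sized for a month) should surface all of them.

| sort - _time
| eval mytime=strftime(_time, "%B %d %Y")
| fields - _*
| transpose 30 header_field=mytime

As for coloring: table color formats are keyed by column name, so with column names that change daily you would likely need value-based (range) coloring rather than per-column settings.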
So I have a field (plugin_output) that has a paragraph of hardware info as one value. The only part of the value I'm concerned with is the "Computer SerialNumber". Is it possible to break this value down into multiple values? I've tried field extraction with no luck. It may be possible to do a string search, but I would also need variables to account for the actual serial number value I want.

<plugin_output>
Computer Manufacturer : VMware, Inc.
Computer Model : VMware7,1
Computer SerialNumber : VMware-65 6d 69 60 3b 89 2a a0-3b 4e bb 3f 2a 95 2f 49
Computer Type : Other
Computer Physical CPU's : 2
Computer Logical CPU's : 4
CPU0 Architecture : x64 Physical Cores: 2 Logical Cores : 2
CPU1 Architecture : x64 Physical Cores: 2 Logical Cores : 2
Computer Memory : 8190 MB
RAM slot #0 Form Factor: DIMM Type : DRAM Capacity : 8192 MB
</plugin_output>
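A hedged rex sketch that might get you just the serial number, assuming each attribute sits on its own line inside the raw value (the capture name computer_serial is just a choice):

| rex field=plugin_output "Computer SerialNumber\s*:\s*(?<computer_serial>[^\r\n]+)"

If the value is actually one flattened line, anchor the capture on the next label instead, e.g. "Computer SerialNumber\s*:\s*(?<computer_serial>.+?)\s+Computer Type".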
Hello Community, is there a way to create a health rule based on Job Status?
Given a query

| mstats sum(ktm.lag_ms_count) as sum_count where index=ktm

I want to restrict the results based on another attribute, like this:

| mstats sum(ktm.lag_ms_count) as sum_count where index=ktm,ktm.lag_ms_mean > 120000

But this doesn't work. Is it possible to do this kind of filter in mstats? I've been able to do exact-match filters (where index=ktm,cluster=app), but the range filter doesn't work.
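As far as I know, the mstats WHERE clause only filters on dimensions (indexed metadata), not on metric values, which is why the range condition won't parse. A hedged workaround sketch: pull both metrics in one mstats split by a dimension (host and span=1m below are assumptions; use whatever granularity makes sense), then filter afterwards with where.

| mstats sum(ktm.lag_ms_count) as sum_count avg(ktm.lag_ms_mean) as mean_lag where index=ktm by host span=1m
| where mean_lag > 120000
| stats sum(sum_count) as sum_count

Note that averaging an already-aggregated mean is itself approximate, so treat the 120000 threshold as indicative rather than exact.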
Hi Team, we are getting NetApp data into the main index, since NetApp only supports the default syslog ports. How can I create a props.conf to filter it by host (something like [host::*netapp*]) and send it to its own index? What should props.conf and transforms.conf look like for this requirement?
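A minimal sketch of host-based index routing, assuming an index named netapp already exists and these files live on the indexers (or on a heavy forwarder if one sits in front of them):

props.conf:
[host::*netapp*]
TRANSFORMS-netapp_route = route_netapp_to_index

transforms.conf:
[route_netapp_to_index]
# match every event from these hosts and rewrite the destination index
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = netapp

After a restart, events from hosts matching *netapp* should land in the netapp index; everything else keeps flowing to main.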
I have a basic SPL query using mstats, but I can't use trellis with it. Any ideas why I can't select "severity"?

| mstats count("mx.process.logs") as count WHERE "index"="murex_metrics" BY severity
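One hedged thing to try: the trellis layout wants a stats-style aggregation it can split on, and it sometimes doesn't recognize raw mstats output. Re-aggregating the same result may make severity selectable:

| mstats count("mx.process.logs") as count WHERE "index"="murex_metrics" BY severity
| stats sum(count) as count by severity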
Hello, we have an issue reindexing gz archive files, even when using crcSalt = <SOURCE> or crcSalt = REINDEXMPLEASE. We CAN'T go to each UF and clean the fishbucket.

UF (v7.1.4) Linux splunkd.log:

07-19-2022 18:19:09.129 +0200 INFO ArchiveProcessor - Handling file=/var/log/MAJ-OS.log-20220601.gz
07-19-2022 18:19:09.130 +0200 INFO ArchiveProcessor - reading path=/var/log/MAJ-OS.log-20220601.gz (seek=0 len=1356)
07-19-2022 18:19:09.281 +0200 INFO ArchiveProcessor - Archive with path="/var/log/MAJ-OS.log-20220601.gz" was already indexed as a non-archive, skipping.
07-19-2022 18:19:09.281 +0200 INFO ArchiveProcessor - Finished processing file '/var/log/MAJ-OS.log-20220601.gz', removing from stats

It also says "new tailer already processed path...".

inputs.conf app from deployment-apps (v8.2.2):

[monitor:///var/log/MAJ-OS.log*]
blacklist = archives
disabled = false
index = inf-servers
sourcetype = MAJ-OS
crcSalt = <SOURCE>

Thanks for your help.
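One hedged workaround, if cleaning each UF's fishbucket is off the table but a one-off command per host is acceptable: oneshot ingestion, which I believe skips the tailing processor's already-indexed check (verify on one host first). Index and sourcetype below just mirror the inputs.conf above.

$SPLUNK_HOME/bin/splunk add oneshot /var/log/MAJ-OS.log-20220601.gz -index inf-servers -sourcetype MAJ-OS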
Hello, I am writing a search for a dashboard panel. It shows the expected result when I run it as a search, but when it is added to the dashboard, the date-time format changes. This search gets all the requesters with more than 25 tickets created in the last 13 months.

index="tickets" host="TICKET_DATA" source="D:\\Tickets_Data_Export.csv" "Department Name"="ABC" earliest=-14mon@mon
| sort 0 -_time
| foreach * [ eval newFieldName=replace("<<FIELD>>", "\s+", ""), {newFieldName}='<<FIELD>>' ]
| fields - "* *", newFieldName
| eval TicketCategory=if(isnull(TicketCategory),"Non-Chargeable",TicketCategory)
| eval ID=substr(ID, len(ID)-5, 6)
| dedup ID
| eval tempDt=strptime(CreatedDate, "%Y-%m-%d %H:%M:%S")
| eval YYMM=strftime(tempDt,"%Y-%m")
| where tempDt>=relative_time(now(),"-13mon@mon") and tempDt<=relative_time(now(),"@mon")
| chart count as Count over RequesterName by YYMM where Count in top13
| addtotals
| where Total>=25
| sort 0 -Total

The output when run on the Search screen vs. the output when added as a panel in Dashboard Studio is attached here; the YYMM column format is different in the two cases. Can you please help me get the column names in YYYY-MM (and not the full date-and-time format) in the Dashboard Studio panel? The YYMM column displays YYYY-MM correctly when I select the "Parallel Coordinates" visualization, but that does not fit my use case. Thank you.
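A hedged workaround, on the assumption that Dashboard Studio is auto-detecting the YYYY-MM column names as timestamps and reformatting them: make the names not look like dates, e.g. by prefixing a literal before the chart.

| eval YYMM="M ".strftime(tempDt,"%Y-%m")
| chart count as Count over RequesterName by YYMM where Count in top13

Columns then come out as "M 2022-07" and should be left alone; drop or change the prefix to taste.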
I am trying to access the CrowdStrike Intel endpoint, where an OAuth2 token is needed. When I test asset connectivity, I get the error message below, which I believe is due to the length of the token string. How do I fix this error?

ERROR MESSAGE
Using provided token to authenticate
Got error: 401
2 actions failed
handle_action exception occurred. Error string: ''access_token''
Hello All, we are planning to upgrade the Phantom platform from version 4.10.7 to 5.0.1. If anyone has upgraded to 5.0.1, can you please let us know the known issues? Any links to go through would really help. Thanks, Sharada
I have 2 files in CSV format. I want to check whether a unique field from one file is present in the second file; if it is present, mark the value as true, else false. Kindly help with the command.
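A hedged sketch, assuming both files are uploaded as lookup table files and share a key column; file1.csv, file2.csv, and the column name id are all placeholders for your actual names:

| inputlookup file1.csv
| lookup file2.csv id OUTPUT id as matched_id
| eval present_in_file2=if(isnotnull(matched_id), "true", "false")
| fields - matched_id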
I am attempting to eval a new field from two other fields:

| eval 4XXError=if(metric_name="4XXError", statistic_value, null())
| eval 5XXError=if(metric_name="5XXError", statistic_value, null())
| eval total_errors='4XXError'+'5XXError'

When I come to stats them out:

| stats last(4XXError), last(5XXError), last(total_errors) by api_name, http_method, url

the total_errors column is just blank. Where am I going wrong? Also, why does 4XXError need to be single-quoted? Is it because it starts with a number?
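On the likely cause: each event carries only one metric_name, so one of the two fields is always null, and null + anything is null in eval, which blanks total_errors on every row. A hedged sketch of a fix is to aggregate first and treat missing values as 0. (And yes on the quoting: a name starting with a digit isn't a plain identifier, so eval needs single quotes to read it as a field reference rather than a number.)

| eval 4XXError=if(metric_name="4XXError", statistic_value, null())
| eval 5XXError=if(metric_name="5XXError", statistic_value, null())
| stats last(4XXError) as last_4xx, last(5XXError) as last_5xx by api_name, http_method, url
| eval total_errors=coalesce(last_4xx,0) + coalesce(last_5xx,0)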
Hi, I can't move buckets to the Splunk frozen archive; it gives errors:

07-19-2022 12:36:37.249 +0300 INFO DatabaseDirectoryManager - Getting size on disk: Unable to get size on disk for bucket id=kart~37~D99E1DD0-C0D8-4BE9-9A46-D4EDE5EFE178 path="/data/splunk/var/lib/splunk/kart/db/hot_v1_37" (This is usually harmless as we may be racing with a rename in BucketMover or the S2SFileReceiver thread, which should be obvious in log file; the previous WARN message about this path can safely be ignored.) caller=serialize_SizeOnDisk

I set file permissions (chown splunk:) on the frozen path, I increased ulimit to unlimited, and I can create files and folders on the frozen path as the splunk user. But as I said, Splunk can't move buckets to the frozen archive. Thank you.
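For what it's worth, the quoted line is INFO-level and describes itself as harmless, so the actual failure is likely logged elsewhere (worth searching splunkd.log for ERROR/WARN from BucketMover). A minimal indexes.conf sketch of the frozen setting, with the path as a placeholder, just to confirm the shape of the config:

[kart]
coldToFrozenDir = /data/splunk/frozen/kart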
Hi, we constantly have clients asking us to explain some of the metric names/types seen in the Metric Browser, and we still do not have precise answers. Hope someone can assist with this. Does anyone have a different explanation for the Obs, Sum and Count metrics seen in the Metric Browser for an app agent? This page from the documentation, for example, describes Obs as "Obs—The observed average of all data points seen for that interval", but if you are looking at one point on the graph for Calls Per Minute, then there can in reality be only one metric value, or one actual number. If the agent reports 73 calls per min as the metric for a point in time, then what would the other data points be reporting for that exact same point in time? This is not clear to me. Similar issue with the Count metric: if we are looking at the graph and seeing 1-min intervals of data (see graph below), then what does it mean if the Observed value is 73 but Count is 3? To me it would make sense if the count were always 0 or 1, meaning that at that one point in time the agent reported x number of calls per min as a value/metric, or it reported nothing, but this is not the case.
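A hedged reading of how these usually relate (worth verifying against your agent version): the agent can report several observations within one graph interval, and the visible point is an aggregate of them. Count is how many observations arrived, Sum is their total, and Obs is the average, i.e. Sum / Count. So a point showing Obs = 73 with Count = 3 would be consistent with:

Count = 3 observations in the interval
Sum   = 219
Obs   = Sum / Count = 219 / 3 = 73 calls per minute

That would explain why Count exceeds 1 even for a single visible point.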
Hi All, I have around 100+ lookups which get updated daily from indexed data using a macro and a saved search. I want to find out if any of these lookups are getting flushed so that their row count drops to 0. I created a lookup with all the lookup names and tried to pass the output to another lookup command and pull the stats, but this is not working. Any suggestion to fulfill this requirement would be appreciated. Thanks, Rajesh
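A hedged sketch using map to run an inputlookup per row of your name list; lookup_inventory.csv and its lookup_name column are assumptions standing in for your existing list, and maxsearches just needs to exceed the lookup count:

| inputlookup lookup_inventory.csv
| map maxsearches=200 search="| inputlookup $lookup_name$ | stats count as row_count | eval lookup_name=\"$lookup_name$\""
| where row_count=0

stats count always returns a row, so empty lookups surface as row_count=0 rather than vanishing from the results.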
Hello Team, every time the server reboots, my Machine Agent stops reporting and I have to start it manually. Can anyone please suggest the procedure to run it as a service on Linux, so that the Machine Agent starts automatically after a server restart?
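A minimal systemd unit sketch, assuming a typical install path of /opt/appdynamics/machine-agent and a service account named appdynamics; adjust both to your environment (some agent bundles also ship their own service scripts, which are worth checking first):

# /etc/systemd/system/appdynamics-machine-agent.service
[Unit]
Description=AppDynamics Machine Agent
After=network.target

[Service]
Type=simple
User=appdynamics
ExecStart=/opt/appdynamics/machine-agent/bin/machine-agent
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable it once: sudo systemctl daemon-reload && sudo systemctl enable --now appdynamics-machine-agent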
Hi, I am new to AppDynamics and need help with creating health rules, policies, alerts, and notifications. Please let me know of any documents or PDFs which can help me do this. Thanks.
Hi All, I am writing a query with the following:

index=a0_payservutil_generic_app_audit_npd "kubernetes.labels.release"="mms-au" MNDT
| rex field=log "eventName=\"*(?<eventName>[^,\"\s]+)"
| rex field=log "serviceName=\"*(?<serviceName>[^\"]+)"
| rex field=log "elapsed=\"*(?<elapsed>[^,\"\s]+)"
| search eventName="ACCOUNT_DETAIL" AND serviceName="Alias "
| eval newtime=round((elapsed/1000),2)
| stats count(newtime) AS TotalNoOfEvents, count(eval(newtime>=1)) AS SLACrossedEvents
| eval perc=((SLACrossedEvents/TotalNoOfEvents)*100)
| where perc>1
| stats avg(newtime) as avg
| eval calc=if(newtime>=1,avg,0)
| eval eventName="ACCOUNT_DETAIL"
| eval serviceName="Alias"
| fields eventName serviceName TotalNoOfEvents SLACrossedEvents perc calc

I need to calculate the average time of the events which crossed the SLA. For example, if 2 events crossed the SLA (elapsed time greater than 1 sec), and one event took 3 sec while the other took 2 sec, then we should display 2.5 as the average for that period. I am not able to fetch it. Can you please help me with this? Thanks in advance.
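The likely problem: after the first stats, newtime no longer exists, so the later stats avg(newtime) and the if(newtime>=1, ...) have nothing to work with. A hedged rewrite that computes the SLA-only average in the same stats pass (avg ignores nulls, so only events with newtime>=1 contribute, giving 2.5 for values 3 and 2):

... same base search, rex and eval newtime as above ...
| stats count(newtime) as TotalNoOfEvents,
        count(eval(newtime>=1)) as SLACrossedEvents,
        avg(eval(if(newtime>=1, newtime, null()))) as AvgSLACrossedTime
| eval perc=round((SLACrossedEvents/TotalNoOfEvents)*100,2)
| where perc>1
| eval eventName="ACCOUNT_DETAIL", serviceName="Alias"
| fields eventName serviceName TotalNoOfEvents SLACrossedEvents perc AvgSLACrossedTime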
Hello, I have the following table: it keeps the _time, server, CPU load, and the predicted CPU load. The last column, PREDICTION, tells whether the given value is a real reading or one predicted by ML algorithms (important: no ML Toolkit involved; the results come from an external input). Now, I would like to present all this in a dashboard in a decent way, kind of like the smartforecastviz macro of the ML Toolkit does: the real CPU for a server with a normal line and the prediction with a dotted one, in the same color, and possibly all servers in the same line chart. No additional fancy things needed. I guess what would need to happen is for the chart line type to change depending on the last column, PREDICTION. Is this doable in any way? Kind Regards, Kamil
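A hedged sketch of one approach, assuming a Simple XML dashboard: split each server into two series (real vs. predicted), then dash the predicted ones with the charting.fieldDashStyles option. The PREDICTION=1 encoding, the cpu_load field name, and the span are all assumptions, and I'm not certain wildcards are honored in fieldDashStyles; if not, each predicted series may need to be listed explicitly.

| eval series=server . if(PREDICTION=1, " predicted", "")
| timechart span=5m avg(cpu_load) by series

and in the panel's XML:

<option name="charting.fieldDashStyles">{"* predicted": "shortDash"}</option>

Getting matching colors per real/predicted pair would likewise take charting.fieldColors with explicit entries.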
Hello, we want to send Prometheus metrics to Splunk Enterprise and monitor them there, based on our requirements. Monitoring is possible if we use Splunk Observability Cloud, with Prometheus exporters sent via OpenTelemetry Collectors. But I want the OpenTelemetry data (Prometheus metrics) in Splunk Enterprise, as we have a Splunk Enterprise licensed version. So, can Prometheus metrics be monitored in Splunk Enterprise without Splunk Observability Cloud? If so, what are the plugins and the ways to configure them? Any open suggestions are also welcome.
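One hedged route, assuming an HEC token and a metrics index already exist on the Splunk Enterprise side: the contrib distribution of the OpenTelemetry Collector can scrape Prometheus endpoints and ship them to HEC via its splunk_hec exporter. The scrape target, endpoint URL, and index name below are placeholders.

# otel collector config (opentelemetry-collector-contrib build)
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "my-app"
          static_configs:
            - targets: ["localhost:9090"]

exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "https://splunk.example.com:8088/services/collector"
    index: "prom_metrics"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [splunk_hec]

The target index should be created as a metrics index so the data can be queried with mstats and the Analytics workspace.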