All Topics


Hi, I'm working with the Splunk Infrastructure Monitoring Add-on, collecting information from Splunk Observability Suite (aka SignalFx) in ITSI, using the "sim flow" command. I'm trying to build KPI base searches using this command and the information that the add-on is collecting. When I execute the following query:

| sim flow query="data('cpu.utilization', filter=filter('host', '*')).publish()"

some of the resulting events (for certain hosts) have a field AWSUniqueId that I'd like to obtain. For other hosts this field doesn't exist, so it doesn't appear in the event. I've therefore tried the following simple query:

| sim flow query="data('cpu.utilization', filter=filter('host', '*')).publish()"
| chart values(AWSUniqueId) as AWSUniqueId by host

Sometimes I receive all the information (with the values correctly correlated), and other times the whole AWSUniqueId column comes back empty, even though the field is there when I check the events. It seems strange that the same query sometimes returns results and sometimes doesn't. Has anybody faced this same issue? Could it be a bug in the add-on? Or is what I'm trying to build not possible with this data? Thanks in advance! Best Regards, Raquel
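One way to narrow down whether the field is being extracted at all (a hedged diagnostic sketch; it assumes AWSUniqueId is a search-time extracted field on the sim flow output):

```spl
| sim flow query="data('cpu.utilization', filter=filter('host', '*')).publish()"
| fillnull value="MISSING" AWSUniqueId
| stats values(AWSUniqueId) as AWSUniqueId, count by host
```

If the same hosts show "MISSING" intermittently across runs, the intermittent extraction of the field (rather than the chart command) is the likely culprit.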
If certain indexes go down and stop reporting over a 24-hour to 7-day period, how do you run a search to easily identify which ones have gone down? Currently I run two separate searches, filtered by 24 hours / 7 days:

| tstats dc(host) where index="name" by index | fields dc(host)

This lists all of the indexes currently reporting in, and then I have to search through the data to find the result. I would like to optimise it by using one search to see these results.
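A possible single-search approach (a sketch; the 24-hour threshold is an example value) is to compare the latest event time per index against now:

```spl
| tstats latest(_time) as last_event where index=* by index
| eval age_hours=round((now()-last_event)/3600, 1)
| where age_hours > 24
| convert ctime(last_event)
```

Indexes whose most recent event is older than the threshold are the ones that have stopped reporting, all in one search.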
Hi All, can you let me know the detailed prerequisites required for adding a peer to the cluster? Also, please help me with the prerequisites and post-upgrade steps required for the planned upgrade from version 8.0.5 to 8.1.2.
Dear all,

The current situation is that I uploaded an inventory table to Splunk, like the one below:

Hostname   IP
------------------
hostname1  6.6.6.6
hostname2  7.7.7.7

I would like to check the log collection status for each device on the list (e.g. is the device sending logs to Splunk, and what is the last log time) and produce a list like below:

Hostname   IP       Status  Last_log_time
------------------
hostname1  6.6.6.6  Yes     2021-03-03 00:00:00
hostname2  7.7.7.7  No      N/A

I tried to use "| metadata" but it seems the metadata is not accurate. May I have some ideas on how to do this task?
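One sketch using tstats joined against the lookup (assuming the lookup file is named inventory.csv, a hypothetical name, and that its Hostname values match the indexed host field exactly):

```spl
| inputlookup inventory.csv
| join type=left Hostname
    [| tstats latest(_time) as last_epoch where index=* by host
     | rename host as Hostname]
| eval Status=if(isnotnull(last_epoch), "Yes", "No")
| eval Last_log_time=if(isnotnull(last_epoch), strftime(last_epoch, "%Y-%m-%d %H:%M:%S"), "N/A")
| table Hostname IP Status Last_log_time
```

tstats latest(_time) reads index-time metadata rather than raw events, so it is usually both faster and more reliable than | metadata for "last seen" questions.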
Splunk recommends that before you upgrade a distributed environment, you confirm that your Splunk apps work on the version of Splunk Enterprise you want to upgrade to. My question is: what could go wrong if some apps are missed, the upgrade happens, and those apps turn out not to be compatible with 8.x? Is there any chance we can upgrade apps or add-ons after upgrading to the desired version, or what are the options after the upgrade? Has anyone faced a similar situation? Thank you in advance.
I have the following lookup and have to extract only the part in bold, which is my filename.

inputLookupname - Trans.log

Tue Feb 23 11:12:54 IST 2021 - trans_file.sh zouttime.gcaswb8o.600 starts 202102231112: /satn/PRY/qoutsa/zpittime.gcaswb8o.600.20210223111125 was moved to INPUT
Tue Feb 23 11:12:54 IST 2021 - trans_file.sh zxtytime.glk1a03o.600 starts 202102231112: /satn/PRY/qoutsa/zpittime.gov1a03o.600.20210223105623 was moved to INPUT

How do I capture only the filename, which is in bold?
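Assuming the filename wanted is the token between trans_file.sh and starts (e.g. zouttime.gcaswb8o.600 — the bold formatting does not survive in this plain-text copy, so this is an assumption), a rex sketch:

```spl
| rex field=_raw "trans_file\.sh\s+(?<filename>\S+)\s+starts"
```

If instead the full moved path is needed, anchoring on the colon works the same way: | rex field=_raw ":\s+(?<filename>\S+)\s+was moved"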
I want to ignore the actual file name in my exception events so I can group the exceptions. For example, a regex on the event below should extract only "Error File not found !!!" and ignore the actual filename in between:

Error File abracadabra.gz not found !!!

Can you please advise on how to exclude this word in between the fixed format of words? Thank you.
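One sketch using rex in sed mode to normalise the message before grouping (it assumes the filename is always a single whitespace-free token between "File" and "not found"):

```spl
| rex mode=sed field=_raw "s/Error File \S+ not found/Error File not found/"
| stats count by _raw
```

To keep _raw intact, the same idea works as a separate grouping field: | eval exception=replace(_raw, "Error File \S+ not found", "Error File not found")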
Hi, I have a problem with coloring my table. The picture shows my settings for coloring specific fields. However, the 85 percent refers to the maximum of each single page, so on every page there is at least one orange value, even if the values on that page are very low compared to the global maximum. I would like to relate the 85 percent to the global maximum. Does anybody know how I could do that?
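The built-in color ranges only see the rendered page, but a percent-of-global-maximum field can be precomputed in the search and colored instead (a sketch; "value" is a placeholder for the actual field name):

```spl
| eventstats max(value) as global_max
| eval pct_of_global=round(value/global_max*100, 1)
```

eventstats computes the maximum over all result rows, so coloring rules applied to pct_of_global (e.g. orange above 85) stay consistent across every page.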
We have the log event rows below:

correlationKey=abc msg="create cache for 123"
correlationKey=abc "read cache for 123"
correlationKey=mno "create cache for 456"
correlationKey=mno "read cache for 456"
correlationKey=xyz "read cache for 123"

From the data, we can see that correlationKeys abc and mno have both create and read events. But correlationKey xyz has no "create cache" log, only "read cache". We need to find all correlationKey values without a "create cache for" log entry. (abc and mno do not qualify; only xyz qualifies.) Appreciate your great help! - ET
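A sketch that flags each event and keeps only keys that never had a create (the base search is a placeholder):

```spl
index=... ("create cache for" OR "read cache for")
| eval has_create=if(searchmatch("create cache for"), 1, 0)
| stats max(has_create) as has_create by correlationKey
| where has_create=0
```

max(has_create) is 1 for any key with at least one create event, so the final where keeps only keys like xyz.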
Hello,

I have an issue with the X axis of my timechart. As you can see in my XML below, I use a scheduled search in order to display the timechart over the last 30 days. Given that today is the 3rd of March, Splunk displays the data between the 3rd of February and the 3rd of March. That in itself is not a problem for me; the real problem is that on my X axis the 1st and 2nd of February appear empty, because Splunk calculates 30 days even though there are only 28 days in February (please see my screenshot). So, what is the solution:

1) to display a line chart over a 30-day period (so between the 1st of February and the 2nd of March), or
2) to avoid having the 1st and 2nd of February displayed, given that Splunk displays the line chart between the 3rd of February and the 3rd of March?

Thanks

<query>| loadjob savedsearch="admin:SA_WXCV_sh:Performances - Boot trend" | timechart span=1d eval(round(avg(BootTime)/1000,0)) as "Boot time" | eventstats avg("Boot time") as Average | eval Average=round(Average,0)</query>
<earliest>-30d@d</earliest>
<latest>now</latest>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">-45</option>
<option name="charting.axisTitleX.text">Date</option>
<option name="charting.axisTitleY.text">Boot time (Average in seconds)</option>
<option name="charting.chart">line</option>
<option name="charting.chart.showDataLabels">all</option>
<option name="charting.drilldown">none</option>
<option name="charting.fontColor">#000000</option>
<option name="height">400</option>
<option name="refresh.display">progressbar</option>
<option name="charting.chart.overlayFields">Average</option>
<option name="charting.fieldColors">{"Boot time": 0x639BF1, "Average":0xFF5A09}</option>
<option name="charting.fieldDashStyles">{"Boot time":"solid"}</option>
<option name="charting.lineWidth">4px</option>
</chart>

Screenshot: https://www.cjoint.com/c/KCdgGtebT5h
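For option 2, one sketch is to drop the leading empty buckets after the timechart (this assumes days with no data produce null values for the series; note the single quotes around 'Boot time' in eval/where, required because the field name contains a space):

```spl
| loadjob savedsearch="admin:SA_WXCV_sh:Performances - Boot trend"
| timechart span=1d eval(round(avg(BootTime)/1000,0)) as "Boot time"
| where isnotnull('Boot time')
| eventstats avg("Boot time") as Average
| eval Average=round(Average,0)
```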
Hello, in many Linux versions the netstat command is now deprecated. This creates a problem when using the netstat sourcetype within the Linux/Unix Add-on in Splunk. Is there a possibility to use another command, e.g. ss instead of netstat, for this sourcetype in future? Many thanks in advance. Yours sincerely, Corina Kolb
Below are the events in Splunk. I am trying to create a "% Free Space" table for all three drives (C:, D:, E:).

03/02/2021 23:07:18.422 -0600 collection=LogicalDisk ... counter="% Free Space" instance=D: Value=98.36774827925271
host = YYYYYYYY  source = Perfmon:LogicalDisk  sourcetype = Perfmon:LogicalDisk
======================
03/02/2021 23:07:18.422 -0600 collection=LogicalDisk ... counter="% Free Space" instance=C: Value=43.369467322069944
host = YYYYYYY  source = Perfmon:LogicalDisk  sourcetype = Perfmon:LogicalDisk
========================
03/02/2021 23:07:18.949 -0600 collection=LogicalDisk ... counter="% Free Space" instance=E: Value=71.4197915987671
host = YYYYYYYY  source = Perfmon:LogicalDisk  sourcetype = Perfmon:LogicalDisk
===========================
03/02/2021 23:07:18.949 -0600 collection=LogicalDisk ... counter="% Free Space" instance=D: Value=59.03638151425762
host = ZZZZZZZZZZ  source = Perfmon:LogicalDisk  sourcetype = Perfmon:LogicalDisk

The Splunk search below is not working as expected, and I also need the Value field rounded (currently I am getting long decimals). I am looking for drive free space in table format for each host that I added in the search. Please help:

index=perfmon host=XXXXXXX OR host=YYYYYY OR host=ZZZZZZZZ sourcetype="Perfmon:LogicalDisk" counter="% Free Space" instance="C:" OR instance="D:" OR instance="E:" Value
| sort counter, Value
| stats values(Value), values(instance), values(host)
| table values(host) values(instance) values(Value)
| rename values(host) as Hostname, values(instance) as drive, values(Value) as Totalfree%
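A sketch that groups the latest reading per host and drive and rounds the value (note that the OR groups need parentheses, which the original search is missing — without them the base search does not filter as intended):

```spl
index=perfmon sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
    (host=XXXXXXX OR host=YYYYYY OR host=ZZZZZZZZ)
    (instance="C:" OR instance="D:" OR instance="E:")
| eval Value=round(Value, 2)
| stats latest(Value) as "Free %" by host, instance
| rename host as Hostname, instance as Drive
```

Using stats ... by host, instance preserves the host-to-drive-to-value correlation that separate values() columns lose.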
Hi, I have a JSON structure like this:

{
  "zip": 67452,
  "location": "NY",
  "author": {
    "book1": { "price": 12 },
    "book2": { "price": 11 },
    "book3": { "price": 124 },
    "book4": { "price": 122 }
  }
}

I am trying to group and stats the nested structure to get the average price with the book* key as the grouping field. I was able to get the counts accurately using:

| spath output=bp path=author.book1.price
| stats avg(bp) as avgPrice by location zip
| stats list(zip) list(avgPrice) by location

Now the next level of grouping is by book*, so that the query need not be run multiple times with different book* keys. I tried rex but the field name is still constant. Any insights will be really helpful. Also, is there a way to get table lines in the stats result where it is not grouped, for better readability?

Update: Desired result:

BookName  location   zip    avg(price)
Book1     Newyork    64673  3433
                     53421  8678
          NewJersey  35362  4435
                     34235  2425
Book2     Newyork    64673  3433
                     53421  8678
          NewJersey  35362  4435
                     42352  2425
          Arizona    25252  2525

I am able to get the table down to location using nested stats, but because book1 is a "key" rather than a value, extraction is difficult.
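One sketch that pivots the book keys into a field value using rename plus untable (hedged: it assumes automatic spath extraction produces fields named author.book1.price etc., and the "|" separator assumes location values contain no pipe character):

```spl
index=...
| spath
| eval row_id=location."|".zip
| rename author.*.price as price_*
| table row_id price_*
| untable row_id BookName price
| eval BookName=replace(BookName, "^price_", ""),
       location=mvindex(split(row_id, "|"), 0),
       zip=mvindex(split(row_id, "|"), 1)
| stats avg(price) as avgPrice by BookName, location, zip
| sort BookName, location
```

untable turns the wide price_book1..price_book4 columns into one BookName/price pair per row, which makes the JSON key available as an ordinary field value to group by.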
Hi All, I have created a scheduled report (it is not accelerated or summary indexed), and the event count is populated into another index via the collect command:

index=xxxx host=yyyyy | stats count | addinfo | eval _time = info_min_time | collect index=xyz_summary sourcetype="Trip Plans Other Apps" addTime=T

Please note:
- The index name contains "summary", but this index is not a summary index.
- The report runs at 7 o'clock in the morning for the previous day's data, and _time is overwritten to the previous date so that the count is logged as the previous day's data.

The issue we are facing is that when we run the same query for that time period after a few days (say a month), we observe that the value inside index=xyz_summary (i.e. the count value) is greater than the result of running the original query. Interestingly, the results were the same when I ran both queries initially, a month before. Any suggestions on why this is happening (is it due to the collect command)? What modification can be made so that we don't get a mismatch? Thanks in advance. Kind regards, AG.
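One common cause of inflated collected counts is the scheduled search running (or being re-run) more than once for the same day, writing duplicate events. A diagnostic sketch to check for duplicates in the collected index:

```spl
index=xyz_summary sourcetype="Trip Plans Other Apps"
| stats count by _time, _raw
| where count > 1
```

If duplicates appear, deduplicating at search time (| dedup _time _raw) or guarding the schedule against re-runs would remove the mismatch; the other direction to check is whether the source index's older events have aged out, which lowers the original query's count instead.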
Hi, please help. I'm trying to use the data input of snmp_ta, but when I configured it I received this message:

2021-03-03 10:16:38,974 ERROR Failed to register transport and run dispatcher: bind() for ('192.168.187.176', 1622) failed: [WinError 10049] The requested address is not valid in its context
caused by <class 'OSError'>: [WinError 10049] The requested address is not valid in its context
stanza: snmp://SampleSNMPServer

Please help.
Currently we are having issues with our scan data coming in to our indexer, so we have to use CSVs for scan data. The data from the CSVs we are uploading into Splunk looks like this:

Scan Date                   Vuln  Blah
Feb 11, 2021 11:30:29 EST   4     15
Feb 18, 2021 11:30:29 EST   10    15

I want to pull only the newest scan data, in this case "Feb 18, 2021 11:30:29 EST". It doesn't appear strptime can run on this date format because of the EST at the end. I know substr exists, but it appears it only works on field names, not field values. Any ideas? Thanks
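A sketch that strips the trailing zone abbreviation before parsing (it assumes every row uses the same timezone, so relative ordering is unaffected; the field name 'Scan Date' is taken from the CSV header):

```spl
| eval scan_epoch=strptime(replace('Scan Date', "\s+\w+$", ""), "%b %d, %Y %H:%M:%S")
| sort - scan_epoch
| head 1
```

Because strptime support for %Z zone abbreviations is unreliable, removing the trailing "EST" and parsing the remainder is a common workaround.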
I installed Splunk ES v5.3.1 on Enterprise v7.3.7.1, and then I want to open "Incident Review". However, the page keeps loading and never displays. I can see the other pages, like "Glass Table". Please let me know how to fix this.
Hello All,

I am not so familiar with regex, but looking at some old queries I have been able to build one for my need. I am looking for help to understand how this works in terms of regular expressions and Splunk rex syntax. The regex I am using is:

| rex field=_raw message="(?<message>.*).request"

for the event text:

message=abc ff request-id

where I am trying to extract anything after "=" until "request-id" (there could be spaces as well). I think "<message>" here is the field name I want to assign, and the wildcard character "*" indicates everything after "message=". But I don't understand:

- the use of "?" — is this part of Splunk rex syntax, or does it signify anything and everything after "message=", i.e. working along with "*"?
- the use of the parentheses here — are they indicating the section I am trying to parse?
- the dot "." after "<message>" — is this Splunk syntax?
- the dot "." after the parentheses — is this denoting/delimiting the string that comes after the parsed section?
- the most confusing part, the use of quotes.

What would the regex be if the text were "message abc ff request-id" and I wanted to parse anything between "message" and "request"?
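For reference: in Splunk's PCRE-based rex, (?<message>...) is a named capture group — the parentheses delimit the group and the ?<message> prefix names it, so rex creates a field called message from whatever matches inside. The dot is the regex "any character" metacharacter, so .* means "any characters, any number of times", and the surrounding quotes simply enclose the whole regex as the rex command's argument. A runnable sketch (the sample event is hypothetical):

```spl
| makeresults
| eval _raw="message=abc ff request-id"
| rex field=_raw "message=(?<message>.*?)\s*request"
| table message
```

Here .*? is the non-greedy variant, stopping at the first "request". For the space-separated case "message abc ff request-id", "message\s+(?<message>.*?)\s*request" would capture "abc ff".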
I have events that contain a userId field, and I would like to make a line chart to visualize the average count per day of that field. How can I do this? So far I have tried the following, and a couple of other arrangements, but nothing is working:

index=foo | stats count by userId, _time | timechart avg(count)

(I am using Splunk Enterprise 6.5.1, by the way.)
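One likely issue is that stats count by userId, _time buckets by each raw timestamp; binning _time first gives one count per user per day. A sketch of one interpretation (average event count per user, per day):

```spl
index=foo
| bin _time span=1d
| stats count by userId, _time
| timechart span=1d avg(count) as "avg events per user"
```

If "average count" instead means distinct users per day, this is simpler: index=foo | timechart span=1d dc(userId) as users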
Is there a way to validate default date parsing against an ISO8601-style timestamp ( 2012-11-02'T'14:34:02,781-07:00 )? I tried:

| makeresults | eval _raw="2012-11-02'T'14:34:02,781-07:00 foo=bar"

and the timestamp is not being parsed. I also tried, with no success, setting sourcetype=log4j. Any pointers on the syntax to make this work? There are a number of threads around this without complete approaches: https://community.splunk.com/t5/forums/searchpage/tab/message?advanced=false&allow_punctuation=false&filter=location&location=forum-board:getting-data-in&q=ISO8601
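Note that | makeresults | eval _raw=... never exercises index-time timestamp extraction — _time is already set by makeresults, and timestamp recognition happens at ingest. A props.conf sketch for this format (the sourcetype name is hypothetical, and the handling of the literal quoted 'T' and of %3N / the offset should be verified against your Splunk version):

```
[my_iso8601_quoted_t]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d'T'%H:%M:%S,%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 35
```

For a quick search-time check of the format string itself, strptime can be tested interactively: | makeresults | eval t=strptime("2012-11-02'T'14:34:02,781-07:00", "%Y-%m-%d'T'%H:%M:%S,%3N%z")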