All Posts



I'm getting an error when I use the earliest and latest keywords in my search query, even though the time picker is set to match the values used in the query. It shows "Unknown search command 'earliest'" when I try to use them. I'm on Splunk Enterprise.

This is my query:

index=sample ServiceName="cet.prd.*" |  earliest=-3d latest=now()
@Bazza_12 Could you please clarify the part "append your site certs"? Is this referring to the contents of "splunk_ta_o365/lib/certifi/cacert.pem"?
Hi @bowesmana, it works great, as expected, but is there any way to flag or highlight the differing values? Because three fields are compared, I currently need to check both lookups to find the missing info.
@Akmal57 Something like this:

| inputlookup lookup_A
| eval origin="A"
| inputlookup append=t lookup_B
| eval origin=coalesce(origin, "B")
| stats dc(origin) as originCount values(origin) as origins by Hostname IP OS
| where originCount=1

This loads both lookups and sets an origin field recording where each row came from, then joins the two together with stats and keeps only the rows that appear in a single origin.
@rikinet Just make the chart a stacked chart; since you only have a single value per time, it will show one or the other. Here's an example:

<dashboard>
  <label>colourgreen</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>| makeresults count=20
| streamstats c
| eval _time=now() - (c * 60)
| eval digital_value=if (random() % 2 == 1, 0.1, 1)
| eval analog_value=mvindex(split("0,100,500,1000,5000,10000",","), random() % 6)
| fields - c
| eval digital_value_red = if(digital_value=0.1, 0.1, null())
| eval digital_value_green = if(digital_value=1, 1, null())
| fields - digital_value</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">1</option>
        <option name="charting.axisY2.scale">log</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.overlayFields">analog_value</option>
        <option name="charting.chart.showDataLabels">none</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.fieldColors">{digital_value_red: 0xFF0000, digital_value_green: 0x00FF00}</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.lineWidth">2</option>
        <option name="height">406</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</dashboard>
Hi, I have two lookups: lookup A and lookup B. Lookup A is kept updated by a Splunk query, and lookup B is maintained manually. Both lookups contain the same fields: Hostname, IP, and OS. I need to compare the two lookups and surface the non-matching Hostname and IP values. Please assist me with this. Thank you.
Is the forwarder logging any errors about failing to connect to the indexers?
It probably means the Splunkbase page hasn't been updated yet.
Note that with Splunk, there are often multiple ways to achieve the same goal. For example, instead of streamstats you could use

| accum gbu

or

| accum gbu as cum_gbu

In the long run, streamstats is the more useful command (and takes more time to get your head around), as it supports split-by clauses, whereas accum does not.
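To illustrate the split-by point, here is a minimal self-contained sketch (the host and gbu fields are fabricated with makeresults purely for demonstration): streamstats can keep a separate running total per host, which accum cannot.

```
| makeresults count=6
| streamstats count as n
| eval host=if(n % 2 == 0, "hostA", "hostB"), gbu=n
| streamstats sum(gbu) as cum_gbu by host
```

Each host gets its own independent cumulative sum of gbu, rather than one global running total.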
Exactly what I was looking for. I haven't come across the streamstats term yet so this is great. Thank you!
So you want your usage to show a cumulative value rather than the value for the specific hour? If so, just add this to the end:

| streamstats sum(gbu) as gbu

which will accumulate the hourly values and replace them with a running total. If you want both values, add this to the end instead:

| streamstats sum(gbu) as cum_gbu

which will create a new field with the cumulative total.
Thanks for this. The results of this don't seem to "Add up" every hour. I was hoping each hour the number would be greater, but it seems to be giving different numbers, if that makes sense.
Sure you can, just use timechart, like this:

index=_internal source=*license_usage.log type=Usage pool=*
| timechart span=1h sum(b) as gbu
| eval gbu=round(gbu/1024/1024/1024,3)
I have a search that gives me the total license usage in GB for a given time range:

index=_internal source=*license_usage.log type=Usage pool=*
| stats sum(b) as bu
| eval gbu=round(bu/1024/1024/1024,3)
| fields gbu

I'd like a timechart/graph showing the total for each hour of a given day. Is this possible with timechart?
So to clarify: we have a distributed environment with a cluster of indexers managed by a Cluster Master. The Search Heads are configured as standalone search heads. The search peers are not configured in distsearch.conf on the search heads; they just connect to the Cluster Master for the list of indexers.

We attempted to remove the peers from the list of Search Peers under Distributed Search in Settings and got an error stating, "Cannot remove peer... This peer is a part of a cluster." As you would expect in a clustered environment. We were able to delete the peers from the Cluster Master, but deleting them there is what causes the Search Heads to complain about losing connection to search peers, as it appears the Cluster Master doesn't inform the Search Heads about the change in the search peer list.

We were also able to find a window with no scheduled searches running in which we could restart the search heads. Restarting a search head caused it to reload the list of search peers from the Cluster Master, and the error stopped.

Is there another way to force the search heads to refresh this cached list of search peers from the Cluster Master without restarting them?
Something like below, where fields A, count, B, and C are existing multivalue fields that are already calculated, but additionally fields E and F are divided based on domain (the pre-calculation we did in the last query), with the domain signifying their unique combination of values.
Compatibility
This is the compatibility for the latest version: Splunk Enterprise, Splunk Cloud, Splunk IT Service Intelligence
Platform Version: 9.0, 8.2
CIM Version: 4.x
Yes, I will create a support request as well. As a quick and dirty workaround to get at least the same old columns from Rocky 9, I can use the ID_LIKE field from /etc/os-release:

if [ -e $OS_FILE ] && ( ( (awk -F'=' '/ID_LIKE=/ {print $2}' $OS_FILE | grep -q rhel) ...
That is a Splunk-supported add-on so you can submit a support request (if you have entitlement) for RHEL9 support.
Hi @inventsekar, thanks for the details. @Dhivakarpn and I are working together. Indexer version: 9.0.x