All Topics

I have a search that leverages a KV store lookup: it takes the src IP and checks the lookup to see which core, context, and zone the IP is associated with:

| lookup zone_lookup cidr_range as src | fillnull value=NULL | search context!="" core!="" zone!="" | eval core=coalesce(core,"null") | eval context=coalesce(context,"null") | eval zone=coalesce(zone,"null")

Unfortunately, we do not have a ROA for this info, so we have populated the KV store lookup from various sources as best we can, but sometimes we'll see src IPs with no zone listed. I do have a table I keep that allows me to fill in those blanks. It's a simple table, as follows:

cidr_range    zone
x.x.x.x/16    zone1
y.y.y.y/24    zone2
z.z.z.z/24    zone3

I'd like to create a search that appends my lookup with this data. How would I write that search? Thx
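One way to approach this (a sketch, assuming the supplemental table has been saved as a CSV lookup — the name supplemental_zones.csv is hypothetical) is to merge the two sources and write the result back to the KV store lookup:

```spl
| inputlookup zone_lookup
| append [| inputlookup supplemental_zones.csv]
| dedup cidr_range
| outputlookup zone_lookup
```

Note that dedup keeps the first occurrence of each cidr_range, so existing KV store entries win over the supplemental rows; swap the order of the inputlookup and the append subsearch if the supplemental data should take precedence.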
Hi Splunkers, There is one field that is common to 2 indexes. Using that field, how can I correlate the events and make a table out of them without using the join, append, or appendpipe commands? Those commands take a lot of time. Please refer to the pictures below. Thanks & regards
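The usual join-free pattern (a sketch — indexA, indexB, common_field, and the values() fields are placeholders for your actual names) is to search both indexes at once and let stats merge events that share the common field:

```spl
(index=indexA) OR (index=indexB)
| stats values(field_from_A) as field_from_A values(field_from_B) as field_from_B by common_field
| where isnotnull(field_from_A) AND isnotnull(field_from_B)
```

Because stats runs distributed on the indexers, this typically scales far better than join or append, which pull subsearch results onto the search head and are subject to subsearch limits.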
Hi, I have to scale down my search head cluster to a standalone search head, but there is no documentation anywhere. Is it possible? What steps should I perform?
Hi everyone, I have a Splunk universal forwarder installed on a Linux machine and configured some log files to forward to the indexer, but I am getting the error below and the data is not getting ingested into Splunk. Input type: File. Error:

0100 ERROR Metrics - Metric with name thruput:thruput already registered
0100 ERROR Metrics - Metric with name thruput:idxSummary already registered
Hi, I have implemented the Splunk Add-on for Microsoft Cloud Services, and while I can get data in, the field names are very difficult to make use of as they are prefixed with body.fieldname. Any ideas on how to make this more usable?
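One common workaround (a sketch — the index and sourcetype filter are placeholders for your actual data) is to strip the body. prefix at search time with a wildcard rename:

```spl
index=your_mscs_index sourcetype=mscs:*
| rename body.* AS *
```

For a permanent fix, FIELDALIAS entries in props.conf on the search head would achieve the same result without editing every search.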
I changed the ownership with chown -R root:root /opt/splunkforwarder. After that, I started Splunk as the root user, but once startup finished, the owner:group reverted back to splunk:splunk. The same thing happens even after restarting Splunk and restarting the OS. Why does it revert back to splunk:splunk? I want to operate with root:root as the owner:group under /opt/splunkforwarder. https://docs.splunk.com/Documentation/Splunk/9.0.1/ReleaseNotes/KnownIssues#Universal_forwarder_issues
Hi, I want to get the top 5 indexes consuming the most license, separated by date, for the last 7 days in a single query:

16th - top 5 indexes - GB
17th - top 5 indexes - GB
18th - top 5 indexes - GB
...

Please help me with the above query.
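A sketch against the license usage log (assumes access to index=_internal on the license manager; idx and b are the standard fields in license_usage.log — verify against your deployment):

```spl
index=_internal source=*license_usage.log type="Usage" earliest=-7d@d latest=@d
| bin _time span=1d
| stats sum(b) as bytes by _time idx
| eval GB=round(bytes/1024/1024/1024,2)
| sort 0 _time -GB
| streamstats count as rank by _time
| where rank<=5
| eval date=strftime(_time,"%d %b")
| table date idx GB
```

The streamstats ranks indexes within each day, so the where clause keeps only the top 5 per date.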
Is there a method of tracking a service ceiling over the long term? I have daily transactions that are being summarized over a suitable interval and written to a summary index. I wish to keep a maximum of the transaction fields (count, success, by category, etc.) for hourly and daily intervals, and have the maximum of the maximums, or peak(maximum), for each of those transaction fields represent the service ceiling, i.e. the maximum observed values for those fields. The maximum observed value will later be used to calculate utilization of the service. I am thinking the answer is probably a daily report that consumes the summary index, calculates the daily maximum observed values, then writes the daily maximums to a summary-index stash. I am having trouble approaching the problem and am looking for ideas and/or guidance. Currently I am playing with streamstats and a window:

| search ...
| bin span=600s _time
| streamstats window=1 current=f sum(successful) AS previous_successful_transactions
| streamstats sum(successful) as successful_transactions
| fillnull value=0 previous_successful_transactions successful_transactions peak_transactions
| eval peak_transactions=if(successful_transactions>previous_successful_transactions, successful_transactions, peak_transactions)
| chart max(previous_successful_transactions) as previous_successful_transactions max(peak_transactions) as peak_transactions by _time
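The daily-report idea could be sketched like this (the source, field names, and the summary_daily_peaks target index are illustrative placeholders). Run it once a day over yesterday's summary data:

```spl
index=summary source="hourly_transactions" earliest=-1d@d latest=@d
| bin _time span=1d
| stats max(count) as peak_count max(successful) as peak_successful by _time category
| collect index=summary_daily_peaks
```

The long-term ceiling then becomes a simple max() over the peaks index, which stays small no matter how far back the utilization calculation needs to look.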
Hi, I have a basic question about the append limit, which is 50,000 events max. Does it mean that only the first 50,000 events sorted by timestamp are displayed (from newest to oldest)? In some discussions it seems this limit could be overridden with

| sort 0

https://community.splunk.com/t5/Splunk-Search/Using-sort-0-to-avoid-10000-row-limit/m-p/502707

Is that true, or is the only way to change the limit to modify limits.conf? Thanks
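For reference, the append cap is a subsearch limit that comes from limits.conf, not from sort (sort 0 only removes the output truncation of the sort command itself). A config sketch — the values shown are the usual defaults, so check the limits.conf spec for your version:

```ini
# limits.conf on the search head
[searchresults]
# rows a subsearch used by append can return (default 50000)
maxresultrows = 50000

[subsearch]
# seconds a subsearch may run before being finalized (default 60)
maxtime = 60
```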
Hi, I don't know where the problem is. The search is: | rex '(?<field>H.+)\\' | table field. I want to use a regular expression to parse the field and show the result in the output. The field holds a path like \Pc\Hardware\Nice\ok, and I want to get all of the words after Pc\. I don't know how to solve it; the search literally does nothing and returns the same original data. Thank u
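For what it's worth, rex expects a double-quoted pattern (single quotes won't work here, which is likely why nothing happens), and a literal backslash is usually written as \\\\ inside the quoted pattern. A sketch, assuming the field holding the path is called path (a placeholder name):

```spl
| rex field=path "\\\\Pc\\\\(?<after_pc>.+)"
| table after_pc
```

With the example path \Pc\Hardware\Nice\ok, after_pc should capture Hardware\Nice\ok.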
Hello Splunkers, Is there a way to identify/search which SMB version is being used across the network? I am looking to detect SMBv1 specifically, to use as a source for disabling SMBv1 throughout the network. Regards
Hello everyone! I have 2 lookups - 1.csv and 2.csv. 1.csv contains this table:

host     user       result
host1    Alex       success
host2    Michael    fail

2.csv:

host     action
host1    action1
host2    action2

I want to make a search that joins these two tables into one by the host field (but only in the search, without changing the CSV content). It should turn out like this:

host     user       result     action
host1    Alex       success    action1
host2    Michael    fail       action2

So if the same host appears in both tables, join the rows into one. Thank you for your help
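A sketch of that in-search join (assumes both files are accessible as lookups under those filenames):

```spl
| inputlookup 1.csv
| lookup 2.csv host OUTPUT action
| table host user result action
```

The lookup command enriches each row from 1.csv with the matching action from 2.csv without modifying either file.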
I have a query which displays the top 10 sourcetypes by consumption. I would like to enable drilldown to achieve the following: when the user clicks on a particular row, it should open a new tab and run a different query for each row. Right now, I have enabled Drilldown -> On Click -> Link to Search -> Custom -> Search String -> Apply. This way, the same query gets executed on selection of any row, but I would like to define different queries for different rows based on the selection. Is this possible with drilldown?
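In Simple XML this can be sketched with <condition> elements inside the drilldown, branching on the clicked row's value (the match expressions and link targets below are hypothetical examples, not a tested configuration):

```xml
<drilldown>
  <condition match="'row.sourcetype' == &quot;access_combined&quot;">
    <link target="_blank">search?q=index%3Dweb%20sourcetype%3Daccess_combined</link>
  </condition>
  <condition>
    <link target="_blank">search?q=index%3Dmain%20sourcetype%3D$row.sourcetype|u$</link>
  </condition>
</drilldown>
```

The first condition that matches wins; the final condition with no match attribute acts as the fallback for all other rows.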
Hi, Please let us know how to block PUT, OPTIONS, and DELETE at the EUM level instead of blocking at the Nginx / LB level. We have already tried disabling these methods at the Nginx / LB level, but they get blocked only for HTTP/2, whereas HTTP/1.1 still returns a response. Regards, Vaishnavi
How is Splunk performance affected when the status of "buckets_created_last_60m" and "percent_small_buckets_created_last_24h" becomes red in the health check?
Hi Team, I'm trying to combine events generated within a specific span of 1 hour and show the count as 1 instead of the actual count. I tried with bucket and it's clubbing them, but the count is still not coming to 1. Irrespective of how many events have been generated for a specific condition in a span of 1 hour, I want to keep the count as 1. Can someone help on how to achieve this? Thanks
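One way to sketch it (the index and condition are placeholders): bucket into hour spans, collapse each span to a single row, then force the count to 1:

```spl
index=your_index your_condition
| bin _time span=1h
| stats count by _time
| eval count=1
```

If you need one row per hour per host (or some other grouping), add that field to the stats by clause before overwriting count.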
Hello Splunkers, I have the source code below and am using the base search index=syslog process!=switchd, but it's taking a while to load. Is there a better way to write the base search to optimize the searches and make the dashboards load faster?

<form theme="dark">
  <label>basesearch</label>
  <search id="base">
    <query>index=syslog process!=switchd</query>
    <earliest>-30m@m</earliest>
    <latest>now</latest>
  </search>
  <fieldset submitButton="false">
    <input type="multiselect" token="multi_process" searchWhenChanged="true">
      <label>Process</label>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <fieldForLabel>process</fieldForLabel>
      <fieldForValue>process</fieldForValue>
      <search base="base">
        <query>search error OR ERROR OR fail OR failed OR errors OR faulted OR "*NVRM: Xid (PCI*" NOT NOTIFICATION process!=switchd host=$host_preos$ | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | stats count by host process | dedup process</query>
      </search>
      <valuePrefix>process="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
    </input>
    <input type="text" token="search_text" searchWhenChanged="true">
      <label>Search Text</label>
      <default>*</default>
    </input>
    <input type="multiselect" token="host_preos">
      <label>Preos Hosts</label>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
      <search base="base">
        <query>search error OR ERROR OR fail OR FAIL OR failed OR Failed OR errors process!=switchd process=* host="preos*" | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" | stats count by host | dedup host</query>
      </search>
      <choice value="preos*">All</choice>
      <default>preos*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Error Message Counts - For Host ($host_preos$)</title>
      <chart>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*62*" OR error OR ERROR OR fail OR failed OR errors OR faulted OR "NVRM: Xid" NOT NOTIFICATION process!=switchd host IN($host_preos$) | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | timechart span=1h count(_time) by host limit=0</query>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.placement">right</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Overall Error Message Count - For Host ($host_preos$)</title>
      <table>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*62*" OR error OR ERROR OR fail OR failed OR errors OR faulted OR "*NVRM: Xid (PCI*" NOT NOTIFICATION process!=switchd host IN($host_preos$) | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | stats count by host | addcoltotals labelfield=host | sort -count</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
    <panel>
      <title>Error Message Count per Process - For Host ($host_preos$)</title>
      <table>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*62*" OR error OR ERROR OR fail OR failed OR errors OR faulted OR "*NVRM: Xid (PCI*" NOT NOTIFICATION process!=switchd host IN($host_preos$) | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | stats count by host process | addcoltotals labelfield=host | sort -count</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>NVRM Xid Error Summary</title>
      <table>
        <title>NVRM Xid Error</title>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*" process!=switchd host IN($host_preos$) | rex field=_raw "NVRM\:\sXid\s\(PCI\:(?&lt;PCI_Address&gt;[^ \)]+)\)\:\s(?&lt;Error_Code&gt;[^ ]+)\,\s*pid\=(?&lt;pid&gt;[^ ]+)\,\s*name\=(?&lt;name&gt;[^ ]+)\,\s(?&lt;Error_Message&gt;(.*))" | stats count by host Error_Code Error_Message PCI_Address pid name | addcoltotals count labelfield=host | sort -count | fields host Error_Code Error_Message count PCI_Address pid name</query>
        </search>
        <option name="count">5</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>RmInit Error on Boot Summary</title>
      <table>
        <search>
          <query>index=syslog **RmInit error ==* process!=switchd host IN($host_preos$) | search process="nhc-boot.sh" | rex field=_raw "\[\d*\]\:\s*\[(?&lt;Log_Level&gt;[^\] ]+)" | rex field=_raw "(prolog\:|kernel\:|\[\d*\]\:)(?&lt;Message&gt; *(.+))" | rex field=_raw "NHC\:\s*(?&lt;Error_Message&gt;[^.*]+)\=\=" | search Message!="*failed=0*" Message!="*level=info*" | stats count by host | addcoltotals labelfield=host | sort -count</query>
          <earliest>$search_time.earliest$</earliest>
          <latest>$search_time.latest$</latest>
        </search>
        <option name="count">10</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
    <panel>
      <title>RmInitAdapter Summary</title>
      <table>
        <search>
          <query>index=syslog *RmInitAdapter* host IN($host_preos$) | search process="kernel" | rex field=_raw "\[\d*\]\:\s*\[(?&lt;Log_Level&gt;[^\] ]+)" | rex field=_raw "(prolog\:|kernel\:|\[\d*\]\:)(?&lt;Message&gt; *(.+))" | rex field=_raw "NVRM\:\sGPU\s*(?&lt;GPU&gt;[^ ]+)\:\s*(?&lt;Error_Message&gt;[^.*]+)" | search Message!="*failed=0*" Message!="*level=info*" _raw="*RmInitAdapter failed*" | stats count by host GPU Error_Message | addcoltotals labelfield=host | sort -count</query>
          <earliest>$search_time.earliest$</earliest>
          <latest>$search_time.latest$</latest>
        </search>
        <option name="count">6</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>Error Message - For Host ($host_preos$)</title>
      <table>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*62*" OR error OR ERROR OR fail OR failed OR errors OR faulted OR "*NVRM: Xid (PCI*" NOT NOTIFICATION process!=switchd host IN($host_preos$) | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)(\s*\S*\s\[\s*\S*\s|\s*\S*\s\[\S*\s|(\S*)\s*(\S*\:|\S*)\s*)(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | stats count by host process _time Message | addcoltotals labelfield=host | sort -count</query>
        </search>
        <option name="count">15</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
  </row>
</form>
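One approach worth trying (a sketch built from the filters the panels already share; test against your data): push the common error terms, the shared rex, and the Message filters into the base search itself, so each post-process search starts from a much smaller result set. A non-transforming base search should also end with fields to keep only what the panels need:

```spl
index=syslog process!=switchd (error OR ERROR OR fail OR failed OR errors OR faulted OR "NVRM: Xid") NOT NOTIFICATION
| rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?<Message>(.*))"
| search Message!="*failed=0*" Message!="*level=info*"
| fields _time host process Message
```

Panels that parse _raw directly (the NVRM Xid and RmInit panels) would need _raw kept in the fields list, or can stay as standalone searches as they are now.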
In a Dashboard Studio scatter plot chart, can we use rectangle markers instead of square markers? Can we rearrange or fix the markers according to the chart gridlines instead of the canvas gridlines? Can I control the width and height of the markers manually in a scatter plot chart?
How do I include a full log in a Splunk alert message?
Hi, kindly assist me, as I am not getting the results I anticipate. I wish to have a table like this:

ClientIP                 Count    Percentage
1.1.1.1 - 1.1.1.255      50       50%
2.1.1.0 - 2.1.1.255      25       25%
3.1.1.0 - 3.1.1.255      25       25%
Total                    100      100%

Presently my query does NOT group by CIDR as I wished. It spits out individual IPs, but it would be nice to have the IPs in the same CIDR range grouped in one column; that way I'd have a nice-looking table. I used this query to get individual percentages but am not happy with the results. I would really appreciate any help.

index=* sourcetype=* | stats count by clientip | eventstats sum(count) as perc | eval percentage = round(count*100/perc,2)
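If grouping by /24 is acceptable, here is a sketch that derives the subnet from the IP string (clientip comes from your query; the /24 assumption is mine — mixed prefix lengths would instead need a lookup defined with match_type = CIDR(cidr_range)):

```spl
index=* sourcetype=*
| eval subnet=replace(clientip, "\.\d+$", ".0/24")
| stats count by subnet
| eventstats sum(count) as total
| eval percentage=round(count*100/total,2)
| fields subnet count percentage
```

The replace swaps the last octet for .0/24, so all IPs in the same /24 collapse into one row before the percentage is computed.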