All Topics

Hello everyone! I have 2 lookups - 1.csv and 2.csv.

1.csv contains this table:

host user result
host1 Alex success
host2 Michael fail

2.csv:

host action
host1 action1
host2 action2

I want to make a search that joins these two tables into one by the host field (but only in the search, without changing the csv contents). It should turn out like this:

host user result action
host1 Alex success action1
host2 Michael fail action2

So if the same host appears in both tables, join the rows into one. Thank you for your help!
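A minimal sketch of one way to do this in search only, assuming both files are uploaded as lookup table files named 1.csv and 2.csv:

| inputlookup 1.csv
| lookup 2.csv host OUTPUT action
| table host user result action

Using lookup instead of join keeps it to a single pass; hosts present in 1.csv but missing from 2.csv simply get an empty action.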
I have a query which displays the top 10 consumed sourcetypes. I would like to enable drilldown to achieve the task below: when the user clicks on a particular row, it should open a new tab and run a new query for that row. Right now, I have enabled Drilldown -> On Click -> Link to Search -> Custom -> Search String -> Apply. This way the same query gets executed on selection of any row, but I would like to define different queries for different rows based on the selection. Is this possible with drilldown?
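For reference, a rough sketch of per-row drilldown using <condition> elements in Simple XML; the sourcetype values and target searches here are made up for illustration:

<drilldown>
  <condition match="'row.sourcetype' == &quot;access_combined&quot;">
    <link target="_blank">search?q=search%20index%3Dweb%20sourcetype%3Daccess_combined%20%7C%20timechart%20count&amp;earliest=-24h&amp;latest=now</link>
  </condition>
  <condition>
    <link target="_blank">search?q=search%20index%3Dmain%20sourcetype%3D$row.sourcetype|u$&amp;earliest=-24h&amp;latest=now</link>
  </condition>
</drilldown>

The first matching condition wins, and the final condition with no match attribute acts as the catch-all.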
Hi, Please let us know how to block PUT, OPTIONS, and DELETE right at the EUM level instead of blocking at the Nginx / LB level. We have already tried disabling these methods at the Nginx / LB level, but they only get blocked for HTTP/2, whereas HTTP/1.1 still returns a response. Regards, Vaishnavi
How is Splunk performance affected when the status of "buckets_created_last_60m" and "percent_small_buckets_created_last_24h" turns red in the health check?
Hi Team, I'm trying to combine events generated within a specific 1-hour span and show the count as 1 instead of the actual count. I tried with bucket, and it's clubbing them, but the count is still not coming to 1. Irrespective of how many events were generated for a specific condition in a 1-hour span, I want to keep the count at 1. Can someone help on how to achieve this? Thanks
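A minimal sketch of one approach (index name and the host grouping field are placeholders; swap in whatever identifies your condition): bin into 1-hour buckets, then force each bucket's count to 1.

index=your_index your_condition
| bin _time span=1h
| stats count by _time host
| eval count=1

If the goal is instead a single total of distinct hours that had at least one event, | stats dc(_time) after the bin gives that directly.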
Hello Splunkers, I have the source code below and am using index=syslog process!=switchd as the base search, but it takes a while to load. Is there a better way to write the base search to optimize the searches and make the dashboard load faster?

<form theme="dark">
  <label>basesearch</label>
  <search id="base">
    <query>index=syslog process!=switchd</query>
    <earliest>-30m@m</earliest>
    <latest>now</latest>
  </search>
  <fieldset submitButton="false">
    <input type="multiselect" token="multi_process" searchWhenChanged="true">
      <label>Process</label>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <fieldForLabel>process</fieldForLabel>
      <fieldForValue>process</fieldForValue>
      <search base="base">
        <query>search error OR ERROR OR fail OR failed OR errors OR faulted OR "*NVRM: Xid (PCI*" NOT NOTIFICATION process!=switchd host=$host_preos$ | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | stats count by host process | dedup process</query>
      </search>
      <valuePrefix>process="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
    </input>
    <input type="text" token="search_text" searchWhenChanged="true">
      <label>Search Text</label>
      <default>*</default>
    </input>
    <input type="multiselect" token="host_preos">
      <label>Preos Hosts</label>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
      <search base="base">
        <query>search error OR ERROR OR fail OR FAIL OR failed OR Failed OR errors process!=switchd process=* host="preos*" | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" | stats count by host | dedup host</query>
      </search>
      <choice value="preos*">All</choice>
      <default>preos*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Error Message Counts - For Host ($host_preos$)</title>
      <chart>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*62*" OR error OR ERROR OR fail OR failed OR errors OR faulted OR "NVRM: Xid" NOT NOTIFICATION process!=switchd host IN($host_preos$) | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | timechart span=1h count(_time) by host limit=0</query>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.placement">right</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Overall Error Message Count - For Host ($host_preos$)</title>
      <table>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*62*" OR error OR ERROR OR fail OR failed OR errors OR faulted OR "*NVRM: Xid (PCI*" NOT NOTIFICATION process!=switchd host IN($host_preos$) | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | stats count by host | addcoltotals labelfield=host | sort -count</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
    <panel>
      <title>Error Message Count per Process - For Host ($host_preos$)</title>
      <table>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*62*" OR error OR ERROR OR fail OR failed OR errors OR faulted OR "*NVRM: Xid (PCI*" NOT NOTIFICATION process!=switchd host IN($host_preos$) | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)\s*(\S*\:|\S*)\s*(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | stats count by host process | addcoltotals labelfield=host | sort -count</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>NVRM Xid Error Summary</title>
      <table>
        <title>NVRM Xid Error</title>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*" process!=switchd host IN($host_preos$) | rex field=_raw "NVRM\:\sXid\s\(PCI\:(?&lt;PCI_Address&gt;[^ \)]+)\)\:\s(?&lt;Error_Code&gt;[^ ]+)\,\s*pid\=(?&lt;pid&gt;[^ ]+)\,\s*name\=(?&lt;name&gt;[^ ]+)\,\s(?&lt;Error_Message&gt;(.*))" | stats count by host Error_Code Error_Message PCI_Address pid name | addcoltotals count labelfield=host | sort -count | fields host Error_Code Error_Message count PCI_Address pid name</query>
        </search>
        <option name="count">5</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>RmInit Error on Boot Summary</title>
      <table>
        <search>
          <query>index=syslog *RmInit error ==* process!=switchd host IN($host_preos$) | search process="nhc-boot.sh" | rex field=_raw "\[\d*\]\:\s*\[(?&lt;Log_Level&gt;[^\] ]+)" | rex field=_raw "(prolog\:|kernel\:|\[\d*\]\:)(?&lt;Message&gt; *(.+))" | rex field=_raw "NHC\:\s*(?&lt;Error_Message&gt;[^.*]+)\=\=" | search Message!="*failed=0*" Message!="*level=info*" | stats count by host | addcoltotals labelfield=host | sort -count</query>
          <earliest>$search_time.earliest$</earliest>
          <latest>$search_time.latest$</latest>
        </search>
        <option name="count">10</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
    <panel>
      <title>RmInitAdapter Summary</title>
      <table>
        <search>
          <query>index=syslog *RmInitAdapter* host IN($host_preos$) | search process="kernel" | rex field=_raw "\[\d*\]\:\s*\[(?&lt;Log_Level&gt;[^\] ]+)" | rex field=_raw "(prolog\:|kernel\:|\[\d*\]\:)(?&lt;Message&gt; *(.+))" | rex field=_raw "NVRM\:\sGPU\s*(?&lt;GPU&gt;[^ ]+)\:\s*(?&lt;Error_Message&gt;[^.*]+)" | search Message!="*failed=0*" Message!="*level=info*" _raw="*RmInitAdapter failed*" | stats count by host GPU Error_Message | addcoltotals labelfield=host | sort -count</query>
          <earliest>$search_time.earliest$</earliest>
          <latest>$search_time.latest$</latest>
        </search>
        <option name="count">6</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>Error Message - For Host ($host_preos$)</title>
      <table>
        <search base="base">
          <query>search "*NVRM: Xid (PCI*62*" OR error OR ERROR OR fail OR failed OR errors OR faulted OR "*NVRM: Xid (PCI*" NOT NOTIFICATION process!=switchd host IN($host_preos$) | search $multi_process$ | rex field=_raw "\d{4}\-\d{2}\-\d*\w\d*\:\d*\:\d*\.\d*(\+|\-)\d*\:\d*\s*(\S*)(\s*\S*\s\[\s*\S*\s|\s*\S*\s\[\S*\s|(\S*)\s*(\S*\:|\S*)\s*)(?&lt;Message&gt;(.*))" | search Message!="*failed=0*" Message!="*level=info*" Message=*$search_text$* | stats count by host process _time Message | addcoltotals labelfield=host | sort -count</query>
        </search>
        <option name="count">15</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="host">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
  </row>
</form>
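One common optimization, sketched here only as an illustration and not as the poster's finished code (field names are reused from the dashboard above): make the base search transforming, so the indexers filter and aggregate once and the panels post-process small result sets instead of re-scanning raw events.

<search id="base">
  <query>index=syslog process!=switchd (error OR ERROR OR fail OR failed OR errors OR faulted OR "NVRM: Xid") NOT NOTIFICATION | bin _time span=1h | stats count by _time host process</query>
  <earliest>-30m@m</earliest>
  <latest>now</latest>
</search>

Each panel would then start from the aggregated rows, e.g. search host IN($host_preos$) $multi_process$ | stats sum(count) as count by host. A non-transforming base search that returns raw events is also capped (by default around 500,000 events), so a transforming base is safer for correctness as well as speed.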
In a Dashboard Studio scatter plot chart, can we use rectangle markers instead of square markers? Can we arrange or fix the markers according to the chart gridlines instead of the canvas gridlines? Can I control the width and height of the markers manually in a scatter plot chart?
How do I include the full log event in a Splunk alert message?
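A minimal sketch, assuming an email alert action: the $result.fieldname$ tokens pull values from the first result row, so $result._raw$ embeds the full raw event in the message. In savedsearches.conf this could look like:

action.email.message.alert = Alert fired on $result.host$. Full event: $result._raw$

The same token works when typed into the Message field of the alert action in the UI; note it expands only the first matching event, not every result.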
Hi, kindly assist me as I am not getting the results I anticipate. I wish to have a table like this:

ClientIP                Count  Percentage
1.1.1.1 - 1.1.1.255     50     50%
2.1.1.0 - 2.1.1.255     25     25%
3.1.1.0 - 3.1.1.255     25     25%
Total                   100    100

Presently my query does NOT group by CIDR as I wished; it spits out individual IPs, but it would be nice to have the IPs in the same CIDR range grouped in one column. That way I have a nice-looking table. I used this query to get individual percentages but am not happy with the results. I would really appreciate any help.

index=* sourcetype=*
| stats count by clientip
| eventstats sum(count) as perc
| eval percentage = round(count*100/perc,2)
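A minimal sketch of one way to group by /24 (assuming that is the range size you want; the labels come out as 1.1.1.0/24 rather than a dash-separated range), built from the query above:

index=* sourcetype=*
| eval subnet=replace(clientip, "^(\d+\.\d+\.\d+)\.\d+$", "\1.0/24")
| stats count by subnet
| eventstats sum(count) as total
| eval percentage=round(count*100/total,2)
| addcoltotals count percentage labelfield=subnet label=Total

The replace() collapses every IP onto its /24 before the stats, so the percentages are computed per subnet instead of per address.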
I'm trying to export raw Linux audit logs to a file. For example:

splunk.exe search "sourcetype=linux:audit _time>xxxx _time<xxxxx" -output rawdata -maxout 0 > outputfile.txt

I'm trying to output a week's worth, but I'm not sure how many event records there are. I tried setting maxout to 500000 and, monitoring with Task Manager, saw splunk grow to 20 GB of memory at its peak. I tried setting maxout to 1000000 and it used up all of my free memory. The actual rawdata output is only a few hundred MB, so why is it using so much memory? More importantly, is there a workaround or fix so it doesn't use so much memory? I could output in smaller time increments (daily, for example), but I don't know if a single day might have generated a lot of events. I suppose I could go down to hourly.
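A rough workaround sketch, assuming the memory growth scales with the events held per invocation: loop over one-hour windows so no single export has to buffer much. PowerShell, with placeholder epoch values and output path:

$start = 1696118400   # window start (epoch) - placeholder
$stop  = 1696723200   # window end (epoch) - placeholder
while ($start -lt $stop) {
    $end = $start + 3600
    # one hour per invocation keeps the in-memory result set small
    & splunk.exe search "sourcetype=linux:audit _time>=$start _time<$end" -output rawdata -maxout 0 >> outputfile.txt
    $start = $end
}

Because the window boundaries are explicit epochs, a day that happened to generate a lot of events just means a few slow iterations rather than one huge one.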
I initialize a lookup file using:

| makeresults
| outputlookup status.csv

I then have this simple search:

| inputlookup status.csv
| eval a=if(isnull(a),1,a+1)
| outputlookup status.csv

This works fine when I run it in Search: the "a" value gets incremented each time the search is executed. But if I put the same thing in a dashboard, it results in an error and does not update the lookup table. In a classic dashboard, the panel with this search just says "Search was cancelled". In Dashboard Studio, a panel with this search as its data source reports this error: "Unexpected non-whitespace character after JSON at position 4". Removing the outputlookup command in either case makes the search work, and it shows an up-to-date value for "a" (given how many times the search was executed in Search). I have no clue what I'm doing wrong with what seems like such a simple thing - really hoping someone can help me!
I am having no luck listing users' memberships within a group using ldapsearch. I am not an AD LDAP expert either. Let's say I have a domain called Foo and an OU (group) called Bar with 10 users. Each user has additional memberships in other groups. I am looking to list the membership attribute for each user. I am starting with

| ldapsearch domain=default search="(&(objectClass=user))"

...but I don't know what to add. Thank you
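A sketch of what this might look like with SA-ldapsearch's basedn and attrs options; the DN components here are guesses for the Foo domain and Bar OU, so adjust to your directory:

| ldapsearch domain=default basedn="OU=Bar,DC=Foo,DC=local" search="(objectClass=user)" attrs="sAMAccountName,memberOf"
| table sAMAccountName memberOf

Scoping with basedn restricts the search to users under the Bar OU, and memberOf is multivalued, so each user's row should list every group DN they belong to.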
Are we able to enable a cron-style schedule for the custom Query input? We have a use case where we want to run a SolarWinds query once per day at a certain time. I've tried updating the inputs.conf file and it validates OK. The log shows it scheduled and rescheduled for the next run, but no data shows up.
Hello! I have recently downloaded Splunk on my Mac for experimenting/practicing searching and dashboarding. I picked a random csv file that has planetary information. One of the fields in my .csv file has a mix of numbers without commas and three numbers that have a comma, e.g. 59,800. I think this is causing those values to not show up in my visualization. Is there a way to remove said comma from the field value? I tried using the option below in the source code under the visualization, but it says it's an unknown option name.

<option name="useThousandSeparators">false</option>
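A minimal sketch of stripping the comma at search time instead; the field name distance is hypothetical, so substitute your own:

... | eval distance=tonumber(replace(distance, ",", ""))

tonumber() ensures the cleaned value is treated as numeric by the visualization rather than as a string.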
Hi. An off-the-shelf application logs to "/var/log/messages". However, unfortunately, the delimiter is \x09 (tab). Is it possible to replace the delimiter with a space or comma on the Splunk Universal Forwarder side before forwarding? The version of the receiving Splunk instance is unknown. The version of the Universal Forwarder is 9.0.1, installed on RHEL 8.5. Thanks!
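For reference, a Universal Forwarder does not parse events, so delimiter rewriting has to happen where parsing happens (the indexer or a heavy forwarder). A sketch using SEDCMD in props.conf there; the sourcetype name is a placeholder:

# props.conf on the indexer / heavy forwarder
[your_sourcetype]
SEDCMD-replace_tabs = s/\x09/ /g

This rewrites every tab to a space at index time; swap the space for a comma in the replacement if that suits the downstream parsing better.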
The UF service failed to start after a reboot on a Windows server. I've addressed that issue, but there are logs that were generated during the downtime that are not being forwarded. Is there any way to force those entries up?
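If the files still exist and are covered by a monitor input, the UF should catch up on its own from its fishbucket checkpoints once the service is running. For files it never saw (rotated away, for example), a oneshot upload is one option; a sketch with placeholder path, sourcetype, and index:

splunk.exe add oneshot "D:\logs\app-during-outage.log" -sourcetype your_sourcetype -index your_index

Note oneshot bypasses the fishbucket, so re-sending a file the UF already indexed will create duplicate events.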
Hello, Is there a way to have a playbook automatically trigger when a file is added to an S3 bucket in our AWS account? My initial thought is to have an AWS Lambda trigger when a file is added to the S3 bucket, have that Lambda publish the file event information to a Kafka topic, have our Splunk SOAR poll that Kafka topic via the Kafka SOAR App, and then have the playbook trigger when something comes in on that poll (if that's even possible). Is this the best way to go about this? Thank you!
When conducting searches, we have observed that SPL searches were not honoring the "earliest" time range specified in the SPL search itself. They only worked when we chose one of the configured presets in the time range picker.
How can you get Splunk Universal Forwarders onto every host in a Windows domain in an isolated environment (no internet access)?
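One common approach is a silent MSI install pushed by a GPO startup script or SCCM, since the UF installer needs no internet access. A sketch with placeholder names (the MSI filename and deployment server are assumptions):

msiexec.exe /i splunkforwarder-9.1.2-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploy.corp.local:8089" /quiet

Pointing DEPLOYMENT_SERVER at an internal deployment server lets you manage inputs and apps afterwards without touching each host again.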
hello, is it possible to use a base search in a subsearch? I would like to call the base search

<search id="signal1">
  <query>index=test</query>
  <earliest>$date.earliest$</earliest>
  <latest>$date.latest$</latest>
</search>

in my subsearch, something like this?

<search base="signal1">
  <query>index=test | stats count as "Nombre total d'erreurs" | appendcols [ search base="signal1" | stats count as "Nombre total d'erreurs" ]</query>
</search>

thanks