All Topics

Hi all, I have a question and need to do the following: search condition_1 in index_1 and get the value of field_1 and the value of field_2. Then search for that value of field_1 in index_2 and get the value of field_3. I want to calculate the difference between the value of field_2 and the value of field_3. Is it possible to achieve this with a single query?
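A common pattern for this kind of cross-index correlation is to search both indexes at once and group on the shared field. A sketch, assuming field_1 appears under the same name in both indexes (rename it first if it does not), and that each field has a single value per field_1 (otherwise use max or latest instead of values):

```
(index=index_1 condition_1) OR (index=index_2)
| stats values(field_2) as field_2 values(field_3) as field_3 by field_1
| eval difference = field_2 - field_3
```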
Hi, is there a way to determine whether an index has stopped logging / gone inactive? I have tried looking through the docs, but I am new to Splunk and am still trying to figure this out. I know we can use metadata for hosts and sourcetypes, but it doesn't seem to work for indexes. Any recommendations? Timo
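One common approach is tstats, which reads index-level metadata quickly rather than scanning events. A sketch; the 60-minute threshold is an arbitrary assumption to tune to your ingest cadence:

```
| tstats latest(_time) as last_event where index=* by index
| eval minutes_idle = round((now() - last_event) / 60)
| where minutes_idle > 60
```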
Hi folks, I need to split a multiline field:

-2.9416067
53.0374031
0.0

The first line is latitude and the second line is longitude. Is it possible to extract these two fields from this multi-line field?
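A rex over the raw field can capture the first two lines, since \s matches the newline between them. A sketch; myfield is a placeholder for the actual field name:

```
| rex field=myfield "^(?<latitude>-?[0-9.]+)\s+(?<longitude>-?[0-9.]+)"
```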
I'm trying to match events in transforms.conf on key=value strings (like EventCode=103 and so on). It wouldn't work unless I escaped the equals sign with a backslash. So a config entry like REGEX=ComputerName=whatever.domain.com doesn't seem to work, but REGEX=ComputerName\=whatever.domain.com does. I generally don't mind it, but I would love to see a piece of docs that says that the equals sign has to be escaped. Normally it doesn't, so I have no idea whether it's something to do with the regex itself or with conf file parsing. Can anyone point me to a proper doc?
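For reference, a minimal transforms.conf stanza illustrating the escaped equals sign described above (the stanza name and routing target are hypothetical; also note the literal dots are escaped, since unescaped they match any character):

```ini
[match_computer]
REGEX = ComputerName\=whatever\.domain\.com
DEST_KEY = queue
FORMAT = nullQueue
```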
Hi, let's imagine I have these rows:

Name       Value1  Value2
foo        1       2
foo12      1       6
foodazd    5       6
fooaoke    4       3
foo56      2       3
bar        1       2
barjodpez  7       4
barjo      7       4
bar125     7       5

I would like to create a search that gives:

Name                             Value1   Value2
foo foo12 foodazd fooaoke foo56  1 2 4 5  2 3 6
bar barjodpez barjo bar125       1 7      2 4 5

So to explain with words, I want to merge rows based on the smallest common substring present in the Name column (here, foo and bar). Thanks for your help.
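If the common prefixes are known in advance, the grouping itself is straightforward with stats values(). A sketch assuming the prefixes are foo and bar; computing the smallest common substring generically would require custom logic:

```
| rex field=Name "^(?<prefix>foo|bar)"
| stats values(Name) as Name values(Value1) as Value1 values(Value2) as Value2 by prefix
| fields - prefix
```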
Hello, I have a problem with my distributed environment where some of my instances appear greyed out under CPU and memory utilization. If I open the specific instance panels I see the red icon with the errors below. This is very strange because it affects the indexers only. The only difference with them is that the web service is disabled, as per Splunk best practices. I opened a case with Splunk support but we couldn't find a solution yet.

Splunk version: 8.0.3 (indexers), 8.0.5 (everything else). Is this a problem?

[subsearch][servername]Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/server/info?count=0&strict=false from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.
[subsearch][servername]Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/services/server/info?count=0&strict=false from server=https://127.0.0.1:8089 - Bad Request

MORE INFO: If I run the command manually, like https://10.10.10.1:8089/services/server/status/resource-usage/hostwide, I get the output in my browser.

I read this post: https://community.splunk.com/t5/Getting-Data-In/Splunk-Management-Console-Error-subsearch-Rest-Processor-https/m-p/326219#M60633

It talks about the indexer role. My Cluster Master is also the SHC Deployer (Search Head Cluster Deployer); would this be the role I have to move? It's not an indexer; I have 6 dedicated indexers and 5 dedicated search heads.
Hi, I have a search that scans millions of events and is extremely slow. Is there a way to speed it up? This is the query:

index=audit | table db_name | dedup db_name | outputlookup audit.csv
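Replacing table + dedup with a stats aggregation usually helps a lot here, because stats is distributed out to the indexers while table followed by dedup pulls every event back to the search head before deduplicating. A sketch:

```
index=audit
| stats count by db_name
| fields db_name
| outputlookup audit.csv
```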
I'm practicing creating dashboards using a base search, and my dashboard has a pie chart which says "no results found". But when I click "Open in Search" I get the results as stats in search and have no problems visualizing them via the visualization tab. My dashboard source code:

<dashboard>
  <label>Basesearch Test</label>
  <search id="basesearch">
    <query>index=fishingbot</query>
    <earliest>$timepickearliest$</earliest>
    <latest>$timepicklatest$</latest>
  </search>
  <row>
    <panel>
      <title>Sessionstart</title>
      <single>
        <search base="basesearch">
          <query>eval start = strftime($timepickearliest$, "%d.%m.%Y %H:%M:%S") | table start</query>
        </search>
        <option name="drilldown">none</option>
      </single>
    </panel>
    <panel>
      <title>Sessionende</title>
      <single>
        <search base="basesearch">
          <query>eval end = strftime($timepicklatest$, "%d.%m.%Y %H:%M:%S") | table end</query>
        </search>
        <option name="drilldown">none</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <search base="basesearch">
          <query>rex field=message "\"(?&lt;Loot&gt;\w+)\" gefischt" | stats count by Loot</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</dashboard>

The time picker tokens come from a drilldown in another dashboard where the bot sessions are listed. I tried it with chart instead of stats, but it's the same result. Where is my flaw?
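One thing worth checking with setups like this: when the base search is a plain event search (not a transforming one), post-process searches only receive a limited set of fields, so a rex on message in the post-process may have nothing to work with. A sketch of the usual remedy, explicitly passing the needed fields through the base search (assuming the events carry a message field):

```xml
<search id="basesearch">
  <query>index=fishingbot | fields _time message</query>
  <earliest>$timepickearliest$</earliest>
  <latest>$timepicklatest$</latest>
</search>
```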
Looking for the most efficient way to find two-way traffic in flow data for a particular set of IP/port/protocol combinations:

index=flow protocol=6 AND src_port IN (94, 407, 1417, 1418, 1419, 1420) OR dest_port IN (94, 407, 1417, 1418, 1419, 1420) AND NOT src_port IN (21, 22) AND NOT dest_port IN (21, 22)

This gets us the initial data set, but we are having trouble formulating an efficient way to find matching events from the initial query where src_ip = dest_ip and dest_ip = src_ip. For example:

src_ip=10.1.1.10, src_port=94, dest_ip=10.1.1.1, dest_port=407

would match:

src_ip=10.1.1.1, src_port=94, dest_ip=10.1.1.10, dest_port=407
src_ip=10.1.1.1, src_port=1418, dest_ip=10.1.1.10, dest_port=407
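One way to pair up the two directions without a subsearch is to build a direction-independent key from the IP pair, then keep only keys seen with more than one source address. A sketch, with explicit parentheses around the OR so it binds as intended:

```
index=flow protocol=6 (src_port IN (94,407,1417,1418,1419,1420) OR dest_port IN (94,407,1417,1418,1419,1420)) NOT src_port IN (21,22) NOT dest_port IN (21,22)
| eval pair=if(src_ip < dest_ip, src_ip."|".dest_ip, dest_ip."|".src_ip)
| stats dc(src_ip) as directions values(src_ip) as src_ips values(src_port) as src_ports values(dest_port) as dest_ports by pair
| where directions > 1
```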
Can anyone advise whether we should upgrade a clustered Splunk environment from 8.0.3 to 8.2.2?
I want to add the its:translate attribute to a few elements in the source XML of a dashboard. Is it possible? Because when I tried to do that, Splunk throws a warning stating that the node attribute is unknown.
Hi, I have set up a Splunk Enterprise instance (version 8.2.1) and a Universal Forwarder instance on Docker on the same machine, and I'm trying to forward data into the Splunk indexer. Here's what I have so far:

On the Splunk Enterprise instance (1.1.1.1):
- Created an app named "abc"
- Created an index named "abc_idx" on app "abc"
- Created a sourcetype named "abc_data" on app "abc"

On the Splunk forwarder:
- Added the indexer: ./bin/splunk add forward-server 1.1.1.1:9997
- My very next command was ./bin/splunk add monitor /splunk_forward/log

Then I realized I wanted the monitored logs to go to the index "abc_idx" with the sourcetype "abc_data", so I removed the monitor and restarted the container. This is when I saw the events appearing in the "main" index, so I believe the files did get forwarded. I then ran on the Splunk forwarder:

./bin/splunk add monitor /splunk_forward/log -index abc_idx -sourcetype abc_data

But I did not see any events in the index "abc_idx". However, if I run the "oneshot" command, the events show up in the index "abc_idx". Is Splunk refusing to (re)index the same files again, even though they are going to a different index?

Also, I thought the commands I typed would end up in /opt/splunkforwarder/etc/system/local/inputs.conf, but I only see [splunktcp://9997] in it, not the folder I'm monitoring. Am I looking at the wrong file? However, I see the following in /opt/splunkforwarder/etc/system/local/outputs.conf:

[tcpout:default-autolb-group]
server = 1.1.1.1:9997

[tcpout-server://1.1.1.1:9997]

So why did my indexer configuration become part of the config file? Preferably, I would like to configure the forwarder using the config files, but I'm not sure exactly which ones to modify - local/inputs.conf and anything else? Thank you.
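For the config-file approach, a minimal monitor stanza on the forwarder would look like the sketch below. One relevant behavior: the forwarder keeps a record of files it has already read, so files ingested once will not be re-ingested under new index/sourcetype settings unless that read state is reset, which matches the symptom described above.

```ini
# inputs.conf on the universal forwarder (a sketch)
[monitor:///splunk_forward/log]
index = abc_idx
sourcetype = abc_data
disabled = false
```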
Hi folks, I was wondering what the best way is to collect audit logs from VMware ESXi. Regards, Alessandro
Hi, I am trying to filter out rows from a table based on their content. Example:

LP.  NAME  SURNAME  STREET       CITY
1.   Bob   Smith    mainst       florida
2.   Greg  Obama    secondaryst  miami

I want to remove the second row from the table based on the fact that it has "Greg" and "Miami" in the name and city fields. So far I tried:

|search Name NOT "Greg" AND City NOT "Miami"

but it didn't work for me. Any ideas?
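The field=value comparison inside a NOT is the form search expects; Name NOT "Greg" is not valid syntax. A sketch, assuming the field names are NAME and CITY as shown in the table (value matching in search is case-insensitive):

```
| search NOT (NAME="Greg" AND CITY="miami")
```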
Yes, I know that filtering has been discussed many times here, but my case is slightly different. I have a UF pulling events using WMI. It then pushes the events to upstream HFs. On the HFs I tried to do filtering similarly to https://docs.splunk.com/Documentation/Splunk/8.2.2/Forwarding/Routeandfilterdatad#Filter_WMI_and_Event_Log_events

But in my case there are two differences:

1) I didn't want to filter out particular events. I wanted to filter out everything and keep just two kinds of events. Seems easy - just do a default transform with REGEX=. to set queue to nullQueue, and then match the ones you want to index and set queue to indexQueue. Well, it doesn't work. Maybe because:

2) I didn't want to apply this to the whole sourcetype. And here's where I suspect something might have gone wrong, because if it were just that my transforms are bad, the default one sending to the nullQueue should still work. But it seems that they don't work at all. My definitions:

props.conf:

[host::TEST...]
TRANSFORMS-routing = TEST_default_drop,TEST_index

(the hosts I'm getting the data from are called TEST01.domain.com, TEST02.domain.com and so on; I already tried host::TEST*.domain.com)

transforms.conf:

[TEST_default_drop]
REGEX=.
DEST_KEY=queue
FORMAT=nullQueue

[TEST_index]
REGEX=(?m)^EventCode=(103|104)
DEST_KEY=queue
FORMAT=indexQueue

Everything seems reasonably well set up, but it doesn't work - I'm getting all the data in my index, with no filtering at all. I wouldn't want to configure filtering for the whole sourcetype, because I might use WMI in the future for other things and this particular filtering is only for this one kind of source. Does the UF set the host field to something other than I'm expecting? Can I debug it somehow reasonably? (The UF is kind of a production one in general, so it'd be unwise to turn fully blown debugging of everything on it.)
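One detail that may be worth ruling out: in props.conf stanzas, * is the wildcard generally used for host matching, while ... is path-oriented (recursing through directory separators in source stanzas). A sketch of the variant to try - a hypothesis to test, not a confirmed fix:

```ini
# props.conf on the HF
[host::TEST*]
TRANSFORMS-routing = TEST_default_drop,TEST_index
```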
We have two devices, AUSTDPVPN1 and AUSTDPVPN2, with current logged-in user counts of 0 and 2867. Whenever the AUSTDPVPN1 user login count is 0, I want it replaced with the login user count of AUSTDPVPN2. Below are the query and its output.

Query:

index="pulse_secure_index" STS20641
| rex field=host "(?<device>\w+).*"
| rex field=msg "STS20641: Number of concurrent users logged in to the device: (?<currUser>\d+)"
| eval device=upper(device)
| search device=AUSTDPVPN*
| stats max(currUser) as currentUser BY device

Output:

device      currentUser
AUSTDPVPN1  0
AUSTDPVPN2  2867

Please suggest how we can get the output below:

device      currentUser
AUSTDPVPN1  2867
AUSTDPVPN2  2867
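One way is to compute the maximum count across both devices with eventstats and substitute it wherever the per-device count is 0. A sketch of the lines to append after the existing stats:

```
| eventstats max(currentUser) as peakUser
| eval currentUser=if(currentUser == 0, peakUser, currentUser)
| fields - peakUser
```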
So, in detail: I have a dashboard that reads log files to monitor a list of hosts' status, UP or DOWN. But when some hosts are offline, there are no log files from them. I have a CSV file that lists all the host names, and I want to compare the two lists I have in order to show the offline hosts, that is, the host names that appear in the CSV file but not on the dashboard I already have.
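A common pattern is to start from the lookup and subtract the hosts that did log. A sketch, assuming the CSV has been uploaded as a lookup file named all_hosts.csv with a host column matching the events' host field (both names are assumptions; adjust to the actual index and column names):

```
| inputlookup all_hosts.csv
| fields host
| search NOT [ search index=your_index | stats count by host | fields host ]
```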
I have a regex in my dashboard that validates whether the input value in the field is numeric or not. The code/regex works perfectly well. I need some help in modifying the particular regex so that it also checks whether the input field contains any special character such as a " (double quote) or a space at the start, end, or middle of the string (basically padding, spacing, or quotes). That would really help. The regex I have, which only checks numeric, is as follows. It is in the if(match(value ... section:

<input type="text" token="selText">
  <label>Enter Only Digits</label>
  <change>
    <eval token="validationResult">if(match(value, &quot;^[0-9]+$&quot;), &quot;Numeric&quot;, &quot;Not Numeric&quot;)</eval>
  </change>
</input>
</fieldset>
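One option is to test for the unwanted characters first and fall through to the existing numeric check. A sketch of the eval; the extra branch and its "Contains space or quote" message are assumptions, and the &quot; entities are kept as in the original source:

```xml
<eval token="validationResult">if(match(value, &quot;[\s\&quot;]&quot;), &quot;Contains space or quote&quot;, if(match(value, &quot;^[0-9]+$&quot;), &quot;Numeric&quot;, &quot;Not Numeric&quot;))</eval>
```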
Hello,

<row>
  <panel>
    <title>Manage alerts</title>
    <input type="checkbox" token="detail_input_token">
      <label></label>
      <choice value="hide">Hide detail</choice>
      <initialValue>hide</initialValue>
    </input>
    <html>debug detail_input_token:$detail_input_token$</html>
    <html rejects="$detail_input_token$">
      <body>
        <p>
          <A HREF="/manager/myapp/data/lookup-table-files?ns=myapp&amp;pwnr=-&amp;search=mySearch&amp;count=100" target="_blank">
            Manage <B>lookup mySearch</B> (csv files).
          </A>
        </p>
      </body>
    </html>
  </panel>
</row>

This works in Splunk 6.4.2. We migrated to Splunk 8.2 and it doesn't work now. I have made many changes without success. Does somebody have an idea? Thanks in advance.
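One thing that may be worth trying: newer Simple XML versions are stricter about the HTML allowed inside <html> panels. A sketch of the same panel content without the <body> wrapper and with lowercase tags - an assumption to test, not a confirmed fix:

```xml
<html rejects="$detail_input_token$">
  <p>
    <a href="/manager/myapp/data/lookup-table-files?ns=myapp&amp;pwnr=-&amp;search=mySearch&amp;count=100" target="_blank">
      Manage <b>lookup mySearch</b> (csv files).
    </a>
  </p>
</html>
```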