All Posts

OK. Several things.

1. I'm not sure why you're trying to join two SQL database search results in Splunk. I get that if you have a hammer everything looks like a nail, but don't you have a better tool for it? SQL and Splunk are fairly disjoint worlds, and while there is some form of interfacing between them, they don't mix particularly well.

2. The 50k result limit is not a general subsearch limit. The general subsearch limit is 10k results; 50k is the result limit for the join command only.

3. Splunk runs a search, Splunk gets results (possibly incomplete if, as in your case, a subsearch hits its limits), Splunk sends results. That's it. You can try searching _internal, or even try accessing specific search job logs for signs of anomalies, but that would have to be something completely separate from the main search. You don't get any "metadata" about your search from within a saved search. I already wrote you that.

4. What I would probably do if I wanted to do something like this (still keeping my first point in mind) would be to first get the data into Splunk, either with a DB Connect input instead of querying remotely on every search, or at least by summary indexing (I don't know your source data, so I don't know which option would be better). Then you can simply do stats over your data in the index instead of bending over backwards to do joins over external data.
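To illustrate that last point: once both tables are ingested, the "join" collapses into a single stats. A minimal sketch, assuming a hypothetical db_summary index and sourcetypes host_table and contact_table (all names here are made up for illustration):

index=db_summary sourcetype IN (host_table, contact_table)
| stats values(host) AS host values(contact) AS contact BY ip
| where isnotnull(host) AND isnotnull(contact)

No subsearch and no join command, so none of the 10k/50k limits apply.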
Hi @yuanliu, first I would like to thank you for your help. This is partly related to my previous post that you solved, but I will describe it better here: https://community.splunk.com/t5/Splunk-Search/How-do-I-quot-Left-join-quot-by-appending-CSV-to-an-index-in/m-p/697794#M237015

This is just an example:
- I have a host table containing IP and hostname, approximately 100k rows with unique IPs.
- I have a contact table containing IP and contact, approximately 1 million rows with unique IPs.

Both can be accessed with a DBX query, but unfortunately they are located in different DB connections, so it's not possible to join them at the backend. The workaround is to filter by subnet on the contact DB and use subsearches to join the contact DB with the host DB.

Due to the 50k-row limit on subsearches used with join, I ran a separate query on the contact DB to find the number of rows for each subnet, then grouped subnets together to make sure each group stays below 50k rows: Group 1 = 40k rows, Group 2 = 45k rows, and Group 3 = 30k rows. After that, I used a left join for each group on the contact DB with the host DB.

Since I don't control the growth of data in the contact DB, I am trying to figure out a way to get an email alert if one of the groups exceeds the 50k limit. I think I am able to create a scheduled report that produces the stats of each subnet in the group, but going back to my original question: I simply want to know whether Splunk can send me an email alert only if certain thresholds are met.

The subsearch is only one of my cases. Another case: I have multiple reports that run daily, and I intend to read a report only if there is a problem, such as empty data, certain thresholds being met, etc.

Input:

Host table
ip         host
10.0.0.1   host1
10.0.0.2   host2
10.0.0.3   host3
10.1.0.1   host4
10.1.0.2   host5
10.1.0.3   host6
10.2.0.1   host7
10.2.0.2   host8
10.2.0.3   host9

Contact table
ip         contact
10.0.0.1   person1
10.0.0.2   person2
10.0.0.3   person3
10.1.0.1   person4
10.1.0.2   person5
10.1.0.3   person6
10.2.0.1   person7
10.2.0.2   person8
10.2.0.3   person9

Output: join of host and contact DB
ip         host    contact
10.0.0.1   host1   person1
10.0.0.2   host2   person2
10.0.0.3   host3   person3
10.1.0.1   host4   person4
10.1.0.2   host5   person5
10.1.0.3   host6   person6
10.2.0.1   host7   person7
10.2.0.2   host8   person8
10.2.0.3   host9   person9
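A minimal sketch of such a threshold alert (the connection name, SQL, and field names are assumptions for illustration, not from the original post):

| dbxquery connection="contact_db" query="SELECT subnet, COUNT(*) AS row_count FROM contacts GROUP BY subnet"
| where row_count > 45000

Saved as an alert with the trigger condition "number of results > 0" and an email alert action, this sends mail only when some subnet approaches the 50k join limit. The same trigger-on-nonempty-results pattern covers the "only email me when a daily report looks wrong" case as well.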
Figured it out... my column name had one upper-case letter in it... I think I need to slow down from the Splunk-ing excitement.
TIME_PREFIX is a regex match for what immediately precedes your timestamp. There are extra quotes, spaces, and what appear to be JSON key-value pair identifiers. I would make the value more explicit, and add a MAX_TIMESTAMP_LOOKAHEAD key once you establish a proper match above.
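For example, the stanza might look roughly like this (a sketch only; the exact regex depends on your raw events, and the timeDetected key is taken from the stanza posted later in this thread):

[change:auditor]
TIME_PREFIX = "timeDetected"\s*:\s*"
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC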
Tried both of the below... I only see errors, which are >=499; for some reason I don't see the success ones, none of the 200s are showing. Something is wrong.

AND ((status_code>=199 status_Code<300) OR (status_code>=499))  - understanding that there is an implied AND in it

AND ((status_code>=199 AND status_Code<300) OR (status_code>=499))  - explicit AND mentioned
@PickleRick Updated the post with the settings in place on the HF. Data is being received at the Heavy Forwarder via a HEC input. It then gets forwarded to the indexers.
@dural_yyz I don't see any specific settings for this sourcetype under local props.conf. I added TIME_PREFIX and TZ values, but that didn't change anything. This is on the source that is receiving the data, i.e. the Heavy Forwarder. Do I need to place any of these settings on the indexers/SH as well?

[change:auditor]
category = Custom
pulldown_type = 1
TIME_PREFIX = timeDetected
TZ = UTC

The system time zone on the HF is set to EDT.
You can simply do ...  ((status_code>=199 status_code<300) OR (status_code>=499))  
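Put together with the base search from the question, the whole thing might look like this (a sketch using the index/source/field names from the original post, with status_code lowercased consistently):

index=xyz source=abc sourcetype=S1 client="BOFA" ((status_code>=199 status_code<300) OR status_code>=499)
| eval Derived_Status_Code=case(status_code>=199 AND status_code<300, "Success",
                                status_code>=499, "Errors")
| table status_code Derived_Status_Code

Because the filtering now happens in the base search, the trailing where clause is no longer needed.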
What are your settings for that sourcetype/source/host? And how are you pushing the events (to which endpoint)?
Please provide your props.conf stanza for the specific sourcetype. In my experience this is an indication that UTC is not explicitly set and the local HF time zone is being used, which is not Eastern. I'm not saying that is the case here for sure, because perhaps you do have TZ explicitly set. The golden rule is never to let Splunk automagically guess the time. It's right almost always, but when it's not, it can mess with production data at the worst times.
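If you have CLI access on the HF, btool will show the effective stanza along with the file each setting comes from (the sourcetype name here is taken from later in this thread):

$SPLUNK_HOME/bin/splunk btool props list change:auditor --debug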
I know that not every feature in Dashboard Studio has been exposed in the UI yet. I see that you can set tokens on interaction with visualizations but I'm not seeing anything similar for inputs. Does anyone know if there is a change event handler for inputs in Dashboard Studio like there is in the XML dashboards? I've not seen anything in the docs, but I could just be looking in the wrong place. Thanks.
Oh and I put the tokens in my panel titles only as a sanity debug check.  They have no reason to exist there once your dashboard is finalized.
<input type="dropdown" token="tokEnvironment" searchWhenChanged="true">

This is the problem: it triggers the search when this token changes, and you have it on the time input as well. Here is a sample board I've created with multiple panels and different searches. It will only trigger on submit button press.

<form version="1.1" theme="dark">
  <label>Answers - Classic</label>
  <fieldset submitButton="true">
    <input type="dropdown" token="tok_idx">
      <label>Indexes</label>
      <fieldForLabel>index</fieldForLabel>
      <fieldForValue>index</fieldForValue>
      <search>
        <query>| tstats count where index=_* by index</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
    <input type="time" token="tok_time" searchWhenChanged="false">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>$tok_idx$ :: $tok_time$</title>
      <chart>
        <title>Total Events</title>
        <search>
          <query>| tstats count where index=$tok_idx$ by _time span=1h</query>
          <earliest>$tok_time.earliest$</earliest>
          <latest>$tok_time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
    <panel>
      <title>$tok_idx$ :: $tok_time$</title>
      <chart>
        <title>Total Events by Sourcetype</title>
        <search>
          <query>| tstats count where index=$tok_idx$ by _time sourcetype span=1h | timechart sum(count) by sourcetype</query>
          <earliest>$tok_time.earliest$</earliest>
          <latest>$tok_time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</form>
May I ask how you changed the UF to run as SYSTEM? Is it simply a case of setting SPLUNK_OS_USER in splunk-launch.conf like it would be on a Linux host? i.e.: SPLUNK_OS_USER=SYSTEM Thank you, and apologies if this is a really lame question.
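For what it's worth, my understanding (an assumption, not verified on a current UF) is that on Windows the forwarder runs as a service, so the effective account is the service logon account rather than SPLUNK_OS_USER. Something like the following, assuming the default service name:

sc config SplunkForwarder obj= LocalSystem
net stop SplunkForwarder
net start SplunkForwarder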
waiting for reply
Hi all, I have this calculation, and at the end I am using where to get only what I need. Splunk suggests putting this into the search instead.

index=xyz AND source=abc AND sourcetype=S1 AND client="BOFA" AND status_code

How do I get this to return only the status codes that are:
>=199 and <300 --> these belong to my success bucket
>=499 --> these belong to my error bucket

| eval Derived_Status_Code=case(
    status_code>=199 and status_code<300, "Success",
    status_code>=499, "Errors",
    1=1, "Others" ``` I do not need anything that is not in the above conditions ```
  )
| table <>
| where Derived_Status_Code IN ("Errors", "Success")

I want to avoid the where and get this into the search using AND. Thank you so much for your time.
I'm looking for the average CPU utilization of 10+ hosts over a fixed period last month. However, every time I refresh the URL or the metrics, the number changes drastically. When I do the same for 2 other hosts, the number remains the same between refreshes. Is it because it is doing sampling somewhere? If so, where can I disable the sampling config?
I'm trying to configure the Splunk Universal Forwarder to send logs to Logstash. I only have access to the Universal Forwarder (not a Heavy Forwarder), and I need to forward audit logs from several databases, including MySQL, PostgreSQL, MongoDB, and Oracle.

So far, I've been able to send TCP syslog to Logstash using the Universal Forwarder. Additionally, I've successfully connected to MySQL using Splunk DB Connect, but I'm not receiving any logs from it in Logstash.

I would appreciate any advice on forwarding database audit logs through the Universal Forwarder to Logstash in real time, or is there any provision for creating a sink or something? Any help or examples would be great! Thanks in advance.
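For reference, a plain-TCP output on a UF looks roughly like this (host, port, and group name are placeholders; sendCookedData = false makes the forwarder emit raw text instead of Splunk's cooked protocol, which is what Logstash needs):

outputs.conf on the Universal Forwarder:

[tcpout:logstash]
server = logstash.example.com:5514
sendCookedData = false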
Hi @LizAndy123 , please try this: | rex "project id : (?<Project_Id>\d+) and metadata id : \w+\sis\s:\s(?<Size>\d+) and time taken to upload is: (?<Upload_Speed>\w+)" that you can test at project ... See more...
Hi @LizAndy123, please try this:

| rex "project id : (?<Project_Id>\d+) and metadata id : \w+\sis\s:\s(?<Size>\d+) and time taken to upload is: (?<Upload_Speed>\w+)"

You can test the regex against your own sample events before applying it.

Ciao.
Giuseppe
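A quick way to try it end to end (the sample event below is made up to match the expected shape):

| makeresults
| eval _raw="project id : 12345 and metadata id : abc123 is : 2048 and time taken to upload is: 15s"
| rex "project id : (?<Project_Id>\d+) and metadata id : \w+\sis\s:\s(?<Size>\d+) and time taken to upload is: (?<Upload_Speed>\w+)"
| table Project_Id Size Upload_Speed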
Hi, we have data from Change Auditor coming in via a HEC setup on a Heavy Forwarder. This HF instance was upgraded to version 9.2.2. After that, I am seeing a difference in the way Splunk displays new events on the SH: it is now converting UTC->PST. I ran a search for the previous week, and for those events it converts the timestamp correctly, from UTC->Eastern.

I am a little confused, since both searches are done from the same search head against the same set of indexers. If there were a TZ issue, wouldn't Splunk have converted both incorrectly? I also ran the same searches on an indexer, with identical output: recent events show PST, whereas older events continue to show EST.

(Screenshots of example events, for the previous week and for recent events, did not survive here; the recent ones show a UTC->PST conversion instead.)

I did test this manually via Add Data, and there Splunk correctly formats the timestamp to Eastern. How can I troubleshoot why recent events in search are showing a PST conversion? My current TZ setting on the SH is still set to Eastern Time. I also confirmed that the system time for the HF, indexers, and search heads is set to Eastern. Thanks
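One way to narrow this down is to compare the parsed time zone and the index time for old vs. new events; date_zone and _indextime are standard fields, while the index and sourcetype below are placeholders:

index=your_index sourcetype=change:auditor
| eval idx_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S %Z")
| table _time idx_time date_zone host source

If date_zone differs between old and new events, the timestamp is being parsed with a different TZ after the upgrade.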