All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


You can simply do ...  ((status_code>=199 status_code<300) OR (status_code>=499))  
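Spelled out in the base search (index/source/field names copied from the question below; this is a sketch, not tested against your data), that would look like:

index=xyz source=abc sourcetype=S1 client="BOFA" ((status_code>=199 AND status_code<300) OR status_code>=499)
| eval Derived_Status_Code=if(status_code>=499, "Errors", "Success")
| table status_code Derived_Status_Code

Because the base search already excludes everything else, the case() branch for "Others" and the trailing where are no longer needed.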
What are your settings for that sourcetype/source/host? And how are you pushing the events (to which endpoint)?
Please provide your props.conf stanza for the specific sourcetype. In my experience, this is an indication that UTC is not explicitly set and the local HF time zone, which is not Eastern, is being used. I'm not saying that is necessarily the case here, because perhaps you do have the TZ explicitly set. The golden rule is to never let Splunk automagically guess the time. It's right almost always, but when it's not, it can mess with production data at the worst times.
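As a minimal sketch, assuming the events really are UTC (the sourcetype name here is a placeholder), the explicit props.conf stanza on the HF would look something like:

[changeauditor:hec]
TZ = UTC
# Optionally pin the timestamp parse too, so nothing is guessed.
# The format string below is an assumption - match it to your actual events:
TIME_FORMAT = %Y-%m-%dT%H:%M:%S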
I know that not every feature in Dashboard Studio has been exposed in the UI yet. I see that you can set tokens on interaction with visualizations but I'm not seeing anything similar for inputs. Does anyone know if there is a change event handler for inputs in Dashboard Studio like there is in the XML dashboards? I've not seen anything in the docs, but I could just be looking in the wrong place. Thanks.
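For contrast, here is roughly what the visualization-side handler looks like in the dashboard's JSON source. I'm reproducing the drilldown.setToken shape from memory, so treat the exact keys as assumptions, and tok_selected / row.host.value are placeholders:

"viz_table_1": {
    "type": "splunk.table",
    "dataSources": { "primary": "ds_search_1" },
    "eventHandlers": [
        {
            "type": "drilldown.setToken",
            "options": {
                "tokens": [
                    { "token": "tok_selected", "key": "row.host.value" }
                ]
            }
        }
    ]
}

An eventHandlers key like this on an input definition is exactly what I haven't been able to find.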
Oh, and I put the tokens in my panel titles only as a sanity/debug check. They have no reason to exist there once your dashboard is finalized.
<input type="dropdown" token="tokEnvironment" searchWhenChanged="true"> This is the problem: this triggers the search whenever the token changes, and you have it on the time input as well. Here is a sample board I've created with multiple panels and different searches. It will only trigger on submit button press.

<form version="1.1" theme="dark">
  <label>Answers - Classic</label>
  <fieldset submitButton="true">
    <input type="dropdown" token="tok_idx">
      <label>Indexes</label>
      <fieldForLabel>index</fieldForLabel>
      <fieldForValue>index</fieldForValue>
      <search>
        <query>| tstats count where index=_* by index</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
    <input type="time" token="tok_time" searchWhenChanged="false">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>$tok_idx$ :: $tok_time$</title>
      <chart>
        <title>Total Events</title>
        <search>
          <query>| tstats count where index=$tok_idx$ by _time span=1h</query>
          <earliest>$tok_time.earliest$</earliest>
          <latest>$tok_time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
    <panel>
      <title>$tok_idx$ :: $tok_time$</title>
      <chart>
        <title>Total Events by Sourcetype</title>
        <search>
          <query>| tstats count where index=$tok_idx$ by _time sourcetype span=1h | timechart sum(count) by sourcetype</query>
          <earliest>$tok_time.earliest$</earliest>
          <latest>$tok_time.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</form>
May I ask how you changed the UF to run as SYSTEM? Is it simply a case of setting SPLUNK_OS_USER in splunk-launch.conf like it would be on a Linux host, i.e. SPLUNK_OS_USER=SYSTEM? Thank you, and apologies if this is a really lame question.
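In case it helps frame the question, here is the pattern I mean. The splunk-launch.conf line is the Linux-style approach; the sc command is my guess at the Windows service equivalent (SplunkForwarder is the default service name - verify yours before running):

# splunk-launch.conf (Linux-style)
SPLUNK_OS_USER=SYSTEM

:: Windows: change the account the UF service logs on as
sc config SplunkForwarder obj= LocalSystem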
Still waiting for a reply.
Hi All, I have this calculation, and at the end I am using where to get only what I need. Splunk suggests that I put this into the search:

index=xyz AND source=abc AND sourcetype=S1 AND client="BOFA" AND status_code

-- how do I get this to keep only the status codes that are:
>=199 and <300 --> these belong to my success bucket
>=499 --> these belong to my error bucket

| eval Derived_Status_Code=case(
    status_code>=199 and status_code<300, "Success",
    status_code>=499, "Errors",
    1=1, "Others" ``` I do not need anything that is not in the above conditions ```
)
| table <>
| where Derived_Status_Code IN ("Errors","Success")

I want to avoid the where and get this into the search using AND. Thank you so much for your time.
I'm looking for the average CPU utilization from 10+ hosts over a fixed period last month. Every time I refresh the URL or the metrics, the number changes drastically; yet when I do the same for 2 other hosts, the number remains the same between refreshes. Is it because it is doing sampling somewhere? If so, where can I disable the sampling config?
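To rule out event sampling, one thing to try is creating the job over REST with the ratio pinned to 1:1 (sample_ratio is a parameter of the search/jobs endpoint; the host, credentials, and the CPU search itself are placeholders for your environment):

curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
    -d search="search index=os sourcetype=cpu | stats avg(cpu_load_percent) by host" \
    -d sample_ratio=1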
I'm trying to configure the Splunk Universal Forwarder to send logs to Logstash. I only have access to the Universal Forwarder (not a Heavy Forwarder), and I need to forward audit logs from several databases, including MySQL, PostgreSQL, MongoDB, and Oracle. So far, I've been able to send TCP syslogs to Logstash using the Universal Forwarder. Additionally, I've successfully connected to MySQL using Splunk DB Connect, but I'm not receiving any logs from it in Logstash. I would appreciate any advice on forwarding database audit logs through the Universal Forwarder to Logstash in real time, or is there any provision for creating a sink or something similar? Any help or examples would be great! Thanks in advance.
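For reference, a minimal sketch of the raw-TCP route that is already working for syslog (the Logstash host and port are placeholders; sendCookedData=false keeps Splunk's cooked protocol out of the stream so Logstash's tcp input can parse it):

# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = logstash

[tcpout:logstash]
server = logstash.example.com:5140
sendCookedData = false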
Hi @LizAndy123 , please try this: | rex "project id : (?<Project_Id>\d+) and metadata id : \w+\sis\s:\s(?<Size>\d+) and time taken to upload is: (?<Upload_Speed>\w+)" which you can test against your sample event. Ciao. Giuseppe
Hi, We have data from Change Auditor coming in via a HEC setup on a Heavy Forwarder. This HF instance was upgraded to version 9.2.2. After that, I am seeing a difference in the way Splunk displays new events on the SH: it is now converting UTC->PST. I ran a search for the previous week, and for those events it converts the timestamp correctly, from UTC->Eastern. I am a little confused, since both searches are done from the same search head against the same set of indexers. If there was a TZ issue, wouldn't Splunk have converted both incorrectly? I also ran the same searches on an indexer with identical output: recent events show in PST, whereas older events continue to show in EST. (Example screenshots omitted: one for the previous week, and one recent event where Splunk shows a UTC->PST conversion instead.) I did test this manually via Add Data, and Splunk correctly formats it to Eastern. How can I troubleshoot why recent events in search are showing the PST conversion? My current TZ setting on the SH is still set to Eastern Time. I have also confirmed that the system time on the HF, indexers, and search heads is set to Eastern. Thanks
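A first troubleshooting step might be to dump the props the HF actually applies to that sourcetype and look for a TZ line (the sourcetype name below is a placeholder):

$SPLUNK_HOME/bin/splunk btool props list changeauditor --debug | grep -i tz

The --debug flag shows which app each setting comes from, which helps spot an app that was replaced or dropped during the 9.2.2 upgrade.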
I have a log with a sample of the following:

POST Uploaded File Size for project id : 123 and metadata id : xxxxxxxxxxxx is : 1234 and time taken to upload is: 51ms

So in this event, the project id is 123, the Size is 1234, and the Upload Speed is 51ms. I want to extract the project id, size, and upload time as fields. Also, regarding the upload time, I guess I just need the number, right?
Hi @Nicolas2203 , it's a gap in the Splunk architecture: there isn't an HA solution for Heavy Forwarders. You have two options: install the add-on on a Search Head Cluster, so the cluster manages add-ons and HA is guaranteed, but many users don't love having the ingestion systems in the user front end. The second option is to configure more HFs and manually enable one at a time, but this isn't an automatic recovery solution, and you have to manage checkpoints between the HFs. I suggest adding a request about this in Splunk Ideas. Ciao. Giuseppe
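As a rough sketch of that second approach, the standby HF would carry the same input but keep it disabled until failover (the stanza name is only my guess at the Microsoft Cloud Services add-on's event hub input; check the add-on's inputs.conf.spec for the real one):

# inputs.conf on the standby HF
[mscs_azure_event_hub://prod_eventhub]
disabled = 1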
Hi Splunk community, I have a quick question about an app, such as the Microsoft Cloud Services app, in a multiple-Heavy-Forwarder environment. The app is installed on one Heavy Forwarder and makes API calls to Azure to retrieve data from an event hub and store it in an indexer cluster. If the Heavy Forwarder where the add-on is installed goes down, no logs are retrieved from the event hub. So, what are the best practices for making this kind of app, which retrieves logs through API calls, more resilient? The same applies to some Cisco add-ons that collect logs from Cisco devices via an API. For now, I will configure the app on another Heavy Forwarder without enabling data collection, but in case of failure, human intervention will be needed. I would be curious to know what solutions you implement for this kind of issue. Thanks Nicolas
I am afraid I get the same results even with maxspan.
Hi @OgoNARA , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Hi @timtekk , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated.
@inventsekar I have updated limits.conf under system/local, and it does not change anything. The issue still persists.

[default]
max_mem_usage_mb = 500

[searchresults]
maxresultrows = 86400
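A quick sanity check, in case another app overrides these values at a higher precedence (the path assumes a default install):

$SPLUNK_HOME/bin/splunk btool limits list searchresults --debug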