Hello! A dashboard runs a search, and I want to create an alert for it, so I replicated the search code in the alert. However, if the dashboard now changes, my alert will not be updated. Is there a way to create an alert whose search is something like "search dashboard1", so that whatever changes happen to the dashboard are fed into my alert? Thanks!
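One common approach (a minimal sketch; the report name "dashboard1_base_search" and the threshold are hypothetical): save the search as a report, reference it from the dashboard panel by ref, and build the alert on top of it with the savedsearch command, so there is only one copy of the SPL to maintain.

In the dashboard:
<search ref="dashboard1_base_search"/>

In the alert:
| savedsearch "dashboard1_base_search"
| where count > 100

Any edit to the shared report is then picked up by both the dashboard and the alert.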
Hi Splunk Team, I am looking for an API we can use to black out monitoring on Azure VMs while those VMs are being patched. Patching happens to a group of VMs together, based on their tag in Azure. Can you please suggest an approach to group the VMs, black out monitoring alerts, and then re-enable them when patching is complete? Thanks in advance, George
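There is no single "blackout" API in core Splunk, but since alerts are saved searches, one workaround sketch (hostnames, tag name, alert name, and credentials below are all placeholders) is to list the VMs by tag with the Azure CLI and disable/re-enable the matching alert over the Splunk REST API from the patching pipeline:

# List VMs carrying the patch-group tag (Azure CLI)
az vm list --query "[?tags.patchGroup=='group1'].name" -o tsv

# Disable the corresponding alert before patching starts
curl -k -u admin:changeme "https://splunk.example.com:8089/services/saved/searches/VM%20CPU%20Alert" -d disabled=1

# Re-enable it once patching is complete
curl -k -u admin:changeme "https://splunk.example.com:8089/services/saved/searches/VM%20CPU%20Alert" -d disabled=0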
Hello, I'm trying to debug an issue with an FTP service. I'd like to know which users are using an 'active data connection', where the only connecting point is the sessionID. I have already extracted sessionID and userID as fields. The logs for the sessions vary between 150 and 3000 lines of events, and I don't know how to shape my search criteria so that I extract the userID connected to each sessionID my search returns.

index=p_ftp sourcetype=debug "active data connection" | stats values(sessionID)

This gives me the sessionIDs properly; I just need the userIDs from the same sessions, which are usually logged many lines earlier. Can you please help me? Thanks a lot in advance
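A minimal sketch of one way to do this, assuming userID and sessionID are extracted on events that share the same sessionID (field names as in the question): search the whole session, flag sessions that contain the marker string, and group by sessionID so the userID from earlier lines rides along.

index=p_ftp sourcetype=debug
| stats values(userID) as userID, max(eval(if(searchmatch("active data connection"),1,0))) as has_active by sessionID
| where has_active=1
| table sessionID userID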
Hi, I'm having difficulty getting results from a Splunk query to show in a dashboard panel: it always says 'No results found', even though the query displays results when run directly from the Search tab, or when clicking the 'Open in Search' magnifier icon on the dashboard panel. I am trying to display the top 10 errors in a pie chart, driven by dropdowns (environment, time). The base query holds the search index; the actual panel query has the environment, driven by the dropdown value; and I also have a time dropdown.

<form>
<label>Label</label>
<search id="base">
<query>index="app_3537255" </query>
<earliest>$time_token.earliest$</earliest>
<latest>$time_token.latest$</latest>
</search>
<fieldset submitButton="false" autoRun="true">
<input type="time" token="time_token" searchWhenChanged="true">
<label>Time</label>
<default>
<earliest>-24h@h</earliest>
<latest>now</latest>
</default>
</input>
<input type="dropdown" token="APIG_ENV" searchWhenChanged="true">
<label>Environment</label>
<choice value="*">All</choice>
<search>
<query>index="app_3537255" | table host | sort host | dedup host</query>
<earliest>-7d@h</earliest>
<latest>now</latest>
</search>
<default>*</default>
<initialValue>*</initialValue>
<fieldForLabel>host</fieldForLabel>
<fieldForValue>host</fieldForValue>
</input>
</fieldset>
<row>
<panel>
<title>Top 10 Common Errors</title>
<chart>
<search base="base">
<query>| search host=$APIG_ENV$ eventtype="nix_errors" OR eventtype="err0r" | rex field=_raw "Error\s(?P<ErrorString>.*)" | eval ErrorString = "Error " + ErrorString | stats count(ErrorString) AS TotalCount BY ErrorString|sort -TotalCount | head 10</query>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.abbreviation">none</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.abbreviation">none</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.abbreviation">none</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">pie</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.mode">standard</option>
<option name="charting.legend.placement">right</option>
<option name="charting.lineWidth">2</option>
<option name="refresh.display">progressbar</option>
<option name="trellis.enabled">0</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
</chart>
</panel>
</row>
</form>

Any reason why this is not working in the dashboard panel? Any pointers would be appreciated.
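One frequent cause with post-process (base="...") searches: a non-transforming base search only carries a limited set of fields through to the post-process query, so fields like eventtype or _raw may simply be missing in the panel. A hedged sketch of the usual fix is to list the needed fields explicitly in the base query:

<search id="base">
  <query>index="app_3537255" | fields host, eventtype, _raw</query>
  <earliest>$time_token.earliest$</earliest>
  <latest>$time_token.latest$</latest>
</search>

Separately, the OR in the panel query likely needs parentheses, i.e. | search host=$APIG_ENV$ (eventtype="nix_errors" OR eventtype="err0r") | ..., otherwise it parses as (host AND nix_errors) OR err0r.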
LOOK FOR BOLD for quick overview: I want to control the index-time extraction for events linked to an accelerated data model.

I am relatively new to Splunk, and recently I've jumped into Accelerated Data Models. I understand a number of aspects about them already:
- How they differ from regular data models
- How they do index-time extractions
- Stored in the High Performance Analytics Store (HPAS)
- Populated by scheduled searches

What I don't understand is how the summaries for Accelerated Data Models are built. I understand that ADMs use tsidx files as the summaries of the raw data: "Each search you run scans tsidx files for the search keywords and uses their location references to retrieve from the rawdata file the events to which those keywords refer. Splunk Enterprise creates a separate set of tsidx files for data model acceleration. In this case, it uses the tsidx files as summaries of the data returned by the data model." What I don't understand is how the connection between the raw data and the .tsidx files is made. How are the .tsidx files formed from the event data?

When I look at the data model's object hierarchy in Settings, I see the fields that it encompasses. When I do a search like:

| datamodel Intrusion_Detection search

if I'm correct, it is giving me the search-time extraction from the indexes related to the accelerated model. The problem is that I get a lot of fields that are useless in cyber-security efforts. For instance, maybe I want to know the category of the different attacks that are occurring. It is a calculated field in my accelerated data model; the calculation is if(isnull(category) OR category="", "unknown", category), meaning it returns the category unless there is none. I also don't understand where it gets this variable "category": how is that being pulled from the raw data? The problem is that I get 100% unknowns. Is this a problem of event tagging with the Common Information Model, or somewhere else in the flow of ingested data? (https://wiki.splunk.com/images/4/45/Splunk_EventProcessing_v19_0_standalone.pdf)

In the end, here is what I want to know in order to fix this:
- How do I control what it pulls out of the raw event data? Where does the regex take place? Is this something to configure with the .tsidx summaries on the indexers?
- When the raw data used by a data model contains something like "geo-location" or "web-app" (data I think is useful), how do I pull it out into a field that I can use in my accelerated data model?
- What does the Common Information Model have to do with Accelerated Data Models? Is that where I configure what it pulls out of raw event data?
- In general, how do I build more custom accelerated data models that pull new data out of events?

Additionally, I understand that pulling more fields out of the data also means an increase in storage size on the indexers. I just want to figure this all out. [EDIT] Is this where I would use the app Splunk Add-on Builder?
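To the "how do I control what gets pulled out of raw events" part: the fields a CIM data model sees are ordinary search-time extractions (props.conf) plus eventtype/tag mappings; the accelerated summaries simply store whatever those extractions produce at acceleration time. A hedged sketch, with a made-up sourcetype and field patterns:

# props.conf (search-time extraction; this is where the regex lives)
[my:ids:sourcetype]
EXTRACT-category = category=(?<category>[^,\s]+)
EXTRACT-geo = geo-location=(?<geo_location>[^,\s]+)

# eventtypes.conf (select the events)
[my_ids_attacks]
search = sourcetype=my:ids:sourcetype

# tags.conf (map them into the Intrusion_Detection CIM model)
[eventtype=my_ids_attacks]
ids = enabled
attack = enabled

If category is never extracted at search time, the calculated field's if(isnull(category) OR category="", "unknown", category) will indeed return "unknown" for every event, which matches the 100%-unknowns symptom.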
Hi, I'm trying to pass an aggregate function from a dropdown menu in a Splunk dashboard to a time-series chart. For example, from the dropdown I want to pass actual, avg(), or max() into the search below:

index=_internal sourcetype=* | search field=* Exhost=* | chart max(value) by _time,Exhost
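A minimal sketch, assuming a token named agg and hypothetical choice values: a dropdown token can be substituted directly into the chart command, since token substitution happens before the SPL is parsed.

<input type="dropdown" token="agg" searchWhenChanged="true">
  <label>Aggregation</label>
  <choice value="max">Max</choice>
  <choice value="avg">Avg</choice>
  <choice value="latest">Actual (latest)</choice>
  <default>max</default>
</input>

<query>index=_internal sourcetype=* | search field=* Exhost=* | chart $agg$(value) by _time,Exhost</query>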
Hi, how can I find continuously occurring events? For example:

1. I have a field called "response time". If "response time" is occasionally high, that is not an issue, but if it stays high continuously, that is not normal (over a span of milliseconds, seconds, or minutes).
2. I have an "error code" field. Error code 404 is normal, but if it occurs continuously, something is wrong (over a span of milliseconds, seconds, or minutes).

FYI: this is a huge log, so please consider that performance is an important factor. Any ideas? Thanks,
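A hedged sketch of one streamstats approach, assuming a numeric response_time field, a threshold of 1000 ms, and a run length of 5 (all hypothetical): flag each event as high or normal, count consecutive same-flag events with reset_on_change, and keep only long runs of high values.

index=my_app sourcetype=my_logs
| sort 0 _time
| eval is_high=if(response_time>1000, 1, 0)
| streamstats reset_on_change=true count as run_length by is_high
| where is_high=1 AND run_length>=5

The same pattern works for error codes by flagging status=404 instead. On very large datasets, narrowing the base search first (or using a time-bucketed stats count instead of per-event streamstats) will matter for performance.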
Hi, I need to compare total numbers and, if they are different, show a table that presents them.

23:57:05.253 app module: PACK: Total:[1010000] from server1 Total:[C000001010000]
23:57:05.254 app module: PACK: Total:[1010000] from server1 Total:[C000001000000]

diff = second total - first total. Expected output:

Time           diff
23:57:05.254   10000

Any ideas? Thanks,
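A minimal sketch, assuming the C-prefixed bracketed value is the one to compare (the index name and the field names total and prev_total are made up): extract the number, carry the previous event's value forward with streamstats, and keep rows where the values differ.

index=my_index "app module: PACK:"
| rex field=_raw "Total:\[C(?<total>\d+)\]"
| sort 0 _time
| streamstats current=f window=1 last(total) as prev_total
| eval diff=tonumber(total)-tonumber(prev_total)
| where diff!=0
| table _time diff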
I have a tstats query that pulls its data from an accelerated data model. I need to grab only the most up-to-date host event with the latest IP value. I cannot dedup in the data model root search itself, as I need to keep _time to get point-in-time results as well. Anyway, for the most current point-in-time IP value (right now), dedup is not working as intended: it shows me the older value.

Query without dedup:

| tstats latest(_time) as _time FROM datamodel="Host_Info" WHERE nodename="hostinfo" hostname=bobs by hostinfo.hostname hostinfo.ip

Results (two values for ip):

hostinfo.hostname  hostinfo.ip   _time
bobs               10.10.10.10   2021-10-22 19:55:03
bobs               33.33.33.33   2021-10-22 21:23:06

Query with dedup:

| tstats latest(_time) as _time FROM datamodel="Host_Info" WHERE nodename="hostinfo" hostname=bobs by hostinfo.hostname hostinfo.ip | dedup hostname

Results (older value, not newer):

hostinfo.hostname  hostinfo.ip   _time
bobs               10.10.10.10   2021-10-22 19:55:03

Why isn't dedup working correctly? If I dedup the actual indexed data, before it hits the data model, it works fine and shows me the latest hostname and IP.
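One likely culprit (hedged, based on the query as written): after tstats the field is literally named hostinfo.hostname, so dedup hostname matches nothing useful, and dedup always keeps the first row it encounters, which is not necessarily the newest unless you sort first. A sketch:

| tstats latest(_time) as _time FROM datamodel="Host_Info" WHERE nodename="hostinfo" hostinfo.hostname=bobs BY hostinfo.hostname hostinfo.ip
| sort - _time
| dedup hostinfo.hostname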
Hello Splunk Wizards, I know there are plenty of people who've had similar issues, but I haven't been able to use their resolutions for my issue. I'm doing a search-time field extraction to capture the login username, which includes a backslash. I have the regex correct, slightly modified from regex101 for Splunk:

(?P<User_Name>(domain\\\\\\S+))

In the field extraction wizard, it perfectly grabs all sample data (ex: domain\username). However, this field doesn't show up in search when looking at the exact same sample data. I've performed a verbose search and made sure all available fields are showing; it's not there. I've tried using group names I know Splunk isn't already using; no improvement. I'm pretty sure it has to do with the backslash, because if I modify the regex to

(?P<User_Name>domain\S+)

the field extraction shows up in search, but it also contains data that isn't exactly correct. I've tried variations with more and fewer backslashes; none seem to work. I guess I can live with a sloppy field extraction if that's all I can do, but the first regex really is perfect. Any ideas?
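Escaping rules differ between the field-extraction UI (which writes props.conf) and inline SPL strings, which is a common reason the same regex behaves differently in the two places. As a hedged check: inline rex at search time typically needs the literal backslash doubled once for the SPL string layer and once for the regex layer, while props.conf needs one level less.

Inline at search time:
| rex field=_raw "(?P<User_Name>domain\\\\\S+)"

In props.conf:
EXTRACT-user_name = (?P<User_Name>domain\\\S+)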
In a locked-down environment where outbound traffic must be explicitly allowed, what is the IP range or URL needed for the "splunk diag --upload" command to work? I'm getting the following error:

Unable to fetch case list: None
Cannot validate upload options, aborting...
Is it possible to change the default search performance to "high_perf" in Splunk Cloud? In Splunk Cloud, the search bar gives you the option of setting search performance to: standard_perf (search default), limited_perf, high_perf, or Policy-Based Pool. I have begun using high_perf for my queries, since otherwise things are WAY too slow. However, it constantly switches me back to the default standard_perf. I cannot find a setting for this anywhere, and I have had no luck searching for documentation either.
My teammate and I have been trying to summarize our environment to automatically build a data dictionary. Our last feature was to add a lastSeen time to use as a rudimentary data integrity check. Recently this has stopped working on the _internal index: tstats max(_time) on _internal is a week ago, even though a straight SPL search on index=_internal returns results for today, or for any other arbitrary slice of time I query over the last week. This suggests to me that the tsidx for _internal is messed up. To make matters more confusing, yesterday I was able to submit the same query and get a correct max(_time) for index=_internal. Does anyone have an idea of what is going on with this behavior? Better yet, what do I need to do to fix it? If it matters, this is a clustered search head environment, and we also have quite a few indexers.

Usual results:

| tstats count max(_time) as lastSeen where index=_* earliest=-20d@d latest=@m by index
| convert ctime(lastSeen)

index           count      lastSeen
_audit          999999999  10/22/2021 15:39:59
_internal       9999999    10/14/2021 20:09:35
_introspection  999999999  10/22/2021 15:39:59
_telemetry      999        10/22/2021 12:05:05
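A hedged diagnostic worth trying in a clustered environment: split the same tstats query by splunk_server to see whether the stale timestamps trace back to particular peers' tsidx files rather than the index as a whole.

| tstats count max(_time) as lastSeen where index=_internal earliest=-20d@d latest=@m by splunk_server
| convert ctime(lastSeen)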
I want to use predicted values in my search and apply them to a time chart. What would be the best way to store these values for future use? I am thinking a summary index would be ideal, but I am not sure if there is a different way I might want to store them. I also want the time chart to show some bounds based on the predicted values, to help with analyzing what expected performance/traffic would look like.
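A minimal sketch of the summary-index route, assuming a summary index named perf_predictions already exists and the source is a simple hourly event count (index and sourcetype names are hypothetical): run predict over a timechart, which emits the prediction along with its upper/lower confidence-bound fields, then write the rows out with collect. A dashboard panel can later read index=perf_predictions and timechart the prediction and bound fields together.

index=web sourcetype=access_combined
| timechart span=1h count
| predict count future_timespan=24
| collect index=perf_predictions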
I am looking for a way to automate the export of node exception data so that I can persist it into my Elastic stack, both for historical purposes and for telemetry data analysis.
I'm able to persist metric data easily through the Metric API, but I'm not seeing anything similar for this area.
TIA,
Bill Youngman
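If this is against an AppDynamics controller, one avenue worth checking (hedged: verify the exact parameters and event types against your controller version's REST documentation) is the application events endpoint, which can be polled on a schedule and shipped to Elastic:

curl -s -u user@account:password \
  "https://controller.example.com/controller/rest/applications/MyApp/events?time-range-type=BEFORE_NOW&duration-in-mins=60&event-types=APPLICATION_ERROR&severities=ERROR&output=JSON"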
Hello, I have been asked to monitor our HTTP Event Forwarder. Is there a health check in Splunk that would tell me the forwarder's status? Or is there another way I could see whether the event forwarder is down without going into Splunk Enterprise? Perhaps a URL that would simply return an HTTP status code or something. Thanks, Tom
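Assuming "HTTP Event Forwarder" refers to the HTTP Event Collector (HEC): HEC exposes a health endpoint that returns a plain HTTP status code, so an external monitor can poll it without logging into Splunk (hostname and port below are placeholders; 8088 is the default HEC port).

curl -s -o /dev/null -w "%{http_code}\n" https://splunk.example.com:8088/services/collector/health
# prints 200 (body: {"text":"HEC is healthy","code":17}) when the collector is up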