All Topics

All, I have a simple requirement: list failed login attempts from the same src_ip within a span of 5 minutes. I have seen two options in the community here, one using stats and the other using streamstats. Which one is more accurate? @ITWhisperer

Option 1 (stats):

index=XYZ sourcetype=ABC eventName=*Get* errorCode!=success | bin _time span=5m | table _time host eventName, app, command, dest, errorCode, region, userName, user_type, user, src_ip | stats values(*) as *, count by src_ip | where count>=5

Option 2 (streamstats):

index=XYZ sourcetype=ABC eventName=*Get* errorCode!=success | streamstats time_window=5m count as failed_attempts by src_ip | where failed_attempts > 5 | table _time user failed_attempts src_ip dest host eventName app command, dest errorCode region userName
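The two approaches answer slightly different questions, which may explain differing results: bin + stats counts failures inside fixed 5-minute buckets (a burst from 12:04 to 12:06 is split across two buckets and can be missed), while streamstats time_window=5m counts over a sliding 5-minute window ending at each event. A sketch of the sliding-window form (untested; the threshold and field list are examples from the question, and streamstats time_window expects the default descending time order of search results):

```
index=XYZ sourcetype=ABC eventName=*Get* errorCode!=success
| streamstats time_window=5m count AS failed_attempts BY src_ip
| where failed_attempts >= 5
| dedup src_ip
| table _time user failed_attempts src_ip dest host eventName app command errorCode region userName
```

The trailing dedup is optional: it lists each offending src_ip once instead of once per event past the threshold.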
Hi all, I have an XML file as below.

<?xml version="1.0" encoding="UTF-8"?>
<suite name="abc" timestamp="20.08.2021 15:47:20" hostname="kkt2si" tests="5" failures="1" errors="1" time="0">
  <case name="a" time="626" classname="x">
    <failure message="failed" />
  </case>
  <case name="b" time="427" classname="x" />
  <case name="C" time="616" classname="y" />
  <case name="d" time="626" classname="y">
    <error message="error" />
  </case>
  <case name="e" time="621" classname="x" />
</suite>

The cases which don't have a failure or error are the ones which passed. I am able to make a list of cases, but I am not sure how to add a column for the status. Does anyone know a solution for this?

|spath output=cases path=suite.case{@name}| table cases

This is how I extracted the cases. I want to add a column which shows the status. Please suggest some answers.
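One rough way to pair each case name with a pass/fail status (a sketch, untested; it assumes the whole XML document is in _raw and relies on regex rather than a proper XML parse, so it is fragile): split the raw event into one row per <case> fragment, then inspect each fragment for a nested <failure> or <error> element.

```
| rex field=_raw max_match=0 "(?s)(?<case_xml><case .*?(?:/>|</case>))"
| mvexpand case_xml
| rex field=case_xml "name=\"(?<case_name>[^\"]+)\""
| eval status=case(like(case_xml, "%<failure%"), "failed",
                   like(case_xml, "%<error%"), "error",
                   true(), "passed")
| table case_name status
```

The non-greedy match stops at the first /> inside a case, but by then the fragment already contains the "<failure" or "<error" text the eval looks for, which is all the classification needs.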
Hi All, I am trying to create a dashboard panel in trellis view. I have used the below query:

(my search query) | stats count | eval Result=if(count=0,"Ok","Error") | fields - Exception, count

With this I can get the dashboard panel as shown. Please look at the source below:

<option name="colorBy">value</option>
<option name="colorMode">block</option>
<option name="drilldown">none</option>
<option name="numberPrecision">0</option>
<option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
<option name="rangeValues">[0,30,70,100]</option>
<option name="showSparkline">1</option>
<option name="showTrendIndicator">1</option>
<option name="trellis.enabled">1</option>
<option name="trellis.scales.shared">1</option>
<option name="trellis.size">medium</option>
<option name="trendColorInterpretation">standard</option>
<option name="trendDisplayMode">absolute</option>
<option name="unitPosition">after</option>
<option name="useColors">1</option>
<option name="useThousandSeparators">1</option>
</single>
</panel>

Here I have a requirement to change the color of the trellis box: I want it to be green when "Ok" and red when it is "Error". Please help guide me to achieve the desired output. Thank you..!!
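One workaround worth trying (a sketch, not the only way): rangeValues/rangeColors operate on numbers, not strings, so instead of showing the "Ok"/"Error" string you can display the numeric count and let a single range boundary at 0 pick the color. Values <= 0 get the first color (green); anything above gets the second (red):

```
(my search query) | stats count

<option name="colorBy">value</option>
<option name="rangeValues">[0]</option>
<option name="rangeColors">["0x53a051","0xdc4e41"]</option>
```

The hex values are the stock Splunk green and red already present in the panel's rangeColors list.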
I have got a complicated task: consolidating two standalone search heads and a single search head cluster (4 nodes) into a single search head cluster of 3 nodes. Can someone please advise what would be the most efficient and correct method to accomplish this?
Hi there, I have a query that I use to extract all database modifications. However, I want to exclude SELECT statements from being captured by this query; I want to extract only INSERT, DELETE, and UPDATE.

My query:

index="database_db" source=database_audit sourcetype="database_audit"
| eval "Database Modifications:" = "Modification on " + host, "Date and Time" = EXTENDED_TIMESTAMP_NY, "Type" = SQL_TEXT, "User" = DB_USER, "Source" = sourcetype
| rex field=_raw "SQL_TEXT=\S(?P<Type>\W?......)\s"
| rex field=_raw "DB_USER=(?P<UserName>..........)"
| table "Date and Time", "Database Modifications:", "Type", "User", "Source"

Can anybody help? Thank you.
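One way to keep only the three statement types (a sketch, untested; the rex assumes SQL_TEXT begins with the SQL verb, possibly quoted, so adjust it to the actual log layout):

```
index="database_db" source=database_audit sourcetype="database_audit"
| rex field=_raw "SQL_TEXT=\"?(?P<Type>\w+)"
| search Type="INSERT" OR Type="DELETE" OR Type="UPDATE"
| eval "Database Modifications:" = "Modification on " + host, "Date and Time" = EXTENDED_TIMESTAMP_NY, "User" = DB_USER, "Source" = sourcetype
| table "Date and Time", "Database Modifications:", "Type", "User", "Source"
```

Extracting the verb as a clean word (\w+) rather than a fixed-width slice also makes the subsequent filter reliable regardless of statement length.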
Hi, does anyone have a good example of sending from Logstash to Splunk HEC? I only get "services/collector/raw" working with Logstash, but I would prefer to use /collector or /event so we can easily change the sourcetype. I see that in the case of /collector or /event the message must be constructed in a special way, so if anyone has a good Logstash example, please share. As we are also using multiple indexes, we would like to dynamically parse the message logs, assign a suitable sourcetype stanza, and deliver to a different index depending on the log type (e.g. different OS, network equipment, etc.).
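A minimal sketch of what a /services/collector/event output might look like using Logstash's generic http output (untested; the URL, token, sourcetype, and index are placeholders, and option names should be checked against your Logstash version's http output plugin docs). The HEC event endpoint expects a JSON body with the payload under "event" and metadata such as sourcetype and index as sibling keys, plus an "Authorization: Splunk <token>" header:

```
output {
  http {
    url         => "https://splunk.example.com:8088/services/collector/event"
    http_method => "post"
    format      => "json"
    headers     => { "Authorization" => "Splunk <your-hec-token>" }
    mapping     => {
      "event"      => "%{message}"
      "sourcetype" => "my:sourcetype"
      "index"      => "my_index"
    }
  }
}
```

Because sourcetype and index are ordinary keys in the JSON envelope, they can be filled from Logstash fields set earlier in the pipeline, which is what makes per-log-type routing possible.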
index=anIndex sourcetype=aSourceType ("*Starting application:*" AND (host="aHostName*")) | stats values(host) AS ServerList

The above query gives me a list of distinct server names. What I am attempting to do is use this query for an alert and provide the list of servers, but only when the number of servers in the distinct list returned above is less than a specified number. I will configure the alert to trigger when the number of results is > 0, since the trigger condition will be in the query and not the alert. How do I modify the query above to only return ServerList if the number of distinct servers in that list is < 10?
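One way to gate the output on the distinct count (a sketch; the threshold is the example value from the question):

```
index=anIndex sourcetype=aSourceType ("*Starting application:*" AND (host="aHostName*"))
| stats values(host) AS ServerList dc(host) AS distinct_servers
| where distinct_servers < 10
```

With this, the query returns a row only when fewer than 10 distinct servers are seen, so the alert's "number of results > 0" trigger behaves as intended.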
Dears, can we integrate FireEye HX with Splunk using the GUI or not? If not, let me know the process for the CLI.
Hello! A dashboard runs a search and I want to create an alert for it, so I replicated the search code in the alert. However, now, if there is a change in the dashboard, my alert will not be updated. Is there a way to create an alert with a search like "search dashboard1", or something similar, so that whatever changes happen to the dashboard are also fed into my alert? Thanks!
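One common pattern (a sketch; the report name is a placeholder): save the search as a report and reference that single report from both places, so there is only one copy of the SPL to maintain. The dashboard panel can use <search ref="..."> in Simple XML, the alert can be created directly from the report, and any other search can invoke it with the savedsearch command:

```
| savedsearch "my_shared_report"
```

Editing the saved report then updates the dashboard and the alert together.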
Hi Splunk Team, I am looking for an API with which we can black out monitoring on Azure VMs while they are being patched. The patching will happen to a group of VMs together based on their tag in Azure. Can you please suggest an approach to group the VMs, black out monitoring alerts, and then re-enable them when the patching is complete? Thanks in advance, George
Hello, I'm trying to debug an issue with an FTP service. I'd like to know which users are using an 'active data connection', where the only connecting point is the sessionID. I have already extracted sessionID and userID as fields. The logs for the sessions vary between 150-3000 lines of events, and I don't know how to extract the userID connected to the sessionID that my search result is returning.

index=p_ftp sourcetype=debug "active data connection" | stats values(sessionID)

This is giving me the sessionIDs properly; I just need the userIDs from the same session, which are usually logged many lines earlier. Can you please help me? Thanks a lot in advance.
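One way to tie the userID to the sessionID without a join (a sketch, untested): group everything by sessionID and keep only sessions that contain at least one "active data connection" event. On a huge index, constraining the base search window and terms as tightly as possible matters for performance.

```
index=p_ftp sourcetype=debug
| stats values(userID) AS users,
        sum(eval(if(searchmatch("active data connection"), 1, 0))) AS active_events
        BY sessionID
| where active_events > 0
```

The searchmatch flag is computed per event before the stats aggregation, so the userID logged earlier in the session and the later "active data connection" line end up in the same sessionID bucket.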
Hi, I am having difficulty showing results from a Splunk query in a dashboard panel, where it always says 'No results found.' However, the query displays results when searched directly from the search tab, or even when clicking the 'open in search' magnifier icon on the dashboard panel. I am trying to display the top 10 errors in a pie chart based on dropdowns (environment, time). In the base query I have the search index, and in the actual dashboard panel query I have the environment driven from the dropdown value, plus a time dropdown.

<form>
  <label>Label</label>
  <search id="base">
    <query>index="app_3537255"</query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="time_token" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="APIG_ENV" searchWhenChanged="true">
      <label>Environment</label>
      <choice value="*">All</choice>
      <search>
        <query>index="app_3537255" | table host | sort host | dedup host</query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <default>*</default>
      <initialValue>*</initialValue>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Top 10 Common Errors</title>
      <chart>
        <search base="base">
          <query>| search host=$APIG_ENV$ eventtype="nix_errors" OR eventtype="err0r" | rex field=_raw "Error\s(?P&lt;ErrorString&gt;.*)" | eval ErrorString = "Error " + ErrorString | stats count(ErrorString) AS TotalCount BY ErrorString | sort -TotalCount | head 10</query>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">pie</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">none</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.lineWidth">2</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</form>

Any reason why this is not working in the dashboard panel? Any pointers would be appreciated.
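A likely cause (worth checking, not certain): a non-transforming base search passes only a limited set of fields to its post-process searches, so fields like host and eventtype may be missing by the time the panel's | search host=... runs, which yields no results even though the standalone search works. Explicitly keeping the needed fields in the base query often fixes this:

```
<search id="base">
  <query>index="app_3537255" | fields _time, _raw, host, eventtype</query>
  <earliest>$time_token.earliest$</earliest>
  <latest>$time_token.latest$</latest>
</search>
```

The alternative is to make the base search a transforming search and adjust the post-process searches to operate on its statistics rows.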
LOOK FOR BOLD for quick overview: I want to control the index-time extraction for events linked to an accelerated data model.

I am relatively new to Splunk, and recently I've jumped into accelerated data models. I already understand a number of aspects about them:

How they differ from regular data models
How they do index-time extractions
Stored in HPAS
Populated by scheduled searches

What I don't understand is how the summaries for the accelerated data models are built. I understand that ADMs use tsidx files as the summaries of the raw data: "Each search you run scans tsidx files for the search keywords and uses their location references to retrieve from the rawdata file the events to which those keywords refer. Splunk Enterprise creates a separate set of tsidx files for data model acceleration. In this case, it uses the tsidx files as summaries of the data returned by the data model."

What I don't understand is how the connection between the raw data and the .tsidx files is made. How are the .tsidx files formed from the event data? When I look at the data model's object hierarchy in settings, I see the fields that it encompasses. When I do a search like:

| datamodel Intrusion_Detection search

If I'm correct, it is giving me the search-time extraction from indexes related to the accelerated model. The problem is that I get a lot of fields that are useless in cyber security efforts. For instance, maybe I want to know the category of the different attacks that are occurring. It is a calculated field in my accelerated data model. The calculation goes: if(isnull(category) OR category="", "unknown", category). This means it will return the category unless there is none. I also don't understand where it gets this variable "category". How is that being pulled from the raw data? The problem is that I get 100% "unknown". Is this a problem of event tagging with the Common Information Model, or somewhere else in the flow of ingested data?

- https://wiki.splunk.com/images/4/45/Splunk_EventProcessing_v19_0_standalone.pdf

In the end, here is what I want to know to fix this:

How do I control what it pulls out of the raw event data? Where is the regex taking place? Is this something to configure with the .tsidx summaries on the indexers?
When I have data like "geo-location" or "web-app" in the raw data used with a data model (data that I think is useful), how do I pull that data out into a field that I can use in my accelerated data model?
What does the Common Information Model have to do with accelerated data models? Is that where I configure what it pulls out of raw event data?
In general, how do I make more custom accelerated data models that pull out new data from events?

Additionally, I understand that making more fields to pull out of the data also means an increase in storage size on the indexer. I just want to figure this all out.

[EDIT] Is this where I would use the Splunk Add-on Builder app?
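As a concrete illustration of the summary/raw split (a sketch; the dataset and field names follow the CIM Intrusion Detection model and may differ per environment): tstats reads the accelerated tsidx summaries directly, and only fields defined in the data model (auto-extracted, calculated, or lookup-based, all evaluated at search time when the acceleration summary is built) are available there:

```
| tstats summariesonly=true count FROM datamodel=Intrusion_Detection BY IDS_Attacks.category
```

If category comes back 100% "unknown" here, the add-on feeding the model is not producing a category field for those events, which points at the sourcetype's CIM mapping (tags/eventtypes and field extractions in the TA) rather than at the acceleration mechanism itself.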
Hi, I have a field called city name. Is it possible, without latitude or longitude, to use a map to show data just with the city name? Thanks
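The built-in map visualizations need coordinates, so a common workaround is a lookup that maps city names to lat/lon, followed by geostats (a sketch; city_coords is a hypothetical CSV lookup with city, lat, and lon columns that you would need to create or download):

```
index=your_index
| lookup city_coords city OUTPUT lat, lon
| geostats latfield=lat longfield=lon count BY city
```

Cities missing from the lookup simply get no coordinates and drop off the map, so it is worth checking lookup coverage first.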
Hi, I'm trying to pass an aggregate function from a dropdown menu in the Splunk dashboard to a time-series chart. For example, from the dropdown I want to pass actual, avg(), or max() to the below search:

index=_internal sourcetype=* | search field=* Exhost=* | chart max(value) by _time, Exhost
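Dropdown tokens are substituted as plain text into the query, so the function name itself can be the token value (a sketch; the token and choice names are my own, and "actual" is mapped to latest() as an assumption):

```
<input type="dropdown" token="agg_func">
  <label>Aggregation</label>
  <choice value="max">max</choice>
  <choice value="avg">avg</choice>
  <choice value="latest">actual (latest)</choice>
  <default>max</default>
</input>

<query>index=_internal sourcetype=* | search field=* Exhost=* | chart $agg_func$(value) by _time, Exhost</query>
```

At run time $agg_func$ is replaced verbatim, so the chart command sees max(value), avg(value), or latest(value) depending on the selection.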
Hi, how can I find continuously occurring events? E.g.:

1. I have a field called "response time". If "response time" is occasionally high it is not an issue, but if it is continuously high, it's not normal (over a couple of milliseconds, seconds, or minutes).
2. I have an "error code" field. Error code 404 is normal, but if it occurs continuously, something is wrong (over a couple of milliseconds, seconds, or minutes).

FYI: this is a huge log; consider that performance is an important factor. Any idea? Thanks,
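One way to detect runs of consecutive bad events (a sketch, untested; the flag, condition, and threshold are examples): classify each event, then use streamstats with reset_on_change so the counter restarts whenever the condition flips, and keep only rows where the run length passes a threshold:

```
index=your_index
| eval is_bad = if(error_code=404, 1, 0)
| streamstats count AS run_length reset_on_change=true BY is_bad
| where is_bad=1 AND run_length >= 10
```

Note that streamstats processes events in the order they arrive (descending time by default), so on a huge log it is cheaper to tighten the base search than to re-sort; a single streaming pass like this avoids expensive joins or subsearches.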
Hi, is there any universal or general rex to extract every known interesting field (url, uri, user, email, ip, etc.) from logs? Thanks
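There is no single built-in "extract everything" rex; Splunk's automatic key=value extraction and the Field Extractor UI cover structured data, and per-pattern rex handles the rest. A sketch of per-pattern extractions (the field names are my own choices, and each pattern is a common approximation, not an exhaustive validator):

```
| rex field=_raw max_match=0 "(?<ip>\b\d{1,3}(?:\.\d{1,3}){3}\b)"
| rex field=_raw max_match=0 "(?<email>[\w.+-]+@[\w-]+(?:\.[\w-]+)+)"
| rex field=_raw max_match=0 "(?<url>https?://\S+)"
```

For consistency across many sourcetypes, the usual practice is to put such extractions in props.conf/transforms.conf (or map the data to the Common Information Model) rather than repeating rex in every search.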
Hi, I need to compare total numbers and, if they are different, show a table that presents them.

23:57:05.253 app module: PACK: Total:[1010000] from server1 Total:[C000001010000]
23:57:05.254 app module: PACK: Total:[1010000] from server1 Total:[C000001000000]

diff = second total - first total

Expected output:

Time                diff
23:57:05.254        10000

Any idea? Thanks,
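A sketch of one approach (untested; it assumes the C-prefixed value is the same counter zero-padded, as in the sample): extract both totals from each event, strip the C prefix, convert to numbers so the leading zeros drop out, and keep only rows where they disagree:

```
index=your_index "PACK:"
| rex "Total:\[(?<total1>\d+)\].*Total:\[C(?<total2>\d+)\]"
| eval diff = tonumber(total1) - tonumber(total2)
| where diff != 0
| table _time diff
```

On the sample data the 23:57:05.253 event gives 1010000 - 1010000 = 0 and is filtered out, while the 23:57:05.254 event gives 1010000 - 1000000 = 10000, matching the expected output.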
I have a tstats query that pulls its data from an accelerated data model. I need to grab only the most up-to-date host event with the latest IP value. I cannot dedup in the data model root search itself, as I need to keep track of _time to get point-in-time results as well. Anyway, for the most current point-in-time IP value (right now), dedup is not working as intended; it's showing me the older value.

Query without dedup:

| tstats latest(_time) as _time FROM datamodel="Host_Info" WHERE nodename="hostinfo" hostname=bobs by hostinfo.hostname hostinfo.ip

Results (two values for ip):

hostinfo.hostname  hostinfo.ip   _time
bobs               10.10.10.10   2021-10-22 19:55:03
bobs               33.33.33.33   2021-10-22 21:23:06

Query with dedup:

| tstats latest(_time) as _time FROM datamodel="Host_Info" WHERE nodename="hostinfo" hostname=bobs by hostinfo.hostname hostinfo.ip | dedup hostname

Results (older value, not newer):

hostinfo.hostname  hostinfo.ip   _time
bobs               10.10.10.10   2021-10-22 19:55:03

Why isn't dedup working correctly? If I dedup the actual indexed data, before it hits the data model, it works fine and shows me the latest hostname and IP.
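Two things are worth verifying here. First, the tstats output names the split-by field hostinfo.hostname, so dedup hostname may not be acting on the field you intend. Second, dedup keeps the first row it sees in the current result order, and the rows are not sorted newest-first, so even with the right field name you would need to sort descending by _time before deduplicating. A sketch (I have also prefixed the WHERE field, which tstats usually requires for data model fields):

```
| tstats latest(_time) AS _time FROM datamodel="Host_Info" WHERE nodename="hostinfo" hostinfo.hostname=bobs BY hostinfo.hostname hostinfo.ip
| sort - _time
| dedup hostinfo.hostname
```

After the sort, the 21:23:06 row comes first, so dedup keeps the newer 33.33.33.33 entry.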