All Posts

That particular property is keyed off of your Y fields, not the X field that you'd like. For example, to change the colors with your field names, you'd do something like this - but I understand this isn't what you're looking for:

<option name="charting.fieldColors">{"limit":0x333333,"spend":0xd93f3c}</option>

I came up with a real hack way to do this... someone who knows more than me might have a better way. Since this chart property is keyed off the name of the Y fields, you could custom-name all the Y fields so you can reference them in the fieldColors property. Here is some example SPL with a bunch of evals to really show how you can split these up:

| makeresults format=csv data="market,limit,spend
\"AU Pre\", 1462912, 884854
\"AU Post\", 2160567, 1166031
\"DE Pre\", 91217, 76973
\"DE Post\", 160221, 97906"
| eval AU_Pre_limit = if(market="AU Pre",limit,null())
| eval AU_Post_limit = if(market="AU Post",limit,null())
| eval DE_Pre_limit = if(market="DE Pre",limit,null())
| eval DE_Post_limit = if(market="DE Post",limit,null())
| table market, AU_Pre_limit, AU_Post_limit, DE_Pre_limit, DE_Post_limit, spend

Then you can build a fieldColors property like this:

<option name="charting.fieldColors">
{"AU_Pre_limit":0x333333,"AU_Post_limit":0xd93f3c,
"DE_Pre_limit":0xeeeeee,"DE_Post_limit":0x65a637,
"spend":0xaa0000}
</option>

Here is an example dashboard. I will also attach the SimpleXML in a PDF so you can try it out.

** Again... this is very hacky, and I typically try to keep as much formatting/display info out of my SPL as I can (or at least put it at the very end or do it as a post-process). This totally breaks any MVC patterns I like to follow. But if you have something you just need to get working and pretty for now, this will do.
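As an aside, those per-market columns can also be generated without one eval per market, using eval's curly-brace field-name substitution - a sketch over the same sample data, where {market_key} names the new field after the value of market_key:

| makeresults format=csv data="market,limit,spend
\"AU Pre\", 1462912, 884854
\"AU Post\", 2160567, 1166031
\"DE Pre\", 91217, 76973
\"DE Post\", 160221, 97906"
| eval market_key = replace(market, " ", "_") . "_limit"
| eval {market_key} = limit
| fields - limit, market_key

This produces the same AU_Pre_limit, AU_Post_limit, DE_Pre_limit, and DE_Post_limit columns that the fieldColors option above refers to, and it keeps working if new markets show up in the data (though fieldColors would still need entries for them).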
You would do a second stats to roll them up, like this:

index=guardium ruleDesc="OS Command Injection"
| stats count by dbUser, DBName, serviceName, sql
| eval category = case(count < 6, "1-5", count < 11, "6-10", count < 16, "11-15", 1==1, "16+")
| stats count by category

Then you would set up a drilldown on the chart to pass a token to another search and limit it based on the token.
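For illustration, a minimal Simple XML sketch of that wiring (the token name category_tok is an assumption; note the &lt; escapes required inside the XML):

<chart>
  <search>
    <query>index=guardium ruleDesc="OS Command Injection"
| stats count by dbUser, DBName, serviceName, sql
| eval category = case(count &lt; 6, "1-5", count &lt; 11, "6-10", count &lt; 16, "11-15", 1==1, "16+")
| stats count by category</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <drilldown>
    <set token="category_tok">$click.value$</set>
  </drilldown>
</chart>

The second panel's search would repeat the first stats and the eval, then filter with something like | where category="$category_tok$" so that clicking a bar limits the detail table to that bucket.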
Hi community,

| eval ycw = strftime(_time, "%Y_%U")
| stats count(eval("FieldA"="True")) as FieldA_True,
        count(eval('FieldB'="True")) as FieldB_True,
        count(eval('FieldC'="True")) as FieldC_True by ycw
| table ycw, FieldA_True, FieldB_True, FieldC_True

I get 0 results even though there is data. Could anyone please suggest a correct query?

BR
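For what it's worth, two likely culprits here: the snippet has no base search in front of the first | eval (so no events are retrieved), and in eval expressions double quotes create string literals while single quotes reference fields, so "FieldA"="True" compares two constants and never matches. A sketch of a corrected query (index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| eval ycw = strftime(_time, "%Y_%U")
| stats count(eval('FieldA'="True")) as FieldA_True,
        count(eval('FieldB'="True")) as FieldB_True,
        count(eval('FieldC'="True")) as FieldC_True by ycw
| table ycw, FieldA_True, FieldB_True, FieldC_True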
Another option would be to set the delimiter to a comma in the multiselect, then in the macro use the IN operator, like this:

... index IN ($index_scope$) ... #The rest of the macro# ...
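For illustration, a minimal sketch of such a multiselect (the token name and choices are assumptions):

<input type="multiselect" token="index_scope">
  <label>Indexes</label>
  <choice value="main">main</choice>
  <choice value="guardium">guardium</choice>
  <delimiter>,</delimiter>
</input>

With both choices selected, $index_scope$ expands to main,guardium, so the macro's clause becomes index IN (main,guardium).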
You are correct, some of the fields are automatically extracted as part of the Event heading, but none of the fields I am interested in are available, such as:

tracePoint
content.attributes[]  //not interested in the headers
applicationName
applicationVersion
environment

By the way, I tried what you suggested:

| spath input=data

but I see no change in my search results. Thank you
Question about your "second search". By that, do you mean you want to do a drilldown when the user clicks on one of the result rows to display the information surrounding that event? If so, what kind of dashboard are you using (Simple XML or Dashboard Studio)?
Do you mean something like this?

index=*
| stats values(*) as * by sourcetype
| foreach * [eval fields = mvappend(fields, if("<<FIELD>>" != "sourcetype", "<<FIELD>>", null()))]
| stats values(fields) as fields by sourcetype
This would be done on a heavy forwarder or the indexer(s), whichever the events hit first. The link below has information on how to do this. You can do it with SEDCMD in props.conf. The code below is an excerpt from that page that shows specifically how you would do this. In this case, <Data Name='IpPort'>0</Data> is being turned into <Data Name='IpPort'></Data>.

#For XmlWinEventLog:Security
SEDCMD-cleanxmlsrcport = s/<Data Name='IpPort'>0<\/Data>/<Data Name='IpPort'><\/Data>/

https://docs.splunk.com/Documentation/WindowsAddOn/latest/User/Configuration
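For context, a sketch of where that line lives in props.conf (assuming the events arrive under the XmlWinEventLog:Security sourcetype):

# props.conf on the heavy forwarder or indexer(s)
[XmlWinEventLog:Security]
SEDCMD-cleanxmlsrcport = s/<Data Name='IpPort'>0<\/Data>/<Data Name='IpPort'><\/Data>/

Note that props.conf changes need a splunkd restart to take effect, and SEDCMD only rewrites events indexed after the change.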
I'm pretty sure this is the same as Passing multiselect token to the macros. The answer is given there.
Hi meetmshah, thanks for the follow-up! I was able to fix this issue by adding this argument to the search: values(field_name)
We are in the process of deploying our endpoint logging strategy. Right now, we are using CrowdStrike as our EDR. As far as I can tell, if we want to use the logs collected by the CrowdStrike agent and forward them into Splunk, we have to pay for the FDR license, which due to budget constraints we currently cannot. When I look at the correlation searches that utilize the Endpoint data model, most of those detections are based on data that originates from Endpoint Detection and Response (EDR) agents. Since in our case we cannot utilize the data coming from CrowdStrike, could we use Sysmon instead to collect the data that we need to implement those correlation searches? This is one of the use cases I am interested in implementing: https://research.splunk.com/endpoint/1a93b7ea-7af7-11eb-adb5-acde48001122/
The values OP is seeking are in the field message. (From the illustration in OP, the event is JSON - but it is best to illustrate with raw text, not a copy from Splunk's formatted event view.) So

| rex field=message "Received update request (?<IL_Customer>[^\.]+)\. Size of array: (?<ArraySize>\d+)"

(Also slightly more efficient, because the regex engine scans smaller strings.)
Have you tried "| table *"? In other words, is that message the raw events? Because if it is, Splunk would have already given you all the fields like correlationId, message, content.clientId, content.attributes.reasonPhrase, and so on. If the message is in a field named "data", you can use spath to extract it.

| spath input=data

Either way, your sample would give these fields and values:

fieldname                        fieldvalue
applicationName                  cfl-service-integration-proxy
applicationVersion               61808
category                         com.cfl.api.service-integration
content.attributes.reasonPhrase  OK
content.attributes.statusCode    200
content.clientId                 1234567
correlationId                    3-f86043c0-6c3c-11ee-8502-123c53e78683
elapsed                          435
environment                      dev
message                          API Response
priority                         INFO
threadName                       [cfl-service-integration-proxy].proxy.BLOCKING @78f55ba
timestamp                        2023-9-16T15:59:22.083Z
tracePoint                       END

Hope this helps.
Do you know about the Distributed Management Console? Monitoring Splunk Enterprise overview - Splunk Documentation

You could peek into the dashboards/queries in there for help on building the SPL. Also, there might be some alerts you can turn on.

Run DMC for the win.
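For instance, a minimal sketch of the kind of alert search the console runs (it assumes the monitoring console's server groups, such as dmc_group_indexer, are configured; capacity and free come from the partitions-space REST endpoint and are in MB):

| rest splunk_server_group=dmc_group_indexer /services/server/status/partitions-space
| eval pct_used = round((capacity - free) / capacity * 100, 1)
| where pct_used > 90
| table splunk_server, mount_point, capacity, free, pct_used

Saved as an alert that triggers when it returns results, this would flag any indexer volume above 90% usage.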
You didn't show how $index_scope$ is used inside the macro. Using hints from the sample values, from the fact that a dropdown based on these values used to work as desired, from semantic heuristics, and from your choice of token name, I can only speculate that inside your macro `compliance(7)`, you use the first input in a search command, i.e. (I'll call the first input $input1$ - which corresponds to $mytok$ in my previous illustration), the macro has a command like

index=$input1$

The following will be based on this speculation. If this is too far from the real macro code, the analysis will not apply, although I will try to be as general as can be meaningfully presented. (As you can see, I wouldn't have to make wild guesses, which may well be incorrect, had you provided the relevant information.)

The second part of the analysis will focus on the options you can set on a multiselect input, as I exemplified earlier. In your sample code, none of these is set. In that case, Splunk will use a space as the default delimiter and give no prefix and no suffix. I have not found the tutorial about inputs, but the definitive information is in input (form). To help you understand how these choices affect the resultant token, I drafted this test dashboard for you to play with:

<form version="1.1">
  <label>Checkbox test</label>
  <fieldset submitButton="false">
    <input type="checkbox" token="index_scope" searchWhenChanged="true">
      <label>Choose console</label>
      <choice value="1T*">Standard</choice>
      <choice value="2A*">Scada</choice>
      <choice value="2S*">AWS</choice>
      <default>1T*</default>
      <initialValue>1T*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | eval index_scope = "$index_scope$"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

The third part of the analysis is straightforward with the search command. Understandably, if you select "Standard" from a dropdown, or if only Standard is checked, your macro will get the command

search index=1T*

(The search command is implied if it is on the first line. Otherwise you must write it explicitly.) Now, if you select "Standard" and "Scada", the macro will get

search index=1T* 2A*

I suspect that you are expecting something like

index=1T* OR index=2A*

instead. Is this correct? One way to do this, obviously, is to set the delimiter to " OR " and the value prefix to "index=". Note that the spaces before and after the keyword "OR" are important.

<form version="1.1" theme="light">
  <label>Checkbox test</label>
  <fieldset submitButton="false">
    <input type="checkbox" token="index_scope" searchWhenChanged="true">
      <label>Choose console</label>
      <choice value="1T*">Standard</choice>
      <choice value="2A*">Scada</choice>
      <choice value="2S*">AWS</choice>
      <default>1T*</default>
      <initialValue>1T*</initialValue>
      <delimiter> OR </delimiter>
      <valuePrefix>index=</valuePrefix>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | eval index_scope = "$index_scope$"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
Does anyone have SPL to monitor capacity % on an index cluster? I'd like to watch each indexer/data volume and receive an alert if it breaches a 90% threshold.
I'll preface this with: there are some best practices I'm skipping over for a production dashboard - formally creating a lookup & setting permissions, scheduling a saved search (aka report) to create this lookup, etc. I'm also assuming you have admin access to your environment, since this example uses data you would have in your index=_internal. The important thing here is the concept of referencing a lookup instead of having an in-line search.

I have attached an XML so you can see the Simple XML dashboard I created for this example. The left input dropdown does an in-line search to populate the dropdown values (and this could be what you're seeing as slow) - this means it is searching over 120 days of events. The right input dropdown still runs a search, but all that search does is load a lookup csv file for the data - it's really quick!

The search I run for the left input is the following, and it is configured in the XML to look over the past 120d:

index=_internal
| dedup component
| table component
| sort component

The search I run for the right input is the following, and the timeframe doesn't matter - all it does is load a csv, but the results in that csv lookup are the same format/data as the search above:

| inputlookup internal_component_list.csv

Note: that's not the real search that generates the csv. It is just loading it. To generate the csv, I ran the following search. It's really similar to the one for the left dropdown, but I added the outputlookup command to make that csv:

index=_internal earliest=-120d
| dedup component
| table component
| sort component
| outputlookup internal_component_list.csv

You can take this outputlookup search and schedule it to run once a week (or however often is appropriate for your data). The key is that this search can be scheduled to run behind the scenes when no one is waiting on the results. I just scheduled it as a report, and for this example I decided to schedule it weekly (but notice that the search looks back 120 days with earliest=-120d in the SPL). I'm essentially rebuilding my dropdown data weekly from the past 120 days of events in _internal.
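For illustration, a minimal sketch of the lookup-backed dropdown in Simple XML (the token name component_tok is an assumption; the lookup name matches the example above):

<input type="dropdown" token="component_tok">
  <label>Component (from lookup)</label>
  <fieldForLabel>component</fieldForLabel>
  <fieldForValue>component</fieldForValue>
  <search>
    <query>| inputlookup internal_component_list.csv</query>
  </search>
</input>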
I am new to Splunk and I have the following message which I would like to parse into a table of columns:

{dt.trace_id=837045e132ad49311fde0e1ac6a6c18b, dt.span_id=169aa205dab448fc, dt.trace_sampled=true}
{
  "correlationId": "3-f0d89f31-6c3c-11ee-8502-123c53e78683",
  "message": "API Request",
  "tracePoint": "START",
  "priority": "INFO",
  "category": "com.cfl.api.service",
  "elapsed": 0,
  "timestamp": "2023-10-16T15:59:09.051Z",
  "content": {
    "clientId": "",
    "attributes": {
      "headers": {
        "accept-encoding": "gzip,deflate",
        "content-type": "application/json",
        "content-length": "92",
        "host": "hr-fin.svr.com",
        "connection": "Keep-Alive",
        "user-agent": "Apache-HttpClient/4.5.5 (Java/16.0.2)"
      },
      "clientCertificate": null,
      "method": "POST",
      "scheme": "https",
      "queryParams": {},
      "requestUri": "/cfl-service-api/api/process",
      "queryString": "",
      "version": "HTTP/1.1",
      "maskedRequestPath": "/api/queue/send",
      "listenerPath": "/cfl-service-api/api/*",
      "localAddress": "/localhost:8082",
      "relativePath": "/cfl-service-api/api/process",
      "uriParams": {},
      "rawRequestUri": "/cfl-service-api/api/process",
      "rawRequestPath": "/cfl-service-api/api/process",
      "remoteAddress": "/123.123.123.123:123",
      "requestPath": "/cfl-service-api/api/process"
    }
  },
  "applicationName": "cfl-service-api",
  "applicationVersion": "6132",
  "environment": "dev",
  "threadName": "[cfl-service-api].proxy.BLOCKING @78f55ba"
}

Thank you so much for your help.
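One possible starting point (a sketch; it assumes the {dt.trace_id=...} prefix must be stripped before spath can parse the JSON body, so a rex isolates everything from the second brace onward):

| rex field=_raw "(?s)(?<json>\{\s*\"correlationId\".*)"
| spath input=json
| table correlationId, message, tracePoint, priority, timestamp, content.attributes.method, content.attributes.requestUri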
Hi @skyasa, You can find what new features are released in our Splunk Enterprise release notes https://docs.splunk.com/Documentation/Splunk/9.1.1/ReleaseNotes/MeetSplunk#What.27s_New_in_9.1 or our What's New in Dashboard Studio release notes https://docs.splunk.com/Documentation/Splunk/9.1.1/DashStudio/WhatNew

I hope that helps!
Only the key and value fields are visible because that is what the fields command does.  If you prefer different names then change "key" to "field_header" throughout the query.  Likewise for "value".
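If the goal is just different column names in the final output, a one-line alternative to editing every occurrence is a rename at the end (a sketch; field_value is a hypothetical name, since only field_header was mentioned above):

| rename key as field_header, value as field_value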