All Posts


If you look at the screenshot, my goal is to get the values between 10 and 20, but ignore/delete the outlier at -9.4 (a faulty value from the sensor), which you can see on the absolute min/max graph. In the timechart you see it as a little edge. The sensor sends a value every minute, 24/7. With the timechart curve you won't notice the small number of outliers among 1440 values, but in the stats min/max per day you will see extreme values like the -9.4, which is completely illogical given a minimum average of ~10. In order to know which daily min/max values are wrong/outliers, I came up with the idea of verifying the values using the timechart min/max. That was my idea; I hope it is understandable.
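One way to sketch that filtering idea in SPL (the index, the field name `value`, and the 10-20 range are assumptions here, not taken from the poster's actual search):

```spl
index=sensor_data
| where value >= 10 AND value <= 20    /* drop faulty readings such as -9.4 */
| bin _time span=1d
| stats min(value) as daily_min max(value) as daily_max by _time
```

With the known-bad readings removed before the stats, the per-day min/max should line up with what the timechart shows.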
Hey @tread_splunk , Not gonna lie, your goal here is a bit hard to understand. Both actions search and GET_PASSWORD reside only in the _audit index, while the _internal index holds other kinds of information. If what you want is just to use the internal logs to get the source clientip for that user (not exactly related to the action calls, though), you can try something like this:

index=_audit (action=search OR action=GET_PASSWORD)
| stats count as audit_count by user
| join user
    [ search index=_internal sourcetype=splunkd_access user=* clientip=*
    | stats count as internal_count by user clientip ]
| table user clientip audit_count internal_count

The counts on _audit and _internal are the part that doesn't make much sense to me, unless you want to filter the URI in the internal logs to something that is triggered during action=search or action=GET_PASSWORD, in which case you can customize my query a bit more. If I'm off track, please help me understand your goal so I can try to give you more insights.
Will do. Thanks for your speedy response! 
Hi Ravi, The strategy is to configure your multivalue input using a prefix, suffix, and delimiter, like this:

<fieldset submitButton="false">
  <input type="multiselect" token="field1">
    <label>field1</label>
    <choice value="value1">value1</choice>
    <choice value="value2">value2</choice>
    <choice value="value3">value3</choice>
    <delimiter>,</delimiter>
    <prefix>| stats sum(</prefix>
    <suffix>)</suffix>
  </input>
</fieldset>

So in your search you'll simply do:

| makeresults $field1$

where | makeresults is your actual search. The token will append | stats sum(<selected_values>). Then you can add the proper grouping fields etc. to make it behave the way you want (for example, add "as Total by Jobname" after the ")" suffix).
Perhaps an irrelevant question, but did you engage your local infrastructure or security team, or whoever is managing Splunk? If your org does not have a proper process for making these sorts of adjustments because you only have access to a UF, there may be larger challenges to tackle. Every Splunk deployment is different, so it is hard to give specific advice when you cannot spell out everything you can and cannot do, since at least some of the possible solutions assume you have proper administrative access to all of your relevant Splunk components, of which a HF is very much an important piece. That aside, you should be working with your Splunk admins to get the right configurations in place.
I have a multiselect drop-down menu with field names as values. When I select one or more values from the drop-down menu, those fields/columns need to be totaled. I tried the code below as suggested by Meta AI, but it is not producing any result. Please help me.

<dashboard>
  <label>Sum Selected Fields</label>
  <row>
    <panel>
      <input type="dropdown" token="selected_fields">
        <label>Select Fields</label>
        <choice value="field1">Field 1</choice>
        <choice value="field2">Field 2</choice>
        <choice value="field3">Field 3</choice>
      </input>
      <chart>
        <search>
          | eval sum_fields="$selected_fields$"
          | stats sum(eval(split(sum_fields, ","))) as Total by Jobname
        </search>
      </chart>
    </panel>
  </row>
</dashboard>
I have the following fabricated search, which is a pretty close representation of what I actually want to do and gives me the results I want:

(index=_audit (action=search OR action=GET_PASSWORD)) OR
(index=_internal
    [ search index=_audit (action=search OR action=GET_PASSWORD)
    | dedup user
    | table user ])
| stats count(eval(index="_audit")) as count, values(clientip) as clientip, count(eval(index="_internal")) as internalCount by user

i.e. for everyone who has performed a search or GET_PASSWORD in one index, I want to know something about them gathered from both indexes. I can't get past the feeling that I shouldn't need to repeat the "index=_audit (action=search OR action=GET_PASSWORD)" search, which in the actual search is a whole lot of SPL, so duplicating it makes things untidy. Macros aside, can anyone come up with a more elegant solution?
Hi @BKDRockz , I understand that this way you don't consume license, but using dbxquery in searches isn't the best approach to extract data from a database, because DB Connect is a very slow extraction tool. The best approach is to extract the data separately using both queries, saving the results in an index, and then use the indexed data in your search. In addition, don't use join, because it's a very slow command: you can find many examples of correlation searches in the Community. I suggest redesigning your ingestion and search process. Ciao. Giuseppe
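As a rough sketch of the join-free correlation pattern mentioned above (the index names and fields are placeholders, not from the original thread):

```spl
(index=db_orders) OR (index=db_customers)
| stats values(order_id) as orders values(customer_name) as name by customer_id
| where isnotnull(orders) AND isnotnull(name)
```

A single stats over both indexes, grouped by the shared key, usually scales far better than join.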
Hi @JoshuaJJ , check whether you have network issues in the communication between the Indexers and the License Master. Otherwise, open a case with Splunk Support. Remember to download and send them a diag from the machine that's sending the message and from an indexer. Ciao. Giuseppe
The way I handle this is to have a script run as a cron job from a maintenance server that queries the process scheduler tables in the database and writes what I want to see about process scheduler jobs as key=value pairs into a text file, and then have a Splunk UF monitor the text file. Ugly, but it works.
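A minimal sketch of such a script in Python (the column names, sample rows, and output path are hypothetical; the original poster did not share their code, and a real version would pull the rows from the database via a DB API instead of hard-coding them):

```python
import time

# Hypothetical rows as they might come back from the process scheduler tables;
# in the real script these would be the results of a database query.
rows = [
    {"prcsinstance": 1234, "prcsname": "NIGHTLY_BATCH", "runstatus": "Success"},
    {"prcsinstance": 1235, "prcsname": "PAYROLL_CALC", "runstatus": "Error"},
]

def to_kv(row):
    """Render one row as a timestamped key=value line for the UF-monitored file."""
    ts = time.strftime("%Y-%m-%d %H:%M:%S")
    fields = " ".join(f'{k}="{v}"' for k, v in row.items())
    return f"{ts} {fields}"

# Append to the text file that the Splunk UF monitors (path is an assumption).
with open("prcs_scheduler.log", "a") as f:
    for row in rows:
        f.write(to_kv(row) + "\n")
```

Key=value pairs keep the field extraction on the Splunk side trivial, since KV_MODE=auto picks them up without any extra props configuration.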
Good morning, I'm getting a weird error this morning when trying to run searches. It says that my license is expired, or that I have exceeded my license limits too many times.

1. I have a valid Enterprise License at about 750GB a day.
2. Within the license manager all is well. No violations, valid license, etc.
3. Peers are associated to the license group (750GB is what I allocated for it).
4. Everything looks green with no messages.

Not sure what is causing this issue, but sometimes search will work and sometimes it won't. However, it will always throw the litsearch error.
@Hojeong-Seo I am also facing the same issue. Can you please help me with the solution?
| spath test.nsps{} output=nsps
| mvexpand nsps
| spath input=nsps Name output=Name
| spath input=nsps ReadOnlyConsumerNames{} output=ReadOnlyConsumerNames
| table Name ReadOnlyConsumerNames
ITSI for Alert $result.service_name$ on host $result.src$ $result.description$

An event has been detected:
Host: $result.host$
Source: $result.source$
Error Code: $result.error_code$
Description: $result.description$

I'm fairly new to ITSI and Splunk in general, and I couldn't find clear information on these tokens. The only token that is working right now is $result.description$. Any assistance will be much appreciated. Thank you
It is something that should rather be handled during the ingestion phase - clean your data before indexing.
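For example, a props.conf/transforms.conf pair on the indexing tier can route bad events to the nullQueue before they are indexed (the sourcetype stanza name and the regex are illustrative, not from this thread):

```conf
# props.conf
[my_sensor_sourcetype]
TRANSFORMS-drop_faulty = drop_faulty_readings

# transforms.conf
[drop_faulty_readings]
REGEX = value=-9[.,]4
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are discarded at parse time, so they never consume license or disk.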
Hi, I have an event which has prod and test based on env. If it is test, it goes to the nsps [{},{}] array; I need to check for the Name (say A, B, C, D) and get their associated ReadOnlyConsumerNames in tabular format.

Output as:
Name    ReadOnlyConsumerNames
A       Application, Lst, data
B       Application, Lst
C       Lst
D       Lst, Gt, PT

{
  "prod": {},
  "test": {
    "DistinctAdminConsumers": ["App", "pd"],
    "DistinctAdminUser": 2,
    "DistinctReadConsumers": ["Application", "GT", "Technology", "data"],
    "DistinctReadUser": 4,
    "TotalAdminUser": 20,
    "TotalNSPCount": 10,
    "TotalReadUsers": 13,
    "nsps": [
      {
        "AdminConsumerNames": ["App", "pd"],
        "AdminUserCount": 2,
        "Name": "A",
        "ReadOnlyConsumerNames": ["Application", "Lst", "data"],
        "ReadonlyUserCount": 3
      },
      {
        "AdminConsumerNames": ["App", "Data"],
        "AdminUserCount": 2,
        "Name": "B",
        "ReadOnlyConsumerNames": ["Application", "Lst"],
        "ReadonlyUserCount": 3
      },
      {
        "AdminConsumerNames": ["preprod", "pd"],
        "AdminUserCount": 2,
        "Name": "C",
        "ReadOnlyConsumerNames": ["Lst"],
        "ReadonlyUserCount": 1
      },
      {
        "AdminConsumerNames": […],
        "AdminUserCount": 2,
        "Name": "D",
        "ReadOnlyConsumerNames": ["Lst", "Gt", "PT"],
        "ReadonlyUserCount": 1
      }
    ]
  }
}
Since you have a SHC, try this search on each individual SH to see if there is a config mismatch (I'm wondering whether you grew from a single instance to a cluster):

| rest splunk_server=local /services/search/distributed/peers

The output should help you determine if one of the SHs is out of sync. Other than that, is your SHC set to indexer discovery via the CM, which may still have those entries?
Hey, I'm trying to play sounds in my Dashboard Studio dashboard. I heard it's not possible because Dashboard Studio is not as customizable as a classic dashboard. Does anyone know any workaround before I have to switch to a classic dashboard?
HEC tokens are always stored in plain text in .conf files, as the token is not related to any authentication or authorization feature. The token just helps funnel the incoming data to the appropriate index and props configurations. All the encryption and secrecy should be handled by the TLS certificates that secure the HTTP handshake.
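For reference, this is roughly what a HEC token stanza looks like in inputs.conf (the token value, index, and sourcetype below are placeholders):

```conf
# inputs.conf on the instance receiving HEC traffic
[http://my_hec_input]
token = 11111111-2222-3333-4444-555555555555
index = my_index
sourcetype = my_sourcetype
disabled = 0
```

The stanza only maps the token to routing defaults; transport security comes entirely from enabling SSL on the HEC endpoint.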
You could design an input which sets the fields a particular team needs, and use that token in a table command alongside the fields which are common to all teams:

| table _time common1 common2 $teamfields$
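The input itself might look something like this (the token name and field values are assumptions for illustration):

```xml
<input type="multiselect" token="teamfields">
  <label>Team fields</label>
  <choice value="team_a_field1">Team A field 1</choice>
  <choice value="team_b_field1">Team B field 1</choice>
  <delimiter> </delimiter>
</input>
```

With a space delimiter, the selected field names drop straight into the table command as-is.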