All Posts

Hey, thanks for the feedback. Yes, I tried stats/eventstats. I can get the counts to show, but I don't see how to get the query to count up the total per app, then use that to calculate the percentage of each error per app. Everything I try, like the snippets I showed in my question, just doesn't seem to get me closer to being able to show percentages.
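For readers hitting the same wall: one common pattern, sketched here with hypothetical index and field names (app and error; swap in the real ones), is to count per app/error pair and then use eventstats to bring the per-app total onto each row so the percentage can be computed:

```
index=myindex
| stats count AS error_count BY app error
| eventstats sum(error_count) AS app_total BY app
| eval percent=round(100*error_count/app_total, 2)
```

eventstats adds the aggregate as a new field without collapsing the rows, which is what makes the per-row percentage possible here where plain stats or chart alone only leaves counts.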
Hey Giuseppe, thanks for the thought. I did try the chart command. The way you have it does show it more cleanly than I had in my initial question, but I still have the same question/problem of how I can calculate and show the values as percentages. Everything I do with just chart still shows the counts and not percentages.
The LDAP settings you used are probably incorrect. Verify them with your LDAP admin, and confirm the firewall(s) allow connections from the Splunk search heads to the LDAP server.
Having access to _internal does not mean you have access to all other indexes.  Check with your Splunk admin to see if your role is allowed to access index abc.
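A quick way to see which indexes your role can actually search (a sketch; eventcount is a generating command that respects role-based index access, so an index missing from its output is not visible to your role):

```
| eventcount summarize=false index=* index=_*
| dedup index
| table index
```

If abc does not appear in the result, the role lacks access to it rather than the index lacking data.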
Splunk is unable to search frozen buckets in any location.  Frozen buckets must be thawed before they can be searched. As I understand it, FS-S3 is intended to allow searching of raw data resident in an S3 bucket.  It's not for searching "cooked" data.
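For reference, thawing is a manual process along these lines (paths, index name, and bucket name here are illustrative, not from the thread):

```
# 1. Copy the frozen bucket into the index's thaweddb directory
cp -r /mnt/frozen/myindex/db_1389687963_1389687963_0 $SPLUNK_DB/myindex/thaweddb/

# 2. Rebuild the bucket's index and metadata files
splunk rebuild $SPLUNK_DB/myindex/thaweddb/db_1389687963_1389687963_0

# 3. Restart Splunk so the thawed bucket becomes searchable
splunk restart
```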
Please note: I added a missing "2" at the end of the transforms.conf code. If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Is federated search able to search frozen buckets in s3? Or only raw logs?
If you want to search the Splunk internal audit events, use the _audit index:

index=_audit

ref: https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/AuditSplunkactivity

If you would like to fetch this information using the API, you can run a Splunk search by configuring a Splunk-authenticated user with permission to read the _audit index, generating them a token, then using the REST API to dispatch a search and return the results.

Ref: https://docs.splunk.com/Documentation/Splunk/9.1.2/RESTREF/RESTsearch#search.2Fv2.2Fjobs.2Fexport, specifically the /search/v2/jobs/export endpoint, including your authorization token and the Splunk search to list the audit events.
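As a concrete sketch of that last step (hostname, port, time range, and token are placeholders), the export endpoint can be called with curl:

```
curl -k -H "Authorization: Bearer <your_token>" \
     https://splunk.example.com:8089/services/search/v2/jobs/export \
     -d search="search index=_audit earliest=-24h" \
     -d output_mode=json
```

Note the search string passed to the REST API must begin with the `search` keyword, and output_mode selects the result format.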
Testing this sample file on my local instance, I think something like this could work.

[ <SOURCETYPE NAME> ]
...
LINE_BREAKER=([\r\n]+)\s*\{\s*[\r\n]+\s*\"auditId\"
TIME_FORMAT=%Y-%m-%dT%H:%M:%S
TIME_PREFIX=(?:.*[\r\n]+)*\"audittime\":\s*\"
SEDCMD-remove_trailing_comma=s/\,$//g
SEDCMD-remove_trailing_bracket=s/\][\r\n]+$//g
TRANSFORMS-remove_header=remove_json_header

This is a parsed event from the sampled file. I am getting a warning about the timestamp, but this is not because Splunk is unable to find it; it is because the datetime exceeds my set limit for MAX_DAYS_AGO/MAX_DAYS_HENCE.

Note the transform included in the props. This is needed to remove the first part of the JSON file that the events are nested in. There will need to be an accompanying stanza in transforms.conf specifying the regex used to recognize the header to send to the null queue. It would probably look something like this.

[remove_json_header]
REGEX = ^\s*\{\s*[\r\n]+\"items\":\s*\[
DEST_KEY = queue
FORMAT = nullQueue
I recommend using the website https://regex101.com/ to test your regex and ensure it is definitely matching. When your regex is inserted, it does not seem to match the space character between "auditId" and the following colon (:).

I would also recommend splitting the JSON events so that each has its own curly brackets, like so:

{ "event1keys" : "event1values", .... }
{ "event2keys" : "event2values", .... }

Your LINE_BREAKER value should then also match the opening curly brace and its newline, and its first capture group should include the discardable characters between events, such as commas:

LINE_BREAKER=(,?[\r\n]+){\s*\"auditId\"

I also recommend setting SHOULD_LINEMERGE to false to prevent Splunk from re-assembling multi-line events after the split.
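Putting the suggestions above together, a hedged props.conf sketch (the stanza name is a placeholder, and this assumes the events have been pre-split so each JSON object begins on its own line):

```
[my_json_sourcetype]
CHARSET = UTF-8
KV_MODE = json
SHOULD_LINEMERGE = false
LINE_BREAKER = (,?[\r\n]+)\{\s*\"auditId\"
TIME_PREFIX = \"audittime\":\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TRUNCATE = 9999
```

Note this writes KV_MODE with an equals sign; the `KV_MODE-json` in the original question is not valid attribute syntax and would be silently ignored.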
Hello, the line breaker in my props configuration for the JSON formatted file is not working; it's not breaking the JSON events. My props and sample JSON events are given below. Any recommendation will be highly appreciated, thank you!

props

[myprops]
CHARSET=UTF-8
KV_MODE-json
LINE_BREAKER=([\r\n]+)\"auditId\"\:
SHOULD_LINEMERGE=true
TIME_PREFIX="audittime": "
TIME_FORMAT=%Y-%m-%dT%H:%M:%S
TRUNCATE=9999

Sample Events

{
  "items": [
    {
      "auditId" : 15067,
      "secId": "mtt01",
      "audittime": "2016-07-31T12:24:37Z",
      "links": [
        { "name":"conanicaldba", "href": "https://it.for.dev.com/opa-api" },
        { "name":"describedbydba", "href": "https://it.for.dev.com/opa-api/meta-data" }
      ]
    },
    {
      "auditId" : 16007,
      "secId": "mtt01",
      "audittime": "2016-07-31T12:23:47Z",
      "links": [
        { "name":"conanicaldba", "href": "https://it.for.dev.com/opa-api" },
        { "name":"describedbydba", "href": "https://it.for.dev.com/opa-api/meta-data" }
      ]
    },
    {
      "auditId" : 15165,
      "secId": "mtt01",
      "audittime": "2016-07-31T12:22:51Z",
      "links": [
        { "name":"conanicaldba", "href": "https://it.for.dev.com/opa-api" },
        { "name":"describedbydba", "href": "https://it.for.dev.com/opa-api/meta-data" }
      ]
    }
  ]
I think I was just a bit confused when I asked this question, haha.

Conflicts only occur when the same stanza has the same attribute with different values; that's when precedence comes in. Otherwise, the stanzas defined across apps are all merged together into one final configuration that is used by that instance, which makes sense.

Thanks for your reply.
Hi @IAskALotOfQs,

first you should analyze your conf files to identify and resolve any conflicts, so that precedence isn't so relevant.

Then, unless your documentation requires installing some specific app or add-on on the Indexers, you could create a single custom add-on (called e.g. "TA_for_Indexers") containing all the conf files you need, usually indexes.conf, with all the required configurations in one place.

Ciao.
Giuseppe
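To illustrate the merging behaviour the thread is circling around (app, index, and path names here are invented for the example): two apps can each contribute different stanzas, and both end up in the effective configuration; only an identical stanza/attribute pair in two files competes on precedence.

```
# etc/apps/TA_for_Indexers/local/indexes.conf
[web_logs]
homePath   = $SPLUNK_DB/web_logs/db
coldPath   = $SPLUNK_DB/web_logs/colddb
thawedPath = $SPLUNK_DB/web_logs/thaweddb

# etc/apps/TA_security/local/indexes.conf
[firewall_logs]
homePath   = $SPLUNK_DB/firewall_logs/db
coldPath   = $SPLUNK_DB/firewall_logs/colddb
thawedPath = $SPLUNK_DB/firewall_logs/thaweddb
```

Both indexes exist at runtime; the merged view can be checked with `splunk btool indexes list --debug`, which also prints which file each effective line came from.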
If I correctly understood what you are asking for, I was able to achieve it by doing this.

| inputlookup <lookup_2>
``` check the host field from lookup_2 against the FQDN field in lookup_1 ```
| lookup <lookup_1> FQDN as host OUTPUT FQDN as host_match
``` check the host field from lookup_2 against the IP field in lookup_1 ```
| lookup <lookup_1> IP as host OUTPUT IP as ip_match
``` coalesce the FQDN and IP matches into one field ```
| eval asset_match=coalesce(host_match, ip_match)
| fields - host_match, ip_match
``` filter out hosts that matches were found for ```
| where isnull(asset_match)

Example of lookup_1: (screenshot)
Example of lookup_2: (screenshot)
Example of final output: (screenshot)

You can see in the final output that the only 2 entries returned are the ones whose host values do not have any matches against FQDN or IP in lookup_1.
Lookup 1: contains fields such as AssetName, FQDN and IP Address.
Lookup 2: contains fields such as Host, Index and source type.
Expected output: the host value from Lookup 2 needs to be compared with the FQDN and IP Address in Lookup 1, and the output must be the details of the missing devices.
I believe this should do it.

<input>
  .....
  <change>
    <condition match="'value'==&quot;BT&quot; OR 'value'==&quot;UART&quot;">
      <unset token="show_wifi_connect_type"></unset>
    </condition>
  </change>
  ...
</input>

You can see in the screenshots that it worked as expected when testing locally.
I am able to see events in index=_internal but not in index=abc for a particular host. What could be the reason?
@madhav_dholakia - Got it. I don't think that level of token manipulation is possible in Dashboard Studio. You can try Simple XML for that. I hope this helps!
I was thinking about this just now...

How is it possible to have more than one app/add-on functioning on an Indexer? Because now that I understand global-level context and precedence, one app's configurations will always take precedence over another's due to lexicographical naming. (I am aware system/local will override all config changes.)

E.g. there is an indexer with 3 apps: Alpha, Bravo and Charlie. Each of their directories will be as follows:

- SPLUNK_HOME/etc/apps/Alpha/local (highest precedence)
- SPLUNK_HOME/etc/apps/Bravo/local
- SPLUNK_HOME/etc/apps/Charlie/local (lowest precedence)

If I want my indexer to have Charlie's functionality, that wouldn't work if I have the 2 above in the example running.

What is a fix for this?