All Posts

@dtburrows3  Thank you for your support! It works!
This was a writeup that I did for this.

Backup Splunk
Stop Splunk and back up the entire Splunk folder if able.
/opt/splunk/bin/splunk stop
tar -zcvf splunk_pre_secret.tar.gz /opt/splunk/etc

Find encrypted passwords
find /opt/splunk/etc -name '*.conf' -exec grep -inH '\$[0-9]\$' {} \;
Record the context (file location, stanza, parameter). The hashed passwords can be decrypted with the following:
/opt/splunk/bin/splunk show-decrypted --value 'PASSWORDHASH'

Updating the splunk.secret
Copy the splunk.secret file from 192.168.70.2 to /opt/splunk/etc/auth/splunk.secret on the target system.
cp /home/dapslunk/splunk.secret /opt/splunk/etc/auth/splunk.secret
Ensure the permissions are correct (400).
ll /opt/splunk/etc/auth/splunk.secret

Update all of the password sections
Use the following to find any missed passwords that have not been corrected.
find /opt/splunk/etc -name '*.conf' -exec grep -inH '\$[0-9]\$' {} \;

Restart Splunk
/opt/splunk/bin/splunk restart

Verify
Access to the Splunk GUI
Any splunk commands that require authentication work
Connection to the license master / cluster / deployment server
Any inputs have data coming in
LDAP authentication works
All passwords are encrypted (use the command from before)
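For the "Update all of the password sections" step, the general idea is to take each value you decrypted under the old splunk.secret and write the plaintext back into its stanza so that Splunk re-encrypts it against the new secret on restart. A rough sketch for a single value (the file path, stanza, and parameter below are examples only; use the locations you recorded earlier):

# On the source system (old splunk.secret): recover the plaintext
/opt/splunk/bin/splunk show-decrypted --value '$7$EXAMPLEHASH'

# On the target system, lock down the new secret
chmod 400 /opt/splunk/etc/auth/splunk.secret

# Put the recovered plaintext back into the stanza it came from, e.g.
#   /opt/splunk/etc/apps/<app>/local/outputs.conf
#   [tcpout:primary]
#   sslPassword = <recovered plaintext>

# Restart so Splunk re-encrypts the plaintext with the new splunk.secret
/opt/splunk/bin/splunk restart

# Confirm the newly written hash decrypts correctly on the target
/opt/splunk/bin/splunk show-decrypted --value '$7$NEWHASH'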
I think this SPL should do what you are asking.

<base_search>
| stats count as count by appName, resultCode
| eventstats sum(count) as app_total_count by appName
| eval pct_of_total=round(('count'/'app_total_count')*100, 3)
| chart sum(pct_of_total) as pct_of_total over appName by resultCode
| fillnull value=0.000

Using dummy data on my local instance I was able to get the output to look like this, where each cell value represents the percentage of events that occurred with that status code for each app. For a full overview of how I simulated locally you can reference this code.

| makeresults
| eval appName="app1", resultCode=500
| append
    [ | makeresults
    | eval appName="app1", resultCode=500 ]
| append
    [ | makeresults
    | eval appName="app1", resultCode=split("404|404|200|404|500|404|200", "|") ]
| append
    [ | makeresults
    | eval appName="app2", resultCode=split("200|200|404|200", "|") ]
| append
    [ | makeresults
    | eval appName="app3", resultCode=split("404|200|200|200|500|404|200", "|") ]
| mvexpand resultCode
``` below is the relevant SPL ```
| stats count as count by appName, resultCode
| eventstats sum(count) as app_total_count by appName
| eval pct_of_total=round(('count'/'app_total_count')*100, 3)
| chart values(pct_of_total) as pct_of_total over appName by resultCode
| fillnull value=0.000

Edit: To use this at scale (more than 10 unique status codes) you would need to add a limit=<number> parameter to the chart command, otherwise it will populate another column named "OTHER" after 10 unique values are transformed. This is what a bigger table would look like. Notice there is also another field at the right of the table to give more context on the total count of status codes seen for each application, so that percentages can be related back to a total count if desired. Code to do this would look something like this (comments added for each line to add detail).

<base_search>
``` simple count of events for each unique combo of appName/resultCode values ```
| stats count as count by appName, resultCode
``` sum up the counts across each unique appName ```
| eventstats sum(count) as app_total_count by appName
``` add additional rows as a Total count for each unique appName value (this is optional since this step just provides more context to the final table) ```
| appendpipe
    [ | stats sum(count) as count by appName
    | eval resultCode="Total", app_total_count='count' ]
``` calculate the percentage of occurrence for a particular resultCode across each unique appName, with the exception of the Total rows, for which we just carry over the total app count (for context) ```
| eval pct_of_total=if(
    'resultCode'=="Total",
        'count',
        round(('count'/'app_total_count')*100, 2)
    )
``` This chart command builds a sort of contingency table, using the derived 'pct_of_total' values and putting them in their respective cell, with the appName values in the left-most column and the values from resultCode as the table header.
Note: if a specific resultCode shows up for one app but not for another, charting this way will leave a blank slot in that cell. To get around this we use a fillnull command in the next step of the search.
Adding the limit=<number> parameter sets the upper limit on the number of columns to generate for unique resultCode values. This defaults to 10 if not specified, and resultCode values exceeding the top 10 will fall into the bucket "OTHER" and be tabled that way.
Example: if you only want to see the top 3 resultCode values in the table and don't want an additional column "OTHER", you also have the option to set 'useother=false' along with 'limit=3' ```
| chart limit=100 useother=true sum(pct_of_total) over appName by resultCode
``` rename the "Total" column to something more descriptive ```
| rename Total as "Total Count for App"
``` fill any blank cells with 0 as this is the implied percentage if it is null ```
| fillnull value=0.00
Hi, how long afterwards did you get the results?
Hi @Mahendra.Shetty, As you noticed, this question was asked over a year ago. It may not get a reply. I would suggest asking the question fresh on the community, or you can even try contacting Support since you are a partner: How do I submit a Support ticket? An FAQ. If you happen to find a solution, please do share it here or on your new post, if you make one.
Hey, thanks for the feedback.  Yes, I tried stats/eventstats.  I can get the counts to show, but I don't see how I can get the query to count up the total per app, then use that to calculate the percent of each error per app.  Everything I try, like the snips I showed in my question, just doesn't seem to get me closer to being able to show percentages.
Hey Giuseppe, thanks for the thought.  I did try the chart command.  The way you have it does show it more cleanly than what I had in my initial question, but I still have the same question/problem of how I can calculate and show the values as percentages.  Everything I do with just chart is still showing the counts and not percents.
The LDAP settings you used are probably incorrect.  Verify them with your LDAP admin. Confirm the firewall(s) allow connections from the Splunk search heads to the LDAP server.
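A couple of quick checks from a search head can help narrow it down (the host name, port, bind DN, and search base below are placeholders for your own values):

# Check network reachability to the LDAP server
nc -vz ldap.example.com 636

# Test the bind DN and search base outside of Splunk (requires the OpenLDAP client tools)
ldapsearch -H ldaps://ldap.example.com:636 -D "CN=splunk_bind,OU=Service Accounts,DC=example,DC=com" -W -b "DC=example,DC=com" "(sAMAccountName=testuser)" cn

# Review the authentication.conf strategy Splunk actually resolved
/opt/splunk/bin/splunk btool authentication list --debug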
Having access to _internal does not mean you have access to all other indexes.  Check with your Splunk admin to see if your role is allowed to access index abc.
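If you want to check for yourself which indexes your role can currently search, a rough sketch from the search bar (the rest call additionally requires permission to read the roles endpoint, and the role name is a placeholder):

``` list the indexes this account can actually see ```
| eventcount summarize=false index=* index=_*
| dedup index
| table index

``` or, with access to the roles endpoint, inspect the role's allowed indexes ```
| rest /services/authorization/roles splunk_server=local
| search title="your_role"
| table title srchIndexesAllowed srchIndexesDefault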
Splunk is unable to search frozen buckets in any location.  Frozen buckets must be thawed before they can be searched. As I understand it, FS-S3 is intended to allow searching of raw data resident in an S3 bucket.  It's not for searching "cooked" data.
Please note: I added a missing "2" at the end of the transforms.conf code. If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Is federated search able to search frozen buckets in s3? Or only raw logs?
If you want to search the Splunk internal audit events, use the _audit index:
index=_audit
ref: https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/AuditSplunkactivity
If you would like to fetch this information using the API, you can run a Splunk search by configuring a Splunk-authenticated user with permission to read the _audit index, generating them a token, then using the REST API to dispatch a search and return the results.
Ref: https://docs.splunk.com/Documentation/Splunk/9.1.2/RESTREF/RESTsearch#search.2Fv2.2Fjobs.2Fexport, specifically the /search/v2/jobs/export endpoint, including your authorization token and the Splunk search to list the audit events.
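As a rough sketch of that REST call (the host name and token are placeholders, and the search string can be whatever audit query you need):

# Dispatch a search against the export endpoint and stream back the results as JSON
curl -k https://splunk.example.com:8089/services/search/v2/jobs/export \
  -H "Authorization: Bearer <your_token>" \
  -d search="search index=_audit earliest=-24h" \
  -d output_mode=json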
Testing this sample file on my local instance, I think something like this could work.

[ <SOURCETYPE NAME> ]
...
LINE_BREAKER=([\r\n]+)\s*\{\s*[\r\n]+\s*\"auditId\"
TIME_FORMAT=%Y-%m-%dT%H:%M:%S
TIME_PREFIX=(?:.*[\r\n]+)*\"audittime\":\s*\"
SEDCMD-remove_trailing_comma=s/\,$//g
SEDCMD-remove_trailing_bracket=s/\][\r\n]+$//g
TRANSFORMS-remove_header=remove_json_header

This is a parsed event from the sampled file. I am getting a warning about the timestamp, but this is not because it is unable to find it, but because the datetime exceeds my set limit for MAX_DAYS_AGO/MAX_DAYS_HENCE.
Note the transform included in the props. This is needed to remove the first part of the JSON file that the events are nested in. There will need to be an accompanying stanza in transforms.conf specifying the regex used to recognize the event to send to the null queue. It would probably look something like this.

[remove_json_header]
REGEX = ^\s*\{\s*[\r\n]+\"items\":\s*\[
DEST_KEY = queue
FORMAT = nullQueue
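If events still do not break as expected after applying this, it can help to confirm which props and transforms the parsing instance actually resolved, for example with btool (the sourcetype name below is a placeholder):

/opt/splunk/bin/splunk btool props list your_sourcetype --debug
/opt/splunk/bin/splunk btool transforms list remove_json_header --debug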
I recommend using the website https://regex101.com/ to test your regex and ensure it is definitely matching. When your regex is inserted, it does not seem to match the space character between "auditId" and the following colon (:).
I would also recommend splitting the json events so that they have the curly brackets like so:
{ "event1keys" : "event1values", .... }
{ "event2keys" : "event2values", .... }
Thus your LINE_BREAKER value should also match the opening curly brace and its newline, and its first capture group should include the discardable characters between events such as commas:
LINE_BREAKER=(,?[\r\n]+){\s*\"auditId\"
I also recommend setting SHOULD_LINEMERGE to false to prevent Splunk from re-assembling multi-line events after the split.
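Putting these suggestions together, the props.conf stanza might end up looking roughly like this (kept close to the stanza from the question; treat the regexes as a starting point and verify them against your real data on regex101):

[myprops]
CHARSET = UTF-8
KV_MODE = json
SHOULD_LINEMERGE = false
LINE_BREAKER = (,?[\r\n]+)\{\s*"auditId"
TIME_PREFIX = "audittime":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TRUNCATE = 9999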
Hello, the line breaker in my props configuration for the json formatted file is not working; it's not breaking the json events. My props and sample json events are given below. Any recommendation will be highly appreciated, thank you!

props
[myprops]
CHARSET=UTF-8
KV_MODE-json
LINE_BREAKER=([\r\n]+)\"auditId\"\:
SHOULD_LINEMERGE=true
TIME_PREFIX="audittime": "
TIME_FORMAT=%Y-%m-%dT%H:%M:%S
TRUNCATE=9999

Sample Events
{
  "items": [
    {
      "auditId" : 15067,
      "secId": "mtt01",
      "audittime": "2016-07-31T12:24:37Z",
      "links": [
        { "name":"conanicaldba", "href": "https://it.for.dev.com/opa-api" },
        { "name":"describedbydba", "href": "https://it.for.dev.com/opa-api/meta-data" }
      ]
    },
    {
      "auditId" : 16007,
      "secId": "mtt01",
      "audittime": "2016-07-31T12:23:47Z",
      "links": [
        { "name":"conanicaldba", "href": "https://it.for.dev.com/opa-api" },
        { "name":"describedbydba", "href": "https://it.for.dev.com/opa-api/meta-data" }
      ]
    },
    {
      "auditId" : 15165,
      "secId": "mtt01",
      "audittime": "2016-07-31T12:22:51Z",
      "links": [
        { "name":"conanicaldba", "href": "https://it.for.dev.com/opa-api" },
        { "name":"describedbydba", "href": "https://it.for.dev.com/opa-api/meta-data" }
      ]
    }
  ]
I think I was just a bit confused when I asked this question haha.   Conflicts only occur for the same stanza with the same attribute but different values. That's when the precedence comes in. But the other stanzas defined in apps all get joined together into one final conf file that is used for that instance, which makes sense (a small illustration follows below).   Thanks for your reply
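A minimal illustration of that merging behaviour, with made-up app, index, and attribute values:

# apps/app_A/local/indexes.conf
[my_index]
homePath = $SPLUNK_DB/my_index/db
frozenTimePeriodInSecs = 7776000

# apps/app_B/local/indexes.conf
[my_index]
# same attribute with a different value than app_A: precedence decides which one wins
frozenTimePeriodInSecs = 2592000
# attribute not set anywhere else: simply merged into the final configuration
coldPath = $SPLUNK_DB/my_index/colddb

The effective merged configuration, annotated with the file each setting came from, can be checked with:
/opt/splunk/bin/splunk btool indexes list my_index --debug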
Hi @IAskALotOfQs, first you should analyze your conf files and identify and resolve any conflicts, so that precedence isn't so relevant. Then, unless your documentation requires installing some specific app or add-on on the Indexers, you could create a custom add-on (called e.g. "TA_for_Indexers") containing the conf files you need, usually indexes.conf, but only one add-on with all the required configurations (a rough sketch follows below). Ciao. Giuseppe
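Such an add-on can be as small as this (the name, index, and paths below are only examples):

TA_for_Indexers/
    default/
        app.conf
        indexes.conf
    metadata/
        default.meta

# default/indexes.conf
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

It can then be pushed like any other app, for example from the cluster manager's manager-apps directory for clustered indexers, or via a deployment server for standalone ones.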
If I correctly understood what you are asking for, I was able to achieve it by doing this.

| inputlookup <lookup_2>
``` checking for a match of the host field from lookup_2 against the FQDN field in lookup_1 ```
| lookup <lookup_1> FQDN as host OUTPUT FQDN as host_match
``` checking for a match of the host field from lookup_2 against the IP field in lookup_1 ```
| lookup <lookup_1> IP as host OUTPUT IP as ip_match
``` coalesce the fqdn and ip matches into one field ```
| eval asset_match=coalesce(host_match, ip_match)
| fields - host_match, ip_match
``` filter off hosts that matches were found for ```
| where isnull(asset_match)

Example of lookup_1:
Example of lookup_2:
Example of final output:
You can see in the final output that the only 2 entries returned are the ones whose host values do not have any matches against FQDN or IP in lookup_1.