All Posts

Hi @SN1368, may I know some more details please: is this a production Splunk or a testing/dev Splunk? Is it clustered or non-clustered? How many UFs have you got?
See if this helps. It uses actual times rather than relative ones, but the format is there.

index=_internal status=* earliest=-30m
``` Get the most recent status for each API every 5 minutes ```
| timechart span=5m latest(status) as status by API
``` Convert timestamp to time (HH:MM) ```
| eval _time=strftime(_time,"%H:%M")
``` Flip the display so time is across the top and API down the side ```
| transpose 0 header_field=_time column_name="API"
``` Fill in blank cells ```
| fillnull value="-"
Hi, we are logging API requests in Splunk. I would like to create a sort of health-check table where every column represents the status code of the last API call in the previous 5 minutes, while each row is a different API. Here is an example of what the output should be. Any idea how I could achieve that in Splunk? Each row represents a different API (request.url), while the status code is stored in response.status. Thank you
One of these should work, depending on which count must be greater than 2. Note that the Mssql.LogBackupFailed clause has to stay in the base search so those events can be counted and then filtered, and that field names containing dots must be single-quoted inside where/eval expressions.

index=idx-sec-cloud sourcetype=rubrik:json NOT summary="*on demand backup*"
    (custom_details.eventName="Snapshot.BackupFailed" NOT (custom_details.errorId="Oracle.RmanStatusDetailsEmpty"))
    OR (custom_details.eventName="Mssql.LogBackupFailed")
    OR (custom_details.eventName="Snapshot.BackupFromLocationFailed" NOT (custom_details.errorId="Fileset.FailedDataThresholdNas" OR custom_details.errorId="Fileset.FailedFileThresholdNas" OR custom_details.errorId="Fileset.FailedToFindFilesNas"))
    OR (custom_details.eventName="Vmware.VcenterRefreshFailed")
    OR (custom_details.eventName="Hawkeye.IndexOperationOnLocationFailed")
    OR (custom_details.eventName="Hawkeye.IndexRetryFailed")
    OR (custom_details.eventName="Storage.SystemStorageThreshold")
    OR (custom_details.eventName="ClusterOperation.DiskLost")
    OR (custom_details.eventName="ClusterOperation.DiskUnhealthy")
    OR (custom_details.eventName="Hardware.DimmError")
    OR (custom_details.eventName="Hardware.PowerSupplyNeedsReplacement")
| eventstats count
| where 'custom_details.eventName'!="Mssql.LogBackupFailed" OR count > 2
| eventstats earliest(_time) AS early_time latest(_time) AS late_time

If the count that must exceed 2 is the number of Mssql.LogBackupFailed events only (rather than the total), replace the first eventstats with:

| eventstats count(eval('custom_details.eventName'="Mssql.LogBackupFailed")) AS count
Hi All, below is my query:

index=idx-sec-cloud sourcetype=rubrik:json NOT summary="*on demand backup*"
    (custom_details.eventName="Snapshot.BackupFailed" NOT (custom_details.errorId="Oracle.RmanStatusDetailsEmpty"))
    OR (custom_details.eventName="Mssql.LogBackupFailed")
    OR (custom_details.eventName="Snapshot.BackupFromLocationFailed" NOT (custom_details.errorId="Fileset.FailedDataThresholdNas" OR custom_details.errorId="Fileset.FailedFileThresholdNas" OR custom_details.errorId="Fileset.FailedToFindFilesNas"))
    OR (custom_details.eventName="Vmware.VcenterRefreshFailed")
    OR (custom_details.eventName="Hawkeye.IndexOperationOnLocationFailed")
    OR (custom_details.eventName="Hawkeye.IndexRetryFailed")
    OR (custom_details.eventName="Storage.SystemStorageThreshold")
    OR (custom_details.eventName="ClusterOperation.DiskLost")
    OR (custom_details.eventName="ClusterOperation.DiskUnhealthy")
    OR (custom_details.eventName="Hardware.DimmError")
    OR (custom_details.eventName="Hardware.PowerSupplyNeedsReplacement")
| eventstats earliest(_time) AS early_time latest(_time) AS late_time

I am trying to write a condition that excludes all custom_details.eventName="Mssql.LogBackupFailed" events (the clause on line 3) unless the count is greater than 2. Not sure how to proceed. Thanks in advance.
Hello, I would like to pass one of the columns/fields from one of my Splunk search results as an input to a Lambda function using the "Add to SNS alert" action option. The value I am trying to pass is a URL, which I have saved under the field "url". In the action options I am entering the AWS account name and topic name, but I am unable to understand what exactly I should pass in the message field. Currently I have set the message field to $result.url$, but it's not working. Please can someone help me with this.
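The $result.fieldname$ tokens in alert actions are filled from the first row of the search's final results, so the url field has to survive to the end of the search pipeline. A minimal sketch (index, sourcetype, and field names are placeholders):

index=api_logs sourcetype=api_requests
``` keep only the value the alert action needs; $result.url$ reads the first result row ```
| stats latest(url) AS url
| table url

With url present as a column in the final results, setting the SNS action's message field to $result.url$ should pass the URL through to the topic, and from there to the Lambda function.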
Hello, after I upgraded Splunk to version 9.1.1, some panels of the overview page in the distributed Monitoring Console were not populated and were empty. The other tabs in the distributed Monitoring Console do not have any problems. Does anyone have any idea what the problem is? Thanks
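A generic first triage step (a sketch, not specific to this thread's root cause; the host name is a placeholder) is to look for splunkd errors on the Monitoring Console host since the upgrade:

index=_internal host=<mc_host> sourcetype=splunkd log_level=ERROR earliest=-24h
| stats count by component
| sort - count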
Hello, I am attempting a Splunk search via a macro. When using the Splunk search UI, the relevant information comes up; however, despite using the same user, the API search doesn't get the same results. After further investigation, when looking for the macro in the web UI I can see it, but not via the API. What could be causing this?
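One thing worth checking (a sketch; the macro name is a placeholder) is the macro's sharing and namespace as the REST layer sees it, via the generic configs endpoint:

| rest /servicesNS/-/-/configs/conf-macros splunk_server=local
| search title="my_macro*"
| table title eai:acl.app eai:acl.owner eai:acl.sharing

If the macro is private or app-scoped, an API search submitted outside that namespace (e.g. to /services/search/jobs instead of /servicesNS/<user>/<app>/search/jobs) will not resolve it.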
@bwhite have you checked the internal logs for the remaining 2 servers which are not reporting to Splunk?
Hi @bwhite, some checks:
- do you have other logs (e.g. internal or application) from the missing servers? (see the sketch below)
- did you check that the TA_nix was correctly deployed to those servers?
- did you check that on those servers the user running Splunk has the rights to read the files and execute the scripts?
Ciao. Giuseppe
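For the first check, a minimal sketch (replace the hostnames with the real ones) to see whether the missing servers are sending anything at all:

index=_internal host IN ("missing-server-1", "missing-server-2")
| stats count by host sourcetype source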
Hi @felipesodre, correct, sorry! Ciao. Giuseppe
Hi @splunkguy, to move a standalone Search Head to another standalone SH, you follow the same steps, obviously without the cluster:
- make a copy of the apps to migrate from the old SH,
- copy those apps to the new SH in $SPLUNK_HOME/etc/apps.
Ciao. Giuseppe
Hi @ucorral, let me understand: Splunk reads the correct timestamp from the log. Did you configure the timezone in props.conf (https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/Applytimezoneoffsetstotimestamps)? Then, did you configure the timezone for your user in the GUI [<your_user_name> > Preferences]? Ciao. Giuseppe
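A quick way to tell a timezone mis-parse from a transport delay (a sketch; point it at any affected index) is to compare the parsed event time with the index time:

index=<your_index> earliest=-1h
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval lag_seconds=_indextime - _time
| table event_time index_time lag_seconds

A roughly constant lag_seconds of 21600 (6 hours) would point to timezone configuration rather than forwarding delays.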
Hello @PickleRick, thanks for the heads up, I'll delete them from the props.conf. However, the information is still arriving 6 hours late. What could be the best recommendation? Thanks,
I edited the conf files on my local server before deploying, so I know they are all identical. I have 5 servers. I copied the Splunk_TA_nix folder to apps. 3 of the 5 have data showing up for the new "os" index. splunkd.log, in fact the whole splunk/log folder, didn't have any errors. But it also didn't have any mention of "idx=os" on the missing servers. I ran some of the scripts in Splunk_TA_nix/bin in debug mode. No errors. What log file or index do I check to debug the issue?
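Scripted-input failures from the TA are logged by splunkd's ExecProcessor component, and those events reach the _internal index even when nothing lands in the os index. A sketch (the hostname is a placeholder):

index=_internal sourcetype=splunkd component=ExecProcessor host="missing-server*"
| table _time log_level _raw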
You mean

index=activitylog_activityreceiver Environment="AWS-DEV6" MessageTemplate="Received Post Method for activity: {Activity}"
| spath input=Properties.Activity
| table ActivityType, ClientId, Source, Properties.Activity
OK, but if you don't have this information in the logs, how is Splunk supposed to help here? It's the source's responsibility to produce logs. If you have means of 1) identifying unambiguously which instance of a program hit the firewall rule and 2) logging the spawning of processes, then maybe you could somehow correlate the two. But if you don't have this info, how would you like to get it? Guess?
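If both kinds of telemetry do exist, the correlation could look something like this sketch (index names, sourcetypes, and fields are all assumptions; Sysmon network-connection events, EventCode 3, are one example of a source that ties a connection to a process):

(index=fw sourcetype=firewall action=blocked) OR (index=wineventlog EventCode=3 sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational")
``` build one key per connection across both sources ```
| eval conn=coalesce(src_ip, SourceIp) . ":" . coalesce(src_port, SourcePort)
``` merge firewall rule hits with the process that opened the connection ```
| stats values(rule) AS fw_rule values(Image) AS process by conn
| where isnotnull(fw_rule) AND isnotnull(process)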
For reasons I don't understand, the problem is fixed when I increase the JVM option -XX:MaxJavaStackTraceDepth above 10. Any value below 10 causes the NoClassDefFoundError. If anyone has an explanation, it would be much appreciated (does it have anything to do with the maximum-activity-trace-stack default value of 10?).
@gcusello Small correction - a bucket is eligible for rotation to frozen if the _latest_ event in it is older than the retention limit, not the earliest.
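A sketch to verify this on a live index (the index name is a placeholder): dbinspect reports each bucket's event-time span, and freezing is driven by the latest event time (endEpoch) crossing frozenTimePeriodInSecs:

| dbinspect index=<your_index>
| eval earliest_event=strftime(startEpoch, "%F %T"), latest_event=strftime(endEpoch, "%F %T")
| table bucketId state earliest_event latest_event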
It looks as if you are getting your events from the file but the date is being parsed in the wrong timezone. Most of the settings you showed belong on the receiving Splunk instance, not the forwarder. And don't set SHOULD_LINEMERGE to true - it should almost never be set to true. Also, indexed extractions should not be overused, and this case doesn't seem to be one justifying their use (here I disagree with @gcusello).
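A minimal props.conf sketch of the parsing-time fix (the sourcetype name and timezone are placeholders; it belongs on the indexer or first heavy forwarder, not on the UF):

# props.conf on the parsing instance
[your_sourcetype]
# interpret timestamps that carry no explicit offset as this timezone
TZ = America/Chicago
# break events without line merging
SHOULD_LINEMERGE = false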