I did a recent upgrade of Splunk, but now I notice my clients are not phoning home for some reason. This is my first upgrade in a production environment, so any help troubleshooting would be great. I still see my client configs on the backend, but I'm not sure why they are not reporting in the GUI.
I'm using Splunk Enterprise 9.2.1 on Windows. On my search head I have a bunch of apps (40+) laid out as follows:

/etc/apps/myapp1
/etc/apps/myapp2
/etc/apps/myapp3

etc. Each app, of course, has its own dashboards defined. Now I'd like to group all these dashboards under one app and create a menu system for them. I control each app under Git and can deploy them using a DevOps cycle. What I would like to do is create this new app but simply reference the dashboards that reside in the other apps, so that I keep my source/version control. Is this possible, or would I simply have to copy all the dashboards/views into this new app?
So I did some research into when the uptick happened. It started last Monday, before I started upgrading Splunk. I blacklisted the hosts that were producing the large amount of audit logs and reached out to the department for those hosts. It looks like it wasn't an app, but rather servers possibly added or ingesting more due to a change. I will find out more once the department responds. Until then, I will keep them blacklisted so that we stay under our license amount.
Thanks, this worked. Two additional questions: why the chart command specifically? And for this statement: | eval total=true + false — is the reason this line works that there are only two values available from the previous statement, namely true and false? It is not the case here, but if there were three values (full, partial, and none), would the same type of statement require defining these somewhere beforehand?
The problem lies with the stats command.  It's a transforming command that only returns the fields explicitly named in the command.  That means not all of the fields used in later calculations are available so the calculation results are null.
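To illustrate the point about stats dropping fields, here is a minimal, self-contained sketch (using makeresults and made-up field names, not the poster's actual data): any value you need downstream has to be produced by the stats command itself, for example as an aggregation, so it survives the transformation.

```spl
| makeresults count=4
| eval userPrincipalName="alice", compliant=(random() % 2)
| stats sum(compliant) as compliant, count as total by userPrincipalName
| eval percent=round((compliant / total) * 100, 1)
```

Because compliant and total are both emitted by stats here, the later eval has real values to work with instead of nulls.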
Here are some old posts that explain why this works the way it does.

https://community.splunk.com/t5/Deployment-Architecture/Right-number-and-size-of-hot-warm-cold-buckets/m-p/681343/highlight/true#M27995
https://community.splunk.com/t5/Getting-Data-In/Indexes-configuration/m-p/564271
https://community.splunk.com/t5/Splunk-Enterprise/Splunk-shows-only-9-months-270-days-data-How-do-I-increase-the/td-p/624944
https://community.splunk.com/t5/Deployment-Architecture/Hot-Warm-Cold-bucket-sizing-How-do-I-set-up-my-index-conf-with/m-p/634696
https://community.splunk.com/t5/Splunk-Enterprise/Why-is-the-age-of-the-data-larger-than-the/m-p/548676
https://community.splunk.com/t5/Deployment-Architecture/How-do-I-expire-a-bucket-with-future-events/m-p/32158
https://community.splunk.com/t5/Deployment-Architecture/script-to-convert-the-bucket-name-to-the-time-range-of-the-data/m-p/91126

And there are a lot more to be found by searching for this issue on Google.
base search
| eval DeviceCompliance='deviceDetail.isCompliant'
| chart count by userPrincipalName DeviceCompliance
| eval total=true + false
| rename true as compliant
| eval percent=((compliant/total)*100)
| table userPrincipalName compliant total percent
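For the three-value follow-up question (full, partial, none), a sketch along the same lines; the status field name here is hypothetical, and fillnull guards against a column that never occurs in the chart results:

```spl
base search
| chart count by userPrincipalName status
| fillnull value=0 full partial none
| eval total=full + partial + none
| eval percent=round((full / total) * 100, 1)
| table userPrincipalName full partial none total percent
```

So yes, with more than two values you would name each resulting column explicitly when computing the total, since chart creates one column per distinct value of the split-by field.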
Do you mean to use your "base search" and then filter based on profile for each panel? For example | where profile="Dev"  
Could you please try: https://xxxx.signalfx.com/#/apm/traces/2459682daf1fe95db9bbff2042a1ec0e?startTime=-15m&endTime=Now  
Here is what I can see on my own test instance. As you can see, there is a lot more information than just the count of buckets. You can also click that magnifying glass to see the exact SPL query used to get this information. Then you can modify it to better suit your needs.
Are you using the same Java version on both environments?
@jawahir007 It's very tedious to specify the start and end times of an 11-month-old bucket. Suppose the data age of an event is 427 days and I want to delete 30 days of data; the data age would then be 397 days. Now how can I identify the epoch start and end times of the bucket?
@isoutamo Yes, you are correct. The acceleration detail has a Summary Id, which corresponds to the savedsearch_name: _ACCELERATE_<redacted>_search_nobody_<Summary Id>_ACCELERATE_ This confirms the issue is the License Usage Data Cube report/acceleration. I will need to adjust the search resources to prevent the skipping. Thank you!!!
@isoutamo Yes, from the MC I can at least get a reference for the bucket counts.
Hi @uagraw01: The Splunk SPL command below might be helpful for you...

| dbinspect index=your_index

In Splunk, the dbinspect command is used to gather detailed metadata about the index buckets in a specified index or set of indexes. This command provides information about the state, size, and other characteristics of the index buckets, which can help with monitoring storage, troubleshooting indexing issues, and understanding how Splunk is managing the data on disk.

Key information provided by dbinspect:
- Bucket ID: A unique identifier for each bucket.
- Index Name: The index to which the bucket belongs.
- Start and End Time: The time range of events contained within the bucket.
- Bucket State: The current state of the bucket, such as:
  - hot: Currently being written to.
  - warm: Closed and searchable.
  - cold: Moved to colder storage.
  - frozen: No longer searchable, either deleted or archived.
  - thawed: Restored from archive (frozen) and searchable again.
- Size on Disk: The storage size of the bucket.
- Event Count: The number of events contained in the bucket.
- Size before Compression: The size of the bucket before compression.

Example use cases:

1. Search for buckets by state. To filter for buckets in a specific state (e.g., cold or warm buckets), you can modify the query like this:

| dbinspect index=your_index
| search state="warm" OR state="cold"
| table bucketId, index, startEpoch, endEpoch, state, sizeOnDiskMB

This query filters for buckets that are either in the warm or cold state and displays useful details such as the bucket ID, size, and time range.

2. Analyze bucket sizes. You can use dbinspect to analyze how much storage each bucket is consuming and understand your disk usage:

| dbinspect index=your_index
| stats sum(sizeOnDiskMB) as totalSize by index

This query calculates the total disk size used by the specified index.

3. Find old buckets. To find the oldest buckets in an index based on their time ranges:

| dbinspect index=your_index
| sort startEpoch
| table bucketId, index, startEpoch, endEpoch, state, sizeOnDiskMB

This helps to identify which buckets contain the oldest data and may be candidates for deletion based on your data retention policies.

------

If you find this solution helpful, please consider accepting it and awarding karma points!!
I am working on obtaining all user logins for a specified domain, then displaying what percent of those logins were from compliant devices. I start by creating a couple of fields for 'ease of reading'. These fields do produce data as expected; however, the table comes out with null for the percent values. I have tried the variations below, unfortunately with similar results: when trying to create a total value by creating then combining compliant and noncompliant to divide, the total field does not have data either.

base search
| eval DeviceCompliance='deviceDetail.isCompliant'
| eval compliant=if(DeviceCompliance="true",DeviceCompliance,null())
| stats count as total by userPrincipalName
| eval percent=((compliant/total)*100)
| table userPrincipalName total percent

base search
| eval DeviceCompliance='deviceDetail.isCompliant'
| eval compliant=if(DeviceCompliance="true",DeviceCompliance,null())
| eval noncompliant=if(DeviceCompliance="false",DeviceCompliance,null())
| eval total=sum(compliant+noncompliant)
| stats count by userPrincipalName
| table userPrincipalName compliant total
| eval percent=((compliant/total)*100)
| table userPrincipalName total percent
I have to create a base search for a dashboard and I am kind of stuck. Any help would be appreciated.

index=service msg.message="*uri=/v1/payment-options*" eHttpMethodType="GET"
| fields index, msg.springProfile, msg.transactionId, eHttpStatusCode, eHttpMethodType, eClientId, eURI
| dedup msg.transactionId
| rename msg.springProfile as springProfile
| eval profile = case(like(springProfile, "%dev%"), "DEV", like(springProfile, "%qa%"), "QA", like(springProfile, "%uat%"), "UAT")
| eval request = case(like(eURI, "%/v1/payment-options%"), "PaymentOptions", like(eURI, "%/v1/account%"), "AccountTranslation")
| stats count as "TotalRequests", count(eval(eHttpStatusCode=201 OR eHttpStatusCode=204 OR eHttpStatusCode=200)) as "TotalSuccessfulRequests", count(eval(eHttpStatusCode=400)) as "Total400Failures", count(eval(eHttpStatusCode=422)) as "Total422Failures", count(eval(eHttpStatusCode=404)) as "Total404Failures", count(eval(eHttpStatusCode=500)) as "Total500Failures" by profile, eClientId

Now I want to include the stats in the base search, or else my values/events would be truncated. My problem is that I also need to count

| stats count as "TotalRequests", count(eval(eHttpStatusCode=201 OR eHttpStatusCode=204 OR eHttpStatusCode=200)) as "TotalSuccessfulRequests" by request

for each of the profiles (DEV, QA, UAT) to display in 3 different panels. How do I incorporate this into the above base search?
No, it gave the same error: "Unauthorized access".
Check out the custom Webhook Alert Action on Splunkbase. The default Webhook Alert Action allows you only to configure the URL. Custom Alert Webhook | Splunkbase
That was it!  Thanks for solving!