All Posts

Thanks for the response. I was trying to check the accuracy of both queries, the one you provided and the one from the accepted answer, because I see a difference in their counts. I edited the query to use some mock data. I am not able to use the mock data with the query from the accepted answer, and I would appreciate it if you could help me fix that so that I can compare the results.

| makeresults format=csv data="interactionid,_time,elapsed,msgsource
1,2025-07-31,00:00.756,retrieveAPI
2,2025-07-31,00:00.556,createAPI
3,2025-07-31,00:00.156,createAPI
4,2025-07-31,00:00.256,updateAPI
5,2025-07-31,00:00.356,retrieveAPI
6,2025-07-31,00:00.156,retrieveAPI
7,2025-07-31,00:01.056,createAPI
8,2025-07-31,00:00.256,retrieveAPI
9,2025-07-31,00:06.256,updateAPI
10,2025-07-31,00:10.256,createAPI"
| rex field=elapsed "^(?<minutes>\d+):(?<seconds>\d+)\.(?<milliseconds>\d+)"
| eval TimeMilliseconds = (tonumber(minutes) * 60 * 1000) + (tonumber(seconds) * 1000) + tonumber(milliseconds)
| timechart span=1d count as AllTransactions, avg(TimeMilliseconds) as AvgDuration, count(eval(TimeMilliseconds<=1000)) as "TXN_1000", count(eval(TimeMilliseconds>1000 AND TimeMilliseconds<=2000)) as "1sec-2sec", count(eval(TimeMilliseconds>2000 AND TimeMilliseconds<=5000)) as "2sec-5sec" by msgsource
| untable _time msgsource count
| eval group=mvindex(split(msgsource,": "),0)
| eval msgsource=mvindex(split(msgsource,": "),1)
| eval _time=_time.":".msgsource
| xyseries _time group count
| eval msgsource=mvindex(split(_time,":"),1)
| eval _time=mvindex(split(_time,":"),0)
| table _time msgsource AllTransactions AvgDuration TXN_1000 "1sec-2sec" "2sec-5sec"

This query creates the table, but the counts are all 0s. And here is the edited version of the query you shared, which does show results:

| makeresults format=csv data="interactionid,_time,elapsed,msgsource
1,2025-07-31,00:00.756,retrieveAPI
2,2025-07-31,00:00.556,createAPI
3,2025-07-31,00:00.156,createAPI
4,2025-07-31,00:00.256,updateAPI
5,2025-07-31,00:00.356,retrieveAPI
6,2025-07-31,00:00.156,retrieveAPI
7,2025-07-31,00:01.056,createAPI
8,2025-07-31,00:00.256,retrieveAPI
9,2025-07-31,00:06.256,updateAPI
10,2025-07-31,00:10.256,createAPI"
| eval total_milliseconds = 1000 * (strptime("00:" . elapsed, "%T.%N") - relative_time(now(), "-0d@d"))
| eval timebucket = case(total_milliseconds <= 1000, "TXN_1000", total_milliseconds <= 2000, "1sec-2sec", total_milliseconds <= 5000, "2sec-5sec", true(), "5sec+")
| rename msgsource as API
| bucket _time span=1d
| eventstats avg(total_milliseconds) as AvgDur by _time API
| stats count by AvgDur _time API timebucket
| tojson output_field=api_time _time API AvgDur
| chart values(count) over api_time by timebucket
| addtotals
| spath input=api_time
| rename time as _time
| fields - api_time

Your query shows the correct results, but the fields are not in the order I want to display them. Any help fixing both queries would be appreciated.
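One thing worth checking here (an assumption on my part, not something confirmed in this thread): makeresults format=csv loads _time as the literal string "2025-07-31", and timechart can only bucket events that carry a real epoch timestamp inside the search time range. A minimal sketch of that fix, parsing the date right after the makeresults (only one mock row shown for brevity):

| makeresults format=csv data="interactionid,_time,elapsed,msgsource
1,2025-07-31,00:00.756,retrieveAPI"
``` convert the CSV date string to epoch seconds so timechart/bucket can use it ```
| eval _time=strptime(_time, "%Y-%m-%d")

If the counts are still zero after that, also make sure the time range picker covers 2025-07-31, since timechart only emits buckets inside the selected range.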
Hi @splunklearner

Without your full dashboard code it's going to be hard for me to make these changes blind. However, if you have a look at the CSS within the code I provided, there are a number of settings you can update, such as font-size, which is currently 15px but could be changed down to 10px for much smaller text.

If this has been helpful please consider adding karma to the relevant posts.

Many thanks
Will
Hi @muku

How does the app convert the file? Is it that the app uses a monitor:// stanza within inputs.conf and then applies props/transforms to manipulate the file, or is it done with a modular input? Ultimately, the app might need to go on a forwarder if the data resides there or is pulled from there, and/or on the indexers if index-time extractions are being applied. If search-time extractions are applied, then the app will also need to go on the search heads. If you're able to provide more info then we will be able to give more tailored advice.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
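For illustration only - the path, index and sourcetype below are hypothetical, since the add-on's internals haven't been shared - a monitor input on the forwarder would look something like this in inputs.conf:

# inputs.conf (sketch) - tail the directory the ckls files land in
[monitor:///opt/customer/ckls_files]
index = customer_data
sourcetype = ckls
disabled = 0

If the add-on instead uses a modular input, its own stanza type would replace the monitor:// one, but the placement logic above (forwarder for collection, indexers/search heads for extractions) stays the same.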
Hi @abhi04

Are you using Aggregation Policies to trigger your alerts, or KPI Alerts? I'm not sure how to achieve this with KPI Alerts, but if you are using aggregation policies then you might be able to add some logic in there (similar to how you would apply a lookup) to do an eval based on the value.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I need to configure a certain customer app to ingest files. Those files need an add-on which will convert them so they can be read by Splunk; they are in ckls format. I have the add-on already and I have already configured it in the deployment apps. How do I connect it with the customer app so that the data shows on the dashboard?
The performance notice text seems misaligned, and I want this box to be a bit smaller (maybe length-wise), because I have nearly 10 dropdowns, and below those there is more text and then the panels. Because this note is so big, the panels are not visible initially and users need to scroll down. It feels a bit awkward to me.
I have a KPI for Instance status:

index=xxxxx source="yyyyy"
| eval UpStatus=if(Status=="up",1,0)
| stats last(UpStatus) as val by Instance host Status

Now val is 0 or 1, and the Status field is up or down. The split-by field is Instance host, and the threshold is based on val. The alert triggers fine, but I want to put $result.Status$ in the email alert instead of $result.val$. However, I don't see the field Status in tracked alerts. How can I make the Status field show up in the tracked alerts index or in the generated events, so that I can use it in my email? (This is to avoid confusion: instead of saying 0 or 1, it will say up or down.)
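A sketch of one possible workaround (this assumes the KPI base search can be edited, which depends on the ITSI setup; StatusText is a made-up name here): keep a human-readable copy of the status next to the numeric threshold field so it is available as a $result...$ token:

index=xxxxx source="yyyyy"
| eval UpStatus=if(Status=="up",1,0)
``` carry the readable text alongside the numeric value used for thresholding ```
| stats last(UpStatus) as val last(Status) as StatusText by Instance host

Whether extra fields survive into the tracked-alert events depends on how the correlation search or aggregation policy is configured, so treat this as a starting point rather than a definitive fix.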
Hi @splunklearner

Sorry, I'm not following what you're asking for here, please could you clarify? You can edit the HTML to say whatever you need, just apply the same styling as in the example.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @elend

Yes, you can configure Splunk (since 7.2, I think) to use a mixture of local storage and S3-compliant storage, including the likes of Amazon S3, using Splunk's SmartStore functionality. This essentially uses your local storage for hot buckets and as a local cache for buckets which are also stored in S3. It's more of a complex beast than I can go into here, and there are lots of things to consider - for example, this is generally considered a one-way exercise!

https://docs.splunk.com/Documentation/SVA/current/Architectures/SmartStore gives a good overview of the architecture, benefits and next steps. Check out https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.3/manage-smartstore/configure-smartstore for more info on setting up SmartStore, as well as https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.4/deploy-smartstore/deploy-smartstore-on-a-new-standalone-indexer which has some info on setting this up on a single indexer (as a starter; this will depend on your specific environment architecture).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
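For orientation only, a minimal indexes.conf sketch of the SmartStore shape (the volume name, bucket name and endpoint below are placeholders, not values from this thread):

# indexes.conf (sketch) - one remote volume, applied to all indexes
[volume:remote_store]
storageType = remote
path = s3://your-bucket-name/splunk-indexes
# set this for S3-compatible stores or region-pinned endpoints
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[default]
# each index stores its warm buckets under its own prefix in the volume
remotePath = volume:remote_store/$_index_name

As the links above stress, migrating an index onto SmartStore is effectively one-way, so trial this on a non-production indexer first.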
Hi @Narendra_Rao

There are a number of different ways to get AWS CloudWatch logs out of AWS into your on-prem environment; ultimately I think this will depend on where your VPN terminates and which AWS services can connect to it.

I tend to go with AWS Firehose, which sends to your Splunk HEC endpoint - check out https://aws.amazon.com/blogs/big-data/deliver-decompressed-amazon-cloudwatch-logs-to-amazon-s3-and-splunk-using-amazon-data-firehose/ for more information on this.

Alternatively, you can send using AWS Lambda instead of Firehose; this also sends to HEC - check out https://www.splunk.com/en_us/blog/platform/stream-amazon-cloudwatch-logs-to-splunk-using-aws-lambda.html for more info on this.

There may be others, but ultimately it depends on your connection - do either of these look suitable for your environment? Let me know if you need more info.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
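Both of those routes land on a HEC token on the Splunk side, which is just an inputs.conf stanza; a sketch (the token value, index and sourcetype here are placeholders):

# inputs.conf on the HEC-receiving instance (sketch)
[http]
disabled = 0

[http://cloudwatch-firehose]
# paste the GUID that Splunk Web generates when you create the token
token = 00000000-0000-0000-0000-000000000000
index = aws_cloudwatch
sourcetype = aws:cloudwatchlogs
# Firehose delivery requires indexer acknowledgement on the token
useACK = 1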
We already have logs in CloudWatch. What is the best way to get the logs from CloudWatch to Splunk on-prem? We also have a VPN established between them. Based on this, are there any add-ons, or viable solutions other than add-ons? If yes: any details/steps, etc.
You've just stumbled across SmartStore (S2).  S2 keeps hot buckets local and copies warm buckets to S3.  A cache of roughly 30 days of data is retained locally for faster search performance. To implement S2 correctly, see https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/AboutSmartStore
Thank you all for confirming and your various suggestions!!
I don't see where you address the issue of the service not starting after you complete the install: "Alright, that may get it installed, but afterwards you will notice it does not want to start, but that's ok, I will show you what to do to get it to start in the follow up post."
Looks like my calculations were warped by this bug: https://bugs.openjdk.org/browse/JDK-8307488 When I throw out the first sample of each thread, the results make a lot more sense.
Hi @livehybrid

Thanks for the pointers!! We were able to implement TLS 1.3 using Java 8 (OpenJDK) as well, so it looks like the latest Java 8 versions also contain the required support for TLS 1.3. Previously it was offered only through Java 11, like you said, but now it is possible with Java 8 as well.

Thanks.
Hi there,

I want to point the secondary storage of my Splunk indexer at a mix of storage types, e.g. point it to cloud storage, so it will look like this. This one is the common setup:

[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:s3volume]
storageType = remote
path = s3://<bucketname>/rest/of/path

Is there a mechanism or reference for doing this?
This is how it is showing. Can you please format the performance notice some more? We also have this below the dropdowns. Any chance we can club all of these into one note? ...confused
Hi @splunklearner

How about this?

<dashboard version="1.1" theme="light">
  <label>Your dashboard name</label>
  <!-- ===== NOTICE PANEL ===== -->
  <row>
    <panel>
      <html>
        <div style="
          background: linear-gradient(120deg,#fff5f5 0%,#fff 100%);
          border-left: 6px solid #ff9800;
          box-shadow: 0 2px 6px rgba(0,0,0,.12);
          border-radius: 6px;
          padding: 18px 24px;
          font-family: -apple-system,BlinkMacSystemFont,Segoe UI,Helvetica,Arial,sans-serif;
          font-size: 15px;
          line-height: 1.45;">
          <h3 style="color:#d84315; margin:0 0 8px 0; display:flex; align-items:center;">
            <!-- unicode icon (search engine–friendly, scales with text size) -->
            <span style="font-size:32px; margin-right:12px;">⚠️</span>
            Performance notice
          </h3>
          <p style="margin:0 0 10px 0; color:#424242;">
            Avoid running the dashboard for long date ranges <strong>(Last 30 days)</strong>
            unless strictly needed – it may impact performance.
          </p>
          <p style="margin:0; color:#424242;">
            Before you continue, please select the <strong>Index Name</strong> above.
            The dashboard will remain empty until an index is chosen.
          </p>
        </div>
      </html>
    </panel>
  </row>
  <!-- rest of your dashboard -->
</dashboard>

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
In the AppD metric browser for Java apps, there is a metric called Allocated-Objects (MB). I thought I understood what this was, but I'm getting some unexpected results after making a code change.

We had a service whose allocation rate was too high for the request volume, IMO, so we took a 1-minute flight recording in a test environment. Total allocation, according to the flight recording samples, was around 35GB. Based on where the samples were coming from, we made a code change. When we retested, the total allocation for the same 1-minute test was only 9GB, approximately 75% less.

When we deployed the change and re-ran our endurance test, we saw an object allocation rate that was only slightly lower than the baseline. Dividing the allocation rate by the request volume, the number had only gone down from 12MB/req to 10MB/req.

We do not have verbose GC enabled, so I can't check against that.

What could be causing the numbers to be so similar? Is the allocation rate reported by AppD reliable?

Thanks