All Posts

Hi @muku  How does the app convert the file? Does it use a monitor:// stanza within inputs.conf and then apply props/transforms to manipulate the data, or is it done with a modular input? Ultimately, the app might need to go on a forwarder if the data resides there or is pulled from there, and/or on the indexers if index-time extractions are being applied. If search-time extractions are applied, then the app will also need to go on the search heads. If you're able to provide more info, we will be able to give more tailored advice.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
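For context, the monitor-based pattern described above might look like the following minimal sketch. Everything here is an illustrative assumption (paths, the `ckls` sourcetype name, and the extraction are placeholders, not taken from the actual app):

```ini
# inputs.conf - goes on the forwarder that can see the files
[monitor:///opt/data/ckls/*.ckls]
sourcetype = ckls
index = main

# props.conf - a search-time extraction like this would need to
# live on the search heads; index-time transforms would instead
# need props/transforms on the indexers
[ckls]
EXTRACT-fields = ^(?<record_id>\S+)\s+(?<status>\S+)
```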
Hi @abhi04  Are you using Aggregation Policies to trigger your alerts, or KPI Alerts? I'm not sure how to achieve this with KPI Alerts, but if you are using aggregation policies then you might be able to add some logic there (similar to how you would apply a lookup) to do an eval based on the value.
I need to configure a certain customer app to ingest files. Those files need an add-on to convert them so Splunk can read them; they are in ckls format. I have the add-on already and I have already configured it in the deployment apps. How do I connect it with the customer app so the data shows on the dashboard?
The performance notice text seems misaligned, and I want this box to be a bit smaller (maybe length-wise) because I have nearly 10 dropdowns, and below those there are more text and panels. Because this note is so large, the panels are not visible initially and users need to scroll down. It feels a bit awkward to me.
I have a KPI for Instance status:

index=xxxxx source="yyyyy"
| eval UpStatus=if(Status=="up",1,0)
| stats last(UpStatus) as val by Instance host Status

Now val is 0 or 1 and the Status field is Up or Down. The split-by field is Instance host, and the threshold is based on val. The alert triggers fine, but in the email alert I want to use $result.Status$ instead of $result.val$. However, I don't see the Status field in tracked alerts. How can I make the Status field show in the tracked alerts index or the generated events so that I can use it in my email? (This is to avoid confusion: instead of saying 0 or 1, it will say up or down.)
Hi @splunklearner  Sorry, I'm not following what you're asking for here; please could you clarify? You can edit the HTML to say whatever you need, just apply the same styling as in the example.
Hi @elend  Yes, you can configure Splunk (since 7.2, I think) to use a mixture of local storage and S3-compliant storage, including the likes of Amazon S3, using Splunk's SmartStore functionality. This essentially uses your local storage for hot buckets and as a local cache for buckets which are also stored in S3. It's a more complex beast than I can go into here, and there are lots of things to consider - for example, this is generally considered a one-way exercise! https://docs.splunk.com/Documentation/SVA/current/Architectures/SmartStore gives a good overview of the architecture, benefits and next steps. Check out https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.3/manage-smartstore/configure-smartstore for more info on setting up SmartStore, as well as https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.4/deploy-smartstore/deploy-smartstore-on-a-new-standalone-indexer which has some info on setting this up on a single indexer (as a starter, this will depend on your specific environment architecture).
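As a rough illustration of what the SmartStore docs linked above walk through, a minimal indexes.conf sketch might look like this. The bucket name, endpoint, and index name are placeholders, not recommendations:

```ini
# indexes.conf - illustrative SmartStore sketch (all values are placeholders)
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

# apply to a single index (or under [default] to cover all indexes)
[my_index]
remotePath = volume:remote_store/$_index_name
homePath = $SPLUNK_DB/my_index/db
thawedPath = $SPLUNK_DB/my_index/thaweddb
```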
Hi @Narendra_Rao  There are a number of different ways to get AWS CloudWatch logs out of AWS into your on-prem environment; ultimately I think this will depend on where your VPN terminates and which AWS services can connect to it. I tend to go with AWS Firehose, which sends to your Splunk HEC endpoint - check out https://aws.amazon.com/blogs/big-data/deliver-decompressed-amazon-cloudwatch-logs-to-amazon-s3-and-splunk-using-amazon-data-firehose/ for more information on this. Alternatively, you can send using AWS Lambda instead of Firehose; this also sends to HEC - check out https://www.splunk.com/en_us/blog/platform/stream-amazon-cloudwatch-logs-to-splunk-using-aws-lambda.html for more info. There may be others, but ultimately it depends on your connection - do either of these look suitable for your environment? Let me know if you need more info.
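Whichever AWS-side option is chosen, the on-prem side needs an HTTP Event Collector token enabled to receive the data. A minimal inputs.conf sketch might look like this (the token value, index, and stanza name are placeholders, not real values):

```ini
# inputs.conf on the on-prem Splunk instance receiving from Firehose/Lambda
[http]
disabled = 0
port = 8088
enableSSL = 1

[http://cloudwatch-hec]
token = 00000000-0000-0000-0000-000000000000
index = aws_cloudwatch
sourcetype = aws:cloudwatchlogs
# Firehose requires indexer acknowledgement on the token
useACK = 1
```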
We already have logs in CloudWatch. What is the best way to get the logs from CloudWatch to Splunk on-prem? We also have a VPN established between them. Based on this, are there any add-ons, or viable solutions other than add-ons? If yes: any details/steps, etc.
You've just stumbled across SmartStore (S2).  S2 keeps hot buckets local and copies warm buckets to S3.  A cache of roughly 30 days of data is retained locally for faster search performance. To implement S2 correctly, see https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/AboutSmartStore
Thank you all for confirming and your various suggestions!!
I don't see where you address the issue of the service not starting after you complete the install: "Alright, that may get it installed, but afterwards you will notice it does not want to start, but that's ok, I will show you what to do to get it to start in the follow up post."
Looks like my calculations were warped by this bug: https://bugs.openjdk.org/browse/JDK-8307488 When I throw out the first sample of each thread, the results make a lot more sense.
Hi @livehybrid  Thanks for the pointers! We were able to implement TLS 1.3 using Java 8 (OpenJDK) as well, so it looks like the latest Java 8 version also contains the required support for TLS 1.3. Previously it was offered only through Java 11, as you said, but now it is possible with Java 8 as well. Thanks.
Hi there, I want to point the secondary storage of my Splunk indexer to a mix with another storage, like cloud storage. So it would look like this (this one is the common setup):

[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:s3volume]
storageType = remote
path = s3://<bucketname>/rest/of/path

Is there a mechanism or reference for doing this?
This is how it is showing. Can you please format the performance notice further? We have this below the dropdowns as well. Is there any chance we can combine all of these into one note? I'm confused.
hi @splunklearner  How about this?

<dashboard version="1.1" theme="light">
  <label>Your dashboard name</label>
  <!-- ===== NOTICE PANEL ===== -->
  <row>
    <panel>
      <html>
        <div style="
          background: linear-gradient(120deg,#fff5f5 0%,#fff 100%);
          border-left: 6px solid #ff9800;
          box-shadow: 0 2px 6px rgba(0,0,0,.12);
          border-radius: 6px;
          padding: 18px 24px;
          font-family: -apple-system,BlinkMacSystemFont,Segoe UI,Helvetica,Arial,sans-serif;
          font-size: 15px;
          line-height: 1.45;">
          <h3 style="color:#d84315; margin:0 0 8px 0; display:flex; align-items:center;">
            <!-- unicode icon (search engine–friendly, scales with text size) -->
            <span style="font-size:32px; margin-right:12px;">⚠️</span>
            Performance notice
          </h3>
          <p style="margin:0 0 10px 0; color:#424242;">
            Avoid running the dashboard for long date ranges <strong>(Last 30 days)</strong>
            unless strictly needed – it may impact performance.
          </p>
          <p style="margin:0; color:#424242;">
            Before you continue, please select the <strong>Index Name</strong> above.
            The dashboard will remain empty until an index is chosen.
          </p>
        </div>
      </html>
    </panel>
  </row>
  <!-- rest of your dashboard -->
</dashboard>
In the AppD metric browser for Java apps, there is a metric called Allocated-Objects (MB).  I thought I understood what this was, but I'm getting some unexpected results after making a code change. We had a service that had an allocation rate that was too high for the request volume, IMO, so we took a 1-minute flight recording in a test environment.  Total allocation, according to the flight recording samples, was around 35GB.  Based on where the samples were coming from, we made a code change.  When we retested, the total allocation for the same test over 1-minute was only 9GB, approximately 75% less. When we deployed the change and re-ran our endurance test, we saw an object allocation rate that was only slightly lower than the baseline.  Dividing the allocation rate by the request volume, the number had only gone down from 12MB/req to 10MB/req. We do not have verbose GC enabled, so I can't check against that. What could be causing the numbers to be so similar?  Is the allocation rate reported by AppD reliable? thanks  
<form version="1.1" theme="light">
  <label>Dashboard</label>
  <!-- Hidden base search for dropdowns -->
  <search id="base_search">
    <query>
      index=$index$ ----------
    </query>
    <earliest>$time_tok.earliest$</earliest>
    <latest>$time_tok.latest$</latest>
  </search>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <html>
        <p>⚠️ Kindly avoid running the Dashboard for extended time frames <b>(Last 30 days)</b> unless absolutely necessary, as it may impact performance.</p>
        <p>To get started, please make sure to select your <b>Index Name</b> - this is required to display the dashboard data.</p>
      </html>
    </panel>
  </row>

This is how I am writing the description, but I am not satisfied because it is not eye-catching. When the user opens the dashboard, they should see this note first; I want it that way. I am not familiar with HTML either. Can someone help me? I copied the icon from Google and it seems small in the dashboard.
Try something like this

| rex field=field_in_hhmmss "((?<days>\d+)\+)?((?<hours>\d+):)?((?<minutes>\d+):)?(?<seconds>[\d\.]+)"
| eval formatted=if(days > 0,days." days, ","").if(days > 0 OR hours > 0,hours." hours, ","").if(days > 0 OR hours > 0 OR minutes > 0,minutes." mins, ","").if(seconds > 0,seconds." secs","")
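To sanity-check the rex/eval above, you can run it over a sample value with makeresults (the input value here is an assumed example in d+hh:mm:ss format):

```
| makeresults
| eval field_in_hhmmss="1+02:03:45.5"
| rex field=field_in_hhmmss "((?<days>\d+)\+)?((?<hours>\d+):)?((?<minutes>\d+):)?(?<seconds>[\d\.]+)"
| eval formatted=if(days > 0,days." days, ","").if(days > 0 OR hours > 0,hours." hours, ","").if(days > 0 OR hours > 0 OR minutes > 0,minutes." mins, ","").if(seconds > 0,seconds." secs","")
```

This should produce a formatted value along the lines of "1 days, 02 hours, 03 mins, 45.5 secs".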