
All Posts

Hi @splunklearner
Sorry, I'm not following what you're asking for here; could you please clarify? You can edit the HTML to say whatever you need, just apply the same styling as in the example.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @elend
Yes, you can configure Splunk (since 7.2, I think) to use a mixture of local storage and S3-compatible storage, including the likes of Amazon S3, using Splunk's SmartStore functionality. This essentially uses your local storage for hot buckets and as a local cache for buckets which are also stored in S3. It's a more complex beast than I can go into here, and there are lots of things to consider - for example, this is generally considered a one-way exercise!

https://docs.splunk.com/Documentation/SVA/current/Architectures/SmartStore gives a good overview of the architecture, benefits and next steps. Check out https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.3/manage-smartstore/configure-smartstore for more info on setting up SmartStore, as well as https://help.splunk.com/en/splunk-enterprise/administer/manage-indexers-and-indexer-clusters/9.4/deploy-smartstore/deploy-smartstore-on-a-new-standalone-indexer which has some info on setting this up on a single indexer (as a starter, this will depend on your specific environment architecture).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
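For orientation only, a minimal indexes.conf sketch of a SmartStore remote volume is below; the volume name, bucket, region and credential handling are placeholders (on AWS you would usually rely on an IAM role rather than inline keys):

[volume:remote_store]
storageType = remote
path = s3://your-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[default]
# $_index_name expands to each index's name under the remote volume
remotePath = volume:remote_store/$_index_name

Treat this as a starting point; encryption, credentials and cache sizing all depend on your environment, so follow the docs linked above.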
Hi @Narendra_Rao
There are a number of different ways to get AWS CloudWatch logs out of AWS into your on-prem environment; ultimately I think this will depend on where your VPN terminates and which AWS services can connect to it.

I tend to go with using AWS Firehose, which sends to your Splunk HEC endpoint - check out https://aws.amazon.com/blogs/big-data/deliver-decompressed-amazon-cloudwatch-logs-to-amazon-s3-and-splunk-using-amazon-data-firehose/ for more information on this.

Alternatively, you can send using AWS Lambda instead of Firehose; this also sends to HEC - check out https://www.splunk.com/en_us/blog/platform/stream-amazon-cloudwatch-logs-to-splunk-using-aws-lambda.html for more info on this.

There may be others, but ultimately it depends on your connection - do either of these look suitable for your environment? Let me know if you need more info.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
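Whichever of the two you pick, the receiving side on your on-prem Splunk is an HTTP Event Collector (HEC) input. Purely as a hedged sketch (the token, index and sourcetype are placeholders, and Firehose additionally requires indexer acknowledgement on the token, hence useACK), inputs.conf on the HEC-receiving instance could look something like:

[http]
disabled = 0
port = 8088
enableSSL = 1

[http://aws_cloudwatch]
token = <token-generated-in-splunk-web>
index = aws
sourcetype = aws:cloudwatchlogs
useACK = true

In practice you would normally create the token via Splunk Web (Settings > Data Inputs > HTTP Event Collector) rather than hand-editing the file.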
We already have logs in CloudWatch. What is the best way to get the logs from CloudWatch to Splunk on-prem? We also have a VPN established between them. Based on this, are there any add-ons, or other viable solutions besides add-ons? If yes, any details/steps, etc.?
You've just stumbled across SmartStore (S2).  S2 keeps hot buckets local and copies warm buckets to S3.  A cache of roughly 30 days of data is retained locally for faster search performance. To implement S2 correctly, see https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/AboutSmartStore
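How much data stays local is governed by the cache manager rather than a fixed 30-day rule. As a rough, hedged sketch (the values are purely illustrative, not recommendations), the relevant settings live in server.conf on each indexer:

[cachemanager]
# Upper bound on local cache usage, in MB
max_cache_size = 500000
# Try to keep buckets newer than ~30 days (in seconds) cache-resident
hotlist_recency_secs = 2592000

Tune these to your disk capacity and search patterns; the defaults differ.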
Thank you all for confirming and your various suggestions!!
I don't see where you address the issue of the service not starting after you complete the install: "Alright, that may get it installed, but afterwards you will notice it does not want to start, but that's ok, I will show you what to do to get it to start in the follow up post."
Looks like my calculations were warped by this bug: https://bugs.openjdk.org/browse/JDK-8307488?force_isolation=true
When I throw out the first sample of each thread, the results make a lot more sense.
Hi @livehybrid
Thanks for the pointers!! We were able to implement TLS 1.3 using Java 8 (OpenJDK) as well, so it looks like the latest Java 8 versions also contain the required support for TLS 1.3. Previously it was offered only through Java 11, like you said, but now it is possible with Java 8 as well.

Thanks.
Hi there,
I want to point the secondary storage of my Splunk indexer to a mix of storage types, e.g. point it to cloud storage, so it will look something like this (the first volume is the common local one):

[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:s3volume]
storageType = remote
path = s3://<bucketname>/rest/of/path

Is there a mechanism or reference for doing this?
This is how it is showing. Can you please format the performance notice a bit more? We have this below the dropdowns as well. Any chance we can club all of these into one note? Confused...
Hi @splunklearner
How about this?

<dashboard version="1.1" theme="light">
  <label>Your dashboard name</label>

  <!-- ===== NOTICE PANEL ===== -->
  <row>
    <panel>
      <html>
        <div style="
          background: linear-gradient(120deg,#fff5f5 0%,#fff 100%);
          border-left: 6px solid #ff9800;
          box-shadow: 0 2px 6px rgba(0,0,0,.12);
          border-radius: 6px;
          padding: 18px 24px;
          font-family: -apple-system,BlinkMacSystemFont,Segoe UI,Helvetica,Arial,sans-serif;
          font-size: 15px;
          line-height: 1.45;">
          <h3 style="color:#d84315; margin:0 0 8px 0; display:flex; align-items:center;">
            <!-- unicode icon (search engine–friendly, scales with text size) -->
            <span style="font-size:32px; margin-right:12px;">⚠️</span>
            Performance notice
          </h3>
          <p style="margin:0 0 10px 0; color:#424242;">
            Avoid running the dashboard for long date ranges <strong>(Last 30 days)</strong>
            unless strictly needed – it may impact performance.
          </p>
          <p style="margin:0; color:#424242;">
            Before you continue, please select the <strong>Index Name</strong> above.
            The dashboard will remain empty until an index is chosen.
          </p>
        </div>
      </html>
    </panel>
  </row>

  <!-- rest of your dashboard -->
</dashboard>

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
In the AppD metric browser for Java apps, there is a metric called Allocated-Objects (MB). I thought I understood what this was, but I'm getting some unexpected results after making a code change.

We had a service whose allocation rate was too high for the request volume, IMO, so we took a 1-minute flight recording in a test environment. Total allocation, according to the flight recording samples, was around 35GB. Based on where the samples were coming from, we made a code change. When we retested, the total allocation for the same 1-minute test was only 9GB, approximately 75% less.

When we deployed the change and re-ran our endurance test, we saw an object allocation rate that was only slightly lower than the baseline. Dividing the allocation rate by the request volume, the number had only gone down from 12MB/req to 10MB/req.

We do not have verbose GC enabled, so I can't check against that.

What could be causing the numbers to be so similar? Is the allocation rate reported by AppD reliable?

Thanks
<form version="1.1" theme="light"> <label>Dashboard</label> <!-- Hidden base search for dropdowns --> <search id="base_search"> <query> index=$index$ ---------- </query> <earliest>$time_tok.earliest$... See more...
<form version="1.1" theme="light"> <label>Dashboard</label> <!-- Hidden base search for dropdowns --> <search id="base_search"> <query> index=$index$ ---------- </query> <earliest>$time_tok.earliest$</earliest> <latest>$time_tok.latest$</latest> </search> <fieldset submitButton="false"></fieldset> <row> <panel> <html> <p>⚠️ Kindly avoid running the Dashboard for extended time frames <b>(Last 30 days)</b> unless absolutely necessary, as it may impact performance.</p> <p>To get started, Please make sure to select your <b>Index Name</b> - this is required to display the dashboard data </p> </html> </panel> </row> This is how I am writing the description. But I am not satisfied because it is not eye catchy. When the user opens the dashboard he should see this note first, i want in that way. I am not aware of HTML as well. Can some one help me. Copied icon from google and it seems small in dashboard.  
Try something like this:

| rex field=field_in_hhmmss "((?<days>\d+)\+)?((?<hours>\d+):)?((?<minutes>\d+):)?(?<seconds>[\d\.]+)"
| eval formatted=if(days > 0,days." days, ","").if(days > 0 OR hours > 0,hours." hours, ","").if(days > 0 OR hours > 0 OR minutes > 0,minutes." mins, ","").if(seconds > 0,seconds." secs","")
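If you want to sanity-check the pattern, a self-contained test could look like this (the sample value and expected output are just assumptions about the d+hh:mm:ss.fff format):

| makeresults
| eval field_in_hhmmss="2+03:14:15.9"
| rex field=field_in_hhmmss "((?<days>\d+)\+)?((?<hours>\d+):)?((?<minutes>\d+):)?(?<seconds>[\d\.]+)"
| eval formatted=if(days > 0,days." days, ","").if(days > 0 OR hours > 0,hours." hours, ","").if(days > 0 OR hours > 0 OR minutes > 0,minutes." mins, ","").if(seconds > 0,seconds." secs","")
| table field_in_hhmmss days hours minutes seconds formatted

which should produce formatted="2 days, 03 hours, 14 mins, 15.9 secs".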
To be fully honest, it's a "double donut" version. I wouldn't be surprised if it was a bit buggy.
1. Don't jump head-first into a version just because it's just been released. Unless there are fixes for issues hitting you or patches for known vulnerabilities, there's usually no reason to upgrade. Splunk can handle a wide range of older forwarders pretty well.
2. What you can do to help with product development and bug fixing is to gather the installation logs and raise a support ticket (and - if the problem isn't internal to the installer but can be bypassed, or it's triggered by some specific set of conditions - share the knowledge).
I'm not sure, but you might need to use the --user option as well. In my tests I don't see any output if I give --app but don't give --user.
Then @livehybrid 's solution should work. When you're getting data from an HF (or any other "full" Splunk instance), you're getting it already parsed, and it completely bypasses most of the props/transforms mechanics, except for RULESETs.
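Purely as an illustration of the RULESET exception (the stanza and transform names are hypothetical, and dropping DEBUG events to the nullQueue is just one common use), something along these lines in props.conf/transforms.conf on the indexer tier can still act on already-cooked data arriving from a heavy forwarder:

# props.conf
[my_sourcetype]
RULESET-drop_debug = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue

Treat this as a sketch only and verify the ruleset behaviour against the Ingest Actions documentation for your Splunk version.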
If you're ok with the timestamp just being assigned to the event (no need to have it explicitly written in the event itself), just parse out the timestamp, cut off the whole header and leave the JSON part on its own. Timestamp recognition takes place very early in the ingestion pipeline, so you can do it this way and not have to have the "timestamp" field in your JSON. You'll just have the _time field.
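A rough props.conf sketch of that approach (the sourcetype name, timestamp format and header shape are assumptions - adjust them to your actual events): pick the time up from the header, then strip everything before the opening brace so only the JSON body is indexed.

[my_json_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Drop everything up to the first { so only the JSON remains
SEDCMD-strip_header = s/^[^{]+//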
1. As @ITWhisperer noticed - you might be reinventing the wheel, since the value probably comes from some earlier time-based data, so there could be no need for rendering and parsing this value back and forth.
2. Are you sure (you might be, just asking) that you want to calculate an average of the averages? If the overall average is what you're seeking, an average of averages will not give you that (see the illustration below).
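To make point 2 concrete, here is a tiny self-contained illustration (field and host names are made up). Three events with duration 10 on host a and one with duration 100 on host b give an overall average of 32.5, but an average of the per-host averages of 55:

Overall average (returns 32.5):
| makeresults count=4
| streamstats count AS n
| eval host=if(n<=3,"a","b"), duration=if(host="a",10,100)
| stats avg(duration) AS overall_avg

Average of averages (returns 55):
| makeresults count=4
| streamstats count AS n
| eval host=if(n<=3,"a","b"), duration=if(host="a",10,100)
| stats avg(duration) AS per_host_avg BY host
| stats avg(per_host_avg) AS avg_of_avgs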