All Posts



Hey @harryvdtol  Some good news - "Trellis layout support has been expanded to more visualizations. Now, in addition to single value visualizations, you can apply trellis layout to area, line, bar, and column charts." in Splunk Enterprise 10.0 - check out this blog for more info.  Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @spamarea1  Do you have any encrypted fields in the input configuration? It might be that these aren't copied when an input is cloned - this might explain why you are getting a 401 error from your API if it's missing some credentials/password etc. If you've cloned it, try updating any encrypted value - if appropriate.
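One way to verify this theory is to check whether the cloned input actually has a stored credential behind it, via the storage/passwords REST endpoint. An illustrative sketch only - the username, password, host, and app name are placeholders for your environment:

```
# List stored credentials for the app; if the clone's credential is missing
# here, that would explain the 401. <your_app> is a placeholder.
curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/nobody/<your_app>/storage/passwords"
```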
Hi @dwong-rtr  In Splunk Cloud Platform, you cannot customise the "From" email address for triggered alert emails; emails are always sent from alerts@splunkcloud.com and this cannot be changed due to how Splunk Cloud manages outbound mail for security and deliverability reasons. The "Send email as" option is intentionally disabled on Splunk Cloud.
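For contrast, on on-prem Splunk Enterprise the From address is configurable. A minimal alert_actions.conf fragment, with a placeholder address:

```
# alert_actions.conf - works on Splunk Enterprise; has no effect on Splunk Cloud
[email]
from = splunk-alerts@example.com
```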
Since Splunk cannot be changed, you will have to change your email policy to allow messages from the specified email address.
Thanks will this be more secure than before? 
We currently have email as a trigger action for searches, reports, and alerts. The issue arises when we try to email certain company email addresses, because the address is configured to only allow internal email messages (like a distribution-list type email address). The email coming from Splunk Cloud is from alerts@splunkcloud.com. We would prefer not to make internal email addresses allow receipt of external emails. There is no way to configure the "From" address in the Triggered Actions section. Ideally, what was proposed was that we somehow configure Splunk to send the email as if it came from an internal service email address for our company. I found some documentation on email configuration; however, where I would insert an internal email address as the "From", the documentation states: "Send email as: This value is set by your Splunk Cloud Platform implementation and cannot be changed. Entering a value in this field has no effect."  Any suggestions on how to accomplish this without too much time investment?
Maybe I'm a little dense, but I tried using the --app context and the report was blank, no results. For example I tried both of these, and each command returned no results:

splunk cmd btool commands list --debug dbxlookup --app=search
splunk cmd btool --app=dbconnect commands list --debug dbxlookup

What am I missing?
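One thing worth checking: btool's --app flag expects the app's directory name under $SPLUNK_HOME/etc/apps, not its display label. For DB Connect that directory is typically splunk_app_db_connect (worth confirming on your own system), so a sketch of what to try:

```
# Use the app's directory name, not "dbconnect":
splunk cmd btool commands list dbxlookup --debug --app=splunk_app_db_connect

# Or search the fully merged view and filter:
splunk cmd btool commands list --debug | grep -i dbxlookup
```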
You're a bit stuck choosing the lesser evil here. But you can leverage the TERM() directive. Instead of matching a search-time extracted field - since stash uses key=value pairs - you can set your filter to:

(index=prod) OR (index=opco_summary AND (TERM(service=juniper-prod) OR TERM(service=juniper-cont)))
The frozenTimePeriodInSecs setting does not apply to hot buckets.  You should, however, see warm buckets once a hot bucket fills up or becomes 90 days old.  I can't explain why you don't.
Thanks @livehybrid and @richgalloway - both suggestions are helpful. I was able to use btool to find which indexes.conf each index is using, and then I changed maxHotSpanSecs to the suggested value, and I now see more warm buckets. If this is going to trigger deletion of data that's over a year old, that's great - I will wait and see. However, regardless of what was set for maxHotSpanSecs, shouldn't frozenTimePeriodInSecs have triggered the expiration and deletion of the data? I am still not clear on how maxHotSpanSecs and frozenTimePeriodInSecs work together and affect the retention period. If someone can explain, that would be great.
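To sketch how the two settings interact: frozenTimePeriodInSecs is evaluated per bucket against the newest event in that bucket, and only rolled (warm/cold) buckets are candidates - a hot bucket spanning a long time range contains recent events that keep it from ever freezing. maxHotSpanSecs caps that span so buckets roll and become eligible. This is a toy Python model of the rule, not Splunk's implementation; function and variable names are illustrative:

```python
import time

def bucket_should_freeze(latest_event_epoch, frozen_time_period_secs, now=None):
    """A bucket is eligible for freezing only when its NEWEST event is older
    than frozenTimePeriodInSecs. Hot buckets are never evaluated; they must
    roll to warm first."""
    now = now if now is not None else time.time()
    return (now - latest_event_epoch) > frozen_time_period_secs

def hot_bucket_should_roll(bucket_span_secs, max_hot_span_secs):
    """maxHotSpanSecs caps the time span a hot bucket may cover before it
    rolls to warm, which is what later makes it eligible for freezing."""
    return bucket_span_secs >= max_hot_span_secs

# A bucket whose newest event is ~2 years old, with 1 year of retention:
now = 1_700_000_000
two_years_ago = now - 2 * 365 * 86400
print(bucket_should_freeze(two_years_ago, 365 * 86400, now=now))  # True
```

So with a very large maxHotSpanSecs, old events can sit in a hot bucket alongside new ones indefinitely, and frozenTimePeriodInSecs never fires for them - which matches what you observed.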
Hello everyone, I'm trying to track all the resources loaded on a page - specifically, the ones that appear in the browserResourceRecord index. Right now, I only see a portion of the data, and the captured entries seem completely random. My final goal is to correlate a browser_record session (via cguid) with its corresponding entries in browserResourceRecord. Currently, I'm able to do the reverse: occasionally, a page is randomly captured in browserResourceRecord, and I can trace it back to the session it belongs to. But I can't do the opposite - which is what I actually need. I've tried various things in the RUM script. My most recent changes involved setting the following capture parameters:

config.resTiming = { sampler: "TopN", maxNum: 500, bufSize: 500, clearResTimingOnBeaconSend: true };

Unfortunately, this hasn't worked either. I also suspected that resources only appear when they violate the Resource Performance thresholds, so I configured extremely low thresholds - but this didn't help either. What I'd really like is to have access to something similar to a HAR file - with all the resource information - and make it available via Analytics, so I can download and compare it. Unfortunately, the session waterfall isn't downloadable - which is a major limitation. Thank you, Marco.
Addon Builder 4.5.0. This app adds a data input automatically. This is a good thing; I then go to "add new" to complete the configuration. Everything was running well. A few days later, I thought of a better name for the input. I cloned the original, gave it a different name, and kept all the same config. I disabled the original. I noticed that I can still run the script and see the API output, but when I searched for the output, I did not find it. I started to see 401 errors instead. I went back to the data inputs, disabled the clone, enabled the original, and all is back to normal. Is there a rule about cloning data inputs in Add-on Builder that says not to clone them?
Addon Builder 4.5.0, modular input using my Python code. In this example the collection interval is set to 30 seconds. I added a log to verify it is running here: log_file = "/opt/splunk/etc/apps/TA-api1/logs/vosfin_cli.log"  The main page (Configure Data Collection) shows all the input names that I built, but looking at the event count, I see 0. When I go into the log, it shows the input running and giving me data OK. Why doesn't the event count go up every time the script runs? Is there additional configuration in inputs, props, or web.conf that I need to add/edit to make it count up?
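One common cause (an assumption here, since we can't see your code): the event count only increases when events are written through Add-on Builder's event writer - typically ew.write_event(helper.new_event(...)) inside the generated collect_events(helper, ew) stub - not when data is written to your own log file. A plain-Python sketch of the underlying idea, with an illustrative payload:

```python
import json
import sys

def write_event(payload, stream=sys.stdout):
    """Splunk indexes only what the modular input emits on its output
    stream; writes to a private log file never become events."""
    stream.write(json.dumps(payload) + "\n")
    stream.flush()

# Illustrative payload - your real script would emit the API response here.
write_event({"metric": "cpu", "value": 0.42})
```

If your script only logs to vosfin_cli.log and never calls the event writer, the log will look healthy while the event count stays at 0.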
Thanks.
The fact is, the same email settings were tested in UAT, and there all the email alerts rightly came to the inbox. Only from Enterprise are they landing in Junk.
Hi @livehybrid , Thank you for your response. We are actually testing the Universal Forwarder only. Also, just to clarify, the fresh installation is working fine on the Windows 10 VM. The issue occurs only during the upgrade process.
Hi @debdutsaini , replace stats with table in the last line of your query, like below:

index=*
| eval device = coalesce(dvc, device_name)
| eval is_valid_str=if(match(device, "^[a-zA-Z0-9_\-.,$]*$"), "true", "false")
| where is_valid_str="false"
| table _time index device _raw
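For reference, the match() call above uses PCRE-style regex semantics; an equivalent check in Python (the device names below are made up for illustration):

```python
import re

# Anchored with ^...$, the pattern accepts only letters, digits,
# underscore, hyphen, dot, comma, and dollar sign.
VALID = re.compile(r"^[a-zA-Z0-9_\-.,$]*$")

def is_valid_device(name):
    return bool(VALID.match(name))

print(is_valid_device("fw-01.core"))   # True
print(is_valid_device("fw 01/core"))   # False - space and slash not allowed
```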
Hi @Namo ,  When Splunk alert emails land in your junk/spam folder, it's usually an issue not with Splunk itself, but with how the email is being handled by your mail server, client, or spam filters. If you control your mail client or domain filters: add the From address to your safe-sender list, and whitelist the Splunk server IP or domain in your Exchange / Outlook / Gmail policies.
Hi @Namo  This is typically an email server/client configuration issue rather than a Splunk problem. The emails are being flagged as spam by your email provider's filters. Are you able to add the Splunk sender to your safe-senders list? The other thing to check is the reputation of the SMTP server configured in Splunk, as a bad email server reputation can also cause your receiving server to flag messages as spam. The sending SMTP service should also have proper SPF/DKIM/DMARC records to reduce the chance of being detected as spam.
@Namo  If Splunk is sending emails from a domain that lacks proper authentication (SPF, DKIM, DMARC), email providers may flag it as spam. Check this internally with your IT team. Ensure the sending domain is properly configured: SPF: add Splunk's sending IP to your domain's SPF record. DKIM: sign outgoing emails with DKIM if possible. DMARC: set up a DMARC policy to monitor and enforce authentication.
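To make the record types concrete, here is a sketch of what the DNS TXT records might look like - the domain, IP, and mailbox are all placeholders for illustration:

```
; SPF: authorise the Splunk server's sending IP for example.com
example.com.         IN TXT "v=spf1 ip4:203.0.113.25 ~all"

; DMARC: start in monitor-only mode (p=none) and collect aggregate reports
_dmarc.example.com.  IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```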