All Posts


Hi all,

When upgrading from v9.4.1 to a newer version (including 10) on macOS (ARM) I receive the error message:

-> Currently configured KVStore database path="/Users/andreas/splunk/var/lib/splunk/kvstore"
-> Currently used KVStore version=4.2.22. Expected version=4.2 or version=7.0
CPU Vendor: GenuineIntel
CPU Family: 6
CPU Model: 44
CPU Brand: \x
AVX Support: No
SSE4.2 Support: Yes
AES-NI Support: Yes

There seems to be an issue with determining AVX support correctly through Rosetta?! Anyway, I tried to upgrade on v9.4.1 using

~/splunk/bin/splunk start-standalone-upgrade kvstore -version 7.0 -dryRun true

and receive the error:

In handler 'kvstoreupgrade': Missing Mongod Binaries :: /Users/andreas/splunk/bin/mongod-7.0; /Users/andreas/splunk/bin/mongod-6.0; /Users/andreas/splunk/bin/mongod-5.0; /Users/andreas/splunk/bin/mongod-4.4; Please make sure they are present under :: /Users/andreas/splunk/bin before proceeding with upgrade. Upgrade Path = /Users/andreas/splunk/bin/mongod_upgrade not found Please make sure upgrade tool binary exists under /Users/andreas/splunk/bin

The error that mongod-4.4, mongod-5.0, mongod-6.0 and mongod-7.0 are missing is correct; the files are not there. They are not in the delivered Splunk .tgz for macOS. The Linux tarball includes them.

Any hints?

Best regards,
Andreas
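For anyone comparing notes before opening a support case, a quick way to confirm which mongod binaries actually shipped with the package (a minimal sketch, assuming Splunk lives under ~/splunk as in the paths above and that the downloaded tarball is still in the current directory):

# List the mongod binaries actually present in this install
ls -l ~/splunk/bin/mongod*

# Compare against the contents of the downloaded tarball without extracting it
# (adjust the filename to match the package you downloaded)
tar -tzf splunk-*-darwin-*.tgz | grep 'bin/mongod'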
THAT'S what I forgot to mention. They are pushing everything to a HF on their side, which is pushing to my HFs. I will try that out. Thanks!
Hi @jessieb_83

Are they sending from a UF or a HF? If you aren't having much luck with this props/transforms combo then it sounds like the data might be arriving to you already parsed, so what you're doing here won't have an effect. If the data has already been through a HF then you could try this instead:

# props.conf
[linux_messages_syslog]
RULESET-dropSyslog = dropLog

# transforms.conf
[dropLog]
INGEST_EVAL = queue="nullQueue"

If you are sure it's coming from a UF then you could try setting "REGEX = ." on your existing config, however I think what you had should have worked.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
To follow up. Using

[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH

this fails under the following conditions:

If you have an old `Splunkd.service` file with a line using =!, like the following:
ExecStart=!/opt/splunk/bin/splunk _internal_launch_under_systemd
If so, you will need to recreate the Splunkd.service file.

If you use the "Data inputs --> Files & directories" monitor method to ingest the /var/log/audit/audit.log files, this also fails.

This works with a Splunkd.service file created by a current Splunk version (mine is 9.3.5) and using the Splunk_TA_nix script method of ingest with rlog.sh.

Kudos to @livehybrid for causing me to review and realize I had an out-of-date Splunkd.service file.
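In case it helps the next person, here is a minimal sketch of how the relevant [Service] lines might look after recreating the unit file. The paths, user and exact ExecStart line are assumptions based on a default /opt/splunk install; compare against what your Splunk version actually generates with "splunk enable boot-start -systemd-managed 1".

[Service]
# No "!" prefix on ExecStart here - with the old "ExecStart=!" form the
# AmbientCapabilities setting did not take effect, as described above
ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd
User=splunk
Group=splunk
# Grants read access to root-owned logs such as /var/log/audit/audit.log
AmbientCapabilities=CAP_DAC_READ_SEARCH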
I'm at a loss and hoping for an assist. Running a distributed Splunk instance, I used the Deployment Server to push props.conf and transforms.conf to my heavy forwarders to drop specific external customer logs at the HF.

We're receiving logs from several external customers, each with their own index. I'm in the process of dividing each customer into sub-indexes like {customer}-network, {customer}-authentication and {customer}-syslog. Yes, I'm trying to dump all Linux syslog. This is a temporary move while their syslogs are flooding us with millions of errors, before we're able to finish moving them to their new {customer}-syslog index. I did inform them and they're working it, with no ETA.

I've been over a dozen posts on the boards, I've asked two different AIs how to do this backwards and forwards, and I've triple-checked spelling, placement & permissions. I tried pushing the configs to the indexers from the cluster manager and that didn't work either. I created copies of the configs in ~/etc/system/local/ and no dice. I've done similar in my lab with success. I verified the customer inputs.conf is declaring the sourcetype as linux_messages_syslog. I'm at a total loss as to why this isn't working.

props.conf:
[linux_messages_syslog]
TRANSFORMS-dropLog = dropLog

transforms.conf:
[dropLog]
REGEX = (?s)^.*$
DEST_KEY = queue
FORMAT = nullQueue

Anyone have any idea what gotcha I'm getting got by?
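Before changing the config itself, it can be worth confirming on the HF that the pushed files actually won on configuration precedence. A quick check, offered as a sketch and assuming a default /opt/splunk install path:

# Confirm which props/transforms stanzas win after the deployment push
/opt/splunk/bin/splunk btool props list linux_messages_syslog --debug
/opt/splunk/bin/splunk btool transforms list dropLog --debug

# If the events were already parsed by an upstream HF, index-time TRANSFORMS
# on this HF will not be applied again, which matches the reply above.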
Hey @harryvdtol

Some good news - "Trellis layout support has been expanded to more visualizations. Now, in addition to single value visualizations, you can apply trellis layout to area, line, bar, and column charts." in Splunk Enterprise 10.0 - Check out this blog for more info.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @spamarea1

Do you have any encrypted fields in the input configuration? It might be that these aren't copied when an input is cloned - this might explain why you are getting a 401 error from your API if it's missing some credentials/password etc. If you've cloned it, try re-entering any encrypted values, if appropriate.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @dwong-rtr

In Splunk Cloud Platform, you cannot customise the "From" email address for triggered alert emails; emails are always sent from alerts@splunkcloud.com and this cannot be changed due to how Splunk Cloud manages outbound mail for security and deliverability reasons. The "Send email as" option is intentionally disabled on Splunk Cloud.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Since Splunk cannot be changed, you will have to change your email policy to allow messages from the specified email address.
Thanks, will this be more secure than before?
We currently have email as a trigger action for Searches, Reports and Alerts. The issue arises when we try to email certain company email addresses, because the address is configured to only allow internal email messages (like a distribution-list type email address). The email coming from Splunk Cloud is from alerts@splunkcloud.com. We would prefer not to make internal email addresses allow receipt of external emails. There is no way to configure the "From" address in the Triggered Actions section. Ideally, what was proposed was that we somehow configure Splunk to send the email as if it came from an internal service email address for our company. I found some documentation on email configuration; however, where I would insert an internal email address to be the "FROM", the documentation states "Send email as: This value is set by your Splunk Cloud Platform implementation and cannot be changed. Entering a value in this field has no effect." Any suggestions on how to accomplish this without too much time investment?
Maybe I'm a little dense, but I tried using the --app context and the report was blank, no results. For example, I tried both of the following, and the command returned no results:

splunk cmd btool commands list --debug dbxlookup --app=search
splunk cmd btool --app=dbconnect commands list --debug dbxlookup

What am I missing?
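One thing that may be tripping this up: --app restricts btool to a single app's directory, so if the dbxlookup stanza is defined by DB Connect, neither the search app nor an app named "dbconnect" will return it. Assuming DB Connect is installed under its usual directory name, splunk_app_db_connect (worth confirming under $SPLUNK_HOME/etc/apps), something like this sketch should find it:

splunk cmd btool --app=splunk_app_db_connect commands list dbxlookup --debug

# Or drop --app entirely so btool reports the winning definition
# from whichever app provides it
splunk cmd btool commands list dbxlookup --debug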
You're a bit stuck choosing the lesser evil. But you can leverage TERM(). Instead of matching a search-time extracted field, since stash uses key=value pairs, you can set your filter to

(index=prod) OR (index=opco_summary AND (TERM(service=juniper-prod) OR TERM(service=juniper-cont)))
The frozenTimePeriodInSecs setting does not apply to hot buckets.  You should, however, see warm buckets once a hot bucket fills up or becomes 90 days old.  I can't explain why you don't.
Thanks @livehybrid and @richgalloway, both suggestions are helpful. I was able to use btool to find which indexes.conf each index is using, and then I changed maxHotSpanSecs to the suggested number and I see more warm buckets. If this is going to trigger deletion of data that's over a year old, that's great; I will wait and see. However, regardless of what was set for maxHotSpanSecs, shouldn't frozenTimePeriodInSecs have triggered the expiration and deletion of the data? I sure am not clear on how maxHotSpanSecs and frozenTimePeriodInSecs work together and affect the retention period. If someone can explain, it would be great.
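To sketch how the two settings interact (an illustration with made-up stanza and values, not the poster's actual config): freezing is not evaluated for hot buckets, and a bucket is only frozen once its newest event is older than frozenTimePeriodInSecs. If maxHotSpanSecs is very large, a single hot bucket can span many months of event time, so old events keep sitting in a hot bucket that is never eligible for freezing; capping the hot span lets buckets roll to warm sooner, after which the retention period can take effect.

# indexes.conf (illustrative values only)
[customer_syslog]
# Roll a hot bucket once it spans ~30 days of event time
maxHotSpanSecs = 2592000
# Freeze (delete or archive) warm/cold buckets whose newest event
# is older than ~1 year
frozenTimePeriodInSecs = 31536000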
Hello everyone, I’m trying to track all the resources loaded on a page — specifically, the ones that appear in the browserResourceRecord index. Right now, I only see a portion of the data, and the captured entries seem completely random. My final goal is to correlate a browser_record session (via cguid) with its corresponding entries in browserResourceRecord. Currently, I’m able to do the reverse: occasionally, a page is randomly captured in browserResourceRecord, and I can trace it back to the session it belongs to. But I can’t do the opposite — which is what I actually need. I’ve tried various things in the RUM script. My most recent changes involved setting the following capture parameters: config.resTiming = { sampler: "TopN", maxNum: 500, bufSize: 500, clearResTimingOnBeaconSend: true }; Unfortunately, this hasn’t worked either. I also suspected that resources only appear when they violate the Resource Performance thresholds, so I configured extremely low thresholds — but this didn’t help either. What I’d really like is to have access to something similar to a HAR file — with all the resource information — and make it available via Analytics, so I can download and compare it. Unfortunately, the session waterfall isn't downloadable — which is a major limitation. Thank you, Marco.
Addon Builder 4.5.0. This app adds a data input automatically. This is a good thing; I then go to Add New to complete the configuration, and everything runs fine. A few days later, I thought of a better name for the input. I cloned the original, gave it a different name, and kept all the same config. I disabled the original. I noticed that I can still run the script and see the API output, but when I searched for the output, I did not find it. I started to see 401 errors instead. I went back to the data inputs, disabled the clone and enabled the original, and all is back to normal. Is there a rule about cloning data inputs in Add-on Builder that says not to clone?
Addon Builder 4.5.0, modular input using my Python code. In this example the collection interval is set to 30 seconds. I added a log to verify it is running here:

log_file = "/opt/splunk/etc/apps/TA-api1/logs/vosfin_cli.log"

The main page (Configure Data Collection) shows all the input names that I built, but looking at the event count, I see 0. When I go into the log, it shows the script running and giving me data OK. Why doesn't the event count go up every time the script runs? Is there additional configuration in inputs, props or web.conf that I need to add/edit to make it count up?
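One thing worth checking, offered as a sketch rather than a definitive answer and assuming the input was generated from Add-on Builder's Python template: writing to your own log file does not create Splunk events, and the event count only increases when the code hands events to the event writer passed into collect_events(). Something along these lines inside the generated collect_events(helper, ew):

def collect_events(helper, ew):
    # ... call the API and build the payload as before ...
    data = '{"example": "payload"}'  # placeholder for the real API response

    # Build an event with the helper and hand it to the event writer;
    # writing events this way is what increments the input's event count
    event = helper.new_event(
        data=data,
        index=helper.get_output_index(),
        sourcetype=helper.get_sourcetype()
    )
    ew.write_event(event)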
Thanks.
The fact is the same email settings were tested in UAT, and in UAT all the email alerts rightly came to the Inbox. Only from Enterprise is it landing in Junk.