All Posts

Yes, you can contact the Professional Services team about using the Caesar tool to achieve this goal.
We have a 4-node SHC connected to a deployer. For a use case, I created a simple custom app that just puts a handful of dashboards together. For ease of use, I created this directly on the SHC, and all knowledge objects replicated among the members. During the next bundle push, will the deployer delete this app from the SHC, since it has no knowledge of it? To be safe, should I also move this app under the shcluster/apps folder on the deployer? Thanks, ~Abhi
Hi, I don't know if it is the same issue, but could you check these requirements? For example, does your CPU support AVX/AVX2 instructions, and if so, are they enabled? https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/MigrateKVstore https://www.mongodb.com/docs/manual/administration/production-notes/ I hope this helps.
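If it helps to check: on Linux, the supported instruction sets show up as flags in /proc/cpuinfo. A minimal sketch (Linux-only, assuming the standard kernel flags output):

# Linux-only sketch: the kernel exposes CPU capabilities as a "flags" line
# in /proc/cpuinfo; "avx" and "avx2" appear there when the CPU supports them
# and they are enabled.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
print("avx :", "avx" in flags)
print("avx2:", "avx2" in flags)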
Hi, can you try this: index="sample_idx" $serialnumber$ log_level=info | regex message="(?:Unit[\s]+state[\s]+update[\s]+from[\s]+cook[\s]+client[\s]+target)" This tries to filter for events whose message contains the bold text, with the words separated by one or more whitespace characters. Is that what you are looking for? I'm sorry if I misunderstood.
Sorry for the delay @livehybrid. We just managed to get access to a customer's Splunk environment this week, and it was very productive! Here are our findings:
- We validated that our Splunk app works with Splunk Enterprise Security on a standalone node.
- We were able to recreate the bug in their clustering setup: we found out that not all of our Python scripts were executable, preventing execution in that context ("Can't load script" error).
- The source of the download errors was finally root-caused: Splunk Enterprise Security hijacks the Python module path order. So when we were trying to import our application's bin/utils.py in our own code, it was actually importing /opt/splunk/etc/apps/SA-ThreatIntelligence/bin/utils.py. When we overrode sys.path in our script in the customer environment, the application worked again (a sketch of the override is below). The simplest workaround is to prefix all of our script files with ipinfo_ to prevent module name collisions. We still feel that the Python module path hijacking should not be happening; not sure if this is a bug that the Splunk Platform teams should fix. If I need to file a bug somewhere, let me know!
- We noticed that we don't have to assign a bearer token to the Splunk admin user for our REST API calls to work with the Python SDK. We can use another user (e.g. ipinfo_admin) with a restricted set of permissions. We are still trying to figure out the smallest set of permissions required for things to work.
The next step is applying all of the fixes above and seeing whether that resolves our customers' problems. I'll reach out in this thread if new issues pop up.
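For anyone hitting the same collision, a minimal sketch of the kind of sys.path override we used; the paths are illustrative, and "utils" stands in for whichever module name collides:

import os
import sys

# Put this app's bin/ directory at the front of the module search path so
# "import utils" resolves to our own bin/utils.py rather than a same-named
# module shipped by another app (e.g. SA-ThreatIntelligence's bin/utils.py).
APP_BIN = os.path.dirname(os.path.abspath(__file__))
if APP_BIN in sys.path:
    sys.path.remove(APP_BIN)
sys.path.insert(0, APP_BIN)

import utils  # now unambiguously this app's bin/utils.py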
So I had help before: after a search, I could send a report on a schedule and pass a token to a Mattermost channel. I can send the token and it works, but now I am doing a search where one of the fields is a sum, for example stats sum(SizeGB), which gets the sum for a Project ID for a day. The search gets the total amount of data uploaded for a project, and the report works great; however, I want to send that figure as a token in the alert. I can send the project ID but not the sum. I have tried $testresult.sum(SizeGB)$, and I also did an eval of the sum, called it total_size, and tried that as a token, but it is just blank.
Try something like this:
| eval Date=strftime(_time, "%Y-%m-%d") | search NOT ( [ | inputlookup holidays.csv | eval HolidayDate=strftime(strptime(HolidayDate,"%Y-%m-%d")+86400,"%Y-%m-%d") | fields HolidayDate | rename HolidayDate as Date ])
I want to get all action.correlationsearch.label values into an autocomplete field of a custom UI, displaying all the correlation search names in this dropdown and filtering as the user types. I have ruled out using https://<deployment-name>.splunkcloud.com:8089/services/search/typeahead, as that endpoint does not return the field I need in the prefix. Is there a method using a Splunk endpoint with actual typeahead functionality where this is possible? I know I can use /services/saved/searches to get the rules and then implement the filtering logic myself.
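For concreteness, here is a rough sketch of that fallback: pull the labels once from /services/saved/searches and filter client-side as the user types. The host, bearer token, and the REST "search" filter string are assumptions of mine, and the client-side filter does not depend on the server-side one:

import requests

BASE = "https://<deployment-name>.splunkcloud.com:8089"  # placeholder host
resp = requests.get(
    f"{BASE}/services/saved/searches",
    params={
        "output_mode": "json",
        "count": 0,  # return all entries
        # assumed server-side narrowing; safe to remove and filter locally
        "search": "action.correlationsearch.label=*",
    },
    headers={"Authorization": "Bearer <token>"},  # placeholder token
)
resp.raise_for_status()
labels = [
    e["content"]["action.correlationsearch.label"]
    for e in resp.json()["entry"]
    if e["content"].get("action.correlationsearch.label")
]
# "typeahead": substring match against whatever the user has typed so far
typed = "risk"
suggestions = [l for l in labels if typed.lower() in l.lower()]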
The latest AppInspect tool (splunk-appinspect inspect ...) returns a failure on our add-on, which uses the cffi backend for CPython. Below is the text form of the error message (I also included a screenshot of the failure). Our app/add-on has been using this module for a long time; multiple old versions have it, and we have never seen this failure before. We got this package from Wheelodex (URL included below) to enable support for python3-cffi. URL: https://www.wheelodex.org/projects/cffi/wheels/cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl/

A default value of 25 for max-messages will be used.
Binary file standards
Check that every binary file is compatible with AArch64.
FAILURE: Found AArch64-incompatible binary file. Remove or rebuild the file to be AArch64-compatible.
File: linux_x86_64/bin/lib/_cffi_backend.cpython-39-x86_64-linux-gnu.so

Even the old versions of the app that are published and available on the Splunk app store are running into this failure. Any insights on how to get this addressed?
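For what it's worth, the flagged file is the x86_64 build of the cffi backend (it lives under linux_x86_64/ and has x86_64 in its name), while the check demands AArch64 compatibility. To confirm which architecture a given .so actually targets, here is a small sketch that reads the ELF header (elf_machine is a hypothetical helper of mine, not part of AppInspect):

import struct

# Per the ELF spec: the magic number occupies bytes 0-3 and e_machine is a
# 16-bit integer at byte offset 18 (little-endian on virtually all modern
# builds; strictly, byte 5 of e_ident gives the endianness).
MACHINES = {0x3E: "x86-64", 0xB7: "AArch64"}

def elf_machine(path):
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        return "not an ELF file"
    (machine,) = struct.unpack_from("<H", header, 18)
    return MACHINES.get(machine, hex(machine))

print(elf_machine("linux_x86_64/bin/lib/_cffi_backend.cpython-39-x86_64-linux-gnu.so"))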
@gcusello I'm getting an error: Error in 'EvalCommand': The arguments to the 'strftime' function are invalid.
My search:
| eval Date=strftime(_time, "%Y-%m-%d") | search NOT ( [ | inputlookup holidays.csv | eval HolidayDate=strftime(strptime(HolidayDate,"%Y-%m-%d")+86400)) | fields HolidayDate ]
Thanks @livehybrid, so there does not seem to be a solution. The issue we are trying to address is that if someone gets their hands on a HEC token, they could just send data to our Splunk Cloud instance via that token. We have set specific indexes for specific tokens, so that should limit this, but we are just trying to find a way to identify what is sending to a specific HEC token so we can monitor this. Do you have any more info on how you added a custom field into the HEC payload?
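From what I understand, a custom field in the HEC payload would mean each sender putting an identifying value in the top-level "fields" object of the event JSON, which HEC stores as indexed fields. A rough sketch of my understanding; the URL, token, and the hec_sender field name are placeholders of my own, not anything Splunk-defined:

import requests

payload = {
    "event": {"message": "hello from app-x"},
    "sourcetype": "_json",
    # top-level "fields" become indexed fields on the event, so each sender
    # can tag itself; searchable later as hec_sender::app-x-prod
    "fields": {"hec_sender": "app-x-prod"},
}
resp = requests.post(
    "https://http-inputs-<stack>.splunkcloud.com/services/collector/event",  # placeholder URL
    headers={"Authorization": "Splunk <hec-token>"},  # placeholder token
    json=payload,
    timeout=10,
)
resp.raise_for_status()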
Sorry but the ACS API will not help in this case. 
I have been receiving HTTP events from an invalid token and want to trace them back to the source. However, the HEC is behind an NGINX load balancer, so I need to configure the HEC to use proxied_ip to find the original IP.

connection_host = [ip|dns|proxied_ip|none]
* "proxied_ip" checks whether an X-Forwarded-For header was sent (presumably by a proxy server) and if so, sets the host to that value. Otherwise, the IP address of the system sending the data is used.
* No default.

I would also like to apply it to every token, as all HEC ingest goes through the LB. However, it looks like this option is only available at a per-token level (HTTP Event Collector (HEC) - Local stanza for each token | inputs.conf). Nothing changed when I set it under [http]. Seems like this was implemented incorrectly...
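For reference, the per-token form that does seem to take effect would look like this in inputs.conf (the stanza name and token value are placeholders):

[http://my_lb_token]
token = <token-guid>
connection_host = proxied_ip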
Hi @bapun18 I think this ultimately comes down to how you clone them, and when. If you clone a corrupted installation, then there's a good chance you will end up with a corrupted clone of it. If you take a clone and have it waiting offline, then it could be hugely out of date. The success of this approach really depends on your Splunk architecture and how your configuration and data are managed. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
Hi @msmadhu When you say it isn't working, is Splunk starting up but you are not seeing your certificate presented? Or are there errors starting up Splunk? I would have a look at $SPLUNK_HOME/var/log/splunk/splunkd.log for any errors relating to SSL, as this might give some direction on where to go next. Feel free to post some logs here for us to look at to try and help further. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
If you're still experiencing issues, please take a look here https://splunk.my.site.com/customer/s/article/KV-store-status-failed-after-upgrade-to-9-4 The suggestion of concatenating CA certs resolved the errors and Splunk was able to upgrade/initialize kvstore after a restart of splunkd.
As I mentioned, I already have the sslcertificate.pem file and have followed the steps outlined in the section "Install and configure certificates on the Splunk Enterprise management port", but it is not working.
Adding this because we ran into a similar problem (not the same upgrade version) but were able to resolve the issue. The KV store wasn't liking our certificates, so we stopped Splunk, removed the server.pem file and $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/mongod.lock, and started Splunk again; that fixed our issue. Hopefully it helps anyone else stumbling across this thread.