All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I want to get all action.correlationsearch.label values into an autocomplete field of a custom UI, displaying all the correlation search names in the dropdown and filtering as the user types. I have ruled out using https://<deployment-name>splunkcloud.com:8089/services/search/typeahead, as that endpoint does not expose the field I need in the prefix. Is there a Splunk endpoint with actual typeahead functionality where this is possible? I know I can use /services/saved/searches to get the rules and then implement the filtering logic myself.
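Since /services/search/typeahead won't help here, a minimal sketch of the workaround described above: fetch just the label field from /services/saved/searches once, then filter client-side as the user types. The hostname, token handling, and the `typeahead` helper below are placeholders/assumptions, not anything Splunk-provided.

```python
import json
import urllib.parse
import urllib.request

# Placeholder deployment URL -- substitute your own.
BASE = "https://example.splunkcloud.com:8089"

def fetch_correlation_labels(token):
    """Pull only action.correlationsearch.label for every saved search.
    The f= parameter restricts which fields the REST API returns."""
    qs = urllib.parse.urlencode({
        "output_mode": "json",
        "count": 0,  # 0 = return all entries, no paging limit
        "f": "action.correlationsearch.label",
    })
    req = urllib.request.Request(
        f"{BASE}/services/saved/searches?{qs}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        entries = json.load(resp)["entry"]
    # Keep only saved searches that actually carry a correlation search label.
    return [e["content"].get("action.correlationsearch.label")
            for e in entries
            if e["content"].get("action.correlationsearch.label")]

def typeahead(labels, prefix):
    """Case-insensitive substring filter for the autocomplete widget."""
    p = prefix.lower()
    return [label for label in labels if p in label.lower()]

# Client-side filtering demo with sample labels:
labels = ["Brute Force Detected", "Excessive Failed Logins", "DNS Exfil"]
print(typeahead(labels, "fail"))  # ['Excessive Failed Logins']
```

Caching the label list in the UI and running `typeahead` on each keystroke avoids hitting the management port on every character typed.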
The latest AppInspect tool (splunk-appinspect inspect ...) returns a failure on our add-on, which uses the cffi backend for CPython. Below is the text of the error message (a screenshot of the failure was also included). Our app/add-on has used this module for a long time; multiple old versions include it, and we have never seen this failure before. We got this package from Wheelodex (URL below) to enable python3-cffi support. URL: https://www.wheelodex.org/projects/cffi/wheels/cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl/

A default value of 25 for max-messages will be used.
Binary file standards
Check that every binary file is compatible with AArch64.
FAILURE: Found AArch64-incompatible binary file. Remove or rebuild the file to be AArch64-compatible.
File: linux_x86_64/bin/lib/_cffi_backend.cpython-39-x86_64-linux-gnu.so

Even old versions of the app that are published and available on the Splunk app store run into this failure. Any insights on how to get this addressed?
@gcusello I'm getting an error: Error in 'EvalCommand': The arguments to the 'strftime' function are invalid. My search: | eval Date=strftime(_time, "%Y-%m-%d") | search NOT ( [ | inputlookup holidays.csv | eval HolidayDate=strftime(strptime(HolidayDate,"%Y-%m-%d")+86400)) | fields HolidayDate ]
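A sketch of a likely fix, assuming HolidayDate in holidays.csv is in %Y-%m-%d format and the intent is to exclude the day after each holiday: strftime() requires two arguments (an epoch time and a format string), the parentheses in the subsearch are unbalanced, and the subsearch's output field should be named Date so it matches the outer field:

```spl
| eval Date=strftime(_time, "%Y-%m-%d")
| search NOT
    [ | inputlookup holidays.csv
      | eval Date=strftime(strptime(HolidayDate, "%Y-%m-%d") + 86400, "%Y-%m-%d")
      | fields Date ]
```

If the intent is to exclude the holiday itself rather than the following day, drop the + 86400.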
Thanks @livehybrid, so there does not seem to be a solution. The issue we are trying to address: if someone gets their hands on a HEC token, they could send data to our Splunk Cloud instance via that token. We have set specific indexes for specific tokens, which should limit this, but we are trying to find a way to identify what is sending to a specific HEC token so we can monitor it. Do you have any more info on how you added a custom field into the HEC payload?
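One pattern for this (a sketch, not an official Splunk mechanism: the field name hec_sender and the sourcetype below are our own conventions) is to have every cooperating sender stamp an identifying indexed field into the fields object of the HEC JSON payload, which the /services/collector/event endpoint supports; you can then search and monitor per sender.

```python
import json

def hec_event(event, sender_id, sourcetype="myapp:events"):
    """Build a HEC /services/collector/event payload carrying an
    identifying indexed field so searches can attribute each sender.
    'hec_sender' is our own field name, not anything Splunk-defined."""
    return json.dumps({
        "event": event,
        "sourcetype": sourcetype,
        "fields": {"hec_sender": sender_id},  # becomes an indexed field
    })

payload = hec_event({"msg": "disk full"}, sender_id="billing-app-prod")
print(payload)
```

You can then search with indexed-field syntax, e.g. index=myindex hec_sender::billing-app-prod. Note this only identifies cooperative senders; anyone who stole the token can omit or forge the field, so it complements rather than replaces per-token index restrictions.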
Sorry but the ACS API will not help in this case. 
I have been receiving HTTP events from an invalid token and want to trace them back to the source. However, the HEC is behind an NGINX load balancer, so I need to configure the HEC to use proxied_ip to find the original IP.

connection_host = [ip|dns|proxied_ip|none]
* "proxied_ip" checks whether an X-Forwarded-For header was sent (presumably by a proxy server) and if so, sets the host to that value. Otherwise, the IP address of the system sending the data is used.
* No default.

I would also like to apply it to every token, since all HEC ingest goes through the LB. However, it looks like this option is only available at a per-token level: HTTP Event Collector (HEC) - Local stanza for each token | inputs.conf. Nothing changed when I set it under [http]. Seems like this was implemented incorrectly...
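In the meantime, a per-token stanza sketch of the setting that the docs quoted above describe (the token stanza name is a placeholder; this would need repeating for each token):

```
# inputs.conf -- "my_token" is a placeholder stanza name
[http://my_token]
connection_host = proxied_ip
```

This only takes effect for requests that actually carry an X-Forwarded-For header from the load balancer; requests without it fall back to the connecting IP.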
Hi @bapun18 I think this ultimately comes down to how you clone them, and when. If you clone a corrupted installation, there's a good chance you will end up with a corrupted clone of it. If you take a clone and have it waiting offline, it could be hugely out of date. The success of this approach really depends on your Splunk architecture and how your configuration and data are managed. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards, Will
Hi @msmadhu When you say it isn't working, is Splunk starting up but you are not seeing your certificate presented? Or are there errors starting up Splunk? I would have a look at $SPLUNK_HOME/var/log/splunk/splunkd.log for any errors relating to SSL, as this might give some direction on where to go next. Feel free to post some logs here for us to look at to try and help further. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards, Will
If you're still experiencing issues, please take a look here: https://splunk.my.site.com/customer/s/article/KV-store-status-failed-after-upgrade-to-9-4. The suggestion of concatenating CA certs resolved the errors, and Splunk was able to upgrade/initialize the KV store after a restart of splunkd.
As I mentioned, I already have the sslcertificate.pem file and have followed the steps outlined in the section "Install and configure certificates on the Splunk Enterprise management port", but it is not working.
Adding this because we ran into a similar problem (not the same upgrade version) but were able to resolve the issue. The KV store wasn't liking our certificates, so we stopped Splunk, removed the server.pem file and $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/mongod.lock, and started Splunk again; that fixed our issue. Hopefully it helps anyone else stumbling across this thread.
Which part(s) of that document are you struggling to understand?  At what point do the steps not work for you?
Hi, we have a cluster of 3 search heads and 3 indexers, in a 2+1 primary and DR setup for both the indexers and the search heads. If a DR indexer and a search head got corrupted, instead of creating a new VM, installing fresh Splunk on it, and adding it to the search head and indexer clusters, is there a chance we can clone the existing search head and indexer VMs to new ones and have them join the cluster?
I have a follow-up on this, or should I start again? I can send the token and it works, but I am doing a search where one of the fields is a sum, for example: stats sum(SizeGB). The search gets the total amount of data uploaded for a project, and the report works great; however, I want to send the figure as a token in the alert. I can send the project ID but not the sum. I have tried $testresult.sum(SizeGB)$, and I also did an eval of the sum, called it total_size, and tried that as a token, but it is just blank.
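For what it's worth, alert tokens resolve fields by exact name, and $result.fieldname$ only reads the first row of the results, so a field literally named sum(SizeGB) won't resolve cleanly. A sketch of the usual approach (the name total_size is arbitrary):

```spl
... | stats sum(SizeGB) as total_size
```

Then reference $result.total_size$ (note the prefix is result, not testresult) in the alert action, and check that the renamed field survives to the final results table and that the figure you want is in the first row.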
I have an SSL certificate (.pem) provided by my organization and I need to configure it on a Splunk HF. Please assist with any document referrals or steps. I have already gone through the Splunk documentation below but had no luck: https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/ConfigureandinstallcertificatesforLogObserver
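For the management port (8089) on the HF, the relevant settings live in server.conf. A minimal sketch, assuming the paths are placeholders and your PEM contains the server certificate followed by its private key:

```
# $SPLUNK_HOME/etc/system/local/server.conf -- paths and password are placeholders
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/sslcertificate.pem
sslPassword = <private-key password, if the key is encrypted>
sslRootCAPath = /opt/splunk/etc/auth/mycerts/ca.pem
```

A restart of splunkd is needed after the change; any SSL errors on startup will show up in $SPLUNK_HOME/var/log/splunk/splunkd.log.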
Hello @tscroggins, I have a problem with your SPL request because some results are truncated. With your help, I tested this:

index=aws_app_corp-it_datastage earliest=-5d@d latest=@d | spath input=_raw | search PROJECTNAME="*" INVOCATIONID="*" RUNMAJORSTATUS="*" RUNMINORSTATUS="*" | eval status=case( RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW", "Completed with Warnings", RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK", "Successful Launch", RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWF", "Failure", RUNMAJORSTATUS="STA" AND RUNMINORSTATUS="RUN", "In Progress", 1=1, "Unknown") | eval tmp=JOBNAME."|".PROJECTNAME."|".INVOCATIONID."|".strftime(_time, "%Y-%m-%d %H:%M:%S") | eval date=strftime(_time, "%Y-%m-%d") | eval value=if(status=="Unknown", "Unknown", "start time: ".coalesce(strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), "").urldecode("%0a"). if(status=="In Progress", "Running", "end time: ".coalesce(strftime(strptime(RUNENDTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), ""))).urldecode("%0a").status | xyseries tmp date value | eval tmp=split(tmp, "|"), Job_Name=mvindex(tmp, 0), Project_Name=mvindex(tmp, 1), Geographical_Zone=mvindex(tmp, 2) | fields - tmp | table Job_Name Project_Name Geographical_Zone * | search Geographical_Zone="EMEA" Job_Name="*" Project_Name="*" | fillnull value="Unknown"

This returns 1306 results.

With the first request I sent you:

index=aws_app_corp-it_datastage earliest=-5d@d latest=@d | spath input=_raw | eval StartTime=strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q") | eval EndTime=strptime(RUNENDTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q") | eval Date=strftime(_time, "%Y-%m-%d") | eval Geographical_Zone=INVOCATIONID | eval Duration=round(abs(EndTime - StartTime)/60, 2) | eval Status = case( RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW", "Completed with Warnings", RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK", "Completed", RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWF", "Failure", RUNMAJORSTATUS="STA" AND RUNMINORSTATUS="RUN", "In Progress", 1=1, "Unknown") | eval StartTimeFormatted=strftime(StartTime, "%H:%M:%S.%1N") | eval EndTimeFormatted=strftime(EndTime, "%H:%M:%S.%1N") | eval StartTimeDisplay=if(isnotnull(StartTimeFormatted), "Start time: ".StartTimeFormatted, "Start time: N/A") | eval EndTimeDisplay=if(isnotnull(EndTimeFormatted), "End time: ".EndTimeFormatted, "End time: N/A") | table JOBNAME PROJECTNAME Geographical_Zone _time Date RUNSTARTTIMESTAMP StartTimeDisplay RUNENDTIMESTAMP EndTimeDisplay Status | rename JOBNAME as Job_Name, PROJECTNAME as Project_Name | search Job_Name="*" Geographical_Zone="EMEA" Date="*" Project_Name="*" Status="*" | sort -Date | table Job_Name Project_Name Geographical_Zone Date StartTimeDisplay EndTimeDisplay Status | dedup Job_Name Project_Name Geographical_Zone Date StartTimeDisplay EndTimeDisplay Status

This returns 2352 results, so it doesn't work because some failed jobs don't appear, for example.
The query is not returning the expected result. I ran it for the last 90 days but did not get the result.
Hey @Racer73b! Found this one pretty frustrating myself. There are lots of prior posts on the topic, and I was eventually able to figure it out. You need to first create unique fields for your value thresholds in your search. See the example below:

| makeresults
| eval ImpactLevel="45,55,85"
| makemv delim="," ImpactLevel
| mvexpand ImpactLevel
| eval "Low Impact"=if('ImpactLevel'<50,'ImpactLevel',null())
| eval "Medium Impact"=if('ImpactLevel'>49 AND 'ImpactLevel'<80,'ImpactLevel',null())
| eval "High Impact"=if('ImpactLevel'>79,'ImpactLevel',null())
| fields - ImpactLevel

Then, in your JSON, make those the fields you want to assign colors to, and ensure that stackMode is set to "stacked" to ignore the nulls.

...
"y": "> primary | frameBySeriesNames('Low Impact','Medium Impact','High Impact')",
"seriesColorsByField": {
  "Low Impact": "#73BB8B",
  "Medium Impact": "#F1A657",
  "High Impact": "#dc4e41"
},
"stackMode": "stacked"
...

You will likely have to do a little bit of tweaking to get it working the way you want, but hopefully this gets you on your way. Cheers!
My raw events:

feb 01 10:24:12 myhostname 2025-02-01 10:24:12,999, myhostname, audit.admin.com.cd.etc info
feb 01 10:24:12 myhostname 2025-02-01 10:24:12,999, myhostname, audit.system.com.cd.etc info

My inputs.conf:

sourcetype = rsa:syslog

My props.conf (I would like to change the sourcetype based on "admin" or "system", depending on the raw event):

[rsa:syslog]
TRANSFORMS-change_sourcetype = change_admin_sourcetype, change_system_sourcetype

My transforms.conf:

[change_admin_sourcetype]
DEST_KEY = MetaData:Sourcetype
REGEX = \,\s+audit\.admin
FORMAT = sourcetype::rsa:admin

[change_system_sourcetype]
DEST_KEY = MetaData:Sourcetype
REGEX = \,\s+audit\.system
FORMAT = sourcetype::rsa:system