All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I am trying to integrate Log4j with Splunk as shown below, and I am getting this error:

Log4j2-TF-1-AsyncLoggerConfig-1 ERROR Unable to send HTTP in appender [httptest] java.io.IOException: 401 Unauthorized

<Http name="httptest" url="https://prd-p-xtpce.splunkcloud.com:8088/services/collector/raw">
<Property name="Authorization" value="Splunk xxxxxxx-998c-4547-beea-xxxxxx"/>
<Property name="disableCertificateValidation" value="true"/>
<JsonLayout properties="true"/>
</Http>

The token is valid, and I am able to post data with it from Postman.

Thanks
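A hedged diagnostic sketch, assuming the _internal index is searchable on the HEC side (on Splunk Cloud this may require support access): failed HEC requests are normally logged by the HttpInputDataHandler component, which should show whether the token is being rejected or the request never reaches the collector.

index=_internal sourcetype=splunkd component=HttpInputDataHandler
| table _time, log_level, _raw

If nothing shows up there while Postman posts succeed, comparing the Authorization header Postman actually sends with the Property element above would be the next thing to check.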
I want to change the color of the title and description to black. I can't change it in Dashboard Studio, so I think I need to change it in the original source.

[original source]

Help me ...... T. T
I configured a 4-member search head cluster successfully with a captain and a deployer server. However, when I go to deploy a simple app such as GlobalBanner, it isn't working per this doc: Use the deployer to distribute apps and configuration updates.

I create the following subdirectory and conf file on my deployer server:
C:\Program Files\Splunk\etc\shcluster\apps\GlobalBanner\local\global-banner.conf

and I push the bundle using this command from the deployer server:
splunk apply shcluster-bundle -target https://MySearchHeadCaptain.MyDomain.net:8089

The file ends up in the following directory on the search head cluster members:
C:\Program Files\Splunk\etc\apps\GlobalBanner\default\global-banner.conf

Why does it end up in the default subfolder?

My deployer server is set up with the push mode set to full, like so:
[shclustering]
pass4SymmKey = $7$SoMeJuNk=
shcluster_label = shcluster1
deployer_push_mode = full

1,000 thanks to anyone who can set me straight.
Hey, I want to set up the Stream add-on in Splunk for a distributed environment where I have one heavy forwarder, one indexer, and one search head, and I want to set it up for forwarding DNS logs only. Can someone tell me how to configure and set this up?
Hi, I am trying to upgrade my Splunk environment from 7.x to 8.1.9. I want to make sure that my universal forwarder, which is on 6.5.3, is compatible with 8.1.9.

Thanks, Suresh
Hello everyone, I have a search for after-hours logins between 6 PM and 6 AM. Right now I have event codes 4625 and 4624 with logon types 2 and 3. This alert picks up Windows automated services, but I was wondering whether there is a way to have this search pick up only user accounts instead of Windows automated services. My search string is:

index=(myindexname) source="wineventlog:security" Account_Name=* EventCode=4625 OR EventCode=4624 Logon_Type=2 OR Logon_Type=2 Logon_Process=Kerberos earliest=-7@d-6h latest=-7d@d+6h
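A minimal sketch of one way to filter the automated accounts out, assuming the usual Windows convention that computer and service accounts end in $ (plus the built-in SYSTEM, LOCAL SERVICE, and NETWORK SERVICE names) and keeping myindexname as a placeholder; the hour filter replaces the earliest/latest arithmetic so the 6 PM to 6 AM window works on any day:

index=myindexname source="wineventlog:security" (EventCode=4624 OR EventCode=4625) (Logon_Type=2 OR Logon_Type=3) Account_Name=* NOT Account_Name="*$" NOT (Account_Name="SYSTEM" OR Account_Name="LOCAL SERVICE" OR Account_Name="NETWORK SERVICE" OR Account_Name="ANONYMOUS LOGON")
| eval hour=tonumber(strftime(_time, "%H"))
| where hour>=18 OR hour<6
| stats count by Account_Name, host, Logon_Type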
Failed to contact license manager: reason='Unable to connect to license manager=https://SplunkInstance01.MyDomain.net:8089 Error connecting:error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.', first failure time= [...]

First, mea culpa for asking this, as I am sure it has been asked before, but I couldn't quite understand how to use the openssl verify command. When I try to run it, I get this error:

C:\Program Files\Splunk\etc\auth\distServerKeys>openssl verify -CAfile private.pem trusted.pem
WARNING: can't open config file: C:\\jnkns\\workspace\\build-home/ssl/openssl.cnf
Error loading file private.pem

I also tried to run it from the bin subdirectory, home of the openssl utility:

C:\Program Files\Splunk\bin>openssl verify -CAfile "C:\Program Files\Splunk\etc\auth\distServerKeys\private.pem" "C:\Program Files\Splunk\etc\auth\distServerKeys\trusted.pem"
WARNING: can't open config file: C:\\jnkns\\workspace\\build-home/ssl/openssl.cnf
Error loading file C:\Program Files\Splunk\etc\auth\distServerKeys\private.pem

I suspect this private/public key pair may still be the stale default self-signed combination, causing Splunk to frown upon it. What is perplexing to me is that it works on the other 20-plus servers, so I throw myself upon your mercy for help.

Please note we have over two dozen Splunk servers running version 9.0.0, all on Windows platforms, and this is the only server getting this error. All servers use our own internal Microsoft Enterprise CA certificates based on a two-tier (Root CA / Intermediate CA) architecture, so I think I know what I am doing, ha ha, at least in terms of certing.
I am setting up an alert to notify when a message is received more than 100 times in a week. I figured it out for the total, but not within a one-week time range. Any help is appreciated.

'Bitgo webhook error' | stats count as Bitgo_Webhook_Errors | where Bitgo_Webhook_Errors >=100
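A minimal sketch, assuming the week means the last seven whole days and the alert runs on a weekly schedule; the time range is pinned inside the search, and double quotes are used for the phrase because single quotes in SPL refer to field names:

"Bitgo webhook error" earliest=-7d@d latest=@d
| stats count as Bitgo_Webhook_Errors
| where Bitgo_Webhook_Errors >= 100

Setting the alert's own time-range picker to "Last 7 days" instead of hard-coding earliest/latest would achieve the same thing.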
Hi! Long-time listener, first-time caller here. Our custom search command needs some slow initialization, which we would prefer to skip on repeated calls. Is there a way to keep the process alive after the first invocation and only call dispatch() afterwards? Something that does what scripttype = persist does for REST endpoints. My understanding is that the protocol would allow this. Calling dispatch() in a loop sadly doesn't work - that would have been too easy, huh? (It does work insofar as the command finishes; it's just that the script gets started again the next time.)
New to Splunk. The add-ons for VMware and other virtual products seem to be components that were once collective packages. There are 30+ options which perform separate functions. Which ones are necessary to get a general overview of my environment, and which ones are still receiving support? I'm just overwhelmed by the number of options and not sure which ones are applicable.
Hi, if I want to show the percentage, I use:

<option name="charting.chart.showPercent">true</option>

but if I want to display the absolute value in the pie chart, I tried the following and it does not work:

<option name="charting.chart.showValue">true</option>

Thanks for the help.
Hi, I have this SPL query but am getting this error: Error in 'rename' command: Usage: rename [old_name AS/TO/-> new_name]+. Any ideas why, or how to resolve this, please?

| tstats count where index=os earliest=-7d latest=-3h by host, _time span=3h
| stats median(count) as median by host
| join host [| tstats count where index=os earliest=-3h by host]
| eval percentage_diff=((count/median)*100)-100
| where percentage_diff<-5 OR percentage_diff>5
| sort percentage_diff
| rename median as “Median Event Count Past Week”, count as “Event Count of Events Past 3 Hours”, percentage_diff as “Percentage Difference”
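One guess from the pasted query alone: the rename arguments are wrapped in curly "smart" quotes rather than plain ASCII quotes, and rename does not treat those as quoting, which produces exactly this usage error. A sketch of the same line with straight quotes:

| rename median as "Median Event Count Past Week", count as "Event Count of Events Past 3 Hours", percentage_diff as "Percentage Difference"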
I am using a query and getting the logs, but the error message string for every error comes back as "**Setting up error code and description**". I need to extract the errors whose message is "error in calling tarik services", but it is not being extracted, and I don't know how to use rex. The command I am using is:

index=dep_ago Appid=APP-0431 prod "error"

but it does not return the "error in calling tarik services" error, or any other string; only "**Setting up error code and description**" comes back with all the details in the logs. Please help.
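A minimal sketch, assuming the phrase appears verbatim in the raw events; the simplest option is to search for the phrase itself:

index=dep_ago Appid=APP-0431 prod "error in calling tarik services"

If you also want the message pulled into its own field, a rex along these lines might work (error_message is just an illustrative field name, and the pattern assumes messages of the form "error in calling <something> services"):

index=dep_ago Appid=APP-0431 prod "error"
| rex field=_raw "(?<error_message>error in calling \w+ services)"
| where isnotnull(error_message)
| table _time, error_message, _raw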
Hi, I have created an advanced threat protection incidents correlation search which is generating notable events. How can I make it reduce the number of notables it generates? Thanks
I updated an alert description using the REST API (port 8089). When I use the API to list the description, it shows the updated description. When I look at the alert using the web page (port 8000), it still has the old version. There are multiple instances of Splunk and a load balancer, but I do not know the specifics. I always use the same IP address to access Splunk.

For the API access I use a token under my username. Is my token the problem? My user has enough rights to create and change alerts, although when I list all alerts using | rest /servicesNS/-/-/saved/searches I get a warning:

Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability

Thanks.
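A hedged sketch for narrowing this down from the search bar (YourAlertName is a placeholder): listing the saved search with its app and owner shows whether the web UI might be displaying a different copy of the alert - one saved in another app or owner namespace - than the one the API call updated, which would explain the mismatch better than a token problem.

| rest /servicesNS/-/-/saved/searches
| search title="YourAlertName"
| table splunk_server, title, description, eai:acl.app, eai:acl.owner, eai:acl.sharing, updated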
I have a problem where not all values are showing up in a chart, and the values that do show up are rather flatlined. For example, here is the data I gathered for this chart:

[screenshot]

However, none of the earlier values show up in the chart. I have remade the index, and the data is coming in correctly from the CSV files. Can anyone help me identify what's wrong? Many thanks.
We have some feeds with host="unassigned". The following tstats search will not return any results for some feeds, but it works for other feeds:

| tstats count where index=aindex by host, sourcetype, index
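A hedged comparison sketch, keeping aindex as the placeholder index name: running a plain event search over the same time range alongside the tstats version shows whether the events for the silent feeds are present at all, and what their host and sourcetype values actually look like.

index=aindex
| stats count by host, sourcetype, index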
Hello. I'm fairly new to Splunk and SPL, so bear with me here. I have the following scenario: I have an existing lookup file that was created by a search and is then updated daily by a similar saved search. To sum it up: run a search, append the contents of the lookup file, remove old events, and finally output the data to the lookup file again, overwriting its old contents. If the search, after appending the lookup file data and cleaning up, results in zero events, I still want the lookup file to remain.

Now, when reading the Splunk docs I get a bit confused about the create_empty and override_if_empty optional arguments. For create_empty, the docs state "If set to true and there are no results, a zero-length file is created." Since outputlookup normally overwrites the file if it already exists, is this the case even when writing no results? Same question for override_if_empty, which seems to do something similar: if override_if_empty is set to false, does outputlookup overwrite the lookup file with a zero-length list when the search has no results?

My saved search to update the lookup file looks approximately like this:

| "get external data"
| fields blah blah blah
| fields - _*
| rename blah blah blah
| eval time=now()
| inputlookup "my existing lookup file" append=true
| sort 0 - time
| where time > relative_time(now(), "-7d@d") OR isnull(time)
| outputlookup "my existing lookup file"

So do I need to add create_empty=true and override_if_empty=false? Or do I just need one of them, and if so, which one? Grateful for any clarification on this matter. Thanks in advance.
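For what it's worth, my reading of the docs is that override_if_empty is the safeguard here: with override_if_empty=false, an empty result set leaves the existing lookup file untouched, while create_empty only decides whether a zero-length file gets written when there is nothing to preserve. Under that reading, a sketch of just the final line, keeping the placeholder file name:

| outputlookup override_if_empty=false "my existing lookup file"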
I am looking for an integration of AppDynamics with BMC Event Manager. My AppDynamics controller is hosted on SaaS. How can I post event data to the BMC Event Manager tool? Please share if anyone has done the same integration in a SaaS environment.
Could someone please let me know where I'm going wrong in my query?

| spath service_roles{} output=service_role
| stats count by cluster_name date service_role
| spath input=service_role service output=service_name
| spath input=service_role role_status{} output=status
| rex max_match=0 field=_raw "hostname: <(?<hostname>.*)> type: <(?<type>.*)>"
| eval status=mvzip(hostname,type)
| mvexpand status
| rex field=status "(?<hostname>[^~]+)~(?<type>[^~]+)"
| dedup cluster_name, service_name
| table cluster_name, service_name, hostname, type
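Two things stand out without knowing the data: the early stats count by keeps only the by-fields, so the later rex against _raw (and the later spath on service_role) has nothing left to work with, and mvzip joins with a comma by default while the later rex splits on ~. A hedged reordering sketch that keeps the original field names and passes ~ as the mvzip delimiter:

| spath service_roles{} output=service_role
| mvexpand service_role
| spath input=service_role service output=service_name
| rex max_match=0 field=_raw "hostname: <(?<hostname>[^>]+)> type: <(?<type>[^>]+)>"
| eval pair=mvzip(hostname, type, "~")
| mvexpand pair
| rex field=pair "(?<hostname>[^~]+)~(?<type>[^~]+)"
| dedup cluster_name, service_name, hostname
| table cluster_name, service_name, hostname, type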