
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi, here are the endpoints you must use. Select the correct one based on your SCP instance type: Configure HTTP Event Collector on Splunk Enterprise. r. Ismo
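For what it's worth, on a paid Splunk Cloud Platform stack the HEC endpoint normally uses the http-inputs- prefix on port 443 rather than 8088. A minimal test call, with <org> and <hec-token> as placeholders, might look like:

  curl "https://http-inputs-<org>.splunkcloud.com:443/services/collector/event" \
    -H "Authorization: Splunk <hec-token>" \
    -d '{"event": "hello world", "sourcetype": "manual_test"}'

Free trial stacks use a different host prefix and port, so check the endpoint table in the documentation linked above for your instance type.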
Hi, you have told us your solution to your issue, but what is the actual issue, and especially why are you sending the same event to two separate clusters? That also means duplicate license costs. Basically, you could do this by replicating the sourcetype and then removing this field from the replicated copy. But maybe there is a better solution once we understand your real issue? r. Ismo
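To illustrate that clone-and-strip idea, here is a rough, unverified sketch with hypothetical names (sourcetype my_sourcetype, tcpout groups clusterA and clusterB defined in outputs.conf, and field X appearing as a key=value pair in the raw event). Whether SEDCMD on the cloned sourcetype is the right place to strip the field depends on your pipeline, so treat this as a starting point to test rather than a finished config:

  # transforms.conf (heavy forwarder)
  [clone_for_clusterB]
  REGEX = .
  CLONE_SOURCETYPE = my_sourcetype_b

  [route_original_to_clusterA]
  REGEX = .
  DEST_KEY = _TCP_ROUTING
  FORMAT = clusterA

  [route_clone_to_clusterB]
  REGEX = .
  DEST_KEY = _TCP_ROUTING
  FORMAT = clusterB

  # props.conf (heavy forwarder)
  [my_sourcetype]
  TRANSFORMS-clone_and_route = clone_for_clusterB, route_original_to_clusterA

  [my_sourcetype_b]
  TRANSFORMS-route = route_clone_to_clusterB
  SEDCMD-strip_x = s/X=\S+\s?//g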
Presumably, you are talking about a column chart. The colours only apply to the series, so unless you have different fields with the names you provided, the columns for the series will all be the same colour. If you could provide details of the search you are using in your chart, we might be able to help you.
I use the good old grep command when I need a list of the indexes referenced in all inputs across all folders, like this: splunk btool inputs list --debug | grep index
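As a small variation (just a sketch), a slightly tighter pattern limits the matches to actual index assignments, and the --debug flag still shows which file each one comes from:

  splunk btool inputs list --debug | grep -E 'index\s*='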
Hi, are you sure that this is the correct outputs.conf definition for your host to send events into SCP? Usually this is named something like 100_<your splunk stack name>. You can check the effective configuration with: splunk btool outputs list tcpout --debug This shows what those configurations are and where they are defined. Basically, you should use the UF configuration app that you downloaded from your SCP stack. r. Ismo
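For reference, the outputs.conf inside that downloaded credentials app usually looks roughly like the sketch below (placeholder stack name; the exact server list and TLS settings come from the app itself):

  [tcpout]
  defaultGroup = splunkcloud

  [tcpout:splunkcloud]
  server = inputs1.<your-stack>.splunkcloud.com:9997, inputs2.<your-stack>.splunkcloud.com:9997
  # TLS/certificate settings shipped in the 100_<your-stack>_splunkcloud app go here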
I have created one dashboard and am trying to add different field colors. I navigated to "Source" and tried updating the XML code as "charting.fieldColors">{"Failed Logins":"#FF9900", "NonCompliant_Keys":"#FF0000", "Successful Logins":"#009900", "Provisioning Successful":"#FFFF00"</option>", but all columns are still showing as purple. Can someone help me with it?
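For reference, the complete Simple XML option would normally be written like the line below; the keys have to match the series names your search actually produces, which is the point made in the reply about series colours:

  <option name="charting.fieldColors">{"Failed Logins": 0xFF9900, "NonCompliant_Keys": 0xFF0000, "Successful Logins": 0x009900, "Provisioning Successful": 0xFFFF00}</option>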
Hosted by AWS. Yes, port 443 works.
We are looking to configure the Splunk Add-on for Microsoft Cloud Services to use a Service Principal as opposed to a client key. The documentation for the add-on does not provide insight into how one would configure it to work with a Service Principal. Does the Splunk Add-on for Microsoft Cloud Services support service principals for authentication?
I have a heavy forwarder that sends the same event to two different indexer clusters. Now this event has a new field "X" that I only want to see in one of the indexer clusters. I know that in props.conf I can configure the sourcetype to remove the field, but that would apply at the sourcetype level. Is there any way to remove it on one copy and not the other? Alternatively, I could make the props.conf change at the indexer level instead.
Try this query: index=test | stats count(eval(status>399)) as Errors, count as Total_Requests, values(Status) as list_of_Status by consumers | eval Error_Percentage=((Errors/Total_Requests)*100)
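If list_of_Status should contain only the failing codes rather than every status seen, one possible variation (assuming the field is consistently named status; adjust the case to match your data) is:

  index=test
  | stats count(eval(status>399)) as Errors, count as Total_Requests, values(eval(if(status>399, status, null()))) as list_of_Status by consumers
  | eval Error_Percentage=((Errors/Total_Requests)*100)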
Nit: the instance is *managed* by Splunk, but it is *hosted* by either AWS or GCP.  Contact your Splunk admin if you don't know which host you have. If you're not on a trial account then the port number will be 443. Make sure the computer you are connecting from is on your Splunk Cloud Allowed IP List.
We have a new SH node which we are trying to add to the search head cluster; we updated the configs in shcluster config and other configs. After adding this node to the cluster, we now have two nodes as part of the SH cluster. We can see both nodes up and running as part of the cluster when we check with "splunk show shcluster-status". But when we check the KV store status with "splunk show kvstore-status", the old node shows as captain, while the newly built node is not joining this cluster and gives the below error in the logs.

Error in splunkd.log on the search head which has the issue:
12-04-2024 16:36:45.402 +0000 ERROR KVStoreBulletinBoardManager [534432 KVStoreConfigurationThread] - Local KV Store has replication issues. See introspection data and mongod.log for details. Cluster has not been configured on this member. KVStore cluster has not been configured

We have configured all the cluster-related info on the newly built search head server (server.conf) and don't see any configs missing. We also see the below error on the SH UI messages tab:
Failed to synchronize configuration with KVStore cluster. Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: search-head01:8191; the following nodes did not respond affirmatively: search-head01:8191 failed with Error connecting to search-head01:8191 (172.**.***.**:8191) :: caused by :: compression disabled.

Has anyone else faced this error before? We need some support here.
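For context, the status checks referenced above, plus the commands commonly used to rebuild a single member's KV store state (a sketch only; splunk clean kvstore --local wipes that member's local KV store data, so check the documentation for your version before running anything):

  splunk show shcluster-status
  splunk show kvstore-status
  splunk stop
  splunk clean kvstore --local
  splunk start
  splunk resync kvstore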
I need to display a list of all failed status codes in a column, by consumer. Final result:

  Consumers  Errors  Total_Requests  Error_Percentage  list_of_Status
  Test       10      100             10                500 400 404

Is there a way we can display the failed status codes as well, in a list_of_Status column?

index=test | stats count(eval(status>399)) as Errors, count as Total_Requests by consumers | eval Error_Percentage=((Errors/Total_Requests)*100)
Thank you for your response. Yes, I know it is not an HEC endpoint; that detail was included to illustrate that it is not a cURL syntax error. It is a paid account, and the instance is hosted by Splunk. I am mostly getting:
curl: (28) Failed to connect to <org>.splunkcloud.com port 8088 after 21053ms: could not connect to server
Just to clarify the purpose of this: I am writing a script to ingest data from another of our services over HTTP. Thank you for your help.
It would help if you told us a little about the issues you are having rather than just saying you have issues. We also need to know which platform you use (AWS or GCP) and whether it is a trial or paid account. Those answers are used at https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector to determine the correct endpoint. It could indeed be a port error. The URL from which you got a response is a REST API endpoint, not an HEC endpoint.
Thank you @richgalloway  I appreciate the information.  It looks like I was trying to do something that isn't possible.  I'll review the documentation you sent and look at trying this as a dashboard. Thanks again!
Workflow actions are an interactive feature used in search results to perform something on an event. See https://dev.splunk.com/enterprise/docs/devtools/customworkflowactions and https://docs.splunk.com/Documentation/Splunk/9.3.2/Knowledge/CreateworkflowactionsinSplunkWeb#Control_workflow_action_appearance_in_field_and_event_menus for more information. That said, workflow actions are not applicable to reports. If you put the report in a dashboard, you can then add a drilldown that uses the same search as your workflow action.
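As a rough Simple XML sketch of that drilldown idea (hypothetical field name ip and example URL; adapt both to whatever your workflow action currently calls):

  <table>
    <search>
      <query>index=main | table _time ip action</query>
    </search>
    <drilldown>
      <link target="_blank">https://example.com/lookup?q=$row.ip$</link>
    </drilldown>
  </table>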
Dear all, where could I find the Availability Trend metric for a job, to use it in a dashboard? I want to replicate it exactly as it appears in the image, but in a dashboard. ^Post edited by @Ryan.Paredez to translate the post.
Hello, I am having issues configuring the HTTP Event Collector on my organization's Splunk Cloud instance. I have set up a token and have been trying to test using the example curl commands. However, I am having trouble discerning which endpoint is the correct one. I have tested several endpoint formats:
- https://<org>.splunkcloud.com:8088/services/collector
- https://<org>.splunkcloud.com:8088/services/collector/event
- https://http-inputs-<org>.splunkcloud.com:8088/services/collector...
- several others that I have forgotten.
For context, I do receive a response when I GET from https://<org>.splunkcloud.com/services/server/info. From what I understand, you cannot change the port from 8088 on a cloud instance, so I do not think it is a port error. Can anyone point me to any resources that would help me determine the correct endpoint? (Not this: Set up and use HTTP Event Collector in Splunk Web - Splunk Documentation. I've browsed for hours trying to find a more comprehensive resource.) Thank you!
Hi @roopeshetty , Can you elaborate on what you already tried when you mentioned "We tried many options using proxy settings but none of them are working."? Also, it is not clear whether you are running a standalone environment or a clustered one, and whether the proxy configs you tried were in conf files or added via REST. Check this documentation for some good examples of how to configure proxy and non-proxy addresses, and make sure that you define http_proxy/https_proxy correctly (use the config from your browser for reference, if it is using a direct proxy address rather than an auto-discovery script): Configure splunkd to use your HTTP Proxy Server - Splunk Documentation Notice that you must pass the authentication in the URL if your proxy requires it (like http://user:pass@myproxy.com:80).
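For reference, the server.conf stanza that documentation page describes looks roughly like this (placeholder values; adjust host, port, and credentials to your environment):

  [proxyConfig]
  http_proxy = http://user:pass@myproxy.com:80
  https_proxy = http://user:pass@myproxy.com:80
  no_proxy = localhost, 127.0.0.1, ::1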