All Posts

Thank you @livehybrid, I will try out your suggestions and respond back to you.
Hi, there are different certificate types which contain different options. Basically it depends on which kind of web server certificate you have and whether you can use it also for the server's management cert. If it's a pure client certificate (a web certificate can be that), it won't work, as the server needs a server certificate. You can read more e.g. from https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Securing_the_Splunk_platform_with_TLS

Leaf (client/server) certificates: Leaf means that the certificate is unable to sign any additional certificates. They are often referred to as client or server certificates because that's generally what they represent, but these are not technical TLS terms. Splunk platform systems use server certificates, meaning the certificate should represent the system(s) in the Subject Alternative Name (SAN) line and Common Name (CN) value. The Splunk platform allows wildcard CNs/SANs to be used. You can also put multiple hosts in the SAN, but this can become difficult to manage or update compared to a wildcard. Universal forwarders (or web browsers, if desired) use client certificates. These are called client certificates because they don't need to represent (in the CN/SAN) the system they're installed on. They only need to be signed by an issuer that the Splunk platform server trusts. You might hear these referred to as forwarder certificates in the Splunk ecosystem.

You'll also often hear the term "(full) certificate chain" when reading about TLS. A certificate chain is a leaf certificate that has the proper issuer certificates under it in a single file. In Splunk the chain is built automatically from the clientCert/serverCert and sslRootCAPath values, so you should not create a "full chain certificate" yourself. You should place the server/client certificate and private key in one file, and all of your issuer certificates in another file.

r. Ismo
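A minimal sketch of assembling the two files described above; the file names and paths are hypothetical, so adjust them to your own certificates:

# server certificate plus its private key in one file (used for serverCert)
cat myServerCert.pem myServerPrivateKey.key > /opt/splunk/etc/auth/mycerts/myServerCertWithKey.pem

# all issuer certificates (intermediate and root CA) in another file (used for sslRootCAPath)
cat myIntermediateCA.pem myRootCA.pem > /opt/splunk/etc/auth/mycerts/myCAChain.pem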
Hi @ptrsnk

First of all, I don't think "privKeyPath" is a valid key in inputs.conf. In fact you should just be using serverCert and giving the path to your full certificate chain (in PEM format), including the key and CA:

yourCert.pem:
<YourSSLCert>
<YourPrivateKey>
<YourCertCA>

You will also need to specify sslPassword if you are using an encrypted private key for your cert. For more information check out the inputs.conf spec page at https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Inputsconf

There is also another useful answer at https://community.splunk.com/t5/Security/TCP-Data-Input-and-SSL/m-p/483077 with more context and suggestions.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
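As a minimal sketch, assuming a hypothetical path for the combined PEM and the port from the original question, the corrected stanzas could look like this:

[tcp-ssl:6514]
sourcetype = syslog
index = syslog
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/yourCert.pem
sslPassword = <password of the encrypted private key, if used>
requireClientCert = false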
We have an existing Splunk 9.1.3 Enterprise environment and run Splunk Web at port 8000 using an outside-CA-signed certificate for https. A partner wants to stream syslog data to our Splunk using a secure connection. I added the following to inputs.conf located in system/local:

[tcp-ssl:6514]
sourcetype = syslog
index = syslog
disabled = 0

[SSL]
privKeyPath = /opt/splunk/etc/auth/splunkweb/2024/splprkey.key
serverCert = /opt/splunk/etc/auth/splunkweb/2024/prcert.pem
requireClientCert = false

After a restart, I used openssl to test the connection. Port 8000 worked normally as expected; the certificate was returned and I could see the TLS negotiation in Wireshark. The openssl connection to port 6514 did not work. A connection was made and openssl did send a "Client Hello" which was visible in Wireshark, but other than an ACK the Splunk server never sent anything further. Based on an article I read, I also copied the certificate path to the server.conf file, but that didn't change anything. What am I missing? Is it incorrect to assume the same cert could be used for different ports? Any assistance appreciated! Thanks,
I found deploying a Splunk Heavy Forwarder and defining trust and permissions through an Instance Role to be effective for this.
+1 on that. Generally you should avoid putting anything in system/local. Those settings have the highest priority and cannot be overridden by apps (with an exception for indexer cluster apps pushed from the manager into peer-apps), so you cannot manage those settings easily without physically touching the server.
Hello, Does this help? https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-check-my-license-usage-at-the-account-level/ta-p/34492
Hi Team,

I am using the Splunk OTel collector to gather logs from GKE into Splunk Cloud Platform and I see the below errors:

otel-collector 2025-02-25T23:29:46.515Z error reader/reader.go:214 failed to process token {"kind": "receiver", "name": "filelog", "data_type": "logs", "component": "fileconsumer", "path": "/var/log/pods/lxysdsdb/istio-proxy/0.log", "error": "failed to send entry after error: remove: field does not exist: attributes.time"}

How can I resolve this? I am using the below Helm template values, can someone point out what can be changed? I am using cri and otel (not fluentd) to collect the logs.

# This is an example of using insecure configurations
clusterName: "${cluster_name}"
splunkPlatform:
  endpoint: ${endpoint}
  token: ${global_token}
  index: ${index_name}
  metricsIndex: "${index_name}_metrics"
  insecureSkipVerify: true
  logsEnabled: true
  metricsEnabled: false
  tracesEnabled: false
logsEngine: otel
cloudProvider: "gcp"
distribution: "gke"
agent:
  enabled: true
  ports:
    otlp:
      containerPort: 4317
      hostPort: 4317
      protocol: TCP
      enabled_for: [traces, metrics, logs, profiling]
    otlp-http:
      containerPort: 4318
      protocol: TCP
      enabled_for: [metrics, traces, logs, profiling]
  resources:
    limits:
      cpu: ${logging_cpu_requests}
      memory: ${logging_memory_requests}
  podLabels:
    %{ for label, value in labels ~}
    ${label}: "${value}"
    %{ endfor ~}
clusterReceiver:
  enabled: false
logsCollection:
  # Container logs collection
  containers:
    enabled: true
    # Container runtime. One of `docker`, `cri-o`, or `containerd`
    # Automatically discovered if not set.
    containerRuntime: "${log_format_type}"
    excludePaths:
      %{ for path in exclude_path ~}
      - ${path}
      %{ endfor ~}
    # Boolean for ingesting the agent's own log
    excludeAgentLogs: true
As has been said, you basically could do it, but in reality I don't encourage you to. As @livehybrid said, your main issue will be keeping your clone in sync with all the changes happening on the SHC members and also on the indexers. In reality this means you must have an almost online sync to keep your clone fresh enough to use successfully when the time comes. It's much better to add additional nodes to your DR site if you really need it up and running when the primary goes down and one of your DR site's nodes is also down. When you use Splunk's multisite cluster option you should have enough indexers on both your primary and secondary sites. And if you are running this e.g. on AWS or Azure, just utilize the AZs to build your multisite environment. If you need to do this across regions / continents, then I think the only reasonable way is to use this multisite option. But then you cannot use SmartStore, as it requires that all buckets are in the same region. On the SHC side you need to ensure that you have enough members for captain election to work after a breakup. But to give you a better answer we must know what threat you are planning to prepare for. r. Ismo
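For reference, a rough sketch of the server.conf settings involved in a two-site multisite indexer cluster; the site names, host and factors are hypothetical, so check the documentation for your version before using:

# on the cluster manager
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
pass4SymmKey = <your cluster secret>

# on each indexer (peer), with its own site value
[general]
site = site2

[clustering]
mode = peer
manager_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <your cluster secret>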
Hi

Based on your steps you have probably done something other than you planned? Your steps say that you removed the captain from your SHC and then tried to add another member to the SHC using that previous captain, which is no longer a member of the SHC. Obviously it cannot work that way. After your first step the SHC elected a new captain. You can see it from the MC or any other member of the SHC. Use the GUI or CLI to check it, e.g.

splunk show shcluster-status --verbose

This tells you which member is the new captain of your SHC. If you are trying to move the captain to another member, then you should use

splunk transfer shcluster-captain -mgmt_uri <URI>:<management_port>

Run this on the current captain and give the new captain as the mgmt_uri parameter.

r. Ismo
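For example, with a hypothetical member host name, check the status on any member and then run the transfer on the current captain:

splunk show shcluster-status --verbose
splunk transfer shcluster-captain -mgmt_uri https://sh2.example.com:8089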
Basically this is true, but how can you create (in an officially supported way) those apps locally on SHC nodes? You cannot do it on the CLI, and in the GUI there isn't an option for creating an app on any SHC member! When you are creating a new SHC, you must use empty nodes which don't contain old apps from a standalone SH. If you need to migrate an old SH into an SHC, then you must replicate those apps and other data via the deployer. This means you must create an empty app (just the base structure like app_xx/bin, app_xx/default, app_xx/metadata, app_xx/lookups) on the deployer and then push it to your SHC members. After you have done it this way, you can add other KOs like dashboards, reports, alerts etc. via the GUI on any SHC member and those are replicated correctly between nodes. What happens to the files in an SHC member's app_xx/local folder (and default) depends on your push settings, as @kiran_panchavat already told you. One note on your SHC size: it should always have an odd number of members. The RAFT protocol, which keeps your SHC up and running with a correct captain, needs this. Otherwise there is a big possibility that it cannot elect a new captain after something has happened to the old one!
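A minimal sketch of that workflow on the deployer, assuming a hypothetical app name, member host and credentials:

# create the empty app skeleton in the deployer's configuration bundle
mkdir -p $SPLUNK_HOME/etc/shcluster/apps/app_xx/{bin,default,metadata,lookups}

# push the bundle to the SHC; the target is the management URI of any member
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme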
Hi @pedropiin

Does this work?

| eval time_var=strptime(json_extract(_raw,"payload.eventProcessedAt"), "%Y-%m-%dT%H:%M:%S")

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Please mark an answer as the Solution, so other people will see it later and get help!
You are probably getting that data in with Splunk's Windows and *nix add-ons? If not, then I strongly recommend you use those! With them you will get events that are CIM compliant. That way it's much easier to look at other apps on Splunkbase which use CIM and create those queries for your dashboard(s). Here are some apps from Splunkbase:

https://splunkbase.splunk.com/app/4240
https://splunkbase.splunk.com/app/1621
https://splunkbase.splunk.com/app/742
https://splunkbase.splunk.com/app/3757
https://splunkbase.splunk.com/app/4882
https://splunkbase.splunk.com/app/3786
https://splunkbase.splunk.com/app/6722

These are just in the order I get them from my SCP instance, not in any preferred order. r. Ismo
Thank you so much! This was it. 
Hi Ismo, Thank you! I came to that conclusion yesterday and contacted the admins. Regards, Evgeny
When you are doing those labs, please remember that you are doing the work for a "customer"! This means the client has already given you some instructions/plan/architecture for how they see the result. When something is confusing or not described clearly enough, you should ask your client, just like in real life. It's always best to discuss with the client and ensure that both parties have exactly the same understanding of what you are doing! The worst thing you could do in those labs is make guesses. Always ask if you are not sure what the client wants. You should know how to do it after that.
I am just wondering, if you have those events in a file on some Linux node, why don't you read them with a UF directly? It's a much better solution than adding an additional rsyslog there. Of course you can do it if it's mandatory for some reason, but personally I try to avoid this kind of additional component.

Another thing: you should never use _json as the sourcetype for any real data source. It's like syslog; it basically only tells you that the file contains some JSON data, but nothing else. In Splunk the sourcetype should always tell what the format of the event is. As you know, there are several different JSON schemas, and you should have separate sourcetype names for them. You can just copy the definition of the _json sourcetype for your own sourcetype. You should also have naming standards defined for all your Splunk KOs to keep your environment in a manageable state. There are several different naming standards which you can look at; use whichever is best for your company, or create your own by combining parts of others.
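A minimal sketch of copying the _json definition to your own sourcetype, with a hypothetical path and sourcetype name; verify the _json defaults against your version's default props.conf:

# inputs.conf on the UF
[monitor:///var/log/myapp/events.json]
sourcetype = myapp:json
index = main

# props.conf, also on the UF because INDEXED_EXTRACTIONS is applied at the forwarder
[myapp:json]
INDEXED_EXTRACTIONS = json
KV_MODE = none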
You have mixed up table and fields. You should never use table before stats etc., as table always moves the processing to the SH side. So you should do

index=main sourcetype="access_combined_wcookie" action=purchase status=200 file="success.do"
| fields JSESSIONID, action, status
| stats count by JSESSIONID, action, status
| rename JSESSIONID as UserSessions

This way several indexers can do the preliminary phase of stats and then send smaller result sets to the SH, which finally combines them into the true stats result.
Hi

Obviously you have the normal user role or some other non-power/admin user role. Normally only an admin or power user can create and run alerts. As @marnall said, you must have the correct capability to create alerts. You should ask your Splunk admins, who may or may not give it to you; that depends on your company's policy on who can create and run those.

r. Ismo
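For reference, a rough sketch of what the admins might add in authorize.conf on the search head, with a hypothetical role name; check the capability names against the authorize.conf spec for your version:

[role_alert_creator]
importRoles = user
schedule_search = enabled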