All Posts

Please advise whether a specific license, such as an indexing license, is needed to support indexing on a heavy forwarder?
Thank you for responding! Yes, it's coming from a syslog server with a UF installed, going to Cloud. Unfortunately I don't have any HFs available for use, and setting up another one at this time is not an option for me.
Hi! Thank you so much for your response and explanation. It seems like maybe I have not properly deployed these to the indexing tier. Forgive me for the beginner question, but I think the sourcetype I created already belongs to the 000-self-service app - is this what you meant by deploying the config using self service? Screenshot below (I didn't capture the full sourcetype name):     
The key question here is: since you're saying it's syslog, and you're definitely not sending syslog straight to Cloud, what does your ingestion process look like? Do you have any HFs on-prem?
This is likely what you were/are looking for.   https://cloud.google.com/chronicle/docs/install/install-forwarder
It looks like you have the right steps. I would download the splunkclouduf app to my workstation and then install it on the Deployment Server (DS) using the GUI (Install app from file). After that, copy the /opt/splunk/etc/apps/100_splunkcloud directory to /opt/splunk/etc/deployment-apps. DO NOT RENAME the 100_splunkcloud app.
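For anyone following along, a minimal sketch of that copy step, assuming a default /opt/splunk installation on the DS (paths and the reload step are illustrative, not the poster's exact procedure):
# copy the credentials app so the DS can push it to clients; keep the directory name unchanged
cp -R /opt/splunk/etc/apps/100_splunkcloud /opt/splunk/etc/deployment-apps/
# ask the DS to re-read deployment-apps and its server classes
/opt/splunk/bin/splunk reload deploy-server
The app still has to be mapped to a server class (via Forwarder Management or serverclass.conf) before clients actually receive it.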
Replied to my own post. Derp. Hi! Thank you so much for your response and explanation. It seems like maybe I have not properly deployed these to the indexing tier. Forgive me for the beginner question, but I think the sourcetype I created already belongs to the 000-self-service app - is this what you meant by deploying the config using self service? Screenshot below (I didn't capture the full sourcetype name):  
The key insight is that KV_MODE=json is applied at search time on the Search Head, while SEDCMDs are part of the parsing pipeline (the typing pipeline's regex-replacement processor) and must be applied at index time. In Splunk Cloud, that means we need to make sure your sourcetype configuration with these SEDCMDs is properly deployed to the indexing tier, not just the search head (SEDCMDs configured only on the SH won't transform the data), since that's where the actual parsing/transformation needs to happen. Try deploying your SEDCMD config using a self-service app and see if that makes a difference. Also, if you don't want to write props and transforms, check out: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/DataIngest#Create_a_ruleset_with_the_Ingest_Actions_page and https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/IngestProcessor/AboutIngestProcessorSolution If my reply helps, please upvote.
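For illustration, a minimal props.conf sketch of that split, using a hypothetical sourcetype name and sed expression (not the poster's actual config):
# props.conf - names and patterns below are placeholders
[my_json_syslog]
# index time: applied in the parsing/typing pipeline, so it must reach the indexing tier
SEDCMD-strip_leading_junk = s/^[^{]+//
# search time: evaluated on the Search Head when events are retrieved
KV_MODE = json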
After the Splunk forwarder version was upgraded from 9.0.5.0 to 9.3.1.0, Windows servers are having issues forwarding data to Splunk. Splunkd keeps stopping on different servers; after restarting splunkd it starts forwarding the data again, but the issue comes back after 2-3 days. What actions should be taken to keep the logs flowing to Splunk?
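As a starting point, a search along these lines (the host value is a placeholder) can show what splunkd logged on the affected forwarders before it stopped:
index=_internal sourcetype=splunkd host=<affected_forwarder> log_level=ERROR
| stats count by component
| sort - count
Crash logs under $SPLUNK_HOME\var\log\splunk on the Windows hosts are also worth checking.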
Yes, the LM can be on a CM. See https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/Systemrequirements#Additional_roles_for_the_manager_node
Which instance is reporting that error? Have you checked the firewalls to confirm access to port 8089 is permitted?
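If it helps, a sketch of the server.conf stanza each license peer would point at the combined CM/LM (the host is a placeholder; recent releases use manager_uri, older ones master_uri):
# server.conf on each instance that should report to the license manager
[license]
manager_uri = https://<cm-host>:8089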
Hi All, Our current setup involves Splunk Search Heads hosted in Splunk Cloud and managed by Support. The existing Deployment Master server is hosted on Azure, where it has been operating smoothly, supporting around 900+ clients that send logs to Splunk through it. Now, we're planning to migrate the Deployment Master from Azure to an on-premises Nutanix environment. We've built a new server on-premises with the necessary hardware specifications and are preparing to install the latest Splunk Enterprise package (version 9.3.1) downloaded from the Splunk website. We'll place this package in the `/tmp` directory on the new server, extract it in the `/opt` directory, accept the license agreement, and start Splunk services. Once up, we'll access the GUI to import the Enterprise licenses. Next, I'll download the Splunk Universal Forwarder Credential package (Splunkclouduf app) from the Splunk Cloud Search Head. Could you confirm whether this downloaded app should be placed in the `/opt/splunk/etc/apps`, `/opt/splunk/etc/deployment-apps`, or `/tmp` directory on the new server? From there, we can proceed with the installation. Once installed, the Splunkclouduf app will create a `100_splunkcloud` folder in the `/opt/splunk/etc/apps` directory. Should I then copy the `100_splunkcloud` folder to the `/opt/splunk/etc/deployment-apps` directory? Also, can we rename the folder from "100_splunkcloud" to some custom name? Additionally, the next step will involve transferring all deployment apps from the `deployment-apps` directory on the old server (`/opt/splunk/etc/deployment-apps`) to the new server in the same location; please confirm if this is correct. Finally: - Update the `deploymentclient` app on both the old and new Deployment Master servers with the new server name. - Reload the server class on the old Deployment Master server. - Verify that all clients are reporting to the new Deployment Master server. Please let me know whether these steps are correct or if I have missed anything, so that the new DM server runs fine post-migration.
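On the client cutover specifically, the piece that points the 900+ clients at the new host is deploymentclient.conf inside the app you push to them; a sketch with a placeholder hostname:
# deploymentclient.conf distributed to the clients (hostname is illustrative)
[target-broker:deploymentServer]
targetUri = new-ds.onprem.example.com:8089
Pushing this updated app from the old server first lets the clients learn the new address before the old server is retired.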
Can someone suggest if we can configure the Cluster Master to work as the License Master also? I tried to configure it, but it's throwing an error: reason='Unable to connect to license manager=https://xx.xx.xx.xx:8089 Read Timeout'
We have a plan to migrate an old physical server to a new physical server; the server is a Search Head component in our Splunk environment. For the new physical server we will be receiving a new IP address. My query is how to configure the new IP in the existing Splunk server environment. Our Splunk environment has: 1 Cluster Master, 4 indexers, 1 deployment server, 1 Search Head, 1 monitoring console, 1 License Master. DR servers: 1 Search Head, 1 indexer.
I have a custom command that I call to populate a lookup, but when I run it, it only executes the script 5-20 times (it changes every time) despite getting 20,000+ results. I want to run a query that sends the information into a custom script, which then populates a lookup, almost as if it's recursive. I'm thinking this is a performance issue with the script (it is a Python script, so it's not the fastest). This is an example of what the command looks like:
index="*" host="example.org" | map search="| customcommand \"$src$\""
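One thing worth checking: the map command caps the number of subsearches it launches with its maxsearches argument, which defaults to 10, and that would explain seeing only a handful of runs against 20,000+ results. A sketch raising the cap (the value is illustrative, and launching tens of thousands of subsearches is expensive):
index="*" host="example.org"
| map maxsearches=25000 search="| customcommand \"$src$\""
If the goal is simply to feed every result row into the script, a streaming custom command applied directly to the results (without map) may scale better than one subsearch per row.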
I have syslogs coming into Splunk that need some cleaning up - it's essentially JSON with a few extra characters here and there (but enough to be improperly formatted). I'd really like to be able to use KV_MODE = json to auto extract fields, but those additional characters prevent this from happening. So I wrote a few SEDCMDs to remove those additional characters and applied the following stanzas to a new sourcetype: However, in our distributed Splunk Cloud environment, these SEDCMDs are not working. There are no errors in the _internal index pertaining to this sourcetype, and I can tell the sourcetype is applying because any key/value pairs in the data that pop up before the extra characters are automatically extracted at search-time as expected (so at least I know the KV_MODE stanza is trying to work). Because the SEDCMDs are not removing the extra characters, the other fields are not being auto-extracted. In my all-in-one test environment, the SEDCMDs work perfectly alongside KV_MODE to clean up the data and pull out the fields. I can't quite determine why it isn't working in Cloud - the syslog servers forwarding this data have Universal Forwarders so I understand why the sourcetype isn't applying at that level... but this sourcetype should be hitting the indexers and applied there, no? What am I missing?   
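One way to sanity-check the sed expressions against the data already in Cloud is to apply them at search time with rex mode=sed and then parse with spath; a sketch with placeholder index, sourcetype, and pattern:
index=<your_index> sourcetype=<your_sourcetype>
| rex mode=sed "s/<extra_chars_pattern>//g"
| spath
If the fields extract cleanly there, the expressions themselves are fine and the remaining question is where the SEDCMD config gets applied at index time, which is the deployment point discussed in the earlier reply.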
Hi @santoshpatil01 , Your request is lacking useful information for anyone to help. It is not clear what query format will serve as your base search, so you may have to adjust the following to your reality. Basically you'll need a base query that returns all raw data for the tokens wherever they are, and then you create the panels accordingly. If you don't have input fields to set the tokens, you'll need to set them as well, either on each panel or in the dashboard header, depending on where the filter needs to apply. In each panel, reference the base search (making it a linked search), and use a query something like this:
Total request number for security token/priority token filtered by partner name:
| search partner=$token.partner$ | stats count as "Total Requests" by security_token, priority_token
Duplicate request number filtered by partner name and customer ID (to check if the current expiration time for both tokens is appropriate):
| search partner=$token.partner$ AND customerId=$token.customerId$ | stats count by partner, customerId | where count>1
Priority token usage filtered by partner name:
| search partner=$token.partner$ | stats count by token_name
Response time analysis for security token/priority token:
| stats avg(response_time) as response_time by security_token, priority_token
Or if you need the 90th percentile instead:
| stats p90(response_time) as response_time by security_token, priority_token
Again, this is just scratching the surface as I don't know your query, field names and additional information, but it should be enough for you to kick this off and play around.
Luckily, at the beginning of the search Splunk is actually quite smart at optimizing out some common issues. For example, if I run this
(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl") | search "* Did not observe any item or terminal signal within*"
on my home Splunk instance (let's ignore the fact that I won't have any matching events, obviously, but that's not the point) and look at the job detail dashboard, I can see this
| search ("* Did not observe any item or terminal signal within*" (index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl"))
as the optimized search. And if we go to the job log we can see this
[ AND any did item not ns observe or signal terminal within* [ OR index::index_1 index::index_2 ] [ OR kube ose ] [ OR paymenterrorhandler paymentstatusclientimpl ] ]
as the base lispy search. As we can see, Splunk was not only able to "flatten" both searches into a single one but also noticed that the initial wildcard was right before a major breaker and as such wouldn't affect the sought terms. But as a general rule of thumb - yes, it's good practice to keep your searches "tidy" and avoid wildcards at the beginning of search terms.
Hello, I tried the command, but same results, always 68 million events. I'll try to contact support, thanks for your help!
Hi @super_edition , ok, in other words, you need to do a join with another search, is that correct? If you don't have too many events, you could use the join command. If instead you're sure you have the message.tracers.ek-correlation-id{} field in all events, you could use this field as the correlation key:
(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler") "Did not observe any item or terminal signal within" OR logger="PaymentStatusClientImpl"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster values(host) as hostname count(host) as count values(paymentStatusResponse.orderCode) AS order_code BY message.tracers.ek-correlation-id{}
Ciao. Giuseppe
First and foremost - don't do two things at once - either upgrade and then migrate, or migrate and then upgrade. Also - what things do you not understand? It's impossible to give step-by-step instructions for something like this without at least some knowledge and understanding on your side of what you're doing. Should anything go wrong, how will you be able to troubleshoot and fix your installation?