All Topics


Hello, I currently have an intake exceeding 100GB per day and I would like to know the best-practice recommendations to support it without affecting performance. How many servers or indexers are needed, and what are their minimum and recommended specifications?
Hi Team, we are exploring Splunk Cloud and need the following clarifications:
1) Does the AWS Splunk Cloud instance support the Common Information Model (CIM)?
2) Is Splunk Enterprise Security included in the AWS Splunk Cloud license?
3) Can we make search API calls from another application to retrieve AWS Splunk Cloud indexed data (CIM-compliant)?
4) Can you provide a demo of AWS Splunk Cloud (SaaS)?
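On point 3, a sketch of a REST search call, offered as illustrative only (the stack hostname, credentials, and the search itself are placeholders; on Splunk Cloud, access to the REST management port generally has to be requested/allowlisted first):

    curl -k -u admin:changeme \
         https://yourstack.splunkcloud.com:8089/services/search/jobs/export \
         -d search="search index=main | head 5" \
         -d output_mode=json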
I have set up a new on-prem SH cluster and deployment server with Splunk Enterprise version 8.2.5. I have configured the 3 new SHs as license slaves pointed at the License Master, but the slaves are not syncing with the License Master. Note: we have three license pools on the License Master, and I have updated the pool stanzas in server.conf as well, but no luck. Please suggest.

I performed the following configuration steps in server.conf on the 3 SHs and the deployment host separately:
1. Select a new passcode to fill in for pass4SymmKey.
2. SSH to the Splunk instance.
3. Edit the /opt/splunk/etc/system/local/server.conf file.
4. Under the [general] stanza, replace the hashed pass4SymmKey value with the new passcode in plain text. It stays in plain text until Splunk services are restarted.
5. Save the changes to server.conf.
6. Restart Splunk services on that node.

Here is the server.conf on an SH acting as license slave:

[general]
serverName = SHHost123
pass4SymmKey = <same as License Master>

[license]
master_uri = https://x.x.x.x:8089
active_group = Enterprise

[sslConfig]
sslPassword = 12344…

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[lmpool:auto_generated_pool_enterprise]
description = auto_generated_pool_enterprise1
quota = MAX
slaves = *
stack_id = enterprise

[replication_port://9023]

[shclustering]
conf_deploy_fetch_url = http://x.x.x.x:8089
disabled = 0
mgmt_uri = https://x.x.x.x:8089
pass4SymmKey = 23467….
shcluster_label = shclusterHost_1
id = D6E63C0A-234S-4F45-A995-FDDE1H71B622
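One thing worth cross-checking (a sketch, not from the original post): the same license-slave relationship can be set from the CLI on each SH, which also validates connectivity to the master; the URI below is the placeholder from the post:

    $SPLUNK_HOME/bin/splunk edit licenser-localslave -master_uri https://x.x.x.x:8089
    $SPLUNK_HOME/bin/splunk restart

Also worth confirming that pass4SymmKey under [general] matches the License Master exactly; a mismatch typically surfaces only as warnings in splunkd.log.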
Hello, I am trying to install an SSL certificate for Splunk to permit HTTPS access to the console. As part of the procedure, I have generated the CSR, key, and signed PEM certificates. I uploaded the files to the Splunk host and created (and edited) server.conf with the following information:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCertificate.pem

I also disabled the [sslConfig] stanza in server.conf. When I try to restart Splunk, the service fails with the following errors. Please advise on how to fix the issue.

WARNING: Cannot decrypt private key in "/opt/splunk/splunk/etc/auth/mycerts/mySplunkWebPrivateKey1.key" with>
Feb 03 17:09:16 frczprmccinfsp1 splunk[1559898]: WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifySer>

Thanks in advance. Siddarth
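A few things worth checking, offered as assumptions rather than a confirmed diagnosis: the "Cannot decrypt private key" warning usually means the key is passphrase-protected (Splunk Web needs an unencrypted key, or the passphrase supplied via sslPassword), and the [settings] stanza shown above belongs in web.conf, not server.conf. Note also that the warning references mySplunkWebPrivateKey1.key under /opt/splunk/splunk/..., which does not match the configured path, so a stale or mistyped path may be involved. A minimal sketch, reusing the file names from the post:

    # strip the passphrase so Splunk Web can read the key (prompts for the current passphrase)
    openssl rsa -in mySplunkWebPrivateKey.key -out mySplunkWebPrivateKey-nopass.key

    # /opt/splunk/etc/system/local/web.conf
    [settings]
    enableSplunkWebSSL = true
    privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey-nopass.key
    serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCertificate.pem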
Hello guys, can anyone please help me create a DoS/DDoS alert without using any application in Splunk? For example: alert if source IPs send thousands of TCP packets within 15-20 minutes or so. I can't seem to find any docs related to this. TIA
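A minimal sketch of a threshold-style alert, assuming firewall or network events carrying a src_ip field; the index, sourcetype, and threshold are placeholders to tune:

    index=firewall sourcetype=your_fw_sourcetype earliest=-20m
    | stats count AS packet_count BY src_ip
    | where packet_count > 10000

Saved as an alert running every 15-20 minutes, this triggers whenever any single source exceeds the threshold in the window.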
I am using Splunk Cloud and I see that the license is being exceeded daily. In the Cloud Monitoring Console app there is no option that lets me break usage down by sourcetype, which would help me know exactly which source has increased usage.
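A common sketch for exactly this breakdown, assuming your role can search the _internal index (b, st, and type=Usage are standard fields in license_usage.log):

    index=_internal source=*license_usage.log type=Usage earliest=-30d
    | eval GB=b/1024/1024/1024
    | timechart span=1d sum(GB) BY st

This charts daily ingested GB per sourcetype; swapping st for s or h breaks usage down by source or host instead.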
I'm looking to create a search for users that have reset their password and then logged off within a certain amount of time. Does anybody know the best way of producing a search for this? Much appreciated for any help with this.
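A sketch under the assumption of Windows Security logs, where EventCode 4724 is a password reset attempt and 4634 a logoff; the index, the 15-minute window, and the exact user field name (it varies by add-on) are placeholders:

    index=wineventlog EventCode IN (4724, 4634)
    | transaction user startswith="EventCode=4724" endswith="EventCode=4634" maxspan=15m
    | table _time user duration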
Hello, we are still facing the following issue when we put our Indexer Cluster in maintenance mode and stop one Indexer: all the Indexers stop ingesting data, their queues grow, waiting for splunk-optimize to finish its job. This usually happens when we stop the Indexer after a long time since the last stop. Below is an example of the error message that appears on all the Indexers at once, on different bucket directories:

throttled: The index processor has paused data flow. Too many tsidx files in idx=myindex bucket="/xxxxxxx/xxxx/xxxxxxxxxx/splunk/db/myindex/db/hot_v1_648" , waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.

Checking further, going into the bucket directory, I was able to see hundreds of .tsidx files. What splunk-optimize does is merge those .tsidx files.

We are running Splunk Enterprise 9.0.2 and:
- on each Indexer the disk reaches 150K IOPS
- we have already applied this setting, which improved the situation but hasn't solved it:

indexes.conf:
[default]
maxRunningProcessGroups = 12
processTrackerServiceInterval = 0

Note: we kept maxConcurrentOptimizes=6 as default, because we have to keep maxConcurrentOptimizes <= maxRunningProcessGroups (this has also been confirmed by Splunk support, who informed me that maxConcurrentOptimizes is no longer used (or used with less effect) since 7.x and is there mainly for compatibility).
- I know that since 9.0.x it is possible to manually run splunk-optimize over the affected buckets, but this seems to me more a workaround than a solution. Considering a deployment can have multiple Indexers, it is not straightforward.

What do you suggest to solve this issue?

Thanks a lot, Edoardo
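For reference, the manual catch-up mentioned above can be pointed at a specific hot bucket; a sketch reusing the bucket path from the error message:

    $SPLUNK_HOME/bin/splunk-optimize -d /xxxxxxx/xxxx/xxxxxxxxxx/splunk/db/myindex/db/hot_v1_648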
Hi, I want to create a search out of the below event, to raise an alert if a particular system has the label lostinterface or the label is missing. In profiles we have two values, tndsubnet1 and tndsubnet2; how can we make the search separate out the systems into tndsubnet1 and tndsubnet2 accordingly? Thanks.
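Hard to be precise without the event itself, but a sketch under the assumption that each event carries label, profile, and system fields:

    index=your_index
    | where isnull(label) OR label="lostinterface"
    | stats values(system) AS affected_systems BY profile

Splitting by profile keeps the tndsubnet1 and tndsubnet2 systems in separate result rows.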
I have a query where I'm looking for users who are performing large file transfers (>50MB). This query runs every day, and as a result we have hosts that are legitimate. These host names are extracted from the dst_host field of the results of my search. As we compile a list of valid hosts, we can simply add it to the query to be excluded from the search, like:

index=* sourcetype=websense* AND (http_method="POST" OR http_method="PUT" OR http_method="CONNECT") AND bytes_out>50000000 NOT (dst_host IN (google.com, webex.com, *.zoom.us) OR dst_ip=1.2.3.4)

I know there's a better way: keep the excluded hosts or IPs in a file that I can query against, but I'm not sure how to do that. I don't want to update the query every day with hosts that should be excluded, but rather maintain a living document that can be updated with hosts or IPs to exclude. Can someone point me in the right direction for this issue?
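The usual sketch for this is a lookup-file-driven exclusion; the lookup name excluded_hosts.csv and its single dst_host column are assumptions:

    index=* sourcetype=websense* (http_method="POST" OR http_method="PUT" OR http_method="CONNECT") bytes_out>50000000
        NOT [| inputlookup excluded_hosts.csv | fields dst_host]

The subsearch expands to an OR of the dst_host values in the file, so updating the CSV (via the Lookup File Editor app or | outputlookup) changes the exclusions without touching the saved search.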
Hi, I want to rename the value "Required Parameters Longitude and Latitude are missing or invalid" to a new value, "Required Parameters missing".

index="****" k8s.namespace.name="*****" "Error" OR "Exception"
| rex field=_raw "(?<error_msg>Required Parameters Longitude and Latitude are missing or invalid)"
| stats count by error_msg
| sort count desc

Any help will be great.
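A minimal sketch, simply rewriting the extracted value before the stats (rex only sets error_msg on matching events, so the if() guard leaves the rest untouched):

    index="****" k8s.namespace.name="*****" "Error" OR "Exception"
    | rex field=_raw "(?<error_msg>Required Parameters Longitude and Latitude are missing or invalid)"
    | eval error_msg=if(isnotnull(error_msg), "Required Parameters missing", error_msg)
    | stats count by error_msg
    | sort count desc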
Hello everyone, I have the following field and example value: sourcePort=514.000. I'd like to format these fields so that only the digits before the point are kept. Furthermore, this should only apply to a certain group of events (group one).

Basically:
before: sourcePort=514.000
after: sourcePort=514

What I have so far:

search...
| eval sourcePort=if(group=one, regex part, sourcePort)

The regex to match only the digits is ^\d{1,5}. However, I am unsure how to work with the regex, and whether it is even possible to achieve my goal this way. Thanks in advance.
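A sketch of one way to do it, assuming group values are strings (so "one" needs quotes inside eval); replace() strips the decimal part instead of extracting the leading digits:

    search...
    | eval sourcePort=if(group="one", replace(sourcePort, "\.\d+$", ""), sourcePort)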
Hi there, I am trying to ingest data which is stored within the profile of a user's AppData location: C:\Users\(User ID)\AppData\Local\UiPath\Logs, but I can't pull in any events. I've tried lots of different stanzas like:

[monitor://C:\Users\DKX*\AppData\
[monitor://C:\Users\DKX$\AppData\
[monitor://C:\Users\...\AppData\
[monitor://C:\Users\%userprofile%\AppData\

Any idea why it isn't working? I know I've not added all my stanza attempts, but could it be due to the Splunk service account not having access to that location?
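A sketch of a wildcard stanza that commonly works for per-user paths (a single * matches one path segment, while ... is recursive); the sourcetype is a placeholder, and the service account does need read access to each user profile, so running the forwarder as Local System or an account with admin rights is the usual prerequisite:

    [monitor://C:\Users\*\AppData\Local\UiPath\Logs]
    disabled = 0
    sourcetype = uipath:logs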
Hello everyone, I am passing the dates as tokens, but it shows an error in both of the following conditions:

Cond1: | where (Date>="$date_start$" AND Date<="$date_end$")
Cond2: | where (Date>="2022-06-01" AND Date<="2022-06-02")

Please help.
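Without the exact error text it's hard to be sure, but a sketch that sidesteps string comparison entirely, assuming the Date field holds values in YYYY-MM-DD form:

    | eval date_epoch=strptime(Date, "%Y-%m-%d")
    | where date_epoch >= strptime("$date_start$", "%Y-%m-%d") AND date_epoch <= strptime("$date_end$", "%Y-%m-%d")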
Hi, I've been told that using field extractions on JSON is not best practice and that I should use calculated fields instead. In some cases that's easy and I can use replace or other methods, but in some it is more difficult.

I have some events giving me information about software versions. When I try to extract the version string as follows, I get the result for events containing this string; in all other cases I get the complete string instead. What I need is the matching string or nothing. I couldn't figure out how to do that.

replace(message, "^My Software Version (\S+).*", "\1")
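A sketch of the usual workaround: guard the replace() with match(), so events that don't contain the pattern yield null instead of the unmodified string:

    if(match(message, "^My Software Version \S+"), replace(message, "^My Software Version (\S+).*", "\1"), null())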
I'm trying to use macros to pull external indexes into the child dataset VPN, but a search with tstats on this dataset doesn't work. Example search:

| tstats values(sourcetype) as sourcetype from datamodel=authentication.authentication where nodename=authentication.VPN by nodename

But when I explicitly enumerate the indexes, everything works! It also works with macros when I use:

| from datamodel ...

What's the problem?
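One assumption worth checking: | from datamodel executes the dataset constraint as an ordinary search (so search-time macros expand), while tstats translates the constraint for indexed/summary data, where macro expansion inside the dataset definition is a known pain point. A sketch of the workaround the post already hints at, moving the index filter into the tstats where clause with placeholder index names:

    | tstats values(sourcetype) as sourcetype from datamodel=authentication.authentication where nodename=authentication.VPN AND (index=vpn_index1 OR index=vpn_index2) by nodename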
Hi, I am trying to create a metric from ADQL searches. However, when I create the metric, it keeps fluctuating between one and zero, whereas when I run the same thing as an ADQL search I get an actual value (5 in this case, as you can see in the ADQL query screenshot).

Analytics ADQL query screenshot: [image not preserved]
Analytics metric screenshot: [image not preserved]

Am I going wrong somewhere? Thanking you in advance.
I have edited inputs.conf as below:

[Bamboo://localhost:8085]
sourcetype = bamboo
interval = 60
server = http://localhost:8085
protocol = https
port = 8085
username = bamboo_user
password = bamboo_pwd
disabled = 0

Is the above file correct? And based on this, will the HTTP Event Collector generate the token in Splunk Web?
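On the HEC question, a note offered as an assumption about intent: a modular-input stanza like [Bamboo://...] does not create an HTTP Event Collector token. HEC tokens are created separately under Settings > Data Inputs > HTTP Event Collector, or defined directly in inputs.conf; a sketch with a placeholder stanza name and token value:

    [http://bamboo_hec_token]
    disabled = 0
    token = 11111111-2222-3333-4444-555555555555
    sourcetype = bamboo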
Hello, my need is to use Splunk Enterprise to serve multiple client organizations from a single instance, i.e. multitenancy. I have some installed Splunk apps that use only one index and manage the data coming from multiple clients; how can I separate them on the dashboard? How can I create role-based permissions per customer? Does Splunk Enterprise natively support multitenancy? How can I achieve my goal? Best, Yassine.
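A sketch of per-customer scoping via authorize.conf, under the assumption that each customer's data can land in its own index (names are placeholders); a role restricted this way never sees other tenants' events, so a shared dashboard automatically shows only the viewer's data:

    [role_customer_a]
    importRoles = user
    srchIndexesAllowed = customer_a
    srchIndexesDefault = customer_a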
Hello. I have three lists of names for different technologies. I would like to put the technologies in a dropdown or multiselect so that when I select each technology it brings me the names belonging to whichever ones are selected. For example:

My list is: [image not preserved]
My multiselect input is: [image not preserved]

When selecting each option, I would like it to show me all the users, like the following table: [image not preserved]

I tried doing the following ($ms_Be1Voild$ is the token of my multiselect input):

| makeresults
| eval input = "$ms_Be1Voild$"
| eval array = mvjoin(input, ",")
| fields array

But the result is the following: Active Directory,o365,Windows

Could anyone help me, please?
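A sketch of the usual approach, with assumptions flagged: a multiselect token arrives as one delimited string, not a multivalue field (which is why mvjoin changes nothing), so the common pattern is to set the multiselect's delimiter to a comma, wrap values in quotes via valuePrefix/valueSuffix, and use the token directly in an IN() filter; the lookup and field names below are placeholders:

    | inputlookup technology_users.csv
    | search Technology IN ($ms_Be1Voild$)
    | table Technology, User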