All Topics

Hello everybody! I am researching how to integrate Splunk with my service running in Docker. In my case, Splunk Enterprise runs on a different host. One way to achieve this is to use Docker's built-in splunk logging driver. I see that one of the configuration parameters is "splunk-token": "", which is the Splunk HTTP Event Collector (HEC) token that needs to be created in Splunk Enterprise. My question is: would I be required to create a separate HEC token for each of the microservice projects? Let's say we have 10 microservice projects that need to integrate with Splunk. Does that mean I would have to create 10 different tokens in Splunk Enterprise?
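For context, a minimal sketch of how a container-level splunk logging-driver configuration typically looks in a docker-compose service definition; the URL, token, index, and source values below are placeholders, not values from the post:

  logging:
    driver: splunk
    options:
      splunk-url: "https://splunk-enterprise-host:8088"        # HEC endpoint (placeholder host/port)
      splunk-token: "00000000-0000-0000-0000-000000000000"     # placeholder HEC token
      splunk-index: "docker"                                   # assumed target index
      splunk-source: "service-a"                               # per-service value to tell microservices apart
      splunk-sourcetype: "docker:service-a"

A single HEC token can be shared by several services, with the events distinguished by splunk-source/splunk-sourcetype, or separate tokens can be issued per service for per-service revocation or index routing; which is appropriate is a design choice rather than a technical requirement.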
Today we have an on-prem cluster with physical indexer servers. Our disks fail from time to time, degrading cluster performance. Is there any query that would help catch disk errors?
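As a starting point, a hedged sketch of a search over Splunk's own internal logs for disk-related errors; the exact message strings vary by operating system and failure mode, so treat the search terms as assumptions to adjust:

  index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) ("Input/output error" OR "No space left on device" OR "Read-only file system")
  | stats count latest(_time) as last_seen by host component
  | convert ctime(last_seen)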
I am writing a query to correlate across two different indexes. One index has a userID field; I want the query to match a field in the second index and output additional fields from that second index. Index 'idx1' has a field named usr; for the sake of this example, there is a user called 'jdoe'. Index 'idx2' has a field named user, which also contains 'jdoe', along with another field called 'account ID', which has the name spelled out as 'John Doe'. I want the query to take the usr value from idx1 and use it to pull the contents of the 'account ID' field from idx2. What's the best way to accomplish this?
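One common pattern, sketched here with the index and field names from the post (treat them as assumptions to adapt), is to search both indexes at once, normalize the two user fields into one key, and let stats pull the values together:

  index=idx1 OR index=idx2
  | rename "account ID" as account_id
  | eval joinkey=coalesce(usr, user)
  | stats values(account_id) as account_id by joinkey

This avoids the subsearch limits of join; a lookup-based approach is another option if the idx2 data changes rarely.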
I've created tokens with local authentication and shared the ID and password with the users of the applications needing access. So a single user ID, mapped to a role, works for multiple users running Grafana, in this case. I'd like to do the same thing using SAML. Is it possible (and reasonable) to create a "generic token", for lack of a better phrase, and provide that to the custom dashboard users? It may still be Grafana, or maybe some other mechanism yet to be determined, passing a search string to Splunk. For example, if I had 10 users to configure for API access, it would seem that I'd need to generate 10 tokens, based on what I get out of the documentation. Can anyone point me in the right direction? Thank you in advance.
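For reference, a hedged sketch of how an authentication token is typically minted against the splunkd token endpoint; the hostname, credentials, user name, and audience below are placeholders, and whether a shared token per role is acceptable is a policy question rather than a technical one:

  # create a token for a (placeholder) service account; expiration can also be set on the token
  curl -k -u admin:changeme https://splunk-host:8089/services/authorization/tokens \
       -d name=svc_grafana \
       -d audience="Grafana dashboards"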
Background: I am sending data to Splunk Cloud through an intermediate forwarder (a universal forwarder) from multiple source instances (some in Pacific time, some in UTC) that do not support HEC or the required TLS versions. As of now, this is the only way of sending the logs. Sources (PT and UTC) > Intermediate Forwarder > Splunk Cloud

Problem: Some of the source instances are in the Pacific time zone while the intermediate forwarder is in UTC. The logs coming from the PT instances show up in Splunk Cloud with UTC time, so Splunk Cloud shows the logs as being from a time earlier than when they were really generated. I should also mention that the logs contain a timestamp (PT, the correct time), always within roughly the first 70 characters; that is the time I want them shown in. Changing the intermediate forwarder's time zone to PT fixes the issue for the PT instances but breaks the instances that are in UTC.

As a solution, I saw that time zones can be configured in props.conf, located (after creation) in the /opt/splunkforwarder/etc/system/local directory. Here is what it looks like for me:

[host::<some of the hosts>]
TZ = US/Pacific

I have tried adding the following setting to the stanza (per this post: https://community.splunk.com/t5/Getting-Data-In/Universal-Forwarder-and-props-conf-and-transforms-conf/m-p/39732/highlight/true#M7401), but it is of no help:

force_local_processing = true

However, this just does not work: I can see logs in Splunk Cloud, but they are in UTC. I have also tried using [<sourcetype>] and [source::<source>] stanzas instead of [host::<myhost>], but that doesn't do anything either. Is there anything else I can try?
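For illustration, a hedged props.conf sketch that combines the per-host TZ override from the post with an explicit timestamp-lookahead hint for a timestamp that sits within the first ~70 characters; the host pattern is a placeholder, and on a universal forwarder such parsing settings generally only take effect where the data is actually parsed (a heavy forwarder or the indexing tier):

  # props.conf - placeholder host pattern; adjust to the real PT hosts
  [host::pt-host*]
  TZ = US/Pacific
  MAX_TIMESTAMP_LOOKAHEAD = 70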
I have a bunch of panels for distinct purposes (say about 100-200). In my use case I would like to be able to share these panels with other users so they can create their own dashboards. Alternatively, is there a way to generate dashboards using a Splunk SDK or some kind of dashboards-as-code service? For example, if I want a dashboard to contain 10 panels, can I just take the panel source code and combine it into a dashboard in a way that's reliable and won't cause issues on dashboard imports? Happy to clarify more if needed. Thanks.
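As one programmatic option, a hedged sketch of creating a Simple XML dashboard through the splunkd views REST endpoint; the host, credentials, app namespace, dashboard name, and XML payload are placeholders, and panel XML copied from existing dashboards could be concatenated inside the <dashboard> element:

  curl -k -u admin:changeme \
       https://splunk-host:8089/servicesNS/admin/search/data/ui/views \
       -d name=generated_dashboard \
       --data-urlencode eai:data='<dashboard><label>Generated dashboard</label><row><panel><table><search><query>index=_internal | head 5</query></search></table></panel></row></dashboard>'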
Hello Splunkers, I wrote a Python script that explores the indexes under splunk-var and calculates their total size, then asks the user whether they'd like to back them up. After the user indicates which indexes to back up, it copies all buckets and other metadata in the db path (excluding the hot buckets) to a directory specified as a command-line argument. I want to know:
- How to actually back up the files (is it as simple as copying the directory out, and later copying it back in and restarting Splunk?)
- How best to implement bucket policies (maxHotSpanSecs)
- How to understand bucket rollover when we see unexpected behavior
- What indexes.conf settings I should use so that a bucket holds one day's worth of data (see the sketch below)
Thanks in advance
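A hedged indexes.conf sketch for the last point, assuming the goal is hot buckets that each span roughly one day of event time; the index name and paths are placeholders, and sizing/retention settings are omitted:

  # indexes.conf - placeholder index name and default paths
  [my_daily_index]
  homePath   = $SPLUNK_DB/my_daily_index/db
  coldPath   = $SPLUNK_DB/my_daily_index/colddb
  thawedPath = $SPLUNK_DB/my_daily_index/thaweddb
  # roll a hot bucket once it covers about one day (86400 seconds) of event time
  maxHotSpanSecs = 86400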
In the Splunk Monitoring Console, the Forwarders: Deployment panel has duplicate entries for a host: one is active with the latest version and the other shows the old version as missing. How can I remove the old, missing entries for a host without rebuilding the forwarder asset table? Suggestions please? Sample:
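One hedged possibility, assuming the panel is driven by the Monitoring Console's forwarder asset lookup (commonly named dmc_forwarder_assets; the lookup name and its hostname/status columns are assumptions to verify with | inputlookup first, and the file should be backed up before editing), is to filter the stale rows out of the lookup directly:

  | inputlookup dmc_forwarder_assets
  | search NOT (hostname="old-forwarder-host" status="missing")
  | outputlookup dmc_forwarder_assets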
I have raw data in the format {"col1":"1",{col2":"2"},{.........(continue), which, if I visualize it using https://codebeautify.org/string-to-json-online, looks roughly like this:

Object
  a: { col1: 1, col2: 2, col3: 3, col4: 4 }
  b: [
    { col5: 5,  col6: [6] },
    { col5: 55, col6: [66] },
    { col5: 55, col6: [666] }
  ]

My Splunk query is:

index="api" | rename a.col1 as "col1", a.col2 as "col2", b{}.col5 as "col5", b{}.col6{} as "col6" | table "col1","col2","col5","col6"

It displays one row where col1=1 and col2=2, with col5 as a multivalue field (5, 55) and col6 as a multivalue field (6, 66, 666). Moreover, if I export it to CSV, it only shows me the first value of each multivalue field:

col1  col2  col5  col6
1     2     5     6

but it should be like this (each row mapped 1:1):

MY DESIRED TABLE
col1  col2  col5  col6
1     2     5     6
1     2     55    66
1     2     55    666
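A common way to get the 1:1 row mapping, sketched here with the field names from the post (the rename details depend on the exact JSON, so treat this as an assumption-laden starting point), is to zip the parallel multivalue fields together and then expand the pairs:

  index="api"
  | rename "a.col1" as col1, "a.col2" as col2, "b{}.col5" as col5, "b{}.col6{}" as col6
  | eval pair=mvzip(col5, col6, "|")
  | mvexpand pair
  | eval col5=mvindex(split(pair, "|"), 0), col6=mvindex(split(pair, "|"), 1)
  | table col1, col2, col5, col6

Note that mvzip pairs values positionally, so if col5 and col6 carry different numbers of values the mapping needs extra handling.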
One of the fields in my raw data is multivalue (like an array). I can see all of the values in that column in Splunk, but when I try to export them to CSV, only the first value gets copied and the rest disappear. For example, in Splunk:

col1
val1 val2 val2 val3 val4

while in the export:

col1
val1 val2
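One hedged workaround, assuming the goal is simply to keep every value visible in a single CSV cell, is to collapse the multivalue field into a delimited string before exporting; the index is a placeholder, the field name is copied from the post, and the delimiter is arbitrary:

  index=your_index_here
  | eval col1=mvjoin(col1, "; ")
  | table col1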
Hello, I currently have an intake exceeding 100 GB per day, and I would like to know the best-practice recommendations to support this intake without affecting performance. How many servers or indexers are needed, and what are their minimum and recommended specifications?
Hi Team, we are exploring Splunk Cloud and need clarification on the points below.
1) Does an AWS Splunk Cloud instance support the Common Information Model (CIM)?
2) Is Splunk Enterprise Security included in the AWS Splunk Cloud license?
3) Can we make search API calls from another application to retrieve AWS Splunk Cloud indexed data (CIM supported), e.g. along the lines of the sketch below?
4) Can you provide a demo of AWS Splunk Cloud (SaaS)?
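For point 3, a hedged sketch of the kind of REST search call typically made against the splunkd management port; the stack hostname, credentials, and search string are placeholders, and Splunk Cloud stacks may require IP allow-listing or token authentication before this port is reachable:

  curl -k -u admin:changeme \
       "https://your-stack.splunkcloud.com:8089/services/search/jobs/export" \
       -d search="search index=main earliest=-15m | head 10" \
       -d output_mode=json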
I have set up a new on-prem SH cluster and a deployment server with Splunk Enterprise version 8.2.5. I have configured the 3 new search heads as license slaves and pointed them to the license master, but the slaves are not syncing with the license master. Note: we have three license pools on the license master, and I have updated the pool stanzas in server.conf as well, but no luck. Please suggest.

I performed the following configuration steps in server.conf on the 3 search heads and the deployment host separately:
1. Select a new passcode to fill in for pass4SymmKey.
2. SSH to the Splunk instance.
3. Edit the /opt/splunk/etc/system/local/server.conf file.
4. Under the [general] stanza, replace the hashed pass4SymmKey value with the new passcode in plain text. It stays in plain text until Splunk services are restarted.
5. Save the changes to server.conf.
6. Restart Splunk services on that node.

Here is the server.conf on a search head acting as a license slave:

[general]
serverName = SHHost123
pass4SymmKey = <same as on the license master>

[license]
master_uri = https://x.x.x.x:8089
active_group = Enterprise

[sslConfig]
sslPassword = 12344…

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[lmpool:auto_generated_pool_enterprise]
description = auto_generated_pool_enterprise1
quota = MAX
slaves = *
stack_id = enterprise

[replication_port://9023]

[shclustering]
conf_deploy_fetch_url = http://x.x.x.x:8089
disabled = 0
mgmt_uri = https://x.x.x.x:8089
pass4SymmKey = 23467….
shcluster_label = shclusterHost_1
id = D6E63C0A-234S-4F45-A995-FDDE1H71B622
Hello, I am trying to install an SSL certificate for Splunk to permit HTTPS access to the console. As part of the procedure, I have generated the CSR, the key, and the signed PEM certificate. I uploaded the files to the Splunk host and created (and edited) server.conf with the following information:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCertificate.pem

I also disabled the [sslConfig] stanza in server.conf. When I try to restart Splunk, the service fails with the following errors. Please advise on how to fix the issue.

WARNING: Cannot decrypt private key in "/opt/splunk/splunk/etc/auth/mycerts/mySplunkWebPrivateKey1.key" with>
Feb 03 17:09:16 frczprmccinfsp1 splunk[1559898]: WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifySer>

Thanks in advance. Siddarth
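For reference, a hedged sketch of the web.conf [settings] stanza where Splunk Web HTTPS directives of this kind normally live; the key and certificate paths are copied from the post above:

  # web.conf (for example $SPLUNK_HOME/etc/system/local/web.conf)
  [settings]
  enableSplunkWebSSL = true
  privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
  serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCertificate.pem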
Hello guys, can anyone please help me create a DoS/DDoS alert without using any app in Splunk? For example: alert if source IPs send thousands of TCP packets within a 15-20 minute window or so. I can't seem to find any docs related to this. TIA
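A hedged sketch of the underlying search, assuming firewall or network traffic events with a src_ip field; the index, sourcetype, field names, and threshold are placeholders to adapt to your data:

  index=network sourcetype=firewall_traffic
  | bin _time span=15m
  | stats count as events by _time src_ip
  | where events > 10000

Saved as an alert on a 15-20 minute schedule, this would trigger whenever any source IP exceeds the chosen threshold within a window.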
I am using Splunk Cloud and I see that the license is being exceeded daily. In the Cloud Monitoring Console app there is no option that lets me see usage by sourcetype, which would help me know exactly which source has increased its usage.
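A hedged sketch of the classic license-usage breakdown, assuming the internal license usage logs are searchable from your Cloud search head; in these logs st is the sourcetype, idx the index, and b the bytes counted, and the 30-day window and GB rounding are arbitrary choices:

  index=_internal source=*license_usage.log type=Usage earliest=-30d@d
  | stats sum(b) as bytes by st idx
  | eval GB=round(bytes/1024/1024/1024, 2)
  | sort - GB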
I'm looking to create a search for users who have reset their password and then, within a certain amount of time, logged off. Does anybody know the best way of producing a search for this? Any help with this is much appreciated.
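A hedged sketch assuming Windows Security event data, where EventCode 4724 is a password reset and 4634 a logoff; the index, sourcetype, user field, and the 30-minute window are placeholders to adapt:

  index=wineventlog (EventCode=4724 OR EventCode=4634)
  | transaction user startswith="EventCode=4724" endswith="EventCode=4634" maxspan=30m
  | table _time user duration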
Hello, we are still facing the following issue when we put our indexer cluster in maintenance mode and stop one indexer: basically all the indexers stop ingesting data, their queues grow, and they wait for splunk-optimize to finish its job. This usually happens when we stop the indexer after a long time since the last stop. Here is an example of the error message that appears on all the indexers at once, on different bucket directories:

throttled: The index processor has paused data flow. Too many tsidx files in idx=myindex bucket="/xxxxxxx/xxxx/xxxxxxxxxx/splunk/db/myindex/db/hot_v1_648" , waiting for the splunk-optimize indexing helper to catch up merging them. Ensure reasonable disk space is available, and that I/O write throughput is not compromised.

Checking further, going into the bucket directory, I was able to see hundreds of .tsidx files. What splunk-optimize does is merge those .tsidx files.

We are running Splunk Enterprise 9.0.2, and:
- on each indexer the disks reach 150K IOPS
- we already applied the following settings, which reduced the impact but haven't solved it:

indexes.conf
[default]
maxRunningProcessGroups = 12
processTrackerServiceInterval = 0

Note: we kept maxConcurrentOptimizes=6 as the default, because we have to keep maxConcurrentOptimizes <= maxRunningProcessGroups (this has also been confirmed by Splunk support, who informed me that maxConcurrentOptimizes is no longer used, or is used with less effect, since 7.x and is there mainly for compatibility).
- I know that since 9.0.x it is possible to manually run splunk-optimize on the affected buckets, but that seems to me more a workaround than a solution; considering a deployment can have multiple indexers, it is not straightforward.

What do you suggest to solve this issue?

Thanks a lot, Edoardo
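As a small aid for tracking the problem across the cluster, a hedged sketch of a search over the internal logs for the throttle message quoted above; the message text is taken from that example and may differ slightly between versions:

  index=_internal sourcetype=splunkd "Too many tsidx files" "splunk-optimize"
  | rex "idx=(?<idx>\S+)\s+bucket=\"(?<bucket>[^\"]+)\""
  | stats count latest(_time) as last_seen by host idx
  | convert ctime(last_seen)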
Hi, I want to create a search out of the event below, to raise an alert if a particular system has the label lostinterface or has no label at all. In profiles we have two values, i.e. tndsubnet1 and tndsubnet2; how can we make the search separate out the systems in tndsubnet1 and tndsubnet2 accordingly? Thanks.
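A hedged sketch, assuming the events carry fields named label and profile (the names are inferred from the wording of the post and may differ in the actual data):

  index=your_index_here
  | where label="lostinterface" OR isnull(label)
  | stats count values(host) as systems by profile

Splitting by profile keeps the tndsubnet1 and tndsubnet2 systems in separate result rows, which an alert can then act on.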
I have a query where I'm looking for users who are performing large file transfers (>50MB). This query runs every day, and as a result we identify hosts that are legitimate. These host names are extracted from the dst_host field of my search results. As we compile a list of valid hosts, we can simply add them to the query to be excluded from the search, like:

index=* sourcetype=websense* AND (http_method="POST" OR http_method="PUT" OR http_method="CONNECT") AND bytes_out>50000000 NOT (dst_host IN (google.com, webex.com, *.zoom.us) OR dst_ip=1.2.3.4)

I know there's a better way: putting the excluded hosts or IPs in a file that the search can query against, but I'm not sure how to do that. I don't want to update the query every day with hosts that should be excluded, but rather maintain a living document that can be updated with hosts or IPs to exclude. Can someone point me in the right direction for this?
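A hedged sketch of the lookup-based approach; the lookup file name and its dst_host column are placeholders, and the CSV would be uploaded as a lookup and maintained separately from the search:

  index=* sourcetype=websense* (http_method="POST" OR http_method="PUT" OR http_method="CONNECT") bytes_out>50000000
      NOT [| inputlookup excluded_destinations.csv | fields dst_host]

The subsearch expands into a set of dst_host=<value> exclusions at search time, so new hosts only need to be added to the CSV; an additional dst_ip column could be handled the same way with a second subsearch.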