All Topics

I'm trying to use DB Connect on our search heads to do something like this:

| dbxquery query="My Query" connection="My_Connection"

This sorta works, but only on one search head. The issue seems to be that when the identities.conf file syncs to the other search heads, the encrypted password is not readable by the other instances. It works on the machine that the SQL identity was created on, but no others. So I'm thinking I either need to somehow get identities.conf to not sync and manually create it on each search head, or the other search heads need to be able to read the encrypted password. Or maybe there is another solution I'm not thinking of. Anybody have any thoughts on this? Thanks.
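One possibility worth exploring (a sketch, not a confirmed fix): if the encrypted credentials cannot be decrypted on the other members, you could stop replicating identities.conf through the search head cluster and manage it per member. server.conf supports per-file replication switches under [shclustering], so something like the following might keep identities.conf local; whether DB Connect tolerates that is an assumption you would need to verify.

# server.conf on each SHC member — assumption: exclude DB Connect identities from conf replication
[shclustering]
conf_replication_include.identities = false

The other usual suspect is splunk.secret: encrypted passwords are generally only readable cluster-wide when all members share the same splunk.secret file, so comparing those is another low-risk check.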
Please share an SPL search to alert when a UF/HF stops sending data, or when there is a significant change in what Splunk ingests from it. I have had times when a UF/HF stops sending, or suddenly sends very little, and I only find out a day or so later. Thanks a million.
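A minimal sketch of the usual approach (index scope and the 60-minute threshold are assumptions to adapt): compare the last time each host was seen against the current time and alert when the gap exceeds your tolerance.

| tstats latest(_time) as lastTime where index=* by host
| eval minutesSinceLastEvent = round((now() - lastTime) / 60)
| where minutesSinceLastEvent > 60
| convert ctime(lastTime)

Scheduled as an alert that triggers when results are returned, this catches silent forwarders. For the "significant change in volume" case, a similar search over index=_internal source=*metrics.log* group=tcpin_connections, comparing today's bytes per forwarder against a historical average, is a common variation.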
Hi All, we have 3 search heads in a cluster, all Linux-based, and we use LDAP authentication for all users. We noticed that we are not able to open any of the existing LDAP strategies. We can navigate to Settings > Authentication Method > LDAP Settings and see the list of existing LDAP strategies, but when I click on any one of them it gives a "404 Error" (screenshot attached). URL: https://servername:8000/en-US/manger/launcher/authentication/providers/LDAP

Also, when I search for the request ID (617fb7c4737f9eac401190) it gives me the error "ERROR [617fb7c4737f9eac401190] admin:1272 - getSingleEntity - unable to load the section form definition for endpoint=authentication/providers/LDAP".

We use Splunk Enterprise 8.0.0. I am not sure whether this is a capability-related issue or not. The LDAP strategies work fine on other servers such as the indexers and heavy forwarders. Any help would be highly appreciated.

Thanks, Neerav
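Not from the original thread, but a hedged troubleshooting step: the web UI error usually has a matching splunkd-side message, so searching the internal logs around the same time can help narrow down whether this is a broken UI endpoint definition or a permissions/capability problem.

index=_internal sourcetype=splunkd log_level=ERROR ("authentication/providers" OR "getSingleEntity")
| table _time host component _raw

Comparing the output of $SPLUNK_HOME/bin/splunk btool authentication list --debug between a working server and a search head cluster member is another low-risk check.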
Hi, we have an inputs.conf with:

[monitor:///home/.../.bash_history]
disabled = 0
crcSalt = <SOURCE>
whitelist = \.bash_history$

just to monitor the .bash_history files. But when I look at "./splunk list monitor" it lists every file in the /home/... folders. Besides that, the splunkd process uses a lot of CPU (no wonder, with so many files in "list monitor", I think). Why is splunkd on the universal forwarder monitoring every file in the /home/... folders when all it has to do is check .bash_history? What am I doing wrong with this input?

Thanks in advance
Jari

P.S. Splunk version 8.1.3
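A hedged sketch of an alternative stanza (not a confirmed fix): a common pattern is to monitor the parent directory recursively and let the whitelist do the filtering; "splunk list monitor" may still show the directories it walks, but only whitelisted files should actually be tailed.

[monitor:///home]
disabled = 0
recursive = true
whitelist = \.bash_history$

If CPU stays high, ignoreOlderThan (for example ignoreOlderThan = 7d) is another inputs.conf setting that can cut down how many files the tailing processor keeps revisiting; treat the exact values here as assumptions to tune.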
Hi, I have the following complex search with multiple mstats. The issue is that I think I have to use joins to bring the data together correctly; however, this is expensive in time. It takes 5 seconds to get back 1 hour, and I want to get back 10 hours. Is there any other way I can pull multiple metrics and bring them together without using a join?

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY pid cmd service.type host.name service.name replica.name component.name threshold
| rename "service.name" as service_name
| rename "replica.name" as replica_name
| rename "service.type" as service_type
| eval T_NbOfThreads=if(isnull(nbOfThreads),"",threshold)
| eval T_MemoryCons=if(isnull(memoryCons),"",threshold)
| eval T_NbOfOpenFiles=if(isnull(nbOfOpenFiles),"",threshold)
| stats values(cpuPerc) as cpuPerc values(nbOfThreads) as nbOfThreads values(memoryCons) as memoryCons values(nbOfOpenFiles) as nbOfOpenFiles values(upTime) as upTime values(creationTime) as creationTime values(T_NbOfOpenFiles) as T_NbOfOpenFiles values(T_MemoryCons) as T_MemoryCons values(T_CpuPerc) as T_CpuPerc values(T_NbOfThreads) as T_NbOfThreads by _time pid cmd service_type host.name service_name replica_name component.name
| eval Process_Name=((service_name . " # ") . replica_name)
| sort 0 - _time
| dedup _time pid
| join type=left Process_Name _time
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY "service.name" replica.name service.type
    | rename "service.name" as service_name
    | rename "replica.name" as replica_name
    | eval Process_Name=((service_name . " # ") . replica_name)
    | table Process_Name, Replica, "service.type", _time
    | sort 0 - _time
    | dedup _time Process_Name]
| sort Process_Name _time
| table _time, Process_Name, Replica
| streamstats last(Replica) as Replica
| sort - _time
| append maxout=200000
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY "service.name" replica.name service.type
    | rename "service.name" as service_name
    | rename "replica.name" as replica_name
    | rename "service.type" as service_type
    | eval Process_Name=((service_name . " # ") . replica_name)
    | table Process_Name, Replica, service_type, _time
    | sort 0 - _time
    | dedup _time Process_Name
    | join type=left Process_Name,_time
        [| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY pid cmd service.type host.name service.name replica.name component.name threshold
        | rename "service.name" as service_name
        | rename "replica.name" as replica_name
        | rename "service.type" as service_type
        | eval T_NbOfThreads=if(isnull(nbOfThreads),"",threshold)
        | eval T_MemoryCons=if(isnull(memoryCons),"",threshold)
        | eval T_NbOfOpenFiles=if(isnull(nbOfOpenFiles),"",threshold)
        | stats values(cpuPerc) as cpuPerc values(nbOfThreads) as nbOfThreads values(memoryCons) as memoryCons values(nbOfOpenFiles) as nbOfOpenFiles values(upTime) as upTime values(creationTime) as creationTime values(T_NbOfOpenFiles) as T_NbOfOpenFiles values(T_MemoryCons) as T_MemoryCons values(T_CpuPerc) as T_CpuPerc values(T_NbOfThreads) as T_NbOfThreads by _time pid cmd service_type host.name service_name replica_name component.name
        | eval Process_Name=((service_name . " # ") . replica_name)
        | sort 0 - _time
        | dedup _time pid]
    | rex field=Process_Name "(?<service_name2>.*) # (?<replica_name2>.*)"
    | eval service_name=if(isnull(service_name),service_name2,service_name)
    | eval replica_name=if(isnull(replica_name),replica_name2,replica_name)]
| sort 0 - _time
| dedup _time pid
| join type=left
    [| mstats min("mx.process.resources.status") as Resources WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY "service.name" replica.name
    | rename "service.name" as service_name
    | rename "replica.name" as replica_name
    | eval Process_Name=((service_name . " # ") . replica_name)
    | sort 0 - _time
    | dedup _time Process_Name
    | table Process_Name, Status, Resources
    | eval Resources=rtrim(Resources,substr(Resources,-7))
    | eval Resources=if((Resources == ""),0,Resources)]
| eval Status=(Resources * Replica)
| eval Status=if((Status == 4),2,if((Status == 0),0,1))
| eval Replica=case((Process_Name == "xmlserver # xmlserver"),"2",(Process_Name == "zookeeper # zookeeper"),"2",(Process_Name == "fileserver # fileserver"),"2",true(),Replica)
| search Process_Name!="*ANT_TASK*"
| eval Replica=if((Replica == 1),0,Replica)
| timechart min(Replica) as Process_Status
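A hedged, heavily trimmed sketch of the usual join-free pattern (only two of the metrics are shown, and it assumes mx.replica.status shares the service.name/replica.name dimensions with the process metrics): append the second mstats result set instead of joining it, then use eventstats/stats over the shared keys to copy the replica status onto the per-process rows.

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.memory.usage") as memoryCons WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY pid "service.name" "replica.name"
| append
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY "service.name" "replica.name"]
| rename "service.name" as service_name, "replica.name" as replica_name
| eval Process_Name=service_name." # ".replica_name
| eventstats values(Replica) as Replica by _time Process_Name
| stats values(cpuPerc) as cpuPerc values(memoryCons) as memoryCons values(Replica) as Replica by _time Process_Name pid

The appended status rows carry no pid, so the final stats by pid drops them after eventstats has spread their value onto the matching process rows. Whether the dimensions really line up this way in your metrics is an assumption to verify, but replacing each join with an append-plus-stats along these lines is generally much cheaper.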
Hello, I have been struggling with something that is probably common sense to experts. The Splunk messages I deal with are mostly structured like the one pasted at the end [1]. The message is persisted at full size; however, when it appears in a search result, the "object" part, which is JSON, gets cut off to the following:

{"objectName":"<some_string>"

I know there is a default limitation that a field cannot exceed 10,000 characters and that exceeding it can end up like this; however, the problem is also observed for messages with a total length of 6,000 characters, so there must be something else that I am currently missing. I also went through similar questions here that suggested enriching the search queries with a regex that forces the complete field extraction in the search results, like:

| rex "object=(?<object>.+)$"

This does the job for testing purposes, but I would like to find another solution: my searches are executed through the Splunk REST API, so it is not an option to hardcode such regexes for multiple fields. I assume the solution could be accomplished by a configuration on the Splunk side, and I would really appreciate it if someone with more experience could take a look.

Regarding the setup on my side: I have one search head and two indexers, and the problem is observed regardless of whether I execute the search through the search head or directly on the indexers. Thank you in advance.

Best Regards, Martin

[1] sample message:

formatVersion="<some_version>", serverTimestamp="<some_timestamp>", crtAccount="<some_string>", crtApplication="<some_string>", crtComponent="<some_string>", crtTenantId="<some_string>", crtPermissions="<some_string>", crtHostname="<some_string>", accountExt="<some_string>", clientTimestamp="<some_timestamp>", messageId="<some_string>", category="<some_string>", loggedByClass="<some_string>", correlation_id="<some_string>", ip_address="<some_string>", username="<some_string>", tenantId="<some_string>", verb={"action":"update"}, "object="{ "objectName":"<some_string>", "objectAttributs":{ "System details":{ "oldValue":"<some_string>", "newValue":"<some_string>" } }, "auditedObject":{ "type":"<some_string>", "id":{ "key":"<some_string>" } } }
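For what it's worth, a hedged pointer rather than a confirmed answer: the per-event budget for automatic search-time key/value extraction lives in limits.conf, and since the whole event counts against it (not just the object field), a 6,000-character field inside a longer event can still be clipped. Raising it on the search head (and indexers, for consistency) is one thing to try; the value below is an assumption, not a recommendation.

# limits.conf
[kv]
maxchars = 20480

If the field is extracted as JSON via KV_MODE or spath, the [spath] stanza's extraction_cutoff setting is the other limit commonly involved.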
The Splunk documentation has steps to upgrade a universal forwarder to a heavy forwarder, but no steps on downgrading. Are the steps the same, just swapping UF for HF and HF for UF? I'm guessing it would go like:

1. Install the UF and stop the HF
2. Copy $SPLUNK_HOME/var/lib/splunk/* from the HF to the UF
3. Copy over the inputs.conf, outputs.conf, etc. from the HF to the UF

Does it work like this?
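A hedged sketch of what that might look like on a Linux host (the paths, and the decision to copy only the fishbucket rather than all of var/lib/splunk, are assumptions — a UF keeps no local indexes, so copying index buckets over is generally not wanted):

# stop the old heavy forwarder
/opt/splunk/bin/splunk stop
# preserve file-tracking state so monitored files are not re-ingested from the beginning
cp -rp /opt/splunk/var/lib/splunk/fishbucket /opt/splunkforwarder/var/lib/splunk/
# carry over the app(s) holding inputs.conf, outputs.conf and certificates (my_inputs_app is a placeholder)
cp -rp /opt/splunk/etc/apps/my_inputs_app /opt/splunkforwarder/etc/apps/
/opt/splunkforwarder/bin/splunk start

The main functional difference to plan for is that index-time props/transforms that ran on the HF will no longer apply on a UF, so any parsing or filtering has to move to the indexers.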
Hi, I have a requirement to blacklist all computer accounts (ending with $) in Security Event Code 4769. So far I have created the following filter in inputs.conf, but it is not working:

[WinEventLog://Security]
disabled = 0
renderXml = 1
source = XmlWinEventLog:Security
blacklist1 = EventCode="4769" Message="(?:<Data Name='ServiceName'>).+\$"

I checked the regex and it works in a regex builder app, but the filtering is not working: I am still receiving events with computer accounts. I have gone through and tried various Splunk forum answers on the same topic, but no luck. Any help will be appreciated. Thanks for your time.
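One hedged thing to check (an assumption about the cause, not a confirmed diagnosis): with renderXml = 1 the event body the forwarder handles is the raw XML, so a blacklist keyed on Message may never match. A sketch of the alternative that does not depend on that behavior is to drop the events at parse time on the indexer or heavy forwarder tier with a nullQueue transform; the stanza name below reuses the source you already set, and the regex assumes the standard XML layout where EventID precedes the ServiceName data element.

# props.conf (on the indexers / heavy forwarder that parses this data)
[source::XmlWinEventLog:Security]
TRANSFORMS-drop_4769_computer_accounts = drop_4769_computer_accounts

# transforms.conf
[drop_4769_computer_accounts]
REGEX = <EventID>4769</EventID>[\s\S]+?<Data Name='ServiceName'>[^<]+\$</Data>
DEST_KEY = queue
FORMAT = nullQueue

Filtered events still traverse the forwarder with this approach, so it saves license but not forwarder bandwidth; treat it as a sketch to test against a sample of your XML events.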
My apologies if this question seems mundane or was answered elsewhere, but I have searched to no avail. I am completely new to Splunk and am pathfinding the installation and configuration for use as a syslog and audit log store, similar to how ELK is often used. While we will add additional data sources at some point, my primary focus is on collecting and forwarding /var/log/audit/audit.log and /var/log/auth.log from various Ubuntu hosts into Splunk 8.2(.2.1) Enterprise.

My initial attempt involved installing the UF alongside the Splunk server installation, which did not turn out well. Realizing that they are essentially the same daemon and use the same default ports, they obviously conflict. So instead I attempted to use the Splunk installation itself, like so:

user@splunkhost:~$ sudo /opt/splunk/bin/splunk add forward-server splunkserver:9997
user@splunkhost:~$ sudo /opt/splunk/bin/splunk list forward-server
user@splunkhost:~$ sudo /opt/splunk/bin/splunk add monitor /var/log/audit/audit.log -index main -sourcetype %audit-log%
user@splunkhost:~$ sudo /opt/splunk/bin/splunk add monitor /var/log/auth.log -index main -sourcetype %auth-log%

However, this also did not work and caused the pipeline to essentially become stuck and back up. I believe the error message was something about the TCP output processor pausing the data flow; I am just unsure why.

Essentially, I need to collect the security logs from the Splunk server host and index them in Splunk along with everything else, but I am at a loss as to how this can be accomplished. Any help or pointers would be most appreciated. Thank you!
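A hedged sketch of the simpler route (assumptions flagged in the comments): a full Splunk Enterprise instance can index its own host's files directly, so if splunkserver is this same machine there is no need for a forward-server entry at all — forwarding to an unreachable or non-listening receiver is one common way the TCP output processor ends up pausing the pipeline.

# if the forward-server entry points at this same instance (or nowhere reachable), remove it
sudo /opt/splunk/bin/splunk remove forward-server splunkserver:9997
# then just monitor the local files; the sourcetype names here are placeholders, pick your own
sudo /opt/splunk/bin/splunk add monitor /var/log/audit/audit.log -index main -sourcetype linux_audit
sudo /opt/splunk/bin/splunk add monitor /var/log/auth.log -index main -sourcetype linux_secure

For the other Ubuntu hosts, the usual pattern is a universal forwarder pointed at this server on 9997, with receiving enabled on the server (splunk enable listen 9997).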
Hey. I'm trying to add the "Drilldown" and "Contributing Events" sections to our Splunk notables. I have set these parameters with the relevant search, but they did not appear in our notables:

action.notable.param.drilldown_search
action.notable.param.drilldown_name

Splunk version: 8.0.3

This is the notable I have configured. It contains a drilldown parameter, but as you can see it did not appear in the notable:
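For reference, a hedged savedsearches.conf sketch of how these parameters are usually laid out for an ES correlation search. The search text and token names below are placeholders, and the time-offset parameters are an assumption worth double-checking against your ES version; if the section still does not render, editing the drilldown through the correlation search editor (so ES rewrites the stanza itself) is the safer path.

action.notable = 1
action.notable.param.drilldown_name = View raw authentication events for $user$
action.notable.param.drilldown_search = index=wineventlog EventCode=4625 user="$user$"
action.notable.param.drilldown_earliest_offset = $info_min_time$
action.notable.param.drilldown_latest_offset = $info_max_time$

"Contributing Events" generally only shows up when the drilldown search is populated and its tokens (like $user$) resolve against fields that actually exist in the notable event.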
I currently have 4 indexers as part of my Splunk deployment. I am upgrading these indexers with new hardware. I am going to join the 4 new indexers to the existing indexer cluster and then ultimately retire the 4 old indexers once the data is redistributed across the cluster. But once all of the indexers are in the same cluster, I seem to have two options (I think) for making sure that data is distributed across the new indexers:

Option 1: Rebalance data across all 8 indexers...

splunk rebalance cluster-data -action start

...and then retire the old indexers as normal.

Option 2: Put each indexer in detention one by one and then retire it in the following way, which as I understand it will move data off the indexer in the process...

splunk offline --enforce-counts

I've read the documentation around these topics; however, Option 2 was mentioned to me in a previous post, so I just wanted clarification. Many thanks.

Edit: Or, thinking about it some more, would I just use Option 1 to rebalance the data and then use Option 2 to remove the old indexers one by one?
Hi, I need to extract a value from a message field which has multiple data values, like below:

message:{user: xxxx,age:yy,gender:xxxx, position:"nnnn", place:yyy}

From the above I need to extract the position value; there may be any number of other values present after it, so I need to extract the position value by its name alone. Also, this value can sometimes appear under the name designation instead of position.
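A hedged rex sketch (assuming the text lives in a field called message and the value ends at the next comma or closing brace — adjust the character class to your real data):

| rex field=message "(?:position|designation)\s*:\s*\"?(?<position>[^,\"}]+)"

If the message field were strict JSON, spath (| spath input=message) would be the cleaner option, but the unquoted values in the sample suggest rex is the safer bet here.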
I have a query structured like below, with a main search and a subsearch, where the main search includes a lookup:

|inputlookup tci
|search tag.name="ap"
|rename tag.name as tags
|dedup indicator
|table indicator confidence rating ownerName tags
|union
    [search sourcetype="cisco:*" action=allowed
    |rename src_ip as indicator
    |dedup indicator
    |table indicator confidence rating ownerName tags]
|stats count values(confidence) as confidence values(rating) as rating values(ownerName) as ownerName values(tags) as tags by indicator
|where count>1
|table indicator confidence rating ownerName tags

I want to use the results of this query as a filter against one more sourcetype and pull out the raw data. I have tried the below, but it doesn't work:

sourcetype="symantec:*"
    [|inputlookup tci
    |search tag.name="ap"
    |rename tag.name as tags
    |dedup indicator
    |table indicator confidence rating ownerName tags
    |union
        [search sourcetype="cisco:*" action=allowed
        |rename src_ip as indicator
        |dedup indicator
        |table indicator confidence rating ownerName tags]
    |stats count values(confidence) as confidence values(rating) as rating values(ownerName) as ownerName values(tags) as tags by indicator
    |where count>1
    |table indicator confidence rating ownerName tags]
|table _raw

Please suggest any alternatives for searching a sourcetype where the filter has to be derived from nested subsearches with lookups.
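A hedged sketch of the usual fix (dest_ip is a placeholder — it has to be whichever field in the symantec events actually holds the value you want to match): a subsearch used as a filter should return only the field(s) to match on, renamed to the outer data's field name, so the trailing stats/table columns are typically what break it.

sourcetype="symantec:*"
    [|inputlookup tci
    |search tag.name="ap"
    |dedup indicator
    |rename indicator as dest_ip
    |fields dest_ip
    |format]
|table _raw

If no single outer field lines up, renaming the column to "search" in the subsearch makes its values behave as bare search terms against the raw events; the union with the cisco results can stay, as long as only the matching field survives to the end of the subsearch.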
Hi, does anyone know how to onboard JBoss application servers to Splunk Enterprise? I need a tutorial for the configuration, with explanations. Please help.
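Not an official tutorial, but a hedged sketch of the usual starting point: a universal forwarder on the JBoss host with a file monitor over the server logs. The path assumes a standalone-mode install and the sourcetype name is a placeholder — adjust both to your environment.

# inputs.conf on the forwarder
[monitor:///opt/jboss/standalone/log/server.log]
index = main
sourcetype = jboss:server:log
disabled = 0

There is also a Splunk Add-on for JBoss on Splunkbase that ships sourcetypes and JMX-based inputs; pairing that add-on's sourcetypes with a simple file monitor like the above is the common pattern.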
Hi, we are able to fetch update logs from our WSUS server using the add-on for Windows. However, we want to display the approved/unapproved update status in Splunk itself, without having to go to the server. Any suggestions?
I have some error keywords, which all appear in the raw data. I put them in a lookup file named mylookup.csv. Now I need to get an email alert whenever one of the words in that file shows up. Thanks in advance.
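A hedged sketch, assuming the lookup has a single column named keyword (rename it to whatever your column is really called): a subsearch over the lookup can be turned into raw search terms, and the whole search saved as an alert with the "Send email" action.

index=your_index
    [| inputlookup mylookup.csv
    | rename keyword as search
    | fields search]
| table _time host source _raw

Renaming the column to "search" makes the subsearch return its values as bare terms ORed together, so any event containing one of the keywords matches; set the alert to trigger when the number of results is greater than 0.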
Hello Team, in my org the certificates below were installed on particular roles, and I need to understand which category each certificate falls under. We have been checking this link but do not fully understand it: About securing Splunk Enterprise with SSL - Splunk Documentation

Role: Internal Heavy Forwarders
Certs: /opt/splunk/etc/auth/myServerCertificate.pem, /opt/splunk/etc/auth/rootCA.pem

Role: SH cluster, Cluster Master, DMZ HF, DS, ES SH Deployer, HF Cluster, IDX Cluster, Monitoring Console
Certs: (not listed)

Role: ES SH Cluster
Certs: /opt/splunk/etc/auth/webcerts/mySplunkWebCertificate.pem, /opt/splunk/etc/auth/myServerCertificate.pem, /opt/splunk/etc/auth/rootCA.pem

(The Remarks column in the original table was blank for all rows.) Can anyone please explain how this maps to the categories in the documentation?
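A hedged way to map those paths to the categories in that doc page (general orientation, not a statement about your specific deployment): myServerCertificate.pem plus rootCA.pem under etc/auth is the pattern used for the splunkd server certificate — securing the management port and Splunk-to-Splunk forwarding/indexing traffic — while anything under etc/auth/webcerts is for Splunk Web, the browser-facing certificate. The stanzas they typically appear in look like this:

# server.conf — splunkd / inter-Splunk TLS
[sslConfig]
serverCert = /opt/splunk/etc/auth/myServerCertificate.pem
sslRootCAPath = /opt/splunk/etc/auth/rootCA.pem

# web.conf — Splunk Web (browser) TLS
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/webcerts/mySplunkWebCertificate.pem

Checking which conf file on each role actually references a given .pem (for example with splunk btool server list sslConfig --debug) is the surest way to categorize them.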
Hi, a new user here on Splunk. I have spent four hours going through multiple Splunk documents and I am going in circles here; maybe someone can point me in the right direction to get started. We have a new Splunk Cloud account, and I am trying to get my Cisco ASA and pfSense logs into Splunk Cloud. I installed the Splunk universal forwarder on a Windows server, but I can't figure out how to get the logs to the forwarder and then to Splunk Cloud. On the ASA I pointed the syslog server setting at the IP of the Splunk forwarder, but the forwarder doesn't seem to be picking it up.

PS: I already installed the .spl credentials app on the forwarder and restarted the service. (I believe that's all that is needed for the forwarder to send data to the cloud, right?)

Thank you for any help I can get.
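A hedged sketch of the missing piece (the port number and sourcetype are assumptions): the universal forwarder does not listen for syslog just because the ASA points at it — you have to define a network input on the forwarder, for example in an inputs.conf under $SPLUNK_HOME\etc\system\local, and open the port in the Windows firewall.

[udp://514]
sourcetype = cisco:asa
index = main
connection_host = ip

After a forwarder restart, the data should flow to Splunk Cloud via the credentials app you already installed. That said, the more commonly recommended design is a small syslog server (or Splunk Connect for Syslog) writing to files that the forwarder monitors, since a forwarder restart otherwise drops syslog packets.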
Hello experts, my Splunk search can only return a list of group IDs, but the group names can only be found separately. There is a groups.csv file which maps ID to name:

groupid,groupname
"a1234","apple"
"b2345","balloons"
"c1144","cats"

How can I write the query to return the group ID and the corresponding group name?

index=myidx type=groups | table _time groupid groupname

Thanks a lot!
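A hedged sketch, assuming groups.csv has been uploaded as a lookup table file (Settings > Lookups > Lookup table files) in an app and sharing your role can see — the lookup command then enriches each event by matching groupid:

index=myidx type=groups
| lookup groups.csv groupid OUTPUT groupname
| table _time groupid groupname

If you create a lookup definition (for example named group_names) on top of the file, use that name in the lookup command instead; definitions also let you enable case-insensitive matching.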
Hello guys... we need some help, as always. We are a bunch of noobs in Splunk and we want to create some basic dashboards about local performance, such as disk, CPU, and memory, plus dashboards for a few of the most important Windows event logs. Any idea how to start? I've been reading docs, forums, etc., but it seems like it's so basic that no one talks about it, lol. Hope you can give me a hand. We are using Splunk Enterprise on a local Windows 10 machine just to get our hands dirty and learn the basics, as you can see. Thank you again and happy Halloween!
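A hedged starting point (the stanza names follow the Splunk Add-on for Microsoft Windows; intervals, counters, and the target index are assumptions to tune): enable a few perfmon and event log inputs, then build the dashboard panels from simple timechart searches over the resulting data.

# inputs.conf
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 30

[perfmon://Memory]
object = Memory
counters = Available MBytes
interval = 30

[perfmon://LogicalDisk]
object = LogicalDisk
counters = % Free Space
instances = *
interval = 30

[WinEventLog://System]
disabled = 0

A dashboard panel is then just a saved search such as:

index=main sourcetype=Perfmon* counter="% Processor Time" | timechart avg(Value) by host

The Perfmon sourcetype and the counter/Value field names come from the Windows add-on's defaults, so double-check them against what actually lands in your index before building panels on top.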