All Topics


We are looking to see whether we can ingest data from O365 that lists a person's name and what they accessed within SharePoint. We were hoping the new Graph API input in the O365 add-on would get us this information. Our O365 admin states that he needs to set up an app registration for us to access O365 Graph, different from the Tenant ID and Client ID we are already using to connect to O365 from the Splunk add-on. He said it would need to connect to Graph with the App ID and a shared secret at a minimum. What endpoint is Splunk trying to pull from when it uses the Graph API inputs? The O365 add-on documentation states:
O365:graph:api - All audit events and reports visible through the Microsoft Graph API endpoints. This includes all the log events and reports visible through the MS Graph API.
Any help is appreciated.
Hi, I wanted to ask whether a multisite Splunk cluster can run different operating systems without any issues. For example, the site1 cluster runs CentOS on the peers, the search head cluster, and the master node, and we would like to deploy the site2 cluster with Ubuntu on all cluster members. Would that cause any problems with Splunk's functionality? Thanks in advance.
Hi! I'm trying to collect the Windows Application event logs of the local Splunk server, and I would like them in non-XML format. In the .../app/Splunk_TA_windows/inputs.conf stanza I added:

[WinEventLog://Application]
index = splunk_server_app
source = WinEventLog:Application
sourcetype = WinEventLog
disabled = 0
renderXML = 0

I'm getting events, but they are in XML format. Using Splunk Enterprise version 8.1.4. Any help would be appreciated. Thanks.
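One thing that may be worth checking (a sketch, not a confirmed fix): in the inputs.conf spec the attribute is spelled renderXml, and attribute names are case-sensitive, so renderXML = 0 may simply be ignored. Assuming the stanza above is otherwise correct, that would look like:

[WinEventLog://Application]
index = splunk_server_app
sourcetype = WinEventLog
renderXml = false
disabled = 0

A restart of the instance would be needed, and events already indexed as XML stay XML.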
Between these two locations:

$SPLUNK_HOME/etc/apps/TA-eStreamer/data
$SPLUNK_HOME/etc/apps/TA-QualysCloudPlatform/tmp

30 GB is taken up by files as old as one and a half years. Are there any configurations in these add-ons to make them clean up after themselves?
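I'm not aware of a retention setting that covers both paths, so if the add-ons themselves don't expose one, an OS-level cleanup job is one option. A minimal sketch, assuming Linux, a default $SPLUNK_HOME of /opt/splunk, and that files older than 30 days are safe to remove (verify against each add-on's docs first):

# crontab entries: purge old scratch files from the two add-on directories nightly
0 3 * * * find /opt/splunk/etc/apps/TA-eStreamer/data -type f -mtime +30 -delete
0 3 * * * find /opt/splunk/etc/apps/TA-QualysCloudPlatform/tmp -type f -mtime +30 -delete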
Hi Team, we cannot get the AppDynamics PHP agent to load on PHP 8. This is the startup error we are encountering:

php -v
PHP Warning: PHP Startup: Unable to load dynamic library 'appdynamics_agent.so' (tried: /usr/lib64/php/modules/appdynamics_agent.so (/usr/lib64/php/modules/appdynamics_agent.so: undefined symbol: zend_vm_stack_copy_call_frame), /usr/lib64/php/modules/appdynamics_agent.so.so (/usr/lib64/php/modules/appdynamics_agent.so.so: cannot open shared object file: No such file or directory)) in Unknown on line 0
PHP 8.0.12 (cli) (built: Oct 19 2021 10:34:32) ( NTS gcc x86_64 )
Copyright (c) The PHP Group
Zend Engine v4.0.12, Copyright (c) Zend Technologies
with Zend OPcache v8.0.12, Copyright (c), by Zend Technologies

appdynamics-php-agent-21.7.0.4560-1.x86_64.rpm is the version we are using. Any help is appreciated. Thanks, Amit Singh
I had some questions about the limits of a lookup file that I wasn't able to find answers to in the documentation (below) or anywhere else for Splunk Cloud. https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/DefineaKVStorelookupinSplunkWeb
1) Is there a file size limit when uploading a lookup directly through the browser, and if so, what is it?
2) How long is data in lookup files stored (does it ever get deleted after a time period)?
3) Does joining large lookups with OUTPUT/OUTPUTNEW have a limit on how much data is joined between two lookups, or between an index/sourcetype and a lookup?
4) Is there a maximum number of records that can be written to the lookup when you run | outputlookup?
Business use case example: We are ingesting logs into an index/sourcetype. We've created a search that enriches the sourcetype with a lookup file by an ID. This search runs every hour each day and outputs a new lookup. The amount of new data added to the sourcetype varies from the tens up to the hundreds of records daily. If we keep doing it this way, the lookup will keep growing, so I'm worried about hitting a limit. Also open to recommendations on a better way of doing this.
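On the growth concern, a hedged sketch of one alternative: rather than appending blindly, rebuild the lookup each run by merging new IDs with the existing file and ageing out stale rows, which keeps its size bounded. The lookup name (my_ids.csv), the id field, and the 90-day retention are all assumptions:

index=my_index sourcetype=my_sourcetype earliest=-1h
| stats max(_time) as last_seen by id
| inputlookup append=true my_ids.csv
| stats max(last_seen) as last_seen by id
| where last_seen > relative_time(now(), "-90d")
| outputlookup my_ids.csv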
We have an issue with a large number of IO wait alerts on our Splunk indexers. After investigating, I found that no swap space is ever used. Do you know how I can enable a swap partition or swap file to be used by the Splunk indexer?

[Service]
Type=simple
Restart=always
ExecStart=/splunk/bin/splunk _internal_launch_under_systemd
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunk
Group=splunk
Delegate=true
CPUShares=1024
MemoryLimit=32654905344
PermissionsStartOnly=true
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/%n"

[Install]
WantedBy=multi-user.target

[root@splunk]# cat /proc/meminfo
MemTotal: 31889556 kB
MemFree: 1715036 kB

[root@splunk ~]# free -m
              total    used     free   shared  buff/cache  available
Mem:          31142    5835    13411     1584       11895       23308
Swap:             0       0        0
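For the swap part specifically: swap is purely an OS-level setting, not something splunkd controls. A minimal sketch for adding a 4 GB swap file on Linux (the size is an assumption, and note that swapping on an indexer usually only masks an underlying storage IOPS shortfall rather than fixing IO wait):

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab    # persist across reboots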
Since I realized it existed, I've set up my environment to source the $SPLUNK_HOME/share/splunk/cli-command-completion.sh script to allow tab completion of Splunk commands. Recently we upgraded to 8.2.2 after previously being on 8.0.3. After the upgrade, sourcing the file no longer works, giving the following two stderr messages:

cli-command-completion.sh: line 83: verb_to_objects: bad array subscript
cli-command-completion.sh: line 85: verb_to_objects[$verb]: bad array subscript

It looks like the script originated in a Splunk Answers post by a Splunk dev that was later included in the Splunk distribution, and it has not changed since then: https://community.splunk.com/t5/Deployment-Architecture/CLI-command-completion-Yes-and-here-s-how-For-bash-4-0-and/m-p/82552 However, it looks like @V_at_Splunk is no longer active on the community and likely no longer at Splunk; their last post was in 2014. Is anyone still using this script? Has anyone run into these issues and determined their cause? I suspect something the script references changed, but I'm unsure what. This was such a nice quality-of-life thing to have that it'd be a shame if I had to let it die.
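A guess at the cause, based only on the error text: newer bash releases reject an empty subscript on an associative array, so lines 83 and 85 of the script are probably indexing verb_to_objects with an empty $verb. A minimal sketch of a guard for a local copy of the script (the objects variable name is hypothetical; adapt to whatever the script actually assigns there):

# Skip the associative-array lookup when $verb is empty, which newer
# bash versions reject with "bad array subscript".
if [[ -n "$verb" && -n "${verb_to_objects[$verb]+set}" ]]; then
    objects=${verb_to_objects[$verb]}
fi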
Hello, I have followed https://docs.splunk.com/Documentation/ES/6.6.2/Admin/Customizenotables, created Additional Fields under the "Incident Review Settings" page, and saved my changes. Now I am seeing that when a notable is created in the Incident Review dashboard, none of my new additional fields show up there. I have verified that when I run the search manually, those fields are present and there is no typo in their names. Two questions:
1) Is there a default limit on how many additional fields show at most for a notable? From what I can see, not all fields are showing up.
2) Is there a way to customize which additional fields to show for which notable event / correlation search?
Is there a way to extract the Splunk search query from the URL and send it to another application? We want to send the search query to software that lets users edit their data, and passing the search query would mean a user could go straight from Splunk to editing the data they are looking at.
I'm trying to use DB Connect on our search heads to do something like this:

| dbxquery query="My Query" connection="My_Connection"

This sort of works, but only on one search head. The issue seems to be that when the identities.conf file syncs to the other search heads, the encrypted password is not readable by the other instances; it works on the machine the SQL identity was created on but on no others. So I'm thinking I either need to somehow keep identities.conf from syncing and create it manually on each search head, or the other search heads need to be able to read the encrypted password. Or maybe there is another solution I'm not thinking of. Anybody have any thoughts on this? Thanks.
Please share an SPL search to alert when a UF/HF stops sending data or when there is a significant change in the volume Splunk ingests from it. I have had times when a UF/HF suddenly stops sending, or sends very little, and I only find out a day or so later. Thanks a million.
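As a starting point, a hedged sketch: every forwarder writes its own logs to index=_internal, so a tstats over that index by host shows which hosts have gone quiet. The 60-minute threshold and 24-hour window are assumptions to tune, and the search assumes forwarders are allowed to send their _internal data:

| tstats max(_time) as last_seen where index=_internal earliest=-24h by host
| eval minutes_since=round((now()-last_seen)/60)
| where minutes_since > 60
| sort - minutes_since

For the "significant change in volume" case, a similar tstats count by host with span=1h, compared against a trailing average (for example via streamstats or timechart with stdev), is the usual pattern.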
Hi All, we have 3 search heads in a cluster, all Linux based, and we use LDAP authentication for all users. We noticed that we are not able to open any of the existing LDAP strategies. We can navigate to Settings > Authentication Method > LDAP Settings and see the list of existing LDAP strategies, but when I click on any one of them it gives a "404 Error" (screenshot attached). URL: https://servername:8000/en-US/manger/launcher/authentication/providers/LDAP Also, when I search for the request ID (617fb7c4737f9eac401190) it gives me the error "ERROR [617fb7c4737f9eac401190] admin:1272 - getSingleEntity - unable to load the section form definition for endpoint=authentication/providers/LDAP". We use Splunk Enterprise 8.0.0. Not sure if this is a capability-related issue or not. LDAP strategies work fine on other servers such as indexers and heavy forwarders. Any help would be highly appreciated. Thanks, Neerav
Hi, we have an inputs.conf with:

[monitor:///home/.../.bash_history]
disabled = 0
crcSalt = <SOURCE>
whitelist = \.bash_history$

just to monitor the .bash_history files. But when I look at "./splunk list monitor" it lists every file in the /home/... folders. Besides that, the splunkd process uses a lot of CPU (no wonder, with so many files in the "list monitor" output, I think). Why is splunkd on the universal forwarder monitoring every file under /home/ when all it has to do is check .bash_history? What am I doing wrong with this input? Thanks in advance, Jari. P.S. Splunk version 8.1.3
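A sketch of what may be happening: in a monitor path, ... recurses through every subdirectory, while * matches a single path segment, so the stanza below would restrict the watch to one level under /home and make the whitelist unnecessary (assuming the history files sit directly in each user's home directory):

[monitor:///home/*/.bash_history]
disabled = 0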
Hi, I have the following complex statement with multiple mstats searches. The issue is that I think I have to use joins to combine the data correctly; however, this is expensive: it takes 5 seconds to get back 1 hour, and I want to get back 10 hours. Is there any other way I can pull multiple metrics and bring them together without using a join?

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY pid cmd service.type host.name service.name replica.name component.name threshold
| rename "service.name" as service_name
| rename "replica.name" as replica_name
| rename "service.type" as service_type
| eval T_NbOfThreads=if(isnull(nbOfThreads),"",threshold)
| eval T_MemoryCons=if(isnull(memoryCons),"",threshold)
| eval T_NbOfOpenFiles=if(isnull(nbOfOpenFiles),"",threshold)
| stats values(cpuPerc) as cpuPerc values(nbOfThreads) as nbOfThreads values(memoryCons) as memoryCons values(nbOfOpenFiles) as nbOfOpenFiles values(upTime) as upTime values(creationTime) as creationTime values(T_NbOfOpenFiles) as T_NbOfOpenFiles values(T_MemoryCons) as T_MemoryCons values(T_CpuPerc) as T_CpuPerc values(T_NbOfThreads) as T_NbOfThreads by _time pid cmd service_type host.name service_name replica_name component.name
| eval Process_Name=((service_name . " # ") . replica_name)
| sort 0 - _time
| dedup _time pid
| join type=left Process_Name _time
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY "service.name" replica.name service.type
    | rename "service.name" as service_name
    | rename "replica.name" as replica_name
    | eval Process_Name=((service_name . " # ") . replica_name)
    | table Process_Name, Replica, "service.type", _time
    | sort 0 - _time
    | dedup _time Process_Name]
| sort Process_Name _time
| table _time, Process_Name, Replica
| streamstats last(Replica) as Replica
| sort - _time
| append maxout=200000
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY "service.name" replica.name service.type
    | rename "service.name" as service_name
    | rename "replica.name" as replica_name
    | rename "service.type" as service_type
    | eval Process_Name=((service_name . " # ") . replica_name)
    | table Process_Name, Replica, service_type, _time
    | sort 0 - _time
    | dedup _time Process_Name
    | join type=left Process_Name,_time
        [| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY pid cmd service.type host.name service.name replica.name component.name threshold
        | rename "service.name" as service_name
        | rename "replica.name" as replica_name
        | rename "service.type" as service_type
        | eval T_NbOfThreads=if(isnull(nbOfThreads),"",threshold)
        | eval T_MemoryCons=if(isnull(memoryCons),"",threshold)
        | eval T_NbOfOpenFiles=if(isnull(nbOfOpenFiles),"",threshold)
        | stats values(cpuPerc) as cpuPerc values(nbOfThreads) as nbOfThreads values(memoryCons) as memoryCons values(nbOfOpenFiles) as nbOfOpenFiles values(upTime) as upTime values(creationTime) as creationTime values(T_NbOfOpenFiles) as T_NbOfOpenFiles values(T_MemoryCons) as T_MemoryCons values(T_CpuPerc) as T_CpuPerc values(T_NbOfThreads) as T_NbOfThreads by _time pid cmd service_type host.name service_name replica_name component.name
        | eval Process_Name=((service_name . " # ") . replica_name)
        | sort 0 - _time
        | dedup _time pid]
    | rex field=Process_Name "(?<service_name2>.*) # (?<replica_name2>.*)"
    | eval service_name=if(isnull(service_name),service_name2,service_name)
    | eval replica_name=if(isnull(replica_name),replica_name2,replica_name)]
| sort 0 - _time
| dedup _time pid
| join type=left
    [| mstats min("mx.process.resources.status") as Resources WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY "service.name" replica.name
    | rename "service.name" as service_name
    | rename "replica.name" as replica_name
    | eval Process_Name=((service_name . " # ") . replica_name)
    | sort 0 - _time
    | dedup _time Process_Name
    | table Process_Name, Status, Resources
    | eval Resources=rtrim(Resources,substr(Resources,-7))
    | eval Resources=if((Resources == ""),0,Resources)]
| eval Status=(Resources * Replica)
| eval Status=if((Status == 4),2,if((Status == 0),0,1))
| eval Replica=case((Process_Name == "xmlserver # xmlserver"),"2",(Process_Name == "zookeeper # zookeeper"),"2",(Process_Name == "fileserver # fileserver"),"2",true(),Replica)
| search Process_Name!="*ANT_TASK*"
| eval Replica=if((Replica == 1),0,Replica)
| timechart min(Replica) as Process_Status
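A hedged sketch of the usual join-free pattern: run each mstats, append the results instead of joining, and merge rows with a single stats on the shared keys. Metric and field names below are copied from the search above; the BY lists and the final values() list would need to be extended to match the real data:

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.memory.usage") as memoryCons WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY service.name replica.name
| append
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=30s BY service.name replica.name]
| eval Process_Name='service.name' . " # " . 'replica.name'
| stats values(cpuPerc) as cpuPerc values(memoryCons) as memoryCons values(Replica) as Replica by _time Process_Name

One stats over appended rows is generally much cheaper than repeated join subsearches, and it avoids the join subsearch result limits.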
Hello, I have been struggling with something that is probably common sense to experts. The Splunk messages I deal with are mostly structured like the one pasted at the end [1]. The message is persisted at full size; however, when it is part of a search result, the "object" part, which is JSON, gets cut off to the following:

{"objectName":"<some_string>"

I know there is a default limitation that a field cannot exceed 10,000 characters and that exceeding it could end up like this; however, the problem is also observed for messages with a total length of 6,000 characters, so there must be something else that I am currently missing. I also went through similar questions here that suggested enriching the search queries with a regex that forces the complete field extraction in the search results, like:

| rex object=(?<object>.+)$

This does the job for testing purposes, but I would like to find another solution, because my searches are executed through the Splunk REST API and it is not an option to hardcode such regexes for multiple fields. I assume the solution could be a configuration on the Splunk side, and I would really appreciate it if someone with more experience could take a look. Regarding my setup, I have one search head and two indexers; the problem is observed whether I execute the search through the search head or directly on the indexers. Thank you in advance. Best Regards, Martin

[1] sample message:

formatVersion="<some_version>", serverTimestamp="<some_timestamp>", crtAccount="<some_string>", crtApplication="<some_string>", crtComponent="<some_string>", crtTenantId="<some_string>", crtPermissions="<some_string>", crtHostname="<some_string>", accountExt="<some_string>", clientTimestamp="<some_timestamp>", messageId="<some_string>", category="<some_string>", loggedByClass="<some_string>", correlation_id="<some_string>", ip_address="<some_string>", username="<some_string>", tenantId="<some_string>", verb={"action":"update"}, "object="{ "objectName":"<some_string>", "objectAttributs":{ "System details":{ "oldValue":"<some_string>", "newValue":"<some_string>" } }, "auditedObject":{ "type":"<some_string>", "id":{ "key":"<some_string>" } } }
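One server-side knob that matches the 10,000-character behaviour (a sketch, not a confirmed diagnosis): automatic key/value extraction is capped by maxchars in the [kv] stanza of limits.conf, which defaults to 10240. Raising it on the search head and indexers would look like the following; the value is an assumption, and larger values cost search-time memory. The 6,000-character case, though, suggests the quoting around "object="{ in the event itself may be what stops the extraction early:

# limits.conf, e.g. in $SPLUNK_HOME/etc/system/local/ (restart required)
[kv]
maxchars = 40960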
The Splunk documentation has steps to upgrade a universal forwarder to a heavy forwarder, but no steps for downgrading. Is it the same procedure, just swapping UF for HF and HF for UF? I'm guessing it would go like:
1. Install the UF and stop the HF
2. Copy $SPLUNK_HOME/var/lib/splunk/* from the HF to the UF
3. Copy over inputs.conf, outputs.conf, etc. from the HF to the UF
Does it work like this?
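I'm not aware of a documented downgrade procedure, so the following is only a sketch of the same idea in reverse, with assumed Linux default paths. The main refinement to the list above: only the fishbucket (file-monitoring checkpoints) is worth copying from var/lib/splunk, not the HF's local indexes, and only inputs.conf/outputs.conf style settings carry over (HF-only parsing config has no effect on a UF):

/opt/splunk/bin/splunk stop                      # stop the heavy forwarder
# install the universal forwarder, then before its first start:
cp -rp /opt/splunk/var/lib/splunk/fishbucket /opt/splunkforwarder/var/lib/splunk/
cp -p /opt/splunk/etc/system/local/outputs.conf /opt/splunkforwarder/etc/system/local/
# copy inputs.conf from whichever app(s) hold your monitor stanzas (app names are yours)
/opt/splunkforwarder/bin/splunk start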
Hi, I have a requirement to blacklist all computer accounts (ending with $) in Security Event Code 4769. So far I have created the following filter in inputs.conf, but it is not working:

[WinEventLog://Security]
disabled = 0
renderXml = 1
source = XmlWinEventLog:Security
blacklist1 = EventCode="4769" Message="(?:<Data Name='ServiceName'>).+\$"

I checked the regex and it works in a regex builder app, but the filtering is not working; I am still receiving events with computer accounts. I referred to and tried out various Splunk forum questions on the same topic, but with no luck. Any help will be appreciated. Thanks for your time.
My apologies if this question seems mundane or was answered elsewhere, but I have searched to no avail. I am completely new to Splunk and am pathfinding the installation and configuration for use as a syslog and audit log store, similar to how ELK is often used. While we will add additional data sources at some point, my primary focus is on collecting and forwarding /var/log/audit/audit.log and /var/log/auth.log from various Ubuntu hosts into Splunk 8.2(.2.1) Enterprise. My initial attempt involved installing the UF alongside the Splunk server installation, which did not turn out well: since they are essentially the same daemon and use the same default ports, they obviously conflict. So instead I attempted to use the Splunk installation itself, like so:

user@splunkhost:~$ sudo /opt/splunk/bin/splunk add forward-server splunkserver:9997
user@splunkhost:~$ sudo /opt/splunk/bin/splunk list forward-server
user@splunkhost:~$ sudo /opt/splunk/bin/splunk add monitor /var/log/audit/audit.log -index main -sourcetype %audit-log%
user@splunkhost:~$ sudo /opt/splunk/bin/splunk add monitor /var/log/auth.log -index main -sourcetype %auth-log%

However, this also did not work and caused the pipeline to essentially become stuck and back up; I believe the error message was something about the TCP output processor pausing the data flow, but I am unsure why. Essentially I need to collect the security logs from the Splunk server host and index them in Splunk along with everything else, but I am at a loss as to how this can be accomplished. Any help or pointers would be most appreciated. Thank you!
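A hedged sketch of what the local-only setup would usually look like: on the all-in-one Splunk Enterprise host there is normally no forward-server at all (pointing the instance's output at itself, or at a port with no listener, is a common cause of the "output processor paused" backpressure), and the % characters would become a literal part of the sourcetype name. The sourcetype names below are assumptions:

sudo /opt/splunk/bin/splunk remove forward-server splunkserver:9997
sudo /opt/splunk/bin/splunk add monitor /var/log/audit/audit.log -index main -sourcetype linux_audit
sudo /opt/splunk/bin/splunk add monitor /var/log/auth.log -index main -sourcetype linux_secure
sudo /opt/splunk/bin/splunk restart

The universal forwarder would then only be needed on the other Ubuntu hosts, pointed at this server's receiving port (9997).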
Hey, I'm trying to add the "Drilldown" and "Contributing Events" sections to our Splunk notables. I added these parameters with the relevant search, but they don't appear in our notables:

action.notable.param.drilldown_search
action.notable.param.drilldown_name

Splunk version: 8.0.3. This is the notable I have configured: it contains a drilldown parameter, but as you can see it did not appear in the notable.
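For reference, a hedged sketch of how those parameters usually look in savedsearches.conf for an ES correlation search, including the time-offset parameters the drilldown also expects; the example search, field token, and index are assumptions, and exact parameter support can vary by ES version:

action.notable = 1
action.notable.param.drilldown_name = View authentication events for $user$
action.notable.param.drilldown_search = index=wineventlog EventCode=4625 user="$user$"
action.notable.param.drilldown_earliest_offset = $info_min_time$
action.notable.param.drilldown_latest_offset = $info_max_time$

If the parameters were added by editing the .conf file directly, re-saving the correlation search through the ES Correlation Search editor (or reloading the app) is usually needed before Incident Review picks them up.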