All Topics

I just want to configure BREAK_ONLY_BEFORE. When I save the source type, Splunk automatically adds LINE_BREAKER. I do not want LINE_BREAKER to be there, because it replaces the regex I have specified in BREAK_ONLY_BEFORE. I have tried many things. I want it to be like this. But when I save it, Splunk automatically copies the regex I specified for BREAK_ONLY_BEFORE into LINE_BREAKER, and the result is like this: Splunk removes the pg-2. What should I do to keep Splunk from removing my regex while still having the data split into separate events?
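For reference, a hedged props.conf sketch of the outcome being described; the sourcetype name and the timestamp regex below are placeholders, not taken from the question. Editing props.conf directly (rather than saving through the Source Types UI) is one way to keep Splunk Web from rewriting the stanza:

```conf
# props.conf -- sourcetype name and regex are placeholders
[my_sourcetype]
# Keep the default line breaker, so raw text is split only on newlines:
LINE_BREAKER = ([\r\n]+)
# Then merge lines into multi-line events, starting a new event
# only before lines matching this pattern:
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}
```

BREAK_ONLY_BEFORE only takes effect when SHOULD_LINEMERGE is true, which is why the UI copying the regex into LINE_BREAKER changes the behavior.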
Hello guys... I'm new to Splunk and I'm having trouble with the perfmon stanza. It looks like it is not getting any data, and I need to make some graphs about CPU usage etc. Any ideas? This is my inputs.conf. By the way, can you give me some ideas for easy dashboards to deploy for Windows local logs and performance, for when I fix this problem? I have read a lot of docs and forums but have no ideas. It's my local Windows machine, so I'm just getting data from my own computer, if that means something... Love you. perfmon
I have a field called "command" with the below input: C:\windows\systems32\cmd.exe /c ""c:\program Files(x86)\Microsoft. I want to extract all the special characters and get a count of the number of times each special character is used. For example, I want my result to look like this:

Special character    Count
\                        5
/                        1
""                       1
(                        1
)                        1

How can I get this done?
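One way to sanity-check the expected counts outside Splunk (in SPL this would typically be done with `rex max_match=0` and `stats count`) is a small Python sketch using the command string from the question. Note that this version counts every non-alphanumeric, non-space character, so `:` and `.` show up as well as the characters in the example table:

```python
from collections import Counter

command = r'C:\windows\systems32\cmd.exe /c ""c:\program Files(x86)\Microsoft'

counts = Counter()
# Treat the doubled quote as one token, counted first:
counts['""'] = command.count('""')
# Then count each remaining special character individually:
for ch in command.replace('""', ''):
    if not ch.isalnum() and not ch.isspace():
        counts[ch] += 1

for char, n in counts.most_common():
    print(char, n)
```

Running this confirms the counts in the example: 5 backslashes, and one each of `/`, `""`, `(`, and `)`.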
I am facing the exact same issue described in this forum post; however, in my case the outcome of the same loadjob command is different on different search heads of the cluster. For example, I have a dashboard powered by searches that use the | loadjob command to load the results of scheduled searches. If I access the dashboard on the primary SH, it loads every panel except 2-3 of them. However, if I access the same dashboard from a different SH in the cluster, it loads the panels that didn't load on the primary SH, but then some panels that did load on the primary SH show no data. E.g. if the dashboard has panels 1-3, 4-6, and 7-9, then panels 1-3 and 4-6 load on SH1 but 7-9 don't, while on SH2, panels 1-3 and 7-9 load but 4-6 don't. When I click Inspect Job on the loadjob command, it says "no matching fields found", whereas the same search runs fine on another SH.
I need the Universal Forwarders to send Windows Security logs to two different indexers, but with different criteria for each destination. I need to send all Windows Security events, without a whitelist, to Indexer1, and I need to send whitelisted Windows Security events to Indexer2. Indexer2 is in another country that provides 24/7 SOC support, and there's a bandwidth limitation. Is this possible?
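As a sketch only: selective, event-level routing generally requires parsing, so it is usually done on a heavy forwarder rather than a UF. All server names, group names, and the whitelist regex below are placeholders:

```conf
# outputs.conf -- define both destinations; everything goes to indexer1 by default
[tcpout]
defaultGroup = indexer1_group

[tcpout:indexer1_group]
server = indexer1.example.com:9997

[tcpout:indexer2_group]
server = indexer2.example.com:9997
```

```conf
# props.conf on the heavy forwarder
[WinEventLog:Security]
TRANSFORMS-routing = route_whitelisted

# transforms.conf -- whitelisted events are sent to BOTH groups;
# everything else falls through to the default (indexer1 only)
[route_whitelisted]
REGEX = EventCode=(4624|4625)
DEST_KEY = _TCP_ROUTING
FORMAT = indexer1_group,indexer2_group
```

This keeps the full feed on Indexer1 while only the whitelisted subset crosses the bandwidth-limited link to Indexer2.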
Warning: Splunk noob question. I have a base search:

source="Administrator_logs" name="An account failed to log on"

Using https://community.splunk.com/t5/Splunk-Search/Getting-Average-Number-of-Requests-Per-Hour/m-p/73506 I can calculate hourly averages:

source="Administrator_logs" name="An account failed to log on" | eval reqs = 1 | timechart span=1h per_hour(reqs) as AvgReqPerHour

What I would like to do is calculate a baseline. Having never done this before, my thought is to calculate the hourly average and either the standard deviation and/or some percentile, e.g. the 90th, for all events, as opposed to just the last day/week/month (although that would be interesting too). Eventually, this baseline calculation will be the basis for an alert, e.g. create an alert if the hourly count is outside 1 stddev or the 90th percentile. Q1: How do I calculate the hourly average for all events? Q2: How do I calculate the hourly standard deviation for all events? Q3: How do I calculate the hourly 90th percentile for all events?
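A hedged sketch of one way to answer all three questions at once, assuming each event is one failed logon: bucket events into one-hour bins, count per bin, then aggregate across the bins over whatever time range the search covers:

```
source="Administrator_logs" name="An account failed to log on"
| bin _time span=1h
| stats count AS hourly_count BY _time
| stats avg(hourly_count)    AS avg_per_hour
        stdev(hourly_count)  AS stdev_per_hour
        perc90(hourly_count) AS p90_per_hour
```

The three output fields would then feed the alert threshold (e.g. alert when the current hour's count exceeds avg_per_hour plus one stdev_per_hour, or exceeds p90_per_hour).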
We have a requirement to configure Splunk with CA-issued certificates. We are running Splunk 8.2.2.1. In the test environment: two standalone Splunk instances. In the other environments:

3 node SH cluster + SH deployer
3 node indexer cluster + CM
License Master/Monitoring server
Deployment Server
Heavy forwarders

I tried to configure a standalone server's Splunk Web (8443) and splunkd (8089) using this new CA-issued cert. But after I configured it for splunkd on 8089, it breaks the web UI and the command line, and when I run openssl from another server it shows connected but then hangs and does not show the certs. I came across the following link, but it was for Splunk 6 and things have changed a lot since then. https://community.splunk.com/t5/Security/Custom-Certificate-for-Port-8089/m-p/362377 We also want to configure the SH cluster to use the CA-issued cert for splunkd (8089), but I could not find a doc for SH clusters. On the standalone Splunk instance:

cat /opt/splunk/etc/system/local/web.conf
[settings]
httpport = 8443
enableSplunkWebSSL = 1
sslVersions = tls1.2
sslPassword = $7$1_encrypted_password_lzShn0euEM5Yi9m6pUPS38TkYu1lDDsg=
serverCert = etc/auth/splunkweb/QA_Splunk_Concatenated.pem
privKeyPath = etc/auth/splunkweb/QA_Splunk_PrivateKey.key

cat /opt/splunk/etc/system/local/server.conf
[general]
serverName = xxx.test
pass4SymmKey = $7$k_encryted_key==
[sslConfig]
#serverCert = server.pem
sslPassword = $7$3_encryted_key==
sslVersions = tls1.2
enableSplunkdSSL = true
serverCert = /opt/splunk/etc/auth/splunkweb/QA_Splunk_Concatenated.pem
#requireClientCert = false

Is this correct? Also, do I need to request a separate cert for each SH member? Will this impact other communication between the SH cluster and the indexer cluster, license master, monitoring console, and SH deployer?
I am attempting to configure the TA-MS_O365_Reporting app but can't seem to get the permissions correct. I've configured a user account in Azure AD called "splunk" and, from the Exchange Admin console, assigned it to a custom role with the four required permissions:

Message Tracking
View-Only Audit Logs
View-Only Configuration
View-Only Recipients

But when I enable the input and then check Splunk's internal logs, I see the following error:

401 Client Error: Unauthorized for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2021-10-27T20:08:33.583692Z'%20and%20EndDate%20eq%20datetime'2021-10-27T21:08:33.583692Z

Is there something I am missing with regard to setting up the permissions? Thank you.
My base search is trying to show the amount of GB left on servers that I have deployed the Windows add-on for Splunk to. (Not final - just trying to get it to work. Base search below.)

source="perfmonmk:logicaldisk" earliest=-7d | eval gb_free=Free_Megabytes/1024 | timechart span=1d max(gb_free) AS GB_Free

I want to use a chain search like "host=this_server" and "instance=C:" to target specific servers and drives and chart how much space is left on each. However, I run into many problems when trying to use a chain search to narrow down the base search's results. If anyone has any ideas on how to search for specific fields from a base search without error, it would help a lot. The specific error from the chain search is "unknown search command".
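For what it's worth, "unknown search command" in a post-process (chain) search is commonly because the chain search must begin with a pipe, and the base search must pass along the fields the chain needs. A hedged sketch, with the host and instance values as placeholders:

```
Base search:
source="perfmonmk:logicaldisk" earliest=-7d
| eval gb_free=Free_Megabytes/1024
| fields _time host instance gb_free

Post-process (chain) search:
| search host=this_server instance="C:"
| timechart span=1d max(gb_free) AS GB_Free
```

Moving the timechart out of the base search matters here: post-processing runs against the base search's raw results, so the base search should stay non-transforming (or end with `| fields`) while the chain does the filtering and charting.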
Hello, I'd like to create a search for multiple alerts on the same host. The idea is to get results for each host that sees more than 10 malicious-file alerts within, let's say, the last 72 hours from now. I tried something like this:

index=xyz sourcetype=xyz:123
| bin span=1d createdDate
| eval createdDate_epoch=strptime(createdDate,"%Y-%m-%d")
| eval today_epoch=now() `comment("#### ####")`
| eval days_lapsed=round((today_epoch - createdDate_epoch)/86400,0)
| where days_lapsed <=3
| stats earliest(createdDate) as createdDate ```values(file_name) as file_name values(filePath) as filePath values(agentComputerName) as agentComputerName``` values(category) as category values(siteName) as siteName values(file_hash) as file_hash values(signature) as signature dc(file_name) as number_of_alerts max(days_lapsed) as days_lapsed by agentComputerName
| where number_of_alerts >4

In my case this tells me there were 5 alerts for malicious files (some of the files occurred more than once in these 3 days). And all would be great if not for the fact that I only get one date (createdDate), where I'd like to see all dates per file and when each file was created. How do I need to modify my search to get where I need to be? Thank you!
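One hedged option: aggregate per file first so every distinct date is kept, then roll up per host. Field names are taken from the question; using `earliest=-3d` in place of the manual epoch arithmetic is an assumption that only holds if createdDate tracks _time, so keep the days_lapsed filter otherwise:

```
index=xyz sourcetype=xyz:123 earliest=-3d
| stats values(createdDate) AS file_dates BY agentComputerName file_name
| stats values(file_dates) AS all_dates dc(file_name) AS number_of_alerts BY agentComputerName
| where number_of_alerts > 4
```

The first stats attaches the full list of dates to each file; the second collapses to one row per host while preserving those dates, instead of keeping only earliest(createdDate).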
Looking to see if we can ingest data from O365 that would list a person's name and what they accessed within SharePoint. We were hoping that the new Graph API input from the O365 add-on would get us this information. Our O365 admin states that he needs to set up an app registration for us to access the O365 Graph API, different from the Tenant ID and Client ID we are using to connect to O365 from the Splunk add-on. He said it would need to connect to Graph with the App ID and a shared secret at a minimum. What endpoint is Splunk trying to pull from when it is using the Graph API inputs? The O365 add-on documentation states:

O365:graph:api - All audit events and reports visible through the Microsoft Graph API endpoints.

Any help is appreciated.
Hi, I wanted to ask whether multisite Splunk clusters can run different operating systems without any issues. For example, the cluster on site1 runs CentOS on the peers, SH cluster, and master node, and we would like to deploy the site2 cluster with Ubuntu on all cluster members. Would that cause any problems with Splunk's functionality? Thanks in advance.
Hi! I'm trying to collect the local Splunk server's Windows Application event logs. I would like them in non-XML format. In the .../app/Splunk_TA_windows/inputs.conf stanza I added:

[WinEventLog://Application]
index = splunk_server_app
source = WinEventLog:Application
sourcetype = WinEventLog
disabled = 0
renderXML = 0

I'm getting events, but they are in XML format. Using Splunk Enterprise version 8.1.4. Any help would be appreciated. Thanks.
Between these two locations:

$SPLUNK_HOME/etc/apps/TA-eStreamer/data
$SPLUNK_HOME/etc/apps/TA-QualysCloudPlatform/tmp

30 GB is taken up by files as old as a year and a half. Are there any configurations in these add-ons to clean up after themselves?
Hi Team, we cannot get the AppDynamics PHP agent to load on PHP 8. This is the startup error we are encountering:

php -v
PHP Warning: PHP Startup: Unable to load dynamic library 'appdynamics_agent.so' (tried: /usr/lib64/php/modules/appdynamics_agent.so (/usr/lib64/php/modules/appdynamics_agent.so: undefined symbol: zend_vm_stack_copy_call_frame), /usr/lib64/php/modules/appdynamics_agent.so.so (/usr/lib64/php/modules/appdynamics_agent.so.so: cannot open shared object file: No such file or directory)) in Unknown on line 0
PHP 8.0.12 (cli) (built: Oct 19 2021 10:34:32) ( NTS gcc x86_64 )
Copyright (c) The PHP Group
Zend Engine v4.0.12, Copyright (c) Zend Technologies
with Zend OPcache v8.0.12, Copyright (c), by Zend Technologies

appdynamics-php-agent-21.7.0.4560-1.x86_64.rpm is the version we are using. Any help is appreciated. Thanks, Amit Singh
I had some questions about the limits of a lookup file that I wasn't able to find in the documentation (below) or anywhere else for Splunk Cloud. https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/DefineaKVStorelookupinSplunkWeb

What is the lookup file size limit when uploading directly through the browser?
How long will data in lookup files be stored (do they ever get deleted after a time period)?
Does joining large lookups with OUTPUT/OUTPUTNEW have a limit on how much data is joined between two lookups, or between an index/sourcetype and a lookup?
Is there a maximum number of records that can be overwritten into a lookup when you run | outputlookup?

Business use case example: We are ingesting logs and putting them into an index/sourcetype. We've created a search to append the sourcetype with a lookup file by an ID. This search gets updated every hour each day and outputs a new lookup. The amount of new data added to the sourcetype varies from the tens up to the hundreds daily. If we keep doing it this way, the lookup will keep growing, so I'm worried there is a limit. Also open to recommendations on a better way of doing this.
We have an issue with a large number of I/O wait alerts on our Splunk indexers. After investigating, I found that no swap space is ever used. Do you know how I can enable a swap partition or swap file to be used by the Splunk indexer?

[Service]
Type=simple
Restart=always
ExecStart=/splunk/bin/splunk _internal_launch_under_systemd
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunk
Group=splunk
Delegate=true
CPUShares=1024
MemoryLimit=32654905344
PermissionsStartOnly=true
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/%n"

[Install]
WantedBy=multi-user.target

[root@splunk]# cat /proc/meminfo
MemTotal: 31889556 kB
MemFree: 1715036 kB

[root@splunk ~]# free -m
              total    used    free   shared  buff/cache  available
Mem:          31142    5835   13411     1584       11895       23308
Swap:             0       0       0
Since I realized it existed, I've set up my environment to source the $SPLUNK_HOME/share/splunk/cli-command-completion.sh script to allow tab completion of Splunk commands. Recently, we upgraded to 8.2.2 after previously being on 8.0.3. After the upgrade, sourcing the file no longer works, giving the following two stderr messages:

cli-command-completion.sh: line 83: verb_to_objects: bad array subscript
cli-command-completion.sh: line 85: verb_to_objects[$verb]: bad array subscript

It looks like the script was originally a Splunk Answers post by a Splunk dev that was later included in the Splunk distribution, and it has not changed since then: https://community.splunk.com/t5/Deployment-Architecture/CLI-command-completion-Yes-and-here-s-how-For-bash-4-0-and/m-p/82552 However, it looks like @V_at_Splunk is no longer active in the community and likely no longer at Splunk; their last post was in 2014. Is anyone still using this script? Has anyone run into these issues and determined their cause? I suspect something the script references changed, but I'm unsure what. This was such a nice QoL thing to have; it'd be a shame if I had to let it die.
Hello, I have followed https://docs.splunk.com/Documentation/ES/6.6.2/Admin/Customizenotables, created Additional Fields on the "Incident Review Settings" page, and saved my changes. Now I am seeing that when a notable is created in the Incident Review dashboard, none of my new additional fields show up there. I have verified that when I run the search manually, those fields are present and there is no typo in their names. Two questions: 1) Is there a default limit on how many additional fields are shown, at most, for a notable? From what I see, not all fields are showing up. 2) Is there a way to customize which additional fields to show for which notable event / correlation search?
Is there a way to extract the Splunk search query from the URL and send it to another piece of software? We want to send the search query to software that lets users edit their data; passing the search query along would let a user go straight from Splunk to editing the data they are viewing.
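As a sketch: in typical Splunk Web search URLs, the SPL string travels in the `q` query-string parameter of the search page (worth verifying against your deployment; the hostname and query below are hypothetical). Pulling it out is then standard URL parsing, e.g. in Python:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical Splunk search URL; the SPL is percent-encoded in "q".
url = ("https://splunk.example.com/en-US/app/search/search"
       "?q=search%20index%3Dweb%20status%3D500&earliest=-24h")

# parse_qs decodes the percent-encoding and returns lists per parameter.
params = parse_qs(urlparse(url).query)
query = params["q"][0]
print(query)  # search index=web status=500
```

The decoded string could then be handed to the other software, along with the `earliest`/`latest` parameters if the time range matters to it.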