All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


What are the various methods to integrate third-party SaaS applications with Splunk?
Hi Team, I am trying to deploy a Splunk UBA node, but I am a bit confused because the Splunk UBA operating system requirements do not say whether Red Hat 8.10 or 9.2 is supported. I only found the information below. How can I determine whether Red Hat 8.10 or 9.2 is supported?
Operating System: Red Hat Enterprise Linux (RHEL) 8.8
Kernel versions tested: 4.18.0-477.10.1.el8_8.x86_64, 4.18.0-372.9.1.el8.x86_64
I need to capture everything except HTML tags like </a> <a> </p> </b>. These tags may appear anywhere in the raw data. I was able to come up with a regex that matches the tags as a non-capturing group, (?:<\/?\w>), but I am stuck on how to capture everything else in the raw data. Sample:
Explorer is a web-browser developed by Microsoft which is included in Microsoft Windows Operating Systems.<P> Microsoft has released Cumulative Security Updates for Internet Explorer which addresses various vulnerabilities found in Internet Explorer 8 (IE 8), Internet Explorer 9 (IE 9), Internet Explorer 10 (IE 10) and Internet Explorer 11 (IE 11). <P> KB Articles associated with the Update:<P> 1) 4908777<BR> 2) 879586<BR> 3) 9088783<BR> 4) 789792<BR> 5) 0973782<BR> 6) 098781<BR> 7) 1234788<BR> 8) 8907799<BR><BR> Please Note - CVE-2020-9090 required extra steps to be manually applied for being fully patched. Please refer to the FAQ seciton for <A HREF='https://portal.mtyb.windows.com/en-PK/WINDOWS-guidance/advisory/CVE-2020-9090 ' TARGET='_blank'>CVE-2020-9090 .</A><P> QID Detection Logic (Authenticated):<BR> Additionally the QID checks if the required Registry Keys are enabled to fully patch <A HREF='https://portal.msrc.windows.com/en-US/guidance/advisory/CVE-2014-82789' TARGET='_blank'>CVE-2014-2897.</A> (See FAQ Section) <BR> The keys to be patched are: <BR> &quot;whkl\SOFTWARE\Microsoft\Internet Explorer\Main\FEATURE_ENABLE_PASTE_INFO_DISCLOSURE_FIX&quot; value &quot;iexplore.exe&quot; set to &quot;1&quot;.<BR>
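A sketch of one possible approach (the field name clean_text is my own choice, not from the question): rather than capturing around the tags, strip anything tag-shaped with replace, then tidy the leftover whitespace. Test against your real data, since malformed tags may need a looser pattern.

```spl
... your base search ...
| eval clean_text=replace(_raw, "<[/]?[A-Za-z]+[^>]*>", " ")
| eval clean_text=replace(clean_text, "\s{2,}", " ")
```

The second pattern allows attributes (e.g. <A HREF='...'>), which the original (?:<\/?\w>) did not.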
Hello, can someone please help me extract nested JSON fields without regex? I have tried the following:
1. Setting KV_MODE = json in the search head TA's props.conf
2. Setting INDEXED_EXTRACTIONS = json in the search head TA's props.conf
3. Updating limits.conf on the HF TA with the spath stanza: [spath] extraction_cutoff = 10000
4. The mvexpand command
Nothing worked. My raw logs look like this: event": "{\"eventVersion\" "1.08\",\"userIdentity\":{\"type\" "AssumedRole\",\"principalId\" "AROAXYKJUXCU7M4FXD7ZZ:redlock\",\"arn\" "arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock\",\"accountId\" "533267265705\",\"accessKeyId\" "ASIAXYKJUXCUSTP25SUE\",\"sessionContext\":{\"sessionIssuer\":{\"type\" "Role\",\"principalId\" "AROAXYKJUXCU7M4FXD7ZZ\",\"arn\" "arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192\",\"accountId\" "533267265705\",\"userName\" "PrismaCloudRole-804603675133320192\"},\"webIdFederationData\":{},\"attributes\":{\"creationDate\" "2024-05-03T00:53:45Z\",\"mfaAuthenticated\" "false\"}}},\"eventTime\" "2024-05-03T04:09:07Z\",\"eventSource\" "autoscaling.amazonaws.com\",\"eventName\" "DescribeScalingPolicies\",\"awsRegion\" "us-west-2\",\"sourceIPAddress\" "13.52.105.217\",\"userAgent\" "Vert.x-WebClient/4.4.6\",\"requestParameters\":{\"maxResults\":10,\"serviceNamespace\" "cassandra\"},\"responseElements\":null,\"additionalEventData\":{\"service\" "application-autoscaling\"},\"requestID\" "ef12925d-0e9a-4913-8da5-1022cfd15964\",\"eventID\" "a1799eeb-1323-46b6-a964-efd9b2c30a8a\",\"readOnly\":true,\"eventType\" "AwsApiCall\",\"managementEvent\":true,\"recipientAccountId\" "533267265705\",\"eventCategory\" "Management\",\"tlsDetails\":{\"tlsVersion\" "TLSv1.3\",\"cipherSuite\" "TLS_AES_128_GCM_SHA256\",\"clientProvidedHostHeader\" "application-autoscaling.us-west-2.amazonaws.com\"}}"}
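When the JSON is carried as an escaped string inside an event field, one hedged sketch (assuming the payload can be restored to valid JSON; the field name payload and the listed output fields are illustrative, and the escaping in eval may need adjusting for your exact data) is to unescape it and hand it to spath at search time:

```spl
... your base search ...
| eval payload=replace('event', "\\\\\"", "\"")
| spath input=payload
| table eventName, awsRegion, sourceIPAddress, userIdentity.arn
```

Note that KV_MODE and INDEXED_EXTRACTIONS only work on events whose _raw is itself valid JSON; they cannot parse JSON that is embedded as an escaped string.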
Hi Support Team, I have two Splunk indexers and two forwarders. Both forwarders have index = test in inputs.conf, but there is configuration on the indexers that decides which index to put the data in based on the data itself (one of the values in the JSON object). Forwarder 1 has been running for a while with no problems (version 6.4.1). Forwarder 2 is new (version 9.2.1) and requires exactly the same configuration as forwarder 1, which I have already applied; the only difference is the host (host1 vs. host2). The data from forwarder 2 is being sent to the indexers, but the index is not changed based on the indexer config; the data goes to the test index as specified in the forwarder config. Both indexers are running 7.3.3. What could I be missing to get the indexers to put the data from forwarder 2 in the correct index? Could this be failing because of the different Splunk versions? Thanks
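For reference, indexer-time index routing is normally driven by a props/transforms pair like the sketch below (the sourcetype, stanza names, and the "env":"prod" example value are hypothetical, not taken from the question). If forwarder 2 sends its data with a different sourcetype than forwarder 1, or sends data that has already been parsed (e.g. via a heavy forwarder), the indexer-time transform will not fire.

```ini
# props.conf on the indexers -- hypothetical sourcetype name
[my_json_events]
TRANSFORMS-route_index = route_by_value

# transforms.conf on the indexers -- hypothetical stanza name
[route_by_value]
# if the raw JSON contains "env":"prod", override the destination index
REGEX = "env"\s*:\s*"prod"
DEST_KEY = _MetaData:Index
FORMAT = prod_index
```

Comparing the sourcetype of events from both forwarders is a quick way to check whether the props stanza is matching at all.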
I cannot find any option for a recurring maintenance window in ITSI. For example: stop alerting daily from 11pm to 00:00 (1 hour). Does ITSI have something like cron-based suppression? Please do not tell me to use the REST API again.
I have a dashboard and I want to print a custom message when there are 0 results. In the dashboard I am working on I am using the geostats command for a map panel, so when the search returns zero results a custom message should be shown at the top of the panel, above the map (as in the attached image).
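One common Simple XML pattern (a sketch; the token name show_nodata and the message text are my own) is to set a token in the search's done handler when resultCount is 0, and show an html element above the map only when that token is set:

```xml
<panel>
  <html depends="$show_nodata$">
    <p><b>No results found for the selected time range.</b></p>
  </html>
  <map>
    <search>
      <query>... | geostats count</query>
      <done>
        <condition match="'job.resultCount' == 0">
          <set token="show_nodata">true</set>
        </condition>
        <condition>
          <unset token="show_nodata"></unset>
        </condition>
      </done>
    </search>
  </map>
</panel>
```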
Hi. I'm using Splunk Enterprise 7.3.2 and installed universal forwarder 8.2.6 on Linux. I was asked to monitor the .bash_history file, so I installed the universal forwarder and confirmed that data is coming into Splunk. However, in a real-time search, most of the file is re-ingested along with the newly added data, so monitoring is difficult because previously indexed events are mixed with real-time events. When I run a real-time search again, the _time field of the previously imported events and the newly added events is the same. Is that related to this? Does anyone know how to solve this problem?
My inputs.conf settings:
[monitor:///home/*/.bash_history]
index = test
sourcetype = test_add
disabled = false
crcSalt = <SOURCE>
[monitor:///root/.bash_history]
index = test
sourcetype = test_add
disabled = false
crcSalt = <SOURCE>
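For what it's worth, crcSalt = <SOURCE> changes how the forwarder fingerprints monitored files and is a common contributor to old lines being re-ingested when a file such as .bash_history is rewritten in place. A sketch of the same stanza without it (worth testing in a lab before relying on it, since removing the salt changes dedup behavior for all matched files):

```ini
[monitor:///home/*/.bash_history]
index = test
sourcetype = test_add
disabled = false
```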
Hi, can I get a recommendation on the most appropriate of these two apps to ingest and query "logs" from Snowflake: Splunk DB Connect, or the Snowflake app?
I want to build a query that pulls Cisco ASA events based on a particular syslog message ID that shows denied traffic. I dedup events that have the same source IP, destination IP, destination port, and action. It seems to work well; however, I would now like a count added for each time that unique combination is seen. The query is: index=cisco sourcetype=cisco:asa message_id=XXXXXX | dedup host, src_ip, dest_ip, dest_port, action | table host, src_ip, dest_ip, dest_port, action | sort host, src_ip, dest_ip, dest_port, action That query gives me a table that appears to be deduped; however, I would like to add a column that shows how many times each entry is seen.
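One hedged way to get that count is to replace dedup with stats, which groups identical combinations and counts them in a single pass:

```spl
index=cisco sourcetype=cisco:asa message_id=XXXXXX
| stats count BY host, src_ip, dest_ip, dest_port, action
| sort host, src_ip, dest_ip, dest_port, action
```

stats count BY produces exactly one row per unique combination, so the separate dedup and table commands are no longer needed.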
Currently this is a manual process for me: I swap our connections between our primary and secondary HFs for every patch window. Is this what everyone is doing, or is there a way to automate the cutover? Thanks for any insight!
We use a deployment server to manage the config of our UF fleet. Recent changes to privileges on clients are preventing the UF from restarting its service after a new config or server class has been downloaded. The company doesn't want to provide Splunk with a Domain Admin-level account or similar. What is the best least-privilege way for the Splunk UF to be able to restart its own service and collect the needed logs within a Windows domain?
Please help me with the items below.
#1) | chart count(WriteType) over Collection by WriteType | sort Collection
For the above query, can we add a condition as below? (I am facing an issue here.)
| chart count(WriteType) over Collection by WriteType | where c in("test","qa") | sort Collection
#2) Can we add one more field after WriteType, as below?
| chart count(WriteType) over Collection by WriteType, c | where c in("test","qa")
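For #1, one sketch (assuming c is a field extracted on the events) is to filter on c before charting; after chart runs, only Collection and the WriteType columns remain, so a where on c can no longer match anything:

```spl
... your base search ...
| search c IN ("test","qa")
| chart count(WriteType) over Collection by WriteType
| sort Collection
```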
I am trying to generate one event from a list of similar events. I want to remove the _check suffix and add the hosts to one field separated by commas. I am generating a critical event that lists all the hosts that are not reporting. Example:
HOST                SEVERITY
Bob_1009_check      Critical
Jack_1002_check     Critical
John_1001_check     Critical
When I am done I want it to be:
HOST (or some other field name)        SEVERITY    DESCRIPTION
Bob_1009, Jack_1002, John_1001         Critical    (Bob_1009, Jack_1002, John_1001) are no longer up, please review your logs.
I have trimmed the host names accurately, but I cannot figure out how to get the hosts to show in a side-by-side list to add into a description field I want to generate in an alert. I DO NOT WANT a table; I want them side by side, comma or semicolon separated.
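A sketch of one way to collapse the rows (field names follow the example above; the suffix strip assumes hosts always end in _check):

```spl
... your base search ...
| eval host_short=replace(HOST, "_check$", "")
| stats values(host_short) AS hosts BY SEVERITY
| eval HOST_LIST=mvjoin(hosts, ", ")
| eval DESCRIPTION="(" . HOST_LIST . ") are no longer up, please review your logs."
| table HOST_LIST, SEVERITY, DESCRIPTION
```

stats values collects the hosts into one multivalue field per severity, and mvjoin flattens it into a single comma-separated string suitable for an alert message.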
Probably the wrong board; choices were limited. In our dev environment we have a 3-node SH cluster, a 3-node IDX cluster, an ES SH, and a few other ancillary machines (DS, deployer, UFs, HFs, LM, CM, etc.). All instances use the one LM. On the SHC we are unable to search, getting the subject-line message, yet on the ES SH we can search fine with no error message. The nodes of the SHC are phoning home to the LM. Licensing settings (indexer name, manager server URI) have been verified as correct. All nodes show having connected to the LM within the last minute or so. Not sure where to look from here.
Hi All, how do I count field values? The field is extracted and shows 55 distinct values. When I use the query below:
| stats count by content.scheduleDetails.lastRunTime
it gives all the values with their counts.
| stats dc(content.scheduleDetails.lastRunTime) AS lastRunTime
shows 55.
My output is:
content.scheduleDetails.lastRunTime    Count
02/FEB/2024 08:22:19 AM    9
02/FEB/2024 08:21:19 AM    63
03/FEB/2024 08:22:19 AM    7
The expected output is only the total count of the field: 79
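One sketch for getting just the total: keep the per-value count and then sum it, which for the rows above yields 9 + 63 + 7 = 79.

```spl
... your base search ...
| stats count by content.scheduleDetails.lastRunTime
| stats sum(count) AS total
```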
Hello. I am completely new to Splunk. I've recently taken on a role where I'll be working with Splunk quite a lot, and I have a question about SC4S (Splunk Connect for Syslog). I successfully installed SC4S (podman + systemd) using this guide: https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/podman-systemd-general/ SC4S is installed in a CentOS 7 VM (in vSphere). The HEC is configured successfully on the heavy forwarder, and I can see that SC4S is communicating properly with Splunk. After that, I used the Kiwi Syslog Message Generator on my Windows 10 machine to send a syslog TCP message to the CentOS 7 VM. Successful output (TCP). However, if I send a syslog UDP message, the message is not delivered; as shown in the screenshot, the message count stayed at zero after I pressed send. Unsuccessful output (UDP): no new messages appear in Splunk Web. Port 514 TCP and UDP are enabled in the firewall on CentOS 7. I would like to request assistance with this issue. Thank you.
I have Splunk logs containing a keyword like <ref>BTB- Abcd1234<ref>, which is the primary key for a trade reference. I extracted it using the delimiter <> and gave the field the name "my_Ref". Now, if I search for BTB, it shows me all the matching references, as my dashboard search string is like <ref>BTB-*<ref>. The problem is that along with the references, some additional lines are also getting picked up, and when I view the event detail my extracted field shows those values.
Output from the search query: index=in_my "<ref>*$Ref$*<ref>" | table my_ref | dedup my_ref
1. BTB-Abcd1
2. BTB-Abvd2
3. ]...)Application]true ?..
4. BTB-Acdg3
5. BTB-Shfhfj4
Now I want to ignore value 3, "]...)Application]true", and don't know how. Can someone please help with this?
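One hedged sketch: keep only the values that match the expected reference pattern (this assumes every valid reference starts with BTB-, which is true of the sample output but should be confirmed against your data):

```spl
index=in_my "<ref>*$Ref$*<ref>"
| dedup my_ref
| where match(my_ref, "^BTB-")
| table my_ref
```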
Hi, recently we had an issue with the LUN drive where data is stored, and after fixing it a new problem came up: the splunk service starts normally, but web access no longer works. The output of the splunk start command is the following:
\bin>splunk.exe start
Splunk> Map. Reduce. Recycle.
Checking prerequisites...
Checking mgmt port [8089]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
(skipping validation of index paths because not running as LocalSystem)
Validated: _configtracker _introspection _metrics _metrics_rollup _thefishbucket anomaly_detection autek azure cim_modactions cisco citrix email eusc_apps firedalerts ftp hyper-v infraops itsi_grouped_alerts itsi_im_meta itsi_im_metrics itsi_import_objects itsi_notable_archive itsi_notable_audit itsi_summary itsi_summary_metrics itsi_tracked_alerts kubernetes metrics_sc4s msad msexchange netauth netfw netops netproxy os osnix pan_logs perfmon rancher_k8sca rancher_k8scc rancher_k8scs rancherprod sample snmptrapd sns symantec sysmon test thor windefender windows wineventlog winevents
Done
Bypassing local license checks since this instance is configured with a remote license master.
Checking filesystem compatibility... Done
Checking conf files for problems...
Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
One or more regexes in your configuration are not valid. For details, please see btool.log or directly above.
Done
Checking default conf files for edits...
Validating installed files against hashes from 'C:\Program Files\Splunk\splunk-9.0.8-4fb5067d40d2-windows-64-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Splunkd: Starting (pid 38432) Done
Extract of btool.log:
05-06-2024 11:07:35.039 WARN ConfMetrics - single_action=BASE_INITIALIZE took wallclock_ms=1014
05-06-2024 11:17:25.445 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 11:17:25.445 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 13:00:58.310 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 13:00:58.310 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 13:00:58.373 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
05-06-2024 13:19:36.176 WARN ConfMetrics - single_action=BASE_INITIALIZE took wallclock_ms=1234
05-06-2024 14:44:42.912 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 14:44:42.912 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 14:44:42.975 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
05-06-2024 14:44:51.022 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 14:44:51.022 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 14:44:51.084 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
05-06-2024 16:36:21.051 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 16:36:21.051 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 16:36:21.114 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
05-06-2024 16:36:29.661 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v14 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 16:36:29.661 WARN IConfCache - Stanza has an expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-ClientAccess\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1], ignoring alternate expansion [script://C:\Program Files\Splunk\etc\apps\TA-Exchange-Mailbox\bin\exchangepowershell.cmd v15 read-audit-logs_2010_2013.ps1] in inputs.conf
05-06-2024 16:36:29.723 WARN btool-support - Bad regex value: '(::)?...', of param: props.conf / [(::)?...]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.
I already checked the /etc/system/local/web.conf and everything seems fine.
[settings]
enableSplunkWebSSL = 1
httpport = 443

system/default/web.conf:

[default]
[settings]
# enable/disable the appserver
startwebserver = 1
# First party apps:
splunk_dashboard_app_name = splunk-dashboard-studio
# enable/disable splunk dashboard app feature
enable_splunk_dashboard_app_feature = true
# port number tag is missing or 0 the server will NOT start an http listener
# this is the port used for both SSL and non-SSL (we only have 1 port now).
httpport = 8000
# this determines whether to start SplunkWeb in http or https.
enableSplunkWebSSL = false
# location of splunkd; don't include http[s]:// in this anymore.
mgmtHostPort = 127.0.0.1:8089
# list of ports to start python application servers on (although usually
# one port is enough)
#
# In the past a special value of "0" could be passed here to disable
# the modern UI appserver infrastructure, but that is no longer supported.
appServerPorts = 8065

Any suggestions? Many thanks, Jose
I want to get the values from the path field, but I can't extract this field on its own, as data.initial_state.path would output extra values.
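A sketch using spath with an explicit path (the path literal is taken from the question; if the field sits inside a JSON array the path may need {} array notation, e.g. data.initial_state{}.path):

```spl
... your base search ...
| spath path=data.initial_state.path output=path
| table path
```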