All Posts


Hi All, I am working on skipped searches. What is the difference between the two messages below?
1) The maximum number of concurrent historical scheduled searches on this cluster has been reached
2) The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached
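For reference, these messages come from the scheduler logs; a minimal sketch of a search to break the skips down by reason, assuming the standard _internal scheduler sourcetype and its usual fields:

index=_internal sourcetype=scheduler status=skipped
| stats count by reason app savedsearch_name
| sort - count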
Hi, It sounds like you've made great progress, nice one. There are multiple designs and opinions out there regarding getting syslog into Splunk, and it's up to you to decide what's best. To get you started, there are tools such as Splunk Connect For Syslog, which provides an "all in one" feel. You can also use a syslog service such as rsyslog or syslog-ng to listen for your logs, cache them to disk, and then forward them via a monitor stanza in inputs.conf. However, if you want Splunk to listen directly, here is an example inputs.conf that you can tweak for your deployment:

[udp://10514]
disabled = false
connection_host = ip
sourcetype = <<firewall_product>>
index = main

For sourcetype, look on Splunkbase for your firewall vendor to check whether there is an appropriate TA you can use for field extractions. For example, a Palo Alto firewall would be pan_log. For index, pick an appropriate index to suit your needs. Finally, inputs.conf can be deployed either within an app (recommended) or directly under /opt/splunk/etc/system/local/. Also, make sure that 10514 is permitted on the local firewall.
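If you instead have rsyslog or syslog-ng caching the firewall events to disk, the forwarder side is just a monitor stanza. A minimal sketch, assuming a hypothetical /var/log/syslog-cache/<host>/firewall.log layout; the path, index, and sourcetype are placeholders to adapt:

# inputs.conf on the forwarder; path, index and sourcetype are placeholders
[monitor:///var/log/syslog-cache/*/firewall.log]
disabled = false
# host_segment = 4 takes the 4th path segment (the per-host directory) as the host field
host_segment = 4
sourcetype = <<firewall_product>>
index = main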
I'm trying to distribute an app from the deployment server to the index server via the cluster manager. In the cluster manager's deploymentclient.conf, serverRepositoryLocationPolicy and repositoryLocation are used so that the app is received in $SPLUNK_HOME/etc/manager-apps and then pushed to peer-apps on the index server for distribution. Distribution to the index server was successful, but an install error message appears in the deployment server's internal log. Is there a setting to prevent items distributed to manager-apps from being installed?
Outstanding. That worked perfectly. Thank you. 
What type of DB are you trying to connect to? Can you share your connection string and other configuration (redacted as appropriate) please?
We are in the process of data onboarding. We managed to deploy a distributed architecture in which we have 3 indexers, 3 search heads, a cluster master, a deployer, a deployment server, and 2 intermediate forwarders. On my syslog server, I receive logs from the firewall through syslog port 10514, and I managed to install a forwarder on my syslog server connected to my deployment server. In my forwarder configuration file, I connect to both intermediate forwarders. Now help me to finish this task: how can I manage to see the firewall logs in my Splunk? What do you think I should edit on my syslog server? Please remember I don't write the syslog (firewall) logs to a file; they are streamed logs. My forwarder inputs.conf file:

[udp://514]
connection_host = ip
index = tcra_firewall_idx
sourcetype = tcra:syslog:log
Hi, You can do that with an eval command.

| eval firstSeenTS = strptime(firstSeen, "%b %d, %Y %H:%M:%S %Z"),
       lastSeenTS = strptime(lastSeen, "%b %d, %Y %H:%M:%S %Z"),
       firstLastDiff = (lastSeenTS - firstSeenTS)/86400,
       firstNowDiff = (now() - firstSeenTS)/86400

If you want to round your days down to whole numbers, you can use floor().
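For example, a minimal follow-on eval using floor() on the fields computed above:

| eval firstLastDiffWholeDays = floor(firstLastDiff), firstNowDiffWholeDays = floor(firstNowDiff)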
Hi CW, Assuming you haven't made any modifications to the Palo Alto TA (and its sourcetypes), there is no reason why Splunk would be dropping the URL log_subtype. Check whether there is a filter on the Palo Alto logging policy that excludes the URL subtype. Otherwise, can you confirm the PAN-OS version and TA version that you have deployed, as there have been some issues with this sourcetype before. Cheers!
Hello, I'm struggling mightily with this one. I have two dates in the same event; both are strings. Their format is below. I would like to evaluate the number of days between the firstSeen and lastSeen dates. I would also like to evaluate the number of days between firstSeen and the time the search is performed. Any help would be much appreciated...

firstSeen: Aug 27, 2022 20:18:37 UTC
lastSeen: Jun 23, 2024 06:17:25 UTC
site_replication_factor = origin:3,site1:3,total:6
site_search_factor = origin:2,total:2
A search head exists on each site. How do I ensure that only data from the site the SH belongs to is retrieved?
Hi, There could be a few things going on here. I would hazard a guess that you're running Splunk as a non-root user and trying to bind to port 443, which is a privileged port. Check if Splunk is listening on 443/TCP:

ss -tlp

Also check for web-related messages from:

$SPLUNK_HOME/bin/splunk status

and for UiHttpListener entries in /opt/splunk/var/log/splunk/splunkd.log. If this is indeed your problem, I suggest that you use a different port, such as 8443. You could also try to allow Splunk to bind to the system ports, but you will need to research how to do this securely for your environment.

If Splunk is listening on 443, start working your way outwards:
- Are there any error entries in splunkd.log?
- Can you curl it via localhost?

curl -kv https://localhost

- Are you dropping connections on the local firewall?

firewall-cmd --list-all

- Is there a routing issue or a network firewall between your client browser and the Splunk instance?
- Is there an issue with the client machine or browser?

Feel free to upload any relevant splunkd.log entries, redacted appropriately, to help troubleshooting.
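If you go the alternate-port route, a minimal web.conf sketch; 8443 is just an example port, and you would restart Splunk after the change:

# $SPLUNK_HOME/etc/system/local/web.conf (or inside an app)
[settings]
httpport = 8443
enableSplunkWebSSL = true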
Many thanks for the help. I want to expand the requirement as follows: For an "id" there can be up to 12 possible different events, with response.action.type="UserCreated", response.action.type="TxCreated", response.action.type="TxUpdated", and 9 other types. The goal is to group by "id" where only 2 action types have occurred, namely:

          response.action.type="UserCreated" (Event 1) and
          response.action.type="TxCreated" (Event 2)

Event Type 1
data = {
  "response": {
    "action": {
      "type": "UserCreated"
    },
    "resources": [
      {
        "type": "loginUser",
        "id": "1234"
      }
    ]
  }
}

Event Type 2
data = {
  "response": {
    "action": {
      "type": "TxCreated"
    },
    "actors": {
      "type": "loginUser",
      "id": "1234"
    }
  }
}

Event Type 3
data = {
  "response": {
    "action": {
      "type": "TxUpdated"
    },
    "actors": {
      "type": "loginUser",
      "id": "1234"
    }
  }
}
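A minimal sketch of one way to do that grouping, assuming the JSON fields are auto-extracted (otherwise run | spath first), that the id lives at response.actors.id for the Tx events and response.resources{}.id for UserCreated events, and that "only 2 action types" means the id has exactly those two types and no others:

<your search>
| eval id = coalesce('response.actors.id', mvindex('response.resources{}.id', 0))
| stats values(response.action.type) as action_types dc(response.action.type) as type_count by id
``` keep only ids whose two (and only two) types are UserCreated and TxCreated ```
| where type_count == 2
    AND mvcount(mvfilter(action_types == "UserCreated" OR action_types == "TxCreated")) == 2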
Ah okay, thanks for the confirmation. So what they meant is indexes that are still on local, non-S3 storage, and not those indexes already converted to SmartStore, i.e., moved to the object store. "You can still search any existing buckets that were tsidx-reduced before migration to SmartStore." Which means all the reduced buckets will need to be rebuilt to full before updating the indexes.conf config to move buckets to SmartStore. https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Reducetsidxdiskusage#Restore_reduced_buckets_to_their_original_state
Hello, Is it possible to monitor remote API calls out of the box with Splunk Observability Cloud? My application is running on an IIS server and is .NET. I have 3 critical API calls:
1. Calling an external third-party service (which I cannot install Splunk on, for that reason)
2. Calling an Azure Function that is not connected to Splunk
3. Calling another ASP.NET Core application that is currently NOT monitored by Splunk
When I call those 3 services from my main application, can I get an overview out of the box that they are being called?
serial_number would have already been extracted, too.  You do whatever is needed.  But I do not see a chart of two values() functions being useful in this case.  Maybe you mean to have something like

_time       E21                E25
2024-07-15  51A81FC 51A86FC
2024-07-16                     51A81FC

In other words, get serial_numbers according to error_code?  All you need is something like

<your search> "ErrorCode(*)"
| rex field=message "ErrorCode\((?<error_code>[^\)]+)"
| timechart span=1d values(serial_number) by error_code

Here, I propose that you restrict events to those containing an error code in the index search rather than in a separate search command. Or, if you want to group error_codes by individual serial_number, like

_time       51A81FC  51A86FC
2024-07-15  E21      E21
2024-07-16  E25

do this instead:

<your search> "ErrorCode(*)"
| rex field=message "ErrorCode\((?<error_code>[^\)]+)"
| timechart span=1d values(error_code) by serial_number

Does this make sense? Here is an emulation to get the above results.  Play with it and compare with real data:

| makeresults
| eval data = mvappend("{\"time\": \"2024-07-15\", \"message\":\"gimlet::hardware_controller: State { target: Idle, state: Idle, cavity: 42400, fuel: 0, shutdown: None, errors: ErrorCode(E21)}\", \"serial_number\": \"51A86FC\"}",
  "{\"time\": \"2024-07-15\", \"message\":\"gimlet::hardware_controller: State { target: Idle, state: Idle, cavity: 42400, fuel: 0, shutdown: None, errors: ErrorCode(E21)}\", \"serial_number\": \"51A81FC\"}",
  "{\"time\": \"2024-07-16\", \"message\":\"gimlet::someotherstuff: State { target: whatever, state: whaever, some other messages, errors: ErrorCode(E25)}\", \"serial_number\": \"51A81FC\"}")
| mvexpand data
| rename data as _raw
| spath
| eval _time = strptime(time, "%F")
``` the above emulates <your search> "ErrorCode(*)" ```
This is a query which can get you the CPU core count for both *nix and Windows servers:

| mstats avg(Processor.*) as * WHERE (index=win-metrics) instance!="_Total" host="***" by host instance span=5m
| table _time, host, instance, "%_Processor_Time"
| stats max(instance) as "cpu_core" by host
| eval cpu_core=cpu_core + 1
| append
    [| mstats avg(cpu_metric.*) as cpu_* WHERE (index=nix-metrics) host="***" by CPU, host
    | table CPU, host
    | eventstats max(CPU) as cpu_core by host
    | stats max(cpu_core) as cpu_core by host
    | eval cpu_core=cpu_core + 1 ]
Thank you for sharing complete event.  If this is raw event, all you need is spath (or xmlkv, which has some interesting restrictions).  For example,   <your search> | spath   These commands are QA tested by Splunk, much more robust than anything you can develop. (It also has the added benefit of getting richer data extracted.) Here is a complete emulation.  Play with it and compare with real data.   | makeresults | eval _raw = "</HostProperties><ReportItem severity=\"0\" port=\"0\" pluginFamily=\"Ubuntu Local Security Checks\" pluginName=\"Ubuntu 18.04 ESM / 20.04 LTS / 22.04 LTS : Vim vulnerabilities (USN-6420-1)\" pluginID=\"182769\" protocol=\"tcp\" <cvss_vector>AV:N/AC:L/Au:N/C:C/I:C/A:C</cvss_vector><description>The remote Ubuntu 18.04 ESM / 20.04 LTS / 22.04 LTS host has packages installed that are affected by multiple vulnerabilities as referenced in the USN-6420-1 advisory. - Heap-based Buffer Overflow in GitHub repository vim/vim prior to 9.0.0483. (CVE-2022-3234) - Use After Free in GitHub repository vim/vim prior to 9.0.0490. (CVE-2022-3235) - Use After Free in GitHub repository vim/vim prior to 9.0.0530. (CVE-2022-3256) - NULL Pointer Dereference in GitHub repository vim/vim prior to 9.0.0552. (CVE-2022-3278) - Use After Free in GitHub repository vim/vim prior to 9.0.0579. (CVE-2022-3297) - Stack-based Buffer Overflow in GitHub repository vim/vim prior to 9.0.0598. (CVE-2022-3324) - Use After Free in GitHub repository vim/vim prior to 9.0.0614. (CVE-2022-3352) - Heap-based Buffer Overflow in GitHub repository vim/vim prior to 9.0.0742. (CVE-2022-3491) - Heap-based Buffer Overflow in GitHub repository vim/vim prior to 9.0.0765. (CVE-2022-3520) - Use After Free in GitHub repository vim/vim prior to 9.0.0789. (CVE-2022-3591) - A vulnerability was found in vim and classified as problematic. Affected by this issue is the function qf_update_buffer of the file quickfix.c of the component autocmd Handler. The manipulation leads to use after free. The attack may be launched remotely. Upgrading to version 9.0.0805 is able to address this issue. The name of the patch is. It is recommended to upgrade the affected component. The identifier of this vulnerability is VDB-212324. (CVE-2022-3705) - Use After Free in GitHub repository vim/vim prior to 9.0.0882. (CVE-2022-4292) - Floating Point Comparison with Incorrect Operator in GitHub repository vim/vim prior to 9.0.0804. 
(CVE-2022-4293) Note that Nessus has not tested for these issues but has instead relied only on the application's self-reported version number.</description><synopsis>The remote Ubuntu host is missing one or more security updates.<plugin_output> - Installed package : vim_2:8.1.2269-1ubuntu5.17 - Fixed package : vim_2:8.1.2269-1ubuntu5.18 - Installed package : vim-common_2:8.1.2269-1ubuntu5.17 - Fixed package : vim-common_2:8.1.2269-1ubuntu5.18 - Installed package : vim-runtime_2:8.1.2269-1ubuntu5.17 - Fixed package : vim-runtime_2:8.1.2269-1ubuntu5.18 - Installed package : vim-tiny_2:8.1.2269-1ubuntu5.17 - Fixed package : vim-tiny_2:8.1.2269-1ubuntu5.18 - Installed package : xxd_2:8.1.2269-1ubuntu5.17 - Fixed package : xxd_2:8.1.2269-1ubuntu5.18 </plugin_output></ReportItem><ReportItem severity=\"0\" port=\"0\" pluginFamily=\"Ubuntu Local Security Checks\" pluginName=\"Ubuntu 16.04 ESM / 18.04 ESM / 20.04 LTS / 22.04 LTS / 23.04 : LibTIFF vulnerability (USN-6428-1)\" pluginID=\"182891\" protocol=\"tcp\" <description>The remote Ubuntu 16.04 ESM / 18.04 ESM / 20.04 LTS / 22.04 LTS / 23.04 host has packages installed that are affected by a vulnerability as referenced in the USN-6428-1 advisory. - A flaw was found in tiffcrop, a program distributed by the libtiff package. A specially crafted tiff file can lead to an out-of-bounds read in the extractImageSection function in tools/tiffcrop.c, resulting in a denial of service and limited information disclosure. This issue affects libtiff versions 4.x. (CVE-2023-1916) Note that Nessus has not tested for this issue but has instead relied only on the application's self-reported version number.</description><synopsis>The remote Ubuntu host is missing a security update.</synopsis><cve>CVE-2023-1916</cve><xref>USN:6428-1</xref><see_also>https://ubuntu.com/security/notices/USN-6428-1</see_also><risk_factor>Medium</risk_factor><script_version>1.0</script_version><plugin_output> - Installed package : libtiff5_4.1.0+git191117-2ubuntu0.20.04.9 - Fixed package : libtiff5_4.1.0+git191117-2ubuntu0.20.04.10 </plugin_output></ReportItem><ReportItem severity=\"3\" port=\"0\" pluginFamily=\"Ubuntu Local Security Checks\" pluginName=\"Ubuntu 16.04 LTS / 18.04 LTS / 20.04 LTS / 22.04 LTS / 23.10 : GIFLIB vulnerabilities (USN-6824-1)\" pluginID=\"200257\" protocol=\"tcp\"<description>The remote Ubuntu 16.04 LTS / 18.04 LTS / 20.04 LTS / 22.04 LTS / 23.10 host has packages installed that are affected by multiple vulnerabilities as referenced in the USN-6824-1 advisory.</plugin_output></ReportItem>" ``` data emulation above ``` | spath | fields plugin_output   This is the output (for brevity, I discarded all other nodes in XML): plugin_output _raw _time - Installed package : libtiff5_4.1.0+git191117-2ubuntu0.20.04.9 - Fixed package : libtiff5_4.1.0+git191117-2ubuntu0.20.04.10 </HostProperties><ReportItem severity="0" port="0" pluginFamily="Ubuntu Local Security Checks" pluginName="Ubuntu 18.04 ESM / 20.04 LTS / 22.04 LTS : Vim vulnerabilities (USN-6420-1)" pluginID="182769" protocol="tcp" <cvss_vector>AV:N/AC:L/Au:N/C:C/I:C/A:C</cvss_vector><description>The remote Ubuntu 18.04 ESM / 20.04 LTS / 22.04 LTS host has packages installed that are affected by multiple vulnerabilities as referenced in the USN-6420-1 advisory. - Heap-based Buffer Overflow in GitHub repository vim/vim prior to 9.0.0483. (CVE-2022-3234) - Use After Free in GitHub repository vim/vim prior to 9.0.0490. (CVE-2022-3235) - Use After Free in GitHub repository vim/vim prior to 9.0.0530. 
(CVE-2022-3256) - NULL Pointer Dereference in GitHub repository vim/vim prior to 9.0.0552. (CVE-2022-3278) - Use After Free in GitHub repository vim/vim prior to 9.0.0579. (CVE-2022-3297) - Stack-based Buffer Overflow in GitHub repository vim/vim prior to 9.0.0598. (CVE-2022-3324) - Use After Free in GitHub repository vim/vim prior to 9.0.0614. (CVE-2022-3352) - Heap-based Buffer Overflow in GitHub repository vim/vim prior to 9.0.0742. (CVE-2022-3491) - Heap-based Buffer Overflow in GitHub repository vim/vim prior to 9.0.0765. (CVE-2022-3520) - Use After Free in GitHub repository vim/vim prior to 9.0.0789. (CVE-2022-3591) - A vulnerability was found in vim and classified as problematic. Affected by this issue is the function qf_update_buffer of the file quickfix.c of the component autocmd Handler. The manipulation leads to use after free. The attack may be launched remotely. Upgrading to version 9.0.0805 is able to address this issue. The name of the patch is. It is recommended to upgrade the affected component. The identifier of this vulnerability is VDB-212324. (CVE-2022-3705) - Use After Free in GitHub repository vim/vim prior to 9.0.0882. (CVE-2022-4292) - Floating Point Comparison with Incorrect Operator in GitHub repository vim/vim prior to 9.0.0804. (CVE-2022-4293) Note that Nessus has not tested for these issues but has instead relied only on the application's self-reported version number.</description><synopsis>The remote Ubuntu host is missing one or more security updates.<plugin_output> - Installed package : vim_2:8.1.2269-1ubuntu5.17 - Fixed package : vim_2:8.1.2269-1ubuntu5.18 - Installed package : vim-common_2:8.1.2269-1ubuntu5.17 - Fixed package : vim-common_2:8.1.2269-1ubuntu5.18 - Installed package : vim-runtime_2:8.1.2269-1ubuntu5.17 - Fixed package : vim-runtime_2:8.1.2269-1ubuntu5.18 - Installed package : vim-tiny_2:8.1.2269-1ubuntu5.17 - Fixed package : vim-tiny_2:8.1.2269-1ubuntu5.18 - Installed package : xxd_2:8.1.2269-1ubuntu5.17 - Fixed package : xxd_2:8.1.2269-1ubuntu5.18 </plugin_output></ReportItem><ReportItem severity="0" port="0" pluginFamily="Ubuntu Local Security Checks" pluginName="Ubuntu 16.04 ESM / 18.04 ESM / 20.04 LTS / 22.04 LTS / 23.04 : LibTIFF vulnerability (USN-6428-1)" pluginID="182891" protocol="tcp" <description>The remote Ubuntu 16.04 ESM / 18.04 ESM / 20.04 LTS / 22.04 LTS / 23.04 host has packages installed that are affected by a vulnerability as referenced in the USN-6428-1 advisory. - A flaw was found in tiffcrop, a program distributed by the libtiff package. A specially crafted tiff file can lead to an out-of-bounds read in the extractImageSection function in tools/tiffcrop.c, resulting in a denial of service and limited information disclosure. This issue affects libtiff versions 4.x. 
(CVE-2023-1916) Note that Nessus has not tested for this issue but has instead relied only on the application's self-reported version number.</description><synopsis>The remote Ubuntu host is missing a security update.</synopsis><cve>CVE-2023-1916</cve><xref>USN:6428-1</xref><see_also>https://ubuntu.com/security/notices/USN-6428-1</see_also><risk_factor>Medium</risk_factor><script_version>1.0</script_version><plugin_output> - Installed package : libtiff5_4.1.0+git191117-2ubuntu0.20.04.9 - Fixed package : libtiff5_4.1.0+git191117-2ubuntu0.20.04.10 </plugin_output></ReportItem><ReportItem severity="3" port="0" pluginFamily="Ubuntu Local Security Checks" pluginName="Ubuntu 16.04 LTS / 18.04 LTS / 20.04 LTS / 22.04 LTS / 23.10 : GIFLIB vulnerabilities (USN-6824-1)" pluginID="200257" protocol="tcp"<description>The remote Ubuntu 16.04 LTS / 18.04 LTS / 20.04 LTS / 22.04 LTS / 23.10 host has packages installed that are affected by multiple vulnerabilities as referenced in the USN-6824-1 advisory.</plugin_output></ReportItem> 2024-07-16 14:52:11
The answer can depend on data characteristics.  On first look, I thought your solution was as efficient as it can get.  You reduced a large dataset (billions of events) to a much smaller dataset, i.e., distinct values of "Field B" grouped by distinct values of "Field A".  Can you give a comparison of the scale of these two datasets? (I.e., how many rows after stats but before where, compared to how many raw events.)

A bigger problem is in the problem statement.  There seems to be a misstatement in the description:

    my search result needs to only return Value B; Values A and C will be thrown out, because they don't have a unique value in Field B.

I believe you meant to say "because they don't correspond to a unique value in Field A," not Field B, because every value in "Field B" is unique in your illustration.  Is every value in "Field B" unique in real data?  Your SPL seems to imply so, because otherwise mvcount(Field B)=1 can pick more than those corresponding to a unique "Field A".  For example, if the dataset is

Field A    Field B
Value A    Value A1
Value A    Value A1
Value B    Value B1
Value C    Value C1
Value C    Value C2
Value C    Value C3

your search will pick two data points

Field A    Field B
Value A    Value A1
Value B    Value B1

But "Value A" is not unique in the original data. (Neither is "Value A1".)  Can you clarify?
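Here is a minimal emulation of that counterexample, assuming hypothetical field names FieldA and FieldB (swap in your real field names). It shows how values() plus mvcount(...)=1 still picks "Value A" even though it is repeated in the raw data:

| makeresults count=6
| streamstats count as n
| eval FieldA = case(n<=2, "Value A", n==3, "Value B", true(), "Value C")
| eval FieldB = case(n<=2, "Value A1", n==3, "Value B1", n==4, "Value C1", n==5, "Value C2", true(), "Value C3")
``` the six rows above emulate the example dataset ```
| stats values(FieldB) as FieldB by FieldA
| where mvcount(FieldB) == 1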