All Topics


Hi, how do I display the number of blocked and allowed threats with different severities over a timeframe (e.g. monthly)? Something like this output:

Month     action    critical   high   medium   low
2022-11   allowed   9          22     45       100
          blocked   20         400    44345    23423
2022-10   allowed   39         22     4        100
          blocked   20         500    4445     23423

I can get either of the outputs below, but not the one above:

index=palo-network threat sourcetype="pan:threat" severity!=informational
| bucket _time span=1month
| eval Date=strftime('_time',"%Y-%m")
| stats values(severity) count by _time,action

index=palo-network threat sourcetype="pan:threat" severity!=informational
| bucket _time span=1month
| eval Date=strftime('_time',"%Y-%m")
| chart count over action by severity

Thank you.
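For what it's worth, the shape being asked for is a pivot with two row keys (month and action) and one column per severity. This is not Splunk code, just a small Python sketch of that target shape using made-up sample events:

```python
from collections import Counter

# Hypothetical sample events: (month, action, severity), standing in for
# pan:threat events after bucketing _time by month. Values are invented.
events = [
    ("2022-11", "allowed", "critical"), ("2022-11", "allowed", "high"),
    ("2022-11", "blocked", "high"), ("2022-10", "blocked", "low"),
]

# Count per (month, action, severity), then print one row per
# (month, action) pair with one column per severity.
counts = Counter(events)
severities = ["critical", "high", "medium", "low"]
rows = sorted({(m, a) for m, a, _ in events}, reverse=True)
for month, action in rows:
    print(month, action, *[counts[(month, action, s)] for s in severities])
```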
Hi, I am trying to fix the vulnerable_javascript_library_usage error in Splunk Add-on Builder for appserver/static/js/build/common.js. A couple of answered posts describe the fix as:

1. Export the app from any Add-on Builder.
2. Import the app into Add-on Builder v4.1.0 or newer.
3. Download the app packaged from Add-on Builder v4.1.0 or newer.

The issue I am facing is that I have inherited maintenance of the add-on and don't have an app export from the original Add-on Builder. I have tried importing the spl and tgz files as a project in AOB, but the import fails with an error while extracting the files. I also tried creating a new add-on with the same name and replacing its files with the files from the tgz downloaded from Splunkbase; on exporting that add-on and importing it again, extraction fails with an error that it is not a valid compressed file. Finally, I tried simply removing the common.js file, but that breaks the add-on when we run it on Windows machines. Is there any other way I can fix this? Or how can I import the add-on into AOB? Thanking you in advance.
I am trying to execute this search, but 90% of the time it does not complete and returns incomplete results. The count differs between runs: for example, the search may return 463,000 events, 420,000, or 360,000. I believe my search is a bit heavy. Can anyone please help me optimize it?
Hi, hope you are doing well. I have one question: on our Splunk Windows hosts we have onboarded the Security logs, so my doubt is whether the Security logs also help to monitor NTFS. Thanks, Debjit
Has anyone encountered this error when sending logs to a 3rd-party syslog destination using the Splunk App for CEF? I'm getting the following errors:

11-03-2022 11:23:25.904 ERROR ChunkedExternProcessor [32136 ChunkedExternProcessorStderrLogger] - stderr: splunk.SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
11-03-2022 11:23:25.904 ERROR ChunkedExternProcessor [15688 searchOrchestrator] - Error in 'cefout' command: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
11-03-2022 11:23:25.911 ERROR SearchPhaseGenerator [15688 searchOrchestrator] - Fallback to two phase search failed:Error in 'cefout' command: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
11-03-2022 11:23:25.913 ERROR SearchStatusEnforcer [15688 searchOrchestrator] - sid:scheduler__userid_c3BsdW5rX2FwcF9jZWY__RMD53bb25367b408a898_at_1667434800_56257_BED74D95-D037-415C-8C9C-81F3D2FEEBAB Error in 'cefout' command: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
11-03-2022 11:23:25.913 INFO SearchStatusEnforcer [15688 searchOrchestrator] - State changed to FAILED due to: Error in 'cefout' command: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
Hello, we have a system that receives data from multiple sources. Each source identifies the data it sends with a 25-digit number that can be broken down by position, in the following format: TTWWWWWSSSYYMMDDCCCCCPL. What I am trying to do is extract the CCCCC portion of the number (positions 19-23), compare it with a lookup table to identify the sender of the information, and then sort the associated data by sender.
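The positional slice plus lookup described above can be sketched in Python; the 1-based positions 19-23 are taken from the post, and the code-to-sender mapping is entirely hypothetical:

```python
# Hypothetical lookup table: 5-digit code -> sender name.
senders = {"00042": "Sender A", "00107": "Sender B"}

def sender_for(identifier: str) -> str:
    # 1-based positions 19-23 become the zero-based slice [18:23].
    code = identifier[18:23]
    return senders.get(code, "unknown")

# 25-digit sample id with "00042" in positions 19-23.
print(sender_for("1234567890123456780004245"))  # -> Sender A
```

Once the sender is resolved, grouping or sorting the associated records by that value is straightforward.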
Hi, I'm fairly new to Splunk and am considering using Splunk DB Connect to connect to one of our databases to monitor a specific table for errors. I want an alert to be generated when the number of ... See more...
Hi, I'm fairly new to Splunk and am considering using Splunk DB Connect to connect to one of our databases to monitor a specific table for errors. I want an alert to be generated when the number of rows returned is greater than 0. Will the dbxquery command work with Splunk alerts? Let me know if more information is needed. Thanks! Dylan
Hi, I have the message below and I am trying to use rex to extract the id, but myid always comes up empty. Please help.

- - [02/Nov/2022:17:43:03 -0400] "PUT /application/resources/cat/v7/product/1234567890003/status HTTP/1.1" 201 - abcd.com - 8 web-614

rex field=msg "/application/resources/cat/v7/product/(?<myid>[0-9]*)/status"
| table myid
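The regex itself appears to match the sample line; a quick check outside Splunk confirms the pattern (which suggests, as an assumption, that the problem may be the `field=msg` part, e.g. if the text actually lives in `_raw` rather than an extracted `msg` field):

```python
import re

# The sample log line from the post, as a single string.
line = ('- - [02/Nov/2022:17:43:03 -0400] "PUT /application/resources/cat/'
        'v7/product/1234567890003/status HTTP/1.1" 201 - abcd.com - 8 web-614')

# The same pattern as the posted rex, using Python named-group syntax.
m = re.search(r"/application/resources/cat/v7/product/(?P<myid>[0-9]*)/status", line)
print(m.group("myid"))  # -> 1234567890003
```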
I'm trying to exclude a specific file called catalina.out in /var/log/tomcat9/ from being processed by Splunk. The file is being sent to my heavy forwarder and I have the following in inputs.conf:

[monitor:///var/log/tomcat9]
blacklist=(catalina\.out)
disabled = 0

The data continues to be processed. What am I missing?
I'm trying the query below:

index=XXXXXXXXX
| eval space="cf_space_name=production"
| search "space" YYYYYYYYYYYY
| stats count

I want to filter the results based on the evaluated field.

| search "space" XXXXXXXXXXXXX    => does not return the correct values
| search "cf_space_name=production" XXXXXXXXXXXXX    => but if I use the literal value like this, it works.

How do I fix this? Thanks for the help.
Hello, I'm trying to filter my events/results after evaluating the field name and value dynamically using eval:

index=XXXX YYYYYYY
| eval field_name=PPPP
| eval field_value=KKKK
| search field_name=field_value

I tried the options below, but neither worked:

index=XXXX [|gentimes start=-1 | eval space="Test" | table space]
index=XXXX [|gentimes start=-1 | eval space="Test" | fields space]
I currently have v8.5 of the Splunk_TA_Windows app, and the following stanza in inputs.conf:

[WinEventLog://AD FS/Admin]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=false

And it seems not to be working; I don't see anything in the logs. I am also monitoring the Application, Security, and System logs, and they are showing up. What am I doing wrong?
Hello! I am pulling in logs from a server; there are about 500 log files in the directory. We want to bring in 498 of them with a generic sourcetype, and two need a specific sourcetype. Is it as easy as this?

[monitor://C:\Program Files\Logs\*]
blacklist = log1:log2
disable=false
index=logs
sourcetype=logs

[monitor://C:\Program Files\Logs\*]
whitelist = log1:log2
disable=false
index=logs
sourcetype=specific:logs
Hello, I have created a lookup definition for CIDR. The CIDR matching works just fine and I am able to whitelist the IPs in that particular subnet range. However, I want to know whether I can add single IPs to the same lookup file/definition (CIDR lookup) as well. I want single-IP matching in the same lookup table where I have added the IP subnets. How do I proceed?
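In CIDR terms a single IPv4 address is just a /32 network (a /128 for IPv6), which is why an individual IP can normally live in the same lookup column as subnets. A small Python illustration of that idea, with made-up subnets:

```python
import ipaddress

# Hypothetical lookup contents: one subnet plus one single IP written
# as a /32 network.
allow_list = ["10.1.0.0/16", "192.168.5.7/32"]
networks = [ipaddress.ip_network(entry) for entry in allow_list]

def is_allowed(ip: str) -> bool:
    # An address matches if it falls inside any CIDR entry; a /32
    # matches exactly one address.
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

print(is_allowed("10.1.2.3"))     # True  (inside 10.1.0.0/16)
print(is_allowed("192.168.5.7"))  # True  (matches the /32 entry)
print(is_allowed("192.168.5.8"))  # False
```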
I'm trying to get auditd events into Splunk using the rlog.sh script from the Splunk Add-on for Unix and Linux. It isn't working: the audit logs are not being ingested, and no errors are appearing in index=_internal for the host. The script is successfully scheduled through the ExecProcessor component:

0400 INFO ExecProcessor [1975905 ExecProcessor] - New scheduled exec process: /opt/splunkforwarder/etc/apps/Splunk_TA_nix_l1_inputs/bin/rlog.sh

To attempt to address the problem I have done the following:
- Had the host owner ensure dependent utilities are installed (listed in https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Requirements#Dependencies).
- Had the host owner change the log_group from root to splunk in /etc/audit/auditd.conf (suggested in https://community.splunk.com/t5/All-Apps-and-Add-ons/Can-t-get-rlog-sh-to-run/m-p/76143).

When executing rlog in debug mode (./rlog.sh --debug) we get the following output:
- As splunk user: blank output
- As root user: expected output

Additional details:
- This host was recently rebuilt. Before the rebuild, the audit logs on this host were ingesting successfully through the add-on.
- Other scripts through the add-on are working on this host.
- This problem has not materialized on any of our other hosts using the add-on.

Thanks in advance for your input!
Received this error this morning on one of our non-distributed search heads:

The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch.

Nothing works; I cannot search and dashboards are non-functional. Searching produces this error:

Search not executed: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch. user=admin., concurrency_category="historical", concurrency_context="user_instance-wide", current_concurrency=0, concurrency_limit=5000

I did quite a bit of digging in the community and tried the following on my non-distributed instance:

Dispatch:
- Tried the clean-dispatch command on the bloated dispatch directory (8873 entries in /opt/splunk/var/run/splunk/dispatch).
- Shut down Splunk; even run with sudo, it results in a Permission denied error.
- Ran the command: ./splunk cmd splunkd clean-dispatch /temp -1day

Bundle files:
- distsearch.conf has no maxBundleSize setting addressing the large .bundle files in /opt/splunk/var/run.
- If I delete the bundle files above, I can search for a little while on the search head, but then it fails again.

Now I am at a loss after reading so many articles, how-tos, and docs. I'm not a Splunk guy, but I am trying to get this stable.
Hello all, the log has empty space before and after the equals sign, with semicolon separation, and I'm unable to get the request status into a table like:

index="gd" RequestStatus | table RequestStatus, _time

Would you please advise if anyone has suggestions?

Log sample:

{"timestamp":"2022-11-02 17:01:21,421+0000","level":"INFO","location":"request_process:171","message":"request_id = 5ac3565f-d964-31cd-90b1-e8b7b208e7df; RequestStatus = Completed; RequestID = 5ac3565f-d9a64-31cd-9021-e8b7b208e7df--70ivkG0Td8OBpvWk; S3SourceKey = 1049x7555.xml ; "function_request_id":"b61aa34-f22b-53bc-957e-142456b9b7a5","xray_id":"1-6482a25d-78459fbe07213ee14x4386bd"}

The possible values are:
RequestStatus = Received
RequestStatus = Completed
RequestStatus = Error
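The spaces around the equals sign are easy to tolerate with a pattern like `RequestStatus\s*=\s*(\w+)`; here is a quick Python check of that pattern against the message portion of the sample line above:

```python
import re

# Part of the sample message, with the spaces around "=" kept intact.
message = ("request_id = 5ac3565f-d964-31cd-90b1-e8b7b208e7df; "
           "RequestStatus = Completed; S3SourceKey = 1049x7555.xml ; ")

# \s* on both sides of "=" absorbs any amount of surrounding whitespace.
m = re.search(r"RequestStatus\s*=\s*(?P<RequestStatus>\w+)", message)
print(m.group("RequestStatus"))  # -> Completed
```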
I'm struggling to create a sankey diagram that takes an initial username and connects that user to the IP addresses associated with that username, then takes each IP address and shows what other usernames might be associated with it. My initial search gets a list of IP addresses associated with a username. This works well; I then run stats on those results and it looks great with sankey:

| stats count by username IP
| rename username AS user IP AS address

The problem comes when I try to append the second level of the sankey. I'm not quite sure how to take the address on the far right and create that second level, looking for associated usernames. My intention is to go only 3 levels. I assume I have to search by 'address' in my dataset to see what usernames are associated?
I need to compare two fields, "Name" and "StudentName", and I am having problems with this. The values in "Name" do not contain accents, but the values in "StudentName" contain accents (names like 'Róbert' or 'Czuukó') and also hyphenated names like 'Mary-Ann'. When I try to compare them I get many matches, because most of the names don't have accents, but the names that do contain an accent won't show up as matching between "Name" and "StudentName".
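One common workaround for this kind of mismatch is to strip diacritics from both fields before comparing, so 'Róbert' compares equal to 'Robert' while hyphenated names like 'Mary-Ann' pass through unchanged. A minimal Python sketch of that normalization:

```python
import unicodedata

def strip_accents(text: str) -> str:
    # NFD decomposes accented characters into a base character plus
    # combining marks; dropping the combining marks removes the accents.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_accents("Róbert"))    # -> Robert
print(strip_accents("Czuukó"))    # -> Czuuko
print(strip_accents("Mary-Ann"))  # -> Mary-Ann
```

Comparing `strip_accents(name)` against `strip_accents(student_name)` (case-folded if needed) should then match the accented and unaccented spellings of the same name.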