All Topics

Hi, how can I display which values are missing from my lookup table compared to the actual data? I have one field with a list of users in my CSV file. I took a specific range of users and am trying to find out which of them are missing from my CSV file. I cannot find a solution; can someone help me out? My query:

|inputlookup filename.csv
|search Username IN (user A, user B, User C, etc.)
|dedup Username
|fields Username
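One way to find the missing users is to invert the comparison: start from the list of users you expect and check each against the lookup. A sketch, assuming the lookup column is named Username and using placeholder user names for the expected list:

```spl
| makeresults
| eval Username=split("userA,userB,userC", ",")
| mvexpand Username
| lookup filename.csv Username OUTPUT Username AS matched
| where isnull(matched)
| table Username
```

The lookup only populates `matched` for rows that exist in the CSV, so the remaining rows are the users missing from the file.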
Hello community, I'm having a very specific problem and I can't find a solution after several days of failed attempts. Let me explain the situation: we have a Splunk OnCall instance which serves as our hypervisor and which collects the incidents from several of our monitoring tools. Our users acknowledge alerts directly in Splunk OnCall for incident support. We then pull all this data into Splunk Enterprise (via an official plugin). For several weeks, I have been trying to compute the delta between the number of alerts of a given type (based on its title) and the number of times that alert has been acknowledged. OnCall sends the same information to Enterprise several times, with different details:
- when an alert appears in OnCall, Enterprise receives it with the status "UNACKED"
- when an alert is acknowledged, it comes in with the status "ACKED"
- when an alert is over, it comes in with the status "RESOLVED"
So I can have the same information up to 3 times in Enterprise.

Now that the (long) scene is set, here is my problem: I manage to output the RESOLVED and ACKED alerts in the same table, in order to compute a delta between the number of RESOLVED and the number of ACKED, but I cannot "align" the information. I use this search:

index=oncall_prod routingKey=*
| search currentPhase=RESOLVED
| dedup incidentNumber
| rename entityDisplayName as Service
| stats count by Service
| appendcols [ search index=oncall_prod routingKey=*
    | search currentPhase=ACKED
    | dedup incidentNumber
    | rename entityDisplayName as Service_ACKED
    | stats count by Service_ACKED
    | rename count AS ACKED]
| eval matchfield=coalesce(Service,Service_ACKED)
| table Service count Service_ACKED ACKED

On the screen, you can see my problem: for some alerts there has never been an acknowledgment, so the rows shift. And when I compute the delta with a simple calculation, it is done row by row, so the values don't mean anything because the wrong rows are compared. I tried several methods, found here and there on the forum, to align my table properly, including the following search:

index=oncall_prod routingKey=*
| search currentPhase=RESOLVED
| dedup incidentNumber
| rename entityDisplayName as Service
| stats count by Service
| eval matchfield=Service
| join matchfield [ search index=oncall_prod routingKey=*
    | search currentPhase=ACKED
    | dedup incidentNumber
    | rename entityDisplayName as Service_ACKED
    | stats count by Service_ACKED
    | rename count AS ACKED
    | eval matchfield=Service_ACKED]
| table Service count Service_ACKED ACKED

but this shows ONLY the rows that have both a RESOLVED and an ACKED status, leaving out the alerts that only ever had the RESOLVED status. How can I make the acknowledgments line up with the correct RESOLVED rows? And how can I leave the rows without acknowledgments at a value of 0? If you have an idea, I'm interested. Best regards, Rajaion
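One way to sidestep the alignment problem entirely is to drop appendcols/join and count both phases in a single stats pass, so each Service produces exactly one row. A sketch based on the fields named in the question (the count(eval(...)) idiom counts only the events matching each condition):

```spl
index=oncall_prod routingKey=* currentPhase IN (RESOLVED, ACKED)
| dedup incidentNumber currentPhase
| rename entityDisplayName AS Service
| stats count(eval(currentPhase="RESOLVED")) AS RESOLVED
        count(eval(currentPhase="ACKED"))    AS ACKED
        by Service
| eval delta=RESOLVED-ACKED
```

Deduplicating on both incidentNumber and currentPhase keeps one event per phase per incident; services that were never acknowledged simply get ACKED=0 instead of shifting rows.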
I have a field called Identifier whose values are server names. I need to check whether the first character of each server name is a number or not. I tried regex but was not able to achieve it; could you please help me? Thanks. E.g., server names:
01234server1
01256server2
2345server3
Abcserver
bcdserver
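An anchored regular expression can test the first character. A sketch using eval's match() to flag each row:

```spl
| eval starts_with_digit=if(match(Identifier, "^\d"), "yes", "no")
```

Alternatively, to keep only the events whose Identifier starts with a digit, `| regex Identifier="^\d"` filters in place.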
Hi Splunkers, a customer asked us for a Splunk architecture proposal for their environment. I have never designed Splunk architectures, so I searched the web and found some valid documents, like the "Splunk Validated Architectures"; the point is that the total architecture (Splunk + environment to monitor) is quite particular. The desired customer architecture is the following:
Data sources -> Mulesoft -> Splunk Cloud SaaS -> Mulesoft
Additional info:
1. No agent may be installed in the Mulesoft environment.
2. No ES required.
So the data flows are the following:
1. All data sources send their logs to the Mulesoft environment.
2. Mulesoft sends the data to Splunk; so, from a Splunk perspective, Mulesoft is the only (big) data source.
3. Splunk performs correlation and, if an alarm triggers, sends data back to Mulesoft.
My open points here are two:
1. Since Mulesoft is the only data source, even if a big one, and has its own HA management (so it is not up to the Splunk environment to manage that task), I think I have no reason to use a forwarder as an intermediate host, and I can send logs directly to Splunk with the token mechanism and Log4j configs in Mulesoft; are there any reasons I missed that could justify a forwarder between the Mulesoft environment and the Splunk one?
2. If an alarm triggers, I have to forward it back to the Mulesoft system. I know I can perform response actions when an alarm triggers: send an email, execute a script, and so on. What would be the best action to send data back to Mulesoft?
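For the second point, if Mulesoft can expose an HTTP endpoint, Splunk's built-in webhook alert action is a common fit because it needs no scripting. A minimal savedsearches.conf sketch, assuming a hypothetical Mulesoft endpoint URL; the search itself is whatever correlation search triggers the alarm:

```
[Forward alarm to Mulesoft]
action.webhook = 1
action.webhook.param.url = https://mulesoft.example.com/splunk-alerts
```

The webhook posts a JSON payload describing the alert and its result. If Mulesoft needs a custom payload or authentication headers, a custom alert action or scripted action would be the fallback.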
My data currently looks like this:

Date        Name
2022-11-01  ABC
2022-11-01  DEF
2022-11-01  GHI
2022-11-02  JKL
2022-11-02  MNO
2022-11-03  PQR
2022-11-03  STU
2022-11-03  VWX
2022-11-03  YZ1

I would like it to look like this:

Date        Name
2022-11-01  ABC
2022-11-01  DEF
2022-11-01  GHI

2022-11-02  JKL
2022-11-02  MNO

2022-11-03  PQR
2022-11-03  STU
2022-11-03  VWX
2022-11-03  YZ1

I need an empty row to be inserted whenever the Date differs from the value before it.
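One way to manufacture separator rows is to detect the date change with streamstats, then emit an extra empty row via mvexpand. A sketch, assuming the results are already sorted by Date:

```spl
| streamstats current=f last(Date) AS prev_date
| eval marker=if(isnotnull(prev_date) AND Date!=prev_date, mvappend("blank","data"), "data")
| mvexpand marker
| eval Date=if(marker="blank", "", Date), Name=if(marker="blank", "", Name)
| fields - marker prev_date
```

The first row of each new date group becomes two rows after mvexpand: a blank one, then the original, which places the empty row between groups.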
Hello folks, I am building an add-on. I am using a multiselect field on the input page whose value should depend on another dropdown field. What exactly I want is to hide the multiselect field, so that it only becomes visible depending on the previous dropdown's value. Is there any way we can do this in the globalConfig.json file? Thank you.
Hi team, I have "file_size" among my extracted fields, and the values are strings like 1.56 KB, 5.03 MB, 1.06 B. I need a query to convert the strings to numbers so that I can use sum(file_size); I also need to use the max and min functions on file_size. Please help me with this. Thanks.
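One approach is to split the number from the unit with rex, convert to bytes with case(), then aggregate on the numeric field. A sketch, assuming the units are B/KB/MB/GB/TB with binary (1024) multipliers:

```spl
| rex field=file_size "^(?<size_num>[\d.]+)\s*(?<size_unit>[KMGT]?B)$"
| eval bytes=size_num * case(size_unit="B",  1,
                             size_unit="KB", 1024,
                             size_unit="MB", pow(1024,2),
                             size_unit="GB", pow(1024,3),
                             size_unit="TB", pow(1024,4))
| stats sum(bytes) AS total_bytes min(bytes) AS min_bytes max(bytes) AS max_bytes
```

If the source uses decimal units (1 KB = 1000 B), swap the multipliers accordingly.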
Hi. We are going to have a data source with some sensitive data, where there is a requirement that only the owner of a specific event is allowed to see it. The events will have the user as part of the data, and that field can be created as an indexed field. I will, of course, keep the data in a separate index, and thought I might be able to use a search restriction to limit access so that a user can only search data where the user field matches the logged-on user. I can see it is possible to use the token $env:user$ in a dashboard, but I would really like to use it in the restrictions part of the role, so the restriction automatically applies to the logged-on user. Any help will be much appreciated. Kind regards, las
Hi, how do I display the number of blocked and allowed threats by severity per timeframe (e.g. monthly)? Something like this output:

Month    action   critical  high  medium  low
2022-11  allowed  9         22    45      100
         blocked  20        400   44345   23423
2022-10  allowed  39        22    4       100
         blocked  20        500   4445    23423

I can get either of the outputs below, but not the one above:

index=palo-network threat sourcetype="pan:threat" severity!=informational
| bucket _time span=1month
| eval Date=strftime('_time',"%Y-%m")
| stats values(severity) count by _time,action

index=palo-network threat sourcetype="pan:threat" severity!=informational
| bucket _time span=1month
| eval Date=strftime('_time',"%Y-%m")
| chart count over action by severity

Thank you.
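Since chart can only pivot over a single row field, one way to get one row per month-and-action with a column per severity is to group on a combined key and pivot with xyseries, then split the key back apart. A sketch:

```spl
index=palo-network threat sourcetype="pan:threat" severity!=informational
| eval Month=strftime(_time, "%Y-%m")
| stats count by Month action severity
| eval key=Month."|".action
| xyseries key severity count
| eval Month=mvindex(split(key,"|"),0), action=mvindex(split(key,"|"),1)
| fields - key
| table Month action critical high medium low
```

The "|" separator is arbitrary; any character that cannot appear in the Month or action values works.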
Hi, I am trying to fix the vulnerable_javascript_library_usage error in Splunk Add-on Builder (for appserver/static/js/build/common.js). There are a couple of answered posts that describe the fix as:
1. Export the app from any Add-on Builder
2. Import the app into Add-on Builder v4.1.0 or newer
3. Download the app packaged from Add-on Builder v4.1.0 or newer
The issue I am facing is that I have inherited the maintenance of the add-on and don't have an app export from the original Add-on Builder. I have tried a couple of things, including importing the spl and tgz files as a project in AOB, but the import fails with an error while extracting the files. I also tried to create a new add-on with the same name and replace its files with those from the tgz downloaded from Splunkbase; on exporting that add-on and importing it again, extraction fails with an error that it is not a valid compressed file. Finally, I tried simply removing the common.js file, but that breaks the add-on when we run it on Windows machines. Is there any other way I can fix this? Or how can I import the add-on into AOB? Thanking you in advance.
I am trying to execute this search, but 90% of the time the search does not complete and returns incomplete results. The count differs each run; for example, the search will return 463,000 events, 420,000, or 360,000 results. I believe that my search is a bit heavy. Can anyone please help me optimize this search?
Hi, hope you are doing well. I just have one doubt: on our Splunk Windows hosts we have onboarded the Security logs, so my question is, do the Security logs also help to monitor NTFS? Thanks, Debjit
Has anyone encountered this error when sending logs to a 3rd-party syslog destination using the Splunk App for CEF? I'm getting the following errors:

11-03-2022 11:23:25.904 ERROR ChunkedExternProcessor [32136 ChunkedExternProcessorStderrLogger] - stderr: splunk.SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
11-03-2022 11:23:25.904 ERROR ChunkedExternProcessor [15688 searchOrchestrator] - Error in 'cefout' command: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
11-03-2022 11:23:25.911 ERROR SearchPhaseGenerator [15688 searchOrchestrator] - Fallback to two phase search failed:Error in 'cefout' command: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
11-03-2022 11:23:25.913 ERROR SearchStatusEnforcer [15688 searchOrchestrator] - sid:scheduler__userid_c3BsdW5rX2FwcF9jZWY__RMD53bb25367b408a898_at_1667434800_56257_BED74D95-D037-415C-8C9C-81F3D2FEEBAB Error in 'cefout' command: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
11-03-2022 11:23:25.913 INFO SearchStatusEnforcer [15688 searchOrchestrator] - State changed to FAILED due to: Error in 'cefout' command: Splunkd daemon is not responding: ('Error connecting to /services/data/inputs/all: The read operation timed out',)
Hello, we have a system that receives data from multiple sources. Each of these sources identifies the data being sent with a 25-digit number, which can be broken down by position and comes in the following format: TTWWWWWSSSYYMMDDCCCCCPL. What I am trying to do is extract the CCCCC portion of the number (positions 19-23), compare it with a lookup table to identify the sender of the information, and then sort the associated data by sender.
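substr() can pull a fixed-position slice, and a lookup can map the code to a sender. A sketch, with hypothetical names: the 25-digit value is assumed to be in a field called id, and the lookup file senders.csv is assumed to have columns code and sender:

```spl
| eval code=substr(id, 19, 5)
| lookup senders.csv code OUTPUT sender
| stats count by sender
```

substr is 1-based, so substr(id, 19, 5) takes 5 characters starting at position 19, matching the positions stated in the question.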
Hi, I'm fairly new to Splunk and am considering using Splunk DB Connect to connect to one of our databases to monitor a specific table for errors. I want an alert to be generated when the number of rows returned is greater than 0. Will the dbxquery command work with Splunk alerts? Let me know if more information is needed. Thanks! Dylan
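Yes, dbxquery runs in scheduled searches, so it can back a standard Splunk alert. A sketch with placeholder connection and table names; saved as an alert, the trigger condition "number of results is greater than 0" (or the final where clause) fires when the table contains error rows:

```spl
| dbxquery connection="my_connection" query="SELECT * FROM error_table"
| stats count
| where count > 0
```

Scheduling the alert at the desired polling interval effectively turns the table check into a monitor.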
Hi, I have the message below and I am trying to use rex to extract the id, but myid is always empty. Please help.

- - [02/Nov/2022:17:43:03 -0400] "PUT /application/resources/cat/v7/product/1234567890003/status HTTP/1.1" 201 - abcd.com - 8 web-614

rex field=msg "/application/resources/cat/v7/product/(?<myid>[0-9]*)/status"
| table myid
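A likely cause is the field= argument: the URL shown is part of the raw event, so unless a field named msg has actually been extracted, rex should run against _raw (its default). A sketch:

```spl
| rex field=_raw "/application/resources/cat/v7/product/(?<myid>\d+)/status"
| table myid
```

If myid is still empty, checking that the msg field exists and contains the URL (e.g. with `| table msg`) is a quick way to confirm where the text actually lives.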
I'm trying to exclude a specific file called catalina.out in /var/log/tomcat9/ from being processed by Splunk. The file is being sent to my heavy forwarder, and I have the following in inputs.conf:

[monitor:///var/log/tomcat9]
blacklist = (catalina\.out)
disabled = 0

The data continues to be processed. What am I missing?
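The blacklist regex itself looks correct (it is matched anywhere in the full path, so anchoring is optional). A sketch of the stanza with the usual gotchas noted as comments; whether they apply here is an assumption:

```
# inputs.conf on the instance that actually reads the files.
# If a universal forwarder monitors /var/log/tomcat9 and only forwards
# to the heavy forwarder, the blacklist must live on that UF, not the HF.
[monitor:///var/log/tomcat9]
blacklist = catalina\.out
disabled = 0
```

Two other common causes: splunkd was not restarted after the change, or the setting was placed in an app whose precedence is overridden (check with `splunk btool inputs list monitor:///var/log/tomcat9 --debug`).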
I'm trying the below query:

index=XXXXXXXXX
| eval space="cf_space_name=production"
| search "space"  YYYYYYYYYYYY
| stats count

I want to filter the results based on the evaluated field.

| search "space"  XXXXXXXXXXXXX  => does not return correct values
| search "cf_space_name=production"  XXXXXXXXXXXXX  => but if I use the literal value like this, it works.

How do I fix this? Thanks for the help.
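The reason the first form fails is that `| search "space"` looks for the literal word "space", not the contents of the eval field. To inject a computed string as actual search terms, a subsearch that returns a field literally named search can be used; Splunk substitutes that field's value into the outer query as raw search text. A sketch:

```spl
index=XXXXXXXXX
    [| makeresults
     | eval search="cf_space_name=production"
     | fields search]
| stats count
```

The field must be named search (or query); any other name, such as space, is inserted as a field=value pair instead of raw query text.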
Hello, I'm trying to filter my events/results after evaluating the field name and value dynamically using eval.

index=XXXX  YYYYYYY
| eval field_name=PPPP
| eval field_value=KKKK
| search field_name=field_value

I tried the options below, but neither worked:

index=XXXX   [|gentimes start=-1 | eval space="Test" | table space]
index=XXXX   [|gentimes start=-1 | eval space="Test" | fields space]
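Two separate fixes may apply here. To compare one field against another (rather than against the literal string "field_value"), where takes an eval expression, unlike search:

```spl
| eval field_name=PPPP
| eval field_value=KKKK
| where field_name=field_value
```

And for the subsearch attempts, the returned field must be named search for its value to be spliced into the outer query as raw search terms, e.g. `index=XXXX [|gentimes start=-1 | eval search="Test" | fields search]`; returning a field called space makes Splunk add a space="Test" condition instead.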