All Topics

Team, we have a search head cluster and an indexer cluster in our current Splunk environment. We don't have a deployment server and have decided to set up a new one. What are all the prerequisites that should be considered, given that our current environment is on a clustering model? Thanks.
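One point worth planning around: in a clustered environment the deployment server normally manages only instances that are not cluster-managed (universal forwarders, heavy forwarders, standalone boxes). Indexer cluster peers get their apps from the cluster master's master-apps directory, and search head cluster members from the deployer's shcluster/apps. A minimal deploymentclient.conf sketch for a forwarder, with a hypothetical hostname and the default management port, might look like:

```
# deploymentclient.conf on each universal forwarder
# (do NOT deploy this to indexer cluster peers or SHC members --
#  those are managed by the cluster master and deployer instead)
[deployment-client]

[target-broker:deploymentServer]
# hypothetical host and management port -- replace with your own
targetUri = deploy01.example.com:8089
```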
Hello, I have seen this question come up multiple times already, and I have searched for it, but I was unable to find a query for my specific situation. My query ends with:

|stats count by sender | where isnull(count) OR count < 100

I have an alert set up so that if this occurs, I get a mail. The goal is that the above event must happen twice in a timeframe of 5 minutes before it should send the mail. Can anyone please assist me with this? Thanks, Danny
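One possible approach, sketched here with placeholder index/sourcetype names: bucket events per minute, flag the minutes where the per-sender count drops below 100, and alert only when at least two such minutes fall inside the 5-minute window. (Note that after `stats count by sender`, `isnull(count)` can never be true, since stats only emits rows for senders that actually appear.)

```
index=mail sourcetype=your_sourcetype earliest=-5m
| bin _time span=1m
| stats count by _time sender
| where count < 100
| stats count AS low_minutes by sender
| where low_minutes >= 2
```

Scheduled every 5 minutes with "trigger when number of results > 0", this fires only when the condition holds in at least two separate minutes of the window.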
Hi, I have two outputs, "IA" and "IB", in one chart by appending a subsearch. I want addcoltotals to give the sum of "IA" and "IB". My search:

sourcetype=router routingKey=routingA OR routingKey=routingB
| stats sum(count) as count, avg(percent) as percent
| eval routingKey = "IA"
| append [ search routingKey=routingAA OR routingKey=routingBB | stats sum(count) as count, avg(percent) as percent | eval routingKey = "IB" ]
| addcoltotals labelfield=routingKey label="Total"
| table routingKey, count, percent

The result should be:

routingKey   count   percent
IA           50      50%
IB           50      50%
Total        ?       ?

Also, when I run the search, it says "parsing job" while giving output. Does the append command parse my output? Is there another command I can use instead of append?
Hi, I am struggling to configure the Splunk forwarder to get data into Splunk. I am trying to send data (auth.log) across from a Kali Linux operating system. When I configured it in Kali, I used the syntax below (the IP address is my Kali IP address from ifconfig; I followed a guide online that said to use port 11000):

./splunk add forward-server 192.168.253.XX:11000

(Note: XX is not correct, but I did not want to disclose my IP here.) I then did:

./splunk add monitor /var/log/access.log

Then I restarted Splunk. I then went into Splunk Enterprise > Settings > Forwarder management, and I can see the entry below. The IP address is not the same as the Kali Linux VM IP; is that normal? The first three octets are the same, but not the fourth (I assume it is because it is a /24 subnet). I then go into Search and Reporting, but there is no data summary or any data coming across. What am I doing wrong?

User-PC | Apps: None | Server Classes: None
72660893-7D38-4486-A625-A57C08C5592A | User-PC | 192.168.253.1 | windows-x64 | 0 deployed | 8 minutes ago

Essentially, I am playing around with a few VMs (Ubuntu, Windows 10, Kali Linux) and trying to get the data from those VMs into Splunk Enterprise so I can play around with setting up some alerts and generating some reports. Maybe the universal forwarder is not the best idea for what I am trying to do? I am very new at this, so any help would be great. Thanks in advance.
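A few things stand out, offered as guesses rather than a definitive diagnosis: `add forward-server` must point at the Splunk Enterprise server's receiving port (not the Kali VM's own IP), that port has to be enabled under Settings > Forwarding and receiving > Configure receiving, and the monitor was added for /var/log/access.log rather than the auth.log you wanted. On the forwarder, the equivalent configuration would look roughly like this (the IP and port are placeholders):

```
# outputs.conf on the Kali universal forwarder -- must match the
# Splunk Enterprise server that is configured to *receive* on that
# port (9997 is the conventional default)
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.253.YY:9997

# inputs.conf on the same forwarder -- monitor the log you actually want
[monitor:///var/log/auth.log]
sourcetype = linux_secure
```

Also note that Forwarder Management lists deployment clients, not incoming data; to confirm the forwarder is connecting at all, search `index=_internal host=<forwarder-hostname>` on the Splunk Enterprise side.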
I've got a few different tables, all CSV, that provide different information. The main events table includes a bunch of fields that reference those other tables; i.e., the title_id field will contain a number, and in the title_id table the numbers match up to a specific text value. I have a number of these types of fields with matching tables. Is this something where I can upload all of the tables and join them in Splunk, telling it how to reference them? Or is it something where I need to join the data external to Splunk and then upload it? My goal is that when I search for something, the actual title appears instead of the title ID. Sorry, I'm very new to this and super appreciate any assistance.
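This is a textbook use case for Splunk lookups rather than an external join. A sketch, with hypothetical file and field names: upload each reference table as a lookup table file (Settings > Lookups > Lookup table files), then enrich at search time:

```
index=main sourcetype=events
| lookup title_id.csv title_id OUTPUT title AS title_name
| table _time title_id title_name
```

If you then define it as an automatic lookup on the sourcetype, title_name appears in every search without the explicit `lookup` command.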
I wan to configure to Jira addon is this needed any Github for this, and what is the requirement from jira server to configure the Jira service desk simple addon. Thanks, Santhosh Kumar
Hi, I am using the query below to get the details of alarms which have (one Warning and one OK status) or (one Critical and one OK status) per CheckName and Device. Now, how do I get the time difference (duration) between the OK and Warning messages, or between the OK and Critical messages?

index=abc sourcetype=alarms
| stats count by Device CheckName Status _time
| sort - _time

Please help.
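If each Device/CheckName pair really contains exactly one OK event and one Warning (or Critical) event, the gap between them is simply the range of _time within the group. A sketch along those lines:

```
index=abc sourcetype=alarms
| stats earliest(_time) AS first_seen latest(_time) AS last_seen
        range(_time) AS duration_sec by Device CheckName
| eval duration = tostring(duration_sec, "duration")
```

The tostring(..., "duration") conversion renders the difference in HH:MM:SS form. If a pair can contain more than two events, filter to the relevant statuses before the stats.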
We are integrating Splunk with Jira using the "Splunk Add-on for JIRA". This add-on expects us to save the password in clear text inside the Jira/local/inputs.conf file. For security reasons, we do not want to store it as plain text in this file. Can you suggest a way of storing it? Would really appreciate your help here.
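The usual pattern is Splunk's encrypted credential store (storage/passwords), which writes an encrypted entry to passwords.conf instead of plain text. Whether it helps here depends on the add-on's input actually reading from that store, so treat this as a sketch rather than a confirmed fix for this particular add-on (the credential name below is hypothetical):

```
# create an encrypted credential via the REST API; it is stored in
# passwords.conf, encrypted with splunk.secret
curl -k -u admin https://localhost:8089/servicesNS/nobody/Jira/storage/passwords \
     -d name=jira_svc_account -d password='thepassword'
```

A modular input script can then retrieve the credential at runtime through the same storage/passwords endpoint instead of reading it from inputs.conf.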
I have a table. We need equal widths for all columns.
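One way this is often done in a Simple XML dashboard, sketched with a hypothetical panel id and search: force the browser's fixed table layout via an inline stylesheet, which distributes the width equally across all columns:

```
<row>
  <panel>
    <html depends="$alwaysHide$">
      <style>
        #equal_cols table { table-layout: fixed !important; }
      </style>
    </html>
    <table id="equal_cols">
      <search>
        <query>index=_internal | stats count by sourcetype component</query>
      </search>
    </table>
  </panel>
</row>
```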
Hello, I am having trouble with filtering fields extracted using rex, as follows:

rex max_match=0 field=sessions_as_client "(?<SRC>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s--\>\s(?<DST>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(?<Port>\d+\/[a-zA-Z]+)"
| where Port="123/UDP"
| lookup dnslookup clientip as DST OUTPUT clienthost as DSTDNS
| table Port DST DSTDNS

The field I am extracting looks as follows:

sessions_as_client="1.2.3.4 --> 1.2.3.5:21/TCP (ftp), 1.2.3.4 --> 1.2.3.5:23/TCP (telnet), 1.2.3.4 --> 1.2.3.5:123/UDP (ntp/udp)"

I am getting a table with the 123/UDP entries as expected, but I am also getting the other entries, such as 21/TCP and 23/TCP, in the same row, as if the matches from the rex statement were no longer being filtered by the search. Any recommendations are appreciated.
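With max_match=0, SRC/DST/Port become multivalue fields on a single event, so the filter applies at the event level: once any value matches 123/UDP, the whole event (with all its values) is kept. A common workaround is to zip the parallel values together, expand to one row per session, and filter after the expansion; a sketch:

```
... | rex max_match=0 field=sessions_as_client "(?<SRC>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s--\>\s(?<DST>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}):(?<Port>\d+\/[a-zA-Z]+)"
| eval session = mvzip(mvzip(SRC, DST, "|"), Port, "|")
| mvexpand session
| eval SRC  = mvindex(split(session, "|"), 0),
       DST  = mvindex(split(session, "|"), 1),
       Port = mvindex(split(session, "|"), 2)
| where Port="123/UDP"
| lookup dnslookup clientip AS DST OUTPUT clienthost AS DSTDNS
| table Port DST DSTDNS
```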
Hello, I need to make a report with two different sourcetypes. For the first sourcetype, let's call it st1, I have the list of people removing certain tags from hostnames in McAfee. For the second sourcetype, st2, I have the latest tag state for each hostname, and here I've got more hostnames than in the first sourcetype. What I need to achieve is a table with this information together, joined by the hostname field: the time a tag was removed (st1), who removed it (st1), the host with the removed tag (st1), and the remaining tags (st2). I have made this search, but the join misbehaves: either it returns the remaining tags of the same host in every row (so it is not really joining), or it joins all the hostnames from st1 with only one hostname from st2. Here is my search:

index=epo sourcetype="mcafee:audit" ("Clear Tag" CmdName="Clear Tag") ("tag1" OR "tag2" OR "tag3" OR "tag4")
| search NOT [| inputlookup mcafee_epo_allowed_users.csv | fields UserName]
| rex field=Message "Cleared\stag\s\'(?<Tag>.+)\'\sfrom\s(?<hosts>.+)\.$"
| eval hosts2 = split(hosts,",")
| mvexpand hosts2
| rename hosts2 as "Destination host/s" CmdName as "Action" UserName as "Source User" Tag as "Cleared tag"
| fields "Action" "Source User" "Cleared tag" "Destination host/s"
| join "Destination host/s" [| search index=epo sourcetype="mcafee:inventory" | dedup NodeName | table NodeName Tags | rename NodeName as "Destination host/s", Tags as "Remaining tags" ]
| table _time "Action" "Source User" "Cleared tag" "Destination host/s" "Remaining tags"

The hostname field in sourcetype 1 is called hosts2; in sourcetype 2 it is called NodeName. If I can avoid using join, all the better. Thanks.
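If avoiding `join` is the goal, one common pattern is to bring both sourcetypes into a single pipeline (here via `append`, since the audit side needs its own rex/mvexpand first) and merge per host with `stats`. A rough, untested sketch reusing your existing field names:

```
index=epo sourcetype="mcafee:audit" ("Clear Tag" CmdName="Clear Tag") ("tag1" OR "tag2" OR "tag3" OR "tag4")
| search NOT [| inputlookup mcafee_epo_allowed_users.csv | fields UserName]
| rex field=Message "Cleared\stag\s\'(?<Tag>.+)\'\sfrom\s(?<hosts>.+)\.$"
| eval host = split(hosts, ",")
| mvexpand host
| append
    [ search index=epo sourcetype="mcafee:inventory"
      | dedup NodeName
      | eval host = NodeName
      | table host Tags ]
| stats latest(_time) AS _time latest(UserName) AS "Source User"
        latest(Tag) AS "Cleared tag" latest(Tags) AS "Remaining tags" by host
| where isnotnull('Cleared tag')
| table _time "Source User" "Cleared tag" host "Remaining tags"
```

The final `where` drops inventory-only hosts that never had a tag cleared, which keeps the output aligned with the st1 host list.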
I am only able to receive logs sent by localhost, not from external hosts.
I have this data coming in every minute to monitor application performance:

{
  "events": [
    {
      "appId": "mock-app",
      "eventType": "WorkflowRequestFailedCount",
      "failureType": "wf.execution.error",
      "metricType": "COUNTER",
      "requestType": "WORKFLOW",
      "throughput": 15,
      "workflowId": "create"
    },
    {
      "appId": "mock-app",
      "eventType": "WorkflowRequestProcessedCount",
      "metricType": "COUNTER",
      "requestType": "WORKFLOW",
      "throughput": 0
    },
    {
      "appId": "mock-app",
      "eventType": "WorkflowRequestReceivedCount",
      "metricType": "COUNTER",
      "requestType": "WORKFLOW",
      "throughput": 20,
      "workflowId": "create"
    }
  ]
}

I need a query (to set up an alert) that identifies when throughput is GREATER than 0 for the throughput nested with eventType = WorkflowRequestFailedCount. I can't figure out how to isolate the throughput count between different eventType items when they are all nested within the same events object, as shown above. For the JSON shown above, the correct query should trigger an alert because the throughput for WorkflowRequestFailedCount is 15 (greater than 0). Appreciate the help.
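One way to isolate the nested objects, assuming the raw event is the JSON shown (the index and sourcetype below are placeholders): expand events{} into one result per array element, re-extract the fields from each element, then filter:

```
index=app_metrics sourcetype=app_perf_json
| spath path=events{} output=event
| mvexpand event
| spath input=event
| where eventType="WorkflowRequestFailedCount" AND throughput > 0
```

Against the sample above, this leaves exactly one row (throughput=15), so an alert set to trigger on "number of results > 0" would fire.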
I've been unable to get a boolean value extracted from JSON written to Splunk. The data looks like this:

build: {
  build_id: bubyut7oi7xlg
  cache: {
    remote_enabled: false
  }
}

Here's my search:

index=gradle_enterprise_export sourcetype="gradle-export-app" message=build_saved env="prod" build.build_id="bubyut7oi7xlg"
| spath
| rename build.cache.remote_enabled as remote
| eval remote_cache = if(remote=="false", "false", "true")
| table build.build_id remote_cache

I've tried a number of different combinations for remote=="false": no quotes, single quotes, different cases. I've also tried directly using build.cache.remote_enabled == "false" (though another post says eval will concatenate dotted field names; even quoted, it makes no difference). The result should be "false" but is always "true":

build.build_id    remote_cache
bubyut7oi7xlg     true

I've also used tostring() to show the remote_enabled value, and it shows NULL. Any ideas? Are JSON boolean values supported?
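Two guesses, offered as sketches rather than a confirmed diagnosis: (1) inside eval, field names containing dots must be wrapped in single quotes, otherwise the expression is parsed as a concatenation, the comparison sees NULL, and the if() falls through to "true"; (2) since tostring() shows NULL, the field may not be getting extracted at all, in which case an explicit spath path/output helps. JSON booleans arrive in Splunk as the strings "true"/"false", so comparing against "false" is otherwise fine:

```
... | eval remote_cache = if('build.cache.remote_enabled' == "false", "false", "true")

... | spath path=build.cache.remote_enabled output=remote
    | eval remote_cache = if(remote == "false", "false", "true")
```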
Data is coming in only for the source types sc4s:events and sc4s:fallback. There are multiple compatible devices (such as Cisco ASA) set up to send data via UDP 514 to the server, yet nothing is being sent to Splunk. Does anyone have any ideas on how to troubleshoot this? (It's podman with systemd, and there are two network interfaces.)
Hello guys, I'm trying to plot multiple values onto a timechart. These values are collected through a where/like statement. For example:

host=* time count(where like(COMMAND,"%  MKDIR%")) as "MKDIR", count(where like(COMMAND,"%  LS%")) as "LS", count(where like(COMMAND,"CHMOD")) as "CHMOD"

The output I'm getting is a blank timechart. Thank you.
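`where like(...)` isn't valid inside a timechart aggregation, which is likely why the chart comes back blank; the documented idiom is count(eval(&lt;boolean expression&gt;)). A sketch with a placeholder index/sourcetype, and with the double spaces removed from the patterns (keep them if your COMMAND values really contain them):

```
index=os sourcetype=bash_history host=*
| timechart span=1h
    count(eval(like(COMMAND, "%MKDIR%"))) AS MKDIR
    count(eval(like(COMMAND, "%LS%")))    AS LS
    count(eval(like(COMMAND, "%CHMOD%"))) AS CHMOD
```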
I have four versions of a nearly identical search. The last one returns a completely different result. What is it about the interaction of the "sort" and "head" commands that changes the outcome?

...| stats sum(eval(sc_bytes/1073741824)) AS Gigabytes by cs_uri_stem | sort -sc_bytes
...| stats sum(eval(sc_bytes/1073741824)) AS Gigabytes by cs_uri_stem | sort -Gigabytes
...| stats sum(eval(sc_bytes/1073741824)) AS Gigabytes by cs_uri_stem | sort -Gigabytes | head 100
...| stats sum(eval(sc_bytes/1073741824)) as Gigabytes by cs_uri_stem | sort -sc_bytes | head 100
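For what it's worth, a likely explanation: after `stats`, the only fields left are Gigabytes and cs_uri_stem, so `sort -sc_bytes` sorts on a field that no longer exists and leaves the rows in effectively arbitrary order. Without `head` that is harmless, but `head 100` then keeps an arbitrary 100 rows instead of the top 100 by size. Note also that `sort` truncates at 10,000 rows by default, and `sort 0` removes that limit. The intended version is probably:

```
... | stats sum(eval(sc_bytes/1073741824)) AS Gigabytes by cs_uri_stem
| sort 0 -Gigabytes
| head 100
```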
I want to route syslog events to different indexes based on hostname. Is it best to do this on the indexer with transforms?
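Yes, index-time routing with props/transforms on the indexers (or on a heavy forwarder, if one parses the data first) is the usual approach. A sketch with a hypothetical sourcetype, host patterns, and index names:

```
# props.conf
[syslog]
TRANSFORMS-route_by_host = route_firewalls, route_switches

# transforms.conf
[route_firewalls]
SOURCE_KEY = MetaData:Host
REGEX = ^host::fw-
DEST_KEY = _MetaData:Index
FORMAT = firewall

[route_switches]
SOURCE_KEY = MetaData:Host
REGEX = ^host::sw-
DEST_KEY = _MetaData:Index
FORMAT = network
```

The target indexes must already exist, and this only takes effect where parsing happens (it has no effect on a universal forwarder).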
Hi, is it possible to run SC4S temporarily in Ubuntu 16? Doesn´t appear as supported but I'm not sure if it's also incompatible. Podman appears to be only available in ubuntu 18 but docket is availab... See more...
Hi, is it possible to run SC4S temporarily on Ubuntu 16? It doesn't appear to be supported, but I'm not sure if it's also incompatible. Podman appears to be available only on Ubuntu 18, but Docker is available for 16.
Is anyone aware of a dashboard visualization that will allow me to edit a lookup table in the UI, rather than using the Lookup Editor? I want to visualize a table, but then dynamically change the value of a column in the UI. Thoughts?