All Topics


Hi Team, I have a field "duration" that is populated for a lot of APIs. Can I use Detect Outliers to find high-duration instances over the last 7 days, broken down by API name? As I am new to this, I don't know where to start. Can someone help?
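A minimal sketch of one way to do this in plain SPL (the MLTK "Detect Numeric Outliers" assistant can also take a base search like the first line). The index name my_index and the field name api_name are assumptions; this flags durations more than three standard deviations above each API's mean:

    index=my_index duration=* earliest=-7d
    | eventstats avg(duration) as avg_dur, stdev(duration) as stdev_dur by api_name
    | where duration > avg_dur + 3 * stdev_dur
    | table _time, api_name, duration, avg_dur, stdev_dur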
How do I increase the number of results in "show source" in Splunk to 10000? The current maximum is 1000. Please refer to the screenshot.
I have to export data from a database table whose format is shown below. But I want this data represented in Splunk in a different table format, like this. Is there any way to represent this in Splunk when the table formats differ from the database?
Hello, I have a search query which lists users and their email addresses in the results. Now I want to send an individual email notification to each of those addresses; how can I do that? Can anyone please help? Thanks in advance!
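One possible approach, sketched below, is to run the sendemail command once per result row via map. The field names user and email, the mail server, and the base search are all assumptions to adapt to your query:

    index=my_index sourcetype=my_sourcetype
    | table user, email
    | map maxsearches=100 search="| makeresults | sendemail to=\"$email$\" subject=\"Notification for $user$\" message=\"Hello $user$, this is your notification.\" server=smtp.example.com"

Note that map runs one subsearch per row, so cap maxsearches to something sane for your result volume.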
Here is the document, but how? https://docs.splunk.com/Documentation/Splunk/8.2.6/Search/Changetheformatofsubsearchresults

Quoting the docs:

    Using the query field name
    Use the query field name when you want the values in the fields returned from the subsearch, but not the field names. The query field name is similar to using the format command. Instead of passing the field and value pairs to the main search, such as:
    (field1=val1_1 AND field2=val1_2) OR (field1=val2_1 AND field2=val2_2)
    Using the query field name passes only the values:
    (val1_1 AND val1_2) OR (val2_1 AND val2_2)

When I rename one field as query, I get `remoteSearch premakeresults 1 ( ( field2="val1_2" AND val1_1 ) )` in the job inspector's remoteSearch. What I want is `remoteSearch premakeresults 1 ( ( "val1_2" AND val1_1 ) )`.

    | makeresults 1
        [ | makeresults 1
          | eval field1="val1_1"
          | eval field2="val1_2"
          | fields field1 field2
          | rename field1 AS query
          ```| rename field2 AS query``` ]

The post below only renames one field as query: https://community.splunk.com/t5/Splunk-Search/How-to-use-subsearch-without-a-field-name-but-just-with-field/m-p/449282

@woodcock sorry to bother you; seeing a lot of high-quality answers from you, I'm seeking your help here.
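One possible workaround, sketched here and untested against your exact setup, is the return command with $-prefixed fields, which emits the values without field names:

    | makeresults 1
        [ | makeresults 1
          | eval field1="val1_1"
          | eval field2="val1_2"
          | return $field1 $field2 ]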
Hello! I am trying to exclude a specific computer_name from showing up in our carbonblack index in Splunk, using a Heavy Forwarder where the data is monitored. Below is an example of my props.conf and transforms.conf.

props.conf:

    [source::/var/data/events.json]
    TRANSFORMS-null = nullFilter

transforms.conf:

    [nullFilter]
    REGEX = (?ms)(.*"computer_name":\s*"test-machine".*)
    DEST_KEY = queue
    FORMAT = nullQueue

Raw data in Splunk:

    {"path":"/usr/sbin/abrt-server","md5":"9F469AA349AA64009C3DB7BE","sha256":"","command_line":"abrt-server -s","parent_path":"/usr/sbin/abrtd","parent_pid":546,"parent_guid":-390649270232,"filtering_known_dlls":false,"parent_md5":"97E3CDA03CB1A8CDF9","expect_followon_w_md5":false,"link_parent":"https://server-name:443/#analyze/00000000-0000-0000-0000-74e9a5a/1","username":"root","parent_create_time":1682147484,"pid":27474,"process_guid":"00000000-0000-0000-0000-0000000e","link_process":"https://server-name:443/#analyze/00000000-0000-0000-0000-99132070551e/0","link_sensor":"https://server-name:443/#/host/518","process_path":"/usr/sbin/abrt-server","cb_server":"server-name","type":"ingress.event.procstart","sensor_id":123,"computer_name":"test-machine","event_type":"proc","timestamp":1686123541}

List-format data in Splunk (there are two ways the data is displayed in Splunk, but the JSON file produced the raw data above):

    { [-]
      cb_server: server-name
      command_line: abrt-server -s
      computer_name: test-machine
      event_type: proc
      expect_followon_w_md5: false
      filtering_known_dlls: false
      link_parent: https://server-name:443/#analyze/00000000-0000-0000-0000-74e9a5a/1
      link_process: https://server-name:443/#analyze/00000000-0000-0000-0000-99132070551e/0
      link_sensor: https://sever-name:443/#/host/123
      md5: 9F469AA349AA64009C3DB7BE
      parent_create_time: 1682147484
      parent_guid: -390649270232
      parent_md5: 97E3CDA03CB1A8CDF9
      parent_path: /usr/sbin/abrtd
      parent_pid: 546
      path: /usr/sbin/abrt-server
      pid: 27474
      process_guid: 00000000-0000-0000-0000-99132070551e
      process_path: /usr/sbin/abrt-server
      sensor_id: 123
      sha256:
      timestamp: 1686123541
      type: ingress.event.procstart
      username: root
    }

I have tried a few different regex entries but they keep failing. I was using a UF initially, then read the Splunk docs and upgraded to a Heavy Forwarder, but I still get the same problem. Can you please provide any assistance? It would be very much appreciated. My initial aim is to get this working for a single machine, then hopefully exclude multiple machines sharing a similar naming convention, for example "PC123..."; grateful if you can suggest the best way to tackle both scenarios. Thanks!
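A minimal sketch of a simpler filter, assuming the HF is the first full Splunk instance to parse this source. Transforms REGEX is unanchored by default, so matching just the key/value pair is usually enough, and a prefix match covers the naming-convention case (stanza names here are illustrative):

    # transforms.conf
    [nullFilter]
    REGEX = "computer_name":\s*"test-machine"
    DEST_KEY = queue
    FORMAT = nullQueue

    # drop every machine whose name starts with PC123
    [nullFilterPrefix]
    REGEX = "computer_name":\s*"PC123[^"]*"
    DEST_KEY = queue
    FORMAT = nullQueue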
Hi All, I am new to Splunk. When starting Splunk for the first time, it starts as the "build" user even though $SPLUNK_HOME is owned by root.

    ps -ef | grep splunk
    build 736222      1 0 06:42 ? 00:00:06 splunkd -p 8089 restart
    build 736226 736222 0 06:42 ? 00:00:00 [splunkd pid=736222] splunkd -p 8089 restart [process-runner]

I want to run it as the root user. How can I fix this?
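A hedged sketch of things to check: Splunk pins the run-as user via SPLUNK_OS_USER in splunk-launch.conf or via the boot-start configuration, so (assuming you really do want root, which Splunk generally advises against):

    # as root
    $SPLUNK_HOME/bin/splunk stop
    # see whether a user is pinned in splunk-launch.conf
    grep SPLUNK_OS_USER $SPLUNK_HOME/etc/splunk-launch.conf
    # remove that line (or set it to root), fix ownership, re-create boot-start
    chown -R root:root $SPLUNK_HOME
    $SPLUNK_HOME/bin/splunk disable boot-start
    $SPLUNK_HOME/bin/splunk enable boot-start -user root
    $SPLUNK_HOME/bin/splunk start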
Hi Team, I am trying to schedule an alert based on thresholds for two time windows:

- 100 events between 00:00 and 14:00
- 20 events between 14:00 and 00:00

Is it possible to define two thresholds like this in one alert?

    index=ABC sourcetype=XYZ failedlogin | stats count | where count > 100   (between 00:00 and 14:00)
    index=ABC sourcetype=XYZ failedlogin | stats count | where count > 20    (between 14:00 and 00:00)
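One way to sketch this as a single alert, assuming it runs over a whole day: bucket each event into a window by hour, then apply a per-window threshold and trigger when the number of results is greater than zero:

    index=ABC sourcetype=XYZ failedlogin
    | eval hour=tonumber(strftime(_time, "%H"))
    | eval window=if(hour < 14, "00:00-14:00", "14:00-00:00")
    | stats count by window
    | where (window="00:00-14:00" AND count > 100) OR (window="14:00-00:00" AND count > 20)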
Hi SMEs, I am getting some garbage/hex/ASCII-format logs from one of the log sources integrated with Splunk. It is a customized Linux platform, integrated via a TCP input. I am sharing a sample log below. Any suggestions to find and fix this are welcome; thanks in advance.
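If the bytes are valid in some non-UTF-8 encoding, one hedged first step is to pin or auto-detect the character set in props.conf on the receiving instance (the sourcetype name is hypothetical):

    # props.conf
    [my_tcp_sourcetype]
    CHARSET = AUTO
    NO_BINARY_CHECK = true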
I have a field with the system's IP in it and am trying to add additional fields during ingest. It works if the IP field is a single value, but not if it is a multivalue field. I can successfully add the fields at search time regardless of whether it is a single or multivalue field.

As an example, the field name is systemIP. The CSV lookup file is:

    cidr,location,region
    192.168.1.0/24,Site-A,East
    10.10.10.0/24,Site-B,East

transforms.conf:

    [IPRange]
    INGEST_EVAL = JSON=lookup("IPRangeLookup", json_object("cidr", systemIP), json_array("location", "region"))

    [IPRangeLookup]
    batch_index_query = 1
    case_sensitive_match = 1
    filename = systemIPLookup.csv
    match_type = CIDR(cidr)
    max_matches = 1

props.conf:

    [(?::){0}host::*]
    TRANSFORMS = IPRange

For the INGEST_EVAL: if the system has only one IP address (192.168.1.10), JSON gets set to {"location":"Site-A","region":"East"}. If it has two IP addresses, one in each CIDR, JSON gets set to the match for the first IP in the multivalue field.

For the search-time eval: if I search

    index="*" host="host-with-two-IPs"
    | eval JSONzzz=lookup("IPRangeLookup", json_object("cidr", systemIP), json_array("location", "region"))

then for a system with one IP address, JSONzzz gets set to {"location":"Site-A","region":"East"}. If it has two IP addresses, JSONzzz gets set to {"location":["Site-A","Site-B"],"region":["East","East"]}.

The lookup is the same in both cases, but INGEST_EVAL only ever processes the first value in the field. Is there a way to have INGEST_EVAL process the multivalue field and return the same JSON value as the search-time lookup?
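One hedged idea, if your Splunk version supports mvmap() inside INGEST_EVAL (worth verifying, since ingest-time eval supports only a subset of eval functions): iterate the lookup over each value, producing one JSON object per IP:

    # transforms.conf (sketch, not verified at ingest time)
    [IPRange]
    INGEST_EVAL = JSON=mvmap(systemIP, lookup("IPRangeLookup", json_object("cidr", systemIP), json_array("location", "region")))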
Hello! I am trying to convert a table query into a histogram using timechart, but no data is flowing (I read that this is because stats drops the _time field). Here is my old query:

    index="something" source="*-value*" ("random value 1" OR "*random value 2*")
    | stats count(eval(match(_raw, "random value 1"))) as value_1, count(eval(match(_raw, "random value 2"))) as value_2 by source
    | where value_1 > 0 AND value_2 > 0
    | table source

And this is what I have so far:

    index="something" source="*-value*" ("random value 1" OR "*random value 2*")
    | stats count(eval(match(_raw, "random value 1"))) as value_1, count(eval(match(_raw, "random value 2"))) as value_2 by source
    | where value_1 > 0 AND value_2 > 0
    | timechart span=1d dc(source) as unique_sources

But no data is flowing. I have already tried other approaches, and I am sure it is something easy that I am just not seeing.
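A sketch of one fix: keep _time through the stats by binning it first, so timechart still has a time axis to work with:

    index="something" source="*-value*" ("random value 1" OR "*random value 2*")
    | bin _time span=1d
    | stats count(eval(match(_raw, "random value 1"))) as value_1, count(eval(match(_raw, "random value 2"))) as value_2 by _time, source
    | where value_1 > 0 AND value_2 > 0
    | timechart span=1d dc(source) as unique_sources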
I have a particularly challenging log format and would appreciate any input on how to tackle this problem.

Problem: I am looking for a feasible props.conf setup that will correctly index the log below. Example (blank lines added only for readability):

    SINGLE_LINE_LOG_EVENT
    SINGLE_LINE_LOG_EVENT
    OTHER_SINGLE_LINE_LOG_EVENT

    Tue 06 Jun 10:00:00 UTC 2023
    ANOTHER_SINGLE_LINE_LOG_EVENT

    Tue 06 Jun 10:00:01 UTC 2023
    LARGE_MULTILINE_EVENT

- The first three lines are all single events and should be parsed accordingly, but they have no timestamp.
- The fourth and fifth lines together form a single event.
- Lines 6 and 7 also form a single event, but the event on line 7 is a multiline event that shall be parsed as a single event.

I am prepared to accept that the lines without a timestamp get assigned the current timestamp, if there is no other solution. A possible approach is sketched after this list.

What I have already tried: I used the following (the regex looks for the timestamp):

    MUST_NOT_BREAK_AFTER = .{3}\s.{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\sUTC\s\d{4}
    MUST_BREAK_AFTER = .{3}\s.{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\sUTC\s\d{4}

As well as this (I tried various combinations with different capture groups; note that the file in question only has newlines and no carriage returns, hence no '\r'):

    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\n].{3}\s.{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\sUCT\s\d{4})
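Two hedged observations. First, the second attempt spells UCT where the sample data says UTC, which by itself would prevent a match. Second, a sketch that covers everything except newlines inside LARGE_MULTILINE_EVENT is a fixed-width lookbehind that refuses to break directly after a timestamp line:

    SHOULD_LINEMERGE = false
    # break on every newline except the one directly after "... UTC <year>"
    LINE_BREAKER = (?<!UTC\s\d{4})([\n]+)

If the continuation lines of the large multiline event carry any distinguishing marker (indentation, for example), the pattern could be extended to skip those newlines as well.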
index=prod-x7 host IN (12345678) sourcetype="WinEventLog" EventCode="19"
| eval patching=if(EventCode="19", "ok", "not ok")

If events are found, then search server availability, i.e.:

index=server_123 host IN (12345678) uri_stem IN (http/hltchck)
| stats count(eval(status=100)) as success, count as total by _time
| eval Percent=round((success/total)*100,2)
| table Percent

How do I merge these two different queries, but display results only if the patching has happened?
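A sketch of one way to combine them, gating the availability numbers on at least one patching event via appendcols (field names taken from your queries, otherwise assumed):

    index=server_123 host IN (12345678) uri_stem IN ("http/hltchck")
    | stats count(eval(status=100)) as success, count as total
    | appendcols
        [ search index=prod-x7 host IN (12345678) sourcetype="WinEventLog" EventCode="19"
          | stats count as patch_count ]
    | where patch_count > 0
    | eval Percent=round((success/total)*100,2)
    | table Percent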
I am using a Heavy Forwarder to send logs from different sources, such as Domain Controllers, Windows Servers, and Network Switches, to Splunk Cloud. I am receiving the following error from Splunk Cloud:

    PeriodicHealthReporter - feature="TCPOutAutoLB-0" color=red indicator="s2s_connections" due_to_threshold_value=70 measured_value=100 reason="More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct." node_type=indicator node_path=splunkd.data_forwarding.splunk-2-splunk_forwarding.tcpoutautolb-0.s2s_connections

Does anyone know the cause and how to fix it?
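A hedged starting point for triage on the Heavy Forwarder itself, checking what outputs.conf resolves to and what the TCP output processor is logging (host and port below are placeholders from your own outputs.conf):

    # effective forwarding config
    $SPLUNK_HOME/bin/splunk btool outputs list --debug
    # recent connection/SSL errors from the output processor
    grep -iE "TcpOutputProc|AutoLoadBalanced|ssl" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -50
    # raw reachability to a cloud indexer
    nc -vz <indexer-host> 9997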
We don't like to give blind access to the Monitoring Console (MC) to temporary Splunk contractors. They need something like the following:

- Search performance and distributed search framework
- Indexing performance
- Operating system resource usage
- Splunk KV store performance
- Index and volume usage
- Forwarder connections and network performance

Is there a way to access this information from outside the MC? An API, or any other way?
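Much of what the MC displays comes from REST introspection endpoints and the _introspection index, so one hedged sketch is to query those directly over the management port (the endpoint is a real introspection endpoint; host and credentials are placeholders):

    curl -k -u apiuser:password \
      "https://splunk-host:8089/services/server/status/resource-usage/hostwide?output_mode=json"

A restricted role granted only the relevant REST capabilities could then give contractors this data without full MC access.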
Hi All, I have logs like the set below, which give VPN details: VPN_Name, Primary_Server, Secondary_Server, and their status.

Log1:

    </tr>
    <tr>
    <td ><b><font color=olive>INDIA</font></b></td>
    <td >SNFGC_S_INDIA</td>
    <td ><b><font color=green>gcgnamslap03p</font></b> # <b><font color=blue>gcgnamslap04p</font></b></td>
    <td ><b><font color="green">UP</font></b>/<b><font color=blue>SB</font></b></td>

Log2:

    </tr>
    <tr>
    <td ><b><font color=olive>CHINA</font></b></td>
    <td >JBPMGC_S_CHINA</td>
    <td ><b><font color=green>gcgnamslap03p</font></b> # <b><font color=blue>gcgnamslap04p</font></b></td>
    <td ><b><font color="green">UP</font></b>/<b><font color=blue>SB</font></b></td>

Here I used the query below to extract the required fields:

    ... | rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>(?P<Region>[^\<]+)\<\/\w+\>\<\/b\>\<\/td\>"
    | rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>(?P<VPN_Name>[^\<]+)\<\/td\>"
    | rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>(?P<Primary_Server>[^\<]+)\<\/\w+\>\<\/b\>\s"
    | rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\s\#\s\<b\>\<\w+\s\w+\=\w+\>(?P<Secondary_Server>[^\<]+)\<\/\w+\>\<\/b\>\<\/td\>"
    | rex field=_raw "\<tr\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>[^\<]+\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\s\#\s\<b\>\<\w+\s\w+\=\w+\>[^\<]+\<\/\w+\>\<\/b\>\<\/td\>\s+\<td\s\>\<b\>\<\w+\s\w+\=\"\w+\"\>(?P<Status_Primary>[^\<]+)\<\/\w+\>\<\/b\>\/\<b\>\<\w+\s\w+\=\w+\>(?P<Status_Secondary>[^\<]+)\<\/\w+\>\<\/b\>\<\/td\>"

I want to create a panel showing the details of Status_Primary (how many are UP and how many are DOWN). For that I appended "| stats count by Status_Primary" to the above query and created a pie chart from it. I also want to show in the same panel which is the Primary_Server and which is the Secondary_Server, but I am not able to write a query that fills both sets of data into the same panel. Please help me create a query that shows both the status details and the server details in the same panel. Your kind help is highly appreciated. Thank you..!!
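A hedged sketch: a pie chart can only render one categorical split, so one option is to fold the server name into the status category so a single stats feeds the panel; a plain table variant showing everything at once is sketched too:

    ... | eval slice=Primary_Server." (".Status_Primary.")"
    | stats count by slice

    ... | stats count by Primary_Server, Secondary_Server, Status_Primary, Status_Secondary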
I'm currently using the SQS-based S3 input for CloudTrail and I'm trying to drop noisy events such as Get and List calls. The documentation says the standard input supports exclude_describe_events and blacklist to filter out unwanted events, which I set, but after looking further it seems a props/transforms pair is needed. I have configured the following in props and transforms:

props.conf:

    [aws:cloudtrail]
    TRANSFORMS-filter = eventsDrop

transforms.conf:

    # Filters out events that are not needed
    [eventsDrop]
    REGEX = "^Describe|Get|List\p{Lu}|LookupEvents"
    DEST_KEY = queue
    FORMAT = nullQueue

I tested the regex and it matches events, but the events are not being dropped as expected. This is on a HF that collects the logs before they go to the indexers.
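Two hedged observations: the index-time REGEX runs against the raw event, so the ^ anchor will rarely sit at the eventName, and the surrounding double quotes in transforms.conf become literal characters in the pattern. A sketch of a version that targets the JSON key instead (this only takes effect if the HF is the first full Splunk instance to parse the data):

    [eventsDrop]
    # match the eventName key anywhere in the raw JSON
    REGEX = "eventName"\s*:\s*"(?:Describe|Get|List)\w*"
    DEST_KEY = queue
    FORMAT = nullQueue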
Hello Splunkers, I have the following query:

    index=abc
    | search host IN (*)
    | search NOT host IN (No_exclusions)
    | eval lower=((ref*l)+ref)
    | eval upper=if(u="null", "10000000", (ref+(ref*u)))
    | stats latest(perf_number) as perf_number by host Test upper lower

If perf_number is within the range of the upper and lower values, it should be green; if it is above or below the range, it should be red. I tried the following in the XML, but it only gives me one color:

    <format type="color" field="perf_number">
      <colorPalette type="expression">if((perf_number &gt;= lower) AND (perf_number &lt;= upper), "#00FF00", "#FF0000")</colorPalette>
    </format>

Thanks in advance
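One hedged workaround: a colorPalette expression can only see the cell's own value (via "value"), not other columns, so compute the verdict in SPL and color that string with a map palette. The range_status field name is made up:

    ... | eval range_status=if(perf_number >= lower AND perf_number <= upper, "in_range", "out_of_range")

    <format type="color" field="range_status">
      <colorPalette type="map">{"in_range":#00FF00,"out_of_range":#FF0000}</colorPalette>
    </format>

Coloring the perf_number column itself based on another column would need a custom JS table renderer.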
Hi, I have the following code and I want to control the fields element with the token. It doesn't seem to work, however; it isn't updated after the initial load.

    <form version="1.1" theme="dark">
      <label></label>
      <search id="base">
        <query>index=main | head 10 | fields _time,_raw</query>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </search>
      <fieldset submitButton="false" autoRun="false"></fieldset>
      <row>
        <panel>
          <input type="dropdown" token="cols" searchWhenChanged="true">
            <label>Show Details</label>
            <choice value="_time">Just Time</choice>
            <choice value="_time,_raw">Time and Raw</choice>
            <default>_time,_raw</default>
            <initialValue>_time,_raw</initialValue>
          </input>
          <table>
            <title>Show only . [$cols$]</title>
            <search base="base">
              <query>eval run_on_change = "$cols$" | table $cols$</query>
            </search>
            <fields>$cols$</fields>
          </table>
        </panel>
      </row>
    </form>

Thanks.
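A hedged sketch of a workaround: the <fields> element appears to be read only when the table is first rendered, so drop it and let the tokenized | table in the post-process search control the columns; the run_on_change eval already forces the search to rerun when the token changes:

    <table>
      <title>Show only . [$cols$]</title>
      <search base="base">
        <query>eval run_on_change = "$cols$" | table $cols$</query>
      </search>
    </table>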
I need to install a Splunk add-on in my distributed Splunk environment. The add-on contains modular/scripted inputs to pull the data and store it in a custom index. I need your help understanding where I should install this add-on. What if I install it on all the tiers (HF, indexer, search head) and enable the input only on the HF? Will the indexer receive the data?