
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I have a log like the one below, which contains JSON:

FEATURES={
  "featureDetails":[
    { "featureName":"TOKEN_VALIDATION", "addedIn":"1.0.7", "description":"This feature is used to Validate the JWT token" },
    { "featureName":"REQUETS_VALIDATION", "addedIn":"1.0.7", "description":"This feature is used to Validate request URL" },
    { "featureName":"REQUEST_PAYLOAD_VALIDATION", "addedIn":"1.0.7", "description":"This feature is used to Validate request body" },
    { "featureName":"RESPONSE_PAYLOAD_VALIDATION", "addedIn":"1.0.7", "description":"This feature is used to Validate response body" },
    { "featureName":"AOP", "addedIn":"1.0.6", "description":"This feature is used to check method execution time" },
    { "featureName":"TIBCO_COMMUNICATOR", "addedIn":"1.0.8", "description":"This feature is used to connect Benefits service " },
    { "featureName":"SECRETS_SECURE", "addedIn":"1.0.7", "description":"This feature is used to Validate SECRETS" }
  ],
  "versionHistory":[
    { "versionNumber":"1.0.0", "featuresAdded":"Common Exception Handling Capability" },
    { "versionNumber":"1.0.0", "featuresAdded":"Common Exception Handling Capability" }
  ]
}

The event contains two arrays: one is featureDetails and the other is versionHistory. I want a table like the one below, built from the versionHistory array:

versionNumber    featuresAdded
1.0.0            Common Exception Handling Capability
1.0.0            Common Exception Handling Capability
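A sketch of pulling the versionHistory rows out of that payload, outside Splunk (in SPL this kind of extraction is usually done with spath; the Python below just illustrates the structure, using a trimmed copy of the JSON from the post):

```python
import json

# Trimmed copy of the JSON that follows "FEATURES=" in the event
raw = (
    '{"featureDetails":[{"featureName":"TOKEN_VALIDATION","addedIn":"1.0.7"}],'
    '"versionHistory":['
    '{"versionNumber":"1.0.0","featuresAdded":"Common Exception Handling Capability"},'
    '{"versionNumber":"1.0.0","featuresAdded":"Common Exception Handling Capability"}]}'
)

features = json.loads(raw)

# One table row per element of the versionHistory array
rows = [(v["versionNumber"], v["featuresAdded"]) for v in features["versionHistory"]]
for version, added in rows:
    print(version, added)
```

The key point is that versionHistory is an array of objects, so each element becomes one table row regardless of what featureDetails contains.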
I have configured the REST API Modular Input to query data from an application we need. The REST input is configured to do a GET and everything works as expected (i.e. it connects, authenticates and returns data into an index). The problem is that it only returns one entry from the query I am making. The parsing is fine (JSON) and I can see lots of key-value pairs as expected, but the returned result is always the same single record. This is despite changing the REST configuration to do multiple queries or removing any flags that might constrain this. If I run the same query natively, for example with curl, I get back over 1200 records.
Hi all, I'm currently trying to add a certificate to my Splunk app to allow it to communicate with an intranet site using NTLM authentication. I currently have my cert in the bin folder as noted below. What's the correct value for vpath? I've tried different variations and nothing seems to work. Can someone help confirm the correct string value to assign to the cert path?

vpath="cert/sp.pem"
vpath="$splunk_home/etc/apps/TA-myapp/bin/cert/sp.pem"

response = requests.get(url, auth=auth, headers=header, verify=vpath)

The cert lives at: $splunk_home/etc/apps/TA-myapp/bin/cert/sp.pem
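One common pitfall here is that requests resolves a relative verify path against the process's current working directory, not the script's own folder. A sketch of building an absolute path instead (the helper name is hypothetical; the app path is taken from the post):

```python
import os

def cert_path(script_dir: str) -> str:
    # Resolve the bundled cert relative to the script's own directory,
    # not the working directory of whatever process launched it.
    return os.path.join(script_dir, "cert", "sp.pem")

# Inside the app's script you would typically pass
# os.path.dirname(os.path.abspath(__file__)); the literal below
# mirrors the path quoted in the post.
resolved = cert_path("/opt/splunk/etc/apps/TA-myapp/bin")
```

The resolved string can then be handed to requests.get(..., verify=resolved).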
Trying to send data to Splunk using 'services/receivers/stream', but the data is not logged as separate events; instead it all gets logged as one event.

Code:

HttpURLConnection huc = (HttpURLConnection) url.openConnection();
huc.setRequestProperty("x-splunk-input-mode", "streaming");
huc.setRequestMethod("POST");
huc.setDoOutput(true);
String content = "abc";
huc.getOutputStream().write(content.getBytes());
huc.getOutputStream().flush();
content = "xyz";
huc.getOutputStream().write(content.getBytes());
huc.getOutputStream().flush();
huc.getOutputStream().close();
Hi, I am fairly new to Splunk. I have been going down a lot of rabbit holes and it's probably time I reached out for some guidance. I work as part of a team that looks after a fleet of audiovisual (AV) systems. My Splunk searches return strings that populate these three fields: RoomName, AttributeID and RawSerialValue. There are two AttributeIDs I am interested in: "Config Filename" and "Processor Firmware". My individual searches on both return their values in the RawSerialValue field. I need to run a search that returns the RoomName for every AV system that has the same combination of "Config Filename" and "Processor Firmware". To be clear, systems can have the same "Config Filename" but different "Processor Firmware", and vice versa. My efforts to combine the two either return no results or strip out results that should be returned. If someone can suggest the best method to use, I'd appreciate it.

This search returns the RoomNames and groups them according to their "Config Filename":

index=av sourcetype=Fusion10PROD AttributeID="Config Filename" RawSerialValue="*" | dedup RoomName | top limit=20 RawSerialValue

And this returns the RoomNames and groups them according to their "Processor Firmware":

index=av sourcetype=Fusion10PROD AttributeID="Processor Firmware" RawSerialValue="*" | dedup RoomName | top limit=20 RawSerialValue

Thanks in advance, Regards, John
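The grouping being asked for can be sketched in Python with invented sample values (in SPL the same shape is typically reached by pivoting both attributes onto each RoomName, e.g. with stats, then grouping by the pair):

```python
from collections import defaultdict

# Invented sample events: (RoomName, AttributeID, RawSerialValue)
events = [
    ("Room1", "Config Filename", "lecture.lpz"),
    ("Room1", "Processor Firmware", "1.601"),
    ("Room2", "Config Filename", "lecture.lpz"),
    ("Room2", "Processor Firmware", "1.601"),
    ("Room3", "Config Filename", "lecture.lpz"),
    ("Room3", "Processor Firmware", "1.603"),
]

# First pivot the two attributes of interest onto each room...
rooms = defaultdict(dict)
for room, attr, value in events:
    rooms[room][attr] = value

# ...then group rooms that share the same (config, firmware) combination
groups = defaultdict(list)
for room, attrs in rooms.items():
    key = (attrs.get("Config Filename"), attrs.get("Processor Firmware"))
    groups[key].append(room)
```

Note that Room3 lands in its own group even though its config filename matches, because the firmware differs; that is the "same combination" requirement from the post.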
I'm trying to fix up some of the props.conf for the Windows Infrastructure app to match our Windows XML logs, but some of the fields needed are only provided after a lookup. Is there any way to extract fields post lookup?
I'm using the append command multiple times for different searches on the same index. It's parsing my job. Please advise a solution.
I created two Single Value visuals that hide different sets of panels when clicked. In a sense, they act as buttons. I want to "highlight" the single value the user clicked by either changing the font color of the single value or the background of its panel. Is it possible to do this without adding any CSS or JS files? I tried this:

<panel id="MyPanel">
  <!--search and query tags-->
  <option name="drilldown">all</option>
  <option name="colorMode">block</option>
  <option name="height">70</option>
  <option name="useColors">1</option>
  <drilldown>
    <set token="FirstClickedTok">true</set>
    <unset token="SecondClickedTok"></unset>
  </drilldown>
  <html depends="$FirstClickedTok$">
    <style>
      #MyPanel .single-result {fill: rgb(255,0,0) !important;}
    </style>
  </html>
</panel>

but the font color is set immediately, even when the token is not yet set to true. I also tried:

<style>
  #MyPanel .panel-body {background-color: red !important;}
</style>

This one only adds a little colored row at the bottom of the single value rather than coloring the whole panel.
Hello Splunkers, I have my firewall sending its logs to a CentOS server where I have the Splunk Universal Forwarder configured to listen on UDP 514 and forward to the indexer. Although I have reviewed the configuration, I wasn't able to find the reason it is not working. Note: I have tested inputs.conf and outputs.conf and they are working for the files I'm monitoring. What am I missing here? Any help would very much be appreciated!
Hi, what's a safe way for a public application to submit REST POST calls to an on-prem Splunk Enterprise instance? Ideally I'm looking to do this:

- The application makes a REST POST call to a URL with the log entry as payload
- The URL resolves to a cloud VM
- The cloud VM forwards to the on-prem instance via VPN

I first thought of using the Universal Forwarder on the cloud VM, but HEC is not supported on it. Other than running a heavy forwarder on the cloud VM, is there a better way of doing this? I've considered running my own API on the cloud VM as the forwarder, but I'd prefer to go with something that is universally tested and hardened, i.e. the UF. Thank you
Hi, I am trying to index data from a local directory, but the line breaking is not executing correctly. The expression I am using is ([\r\n]+); however, it is indexing more than 3 events as just one. Is there any way to define a line break when a certain character sequence is found? For example, when it finds ";;;"?
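If the intent is to break events on ";;;" rather than on newlines, a props.conf sketch along these lines should do it (the sourcetype name is a placeholder; the first capture group of LINE_BREAKER is what Splunk consumes as the event boundary, so the ";;;" itself is removed from the indexed events):

[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = (;;;)

Setting SHOULD_LINEMERGE = false makes Splunk rely on LINE_BREAKER alone instead of trying to re-merge lines afterwards.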
Need some help. I looked at several examples but it's not that straightforward. The rex and split functions were my best bet, but I never got anything that really worked well. The split function only gave me the first part, and the rex function did not get me a variable assignment. I need to be able to get a variable, preferably through the eval function, that captures the sid number in a variable like mySid. See the sample string below. I just need the number, which in the example below is 2008518. The sid number is going to be in different locations in the text string, i.e. not at the same exact absolute position. Appreciate the help!

Example string:

A suspicious packet was sent [sid:2008518] -- Detected an attempt to make a configuration change in SQL DB using the legit 'sp_configure' command The xp_cmdshell option is a SQL Server server configuration option that enables system administrators to control whether the xp_cmdshell extended stored procedure can be executed on a system. By default, the xp_cmdshell option is disabled on new installations. Before enabling this option, it is important to consider the potential security implications associated with the use of this option. It is proposed to disable the xp_cmdhsell option.
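A named capture group anchored on the literal "sid:" handles the "different locations in the string" requirement, since the regex scans rather than using an absolute position. The same pattern an SPL rex would use can be checked in Python against the sample string:

```python
import re

sample = ("A suspicious packet was sent [sid:2008518] -- Detected an attempt "
          "to make a configuration change in SQL DB")

# Named capture group, same idea as rex "\[sid:(?<mySid>\d+)\]" in SPL:
# grab the run of digits after "sid:" wherever it appears
match = re.search(r"\[sid:(?P<mySid>\d+)\]", sample)
mySid = match.group("mySid") if match else None
```

The capture comes back as a string; cast it to int if numeric comparison is needed.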
Hi. What I have: a list of events with multiple <Key,Value> pairs, for example:

event1: attributes:{"test__c":90, "abc":10, "now__c":10}
event2: attributes:{"bcf":90, "abc":10}
event3: attributes:{"testing__c":10, "abc":10, "now__c":100}

As you can see, some events have attribute fields whose names contain the substring "__c" and the rest do not.

What I want: to iterate over the list of events and calculate, per event, a total of the values of all fields whose name contains the substring "__c", and display that in a table like below:

Total__C
100 (event1 has two __c fields, whose values add to 100)
0 (event2 has no __c fields, so it defaults to 0)
110 (event3 has two __c fields)

What I tried: I used foreach as below for one event, but I am not sure how to do it for all events iteratively:

fields * | foreach *__c [ eval TotalCustom = TotalCustom + '<<FIELD>>' ] | table TotalCustom

Can someone help me with this?
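The per-event total can be sketched in Python over the three sample attribute sets from the post (the second event's JSON is quoted loosely in the post, so it is normalized here):

```python
events = [
    {"test__c": 90, "abc": 10, "now__c": 10},
    {"bcf": 90, "abc": 10},
    {"testing__c": 10, "abc": 10, "now__c": 100},
]

# For each event, add up only the fields whose name ends in "__c";
# an event with no such field falls back to 0, matching the expected table.
totals = [sum(v for k, v in e.items() if k.endswith("__c")) for e in events]
```

The key behaviors to reproduce in SPL are the per-event scope of the sum and the default of 0 when no matching field exists (in foreach terms, initializing the accumulator before adding to it).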
I'm using the Microsoft Azure Add-on for Splunk to read from an Event Hub in Azure. I am using Splunk Cloud and a heavy forwarder in Azure. Two problems:

1st: The data is showing up as one big field of JSON. I've tried to extract it in Splunk Cloud, but it's getting mangled.

2nd: Can I limit this? 75% of my fields are useless and are taking up space.

Can anyone help me out with either issue? I am using a heavy forwarder. Splunk support tells me to use spath, but how do I do this when parsing Event Hub data? Do I need a different add-on?
I have a search that outputs a table with two columns, one for log source and one for total count (using stats count). I'd like to add additional rows to the table where I can enter a custom field name in the "Log Source" column, with the total count column left empty for that row. This is for exporting the results of the table to CSV; the additional rows will be empty so I can enter whatever value I want in the Total Count column and save it. This is how I would want it to look after the search is run (where Custom1 and Custom2 are the names of the empty rows I will be adding). Is this possible, and how would I go about it? Thank you!

Log Source    Total Events
A             20
B             100
C             50
Custom1
Custom2

Current query:

index=A OR index=B OR index=C
| eval "Log Source"=case(index == "A", "indexA", index == "B", "indexB", index == "C", "IndexC")
| stats count by "Log Source"
| append [| makeresults | eval indexA="", indexB="", indexC="" | table indexA indexB indexC | transpose column_name="Log Source"]
| stats max(count) AS count BY "Log Source"
| fillnull value=0 count
I'm trying to use a lookup table to find records in my database, but I'm not having much luck. It may just be that I'm asking too much of Splunk. My lookup table consists of 4 fields: mrch_num, term_num, start_time, and end_time. mrch_num and term_num are straight-up fields from my database; start_time and end_time are values I want to compare against records that match the mrch_num and term_num. My query so far looks like this:

`search` [|inputlookup key_requests.csv | table mrch_num,term_num,$start_time,$end_time] j_timestamp>=start_time j_timestamp<=end_time | `transtuff`

The object is to find transactions with mrch_num=mrch_num, term_num=term_num and j_timestamp between the values of start_time and end_time. I don't think I'm properly passing start_time and end_time from my lookup table into the search. Can anyone give me some pointers?

Thanks, Rich
Let's say I am using a visualization to map the relationships between different "objects" (my use case isn't IT-specific, but you could also call these "objects" devices), and up to this point my data looks like the table below. This table shows the connection from one object to another (A goes to B, B goes to C, etc.)

FROM    TO
A       B
B       C
C       D
X       C
Y       C
C       Z

Now, what I want to do in order to de-clutter my visualization is "collapse" paths based on the object. I've determined objects B and C are nonessential to my use case and would like to minimize the map by relating A to D instead of A to B to C to D, and X to Z instead of X to C to Z. So my final goal is for the table to look like:

FROM    TO
A       D
X       Z
Y       Z

Is it possible to structure a search to perform this transformation, or is the only solution to write a custom search command?
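A naive contraction can be sketched in Python: remove each nonessential node and rewire every incoming edge to every outgoing edge. Note this is only an approximation of the post's goal: because C has three inputs and two outputs, the cross-product produces a few extra pairs (A to Z, X to D, Y to D) beyond the desired table, and reproducing that exact table would need path-level information beyond the FROM/TO pairs alone.

```python
def collapse(edges, nodes_to_remove):
    """Remove each node, reconnecting all of its predecessors
    to all of its successors."""
    edges = set(edges)
    for n in nodes_to_remove:
        incoming = {u for (u, v) in edges if v == n}
        outgoing = {v for (u, v) in edges if u == n}
        # Drop every edge touching n, then bridge across it
        edges = {(u, v) for (u, v) in edges if n not in (u, v)}
        edges |= {(u, v) for u in incoming for v in outgoing}
    return edges

# The FROM/TO table from the post
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("X", "C"), ("Y", "C"), ("C", "Z")]
collapsed = collapse(edges, ["B", "C"])
```

Whether the extra pairs are acceptable depends on the use case; if they are not, the raw events would need to carry some identifier tying each hop to a specific end-to-end path.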
Hi, I want to display an array by its index on a Splunk dashboard. I send an array from MATLAB to Splunk in a single event:

y=1,2,3,4,5,6,7,8,9,10

I want to display a line chart of these y values by index. Thanks!
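The reshaping a line chart needs, sketched in Python (in SPL the equivalent is usually splitting the comma-separated field into a multivalue field with makemv, expanding it with mvexpand, and numbering the rows, e.g. with streamstats):

```python
raw = "y=1,2,3,4,5,6,7,8,9,10"

# Turn the single-event array into (index, value) rows,
# the long/tall shape a line chart expects
values = raw.split("=", 1)[1].split(",")
points = [(i, int(v)) for i, v in enumerate(values, start=1)]
```

Each (index, value) pair then becomes one row of the chart's underlying table, with index on the x-axis and y on the y-axis.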
I have been able to set the value of two tokens any time the time picker is changed by using the code below:

<input type="time" searchWhenChanged="true" token="time_token">
  <label>Time Range</label>
  <default>
    <earliest>-7d@h</earliest>
    <latest>now</latest>
  </default>
  <change>
    <eval token="relstart_time">strftime(relative_time(now(), 'time_token.earliest'), "%m/%d/%Y %T")</eval>
    <eval token="relend_time">strftime(relative_time(now(), 'time_token.latest'), "%m/%d/%Y %T")</eval>
  </change>
</input>

However, on page load both tokens reflect the value of now() instead of the default earliest/latest values, and the values will not update until I change the time picker. I have attempted multiple variations of the settings below in the <init> tag. Is this the right way to go, and if so, can anyone help with the correct syntax?

<init>
  <eval token="relstart_time">strftime(relative_time(now(), -7d@h), "%m/%d/%Y %T")</eval>
  <eval token="relend_time">strftime(now(), "%m/%d/%Y %T")</eval>
</init>
While debugging an issue where a forwarder would not send a specific log to our main Splunk instance, I found this great post (among others): https://community.splunk.com/t5/Deployment-Architecture/Unix-Forwarder-is-not-Sending-Logs/m-p/167347/highlight/true#M6237

In inputs.conf on the Universal Forwarder (in C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\inputs.conf), the fix was adding initCrcLength = 2048 to the specific log's stanza. Apparently the default is initCrcLength = 256. This made me start thinking: how many other forwarder logs are we not getting/indexing that I'm not aware of? The search below showed me that 4 of our forwarders (of ~22 in total) were showing this same error for various specific log files (so those specific logs have not been getting indexed):

index=_internal source=*splunkd.log host=* "seekptr checksum"

While this is very unfortunate, my question is: should we be manually setting something like initCrcLength = 2048 on every one of our forwarders (and on future new forwarders)? I assume the downside is increased RAM and CPU usage on the forwarders (but this is not an issue for us, as volume is not very high and resources are plentiful). Is there anything else I'm not considering as a downside?

Question 2: I assume we can set this "globally" in a forwarder's inputs.conf by simply placing:

[default]
initCrcLength = 2048

and it will apply to all stanzas (unless a stanza overrides initCrcLength, of course). Right? Thanks!
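Why a longer initCrcLength helps can be sketched in Python, with zlib.crc32 standing in for the fingerprint (Splunk's internal CRC is not literally zlib's, so this only illustrates the collision mechanism): two log files that share an identical first 256 bytes, e.g. the same banner or field-list header, look like the same file to a 256-byte fingerprint, but differ once more bytes are hashed.

```python
import zlib

# Two different log files that start with the same 256-byte header,
# as is common for IIS/W3C-style logs with a fixed preamble
header = b"#Version: 1.0\n#Fields: date time s-ip\n".ljust(256, b"#")
file_a = header + b"2023-01-01 10:00:00 10.0.0.1\n"
file_b = header + b"2023-01-02 11:30:00 10.0.0.2\n"

# Fingerprint over the first 256 bytes: identical, so a tracker
# keyed on it would treat these as the same (already-read) file
crc_256_a = zlib.crc32(file_a[:256])
crc_256_b = zlib.crc32(file_b[:256])

# Fingerprint over the first 2048 bytes: the differing log lines
# are now inside the hashed window, so the files are distinguishable
crc_2048_a = zlib.crc32(file_a[:2048])
crc_2048_b = zlib.crc32(file_b[:2048])
```

This is why the symptom is specific logs silently not being indexed: the forwarder believes it has already read a file whose initial bytes match one it tracked earlier.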