All Posts

As @ITWhisperer said, you should use $form.element$ - the $form.element$ variant of the token is the one that holds the raw selected values, whereas the base $element$ holds the final fully expanded token with all the prefixes, suffixes and delimiters. See your slightly modified example:

<form version="1.1" theme="light">
  <label>test2</label>
  <fieldset submitButton="false">
    <input type="multiselect" token="element" searchWhenChanged="true">
      <label>Fruit Select</label>
      <choice value="a">Apple</choice>
      <choice value="b">Banana</choice>
      <choice value="c">Coconut</choice>
      <choice value="d">Dragonfruit</choice>
      <choice value="e">Elderberry</choice>
      <choice value="f">Fig</choice>
      <choice value="g">Grape</choice>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter>, </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Form element::$form.element$, Element::$element$</title>
      <single>
        <title>Number of selected fruit</title>
        <search>
          <query>| makeresults | eval selected_total=mvcount(split($form.element|s$,",")) | table selected_total</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
</form>

I am not sure whether | eval selected_total=mvcount(split($form.element|s$,",")) would also work in Dashboard Studio.
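For concreteness, here is how I understand the two tokens to expand when Apple and Banana are selected (my reading of the multiselect semantics - worth verifying in your own environment): $element$ becomes the fully decorated string ("a", "b"), while $form.element$ stays as the raw comma-joined values a,b, so with the |s filter applied the panel query expands to

| makeresults
| eval selected_total=mvcount(split("a,b",","))
| table selected_total

which yields selected_total=2.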
After I "discovered" MAX_EVENTS from solving Why are REST API receivers/simple breaks input unexpectedly? I thought that gave me key to this problem as well, especially confirming that some events I ... See more...
After I "discovered" MAX_EVENTS from solving Why are REST API receivers/simple breaks input unexpectedly? I thought that gave me key to this problem as well, especially confirming that some events I knew got cutoff indeed had > 256 "lines".  Alas, that was not to be. Nevertheless, I finally find the fix and the key is still in props.conf and still explained in Line breaking.   TRUNCATE = <non-negative integer> * The default maximum line length, in bytes. * Although this is in bytes, line length is rounded down when this would otherwise land mid-character for multi-byte characters. * Set to 0 if you never want truncation (very long lines are, however, often a sign of garbage data). * Default: 10000   It turns out that those events were larger than 10,000 bytes!  In short, I previously focused too much on column and forgot to check total event size, blindly trusting that a row in CSV cannot be that long. (That the CSV contains multi-line columns makes the assessment more difficult.) This problem has nothing to do with CSV format as the title of the post implies.  Similar to REST API being a red herring in my other problem, CSV is a red herring here. Like line numbers in that other trouble, limit on total event size is in props.conf, not limit.conf. Even though some of these events do contain > 256 lines, MAX_EVENTS has no effect one way or another when INDEXED_EXTRACTIONS = csv is in place. Change to TRUNCATE can be set per sourcetype from Splunk Web.  No restart needed. For anyone who sees this in the future, the final clue came from this warning in Event Preview while using Splunk Web Upload: Final triage came upon me when I extracted those failing events in a test file and saw that they didn't trigger this warning in Instance 2.  Following a clue given by inventsekar in https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-the-line-truncating-warning/m-p/370655 to examine that "good" instance, TRUNCATE override was shown in localized system props.conf!  (I couldn't find any notes in my previous work that indicated this change.) But to make things more interesting, you may not be able to see that warning if the absolute majority of events do not exceed default TRUNCATE value.  This lack of warning really blindsided my diagnosis.
First question - is the output a single row or are there multiple rows expected? If the latter, what is the entity that separates the rows - is it REFERENCE_VAL, and if so, how does one correlate REFERENCE_VAL to RELATED_VAL?

This is the ONE-row solution:

index=someIndex searchString OR someSearchString
| rex field=_raw "stuff(?<REFERENCE_VAL>)$"
| rex field=_raw "stuff(?<RELATED_VAL>)$"
| stats min(eval(if(isnotnull(REFERENCE_VAL), _time, null()))) as EVENT_TIME
        min(eval(if(isnotnull(RELATED_VAL), _time, null()))) as RELATED_TIME
| eval timeBand=RELATED_TIME-EVENT_TIME
| where abs(timeBand)<2000

which will only give a result if the two times are within 2000 seconds of each other (note that _time is in epoch seconds, so use a threshold of 2 if you mean two seconds), but I suspect you are expecting more than one row...
I'm still seeing this behavior after an upgrade to 9.2.1 (via rpm). This system was running an older version of Splunk: the "splunk" user did exist, and logs were showing up in the indexers/web search. The service is running via systemd. Upon starting, it chowned everything to the wrong user (splunkfwd), so it couldn't access its own config and exited. lol. Please Splunk, do not force user names or group names and do not change them during an update! It is not the (unix) way. (Don't get me started about the main splunk process being able to modify its own config and binaries and then execute the altered binaries. That just isn't safe.) I reverted to a snapshot; at least splunk runs and logs again. Unfortunately, this is a compliance failure at modern companies.

> Now tell me again why this stunt was necessary.
From the documentation, I believe that the Task Server should start after I set up JAVA_HOME, but it has been failing to start, with only the message "Failed to restart task server." I am running an instance of Splunk version 8.1.5. When installing DBx 3.17.2, I installed OpenJDK 8, but DBx stated that it required Java 11, so I installed java-11-openjdk version 11.0.23.0.9. Task Server JVM Options were automatically set to "-Ddw.server.applicationConnectors[0].port=9998". Is there anything else missing? Is there a way to debug this issue? I looked into the internal logs from this host but have not been able to find anything that stands out. Thanks for any insights and thoughts.
Hello Splunk Community! There are clear instructions on how to import services from a CSV file in ITSI. However, I can't find a way to export the same data into a CSV file. How can I export service dependencies from ITSI? Thanks.
Both HEC and the UF support acknowledgements. HEC supports higher volume, but both have good throughput; we'd need to know more about how much data you intend to send to determine which is better. The data sent to HEC has to be in a particular format and the ACKs must be checked periodically, so there must be a client maintained by the customer. There is no additional cost (from Splunk) for either approach. Yes, you will want an add-on, especially if you use the UF (one may also be needed for HEC). The add-on ensures the data is onboarded properly and defines the fields to be extracted.
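If it helps, here is a rough sketch of the HEC-with-acknowledgement flow the client has to implement; the host, token, and channel GUID are placeholders, and the HEC token must have indexer acknowledgement enabled:

# Send one event; with acks enabled, the response carries an ackId.
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -H "X-Splunk-Request-Channel: 11111111-2222-3333-4444-555555555555" \
  -d '{"event": {"eventType": "login_failure"}, "sourcetype": "myservice:json"}'
# => {"text":"Success","code":0,"ackId":0}

# Poll periodically until the ack flips to true (event safely indexed).
curl -k "https://splunk.example.com:8088/services/collector/ack?channel=11111111-2222-3333-4444-555555555555" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"acks":[0]}'
# => {"acks":{"0":true}}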
I believe this was a misunderstanding on my part about how the episode views work. The "Events Timeline" screen looks like I would expect, with one alert, and the timeline shows it was red, then moved to green. The "All Events" view appears to be a running list of all events that drive state changes.
"Find event in one search, get related events by time in another search" Found some related questions but could not formulate a working solution from them....  Of course this doesn't work, but maybe... See more...
"Find event in one search, get related events by time in another search" Found some related questions but could not formulate a working solution from them....  Of course this doesn't work, but maybe it will make clear what is wanted, values in 2nd search events within milliseconds (2000 shown) of first search's event....     index=someIndex searchString | rex field=_raw "stuff(?<REFERENCE_VAL>)$" | stats _time as EVENT_TIME | append (search index=anIndex someSearchString | rex field=_raw "stuff(?<RELATED_VAL>)$" | eval timeBand=_time-EVENT_TIME | where abs(timeBand)<2000 | stats _time as RELATED_TIME) | table EVENT_TIME REFERENCE_VAL RELATED_TIME RELATED_VAL    
@bowesmana I'll test this out and report back. If I can pass the captured variables, it should work. Search filters on roles might be a bit too limiting, though admittedly I'm not sure. Most users with access to Splunk already have roles, so unless the search filter would apply only to the indexes in the new role (i.e. users with Role A have access to index A and Role B has access to a filtered search of index B) it might not work for me.
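For what it's worth, a hypothetical authorize.conf sketch of the scoping being discussed (role and index names are made up; also note that when a user holds multiple roles, Splunk combines the roles' search filters, so the combined effect for multi-role users is worth testing):

[role_a]
srchIndexesAllowed = index_a

[role_b]
srchIndexesAllowed = index_b
srchFilter = sourcetype=filtered_data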
Did this work? Did you discover that you had to implement additional steps to make it work? Thanks, Farhan
Yes, it is possible and done often. It requires Professional Services, though.
50k is the limit on a subsearch when it is used with the join command. The "normal" subsearch limit is much lower - it's 10k results.
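Both caps live in limits.conf; a sketch of where I believe the defaults sit (check limits.conf.spec for your version before relying on these):

[subsearch]
maxout = 10000             # default cap on results returned by a subsearch

[join]
subsearch_maxout = 50000   # higher cap applied when the subsearch feeds join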
OK. So this is the second case I mentioned. How do you decide then if it's a single session or two separate sessions? Are the events occurring repeatedly while the user is logged in?
Hello! Would anyone know whether it is possible to migrate an on-prem SmartStore to Splunk Cloud? How would that happen? Thank you!
It's hard to see, but what is needed is for the "Message": line to be the breaking line and for the "TimeStamp": line to be the first line of the whole event.

"Message": "User query failed: Connection ID: 55, User: piadmin, User ID: 1, Point ID: 247000, Type: summary, Start: 14-Jun-24 07:54:50, End: 14-Jun-24 07:56:20, Mode: 5, Status: [-11059] No Good Data For Calculation",    <------- event breaks here
"TimeStamp": "\/Date(1718366180157)\/",    <------- event starts here

In the example I sent, it's hard to see the break after "Message" and before "TimeStamp" clearly because they look like one big line.
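Assuming the raw data really does look like the snippet above, a props.conf sketch along these lines should get the breaking you describe (the sourcetype name is a placeholder): BREAK_ONLY_BEFORE starts a new event at each "TimeStamp": line, which automatically leaves the preceding "Message": line as the last line of the prior event.

[my_json_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = "TimeStamp":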
We are looking to integrate Splunk SIEM with our microservice: we want to send events from the service to Splunk and then configure alerts based on eventType. As we understand it, there are 2 approaches: the Universal Forwarder and the HTTP Event Collector. We are inclining more towards using HEC as it has the ability to send acks for events; the challenge with the Universal Forwarder is that it needs to be managed by the customer where Splunk will be running, and the volume of events is also not that much. Can someone help us understand the cost involved in both approaches and the scaling of HEC if the number of events increases due to a spike? Also, should we go with building a Technology Add-on or an app which can be used along with Splunk Enterprise Security? We want to implement this for Enterprise as well as Cloud. #SplunkAddOnbuilder
  | rex "(?<head1>[^,]*),(?<head2>[^,]*),(?<head3>[^,]*),(?<head4>[^,]*),(?<head5>[^,]*),(?<head6>[^,]*),(?<head7>[^,]*),(?<head8>[^,]*),(?<head9>[^,]*),(?<head10>[^,]*),(?<head11>[^,]*),(?<head12>[... See more...
  | rex "(?<head1>[^,]*),(?<head2>[^,]*),(?<head3>[^,]*),(?<head4>[^,]*),(?<head5>[^,]*),(?<head6>[^,]*),(?<head7>[^,]*),(?<head8>[^,]*),(?<head9>[^,]*),(?<head10>[^,]*),(?<head11>[^,]*),(?<head12>[^,]*)" The fields will be null so you could use fillnull to give them values e.g. | fillnull value="N/A"  
Hi there, for better visibility I built a dashboard for indexer restarts; it is based on the _internal index and /var/log/messages from the indexers themselves. I would like to add info on how a restart was triggered, so I can see whether it came from the manager (WebUI: Configuration Bundle Actions) or was done via the CLI. Does Splunk log this? If yes, where do I find that info? Thanks in advance!
If it shows no results, how can I make it so that the value of that 'epoch' field shows OK versus 'Not OK'?