All Posts

From the documentation, I believe that the Task Server should start after I set up JAVA_HOME, but it has been failing to start with only the message "Failed to restart task server." I am running an instance of Splunk version 8.1.5. When installing DBx 3.17.2, I first installed OpenJDK 8, but DBx stated that it required Java 11, so I installed java-11-openjdk version 11.0.23.0.9. Task Server JVM Options were automatically set to "-Ddw.server.applicationConnectors[0].port=9998". Is there anything else missing? Is there a way to debug this issue? I looked into the internal logs from this host but have not been able to find anything that stands out. Thanks for any insights and thoughts.
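Not a definitive fix, but a first debugging step: DB Connect writes its own log files (named along the lines of splunk_app_db_connect_*.log), which are normally indexed into _internal, so a search like the following sketch may show why the task server refuses to start. The source wildcard is an assumption about the file naming in your DBx version.

index=_internal source=*splunk_app_db_connect* (ERROR OR FATAL)
| sort - _time
| table _time source _raw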
Hello SPLUNK Community! There are clear instructions on how to import services from a CSV file in ITSI. However, I can't find a way to export the same data into a CSV file. How can I export service dependencies from ITSI? Thanks.
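Not a verified answer, but one commonly suggested route is pulling the service definitions back out through ITSI's itoa_interface REST endpoint and writing them to CSV. Everything below is an assumption to check against your ITSI version: the endpoint path, the report_as=text behaviour of the rest command (raw response landing in a field named value), and the services_depends_on field name.

| rest /servicesNS/nobody/SA-ITOA/itoa_interface/service report_as=text
| spath input=value path={} output=service
| mvexpand service
| spath input=service
| table title services_depends_on{}.serviceid
| outputcsv itsi_service_dependencies.csv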
Both HEC and the UF support ack. HEC supports higher volume, but both have good throughput. We'd need to know more about how much data you intend to send to determine which is better. The data sent to HEC has to be in a particular format and ACKs must be checked periodically, so there must be a client maintained by the customer. There is no additional cost (from Splunk) for either approach. Yes, you will want an add-on, especially if you use the UF (but one may also be needed for HEC). The add-on ensures the data is onboarded properly and defines the fields to be extracted.
I believe this was a misunderstanding on my part of how the episode views work. The "Events Timeline" screen looks like I would expect, with one alert, and the timeline shows it was red, then moved to green. The "All Events" view appears to be a running list of all events that drive state changes.
"Find event in one search, get related events by time in another search" Found some related questions but could not formulate a working solution from them....  Of course this doesn't work, but maybe... See more...
"Find event in one search, get related events by time in another search" Found some related questions but could not formulate a working solution from them....  Of course this doesn't work, but maybe it will make clear what is wanted, values in 2nd search events within milliseconds (2000 shown) of first search's event....     index=someIndex searchString | rex field=_raw "stuff(?<REFERENCE_VAL>)$" | stats _time as EVENT_TIME | append (search index=anIndex someSearchString | rex field=_raw "stuff(?<RELATED_VAL>)$" | eval timeBand=_time-EVENT_TIME | where abs(timeBand)<2000 | stats _time as RELATED_TIME) | table EVENT_TIME REFERENCE_VAL RELATED_TIME RELATED_VAL    
@bowesmana I'll test this out and report back. If I can pass the captured variables, it should work. Search filters on roles might be a bit too limiting, though admittedly I'm not sure. Most users with access to Splunk already have roles, so unless the search filter would apply only to the indexes in the new role (i.e. users with Role A have access to index A and users with Role B have access to the filtered index B), it might not work for me.
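For context, a hedged authorize.conf sketch of the pattern being discussed; the role and index names are made up, and srchFilter applies to every search a member of that role runs, not just to one index.

[role_role_a]
srchIndexesAllowed = index_a

[role_role_b]
srchIndexesAllowed = index_b
srchFilter = sourcetype=allowed_sourcetype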
Did this work? Did you discover that you had to implement additional steps to make it work?   Thanks, Farhan
Yes, it is possible and done often.  It requires Professional Services, though.
50k is the limit on a subsearch when used with the join command. The "normal" subsearch limit is much lower - it's 10k results.
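For reference, these limits live in limits.conf; the values shown are the usual defaults, which you can confirm in limits.conf.spec for your version.

[join]
subsearch_maxout = 50000

[subsearch]
maxout = 10000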
OK. So this is the second case I mentioned. How do you decide then if it's a single session or two separate sessions? Are the events occurring repeatedly while the user is logged in?
Hello! Would anyone know whether it is possible to migrate an on-prem SmartStore to Splunk Cloud? How would that happen? Thank you!
It's hard to see, but what is needed is for the "Message": line to be the breaking line and for the "TimeStamp": line to be the first line of the whole event.
"Message": "User query failed: Connection ID: 55, User: piadmin, User ID: 1, Point ID: 247000, Type: summary, Start: 14-Jun-24 07:54:50, End: 14-Jun-24 07:56:20, Mode: 5, Status: [-11059] No Good Data For Calculation", ------- event breaks here
"TimeStamp": "\/Date(1718366180157)\/",  ---- event starts here
In the example I sent it's hard to see the break after Message and before TimeStamp clearly because they look like one big line.
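A hedged props.conf sketch of how that breaking is typically configured; the sourcetype name is made up, and the LINE_BREAKER assumes each new event should start at the "TimeStamp" key (the separating comma/newline captured in the first group is discarded).

[my_json_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n,]+\s*)"TimeStamp"
TRUNCATE = 10000
TIME_PREFIX = "TimeStamp":\s*"\\?/Date\(
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 40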
We are looking to integrate Splunk SIEM with our microservice; we want to send events from the service to Splunk and then configure alerts based on eventType. As we understand it, there are 2 approaches: the Universal Forwarder and the HTTP Event Collector. We are leaning more towards using HEC as it has the ability to send acks for events, while the challenge with the Universal Forwarder is that it needs to be managed by the customer on the hosts where it runs; the volume of events is also not that high. Can someone help us understand the cost involved in both approaches and the scaling of HEC if the number of events increases due to a spike? Also, should we build a Technology Add-on or an app which can be used along with Splunk Enterprise Security? We want to implement this for Enterprise as well as Cloud. #SplunkAddOnbuilder
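To illustrate the end goal, a hedged sketch of the kind of alert search this would enable once the events are in Splunk, whichever ingestion path is chosen; the index, sourcetype, and eventType value are hypothetical.

index=microservice_events sourcetype=my_service:events eventType="PAYMENT_FAILED"
| stats count by eventType

Saved as a scheduled alert with a trigger condition of "number of results > 0", this fires whenever matching events arrive in the chosen window.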
  | rex "(?<head1>[^,]*),(?<head2>[^,]*),(?<head3>[^,]*),(?<head4>[^,]*),(?<head5>[^,]*),(?<head6>[^,]*),(?<head7>[^,]*),(?<head8>[^,]*),(?<head9>[^,]*),(?<head10>[^,]*),(?<head11>[^,]*),(?<head12>[... See more...
  | rex "(?<head1>[^,]*),(?<head2>[^,]*),(?<head3>[^,]*),(?<head4>[^,]*),(?<head5>[^,]*),(?<head6>[^,]*),(?<head7>[^,]*),(?<head8>[^,]*),(?<head9>[^,]*),(?<head10>[^,]*),(?<head11>[^,]*),(?<head12>[^,]*)" The fields will be null so you could use fillnull to give them values e.g. | fillnull value="N/A"  
Hi there, for better visibility I built a dashboard for indexer restarts; it is based on the _internal index and /var/log/messages from the indexers themselves. I would like to add info on how the restart was triggered, so I can see whether the restart came from the manager (WebUI: Configuration Bundle Actions) or was done via the CLI. Does Splunk log this? If yes, where do I find that info? Thanks in advance!
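A hedged starting point rather than a confirmed answer: rolling-restart activity driven by the cluster manager generally shows up in splunkd's own logging, so a keyword search of _internal along these lines may surface it (component names vary by version, which is why this deliberately stays broad).

index=_internal sourcetype=splunkd "rolling restart"
| sort - _time
| table _time host component log_level _raw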
If it shows no results, how can I make it so that the 'epoch' value shows OK versus 'Not Ok'?
I have a few events where data is not available; instead I see commas where the head6 and head7 data is missing. I need a rex so that I get blank output if there is no data, but if data is available then it should provide the output. Below is the event (note the three consecutive commas between UNKNOWN and /test):
head1,head2,head3,head4,head5,head6,head7,head8,head9,head10,head11,head12
sadfasdfafasdfs,2024-06 21T01:33:30.918000+00:00,test12,1,UNKNOWN,,,/test/rrr/swss/customer1/454554/test.xml,UNKNOWN,PASS,2024-06-21T01:33:30.213000+00:00,UNKNOWN
Doing an EVAL in STATS has made my day @ITWhisperer 
Subsearches are usually limited to 50k events, so an all-time subsearch is likely to have been (silently) terminated. Given that your index and source type are the same, try removing the subsearch:

index=ndx sourcetype=src (device="PM4" OR device="PM2") earliest=0 latest=@d
| bucket _time span=1d
| stats max(eval(if(device="PM4",value,null()))) as PM4Val max(eval(if(device="PM2",value,null()))) as PM2Val by _time index
Typically, most extraction takes place at search time, so the most important thing about field extraction is that the format is consistent and can be easily configured (so you don't have cases like escaped characters). From an indexing performance point of view, it's most important that the format is consistent across the whole sourcetype, that the data breaks easily into separate events, and that the timestamp is well-defined and ideally placed at the beginning of the event. If you have all this and your sourcetype has the so-called great eight properly configured, you're good to go. From the practical point of view of parsing the data: avoid any nesting - like "real" data as a somehow-formatted string within a JSON structure, or the other way around, a JSON structure with a syslog header, any escaped strings within strings and so on - it makes writing extractions and searches a painful experience.
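For reference, a hedged sketch of what the so-called great eight props.conf settings typically look like for a well-behaved sourcetype; the stanza name, regexes, and timestamp format are placeholders, not recommendations for any specific data.

[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 35
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)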