All Posts


Hi, I am having trouble writing the correct regex to extract the hostname from the file location. The file path looks like this: /var/log/syslog/splunk-lb/ise/switch01.log. I need only switch01 as the hostname, but Splunk adds switch01.log. The regex I use is (?:[\/][^\/]*){1,}[\/](\w*) Any idea how to modify the regex to match only switch01? Thanks, Alex
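One way to sidestep the problem is to anchor the pattern on the end of the path instead of counting segments. This is a sketch tested in Python against the sample path from the post; applying the same pattern in the poster's Splunk host-extraction config is an assumption about their setup:

```python
import re

# Sample path from the post
path = "/var/log/syslog/splunk-lb/ise/switch01.log"

# Capture everything after the last "/" up to a trailing ".log".
# Anchoring with "$" means only the final path segment can match.
pattern = re.compile(r"([^/]+)\.log$")

m = pattern.search(path)
host = m.group(1) if m else None
print(host)  # -> switch01
```

The `[^/]+` class cannot cross a slash, so there is no need for the repeated-segment prefix at all.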
I am trying the blogs below to use a Splunk Cloud trial in SAP Cloud Integration. However, I am getting the following error when calling the Splunk Cloud trial URL https://<hostname>.splunkcloud.com:8088/services/collector/event

Error:
java.net.ConnectException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target, cause: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

I tried adding the root certificate to my keystore but still get the same error. Also, when trying to add the URL to Cloud Connector (after adding the root certificate to the keystore), I get a handshake error.

Is there a way to resolve this?

Blogs:
https://community.sap.com/t5/technology-blogs-by-members/splunk-part-1-sap-apim-logging-monitoring/ba-p/13444151
https://community.sap.com/t5/technology-blogs-by-members/splunk-part-2-sap-cpi-mpl-logging/ba-p/13446064
Correct. I configured a Linux host with a Splunk Enterprise installation (not a Universal Forwarder) and configured it to retrieve deployment configurations from a second server.
When we go to look at the UI, sometimes it says the app is missing, so the UI is unavailable. When it does let us look at the UI, we can't create anything because the app is missing. I was under the impression from the documentation that it's created the second you open that UI, so I'm unsure what is going on.
| tstats count WHERE index=_internal _index_earliest=-1h _index_latest=now

Just set your time range for the search to be greater than the expected delay:
* earliest_time = -1d@d
* latest_time = +60d@d
Certainly, you can edit the app code by cloning the app into a draft and then editing the carbonblack_connector.py file.
Thanks, somehow I didn't see that step documented anywhere. My errors are gone; now I'm waiting for data to show up on the indexer.
Hello @danspav, Is there a listing of all the different charting options? I've tried what seemed like possible names for the different chart types. Some worked and some didn't, and I'm sure there are some I've missed as well. Is there also an option to switch from a chart to a table? TIA

The following work:
<choice value="line">Line Chart</choice>
<choice value="column">Bar Chart</choice>
<choice value="area">Area</choice>
<choice value="bar">Bar</choice>
<choice value="pie">Pie</choice>
<choice value="scatter">scatter</choice>
<choice value="bubble">bubble</choice>

The following DIDN'T work:
<choice value="box-plot">boxplot</choice>
<choice value="histogram">histogram</choice>
<choice value="horizon">horizon</choice>
<choice value="scatterline">scatterline</choice>
Hi all, I am a Splunk novice, especially when it comes to writing my own queries. I have created a Splunk query that serves my first goal: calculate the elapsed time between two events. Now, goal #2 is to graph that over a time period (e.g. 7 days). What is stalling my brain is that these events happen every day; in fact, they are batches that run on a cron schedule, so they had better be happening every day! So I am unable to just change the time preset and graph this, because I am using the earliest and latest events to calculate the beginning and end. Here is my query to calculate duration:

index=*XYZ" "Batchname1"
| stats earliest(_time) AS Earliest, latest(_time) AS Latest
| eval Elapsed_Time=Latest-Earliest, Start_Time_Std=strftime(Earliest,"%H:%M:%S:%Y-%m-%d"), End_Time_Std=strftime(Latest,"%H:%M:%S:%Y-%m-%d")
| eval Elapsed_Time=Elapsed_Time/60
| table Start_Time_Std, End_Time_Std, Elapsed_Time

Any ideas on how to graph this duration over time so I can develop trend lines, etc.? Thanks all for the help!
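The usual SPL approach to this is to bucket events by day (e.g. `| bin _time span=1d` before the `stats ... by _time`) so that earliest/latest are computed per day rather than across the whole range. The grouping logic can be sketched in Python with made-up timestamps (the event times here are hypothetical, purely for illustration):

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical batch events as epoch seconds. In SPL the equivalent is:
#   | bin _time span=1d | stats earliest(_time) AS Earliest latest(_time) AS Latest by _time
events = [
    1700000000, 1700003600,   # day 1: start and end, 1 hour apart
    1700086400, 1700091800,   # day 2: start and end, 90 minutes apart
]

# Group timestamps by calendar day (the "bin" step)
per_day = defaultdict(list)
for t in events:
    day = datetime.fromtimestamp(t, tz=timezone.utc).date()
    per_day[day].append(t)

# Elapsed minutes per day = (latest - earliest) / 60, as in the original query
elapsed = {day: (max(ts) - min(ts)) / 60 for day, ts in per_day.items()}
for day, minutes in sorted(elapsed.items()):
    print(day, minutes)
```

Each day then yields one duration value, which charts naturally as a trend line over the 7-day window.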
Thanks @deepakc 
It could be a number of things as to why the data is not coming through or not showing.

1. Does whatever you are monitoring have read permissions?
2. Check for typos (index name, etc.)

You can also check the internal logs for clues:

index=_internal sourcetype=splunkd host=neo log_level=INFO component=WatchedFile
| table host, _time, component, event_message, log_level
| sort - _time

What is the output of this command? It shows what's being monitored (assuming it's a Linux host):

/opt/splunk/bin/splunk list inputstatus

Are you able to show us your inputs.conf and describe what you are trying to monitor?
I configured inputs.conf to ingest data into Splunk but am not getting any data. While investigating, I realized the configured source is not showing any data and I can't see the source path in the index in Splunk. Is there a reason why I am not seeing the source after configuring inputs.conf?
That worked! Here is the updated SPL using your concept.

| eval soar_uuid=id+"_RecordedFuture"
| append [search index=rf-alerts soar_uuid]
| eventstats count by soar_uuid
| where count<2
| table soar_uuid, triggered, rule.name, title, classification, url, count
Splunk Works apps are unsupported. They're created by Splunk employees contributing to the community in an unofficial capacity. If an app is not updated, it could be because the author has moved on to a different project or may have left Splunk.
Your output looks correct. Is it not what you expected? If not, what did you expect?
My output:

Inbound file processed successfully GL1025pcardBCAXX8595143691007
Inbound file processed successfully GL1025pcardBCAXX8595144691006
Inbound file processed successfully GL1025pcardBCAXX8732024191001
Inbound file processed successfully GL1025transBCAXX8277966711002
File put Succesfully GL1025pcardBCAXX8595143691007
File put Succesfully GL1025pcardBCAXX8595144691006
File put Succesfully GL1025pcardBCAXX8732024191001
File put Succesfully GL1025transBCAXX8277966711002

In the OR condition I mentioned both keywords, because some of the message fields don't have "File put Succesfully". That's why I gave both strings in the mvdedup.
As I suggested, it might be your data, because the way you appear to be doing it should work. Can you identify values of field1 which should have joined but don't appear to have joined? Also, bear in mind that sub-searches (as used by your inner search on the join) are limited to 50,000 events, so it could be that the missing inner events have fallen outside the 50k limit. Try reducing the timeframe for your search to see if there is a point at which you get the results you expect.
Network data can be notorious for arriving in large volumes; where possible, filter at source.

It's also worth thinking about how you're sending the network data to Splunk. The better syslog options are:

* Splunk's free SC4S (containerised syslog under the hood)
* A syslog server (rsyslog or syslog-ng): send the data there, then let a UF pick it up and forward it to Splunk.

Many people set up TCP/UDP ports on a HF or on Splunk indexers, and this can have various implications for large environments. Not saying you can't do this, but it's not ideal for production; for testing or small environments it's OK.
If the left side is a subset of the right side, then the left side will be the result of a left join.
match uses regex, so the * at the end of each string is probably superfluous (unless you were matching for "File put Succesfull" or "File put Succesfullyyyyy"). Other than that, it looks like your mvdedup/mvfilter should work. Please can you share some example events for which this is not working?
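To illustrate the point about the trailing *: in regex it means "zero or more of the preceding character", not a shell-style wildcard. A quick Python sketch against one of the sample log lines from the thread (SPL's match() takes a regex, so it behaves the same way):

```python
import re

msg = "File put Succesfully GL1025pcardBCAXX8595143691007"

# "y*" only makes the final "y" optional/repeatable; it does not mean
# "match anything after this point" the way a glob "*" would.
print(bool(re.search(r"File put Succesfully*", msg)))  # True
print(bool(re.search(r"File put Succesfully", msg)))   # True (same result without the *)
print(bool(re.search(r"File put Succesfull", msg)))    # True (also matched by the "*" version)
```

So dropping the * changes nothing for these events; it would only matter for strings ending in "Succesfull" or with extra "y" characters.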