All Posts

I don't have one, as I didn't think I needed one for something this simple. I have tried adding this just now, to no avail:

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
Once the app is installed, are there any more steps that need to be taken to ensure that it's applied to searches? Is there a common way to debug the app? It's hard to troubleshoot by simply editing props.conf and uninstalling and reinstalling over and over.
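Not a complete answer, but one common way to check what Splunk actually resolved from props.conf without reinstalling is btool (the sourcetype name `my_sourcetype` below is a placeholder for your own):

```
# Show the effective, merged props.conf settings for one sourcetype stanza;
# --debug prints which .conf file each setting came from.
$SPLUNK_HOME/bin/splunk btool props list my_sourcetype --debug
```

If your stanza doesn't appear, or a setting is coming from an unexpected file, that usually points to the app not being installed where you think it is, or a precedence issue.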
I have a question. I have a table that contains groups of people with their email addresses. I want to use this table in the recipients field when creating an alert to notify users via email. For this, I want to know if I can use $result.fieldname$ to reference that table in the 'To' field when configuring the recipients.
Hi - Recently we upgraded Splunk to version 9.1.3. I noticed that I can no longer start Splunk using "./splunk start --accept-license=yes", forcing me to use "systemctl start Splunkd" to start Splunk. Could you please let me know how to pass --accept-license=yes with "systemctl start Splunkd"?
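Not an authoritative answer, but one approach that may help: license acceptance only needs to happen once per upgrade, so you could accept it in the foreground as the splunk user and then let systemd manage subsequent starts. A sketch, assuming a default /opt/splunk install and a service named Splunkd:

```
# Stop the systemd-managed service first
sudo systemctl stop Splunkd

# Accept the license once as the splunk user; acceptance is recorded,
# so later starts no longer need the flag
sudo -u splunk /opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt
sudo -u splunk /opt/splunk/bin/splunk stop

# From here on, systemd can start Splunk without the flag
sudo systemctl start Splunkd
```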
The search you provided isn't in the format where using Trellis makes sense.  Turn off Trellis for the Single Value visualization you are using and the "distinct_count" will disappear.  
The app with props.conf is separate from the app(s) you may be using on a UF to read data. Putting the app on the SH is my attempt to make it clear the app does not go on the UF.  It *can* be installed on the UF, but it won't have any effect there.  Yes, go to Apps->Manage apps->Uploaded Apps to install your app.
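As a rough sketch of what such a props.conf app can look like on disk (the app and sourcetype names below are placeholders, not from the original posts):

```
$SPLUNK_HOME/etc/apps/my_props_app/
    default/
        app.conf        # minimal app metadata
        props.conf      # [my_sourcetype] line-breaking settings
    metadata/
        default.meta    # permissions; export globally if other apps need it
```

Installing via Apps > Manage Apps just unpacks a .tgz/.spl of this directory into $SPLUNK_HOME/etc/apps/.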
Still struggling a bit. So I only need to create a custom app with those two .conf files? This is separate from my universal forwarder that's actually retrieving the data, correct? Also, what is meant by putting the app on the search head? The only place I know where to install apps is under Apps > Manage Apps.
We are using a standalone Splunk server and there is no monitoring console set up. The internal index logs are still not visible to me, and without them I am not able to troubleshoot further. Please help me with what other workarounds are available to get the data from the internal indexes coming in again. @isoutamo @gcusello
I'm trying to create a workload management rule to prevent users from searching with "All Time". After researching, it seems that best practice is not to run "All Time" searches, as they produce long run times and use more memory/CPU. Are there any types of searches, users, or other exceptions that should be allowed to use "All Time"?
In the Splunk web interface, you can make macros by clicking on Settings (in the upper-right), then in the drop-down menu clicking on "Advanced search" in the KNOWLEDGE section, then clicking on "Search macros". From there, click the green "New Search Macro" button to make a new search macro, which you can give a name.

In the Definition section you enter the SPL that you would like the macro to expand to. This screen will not let you leave the Definition blank, so you can fill in a comment like ```emptymacro```, which makes the macro do nothing. You can leave the other fields blank. After you save the macro, you should change its permissions so it's accessible to you in the app you use to search.

What the guide likely means is that macros let you change how the SPL of a search behaves without editing the SPL of the search itself. For example, you could have a scheduled report that uses a macro for filtering out certain hosts. You can then edit the macro to add new host values without having to edit the scheduled search.
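For reference, the same macro can also be created on disk in macros.conf instead of through Splunk Web; a minimal sketch, assuming the macro name from the guide and an app of your choosing:

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/macros.conf
[windows_rdp_connection_successful_filter]
definition = ```emptymacro```
```

The triple-backtick body is just an SPL comment, so the macro expands to nothing until you replace it with a real filter such as `search NOT host=known_jump_box`.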
Q: Given a `timechart span=1m sep="-" last(foo) as foo last(bar) as bar by hostname`, how would I get a unique value of the bar-* fields? This has to be a standard problem, but I cannot find any writeup of solving it...

Background: I'm processing Apache Impala logs for data specific to a query, server, and pool (i.e., cluster). The data arrives on multiple lines that are easily combined with a transaction and rex-ed out to get the values. Ignoring the per-query values, I end up with:

| fields _time hostname reserved max_mem

The next step is to summarize reserved and max_mem by minute, taking the last value by hostname, summing the reserved values, and extracting a single max_mem value. I can get the data by host using:

| timechart span=1m sep="-" last(reserved) as reserved last(max_mem) as max_mem by hostname

which gives me a set of reserved-* and max_mem-* fields. The reserved values can be summed with:

| addtotals fieldname=reserved reserved-*

Issue: The problem I'm having is getting the single unique value of max_mem back out of it. The syntax `| stats values(max_mem-*) as max_mem` does not work, but gives the idea of what I'm trying to accomplish. I've tried variations on bin to group the values with stats to post-process them, but have gotten nowhere. I get the funny feeling that there may be a way to do something like `| appendcols [... values(max_mem-*) as max_mem]`, but that doesn't get me anywhere either.

A slightly different approach would be leaving the individual reserved values as-is, finding some way to get the single max_mem value out of the timechart, and plotting it as an area chart using max_mem as an overlay (i.e., the addtotals can be skipped). In either case, I'm still stuck getting the unique value from max_mem-* as a single field for propagation with the reserved values.

Aside: The input to this report is taken from the transaction list, which includes memory estimates and SQL statements per query. I need that much for other purposes.
The summary here of last reserved & max_mem per time unit is taken from the per-query events because they are the one place the numbers are available.
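Not a definitive answer, but one common pattern that side-steps the wildcard-field problem entirely is to replace timechart with bin plus two stats passes, so the per-host values are never split into per-host columns (field names assumed from the post above):

```
| bin _time span=1m
| stats last(reserved) as reserved last(max_mem) as max_mem by _time hostname
| stats sum(reserved) as reserved max(max_mem) as max_mem by _time
```

The first stats reproduces the per-host last() values; the second collapses them per minute, giving the summed reserved and a single max_mem without ever creating reserved-* / max_mem-* fields.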
Hi All,

How can I optimize the below query? Can we convert it to tstats?

index=abc host=def* stalled
| rex field=_raw "symbol (?<symbol>.*) /"
| eval hourofday = strftime(_time, "%H")
| where NOT (hourofday>2 AND hourofday<=4)
| timechart dc(symbol) span=15m
| eventstats avg("count") as avg stdev("count") as stdev
| eval lowerBound=-1, upperBound=(avg+stdev*exact(4))
| eval isOutlier=if('count' < lowerBound OR 'count' > upperBound, 1, 0)
| fields _time, "count", lowerBound, upperBound, isOutlier, *
| sort -_time
| head 1
| where isOutlier=1
@Mario.Morelli Yes, I need to forward my event details to the Grafana tool to create a dashboard with Open and Resolved statuses. Open I see, but Resolved I am seeing as "Health Rule Close". I need this shown as Resolved, as per the screenshot.
Can we ingest these logs?
Hello everyone, I am trying to follow this guide https://research.splunk.com/endpoint/ceaed840-56b3-4a70-b8e1-d762b1c5c08c/ and I created the macros that the guide references, but I am unable to create the macro for windows_rdp_connection_successful_filter because I am unsure how to create an empty macro in Splunk Web. The guide says "windows_rdp_connection_successful_filter is a empty macro by default. It allows the user to filter out any results (false positives) without editing the SPL." What does this even mean? We are currently using Splunk Enterprise 9.0.5.
I'm using the script interface for custom REST endpoints, and it uses:

from splunk.persistconn.application import PersistentServerConnectionApplication

I understand it's a package inside Splunk Enterprise, but is there a chance it is uploaded to PyPI?
The Cisco Networks Add-on for Splunk Enterprise is licensed under Creative Commons. This license does not allow for commercial use... I have been unable to track down a way to "purchase" a license that would allow me to use this add-on legally. Is there any chance someone can point me in the right direction?
Reduce the replication factor (and search factor if it's also 3) before removing the indexer.
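As a sketch, assuming an indexer cluster and example factor values (adjust to one less than your current peer count as appropriate):

```
# On the cluster manager: lower the replication and search factors
# before decommissioning the indexer
splunk edit cluster-config -replication_factor 2 -search_factor 2

# Then, on the peer being removed: take it offline so the cluster
# rebalances before shutdown
splunk offline
```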
What are the props.conf settings for [mysourcetype]?