All Posts


Still struggling a bit. So I only need to create a custom app with those two .conf files? This is separate from my universal forwarder that's actually retrieving the data, correct? Also, what is meant by putting the app on the search head? The only location I know of where to install apps is under Apps > Manage Apps.
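For reference, a custom app in this sense is just a directory placed under `$SPLUNK_HOME/etc/apps` on the relevant instance, not something installed through the forwarder. A minimal layout sketch (the app name here is hypothetical):

```
$SPLUNK_HOME/etc/apps/my_custom_app/
├── default/
│   ├── props.conf
│   └── transforms.conf
└── metadata/
    └── default.meta
```

Apps dropped into this directory appear under Apps > Manage Apps after a restart of that instance.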
We are using a standalone Splunk server and there is no monitoring console setup. Internal index logs are still not visible to me, and without them I am not able to troubleshoot further. Please help me with what other workarounds are available to get the data from the internal indexes in again. @isoutamo @gcusello
I'm trying to create a workload management rule to prevent users from searching with "All Time". After researching, it seems that best practice is not to run "All Time" searches, as they produce long run times and use more memory/CPU. Are there any types of searches, users, or other exceptions that should be allowed to use "All Time"?
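As an aside, a common complement to a workload management rule is capping the maximum search window per role in authorize.conf, so only privileged roles can run wide searches at all. A sketch (the role name is hypothetical; verify `srchTimeWin` against the authorize.conf spec for your version):

```
# authorize.conf: cap the maximum search time window for a role
[role_limited_users]
# srchTimeWin is in seconds; 2592000 seconds = 30 days
srchTimeWin = 2592000
```

Roles without the cap (e.g. admins or specific power users) would then be the natural exceptions allowed to use "All Time".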
In the Splunk web interface, you can make macros by clicking on Settings (in the upper right), then in the drop-down menu clicking "Advanced search" in the KNOWLEDGE section, then clicking "Search macros". You can then click the green "New Search Macro" button to make a new search macro, which you can give a name. In the Definition section you enter the SPL that you would like the macro to expand to. This screen will not let you leave the Definition blank, so you can fill in a comment like ```emptymacro```, which makes the macro do nothing. You can leave the other fields blank. After you save the macro, you should change its permissions so it's accessible to you in the app you use to search. What the guide likely means is that with macros you can change how the SPL of a search behaves without editing the SPL of the search itself. For example, you could have a scheduled report that uses a macro to filter out certain hosts; you can then edit the macro to add new host values without having to edit the scheduled search.
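The steps above, expressed as the equivalent macros.conf stanzas (macro name and host values are hypothetical):

````
# macros.conf
[exclude_hosts_filter]
# "Empty" by default: the definition is just an SPL comment, so the macro expands to nothing.
definition = ```no-op filter```

# A populated version that actually filters would instead use something like:
# definition = NOT host IN ("hostA", "hostB")
````

A search would invoke it by wrapping the macro name in backticks, e.g. `` `exclude_hosts_filter` `` at the end of the base search.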
Q: Given a `timechart span=1m sep="-" last(foo) as foo last(bar) as bar by hostname`, how would I get a unique value of the bar-* fields? This has to be a standard problem, but I cannot find any writeup of solving it...

Background: I'm processing Apache Impala logs for data specific to a query, server, and pool (i.e., cluster). The data arrives on multiple lines that are easily combined with a transaction and rex-ed out to get the values. Ignoring the per-query values, I end up with:

| fields _time hostname reserved max_mem

The next step is to summarize reserved and max_mem by minute, taking the last value by hostname, summing the reserved values, and extracting a single max_mem value. I can get the data by host using:

| timechart span=1m sep="-" last(reserved) as reserved last(max_mem) as max_mem by hostname

which gives me a set of reserved-* and max_mem-* fields. The reserved values can be summed with:

| addtotals fieldname=reserved reserved-*

Issue: The problem I'm having is getting the single unique value of max_mem back out of it. The syntax `| stats values(max_mem-*) as max_mem` does not work, but it gives the idea of what I'm trying to accomplish. I've tried variations on bin to group the values with stats to post-process them, but gotten nowhere. I get the funny feeling that there may be a way to `| addcols [ values(max_mem-*) as max_mem ]`, but that doesn't get me anywhere either. A slightly different approach would be leaving the individual reserved values as-is, finding some way to get the single max_mem value out of the timechart, and plotting it as an area chart using max_mem as an overlay (i.e., the addtotals can be skipped). In either case, I'm still stuck getting the unique value from max_mem-* as a single field for propagation with the reserved values.

Aside: The input to this report is taken from the transaction list, which includes memory estimates and SQL statements per query. I need that much for other purposes.
The summary here of last reserved and max_mem per time unit is taken from the per-query events because they are the one place that the numbers are available.
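One possible (untested) sketch for collapsing the split max_mem-* columns back into a single field uses `foreach`, which iterates over wildcarded field names:

```
| timechart span=1m sep="-" last(reserved) as reserved last(max_mem) as max_mem by hostname
| addtotals fieldname=reserved reserved-*
| foreach max_mem-* [ eval max_mem = coalesce(max_mem, '<<FIELD>>') ]
| fields _time reserved max_mem
```

Note the assumption: `coalesce` keeps the first non-null value per row, which only yields "the" unique value if every host reports the same max_mem in that minute, as the post implies.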
Hi All,

How can I optimize the below query? Can we convert it to tstats?

index=abc host=def* stalled
| rex field=_raw "symbol (?<symbol>.*) /"
| eval hourofday = strftime(_time, "%H")
| where NOT (hourofday > 2 AND hourofday <= 4)
| timechart dc(symbol) span=15m
| eventstats avg("count") as avg stdev("count") as stdev
| eval lowerBound=-1, upperBound=(avg+stdev*exact(4))
| eval isOutlier=if('count' < lowerBound OR 'count' > upperBound, 1, 0)
| fields _time, "count", lowerBound, upperBound, isOutlier, *
| sort -_time
| head 1
| where isOutlier=1
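A note on the tstats part of the question: tstats reads only indexed fields and metadata, so `dc(symbol)` cannot be converted directly while symbol is extracted at search time with rex, and the raw term `stalled` has no tstats equivalent either. If symbol were made an index-time field (an assumption, requiring props/transforms changes), a sketch of the accelerated first stage might look like:

```
| tstats dc(symbol) as count where index=abc host=def* by _time span=15m
```

The rest of the pipeline (eventstats, eval, where) would remain as in the original search.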
@Mario.Morelli Yes, I need to forward my event details to the Grafana tool to create a dashboard with Open and Resolved statuses. Open I do see, but resolved shows as "Health Rule Close". I need this to appear as Resolved, as in the screenshot.
Can we ingest these logs?
Hello everyone, I am trying to follow this guide https://research.splunk.com/endpoint/ceaed840-56b3-4a70-b8e1-d762b1c5c08c/ and I created the macros that this guide is referencing, but I am unable to create the macro for windows_rdp_connection_successful_filter, because I am unsure how to create an empty macro in Splunk web. The guide says "windows_rdp_connection_successful_filter is a empty macro by default. It allows the user to filter out any results (false positives) without editing the SPL." What does this even mean? We are currently using Splunk Enterprise 9.0.5
I'm using the script interface for custom REST endpoints, and it uses: from splunk.persistconn.application import PersistentServerConnectionApplication. I understand it's a package inside Splunk Enterprise, but is there a chance it has been uploaded to PyPI?
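For illustration only, here is a minimal, self-contained sketch of the handle() contract that the persistent-connection interface expects. The request/response shapes shown are assumptions based on the standard restmap.conf script interface, and a real handler would subclass PersistentServerConnectionApplication rather than a plain class:

```python
import json

# Sketch of a persistent REST handler's contract (assumption: Splunk passes
# the request as a JSON string and expects a dict with "payload" and "status").
# A real handler would subclass PersistentServerConnectionApplication.
class EchoHandler:
    def __init__(self, command_line=None, command_arg=None):
        # Splunk passes the script invocation details here; unused in this sketch.
        pass

    def handle(self, in_string):
        request = json.loads(in_string)
        # Query parameters arrive as a list of [key, value] pairs.
        query = dict(request.get("query", []))
        return {"payload": json.dumps({"echo": query}), "status": 200}

handler = EchoHandler()
resp = handler.handle(json.dumps({"query": [["name", "test"]]}))
```

Since the module ships inside `$SPLUNK_HOME/lib/python*/site-packages`, scripts using it generally have to run under Splunk's bundled Python rather than a system interpreter.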
The Cisco Networks Add-on for Splunk Enterprise is licensed under Creative Commons. This license does not allow for commercial use... I have been unable to track down a way to "purchase" a license that would allow me to use this add-on legally. Is there any chance someone can point me in the right direction?
Reduce the replication factor (and the search factor, if it's also 3) before removing the indexer.
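Roughly, the commands involved are the following, run against the cluster manager and the departing peer respectively (flags should be verified against the documentation for your Splunk version):

```
# On the cluster manager: lower the replication (and search) factor
splunk edit cluster-config -replication_factor 2 -search_factor 2

# On the indexer being removed: decommission it gracefully, waiting
# until bucket copies satisfy the new factors before shutting down
splunk offline --enforce-counts
```

Lowering the factors first means the remaining two peers can satisfy replication once the third is gone.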
What are the props.conf settings for [mysourcetype]?
Hi @sireesha.vadlamuru, I'm reaching out again looking for some clarity here on what help you need. 
I have an issue with adding indexed fields to each of the new (split) sourcetypes. With the configuration below, the indexed fields are "duplicated" for each sourcetype: I now see the fields indexedfields1, indexedfields2 and indexedfields3 summing to 200%. For example, indexedfields1 values: value1 150%, value2 50%.

props.conf:

[MAIN SOURCE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-changesourcetype = sourcetype1, sourcetype2
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

[sourcetype1]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999

[sourcetype2]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999

transforms.conf:

[indexedfield1]
REGEX =
FORMAT =
WRITE_META =

[indexedfield2]
REGEX =
FORMAT =
WRITE_META =

[indexedfield3]
REGEX =
FORMAT =
WRITE_META =

[sourcetype1]
DEST_KEY = MetaData:Sourcetype
REGEX = some regex
FORMAT = sourcetype::sourcetype1

[sourcetype2]
DEST_KEY = MetaData:Sourcetype
REGEX = some regex
FORMAT = sourcetype::sourcetype2

I thought to move the indexed fields to each of the new sourcetypes, but then I see no indexed fields.
Check with `| tstats count`.

props.conf:

[MAIN SOURCE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-changesourcetype = sourcetype1, sourcetype2

[sourcetype1]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

[sourcetype2]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

What is the needed configuration to see indexed fields per sourcetype, without showing 200%? Thanks
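For completeness, a hedged sketch of what one of the indexed-field transforms might look like once populated (the regex and the captured value are hypothetical); WRITE_META = true is what makes the FORMAT key::value pair land in the index as an indexed field:

```
# transforms.conf
[indexedfield1]
REGEX = \"appname\":\"([^\"]+)\"
FORMAT = indexedfield1::$1
WRITE_META = true
```

A matching fields.conf stanza (`[indexedfield1]` with `INDEXED = true`) tells search heads to treat the field as indexed rather than search-time extracted.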
Hi @Zoltan.Gutleber, Given that this post is a few years old, it's unlikely to get a reply from the original poster. At this point, it might be best to reach out to AppD Support: How do I submit a Support ticket? An FAQ 
Hi,

We have 3 indexers and 1 search head (replication factor = 3). I need to permanently remove one indexer. What is the correct procedure:

1. Change the replication factor to 2 and then remove the indexer, OR
2. Remove the indexer and after that change the replication factor to 2?

Thanks
Hi, I don't think it exists; I've submitted this question, which also interests me, as an idea for future development. You could add a vote to my idea https://ideas.splunk.com/ideas/ESSID-I-392 so that it is more visible and taken into consideration. Many thanks
Hi @lakshman239

Thanks for the info. Can you provide some more insight? What are the additional rules? I have a similar request: I am able to telnet to port 1521 from Splunk, but the connectivity check still says it is blocked by a firewall when submitting.