All Posts

And you are not aware of any workarounds to this "limitation", I suppose (except for direct access to the box)?
We got a requirement to onboard new platform logs to Splunk; they will have 1.8 TB/day of data to be ingested. As of now our license is 2 TB/day, and we already have other platform data onboarded. The new team agreed to uplift our license by another 2 TB/day, so our total becomes 4 TB/day. But they also said that while their normal ingestion is 1.8 TB/day, during a DDoS attack it can go into double digits. We were surprised by this: the total itself is 4 TB/day, so how can we handle double-digit TB of data? In turn, this project might impact the onboarding of other projects. My manager asked me to investigate whether we can accommodate this requirement. If yes, he wants an action plan; if not, he wants a justification to share with them. I am not very familiar with Splunk licensing, but as far as I know this is very dangerous, because 4 TB and 10-20 TB per day is a huge difference. My understanding is that if we breach 4 TB/day (maybe by 200 GB of data or more), new indexing stops but old searches can still be accessed. Our infrastructure: a multi-site cluster with 3 sites, 2 indexers in each (6 total), 3 SHs (one per site), 1 deployment server, 2 CMs (active and standby), and 1 deployer (which is also the license master). Can anyone please help me with how to proceed on this?
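To quantify current usage and headroom before answering, here is a minimal SPL sketch run against the license master's internal logs (the index, source, and field names are the standard ones for license_usage.log, but verify them in your environment):

index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(b) AS bytes
| eval GB_per_day = round(bytes / 1024 / 1024 / 1024, 2)
| fields _time, GB_per_day

This shows how close each day already comes to the 4 TB cap, which is the first input for either the action plan or the justification.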
The App Exporter never solved that part. Back when it was written and working, the /package REST call would give you a URL to download the package. That URL (and the download) was removed from the REST call in Splunk Enterprise long ago.
There are a couple of ways you could do this, as demonstrated below:

<dashboard version="1.1" theme="light">
  <label>forecast</label>
  <row>
    <panel depends="$alwaysHidden$">
      <html>
        <style>
          #forecast g.highcharts-series-1.highcharts-line-series path.highcharts-graph {
            stroke: red;
          }
          #data g.highcharts-series-0.highcharts-line-series path.highcharts-graph {
            stroke: red;
            stroke-width: 3;
            data-z-index: 3;
          }
        </style>
      </html>
    </panel>
    <panel id="forecast">
      <viz type="Splunk_ML_Toolkit.ForecastViz">
        <search>
          <query>| inputlookup internet_traffic.csv
| timechart span=120min avg("bits_transferred") as bits_transferred
| eval bits_transferred=round(bits_transferred)
| predict "bits_transferred" as prediction algorithm=LLP5 holdback=112 future_timespan=224 upper95=upper95 lower95=lower95
| `forecastviz(224, 112, "bits_transferred", 95)`</query>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
  <row>
    <panel id="data">
      <viz type="Splunk_ML_Toolkit.ForecastViz">
        <search>
          <query>| inputlookup internet_traffic.csv
| timechart span=120min avg("bits_transferred") as bits_transferred
| eval bits_transferred=round(bits_transferred)
| predict "bits_transferred" as prediction algorithm=LLP5 holdback=112 future_timespan=224 upper95=upper95 lower95=lower95
| `forecastviz(224, 112, "bits_transferred", 95)`</query>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
</dashboard>
Hi @martin_mueller, as I've tried to explain, the packaging part works perfectly; the interesting part begins after that stage: how to get the package out when you don't have physical access to the box. The benefit begins after the export. Is there any REST endpoint available that can accomplish this?
Hi, I ended up solving it! The issue was that Azure isn't sending anything regarding the load balancers. To troubleshoot it, I enabled debug logs in the add-on, retrieved the details of the call, and simulated the call with Postman. From the results, Azure returns a 200 status code but with all the "timeseries" fields of the metrics empty. Thanks for your support!
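For anyone else debugging this, a hedged sketch of the kind of request you can replay outside the add-on (the resource ID, metric name, timespan, and api-version here are illustrative placeholders; copy the exact values from the add-on's debug log):

curl -s -H "Authorization: Bearer <access_token>" \
  "https://management.azure.com/<resource_id>/providers/microsoft.insights/metrics?api-version=2018-01-01&metricnames=ByteCount&timespan=2024-01-01T00:00:00Z/2024-01-02T00:00:00Z"

If the response is 200 with empty "timeseries" arrays, the gap is on the Azure side, not in the add-on.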
I'm using a forecast graph for my data and it shows the graph in the format below:
1. Existing data is shown as a solid blue line
2. Forecast data is shown as a dotted blue line
3. Confidence interval (upper and lower limits, also in blue)
I want to change the forecast data color (refer to the attachment).
I suppose the command should be defined as a centralized streaming command instead of a distributed one - the local setting in commands.conf - see https://docs.splunk.com/Documentation/Splunk/latest/Admin/Commandsconf
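A minimal sketch of that setting (the stanza and script names are placeholders for your custom command):

# commands.conf
[mycustomcommand]
filename = mycustomcommand.py
# run the command on the search head only instead of distributing it to the indexers
local = true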
I haven't looked at it in years. If you just need to package an app once, call the REST API:

| rest splunk_server=local services/apps/local/{name}/package

https://docs.splunk.com/Documentation/Splunk/9.4.1/RESTREF/RESTapps#apps.2Flocal.2F.7Bname.7D.2Fpackage
There are a couple of parameters in limits.conf and distsearch.conf which define how this is done and what the limitations are for those replicated CSV files.
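As a hedged pointer to where those knobs live (setting names from the distsearch.conf spec; the values shown are illustrative, so check the spec file for your version's defaults and units):

# distsearch.conf on the search head
[replicationSettings]
# log a warning when any single replicated file in the bundle exceeds this size
concerningReplicatedFileSize = 500
# exclude lookup files above this size from the bundle entirely (0 = no exclusion)
excludeReplicatedLookupSize = 0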
Is your command actively querying the collection? If so, replicate=true won't help you. replicate=true pushes the collection's content to the indexers *as CSV*; the KV Store on the indexers (if it is even running) won't know the collection or its content. I'm surprised and doubtful that replicate=true ever worked for someone running Splunk Enterprise on-prem. The easiest fix would be to only use the command on the SHs, e.g. after a |stats, |tstats, etc. - or, if need be, after |localop.
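A minimal sketch of the |localop variant (index, sourcetype, and command name are placeholders):

index=main sourcetype=access_combined
| localop
| mycustomkvcommand

Everything after |localop runs on the search head only, which is where the KV Store and the collection actually live.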
@iamsahilshaiks KV Store contents are not replicated between tiers. Actually, the KV Store does not normally run on the indexer tier at all, and even if for some reason it does (say you fancy running a modular input which uses the KV Store on an indexer instead of on a separate HF), it is not replicated anywhere. If a collection is replicated to the indexer tier, its contents get exported as CSV to the knowledge bundle (so there are possible size/performance issues with it).
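For completeness, the flag that triggers that CSV export sits in collections.conf on the search head; a minimal sketch (the collection name is a placeholder):

# collections.conf
[my_collection]
# export this collection's contents into the knowledge bundle as CSV
replicate = true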
No. Restarting the DS won't change anything on the deployment clients (your forwarders). Restarting forwarders requires... well, restarting forwarders. That's why I keep saying that ugly hacks like scripting those restarts from scripted inputs are risky: if something goes wrong, you end up with a downed process and no way to bring it back up.
No! Don't do makeresults | append [ a big search ]. That way you're on the best path to shooting yourself in the foot with a prematurely finalized subsearch! Not to mention that "now" in the subsearch has nothing to do with "now" in the outer search.
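If the reason for that pattern was just to guarantee at least one row when the search comes back empty, a common alternative is appendpipe, sketched here with placeholder index, sourcetype, and field names:

index=main sourcetype=access_combined error
| stats count BY host
| appendpipe [ stats count | where count=0 | eval host="no results" ]

The subpipeline only contributes a row when the main result set is empty, and there is no separate subsearch clock or finalization limit to worry about.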
Hello all, thanks for suggesting multiple solutions. I am able to fetch the details correctly, but I am not able to set the business day as below: the business day starts at 5 PM (day D) and ends at 5 PM (day D+1). I've attached the final set of code. Can you please help answer this last question of setting the business day as 5 PM to 5 PM?
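Not from this thread's attached code, but a common sketch for 5-PM-to-5-PM bucketing: shift each event back 17 hours and take the date, so everything from 17:00 on day D through 16:59:59 on day D+1 is labeled with day D (index and sourcetype are placeholders; add 7 hours instead if you want the label to be D+1):

index=main sourcetype=transactions
| eval business_day = strftime(_time - 17*3600, "%Y-%m-%d")
| stats count BY business_day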
Let me interject here, @iamsahilshaiks. For regex-based whitelisting it's not as simple as just putting the regex on its own. See https://docs.splunk.com/Documentation/Splunk/latest/admin/inputsconf#Event_Log_allow_list_and_deny_list_formats - you have to:
1. Provide a key which will be matched against the regex.
2. Enclose the regex in delimiters of your choice.
So your example of whitelist = (?m)^4624$|^4625$ is completely wrong (see the sketch after this post). Additionally, if you're exporting events as XML (currently the preferred way, because the traditional format tends to have some problems with parsing), you use the XmlRegex key for filtering.
splunk list monitor only lists - as the name says - monitor-type inputs, which read files from disk. Windows event log sources are not of the monitor type.
Last time I checked, grep was neither a standard CMD-usable tool nor a standard PowerShell cmdlet, and Windows doesn't use the $ notation for environment variables.
As far as I remember, events from the Security log are not recast to another sourcetype. Actually, your input stanza matches the "old way" of ingesting Windows logs, which hasn't been used for several years now. Currently, the sourcetype for all Windows logs should be WinEventLog or XmlWinEventLog, depending on whether you're rendering them to XML or not. It's the source field which specifies the point of origin.
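A hedged sketch of a correctly formatted whitelist per the linked inputs.conf docs (EventCode as the key, % as the chosen delimiter; verify against the spec for your version):

# inputs.conf on the forwarder
[WinEventLog://Security]
# key=regex, with the regex enclosed in delimiters
whitelist = EventCode=%^(4624|4625)$%

With renderXml = true you would filter on the XmlRegex key instead, as noted above.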
Hi @splunkreal, as @livehybrid said, the easiest approach is to create two copies of Splunk_TA_Windows that differ only in the index in the input stanzas. If that's not possible, you could follow the approach that you described. Remember that in the second case you have to put these configurations not on the Universal Forwarders but on the first full Splunk instance that the data passes through - in other words, on the indexers or, if present, on intermediate Heavy Forwarders. Ciao. Giuseppe
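If the approach described is the classic parse-time index override, a minimal sketch of what would go on the indexers/HFs (the regex and target index are illustrative placeholders):

# props.conf
[XmlWinEventLog]
TRANSFORMS-route_index = route_dc_events

# transforms.conf
[route_dc_events]
REGEX = <Computer>DC\d+
DEST_KEY = _MetaData:Index
FORMAT = windows_dc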
Hi @ynag, @satyaallaparthi, I found my solution in creating a dedicated user with the role splunk_vmware_admin.
@livehybrid Thanks for sharing your insights, this is helpful.
@isoutamo Thanks for sharing your insights, this is helpful.