All Posts


You could try this: https://github.com/paychex/Splunk.Conf19 — it may help you export all KOs to files so you can then create the app in your local environment.
Perhaps this will help. (Note: cobDate contains hyphens, so \w+ won't capture it; [\d-]+ does.)

index=*1644* container_name="ls2-sdp-java" $selected_countries$
| rex field=_raw "for \[(?P<country>\w+),\s*(?P<cobDate>[\d-]+),\s*(?P<sdpType>\w+)"
| rex field=_raw "records: (?P<Recordcount>\w+)"
| rex field=_raw "^(?<dateTime>\S+)"
| eval DateTime=strptime(dateTime, "%Y-%m-%dT%H:%M:%S.%3N%Z")
| eval CreatedTime=strftime(DateTime, "%H:%M")
| eval CreatedDate=strftime(DateTime, "%Y-%m-%d")
Hello Team, We have been using the Corelight App for Splunk to ingest Corelight IDS events into our distributed Splunk environment. The app is working absolutely fine on the indexers, but we are unable to launch it on the search head. Can you please assist us with the details to get the app working on the search head? Regards, Prathamesh
I have the following simplified version of the query where, for each caller, I need all_calls (from sourcetype=x) and messagebank_calls (from sourcetype=y).

index=sample1 sourcetype=x host=host1
| stats values(caller) as caller by callid
| stats count as all_calls by caller
| rename caller as caller_party
| appendcols
    [ search index=sample1 AND sourcetype=y
      | stats count as messagebank_calls by caller_party ]
| search all_calls=*

The messagebank_calls value is incorrect, and I'm guessing that's because of the subsearch/appendcols? How do I increase the limit, or re-write the query so I get the same result fields: caller, all_calls, messagebank_calls?
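One way to avoid the appendcols/subsearch limits entirely is to pull both sourcetypes in a single search and split the counts with eval inside stats. A minimal sketch, assuming caller (in sourcetype=x) and caller_party (in sourcetype=y) identify the same party:

index=sample1 ((sourcetype=x host=host1) OR sourcetype=y)
| eval caller_party=if(sourcetype=="x", caller, caller_party)
| stats dc(eval(if(sourcetype=="x", callid, null()))) as all_calls, count(eval(sourcetype=="y")) as messagebank_calls by caller_party
| where all_calls > 0

Because there is no subsearch, nothing gets truncated at the subsearch row limit, and the two counts are keyed by caller_party instead of being pasted together row-by-row the way appendcols does.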
Please find the attached screenshot and data sample. I need to create 5 fields.

Problem statement: the old Splunk query is not working because the logging pattern has changed.

3/28/25 10:04:25.685 PM   2025-03-28T22:04:25.685Z INFO 1 --- [ool-1-thread-11] c.d.t.l.s.s.e.e.NoopLoggingEtlEndpoint : Completed generation for [DE, 2025-03-28, LOAN_EVENT_SDP, 1]. Number of records: 186
host = lonhybridapp03.uk.db.com
source = /var/log/pods/ls2_ls2-intraday-sdp-86854ff574-48dgp_830e2ef9-56be-4996-ae21-127366a78515/ls2-intraday-sdp/0.log
sourcetype = kube:container:ls2-intraday-sdp

I need the fields below (expected values in parentheses):

index=*1644* container_name="ls2-sdp-java" $selected_countries$
| rex field=_raw "country=(?P<country>\w+)"    (DE)
| rex field=_raw "sdpType=(?P<sdpType>\w+)"    (LOAN_EVENT_SDP)
| rex field=_raw "cobDate=(?P<cobDate>\w+)"    (2025-03-28)
| rex field=_raw "record-count: (?P<Recordcount>\w+)"    (186)
| rex field=_raw "\[(?<dateTime>.*)\] \{Thread"    (2025-03-28T22:04)
| eval DateTime=strptime(dateTime, "%Y-%m-%dT%H:%M:%S,%N")
| eval CreatedTime=strftime(DateTime, "%H:%M")
| eval CreatedDate=strftime(DateTime, "%Y-%m-%d")

The SPL above is the old query. Can you please help me with new rex patterns to extract these fields? For clarity, I have attached the required fields in the screenshot.
@bowesmana Thanks for the response. We are using the SVC model. We have Splunk ES and other resource-intensive apps. I am looking for a way to optimize searches, as indexer memory utilization has been at 99% for the last 2 months. I used the CMC dashboard, and it shows that the DM acceleration searches are the most expensive ones.
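If it helps to quantify that outside of CMC, the audit index records the run time of every search; a minimal sketch using the standard audittrail fields:

index=_audit action=search info=completed
| stats count sum(total_run_time) as total_runtime avg(total_run_time) as avg_runtime by user savedsearch_name
| sort - total_runtime

Data model acceleration searches should show up here with savedsearch_name values beginning with _ACCELERATE_, so you can see which data models cost the most.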
And you are not aware of any workarounds to this "limitation", I suppose (except for direct access to the box)?
We got a requirement to on-board new platform logs to Splunk, with 1.8 TB/day of data to be ingested. Our current license is 2 TB/day, and we already have other platform data on-boarded. The new team has agreed to uplift our license by another 2 TB/day, so our total becomes 4 TB/day.

However, they also said that while their normal ingestion is 1.8 TB/day, during a DDoS attack it can go into double digits. This surprised us: the total is only 4 TB/day, so how can we handle double-digit TB of data? This project might in turn impact the on-boarding of other projects.

My manager asked me to investigate whether we can accommodate this requirement. If yes, he wants an action plan; if not, he wants the justification to share with them. I am not very familiar with Splunk licensing, but as far as I know this is very risky, because 4 TB/day versus 10-20 TB/day is a huge difference. My understanding is that if we breach 4 TB/day (maybe by 200 GB or more), new indexing stops but old data can still be searched.

Our infrastructure: a multi-site cluster with 3 sites, 2 indexers in each (6 total), 3 SHs (one per site), 1 deployment server, 2 CMs (active and standby), and 1 deployer (which is also the license master).

Can anyone please help me with how to proceed on this?
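As a first step of the investigation, it may help to baseline current daily ingestion from the license usage logs on the license master; a minimal sketch using the standard license_usage.log fields:

index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as daily_GB

Comparing daily_GB (plus the promised 1.8 TB/day) against the 4 TB/day cap shows how little headroom would be left on a double-digit-TB DDoS day.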
The App Exporter never solved that part. Back when it was written and working, the /package REST call would give you a URL to download the package. That URL (and the download) was removed from the REST call in Splunk Enterprise long ago.
There are a couple of ways you could do this, as demonstrated below:

<dashboard version="1.1" theme="light">
  <label>forecast</label>
  <row>
    <panel depends="$alwaysHidden$">
      <html>
        <style>
          #forecast g.highcharts-series-1.highcharts-line-series path.highcharts-graph {
            stroke: red;
          }
          #data g.highcharts-series-0.highcharts-line-series path.highcharts-graph {
            stroke: red;
            stroke-width: 3;
            data-z-index: 3;
          }
        </style>
      </html>
    </panel>
    <panel id="forecast">
      <viz type="Splunk_ML_Toolkit.ForecastViz">
        <search>
          <query>| inputlookup internet_traffic.csv
| timechart span=120min avg("bits_transferred") as bits_transferred
| eval bits_transferred=round(bits_transferred)
| predict "bits_transferred" as prediction algorithm=LLP5 holdback=112 future_timespan=224 upper95=upper95 lower95=lower95
| `forecastviz(224, 112, "bits_transferred", 95)`</query>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
  <row>
    <panel id="data">
      <viz type="Splunk_ML_Toolkit.ForecastViz">
        <search>
          <query>| inputlookup internet_traffic.csv
| timechart span=120min avg("bits_transferred") as bits_transferred
| eval bits_transferred=round(bits_transferred)
| predict "bits_transferred" as prediction algorithm=LLP5 holdback=112 future_timespan=224 upper95=upper95 lower95=lower95
| `forecastviz(224, 112, "bits_transferred", 95)`</query>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
</dashboard>
Hi @martin_mueller, As I've tried to explain, the package part works perfectly; the interesting part begins after this stage: how to get the package out when you don't have physical access to the box. The benefits begin after the export. Are there any REST endpoints available that can accomplish this?
Hi, I ended up solving it! The issue was that Azure isn't sending anything regarding the load balancers. To track it down, I enabled debug logs in the add-on, retrieved the details of the call, and simulated the call with Postman. From the results, Azure returns a 200 status code but with all the "timeseries" fields of the metrics empty. Thanks for your support!
I'm using a forecast graph for my data, and it shows the graph in the format below:
1. Existing data is shown as a solid blue line.
2. Forecast data is shown as a dotted blue line.
3. Confidence level (upper limit and lower limit) in blue.
I want to change the forecast data color (refer to the attachment).
I suppose the command should be defined as a centralized streaming command instead of a distributed one (the local setting in commands.conf); see https://docs.splunk.com/Documentation/Splunk/latest/Admin/Commandsconf
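For reference, a minimal commands.conf sketch; the stanza and script names are placeholders for your actual command:

[mycommand]
filename = mycommand.py
# streaming command, but forced to run on the search head only
streaming = true
local = true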
I haven't looked at it in years. If you just need to package an app once, call the REST API:

| rest splunk_server=local services/apps/local/{name}/package

https://docs.splunk.com/Documentation/Splunk/9.4.1/RESTREF/RESTapps#apps.2Flocal.2F.7Bname.7D.2Fpackage
There are a couple of parameters in limits.conf and distsearch.conf which define how this is done and what the limitations are for those replicated CSV files, as sketched below.
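For example, the distsearch.conf knobs I would look at first; names and values here are from memory and illustrative, so verify them against the spec file for your version:

[replicationSettings]
# log a warning when a single replicated file (such as an exported
# KV store CSV) exceeds this size, in MB
concerningReplicatedFileSize = 500
# cap on the size of the whole knowledge bundle, in MB
maxBundleSize = 2048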
Is your command actively querying the collection? If so, replicate=true won't help you.

replicate=true will push the collection's content to the indexers *as CSV*. The KV Store on the indexers (if even running) won't know the collection or its content. I'm surprised and doubtful that replicate=true ever worked for anyone running Splunk Enterprise on-prem.

The easiest fix would be to only use the command on the SHs, e.g. after a |stats, |tstats, etc., or if need be after |localop, as sketched below.
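To illustrate that last point, a minimal sketch (index, sourcetype, and command names are placeholders):

index=myindex sourcetype=mysourcetype
| localop
| mycustomcommand

Everything before |localop still runs distributed on the indexers; everything after it executes on the search head, which is where the KV Store actually holds the collection.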
@iamsahilshaiks KV Store contents are not replicated between tiers. Actually, the KV Store does not normally run on the indexer tier at all, and even if for some reason it does (say you fancy running a modular input which uses the KV Store on an indexer instead of on a separate HF), it is not replicated anywhere. If a collection is replicated to the indexer tier, its contents are exported as CSV into the knowledge bundle (so there are possible size/performance issues with it).
No. Restarting the DS won't change anything on the deployment clients (your forwarders). Restarting forwarders requires... well, restarting forwarders. That's why I keep saying that ugly hacks like scripting those restarts from scripted inputs are risky: if something goes wrong, you end up with a downed process and no way to bring it back up.
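If a periodic restart is genuinely required, a safer pattern is to drive it from outside splunkd, e.g. a cron entry on the forwarder host (the install path is a placeholder):

# restart the UF every Sunday at 03:00, owned by cron rather than splunkd
0 3 * * 0 /opt/splunkforwarder/bin/splunk restart

Because cron owns the restart, a failed stop doesn't take down the very process that was supposed to start Splunk again.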
No! Don't do makeresults | append [ a big search ]. That way you're on the best path to shooting yourself in the foot with a prematurely finalized subsearch! Not to mention that "now" in the subsearch has nothing to do with "now" in the outer search.
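For anyone tempted, this is the anti-pattern being warned against (index and time range are illustrative):

| makeresults
| append
    [ search index=big_index earliest=-7d@d
      | stats count by host ]

The bracketed search runs under subsearch limits, on the order of 60 seconds and 50,000 results by default (see append's maxtime and maxout arguments), so a big search gets finalized early and silently returns partial results.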