All Posts

It looks like I closed this thread early because I've hit an issue. I decided to go with the lookup table after all. The query I marked as correct does work, but it creates duplicate hosts, which throws off the results. At first I thought it was just because the host name in Splunk had different casing than the MemberServers.csv file. I tweaked the query to lower-case the host names (all the names in MemberServers.csv are lower case) and removed the "| where total=0" line. That showed me there are two hosts for every host returned by the Splunk query: one from the original query and one appended from the .csv file. For some reason the stats sum(count) command doesn't see them as identical hosts but as two different ones, even though their names are exactly the same (including case). This is the query which tells me there are now duplicates, one with count 0 (presumably added by the append command) and one with a count greater than 0 (presumably added by the Splunk query):

index=sw tag=MemberServers sourcetype="windows PFirewall Log"
| eval host=lower(host)
| stats count BY sourcetype host
| append [ | inputlookup MemberServers.csv | eval count=0 | fields sourcetype host count]
| stats sum(count) AS total BY sourcetype host

I tried replacing append with a join command but I ran into problems with that too. Any help would be appreciated.
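One thing worth ruling out (a sketch, not a confirmed fix): stats groups on the exact sourcetype/host pair, so a stray space, tab, or casing difference in either field on the lookup side will produce a second row even when the names look identical on screen. Normalising both fields on both sides - assuming the sourcetype column in MemberServers.csv is meant to match the event sourcetype exactly - would look something like:

index=sw tag=MemberServers sourcetype="windows PFirewall Log"
| eval sourcetype=lower(trim(sourcetype)), host=lower(trim(host))
| stats count BY sourcetype host
| append [ | inputlookup MemberServers.csv | eval sourcetype=lower(trim(sourcetype)), host=lower(trim(host)), count=0 | fields sourcetype host count]
| stats sum(count) AS total BY sourcetype host
| where total=0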
@livehybrid for your comment "Without the original data its a little hard to say" - will this work?

index="xxxx" field.type="xxx" OR index=Summary_index
| eventstats values(index) as sources by trace
| where mvcount(sources) > 1
| spath output=yyyId path=xxxId input=_raw
| where isnotnull(yyyId) AND yyyId!=""
| bin _time span=5m AS hour_bucket
| stats latest(_time) as last_activity_in_hour, count by hour_bucket, yyyId
| stats count by hour_bucket
| sort hour_bucket
| rename hour_bucket AS _time
| timechart span=5m values(count) AS "Unique Customers per Hour"

It still doesn't return any results.
Hi all, I'm collecting iLO logs in Splunk and have set up configurations on a Heavy Forwarder (HF). Logs are correctly indexed in the ilo index with sourcetypes ilo_log and ilo_error, but I'm facing an issue with search results. When I run index=ilo | stats count by sourcetype, it correctly shows the count for ilo_log and ilo_error. Also, index=ilo | spath | table _raw sourcetype confirms logs are indexed with the correct sourcetype. However, when I search directly with index=ilo sourcetype=ilo_log, index=ilo sourcetype=ilo_error, or even index=ilo ilo_log, I get zero results. Strangely, sourcetype!=ilo_error returns all ilo_log events, and the same for ilo_error.

props.conf:

[source::udp:5000]
TRANSFORMS-set_sourcetype = set_ilo_error
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

transforms.conf:

[set_ilo_error]
REGEX = (failure|failed)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::ilo_error
WRITE_META = true
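One quick check that sometimes explains this symptom (offered as a diagnostic sketch only): compare the exact stored sourcetype values and their lengths, since a hidden character or casing difference would make the equality match fail while the != form still returns events:

index=ilo
| eval st_len=len(sourcetype)
| stats count by sourcetype st_len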
This is a thread in which the latest response was 11 years ago. I wouldn't count on a reasonable answer.
Dunno about Edge Processor but the general answer to any "inter-event" question regarding forwarding/filtering is "no". Splunk processes each event independently and doesn't carry any state from one event to another (which makes sense when you realize that two subsequent events from the same source can be forwarded to two different receivers and end up on two completely different indexers).
We talked about this on Slack. Six indexers might or might not be sufficient for 1TB/day; it depends on the search load. Remember two things:

1. Acceleration searches are quite heavy. They plow through heaps of data to create the acceleration summaries (DAS), which they also write, additionally stressing I/O.
2. Acceleration searches are at the far end of the priority queue when it comes to scheduling searches, so your acceleration searches might simply be running when there is already a whole lot of other things going on in your system.

That's one thing. Another thing is that CIM datamodels rely on tags and eventtypes, which in turn rely on the raw data and how the fields are extracted. You might simply have "heavy" sourcetypes. You could try to dry-run an acceleration search as an ad-hoc search (get it with the acceleration_search option for the datamodel command) and inspect the job to see where it spends most of its time. It doesn't have to be directly tied to the datamodel or acceleration settings themselves (although it still might be). Of course, you could try to run shorter searches more often, as was already suggested (and as is recommended in https://help.splunk.com/en/splunk-enterprise/manage-knowledge-objects/knowledge-management-manual/9.3/use-data-summaries-to-accelerate-searches/accelerate-data-models#id_80506253_eb97_4bd6_9185_739b6a6e6087__Advanced_configurations_for_persistently_accelerated_data_models), but this will probably not lower your I/O load.
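As a rough illustration of that dry-run idea (the data model and dataset names below are placeholders - substitute the accelerated model in question), you can run the model's search ad hoc over the acceleration time range and then open the Job Inspector on the resulting job to see where the time is spent:

| datamodel Network_Traffic All_Traffic search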
As I said before - if it worked, good for you. And I'm not gonna go through your logs, for several reasons. Don't take it personally, but this is a public forum where people give their time whenever they want, the way they want. Answers is not a free support/consulting service. If you want something specific done, reach out to your local Splunk Partner or hire Professional Services. To put it bluntly - it costs money. Secondly, you've already done the migration and apparently you're running the new system as prod, so my looking into your logs won't change anything. Thirdly, I might miss something, especially considering the first point. You asked, you were advised, you did otherwise. I won't pat you on the back now and say "it's ok". It might be. But it doesn't have to be. And, to be frank, I don't have spare time just to ease your mind. I hope it is ok. But I won't check it.
Each of these answers are excellent. Thank you all for your input. For now, I'm going to go with the "was working within the last 30 days" search as that's what I need. Thanks!
Yes, "should" seems correct  However, I did change these levels and there was no effect. This led me to the documentation pointing at local changes on the indexers. The warnings are presented for ... See more...
Yes, "should" seems correct  However, I did change these levels and there was no effect. This led me to the documentation pointing at local changes on the indexers. The warnings are presented for indexers under the "Health of Distributed Splunk Deployment" pointing at indexers, though I am not sure where warnings are generated, to where they are "collected" before being presented in the search head cluster. I also cannot figure out where in the healtch.conf file these levels could/should be modified (health.conf | Splunk Docs) so I'm guessing it is likely somewhere else.
When collecting Linux logs using a Universal Forwarder we are collecting a lot of unnecessary audit log events from cronjobs gathering server status and other information - in particular the I/O from a little script. To build grep-based logic on a Heavy Forwarder there would have to be a long list of very particular "grep" strings so as not to lose ALL grep attempts. In a similar manner, commands like 'uname' and 'id' are even harder to filter out. The logic needed to reliably filter out only the I/O generated by the script would be to find events with comm="script-name", get the pid value from that initial event, and drop all events for the next, say, 10 seconds with a ppid that matches the pid. To make things complicated there is no control over the logs/files on the endpoints - only what the Universal Forwarder, and then the Heavy Forwarder, can do before the log is indexed. Is there any way to accomplish this kind of adaptive filtering/stateful cross-event logic in transit and under these conditions? Is this something that may be possible using the new and shiny Splunk Edge Processor once it is generally available?
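For reference, the stateless per-event filtering that is possible today on a heavy forwarder looks roughly like the following (the sourcetype stanza, transform name and regex are placeholders; it drops any single event mentioning the script, with none of the pid/ppid correlation described above):

props.conf:

[linux_audit]
TRANSFORMS-drop_script_io = drop_script_io

transforms.conf:

[drop_script_io]
REGEX = comm="script-name"
DEST_KEY = queue
FORMAT = nullQueue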
The Search Head determines whether the alert should be displayed or not, so that is where the threshold is set. Go to Settings->Health Report Manager to change the threshold.
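For completeness, the same thresholds live in health.conf and follow this pattern (the feature and indicator names below are examples of the stanza shape, not necessarily the indicator firing in this case):

[feature:batchreader]
indicator:data_out_rate:yellow = 5
indicator:data_out_rate:red = 10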
No, bucket size should be fine - you just need to play around with 3 DMA parameters:

1. Backfill Range
2. Max Concurrent Searches
3. Max Summarization Search Time

So the strategy I suggested previously was "build short, build often". Check the attached file. The boxes in yellow are the defaults - with Max Concurrent Searches at 3 and Max Search Time at 3600, the 4th concurrent search will only start once the 1st has completed (after 3600 seconds), which is what is happening in your case. However, we can go with the other approach - in green - where with 4 concurrent searches and a Max Search Time of 1200 we can build the summaries faster, which will also pick up recent events faster. Please let me know if you have any questions (and how the configuration changes go!)
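Expressed in datamodels.conf, the "green" approach would look roughly like this (the data model name, earliest time and backfill window are placeholders - adjust them to your own model and retention):

[My_DataModel]
acceleration = 1
acceleration.earliest_time = -30d
acceleration.backfill_time = -7d
acceleration.max_concurrent = 4
acceleration.max_time = 1200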
Hi @genesiusj 

If your ITSI instance is separate from the SHC then there is no built-in feature that would replicate between the SHC and ITSI. There are a number of apps on Splunkbase that do various kinds of knowledge object management, but I haven't personally seen any that do this. How do you currently manage your tags.conf on the SHC? If these are managed on a search head deployer and pushed to the SHC, then you can install the same app on the ITSI SH (via an appropriate deployment mechanism), but you do then have to ensure you push it out to ITSI whenever you make changes to the app deployed to the SHC.
Hi @spisiakmi 

Wow, Splunk 4! I don't really know where to start with this one, but what I would say is that this sort of thing should probably be done with the assistance of Splunk Professional Services (PS). There are so many questions and caveats here. The only "off the shelf" supported option would be to go through the official supported upgrade path for each version, as @PrewinThomas has mentioned - but then you also need to balance that with the version of OS that each Splunk version runs on (e.g. when do you move it from Server 2016 to 2022 and continue the upgrade path). There are also factors like how many indexers you have and what the rest of the environment looks like, how data gets into Splunk, and how you would manage the upgrade of any forwarders in the estate.

There are a number of places in the upgrade journey where the structure of buckets changes, e.g. around 4.1/4.2, then again at 7.2 and 8.x, which is why the upgrade process is in place. Therefore I wouldn't recommend just copying them from the old to the new infrastructure. Depending on the volume of data, one option might be to freeze out the data from your old infra and thaw it into your new infra; this way the relevant changes are managed by Splunk. According to the docs it is possible to thaw pre-4.2 data into Splunk 9.x - however there are warnings about different architectures (is your Windows Server 2016 on 64-bit architecture?). Would you be looking to change the Splunk architecture (e.g. utilise clustering) on the new infra?

Ultimately there are a number of ways to do this, but I would strongly suggest speaking to a Splunk Partner and/or Splunk Professional Services to get this planned out. Lastly, in order to upgrade from Splunk 4 using the upgrade path, you will need lots of old versions which are no longer publicly available, so you will need to see if Support can provide these; however I believe this may be unlikely without PS involvement.
Hello, We have a search head cluster and an ITSI instance. How do we replicate the tags.conf files from various apps on the SHC to ITSI? These are needed for running the various module searches, and other ITSI macros. Did someone create an app to handle this? Thanks and God bless, Genesius
Thank you. So the acceptable solution to these issues is to adjust thresholds so they do not trigger under "normal operation". A follow-up regarding threshold settings: from what I understand, these alerts are generated locally on the indexers in the indexer cluster. The health.conf settings are apparently not synced in an indexer cluster, only in the search head cluster, where any changes have no effect (already tried). If thresholds are to be modified in the indexer cluster, what file and values should be pushed from the manager to change the relevant thresholds? I have not been able to identify these in the documentation. If not in the indexer cluster, then where?
Hi @XOR 

You shouldn't need to add the port in the Prisma config as Splunk Cloud uses the default HTTPS port for HEC receiving. I assume the URL you used starts with https:// ? As far as I know there is no option to add an index into the Prisma configuration, therefore the data will go into the default index you selected when you created the HEC token - are you able to confirm that this is the index that you are checking in? Regarding service/collector or /service/collector/events, you should be able to use the first, or "/services/collector/event" - note no "s" on the end. Prisma Cloud sends HEC events so this is the correct endpoint to use.
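If it is not obvious which index the events ended up in, a quick way to check (adjust the time range as needed) is:

| tstats count where index=* earliest=-24h by index, sourcetype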
If you want to see actions performed then something like this:

index=_dsappevent earliest=-24h
| table _time data.action data.appName data.clientId data.result
| append
    [| tstats first(data.hostname) as hostname, first(data.dns) as dns_name, first(data.ip) as ip WHERE earliest=-24h index=_dsclient by data.clientId]
| stats values(*) as * by data.clientId
| table hostname dns_name ip *
The Forwarder Management page in the DS will do that for you.  If it doesn't show what you want then please tell us your use case and we can suggest something.
Hi @rk60422 

Here are some starters which might help.

List of clients (and some useful info):

| tstats first(data.hostname) as hostname, first(data.dns) as dns_name, first(data.ip) as ip, first(data.splunkVersion) as splunkVersion, first(data.package) as package WHERE index=_dsclient by data.clientId

Latest phonehome time by clientId:

| tstats latest(_time) as latest_phonehome where earliest=-24h index=_dsphonehome by data.clientId
| eval friendlyPhonehomeTime=strftime(latest_phonehome,"%Y-%m-%d %H:%M:%S")

These could be combined to get the last phonehome with the additional info:

| tstats latest(_time) as latest_phonehome where earliest=-24h index=_dsphonehome by data.clientId
| eval friendlyPhonehomeTime=strftime(latest_phonehome,"%Y-%m-%d %H:%M:%S")
| append
    [| tstats first(data.hostname) as hostname, first(data.dns) as dns_name, first(data.ip) as ip, first(data.splunkVersion) as splunkVersion, first(data.package) as package WHERE earliest=-24h index=_dsclient by data.clientId]
| stats values(*) as * by data.clientId