All Posts

Curious as to why stats has to be run twice. Even using table before stats doesn't work to get the proper average.
Hi @Ryan.Paredez  Thanks a lot for following up on my question. I really appreciate it. Unfortunately, I have not been able to solve it on my own so far, and I had to discover just the primary node as a workaround until I can find a solution to my problem. Thanks once again.
1) is a giant question. The shortest story here is probably to understand the Admin differences: what you will no longer be able to do yourself and will need a ticket for. The second is to understand the licensing and billing you will be using and how that may affect things. A lot of that is covered in the Splunk Cloud Platform Migration Success Guide.

2 and 3) It's generally best to send from the UFs directly to cloud; that way all your indexers will participate equally in receiving the data. Ditto with your syslog servers: they likely already have a UF/HF on them to grab the data sent in by syslog and send it into your on-prem instance, so you just need to reconfigure those to forward to your cloud instance instead of the on-prem one. In your cloud instance you'll find an app called the (or some variation of) Splunk universal forwarder credentials package. Click that and it has instructions and a little app to install on your forwarders to teach them how to talk to your cloud instance. You could also send your syslog directly to cloud using the SC4S app from Splunk.

4) I believe Splunk Cloud only accepts encrypted streams (HTTPS), so the encryption is enforced by the Splunk universal forwarder credentials package you can download from your cloud instance to set up your forwarders. Compression is not necessary.

I hope that helps! -Rich
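For reference, a rough sketch of the kind of outputs.conf the credentials package ends up putting on a forwarder. The stack name and paths below are made up, and the real app ships its own certificates and values, so treat this as illustrative only; you shouldn't need to hand-write any of it, but it's useful to know what to look for when troubleshooting:

```
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
# inputs endpoints for a hypothetical stack named "examplestack"
server = inputs1.examplestack.splunkcloud.com:9997, inputs2.examplestack.splunkcloud.com:9997
# TLS is enforced; this CA cert path is a placeholder for what the app installs
sslRootCAPath = $SPLUNK_HOME/etc/apps/100_examplestack_splunkcloud/default/cacert.pem
sslVerifyServerCert = true
```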
There's a few ways to go about this and none of them are really easy. There's a specific command, 'delta', but it only works for one field, so we'll have to go a bit off-road. I used my firewall data "by transport" instead of "by user" (i.e. tcp, udp, etc.), but I'm sure you can adapt it to yours (only lines 1 and 2 need changing):

index=fw
| timechart span=10m count by transport
| streamstats window=2 first(*) as first_* last(*) as last_*
| foreach *_* [eval delta_<<MATCHSEG2>> = first_<<MATCHSEG2>> - last_<<MATCHSEG2>>]
| fields delta*

So what that does: lines 1 and 2 are more or less like you have them. Start by running just those two and adapting until your timechart data comes out OK. In line 3 we use streamstats to build groups of two of those events; though it looks messy with the wildcards and underscores, it builds new fields like first_tcp, last_tcp and so on. Line 4 is foreach, and says: for every field with a _ in the middle, like first_tcp, make a new field delta_tcp which is first_tcp - last_tcp. The last line trims out all fields except our delta_* fields, because those are the only ones we want. Try those - get the first two lines working, then add one line at a time and watch what each one adds/does.

Happy Splunking! -Rich
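PS: adapted to the "by username" case from the question, it would look roughly like this. The index and search string are placeholders, and this assumes the usernames themselves don't contain underscores (which would confuse the *_* match in foreach):

```
index=test_index "search string"
| timechart span=10m count by username
| streamstats window=2 first(*) as first_* last(*) as last_*
| foreach *_* [eval delta_<<MATCHSEG2>> = first_<<MATCHSEG2>> - last_<<MATCHSEG2>>]
| fields delta*
```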
We have a small satellite deployment of 40+ servers, with a dedicated HF doubling as a Deployment Server running on Linux; an equal mix of Windows and Linux. 24h ago we discovered that a few of the Windows servers were reporting that they no longer had the Windows_TA installed but were instead running the Linux_TA. Checking the UF hosts directly, they were in fact running the Windows_TA, even though the DS was reporting they were running the Linux_TA. After a day of trying to figure out how (validated filters, tested, removed and re-added all Server Classes and Apps), it continued. Throughout the day a few more started reporting this mix-up, and again, those reporting Linux_TA were validated to be running Windows_TA. As a final drastic measure, we removed Splunk from the host (the HF/DS, not the UFs), reinstalled from scratch, and created the environment anew. Made sure the UFs were not running any of the distributed apps/TAs. Built new Apps and Server Classes. The UFs started phoning home, and once again the Windows servers were reported as running the Linux_TA while actually running the Windows_TA.
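For reference, a sketch of roughly how the server classes are shaped. The names here are illustrative, and the machineTypesFilter lines are a guard we are considering so a filter mix-up can't map a TA to the wrong platform, not a confirmed fix:

```
[serverClass:windows_hosts]
whitelist.0 = *
# restrict this class to Windows UFs regardless of any other filter
machineTypesFilter = windows-x64

[serverClass:windows_hosts:app:Windows_TA]

[serverClass:linux_hosts]
whitelist.0 = *
# restrict this class to Linux UFs
machineTypesFilter = linux-x86_64

[serverClass:linux_hosts:app:Linux_TA]
```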
I have an on-prem Splunk Enterprise installation, consisting exclusively of Universal Forwarders and a single Indexer. We now have a cloud-hosted environment that is restricted, as it is hosted by an external company. They do not allow us to install any software (but their own) on the servers. Is there any way to get data into my Indexer without a forwarder? Without a forwarder, am I able to apply allow/deny lists to events?
I had this exact issue in one environment: versioning turned off in AWS S3 and turned on in Splunk. It works perfectly fine until an index bucket needs to freeze. Then Splunk is not able to remove any index-bucket-related files on S3, and splunkd will log errors and warnings. This event gives a hint of the issue:

03-26-2024 18:53:54.640 +0100 WARN S3Client [118080 FilesystemOpExecutorWorker-0] - Error removing object name=splunk01/index01/db/9c/b2/1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF/receipt.json(0,-1,0,) as versions of the object could not be listed

These are all events related to failing to freeze an index bucket on S3:

03-26-2024 18:53:54.640 +0100 INFO BucketMover [118080 FilesystemOpExecutorWorker-0] - RemoteStorageAsyncFreezer freeze completed succesfully for bid=index01_ccd~1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF
03-26-2024 18:53:54.640 +0100 WARN DatabaseDirectoryManager [118080 FilesystemOpExecutorWorker-0] - failed to request CacheManager to remove remote data for bucket, cache_id="bid|index01_ccd~1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF|", exception=Error removing bucket with cache_id="bid|index01_ccd~1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF|"
03-26-2024 18:53:54.640 +0100 ERROR CacheManager [118080 FilesystemOpExecutorWorker-0] - cache_id="bid|index01_ccd~1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF|", issue="Failed to remove receipt remoteId=splunk01/index01_ccd/db/9c/b2/1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF/receipt.json(0,-1,0,)"
03-26-2024 18:53:54.640 +0100 ERROR CacheManager [118080 FilesystemOpExecutorWorker-0] - Remove bucket cache_id="bid|index01_ccd~1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF|" with receiptId=splunk01/index01_ccd/db/9c/b2/1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF/receipt.json(0,-1,0,) failed
03-26-2024 18:53:54.640 +0100 WARN S3Client [118080 FilesystemOpExecutorWorker-0] - Error removing object name=splunk01/index01_ccd/db/9c/b2/1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF/receipt.json(0,-1,0,) as versions of the object could not be listed
03-26-2024 18:53:54.558 +0100 INFO CacheManager [118080 FilesystemOpExecutorWorker-0] - will remove cache_id="bid|index01_ccd~1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF|" from remote storage
03-26-2024 18:53:54.545 +0100 INFO CacheManager [118080 FilesystemOpExecutorWorker-0] - will remove cache_id="bid|index01_ccd~1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF|" removeRemote=1
03-26-2024 18:53:54.379 +0100 INFO BucketMover [118080 FilesystemOpExecutorWorker-0] - RemoteStorageAsyncFreezer trying to freeze bid=index01_ccd~1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF, freezeInitiatedByAnotherPeer=false
03-26-2024 18:53:54.379 +0100 INFO DatabaseDirectoryManager [118080 FilesystemOpExecutorWorker-0] - cache_id="bid|index01_ccd~1058~09FD8FE0-DA2A-4698-BE4C-BC2CD5D92EFF|" found to be on remote storage
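If the S3 bucket really has versioning disabled, the indexes.conf knob to look at is remote.s3.supports_versioning, so Splunk stops trying to list object versions on delete. A sketch, assuming a SmartStore volume stanza named after the paths in the logs above (adjust to your environment):

```
[volume:remote_store]
storageType = remote
path = s3://splunk01
# match the actual S3 bucket configuration (versioning disabled)
remote.s3.supports_versioning = false
```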
I didn't add those to the previous messages, as they were answered by phone. You can look them up at https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/TroubleshootSmartStore
You could use, for example, the isnull function in a where clause. With it you can drop unwanted rows.
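A minimal sketch (the field name is a placeholder):

```
... | where isnotnull(your_field)
```

Rows where your_field is null are dropped; flip it to isnull(your_field) to keep only those rows instead.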
https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf

[<spec>]
* This stanza enables properties for a given <spec>.
* A props.conf file can contain multiple stanzas for any number of different <spec>.
* Follow this stanza name with any number of the following setting/value pairs, as appropriate for what you want to do.
* If you do not set a setting for a given <spec>, the default is used.

<spec> can be:
1. <sourcetype>, the source type of an event.
2. host::<host>, where <host> is the host, or host-matching pattern, for an event.
3. source::<source>, where <source> is the source, or source-matching pattern, for an event.
4. rule::<rulename>, where <rulename> is a unique name of a source type classification rule.
5. delayedrule::<rulename>, where <rulename> is a unique name of a delayed source type classification rule. These are only considered as a last resort before generating a new source type based on the source seen.

**[<spec>] stanza precedence:** For settings that are specified in multiple categories of matching [<spec>] stanzas, [host::<host>] settings override [<sourcetype>] settings. Additionally, [source::<source>] settings override both [host::<host>] and [<sourcetype>] settings.

There is one caveat: this applies to the original sourcetype/source/host values the data is ingested with. If your props overwrite those values (for example by "splitting" a single sourcetype into multiple more specific ones, or by rewriting source/sourcetype as happens with some Windows logs, especially those read from ForwardedEvents), the new values don't affect event processing in the ingest pipeline.
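To make the precedence concrete, a hypothetical props.conf example; the stanza names and SEDCMD patterns are illustrative, not from any real config. The sourcetype-wide mask applies everywhere, except for the one source whose own stanza overrides it:

```
[access_combined]
# applies to all hosts/sources with this sourcetype
SEDCMD-mask = s/password=\S+/password=####/g

[source::/var/log/web01/access.log]
# source:: wins over the sourcetype stanza for this one file
SEDCMD-mask = s/password=\S+/password=REDACTED/g
```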
Did not work:

| makeresults
| eval _raw = "{\"orderNum\":\"1234\",\"orderLocation\":\"demoLoc\",\"details\":{\"key1\":\"value1\",\"key2\":\"value2\"}}"
| spath
| spath input=_raw path=details output=hold
| rex field=hold "\"(?<kvs>[^\"]*\"*[^\"]*\"*[^\"]*\"*)\"" max_match=0
| stats values(*) as * by kvs
| rex field=kvs "(?<key>[^\"]*)\" : \"(?<value>[^\"]*)" max_match=0
| table orderNum key value orderLocation

The value from the key-value pair can be an escaped JSON string; we also need to consider this while writing the regex.
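A different shape that sidesteps the double rex might help; a sketch, assuming the keys under details are dynamic and you want one row per key/value pair:

```
| makeresults
| eval _raw = "{\"orderNum\":\"1234\",\"orderLocation\":\"demoLoc\",\"details\":{\"key1\":\"value1\",\"key2\":\"value2\"}}"
| spath
| foreach details.* [eval kvs = mvappend(kvs, "<<MATCHSTR>>=" . '<<FIELD>>')]
| mvexpand kvs
| rex field=kvs "^(?<key>[^=]+)=(?<value>.*)$"
| table orderNum orderLocation key value
```

Escaped-JSON values survive this because the split happens on the first = only; if a key can itself contain =, the rex would need adjusting.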
Unfortunately no. I tested them both: separately with each boolean value, and together with true/true, true/false, false/true, and false/false. Neither seems to provide the matching needed.
I want to mask some data coming from web server logs, specifically from only one server out of all my web servers. Can I apply my masking rule to only that one webserver source, instead of all the web servers sending to the same sourcetype? And if I apply this rule to all web server logs, will it cause high resource usage on my indexer? Thanks
Hi all,

I'm analysing event counts for a specific search criterion and I want to know how the count of values changes over time. The search below is not good enough to see what's going on, as many usernames have a huge number of events while some with small counts are barely noticeable (I'm interested in the rate of change, not the count itself).

```
index=test_index "search string"
| timechart span=10m count(field1) by username
```

So I want to see the rate of change of the count, rather than the simple count, by the username field. How can we achieve this?
Neither is working for me. Their search gives an unwieldy table with 100+ columns; yours has only blanks for avg and max. Splunk 9.1.2
Thank you for all the updates. Due to the large number of devices I decided to use method #2 from the last post. My SPL looks like:

index=index2 OR (index=index1 sourcetype="metadata" "health.severity"!=NULL)
| eval IP_ADDRESS=if(index=index1, interfaces.address, PRIMARY_IP_ADDRESS) ```PRIMARY_IP_ADDRESS is from index2 to match interfaces.address from index1```
| stats dc(index) as indexes values(DISCOVERED_OS) as DISCOVERED_OS by interfaces.address
| where indexes=2
| table IP_ADDRESS

The query runs with no errors, but produced 0 (zero) events.

Thank you, Leon
I am sorry but I don't see any commands. Did you mean to attach them to the post?
Working with just this example; the same applies across the board.

get_device_trajectory_2:action_result.data.*.events.*.file.parent.file_name

.data.*.events.*. is most likely your problem. Every time your filter block hits a true, you're telling your format block to pull in all of the file names in the event data from get_device_trajectory_2. You'll need to find a way to tell it to only pull in the information from the index of the item you care about. Something like

get_device_trajectory_2:action_result.data.*.events.X.file.parent.file_name

where X is the item in the list that evaluated true.
How are you measuring / detecting the value of the load? How often do you want to check? Over what period do you want to measure the load?
I have 2 servers (hosts), and I need to create an alert that fires when the difference in value (or load) between the 2 hosts is greater than 50 percent.
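To make that concrete while the questions above are pending, one possible shape for such an alert; every name here is a placeholder (the index, sourcetype, load field, and host names depend entirely on how the load is collected):

```
index=your_metrics sourcetype=your_load_data host IN (hostA, hostB)
| timechart span=5m avg(load) by host
| eval pct_diff = abs('hostA' - 'hostB') / (('hostA' + 'hostB') / 2) * 100
| where pct_diff > 50
```

Scheduled as an alert, this would fire whenever the two hosts' average load differs by more than 50% of their mean in any 5-minute window.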