Consider this a one-time thing. Based on the requirement, we only move the data from one index to another index in another cluster, and we are not going to configure new data forwarding to the new index.
Thank you; and yes, I understand that. It's just that this is my only option "once per event" and I don't know if you have different options on your Splunk instance. If you do, then it's probably that I need to get with the team that administers this and ask them to add the other options.
Assuming words are always made of alphabetic letters, try something like this:
| rex max_match=0 field=txt1 "(?<initial>[a-zA-Z])[a-zA-Z]*_?"
| eval new_word=mvjoin(initial,"")
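For anyone who wants to verify the logic outside of SPL, the same first-letter extraction can be sketched in Python (a minimal illustration of the regex approach above, not part of the original answer):

```python
import re

def acronym(text: str) -> str:
    """Build an acronym from the first letter of each underscore-separated word."""
    # Mirrors the rex above: capture the leading letter of every alphabetic run,
    # optionally followed by an underscore separator.
    return "".join(re.findall(r"([a-zA-Z])[a-zA-Z]*_?", text))

print(acronym("Hello_World_Look_At_Me"))  # HWLAM
print(acronym("Hello_World"))             # HW
```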
Hi. Currently, I receive my Linux logs in an index called linux_logs with a syslog sourcetype. I would like to change the syslog sourcetype to the linux_secure sourcetype. How can I make that change so that new logs already arrive with the new sourcetype? My configuration: current sourcetype: syslog; desired sourcetype: linux_secure. Thanks!
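One common way to have newly collected events arrive with the desired sourcetype is to set it directly on the input stanza on the forwarder. A sketch, assuming a file monitor input (the monitor path below is an assumption; adjust it to whatever input actually collects these logs):

```ini
# inputs.conf on the forwarder (path is illustrative)
[monitor:///var/log/secure]
index = linux_logs
sourcetype = linux_secure
```

Note that this only affects events indexed after the change; data already at rest keeps its original syslog sourcetype.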
I am looking to create an acronym from a dynamic string by capturing the first letter of each broken substring. How do I write the script so that I can capture whatever number of substrings gets generated from the original string? e.g. "Hello_World_Look_At_Me" => "HWLAM", "Hello_World" => "HW". I'm thinking of doing the following, but this seems pretty lengthy. I'd like to know if there's a more efficient way of getting this done.
| eval txt1 = "Hello_World_Look_At_Me"
| eval tmp = split(txt1, "_")
| eval new_word = substr(mv_index(tmp,1), 1) + ...
Can we use a similar approach for UDP events as well? I have a UDP port monitor configured in the inputs of a UF, and multiple hosts are sending logs to that port, while I want to whitelist only one host and blacklist the rest.
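UDP network inputs don't support the whitelist/blacklist settings that file monitor inputs do, so host filtering is usually done at parse time instead, which means on an indexer or heavy forwarder rather than on the UF itself. A sketch of the common nullQueue routing approach (the port, stanza names, and host IP here are assumptions to adapt):

```ini
# props.conf
[source::udp:514]
TRANSFORMS-filterhosts = drop_all_hosts, keep_allowed_host

# transforms.conf
[drop_all_hosts]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_allowed_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.0\.0\.5$
DEST_KEY = queue
FORMAT = indexQueue
```

Order matters: the drop-all transform runs first and routes everything to the null queue, then the keep transform re-routes events from the allowed host back to the index queue.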
Is this a one-time thing, or will you have an on-going need to replicate data like this? Are you trying to isolate data for some org to search without being able to see other data?

If it is a one-time thing, can you add an indexer to your existing cluster and get the data replicated/balanced to it? Then remove that indexer from the cluster, reconfigure it to be on its own, and make the necessary retention settings for this new environment in your indexes.conf. You could also remove any data you didn't need.

Overall, moving just a single sourcetype from one environment to another when the data has already been at rest means you're probably going to need to do a manual export from index cluster A over to index cluster B. If it's not the only sourcetype in a single index, then you're going to have a bunch of interleaved data within your index buckets on the filesystem.

If you have on-going requirements to have data forked like this, then you should look towards using some sort of stream-management product like Splunk Edge or Cribl.
Considering that we can move data from one index to another index in the same cluster by moving buckets, I am in a scenario where I need to create an index on index cluster 2 with a new name and an increased retention period. How do I move the data of one index in, say, cluster 1 with replication factor 3 (i.e., every bucket has a copy on two other indexers under different names), which cannot simply be copied to the new index with a different name in cluster 2?

The challenge here is to identify the replicated buckets so they are not copied to the new index on the other cluster; we only copy the primary buckets to cluster 2 and allow cluster 2 to create the replicated buckets based on its own replication factor. Is this achievable, either via UI or CLI? How?

If I only want to copy data of a specific sourcetype from index 1 in cluster 1 to index 2 in cluster 2, how can I do that?

NOTE: I cannot create index 2 with the same name as index 1; that name has already been created and is in use for other data. How can I achieve this?
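On identifying replicated copies: in a clustered index, bucket directories for originating/primary copies conventionally carry a db_ prefix (db_&lt;newest&gt;_&lt;oldest&gt;_&lt;id&gt;...), while replicated copies carry an rb_ prefix. Worth verifying on your Splunk version, but a shell sketch of filtering on that naming would look like this (the directories below are fabricated stand-ins, not a real index path):

```shell
# Create a stand-in for $SPLUNK_DB/<index>/db with one primary (db_*)
# and one replicated (rb_*) bucket directory.
IDX_PATH=$(mktemp -d)
mkdir -p "$IDX_PATH/db_1700000000_1690000000_42" \
         "$IDX_PATH/rb_1700000000_1690000000_7"

# List only the primary copies: these are the candidates to export to
# cluster 2, letting cluster 2 rebuild its own rb_* copies afterwards.
find "$IDX_PATH" -maxdepth 1 -type d -name 'db_*'
```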
This has finally been addressed in a usable way that seems to not have any downside/impact in 9.1 (search for "Preserve search history across search heads"): https://docs.splunk.com/Documentation/Splunk/9.1.1/ReleaseNotes/MeetSplunk Scarily enough, it appears to be enabled by default.
That's strange because as far as I remember, the SA_CIM should _not_ have any "Launch app" link associated with it. It should have the "Set up" link which should lead to the setup screens (specifying filters, enabling acceleration and such). But there is nothing "launchable" within the app.
Hi, how do I create an automatic tag, given:

eventtypes.conf
[duo_authentication]
search = sourcetype=json:duo type=authentication

tags.conf
[eventtype=duo_authentication]
authentication = enabled

I also added "default: index=index_of_duo" to the admin, user, and power roles. But it simply won't add the tag (I don't understand why, since the eventtype search above works).
Checked each SH individually; they all behave exactly the same. I find the app in the management page, I click the "launch app" link, and I end up here: <splunkURI>:8000/en-GB/app/Splunk_SA_CIM/ta_nix_configuration
The CIM setup page now works just fine; it is the "launch app" link, or using the URI for the Splunk_SA_CIM app directly, that lands me on the Splunk_TA_Linux configuration page. I have not noticed this behaviour for any other app so far; everything else seems to be working just fine. It is possible, of course, that the problem exists outside of Splunk. At least I know this is not the expected behaviour, so now I just have to figure out why I cannot access the app without landing on the wrong page.