All Posts
Isn't your alert a real-time one by any chance? (Which isn't a very good idea anyway).
Ok. Again. One search gives you a single number. Another search returns several numbers (depending on how many titles you have in your data). What do you want to subtract from what? And again, why extract so many fields when in the end you're just doing stats count?
Ok. So you have some data which might have some format (but then again it might not), and you want us to find something for you, but you don't tell us what it is. How are we supposed to do that if we neither understand the data nor know what you're looking for?
Hi, my table of VPN connections by user puts the MAC address of the user's laptop in place of the external IP. CITY, COUNTRY, REGION, LAT, LON: everything related to the user's location comes up blank. Does this indicate any unusual activity by the user?
It should be achievable, but not with the GUI, and not with "just CLI". It requires a lot of bending over backwards to find the buckets, copy them over, rename them... Especially if the destination cluster is a production environment, I wouldn't touch it without testing in a dev environment and help from a friendly Splunk Consultant (or someone equally knowledgeable). And you can't "copy" just part of the buckets. You'd need to export the raw data and reingest it into the destination cluster.
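For scale, what that export/reingest route can look like, as a rough sketch only (index1, index2, the bucket name, and mysourcetype are all placeholders):

  # On a source indexer: export one bucket of index1 to CSV
  # (repeat for each db_* bucket you need).
  splunk cmd exporttool \
    $SPLUNK_HOME/var/lib/splunk/index1/db/db_1700000000_1690000000_12 \
    /tmp/index1_bucket12.csv -csv

  # On an instance that routes to the destination cluster:
  # reingest the exported events into the new index.
  splunk add oneshot /tmp/index1_bucket12.csv -index index2 -sourcetype mysourcetype

The reingest side glosses over real work: the exported CSV carries the original _time, host, source, and sourcetype values as columns that you'd need props.conf rules to re-apply, which is exactly why rehearsing in a dev environment first is good advice.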
Consider this a one-time thing. Per the requirement, we only move the data from one index to another index in another cluster; we are not going to configure new data forwarding to the new index.
OK, so the fact that there is a "launch" option might be the issue. Not that it is pointing to an incorrect location.
Thank you; and yes, I understand that. It's just that "once per event" is my only option, and I don't know if you have different options on your Splunk instance. If you do, then I probably need to get with the team that administers this and ask them to add the other options.
Assuming words are always made of alphabetic letters, try something like this:

  | rex max_match=0 field=txt1 "(?<initial>[a-zA-Z])[a-zA-Z]*_?"
  | eval new_word=mvjoin(initial,"")
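A self-contained way to test it (makeresults and the sample string are only there for the demo):

  | makeresults
  | eval txt1="Hello_World_Look_At_Me"
  | rex max_match=0 field=txt1 "(?<initial>[a-zA-Z])[a-zA-Z]*_?"
  | eval new_word=mvjoin(initial,"")

which should give new_word="HWLAM".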
Hi. Currently, I receive my Linux logs in an index called linux_logs with the syslog sourcetype. I would like to change the sourcetype from syslog to linux_secure. How can I make that change so that new logs already arrive with the new sourcetype? My configuration: sourcetype syslog (current), sourcetype linux_secure (desired). Thanks!
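One standard way to do this is an index-time sourcetype override in props.conf and transforms.conf on the first full Splunk instance in the pipeline (indexer or heavy forwarder, not the UF). A minimal sketch, where the stanza name force_linux_secure is just a placeholder:

  # props.conf
  [syslog]
  TRANSFORMS-force_linux_secure = force_linux_secure

  # transforms.conf
  [force_linux_secure]
  REGEX = .
  DEST_KEY = MetaData:Sourcetype
  FORMAT = sourcetype::linux_secure

If only some syslog events should be retyped, narrow REGEX to match just those events (or key off the sending host via SOURCE_KEY = MetaData:Host). This only affects newly indexed data; events already indexed keep their old sourcetype.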
I am looking to create an acronym from a dynamic string by capturing the first letter of each underscore-separated substring. How do I write this so it captures however many substrings get generated from the original string? i.e. "Hello_World_Look_At_Me" => "HWLAM", "Hello_World" => "HW". I'm thinking of doing the following, but this seems pretty lengthy. I'd like to know if there's a more efficient way of getting this done.

  | eval txt1 = "Hello_World_Look_At_Me"
  | eval tmp = split(txt1, "_")
  | eval new_word = substr(mvindex(tmp,0), 1, 1) . ...
Can we use a similar approach for UDP events as well? I have a UDP port input configured in inputs.conf on a UF, and multiple hosts are sending logs to that port, while I want to whitelist only one host and blacklist the rest.
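For UDP inputs specifically, inputs.conf supports a network ACL via the acceptFrom setting, which would let the UF itself drop traffic from unwanted senders. A minimal sketch, assuming port 514 and 10.1.2.3 as the single allowed host (both placeholders):

  # inputs.conf on the UF
  [udp://514]
  acceptFrom = 10.1.2.3, !*
  sourcetype = syslog

Entries are evaluated in order, so the explicit allow for 10.1.2.3 matches before the trailing !* denies everyone else.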
Is this a one-time thing, or will you have an ongoing need to replicate data like this? Are you trying to isolate data for some org to search without being able to see other data?

If it is a one-time thing, can you add an indexer to your existing cluster and get the data replicated/balanced to it, then remove it from the cluster, reconfigure it to stand on its own, and make the necessary retention settings for the new environment in your indexes.conf? You could also remove any data you didn't need.

Overall, moving just a single sourcetype from one environment to another when the data is already at rest means you're probably going to need to do a manual export from index cluster A over to index cluster B. If it's not the only sourcetype in a single index, then you're going to have a bunch of interleaved data within your index buckets on the filesystem.

If you have ongoing requirements to have data forked like this, then you should look towards using some sort of stream-management product like Splunk Edge or Cribl.
Considering we can move data from one index to another index in the same cluster by moving buckets, I am in a scenario where I need to create an index on index cluster 2 with a new name and an increased retention period. How do I move the data of an index in, say, cluster 1 with replication factor 3 (i.e., every bucket has copies on two other indexers under different names), given that those copies cannot simply be copied into the new, differently named index in cluster 2? The challenge is to identify the replicated buckets so they are not copied to the new index on the other cluster; we only want to copy the primary buckets to cluster 2 and let cluster 2 create its own replicated buckets based on its own replication factor.

Is this achievable, either via UI or CLI? How? And if I only want to copy data of a specific sourcetype from index 1 in cluster 1 to index 2 in cluster 2, how can I do that? NOTE: I cannot create index 2 with the same name as index 1; that name already exists and is in use for other data.

How can I achieve this?
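On the "identify the replicated buckets" part: in an indexer cluster, bucket copies a peer created itself sit in directories prefixed db_, while copies it received through replication are prefixed rb_. So a crude way to list only the origin copies on each peer (index1 and the path are placeholders) is:

  # Run on each cluster peer; lists origin (non-replicated) warm/cold
  # bucket copies of index1. Replicated copies live in rb_* directories.
  find $SPLUNK_HOME/var/lib/splunk/index1/db -maxdepth 1 -type d -name 'db_*'

Note this tells you which copies are originals, not which are currently primary for search, and as the answers here say, any manual bucket surgery deserves a rehearsal in a dev environment first.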
This has finally been addressed in a usable way that seems not to have any downside/impact in 9.1 (search for "Preserve search history across search heads"): https://docs.splunk.com/Documentation/Splunk/9.1.1/ReleaseNotes/MeetSplunk Scarily enough, it appears to be enabled by default.
If you click on "For each result", you will get alerts for each result separately.
"Once" is the only option that I have. I don't know if this is due to a backend config or what, but it's my only option.
It is in the screenshot you pasted yourself. You have either "Once" or "for each result". In your screenshot the option "Once" was selected.
That's strange, because as far as I remember, SA_CIM should _not_ have any "Launch app" link associated with it. It should have a "Set up" link, which should lead to the setup screens (specifying filters, enabling acceleration, and such). But there is nothing "launchable" within the app.