All Posts



Hi @nabeel652 , if the alert must run only on the second Tuesday of the month, you could keep your cron schedule and add a condition to the alert that the day of month must be between 8 and 14: <your_search> (date_mday>7 date_mday<15) | ... Ciao. Giuseppe
So you're telling us it's incompatible or asking us about it? The docs say nothing about the underlying OS compatibility. What issue do you have with the input?
Hello, we meet issue as unix and linux add-on is incompatible with rhel 9.4 ( cause of scripted input). Does Splunk PCI Compliance Add-on used are rhel 9.4 and above compatibles ? regards  
Hello Splunkers, I have a requirement to run an alert on the second Tuesday of each month at 5:30am. I came up with 30 05 8-14 * 2. However, Splunk tends to run it every Tuesday, regardless of whether the date is between the 8th and the 14th. Is this a shortcoming in Splunk, or am I doing something wrong?
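For what it's worth, in standard cron semantics, when both the day-of-month and day-of-week fields are restricted, the entry fires when *either* matches, so `30 05 8-14 * 2` runs on every Tuesday and additionally on every day 8-14. The usual workaround is to keep the Tuesday schedule and re-check the day of month inside the alert itself. A minimal Python sketch of that check (the function name is mine, just for illustration):

```python
from datetime import date

def is_second_tuesday(d: date) -> bool:
    # The second Tuesday of any month always falls on day 8-14:
    # days 1-7 contain exactly one Tuesday, and days 8-14 the next one.
    # weekday() == 1 means Tuesday (Monday is 0).
    return d.weekday() == 1 and 8 <= d.day <= 14

# Example: October 2024 -- Oct 1 was a Tuesday, so the second
# Tuesday is Oct 8, and Oct 15 is already the third Tuesday.
```

This is why the day-of-month filter in the alert's search (e.g. `date_mday>7 date_mday<15`) is needed even though the cron expression already says `8-14`.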
Hi Team, We are trying to extract JSON data with a custom sourcetype. With the current configuration, all JSON objects are being combined into a single event in Splunk. Ideally, each JSON object should be recognized as a separate event, but the configuration is not breaking them apart as expected. I observed that each JSON object has a comma after the closing brace }, which appears to be causing the issue by preventing Splunk from treating each JSON object as a separate event. Sample data:

{ "timestamp":"1727962122", "phonenumber": "0000000" "appname": "cisco" },
{ "timestamp":"1727962123", "phonenumber": "0000000" "appname": "windows" },

Error message: JSON StreamID:0 had parsing error: Unexpected character while looking for value comma ','
Thanks in advance
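One common approach for concatenated `{...},{...}` objects like this is to break events at the `},` ... `{` boundary with a custom LINE_BREAKER, so the separating comma is discarded. A sketch of the props.conf, assuming a placeholder sourcetype name and that `timestamp` is an epoch value as in the sample (note the sample objects are also missing commas between fields, e.g. after `"0000000"`, which would still make JSON field extraction fail even after event breaking is fixed):

```
# props.conf on the parsing tier (heavy forwarder or indexers) -- a sketch
[my_custom_json]
SHOULD_LINEMERGE = false
# Break between "}," and the next "{"; the captured comma/whitespace
# is discarded, so each event is a bare {...} object
LINE_BREAKER = \}(\s*,\s*)\{
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %s
```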
I have clarified my requirements above, which might make it easier to understand.
I sent the following SPL to run as a background job in Splunk: | metadata type=sourcetypes | search totalCount > 0 . I then deleted this search job, but when I refresh (F5) the Splunk search page, the same job runs again. How can I delete this job completely? It keeps getting executed over and over.
Hello, @sainag_splunk  I tried more things, but the issue still occurs, so I checked our messages, and I will try to apply something in props.conf. As I mentioned, my infra has 3 search heads and 5 indexers. If I set props.conf, is the field KV_MODE = json applied in props.conf on both the search heads and the UF? I mean, will this be applied on the UF at "<SPLUNK_HOME>/etc/deployment-app/<target>/local/props.conf" and on all 3 search heads at "<SPLUNK_HOME>/system/local/props.conf"? How do I apply a setting to the UF and the search heads?

---

I have tried removing "INDEXED_EXTRACTIONS = json" and adding "KV_MODE = json", but the results were shown as below:

24/10/28 16:33:57.000 { "Versions" : { "google_version" : "telemetry-json1.json", "ssd_vendor_name" : "Vendors", (show more: 257 rows)
24/10/28 16:33:57.000 "0xbf4 INFO System shutdown: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbf4 INFO System state active: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbf1 INFO System shutdown: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbf1 INFO System state active: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbee INFO System shutdown: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbee INFO System state active: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbec INFO System shutdown: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "dram_corrected_count" : 0, host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 ], host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 { } host = <host> source = <source> sourcetype = my_json

I don't know why the indexers received the data line by line after the first JSON parsing. LINE_BREAKER is the same as above. Thank you.
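On the deployment question above: KV_MODE is a search-time setting, so it only takes effect in props.conf on the search heads; parse-time settings such as LINE_BREAKER and SHOULD_LINEMERGE belong on the first full Splunk instance in the data path (heavy forwarder or indexers). A plain UF applies neither. A minimal sketch, assuming the sourcetype `my_json` seen in the output above (app and path names are placeholders):

```
# Search heads (search-time), e.g. in a deployed app:
# $SPLUNK_HOME/etc/apps/my_app/local/props.conf
[my_json]
KV_MODE = json

# Indexers / heavy forwarder (parse-time), in their own props.conf:
[my_json]
SHOULD_LINEMERGE = false
```

Note that INDEXED_EXTRACTIONS and KV_MODE for the same sourcetype are generally not combined, since that tends to produce duplicate field extractions.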
First, you must sign in to Splunkbase to be able to download apps.  Once you do that, you'll be directed to the Visdom website where you can find out more about their product.  I presume you can download their app once you purchase their product. Splunkbase says the app is supported by the developers.
Hi, Thank you guys. This helped a lot. I am sorry for the late reply; I was away for the weekend. The primary business case is to count the number of emails and their sizes (grouped by sender's SMTP address) sent from Proofpoint SER to internal SMTPs. The secondary case is to get message-level information about these messages (from, to, number of recipients, subject, size). These are two independent Splunk queries.
I'd rather chown the old version and make it match the new one.  I think I tried that on one of my update tests, and it complained a lot before failing.  That's kinda why I'm thinking of uninstalling the old one and installing it fresh.
Do you use scripts to do your install/upgrade? After the upgrade, could you not just chown the whole directory back to the original splunk user, as you have done until now? There are many reasons why this might not work for you. Honestly, though, given that this is the new direction, it is something you would have to carry forward with every upgrade. While it would be a big lift, moving everything over now might be easier than trying to revert back to the splunk user every time.
We found the Visdom for Citrix VDI listing on Splunkbase interesting, but we're not seeing how to download the app to review. Is this app still available and supported by the developer(s)?
Any reason why it has to be a filter and not a decision block? Do you want it to only match on one condition and ignore the other condition?
I recommend coding your own modular input. You can use the page number as a checkpoint: index a page, then increment or decrement the checkpoint. Set an interval so that your input fetches a page every X seconds, and add a condition to stop when the checkpoint reaches the end.
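The checkpoint loop described above can be sketched as follows. This is a minimal illustration, not a full modular input: the checkpoint file location, function names, and the `fetch_page` callback are all placeholders for your real input script and API client.

```python
import json
import os

CHECKPOINT_FILE = "page_checkpoint.json"  # placeholder path

def load_checkpoint(path=CHECKPOINT_FILE):
    """Return the last saved page number, defaulting to page 1."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["page"]
    return 1

def save_checkpoint(page, path=CHECKPOINT_FILE):
    """Persist the next page number to fetch."""
    with open(path, "w") as f:
        json.dump({"page": page}, f)

def run_once(fetch_page, last_page):
    """One scheduled run: fetch the checkpointed page, then advance.

    fetch_page(n) stands in for the real API call. Returns the events
    for the page, or None once the checkpoint has passed last_page.
    """
    page = load_checkpoint()
    if page > last_page:
        return None  # done paginating; stop advancing the checkpoint
    events = fetch_page(page)
    # ...here a real input would write `events` for Splunk to index...
    save_checkpoint(page + 1)
    return events
```

Each scheduled interval then calls `run_once` once, so the input walks one page forward per run and naturally stops at the end.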
I haven't upgraded the UF in a while, and I'm having some trouble figuring out how I should proceed with bringing it up to date. I see that the current version has changed the user from splunk to splunkfwd. I also see that updating an existing UF keeps the user as splunk (this seems to work, but not always). This means that new installations will use a different username than updated UFs. This is a problem for me because I use scripts to make the permission changes that give splunk access to the appropriate log files. I'm not finding much guidance on how to keep this sane. How have other organizations dealt with this? I'm tempted to uninstall the UF and do a fresh install on every system. That will force me to manage Splunk servers differently than other Linux servers, but that has to be less complicated than trying to keep track of which systems use splunk and which use splunkfwd.
You can download several versions directly from Splunkbase. But if the data "is not sending" it means that it's actually not being ingested because otherwise, unless you're explicitly filtering it, it should be forwarded to your downstream receivers. So check your config, check your logs, check your metrics. There are no miracles - something must be wrong.
Can you clarify what you meant by "join" will get me nowhere?   Based on several discussions, it is apparent that you treat data in Splunk as if it were in a SQL database.   join is one of the commands included in SPL for good reasons, but it is often used outside of those reasons. It is true that a left join can give you a similar effect to append.  However, by using the join command in this manner, you mislead yourself into thinking that Splunk is actually performing a useful join when there is nothing to "join".  This thinking is quite obvious in the initial searches you illustrated.  The sooner you get out of the habit of using the join command, the easier Splunk will become for you. (Join in NoSQL should generally be avoided because of cost penalties; left join in NoSQL is even more expensive.  Although that is a lesser consideration in learning to program in SPL.)
There is no error message; the data is just not sending from my HF. Will you please share all of the versions compatible with Splunk Enterprise 9.x, or share where I can download them?
Did you ever figure out how to get it working? I'm having a similar issue.