All Posts


Hi @JandrevdM , you must find a common key between the records. If identity is your key, you could try something like this:

<your_search>
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

Ciao. Giuseppe
Good day, Is there a way to join all my rows into one? My simple query

index=collect_identities sourcetype=ldap:query user | dedup email | table email extensionAttribute10 extensionAttribute11 first last identity

shows the results below, as I have more than one email:

email | extensionAttribute10 | extensionAttribute11 | first | last | identity
user@domain.com |  | user@consultant.com | User | Surname | USurname
userT1@domain.com | user@domain.com | user@domain.com | User | Surname | USurname
userT0@domain.com | user@domain.com | user@domain.com | User | Surname | USurname

I want to add a primary key that searches for "user@domain.com" and displays all the email addresses they have in one row. Example:

email | extensionAttribute10 | extensionAttribute11 | first | last | identity | email2 | email3
user@domain.com | user@domain.com | user@consultant.com | User | Surname | USurname | userT1@domain.com | userT0@domain.com
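A minimal SPL sketch of the stats approach from the reply above, extended so that the primary address stays in the email column; the mvfilter/mvindex handling and the literal "user@domain.com" filter are assumptions about which address should be treated as primary, not something from the original post:

index=collect_identities sourcetype=ldap:query user
| stats values(email) AS all_emails values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity
| eval email=mvfilter(match(all_emails, "^user@domain\.com$"))
| eval others=mvfilter(NOT match(all_emails, "^user@domain\.com$"))
| eval email2=mvindex(others, 0), email3=mvindex(others, 1)
| fields - all_emails others

mvfilter can only reference one multivalue field per call, which is why the primary and secondary addresses are separated in two eval steps.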
Hello all, I configured an app and, in the asset config, I added an environment variable "https_proxy", but somehow the action still does not go out via the proxy and instead tries to go directly to the destination address. I opened the app code to look for references to this variable, but I couldn't find any. Can anyone shed light on how I can check where these variables are referenced? In other apps I manage to use the proxy variable successfully; this only happens with the AD LDAP app.
Hi There, I have a cluster on MongoDB Atlas that contains the data for my application. That cluster produces logs that can be downloaded in .log format or .gz (compressed) format. To query and view my logs easily, I would like to use Splunk. Is there any way to ingest those logs from MongoDB Atlas into a Splunk instance via API? If there is, could anyone kindly share documentation or a process describing how to accomplish this? NB: I can obtain the logs from MongoDB Atlas via a cURL request.
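One possible approach, sketched under assumptions: since the Atlas logs can already be pulled with cURL, they can be pushed into Splunk over the HTTP Event Collector (HEC). The stanza name, index, and sourcetype below are placeholders, not values taken from the Atlas or Splunk documentation:

# inputs.conf on the receiving Splunk instance (HEC must be enabled globally)
[http]
disabled = 0
port = 8088

[http://mongodb_atlas_logs]
disabled = 0
token = <your-generated-token>
index = mongodb
sourcetype = mongod

The downloaded (and gunzipped) log lines can then be POSTed to https://<splunk_host>:8088/services/collector/raw with the header "Authorization: Splunk <your-generated-token>", the same style of cURL call already used to pull the logs from Atlas.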
Hi @rpfutrell , it appears to be a paid app. You have to contact the producer through their site; it's maintained by the developer. Ciao. Giuseppe
Hi @splunksuperman , I suppose that you're using a CSV file to import these data. You have two choices: use a date and time present in each row of the CSV file as the timestamp (if present), or use the current time (the index time) as the timestamp. For both solutions, you have to add an option to your sourcetype stanza in props.conf.

For the first one:
[your_sourcetype]
TIMESTAMP_FIELDS = <your_timestamp_field>

For the second one:
[your_sourcetype]
DATETIME_CONFIG = CURRENT

Then, to check if a server isn't sending logs, you have two choices: create a lookup containing the list of the hosts to monitor (called e.g. perimeter.csv and containing at least one field called "host") and run a search like the following, which checks whether there are logs from the listed hosts in the last 24 hours:

| tstats count WHERE index=your_index earliest=-24h latest=now BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

or check whether a server sent logs in the last 30 days but not in the last 24 hours with the following search:

| tstats latest(_time) AS latest count WHERE index=your_index earliest=-30d latest=now BY host
| eval period=if(latest<now()-86400,"Previous","Latest"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| where period="Previous"
| table host latest
| rename latest AS "Last Connection"

Ciao. Giuseppe
@anandhalagaras1 I am curious since I am working on a similar project, were you able to figure this out? I would appreciate it if you could share your response and your findings.
Which Forwarder agent version includes the fix for the OpenSSL 1.0.2 < 1.0.2zk vulnerability? If there is no fix for this yet, when can we expect one, or which forwarder version will include the fix to remediate this vulnerability? OpenSSL SEoL (1.0.2.x) OpenSSL 1.0.2 < 1.0.2zk Vulnerability
Hi @nabeel652 , if the alert must run only on the second Tuesday of the month, you could keep your cron schedule and add a condition to the alert search that the day of the month must be between 8 and 14: <your_search> (date_mday>7 date_mday<15) | ... Ciao. Giuseppe
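As a hedged alternative sketch, in case the date_mday field is not present in the data (it is not generated for all inputs), the same guard can be computed from the wall clock at search time; the 8-14 window below is simply the date range in which a second Tuesday can fall:

<your_search>
| eval day_of_month=tonumber(strftime(now(), "%d"))
| where day_of_month>=8 AND day_of_month<=14

On Tuesdays outside the 8th-14th window the search returns no rows, so a "number of results > 0" alert condition will not trigger.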
So you're telling us it's incompatible or asking us about it? The docs say nothing about the underlying OS compatibility. What issue do you have with the input?
Hello, we are facing an issue: the Unix and Linux add-on is incompatible with RHEL 9.4 (because of a scripted input). Are the add-ons used by the Splunk PCI Compliance app compatible with RHEL 9.4 and above? Regards
Hello Splunkers, I have a requirement to run an alert on the second Tuesday of each month at 5:30am. I came up with

30 05 8-14 * 2

However, Splunk tends to run it every Tuesday regardless of whether the date is between the 8th and the 14th. Is this a shortcoming in Splunk, or am I doing something wrong?
Hi Team, We are trying to extract JSON data with a custom sourcetype. With the current configuration, all JSON objects are being combined into a single event in Splunk. Ideally, each JSON object should be recognized as a separate event, but the configuration is not breaking them apart as expected. I observed that each JSON object has a comma after the closing brace }, which appears to be causing the issue by preventing Splunk from treating each JSON object as a separate event.

Sample data:
{
"timestamp":"1727962122",
"phonenumber": "0000000"
"appname": "cisco"
},
{
"timestamp":"1727962123",
"phonenumber": "0000000"
"appname": "windows"
},

Error message: JSON StreamID:0 had parsing error: Unexpected character while looking for value comma ','

Thanks in advance
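A minimal props.conf sketch for this kind of stream, assuming the objects really do arrive back to back separated by "},": the first capture group in LINE_BREAKER is discarded, so breaking on the comma between objects also strips it from the events. The sourcetype name and the timestamp settings are assumptions based on the sample above:

[your_custom_json_sourcetype]
SHOULD_LINEMERGE = false
# break between objects: keep "}", drop the ", <newline>", start the next event at "{"
LINE_BREAKER = \}(\s*,\s*)\{
# "timestamp" in the sample looks like epoch seconds
TIME_PREFIX = "timestamp":"
TIME_FORMAT = %s

Note that the sample objects themselves are missing the comma after the "phonenumber" value; even with the events broken correctly, JSON field extraction will only work once each object is valid JSON.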
I have clarified my requirements above, which might make it easier to understand.
In Splunk, I sent the following SPL to run as a background job: | metadata type=sourcetypes | search totalCount > 0 After that, I deleted this search job, but when I refresh the Splunk search page (F5), the same job runs again. How can I delete this job completely? It keeps getting executed over and over.
Hello, @sainag_splunk  I tried a few more things, but the issue still occurs. So I checked our messages, and I will try to apply something to props.conf. As I mentioned, my infrastructure has 3 search heads and 5 indexers. If I set props.conf, does the KV_MODE=json setting have to be applied on both the search heads and the UF? I mean, should this be applied on the UF in "<SPLUNK_HOME>/etc/deployment-app/<target>/local/props.conf" and on all 3 search heads in "<SPLUNK_HOME>/system/local/props.conf"? How do I apply such a setting to the UF and the search heads?

---

I have tried to remove "EXTRACED_INDEXED=json" and add "kv_mode=json", but the results were shown as below:

24/10/28 16:33:57.000 { "Versions" : { "google_version" : "telemetry-json1.json", "ssd_vendor_name" : "Vendors", show more 257 rows
24/10/28 16:33:57.000 "0xbf4 INFO System shutdown: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbf4 INFO System state active: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbf1 INFO System shutdown: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbf1 INFO System state active: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbee INFO System shutdown: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbee INFO System state active: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "0xbec INFO System shutdown: 0h 0h 0h", host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 "dram_corrected_count" : 0, host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 ], host = <host> source = <source> sourcetype = my_json
24/10/28 16:33:57.000 { } host = <host> source = <source> sourcetype = my_json

I don't know why the indexers received the data broken line by line after the first JSON parsing. The LINE_BREAKER is the same as above. Thank you.
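A sketch of where each setting usually lives, assuming the UF sends unstructured (cooked but unparsed) data: KV_MODE is a search-time setting and belongs on the search heads, while event breaking applies on the tier that first parses the data (the indexers or a heavy forwarder; the UF only if INDEXED_EXTRACTIONS is used). The app name, sourcetype, and LINE_BREAKER value below are placeholders:

# On the 3 search heads: search-time JSON field extraction
# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf
[my_json]
KV_MODE = json

# On the indexers (or heavy forwarder) that first parse the stream:
# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf
[my_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = <your_line_breaker>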
First, you must sign in to Splunkbase to be able to download apps.  Once you do that, you'll be directed to the Visdom website where you can find out more about their product.  I presume you can download their app once you purchase their product. Splunkbase says the app is supported by the developers.
Hi, Thank you guys. This helped a lot. I am sorry for the late reply; I was away for the weekend. The primary business case is to count the number of emails and their sizes (grouped by sender's SMTP address) sent from Proofpoint SER to internal SMTPs. The secondary case is to get message-level information about these messages (from, to, number of recipients, subject, size). These are two independent Splunk queries.
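A minimal sketch of the first query, with heavy assumptions: the index, sourcetype, and field names (sender, size, recipient, subject) are placeholders that would need to be mapped to the actual Proofpoint SER field names in your environment:

index=proofpoint sourcetype=pps_ser
| stats count AS messages sum(size) AS total_bytes BY sender

The second, message-level case would be a plain table over the same events, e.g. | table _time sender recipient subject size, with the number of recipients per message available via eval recipient_count=mvcount(recipient).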
I'd rather chown the old version and make it match the new one.  I think I tried that on one of my update tests, and it complained a lot before failing.  That's kinda why I'm thinking of uninstalling the old one and installing it fresh.
Do you use scripts to do your install/upgrade? After the fact, could you not just chown the whole directory back to the original splunk user so it runs as you originally had it? There are many reasons why this might not work for you. Honestly, though, given that this is the new direction, it would be something you have to carry forward with every upgrade. While it would be a big lift, the idea of moving everything over now might be easier than trying to always revert back to the splunk user.