
All Posts

Hello there, our shop uses Proofpoint VPN for our remote users to access on-prem resources. I've been looking through Splunkbase to see if there's a published app, but I don't see any add-on for VPN data ingestion. I see there's a Proofpoint email security add-on, but it doesn't seem to relate to VPN logs. Any ideas which add-ons/apps will work for it? Thanks.
Yes, I followed the steps, but it did not work in this case. It is still showing the reason below.
Try this:

| rex "project\sid[\s\:]+(?<project_id>[^\s]+).+?is[\s\:]+(?<size>[^\s]+).+?is[\s\:]+(?<upload_time_ms>\d+)"
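To sanity-check the pattern, here is a quick makeresults run against a made-up sample line (the sample line is hypothetical; adjust it to your actual event format):

| makeresults
| eval _raw="project id: proj42 file size is 1024 upload time is 350"
| rex "project\sid[\s\:]+(?<project_id>[^\s]+).+?is[\s\:]+(?<size>[^\s]+).+?is[\s\:]+(?<upload_time_ms>\d+)"
| table project_id size upload_time_ms

This should give project_id=proj42, size=1024, and upload_time_ms=350.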
It goes on to say "... options in the source code. The options accept hexadecimal and RGBA formats, and can also be defined in the dashboard defaults." So try something like this:

{
  "type": "splunk.table",
  "title": "Sample title for testing color",
  "options": { "titleColor": "#ff0000" },
  "context": {},
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
While syslog-ng is often used with Splunk, it is not part of the Splunk solution, and since your question is not about interfacing syslog-ng with Splunk but is rather a general issue with syslog-ng itself, it will be much better answered on its own mailing list: https://lists.balabit.hu/mailman/listinfo/syslog-ng
Hello, 2 events do not produce 4 results; 2 events will produce just 1 result. The log I provided was just a sample set to show what I am searching for. So, if I search for just "View Refresh" over a duration of 1 hour, I see 4 sets of events, i.e., 4 "start"/"end" pairs. So when I ran my query I was expecting 4 duration values, 1 for each set, but I get 2 duration values. RichGalloway suggested adding maxspan along with transaction. I did that, but I still get the same result, i.e., 2 duration values and NOT 4.
In release 9.2.2403 I see that: "You can customize the text color of dashboard panel titles and descriptions with the titleColor and descriptionColor options in the source code..." But I'm not sure how to modify the source code appropriately to make this work. If I have this basic starting point:

{
  "type": "splunk.table",
  "title": "Sample title for testing color",
  "options": {},
  "context": {},
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

Where can I insert titleColor? My Splunk Cloud version is 9.2.2403.108.
Since I don't control the growth of data in the Contact DB, I am trying to figure out a way to get an email alert if one of the groups exceeds the 50k limit. That's exactly what my first suggestion does: print a line if and only if one of them exceeds 50k (if you substitute 50000 for 5000). All you need to do is add sendemail after that.
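A minimal sketch of that tail end, assuming the per-group row counts land in a field named row_count (the field name and email address are placeholders):

... your existing search ...
| where row_count > 50000
| sendemail to="you@example.com" subject="Contact DB group exceeded 50k rows" sendresults=true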
Trying to use syslog-ng with the latest Splunk Enterprise. I am getting the error "Failed to acquire /run/systemd/journal/syslog socket, disabling systemd-syslog source" when I try to run the service manually. This error prevents me from running the syslog-ng service via systemctl during bootup. Any idea or help would be appreciated.
Thanks for the reply.  Cheers.
Can you explain @richgalloway's main question: how can two events produce 4 transactions (durations)? Here is an emulation of the two events you illustrated, and the transaction command to follow:

| makeresults format=csv data="_raw
2024-10-10T06:30:11.478-04:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
2024-10-10T06:30:11.509-04:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!"
| eval _time = strptime(replace(_raw, "(\S+).*", "\1"), "%FT%T.%3N%z")
| sort - _time
``` the above emulates index=* ("Start View Refresh (price_vw)" OR "End View Refresh (price_vw)") ```
| transaction endswith="End View Refresh" startswith="Start View Refresh"

The result is a single transaction:

_time: 2024-10-10 03:30:11.478
duration: 0.031
eventcount: 2
closed_txn: 1
field_match_sum: 0
linecount: 2
_raw:
2024-10-10T06:30:11.478-04:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
2024-10-10T06:30:11.509-04:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!

As richgalloway predicted, one duration.
So your timestamp extraction definition is not used: unless &auto_extract_timestamp=true is added to the /event URI, that endpoint skips timestamp extraction completely and uses the "time" field from the event's envelope, or (if there isn't one) the current timestamp from the receiving component (in your case, the HF).
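For example, if the sender can modify the target URL, something like this (host and port are placeholders for your HF's HEC listener) would re-enable timestamp extraction:

https://your-hf.example.com:8088/services/collector/event?auto_extract_timestamp=true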
@PickleRick  It is sending to services/collector/event    
Thanks @dural_yyz  Will try that.
No. Your data is not "in Splunk"; you're fetching the results from the remote data source on every single search. I would ingest the data into a Splunk index and simply do a stats-based "join" on that data.
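As a rough sketch of the stats-based "join", assuming both datasets are ingested and share an ip field (the index and sourcetype names here are made up):

index=db_data (sourcetype=host_table OR sourcetype=contact_table)
| stats values(host) as host values(contact) as contact by ip

This avoids the subsearch row limits that | join runs into.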
Hi @PickleRick, thanks for your help.
1. Like I mentioned, the DB is on a different connection; if it's possible, it will take a while until the DB team works on this. So, as a workaround, I will need to do this at least to get the data now.
2. Yes, 50k is for the join.
3. Thanks. Let me look into _internal. The alerting I am looking for is not only for cases where certain data hits a Splunk internal threshold; I also need it for other cases (non-internal thresholds), for example, if my scheduled report contains empty data or if data hits a certain threshold (max/min).
4. Sorry, perhaps my explanation in the example is not clear enough because it's difficult to lay it out without a real example in SPL. Both tables (host table and contact table) in the example have been reachable from Splunk and can be accessed via a DBX query. Like I mentioned before, the problem is that we cannot join in the DB; both are on different connections. The host table is in Connection A, and the contact table is in Connection B:

| dbxquery connection="connectionA" query="select ip, host from table host"
| dbxquery connection="connectionB" query="select ip, contact from table contact"

I did not search remotely on every search; instead I ran this command for each subnet to find the number of rows, for example 10.0.1.0/16 => 20k rows, and so on:

| dbxquery connection="connectionB" query="select ip, contact from table contact where ip::inet<'10.0.0.0/16'"
| dbxquery connection="connectionB" query="select ip, contact from table contact where ip::inet<'10.1.0.0/16'"
| dbxquery connection="connectionB" query="select ip, contact from table contact where ip::inet<'10.2.0.0/16'"
| dbxquery connection="connectionB" query="select ip, contact from table contact where ip::inet<'10.3.0.0/16'"

Once I figure out the number of rows, I group them until the total is right below 50k, so I am saving subsearches. If one subnet is above 50k, I will need to split it. I hope this makes sense. Note that this is only a workaround:

join max=0 type=left ip [| dbxquery connection="connectionB" query="select ip, contact from table contact where ip::inet<'10.0.0.0/16' OR ip::inet<'10.1.0.0/16'" | eval source="group1" ]
I can make it wider by reducing the time frame, but last 7 days is usually the default. Are you sure we can't do it on the CSS side? Thanks.
Yes, I understand that it's pushed to the HEC input on the HF. But to which API endpoint? Because there are at least three endpoints for the HEC input:

/services/collector/raw
/services/collector/event
/services/collector/mint

Additionally, the /event endpoint can accept parameters that change the ingestion process. So I repeat my question: to which endpoint is your data being sent?
Ahh... you found out yourself what I had just written to you. Good job. Remember that case matters in field names. It may or may not matter for field values, depending on how you're using the condition:

something | search a=b

will match whenever field a has a value of either b or B, but

something | where a="B"

will match only the upper-case B.
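You can see both behaviours with a quick makeresults test (the field name a and its value are placeholders):

| makeresults | eval a="b" | search a=B

keeps the event, because search matches field values case-insensitively, while

| makeresults | eval a="b" | where a="B"

drops it, because where compares strings case-sensitively.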
Case matters for field names, so if you use status_Code<300 when the field is actually named status_code, it won't match.
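A quick illustration (the 204 status value is arbitrary):

| makeresults | eval status_code=204 | where status_Code<300

returns nothing, because status_Code (capital C) refers to a nonexistent field, while | where status_code<300 returns the event.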