All Posts

Hi all, I have a search that works for a range of a few days (e.g. earliest=-7d@d), but it breaks when run over all time. I suspect this is an issue with appendcols or streamstats? Any pointers would be appreciated. I'm using this to generate a lookup which I can then search instead of running an expensive all-time search.

index=ndx sourcetype=src (device="PM4") earliest=0 latest=@d
| bucket _time span=1d
| stats max(value) as PM4Val by _time index
| appendcols [ search index=ndx sourcetype=src (device="PM2") earliest=0 latest=@d
    | bucket _time span=1d
    | stats max(value) as PM2Val by _time index ]
| streamstats current=f last(PM4Val) as LastPM4Val last(PM2Val) as LastPM2Val by index
| eval PM4ValDelta = PM4Val - LastPM4Val, PM2ValDelta = PM2Val - LastPM2Val
| table _time, index, PM4Val, PM4ValDelta, PM2Val, PM2ValDelta
| sort index -_time
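One likely culprit: appendcols pairs rows purely by position, so over an all-time range any day present for one device but missing for the other shifts every subsequent row out of alignment. A sketch of one way to avoid appendcols entirely, assuming both devices live in the same index and sourcetype as in the original search (untested):

index=ndx sourcetype=src (device="PM4" OR device="PM2") earliest=0 latest=@d
| bucket _time span=1d
| stats max(eval(if(device=="PM4",value,null()))) as PM4Val
        max(eval(if(device=="PM2",value,null()))) as PM2Val
        by _time index
| streamstats current=f last(PM4Val) as LastPM4Val last(PM2Val) as LastPM2Val by index
| eval PM4ValDelta = PM4Val - LastPM4Val, PM2ValDelta = PM2Val - LastPM2Val
| table _time, index, PM4Val, PM4ValDelta, PM2Val, PM2ValDelta
| sort index -_time

A single stats over both devices keeps each day's values on the same row by construction, so streamstats then sees consistently aligned rows per index.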
@PickleRick & @ITWhisperer thank you both for the replies, I understand there are other topics in the forum but all rely as you mentioned on a login/logout field, which is not present in my raw data... See more...
@PickleRick & @ITWhisperer thank you both for the replies, I understand there are other topics in the forum but all rely as you mentioned on a login/logout field, which is not present in my raw data, this is why I am calculating based on if there are events.   Sample of raw data:   Jun 24 15:01:20 10.50.8.100 1 2024-06-24T15:01:20+03:00 pafw01.company.com.sa - - - - 1,2024/06/24 15:01:19,007959000163983,TRAFFIC,end,2561,2024/06/24 15:01:19,192.168.44.43,10.130.11.2,0.0.0.0,0.0.0.0,GP-Access-Organization-Services-Applications,company\user1,,ssl,vsys1,GP-VPN,Trust,tunnel.21,ethernet1/4,splunk-forwarding,2024/06/24 15:01:19,1269402,1,61723,443,0,0,0x47a,tcp,allow,33254,13498,19756,210,2024/06/24 14:36:36,1454,White-List,,7352086992805546250,0x0,192.168.0.0-192.168.255.255,10.0.0.0-10.255.255.255,,105,105,tcp-rst-from-client,0,0,0,0,,pafw01,from-policy,,,0,,0,,N/A,0,0,0,0,09a8fe83-e848-4cbb-bdff-0d35a4ce96b2,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2024-06-24T15:01:20.681+03:00,,,encrypted-tunnel,networking,browser-based,4,"used-by-malware,able-to-transfer-file,has-known-vulnerability,tunnel-other-application,pervasive-use",,ssl,no,no,0    
A forwarder is an active component in the event's path, so every connection from/to the forwarder has its own settings and should not affect other connections from/to it. You can have a forwarder receiving encrypted and compressed data and sending it unencrypted and uncompressed, and vice versa (although sending data without TLS protection is of course not recommended). Anyway, if you're using TLS, useClientSSLCompression is enabled by default (but you can still explicitly enable it). If you're not using TLS, then with modern forwarders, if one end of the connection has compression enabled, the endpoints should negotiate compression on that link. (Of course, we're talking about S2S, not some syslog forwarding.)
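A minimal sketch of where these settings live, assuming the default S2S port 9997 (the group name and server here are placeholders):

outputs.conf on each sending instance (UF or IF):

[tcpout:my_indexers]
server = receiver.example.com:9997
compressed = true

inputs.conf on the receiving instance (IF or IDX):

[splunktcp://9997]
compressed = true

With TLS you would instead rely on useClientSSLCompression in the sender's [tcpout] stanza, which, as noted above, defaults to true.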
1. Don't use the "graphical" extractor. It is there for simple cases, but for more complicated ones it might not find a proper way of extracting fields, and even if it does, it will most probably not be a proper and efficient one.

2. As @ITWhisperer already pointed out, this seems to be a JSON structure. Use the proper KV_MODE and don't try to be smart. Fiddling with regexes against structured data usually ends badly.
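For reference, enabling automatic search-time JSON extraction is a one-line props.conf setting on the search head (the sourcetype name here is a placeholder):

[your_sourcetype]
KV_MODE = json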
I would avoid the join command if possible (it has its quirks and limitations). You might want to simply run your date/filename extraction over the whole 30-day span. Then just classify each event by time (check whether it's from the last 30 minutes or not):

| eval period=if(now()-_time>1800,"before","now")

Aggregate over your filenames:

| stats values(InputFilename) as InputFilenames values(OutputFilename) as OutputFilenames values(period) as periods by CompanyId Date

And now you can list only those that appear both "now" and "before":

| where mvcount(periods)>1

(You might not need all those fields; I don't fully understand your business case, but it's about comparing "now" and "then" - adjust accordingly.)
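Assembled into a single pipeline, with a placeholder base search standing in for the original poster's 30-day extraction:

index=your_index sourcetype=your_sourcetype earliest=-30d
| eval period=if(now()-_time>1800,"before","now")
| stats values(InputFilename) as InputFilenames values(OutputFilename) as OutputFilenames values(period) as periods by CompanyId Date
| where mvcount(periods)>1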
Can you please share sample code to create a button in each row of the table? Suppose I have 10 rows; I need to put 10 buttons in the same table.
Sadly, no, there is no field for login/logout; this is why I am trying to calculate based on whether there are events or activity for each user. Filtering is done by the source zone field.

Sample event:

Jun 24 15:01:20 10.50.8.100 1 2024-06-24T15:01:20+03:00 pafw01.company.com.sa - - - - 1,2024/06/24 15:01:19,007959000163983,TRAFFIC,end,2561,2024/06/24 15:01:19,192.168.44.43,10.130.11.2,0.0.0.0,0.0.0.0,GP-Access-Organization-Services-Applications,company\user1,,ssl,vsys1,GP-VPN,Trust,tunnel.21,ethernet1/4,splunk-forwarding,2024/06/24 15:01:19,1269402,1,61723,443,0,0,0x47a,tcp,allow,33254,13498,19756,210,2024/06/24 14:36:36,1454,White-List,,7352086992805546250,0x0,192.168.0.0-192.168.255.255,10.0.0.0-10.255.255.255,,105,105,tcp-rst-from-client,0,0,0,0,,pafw01,from-policy,,,0,,0,,N/A,0,0,0,0,09a8fe83-e848-4cbb-bdff-0d35a4ce96b2,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2024-06-24T15:01:20.681+03:00,,,encrypted-tunnel,networking,browser-based,4,"used-by-malware,able-to-transfer-file,has-known-vulnerability,tunnel-other-application,pervasive-use",,ssl,no,no,0
This is a fairly common question here; use the search to find other users' solutions. The answer will depend on what data you have about your user - most importantly, whether you have separate "log in" and "log out" events for a user, or simply one kind of "presence" event which is logged at fairly regular intervals as long as the user is logged in, and whose absence means that the user has disconnected. An additional caveat with the log in/log out events: what if the user disconnected without logging out (or the log out event simply got "lost", which can happen when receiving data via UDP syslog)? Anyway, you will need to use streamstats to either find the last log in for a given log out, or find the previous event to decide whether the current one is sufficiently "far back" to constitute a new session.
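A rough sketch of the first variant (pairing each log out with the preceding log in), assuming hypothetical user and action fields and an arbitrary index name:

index=vpn (action="login" OR action="logout")
| sort 0 user _time
| streamstats current=f last(action) as prev_action last(_time) as prev_time by user
| where action="logout" AND prev_action="login"
| eval session_duration = _time - prev_time
| stats sum(session_duration) as total_seconds by user

The sort 0 puts each user's events in ascending time order so streamstats sees the previous event, and the where clause keeps only logouts immediately preceded by a login.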
You presumably have more detail in your events which tells you that the user wasn't logged on for all this time? Please share some anonymised representative events which would allow you to determine when the user logged on and when they logged off. Alternatively, do the events have some sort of session identifier which allows you to determine how long each session lasted?
We tried to implement this TA in our environment at both the indexer and HF level, and the data is still not parsed. We are sending data from TM to a syslog-ng server and reading it through a HF to send to the IDX. We tried using sourcetype=trendmicro and the data is still not parsed into the other sourcetypes. Kindly let me know what I am doing wrong here.
Dears,

I am trying to calculate the total duration each user spends connected through VPN - their total online time.

I am using the search below, but the issue is that, for example, in a 24-hour range, if the user logged in for only 10 minutes at 1 AM and then again for 1 hour at 11 AM, the duration output will cover the whole span, as it takes the very first event and the very last event. How can I calculate based only on time slots that actually have events?

index=pa src_zone="GP-VPN" src_user="*"
| stats earliest(_time) AS earliest latest(_time) AS latest BY src_user
| eval duration = tostring((latest-earliest)/60)

(Timeline screenshot omitted; the correct total should be ~14 hours.)

Search results (duration in minutes), showing 24 hours, which is not correct due to the gap time:

user    earliest          latest            duration
user1   1719144008.192    1719230507.192    1441.6500
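A sketch of one gap-based approach, assuming that more than 30 minutes (1800 s) of silence between two events ends a session - the threshold is a guess you would tune to your data:

index=pa src_zone="GP-VPN" src_user="*"
| sort 0 src_user _time
| streamstats current=f last(_time) as prev_time by src_user
| eval new_session=if(isnull(prev_time) OR _time - prev_time > 1800, 1, 0)
| streamstats sum(new_session) as session_id by src_user
| stats min(_time) as session_start max(_time) as session_end by src_user session_id
| eval session_minutes=(session_end - session_start)/60
| stats sum(session_minutes) as total_minutes by src_user

Each run of events with gaps under the threshold becomes one session; sessions are then summed per user, so idle gaps no longer inflate the total.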
Thank you so much! Join worked as expected!
What exactly did you change, and what were the expected results? The comments in transforms.conf and props.conf must not be uncommented, because they are not valid settings.
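For reference, a valid sourcetype-override pair generally looks like the following sketch; the stanza names, regex, and target sourcetype here are placeholders, not the TA's actual configuration, and they belong on the first heavy (parsing) instance in the path:

props.conf:

[trendmicro]
TRANSFORMS-set_sourcetype = example_set_trendmicro_sourcetype

transforms.conf:

[example_set_trendmicro_sourcetype]
REGEX = pattern_identifying_the_event_type
FORMAT = sourcetype::trendmicro:example
DEST_KEY = MetaData:Sourcetype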
Hi @ITWhisperer, the rex you provided is also not working as expected.
We are collecting data over a site-to-site VPN, so to manage it properly and to satisfy security policies, instead of allowing all IPs to communicate with the IDX, we only allowed the HF working as an IF to connect to the IDX, and all UFs are connected to the IF.

By the way, thanks for your response. Can you provide some documentation for this?
Hi @Nawab, compression must be applied on both connections: between the UFs and the IF, and between the IF and the IDXs. Only one question: why do you need an IF? Ciao. Giuseppe
Thank you for your support, I have updated the information; sorry, I did not capture it initially.
We have multiple forwarders sending data to an intermediate forwarder (IF), and that IF is sending the data on to the IDXs. The IF is not storing any data in this case.

If we enable compression on the IF, will it automatically apply to the data coming from the UFs, or should we do this configuration on all UFs as well?
Here are the contents of the local.meta file:

[app/install/install_source_checksum]
version = 9.2.1
modtime = 1719187793.873274000

[]
access = read : [ * ], write : [ * ]
export = system
version = 9.2.1
modtime = 1719188413.914413000
Hi @ITWhisperer, below is the error message I got:

The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings.