All Posts

Fiddling with the _raw event using INGEST_EVAL can be tricky. You can use the normal eval text functions, but I suppose you'd have to set $pd:_raw$ as the destination key (maybe plain _raw would work as well; I don't know, I've never tried it and have only used INGEST_EVAL to create new fields).
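For what it's worth, a minimal sketch of how this might look (the stanza and sourcetype names are made up, and I haven't verified that writing to _raw directly behaves identically across versions):

```
# props.conf
[my:sourcetype]
TRANSFORMS-mask = mask_raw

# transforms.conf
[mask_raw]
# Rewrite _raw at ingest time with a normal eval text function
INGEST_EVAL = _raw=replace(_raw, "password=\S+", "password=****")
```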
Hi @tv00638481, please check these resources:
https://lantern.splunk.com/Data_Descriptors/Salesforce#:~:text=Salesforce%20data%20can%20be%20used,and%20for%20data%20loss%20prevention.
This is from Splunk employee gschatz: for an example of an SBF use case, see how the Otto Group reduced system complexity with Splunk Business Flow.
https://www.splunk.com/en_us/customers/success-stories/sbf-otto-group.html
https://community.splunk.com/t5/All-Apps-and-Add-ons/Anyone-Using-Splunk-App-for-Salesforce-Use-Cases/m-p/476521
Splunk App for Salesforce (helpful for data onboarding and dashboards):
https://splunkbase.splunk.com/app/1931/
https://www.splunk.com/en_us/blog/partners/monitor-salesforce-s-real-time-events-with-splunk.html
https://lantern.splunk.com/Splunk_Platform/UCE/Security/Threat_Hunting/Protecting_a_Salesforce_cloud_deployment
To be honest, I'm not fully sure at which step of the pipeline (if any) those non-printable characters are escaped; I'll have to verify it. Still, it would be best if you could make the source generate logs without the formatting codes, since they don't belong there. They are presentation-layer artifacts, and those codes shouldn't be in the log entries.
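If the source really can't be fixed, one workaround sketch is stripping ANSI color codes at parse time with SEDCMD (sourcetype name assumed; untested against this exact data):

```
# props.conf on the first full Splunk instance the data reaches
[my:sourcetype]
# Remove ANSI escape sequences such as ESC[32m / ESC[0m
SEDCMD-strip_ansi = s/\x1b\[[0-9;]*m//g
```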
2) Actually you can get away with just one streamstats. Replace the other one with autoregress. (But yes, it will still give you two separate passes across your results)
1) If the rank is 1, it needs to remain 1; if there is a difference in values, the rank needs to remain the same (as it is already correct); otherwise, if there is no difference between the current and previous value, the rank should be the same as the previous rank. By setting it to null(), when the filldown happens, the rank is copied down to all positions with the same rank. 2) It is not possible to do this with just one streamstats, because the first streamstats has to operate over the whole pipeline, whereas the second has to operate with a rolling window of two events.
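The logic described above can be sketched in SPL roughly like this (the field name `value` is assumed; untested):

```
| sort 0 - value
| streamstats count AS Rank
| streamstats window=2 range(value) AS diff
| eval Rank=if(Rank=1 OR diff!=0, Rank, null())
| filldown Rank
```

The first streamstats numbers every row; the second compares each row with the previous one, and ties get their rank nulled and then filled down from the last distinct value.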
Which HEC endpoint are you using? What you can do with the event depends on that. Here is their instruction about it: https://www.aplura.com/assets/pdf/hec_pipelines.pdf r. Ismo
Hi, I would like to ask a question regarding lookup tables. I am managing login logs, and I want to be sure that a specific host can only be accessed from a specific IP address; otherwise an alert is triggered. So basically I have a lookup built like this:

IP      HOST
1.1.1.1 host1
2.2.2.2 host2
3.3.3.3 host3

My purpose is to build a search that finds whenever the IP-HOST association is not respected:
1.1.1.1 connects to host1 ---> OK
1.1.1.1 connects to host2 ---> BAD
2.2.2.2 connects to host1 ---> BAD
Connections to host1 should arrive only from 1.1.1.1, etc. How can I write this query? Thank you
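A sketch of one way to do this, assuming the events carry `src_ip` and `dest_host` fields and the lookup is saved as `host_ip.csv` (all names are guesses; adjust to your data):

```
index=your_index sourcetype=your_login_sourcetype
| lookup host_ip.csv HOST AS dest_host OUTPUT IP AS expected_ip
| where isnotnull(expected_ip) AND src_ip!=expected_ip
| table _time src_ip dest_host expected_ip
```

Rows surviving the `where` are connections to a known host from an IP other than the one the lookup allows.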
This is what it looks like straight from the log file:

2023-11-15 11:47:21,605 backend_2023.2.8: INFO  [-dispatcher-7] vip.service.northbound.MrpService.serverakkaAddress=akka://backend, akkaUid=2193530468036521242 Server is alive - num conns = 0

Of course it looks better from the terminal.
Unfortunately, this is not an option for Splunk Cloud
Thank you for your response! So in my scenario, will the below work?

Props:

[test:syslog]
SHOULD_LINEMERGE = false
EVENT_BREAKER_ENABLE = true
TRANSFORMS-test_source = nullFilter, test_source, test_format_source
REPORT-regex_field_extraction = test_regex_field_extraction, test_file_name_file_path
REPORT-dvc = test_dvc

Transforms:

[test_source]
REGEX = ProductName="([^"]+)"
DEST_KEY = MetaData:Source
FORMAT = source::$1

[test_format_source]
INGEST_EVAL = source=replace(lower(source), "\s", "_")

[test_dvc]
REGEX = ^<\d+>\d\s[^\s]+\s([^\s]+)
FORMAT = dvc::"$1"

[nullFilter]
REGEX = (?mi)XYZData\>(.*)?=\<*?\/XYZData\>
DEST_KEY = queue
FORMAT = nullQueue

[test_regex_field_extraction]
REGEX = <([\w-]+)>([^<]+?)<\/\1>
FORMAT = $1::$2
CLEAN_KEYS = false

[test_file_name_file_path]
REGEX = ^(.+)[\\/]([^\\/]+)$
FORMAT = source_process_name::$2 source_process_path::$1
SOURCE_KEY = SourceProcessName

[test_severity_lookup]
filename = test_severity.csv

[test_action_lookup]
filename = test_action_v110.csv
case_sensitive_match = false
Hi there, thank you for your response! Can you help by sharing the configuration (using INGEST_EVAL) to trim out this specific part of the event?
Hi all, I'm trying to configure an SSL certificate for management port 8089 on the Manager Node and Indexers, in $SPLUNK_HOME/etc/system/local/server.conf on the Manager Node and Indexers:

[sslConfig]
sslRootCAPath = <path_to_rootCA>
sslPassword = mycertpass
enableSplunkdSSL = true
serverCert = <path_to_manager_or_indexer_cert>
requireClientCert = true
sslAltNameToCheck = manage-node.example.com

I checked the rootCA and my server certificate on the Manager Node and Indexers with `openssl verify`, and it returns OK. I use the same certificate for all Indexers and one for the Manager Node. All my certificates have both the SSL server and SSL client purposes:

X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication

But when I set `requireClientCert = true`, it returns an "unsupported certificate" error and I can't access Splunk Web on the Manager Node. Please help me fix this!
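For anyone checking the same thing, here is a sketch of inspecting a certificate's extended key usage with OpenSSL (the throwaway self-signed cert and filenames are purely for illustration; note that with requireClientCert=true every client connecting to 8089, including Splunk Web's own connection to splunkd, must present a certificate signed by the trusted CA):

```shell
# Create a throwaway self-signed cert carrying both EKUs (OpenSSL 1.1.1+)
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=manage-node.example.com" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"

# Inspect the EKU extension; your real Splunk certs should show both purposes
openssl x509 -in cert.pem -noout -ext extendedKeyUsage
```

If either "TLS Web Server Authentication" or "TLS Web Client Authentication" is missing from your real certificates, mutual TLS can fail with exactly this kind of error.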
Thank you for your response, I tried SEDCMD as you suggested in our test environment, but with g at the end (SEDCMD-rm_XYZData = s/XYZData\>.*\<\/XYZData\>//g) it only works if I don't use the current Add-on. Is there anything I'm missing?
Hi @AL3Z, in this case you cannot use tstats but the normal search; anyway, the logic is the same:

index=your_index ParentProcessName="C:\Windows\System32\cmd.exe"
| stats count BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe
Hi all, new failed test: I also tried to use transforms.conf instead of SEDCMD in props.conf (as I usually do), with no luck:

[set_sourcetype_linux_audit_remove]
REGEX = (?ms).*\"message\":\"([^\"]+).*
FORMAT = $2
DEST_KEY = _raw

and I tried

[set_sourcetype_linux_audit_remove]
REGEX = .*\"message\":\"([^\"]+).*
FORMAT = $1
DEST_KEY = _raw

with the same result. Ciao. Giuseppe
@gcusello Hi, I'd like to investigate which hosts aren't forwarding the specific events with ParentProcessName="C:\Windows\System32\cmd.exe" to Splunk. How can we troubleshoot when a host isn't sending its logs to Splunk? Thanks
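One starting sketch for spotting hosts that have gone quiet is the metadata command (the index name and the 24-hour threshold are assumptions; tune both to your environment):

```
| metadata type=hosts index=your_index
| eval hours_since_last=round((now()-recentTime)/3600,1)
| where hours_since_last>24
| table host recentTime hours_since_last
```

This lists hosts that have indexed something at some point but haven't been seen recently; combining it with a lookup of expected hosts (like the perimeter.csv mentioned earlier in this thread) also catches hosts that never reported at all.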
Hi, I'm looking for security use cases for the Salesforce application. Please suggest some if you have any. Regards, BT
Hi @dhana22, in a multisite Indexer Cluster architecture there is only one Cluster Manager, not two; if you have two Cluster Managers, you have two clusters. You can optionally keep a turned-off copy of the Cluster Manager in the secondary site, but there is only ever one active CM. For more information see https://docs.splunk.com/Documentation/Splunk/9.1.1/Indexer/Basicclusterarchitecture Ciao. Giuseppe
Hi @MM0071, let me understand: you want to filter the results of the main search using the first lookup, itself already filtered using the second one, is that correct? If this is your requirement, my first hint is to run a search that filters the rows of the first lookup using the second one, so that you only have to use one lookup. Anyway, if you want to use both lookups in the same search, you can use your search and it should work fine, or use the second lookup in the first lookup's subsearch:

index=netlogs [ | inputlookup baddomains.csv | search NOT [ | inputlookup good_domains.csv | fields domain ] | eval url="*.".domain."*" | fields url ]

or something similar. Ciao. Giuseppe
I need a Python file/function to be triggered when an input/configuration is deleted.