Which HEC endpoint are you using? What you can do with the event depends on that. Here are their instructions about it: https://www.aplura.com/assets/pdf/hec_pipelines.pdf r. Ismo
Hi, I would like to ask a question regarding a lookup table. I am managing login logs and I want to be sure that a specific host can only be accessed from a specific IP address, otherwise an alert is triggered. So basically I have a lookup built like this:

IP       HOST
1.1.1.1  host1
2.2.2.2  host2
3.3.3.3  host3

My purpose is to build a search query that finds whenever the IP-HOST association is not respected:

1.1.1.1 connects to host1 ---> OK
1.1.1.1 connects to host2 ---> BAD
2.2.2.2 connects to host1 ---> BAD

Connections to host1 should arrive only from 1.1.1.1, etc. How can I write this query? Thank you
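A sketch of one way to do this in SPL, assuming the login events carry the connecting address in a field called src_ip and the target in host (both names are placeholders for whatever your data actually uses), and the lookup is saved as ip_host.csv with columns IP and HOST:

```
index=your_index sourcetype=your_login_sourcetype
| lookup ip_host.csv HOST AS host OUTPUT IP AS expected_ip
| where isnotnull(expected_ip) AND src_ip!=expected_ip
```

The isnotnull() guard keeps hosts that aren't in the lookup from firing the alert; drop it if a connection to an unknown host should also count as BAD.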
This is what it looks like straight from the log file:

2023-11-15 11:47:21,605 backend_2023.2.8: INFO  [-dispatcher-7] vip.service.northbound.MrpService.serverakkaAddress=akka://backend, akkaUid=2193530468036521242 Server is alive - num conns = 0

Of course it looks better from the terminal.
Unfortunately, this is not an option for Splunk Cloud.
Thank you for your response! So in my scenario, will the below work?

Props:

[test:syslog]
SHOULD_LINEMERGE = false
EVENT_BREAKER_ENABLE = true
TRANSFORMS-test_source = nullFilter, test_source, test_format_source
REPORT-regex_field_extraction = test_regex_field_extraction, test_file_name_file_path
REPORT-dvc = test_dvc

Transforms:

[test_source]
REGEX = ProductName="([^"]+)"
DEST_KEY = MetaData:Source
FORMAT = source::$1

[test_format_source]
INGEST_EVAL = source=replace(lower(source), "\s", "_")

[test_dvc]
REGEX = ^<\d+>\d\s[^\s]+\s([^\s]+)
FORMAT = dvc::"$1"

[nullFilter]
REGEX = (?mi)XYZData\>(.*)?=\<*?\/XYZData\>
DEST_KEY = queue
FORMAT = nullQueue

[test_regex_field_extraction]
REGEX = <([\w-]+)>([^<]+?)<\/\1>
FORMAT = $1::$2
CLEAN_KEYS = false

[test_file_name_file_path]
REGEX = ^(.+)[\\/]([^\\/]+)$
FORMAT = source_process_name::$2 source_process_path::$1
SOURCE_KEY = SourceProcessName

[test_severity_lookup]
filename = test_severity.csv

[test_action_lookup]
filename = test_action_v110.csv
case_sensitive_match = false
Hi there, thank you for your response! Can you help by sharing a configuration (using INGEST_EVAL) to trim out this specific part of the event?
Hi all, I'm trying to configure an SSL certificate for management port 8089 on the Manager Node and Indexers, in the file $SPLUNK_HOME/etc/system/local/server.conf on both:

[sslConfig]
sslRootCAPath = <path_to_rootCA>
sslPassword = mycertpass
enableSplunkdSSL = true
serverCert = <path_to_manager_or_indexer_cert>
requireClientCert = true
sslAltNameToCheck = manage-node.example.com

I checked the rootCA and my server certificates on the Manager Node and Indexers with `openssl verify` and it returns OK. I use one certificate for the Indexers and one for the Manager Node. All my certificates have both the SSL server and SSL client purposes:

X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication

But when I set `requireClientCert = true`, I get an "unsupported certificate" error and I can't access the Splunk Web of the Manager Node. Please help me fix this!
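When splunkd rejects a peer with "unsupported certificate", it's worth double-checking how OpenSSL itself classifies the certificate's purposes, not just the EKU text. Here's a self-contained demo (the filenames and CN are throwaway values, not your real cert): it creates a short-lived self-signed certificate with both server and client EKU, then prints the purposes OpenSSL derives from it. Run the same `-purpose` check against your actual Manager Node and Indexer certs and confirm both "SSL client : Yes" and "SSL server : Yes" appear.

```shell
# Create a throwaway self-signed cert with both server and client EKU (demo only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo_key.pem -out demo_cert.pem \
  -days 1 -subj "/CN=demo.example.com" \
  -addext "extendedKeyUsage=serverAuth,clientAuth" 2>/dev/null

# List the purposes OpenSSL thinks this cert can serve
openssl x509 -in demo_cert.pem -noout -purpose
```

If your real certificate shows "SSL client : No", the CA profile it was issued under is restricting it, which would explain the failure once requireClientCert forces mutual TLS.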
Thank you for your response. I tried SEDCMD as you suggested in our test environment, but with g at the end (SEDCMD-rm_XYZData = s/XYZData\>.*\<\/XYZData\>//g) it only works if I don't use the current Add-on. Is there anything I'm missing?
Hi @AL3Z, in this case you cannot use tstats but the normal search; anyway, the logic is the same:

index=your_index ParentProcessName="C:\Windows\System32\cmd.exe"
| stats count BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe
Hi all, new failed test: I also tried to use transforms.conf instead of SEDCMD in props.conf (as I usually do) with no luck:

[set_sourcetype_linux_audit_remove]
REGEX = (?ms).*\"message\":\"([^\"]+).*
FORMAT = $2
DEST_KEY = _raw

and I tried

[set_sourcetype_linux_audit_remove]
REGEX = .*\"message\":\"([^\"]+).*
FORMAT = $1
DEST_KEY = _raw

with the same result. Ciao. Giuseppe
@gcusello  Hi, I'd like to investigate which hosts aren't forwarding the specific events with the ParentProcessName="C:\Windows\System32\cmd.exe" to Splunk. How can we troubleshoot if a host isn't sending its logs to Splunk? Thanks
Hi, I'm looking for security use cases for the Salesforce application. Please suggest some if you have any. Regards BT
Hi @dhana22, in the multisite Indexer Cluster architecture there's only one Cluster Manager, not two; if you have two Cluster Managers, you have two clusters. You can optionally keep, in the secondary site, a powered-off copy of the Cluster Manager, but in any case there is only one active CM. For more info see https://docs.splunk.com/Documentation/Splunk/9.1.1/Indexer/Basicclusterarchitecture  Ciao. Giuseppe
Hi @MM0071, let me understand: you want to filter the results of the main search using the first lookup, already filtered using the second one, is that correct? If so, my first hint is to run a search that filters the rows of the first lookup using the second one, so you only have to use one lookup. Anyway, if you want to use both lookups in the same search, your search should work fine, or you can use the second lookup inside the first lookup's subsearch:

index=netlogs [ | inputlookup baddomains.csv | search NOT [ | inputlookup good_domains.csv | fields domain ] | eval url="*".domain."*" | fields url ]

or something similar. Ciao. Giuseppe
I need a Python file/function to be triggered when deleting an input/configuration.
Hi all, I have a data flow in JSON format from one host that I ingest with HEC, so I have one host, one source and one sourcetype for all events. I want to override the host, source and sourcetype values based on regexes, and I'm able to do this. The issue is that the data flow is an elaboration by an external system (Logstash) that takes raw logs (e.g. from Linux systems) and saves them in a field of the JSON payload ("message"), adding many other fields. So, after the host, source and sourcetype overriding (which works fine), I want to remove all the extra content in the events and keep only the content of the message field (the raw logs). I'm able to do this too, but the issue is that I'm not able to do both transformations: in other words, I can override the values but removing the extra content doesn't work, or I can remove the extra content but the overriding doesn't work. I have the following configurations in my props.conf:

[logstash]
# set host
TRANSFORMS-sethost = set_hostname_logstash
# set sourcetype Linux
TRANSFORMS-setsourcetype_linux_audit = set_sourcetype_logstash_linux_audit
# set source
TRANSFORMS-setsource = set_source_logstash_linux

# restoring original raw log
[linux_audit]
SEDCMD-raw_data_linux_audit = s/.*\"message\":\"([^\"]+).*/\1/g

As you can see, in the first stanza I override the sourcetype from logstash to linux_audit, and in the second I try to remove the extra content using the linux_audit sourcetype. If I use the logstash sourcetype also in the second stanza, the extra content is removed, but the field overriding (which runs using the extra content) doesn't work. I also tried to set a priority using the props.conf "priority" option, with no luck. I also tried to use source for the first stanza, because source usually has a higher priority than sourcetype, but with the same result. Can anyone give me a hint on how to solve this issue? Thank you in advance. Ciao. Giuseppe
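One approach worth trying (a sketch only; the class name, transform name, and INGEST_EVAL expression below are my assumptions, not a tested config) is to keep everything under the [logstash] stanza and do the raw trimming with an INGEST_EVAL transform whose class name sorts last, since, as I understand the props.conf spec, multiple TRANSFORMS-<class> entries in one stanza are applied in ASCII order of the class names. That way the overrides still see the full JSON, and the trimming runs afterwards in the same pipeline pass.

props.conf:

```
[logstash]
TRANSFORMS-sethost = set_hostname_logstash
TRANSFORMS-setsourcetype_linux_audit = set_sourcetype_logstash_linux_audit
TRANSFORMS-setsource = set_source_logstash_linux
# "zz" prefix so this class sorts after the override classes above
TRANSFORMS-zz_restore_raw = restore_message_raw
```

transforms.conf:

```
[restore_message_raw]
# keep only the content of the "message" field; coalesce() leaves _raw
# untouched if the replace yields nothing
INGEST_EVAL = _raw=coalesce(replace(_raw, ".*\"message\":\"([^\"]+).*", "\\1"), _raw)
```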
Hi @gayathrc, good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @AL3Z, as @bowesmana said, this is a very frequent question in this Community and you'll find many helpful answers to it (also from me and him!) that analyzed many different situations and use cases. Anyway, in a few words: you have to create a lookup (called e.g. perimeter.csv) with at least one column (host), containing the list of hosts to monitor, and then run a search like the following:

| tstats count WHERE index=your_index BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe
@richgalloway I have added version 1.1 to all the XML dashboards and unrestricted all the older versions of jQuery. But my question is: is there any impact if we do not fix all the suggested issues for Python scripts? Is it mandatory to fix them? Does it impact Splunk?
Use the --data-urlencode option instead of -d (--data):

curl -H "Authorization: Bearer <token ID here>" -k https://host.domain.com:8089/services/search/jobs --data-urlencode search='<your search term>'

One more thing: SPL uses lots of double quotes. Quoting your search with single quotes saves you lots of escapes.
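To see the quoting point concretely (no Splunk server needed, just the shell; the SPL fragment is an invented example): the same search string printed two ways produces identical output, but the double-quoted form needs a backslash for every inner quote.

```shell
# Single quotes: the inner double quotes pass through untouched
echo 'search index=main sourcetype="access_combined" status=404'

# Double quotes: every inner double quote needs a backslash
echo "search index=main sourcetype=\"access_combined\" status=404"
```

Both lines print the identical string, which is why wrapping the whole search in single quotes is the less error-prone habit.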