All Posts


This question is nearly the same as one you asked at https://community.splunk.com/t5/Alerting/And-condition/m-p/670558#M15557 .  Please show how you used the answers there to craft a query that solves this problem and how that query fails to meet expectations. As stated in the referenced answer, the AND operator works only on a single event and cannot compare values among several events.
I have the below requirement.
Log info: 09:00 PM Xyz event received for customernumber:1234
Log info: 09:05 PM abc event received for customernumber:1234
Log info: 09:10 PM pqr event received for customernumber:1234
There are n customer numbers like that in the Splunk log. I want to check whether all 3 events have been received for each customernumber or not. For that I tried using an AND operation, like "xyz received for customernumber" AND "abc event received for customernumber" AND "PQR event received for customernumber" | rex (I tried to retrieve customernumber), which is not working. Please help with the exact query.
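A minimal sketch of the stats-based approach the linked answer points to, assuming the raw events literally contain text such as "Xyz event received for customernumber:1234"; the index name your_index and the extracted field names event_name and customer_num are placeholders, not anything confirmed by the thread:

index=your_index "event received for customernumber"
| rex "(?<event_name>\w+) event received for customernumber:(?<customer_num>\d+)"
| stats dc(event_name) as distinct_events values(event_name) as events by customer_num
| eval all_three_received=if(distinct_events>=3, "yes", "no")

Because stats groups all events that share a customer number, the comparison happens across events rather than within one, which is why this pattern works where AND alone does not.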
Hi @jhooper33,

Internally, your regular expression compiles to a length that exceeds the offset limits set in PCRE2 at build time. For example, the regular expression (?<bar>.){3119} will compile:

| makeresults | eval foo="anything" | rex field=foo "(?<bar>.){3119}"

but this regular expression (?<bar>.){3120} will not:

| rex field=foo "(?<bar>.){3120}"
Error in 'rex' command: Encountered the following error while compiling the regex '(?<bar>.){3120}': Regex: regular expression is too large.

Repeating the match 3,120 times exceeds PCRE2's compile-time limit. If we add a second character to the pattern, we'll exceed the limit in fewer repetitions:

Good:
| rex field=foo "(?<bar>..){2339}"

Bad:
| rex field=foo "(?<bar>..){2340}"
Error in 'rex' command: Encountered the following error while compiling the regex '(?<bar>..){2340}': Regex: regular expression is too large.

The error message should contain a pattern offset to help with identification of the error; however, Splunk does not expose that, and enabling DEBUG output on SearchOperator:rex adds no other information. In short, the code generated by the regular expression compiler is too long, and you'll need to modify your regular expression.

With respect to CSV lookups versus KV store lookups: test, test, and test again. A CSV file deserialized to an optimized data structure in memory should have a search complexity similar to a MongoDB data store, but because the entire structure is in memory, the CSV file may outperform a similarly configured KV store lookup. If you want to replicate your lookup to your indexing tier as part of your search bundle, you also need to consider that a KV store lookup will be serialized to a CSV file and used as a CSV file on the indexer. Finally, if you're using Splunk Enterprise Security, consider integrating your threat intelligence with the Splunk Enterprise Security threat intelligence framework. The latter may not meet 100% of your requirements, so as before, test, test, and test again.
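As one way to "test, test, and test again", a rough comparison sketch, assuming a CSV lookup file named threat_intel.csv and a KV-store-backed lookup definition named threat_intel_kv, both keyed on an ip column, matched against a hypothetical src_ip field in a proxy index (all of these names are assumptions, not part of the original post):

index=proxy earliest=-60m
| lookup threat_intel.csv ip AS src_ip OUTPUT threat_category AS category_csv

index=proxy earliest=-60m
| lookup threat_intel_kv ip AS src_ip OUTPUT threat_category AS category_kv

Run each variant separately over the same time range and compare the lookup command's duration in the Job Inspector; repeating the test at different lookup sizes gives a feel for where one approach overtakes the other.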
Hi @jhooper33, let us know if we can help you more, or please accept one answer for the benefit of other Community members. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
@PickleRick!!! Thank you for the suggestion. I haven't seen anyone use a KV store lookup in any of our searches yet, so I will definitely look into this.
Hey @gcusello, I will have to verify first whether I'm allowed to share the search; it may fall under IP. But I am going to look at a summary index, and I'm also considering rewriting this whole file.
@robnewman666  A new TA "Add-on for SharePoint API with AWS Integration" has been released @ Add-on for SharePoint API with AWS Integration | Splunkbase  
Have you thought about using the container API? phantom.add_artifact(container=None, raw_data=None, cef_data=None, label=None, name=None, severity=None, identifier=None, artifact_type=None, field_mapping=None, trace=False, run_automation=False)  
Hey, consider a scenario where you want to create a reusable input playbook that takes advantage of condition blocks such as Filter and Decision. For example, an input playbook that receives an ip_hostname, then queries AD over LDAP to check whether the ip_hostname is in a specific OU. That would be easily achievable using Filter/Decision normally, but since it's in an input playbook, I haven't seen any output parameters that you can then use in a main playbook to find out whether the condition was true or false. Thanks in advance.
OK. So I assume your logs come in on port 5514 with the syslog sourcetype, right? So if you cast them to another sourcetype using the CLONE_SOURCETYPE mechanism and then drop the original instance (the syslog one) from the processing queue, you can't use props for the syslog sourcetype. To be fully honest, I showed you how the CLONE_SOURCETYPE thingy works and told you how it can be put to use in a relatively simple use case, but in my opinion it's a non-maintainable and completely unscalable solution, so I'd probably not use it in production. Also remember that CLONE_SOURCETYPE applies to all events matching by sourcetype, source or host, so you can't limit it by regex. So if you clone it four times, you'll have to do much uglier filtering in the "destination" sourcetypes. Which, again, makes it not scalable. BTW, if you want rsyslog instead of the usual SC4S, I can help.
I think you'll find you'll get more help from these forums if you show you've made some effort to solve the problem before posting. Tell us what query you've tried and how it didn't meet expectations. SQL users new to SPL should see Splunk's SPL for SQL Users manual at https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SQLtoSplunk

IMO, any requirement that calls for "all available log sources" hasn't been thought through enough. It's a rare use case that needs every source; most need specific sources.

We need to know more about the "report that shows Changes to System Sec Config events". Splunk tracks its own config changes in the _configtracker index, but that information does not include user info. If you are referring to configs for other systems, then please identify those systems. Of course, those systems must be reporting info to Splunk.

The first step in creating a report is to create a search for the data that goes into that report. Once that is producing the desired results, click on the "Save as" menu and select Report. That is where you select the report format and schedule it.

The expiration time of a search/report must be set manually from the Jobs dashboard (Activity->Jobs). IIRC, the maximum TTL is seven days. The 30-day expiration date is too long and may use up the users' disk quota.

You can use the IN operator in the search command to filter message IDs and users. SQL-style patterns are available using the like function within the where and eval commands. Use the table command to select fields for display and the stats command to group results.

index=foo VendorMsgID IN (4727, 4728, 4729, 4730, 4731, 4732, 4733, 4734, 4735, 4736, 4737, 4740, 4754, 4755, 4756, 4757, 4758, 4759, 4783, 4784, 4785, 4786, 4787, 4788, 4789, 4791, 631) AND NOT User IN (xxx, system, xxx, xxxx, xxxxx) AND User_Impacted != (res group name)
| where NOT (like(Host_Impacted, "%sc%") OR like(Host_Impacted, "%sd%") OR match(Host_Impacted, "^sc.+") OR match(Host_Impacted, "^sd.+"))
| table User, _time, Event, Group, oHost, Host_Impacted, oLogin, VendorMsgID, Domain_Impacted
| stats values(*) as * by User
Hi @PickleRick, thank you for your support. I tried three different configurations.

First, in a dedicated app:

#props.conf
[source::tcp:5514]
TRANSFORMS-00_hostname = set_hostname_fortinet
TRANSFORMS-10_sourcetype = set_sourcetype_infoblox
TRANSFORMS-11_sourcetype = set_sourcetype_juniper
TRANSFORMS-12_sourcetype = set_sourcetype_fortinet

#transforms.conf
######################### Sourcetype #######################
# infoblox:port
[set_sourcetype_infoblox]
REGEX = \<\d+\>\w+\s+\d+\s+\d+:\d+\d+:\d+\s+\w+-dns-\w+
FORMAT = sourcetype::infoblox:port
DEST_KEY = MetaData:Sourcetype

# Juniper
[set_sourcetype_juniper]
REGEX = ^\<\d+\>\d+\s+\d+-\d+-\d+\w+:\d+:\d+\.\d+\w(\+|-)\d+:\d+\s\w+-edget-fw
FORMAT = sourcetype::juniper
DEST_KEY = MetaData:Sourcetype

# Fortinet
[set_sourcetype_fortinet]
REGEX = ^\<\d+\>date\=\d+-\d+-\d+\s+time\=\d+:\d+:\d+\s+devname\=\"[^\"]+\"\s+devid
FORMAT = sourcetype::fgt_log
DEST_KEY = MetaData:Sourcetype

############################## hostname ############################
# Fortinet
[set_hostname_fortinet]
REGEX = devname\=\"([^\"]+)\"
FORMAT = host::$1

Then I tried (following your previous hint), in a dedicated app:

#props.conf
[source::tcp:5514]
TRANSFORMS-00_hostname = set_hostname_fortinet
TRANSFORMS-10_sourcetype = set_sourcetype_infoblox
TRANSFORMS-11_sourcetype = set_sourcetype_juniper
TRANSFORMS-12_sourcetype = set_sourcetype_fortinet
TRANSFORMS-50_drop_dead = drop_dead_infoblox
TRANSFORMS-51_drop_dead = drop_dead_juniper
TRANSFORMS-52_drop_dead = drop_dead_fortinet

#transforms.conf
######################### Sourcetype #######################
# infoblox:port
[set_sourcetype_infoblox]
REGEX = \<\d+\>\w+\s+\d+\s+\d+:\d+\d+:\d+\s+\w+-dns-\w+
CLONE_SOURCETYPE = infoblox:port

# Juniper
[set_sourcetype_juniper]
REGEX = ^\<\d+\>\d+\s+\d+-\d+-\d+\w+:\d+:\d+\.\d+\w(\+|-)\d+:\d+\s\w+-edget-fw
CLONE_SOURCETYPE = juniper

# Fortinet
[set_sourcetype_fortinet]
REGEX = ^\<\d+\>date\=\d+-\d+-\d+\s+time\=\d+:\d+:\d+\s+devname\=\"[^\"]+\"\s+devid
CLONE_SOURCETYPE = fgt_log

############################## hostname ############################
# Fortinet
[set_hostname_fortinet]
REGEX = devname\=\"([^\"]+)\"
FORMAT = host::$1

############################# original log removing ################
# infoblox:port
[drop_dead_infoblox]
REGEX = \<\d+\>\w+\s+\d+\s+\d+:\d+\d+:\d+\s+\w+-dns-\w+
FORMAT = nullQueue
DEST_KEY = queue

# Juniper
[drop_dead_juniper]
REGEX = ^\<\d+\>\d+\s+\d+-\d+-\d+\w+:\d+:\d+\.\d+\w(\+|-)\d+:\d+\s\w+-edget-fw
FORMAT = nullQueue
DEST_KEY = queue

# Fortinet
[drop_dead_fortinet]
REGEX = ^\<\d+\>date\=\d+-\d+-\d+\s+time\=\d+:\d+:\d+\s+devname\=\"[^\"]+\"\s+devid
FORMAT = nullQueue
DEST_KEY = queue

For the third try, in the app Splunk_TA_fortinet_fortigate I added the same transformation present in the Add-on, starting from the syslog sourcetype:

#props.conf
[syslog]
TRANSFORMS-force_sourcetype_fgt = force_sourcetype_fortigate
SHOULD_LINEMERGE = false
EVENT_BREAKER_ENABLE = true

and in the app Splunk_TA_juniper:

#props.conf
[syslog]
SHOULD_LINEMERGE = false
EVENT_BREAKER_ENABLE = true
TRANSFORMS-force_info_for_juniper = force_host_for_netscreen_firewall,force_sourcetype_for_netscreen_firewall,force_sourcetype_for_junos_idp_structured,force_sourcetype_for_junos_idp,force_sourcetype_for_junos_aamw,force_sourcetype_for_junos_secintel,force_sourcetype_for_junos_firewall_structured,force_sourcetype_for_junos_firewall,force_sourcetype_for_junos_snmp,force_sourcetype_for_junos_firewall_rpd

But in every case I still end up with only the first three overridden sourcetypes plus the original syslog sourcetype (syslog, infoblox:port, fgt_log, juniper), instead of all the other sourcetypes the Add-ons should assign.

About your question: for the moment I am still using Splunk as the syslog receiver, but I have in mind to move to rsyslog. Ciao. Giuseppe
Hi @yuanliu, this is truly excellent work. Thank you so much for your time and determination in finding the root cause of this behaviour.
We have already integrated Linux, Palo Alto, and SAP log sources. We're just looking to create Linux, Palo Alto, and SAP use cases based on the MITRE framework or any attack-pattern use cases, as we don't have that much knowledge to create SPL use cases.
Yes, it's possible, but don't do that.  It tells Splunk to open every bucket with data from the last seven days and read all of the events from that time.  It will be slow and wasteful of resources.  Use tstats, instead.
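A minimal tstats sketch of the same per-day count, keeping the wildcard index only to mirror the question (in practice, name the specific indexes you need):

| tstats count where index=* earliest=-7d@d latest=now by _time span=1d

tstats reads only the indexed metadata in the tsidx files, which is why it avoids opening every event in every bucket. If the timechart output shape is needed, the prestats form (| tstats prestats=t count where index=* by _time span=1d | timechart span=1d count) should give the same result.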
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Hello, is it possible to execute the following command to calculate events per day over one week?
index=* earliest=-7d | timechart span=1d count
I can take a look some other time. (It's very late here.) In the meantime, you can see my confirmation of this problem and how to work around it in general in https://community.splunk.com/t5/Splunk-Search/Do-you-lose-any-information-between-Chain-Searches-in-Dashboards/m-p/671359/highlight/true#M230087. (I also included a specific recommendation about coding. Additionally, I recommend that you report this as a bug and/or get support involved, even though I don't expect it to be fixed soon.)
This is an amazing find! My tests show that indeed, when a chain search needs a field that the base search does not pass, it will fail in mysterious ways. In most applications, the base search is not as vanilla as index=abcd, so this behavior would not be revealed. But I consider this a bug because even if the requirement that the base search must contain provisions to pass fields used by all chain searches is carefully documented, it is really counterintuitive for users, and a slip can affect results in subtle ways such that users may end up trusting bad data. The good news is that the DS team is aggressively trying to relieve user friction. The bad news is that this is a rather tricky one, so I don't expect a speedy fix even if they accept it as a bug.

Here is the gist of the problem/behavior: to improve performance, the SPL compiler decides which field(s) to pass through a pipe by inspecting downstream searches. Because the base search and the chain search are completely separate as far as the compiler is concerned, only indexed fields and fields explicitly invoked in the base search are passed to chain searches. In your example, the base search index=_internal will only pass _time, sourcetype, source, host, etc. All search-time fields are omitted. When you change the base search to index=_internal useTypeahead=true, the compiler sees that useTypeahead is referenced, therefore it passes this field to the result cache.

Here is a simpler test dashboard to demonstrate (I use date_hour because it is 100% populated):

{
"visualizations": { "viz_AD6BWNHC": { "type": "splunk.events", "dataSources": { "primary": "ds_4EfZYMc8" }, "title": "base1", "description": "index=_internal", "showProgressBar": false, "showLastUpdated": false }, "viz_TrPHlPsH": { "type": "splunk.events", "dataSources": { "primary": "ds_561TjAWf" }, "showProgressBar": false, "showLastUpdated": false, "title": "base1 | table date_hour sourcetype _time", "description": "(bad)" }, "viz_SiLJUCQc": { "type": "splunk.events", "dataSources": { "primary": "ds_FmGTHy8w" }, "title": "base2", "description": "index=_internal date_hour=*" }, "viz_A0PjYfHd": { "type": "splunk.events", "dataSources": { "primary": "ds_feUCBRcX" }, "title": "base2 | table date_second sourcetype _time", "description": "(good)" } },
"dataSources": { "ds_4EfZYMc8": { "type": "ds.search", "options": { "query": "index=_internal", "queryParameters": { "earliest": "-4h@m", "latest": "now" } }, "name": "base1" }, "ds_561TjAWf": { "type": "ds.chain", "options": { "extend": "ds_4EfZYMc8", "query": "| table date_hour sourcetype _time" }, "name": "chain" }, "ds_FmGTHy8w": { "type": "ds.search", "options": { "query": "index=_internal date_hour=*", "queryParameters": { "earliest": "-4h@m", "latest": "now" } }, "name": "base2" }, "ds_feUCBRcX": { "type": "ds.chain", "options": { "extend": "ds_FmGTHy8w", "query": "| table date_hour sourcetype _time" }, "name": "chain1a" } },
"defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } },
"inputs": {},
"layout": { "type": "grid", "options": { "width": 1440, "height": 960 }, "structure": [ { "item": "viz_AD6BWNHC", "type": "block", "position": { "x": 0, "y": 0, "w": 720, "h": 307 } }, { "item": "viz_TrPHlPsH", "type": "block", "position": { "x": 0, "y": 307, "w": 720, "h": 266 } }, { "item": "viz_SiLJUCQc", "type": "block", "position": { "x": 720, "y": 0, "w": 720, "h": 307 } }, { "item": "viz_A0PjYfHd", "type": "block", "position": { "x": 720, "y": 307, "w": 720, "h": 266 } } ], "globalInputs": [] },
"description": "https://community.splunk.com/t5/Splunk-Search/Do-you-lose-any-information-between-Chain-Searches-in-Dashboards/m-p/671245#M230046",
"title": "Chain search lose info test fresh"
}

The result: date_hour is null in the chain search that uses index=_internal as the base search.

One recommendation about your workaround: if your base search uses index=_internal useTypeahead=true instead of index=_internal | useTypeahead=true, the indexer will return a lot fewer events, and the search will be much more efficient.

As to the bug/behavior, because the cause is inherent to the compiler, I imagine it to be really difficult for a high-level application like a dashboard engine to influence. Nevertheless, I trust that the DS team will be grateful that you discovered this problem.
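For illustration, a minimal base-search sketch of the behavior described above, under the assumption that explicitly referencing a field in the base search (either as a filter or via a fields command) is enough for the compiler to pass it through to chain searches; date_hour is just an example field:

index=_internal date_hour=*

or, when filtering on the field is not desirable:

index=_internal | fields _time, sourcetype, date_hour

Either variant should let a chain search such as | table date_hour sourcetype _time see a populated date_hour, per the explanation above; treat this as a sketch to verify in your own environment rather than documented behavior.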
Hi @yuanliu, I think I have the solution; I have pasted my guesswork as a response to the original post. I have uploaded a dashboard with a working method, albeit perhaps not an optimal one: https://gist.github.com/niksheridan/d8377778e4c5f1ff3e2e49b0b9899185 I will try "collapsing" the first and second steps of the chain in order to actually get a pipeline working that can be further extended. I'd be very grateful for your feedback, if you could be so kind as to review this when you have the time. Thank you again for your help. Thanks, nik