All Posts

Hey @gcusello, I will first have to verify whether I'm allowed to share the search; it may fall under IP. But I am going to look at the summary index, and I'm also considering rewriting this whole file.
@robnewman666 A new TA, "Add-on for SharePoint API with AWS Integration", has been released on Splunkbase: Add-on for SharePoint API with AWS Integration | Splunkbase
Have you thought about using the container API? phantom.add_artifact(container=None, raw_data=None, cef_data=None, label=None, name=None, severity=None, identifier=None, artifact_type=None, field_mapping=None, trace=False, run_automation=False)  
Hey, consider a scenario where you want to create a reusable input playbook that takes advantage of condition blocks such as Filter and Decision. For example, an input playbook that receives an ip_hostname, then queries AD over LDAP to check whether the ip_hostname is in a specific OU. That would be easily achievable using Filter/Decision normally, but since it's in an input playbook, I haven't seen any output parameters that you can then use in a main playbook to find out whether the condition was true or false. Thanks in advance
OK. So I assume your logs come in on port 5514 with sourcetype syslog, right? So if you cast them to another sourcetype using the CLONE_SOURCETYPE mechanism and then drop the original instance (the syslog one) from the processing queue, you can't use props for the syslog sourcetype. To be fully honest, I showed you how the CLONE_SOURCETYPE thingy works and told you how it can be put to use in a relatively simple use case, but in my opinion it's an unmaintainable and completely unscalable solution, so I'd probably not use it in production. Also remember that CLONE_SOURCETYPE applies to all events matched by sourcetype, source or host, so you can't limit it by regex. So if you clone it four times, you'll have to do much uglier filtering in the "destination" sourcetypes, which again makes it not scalable. BTW, if you want to use rsyslog instead of the usual SC4S, I can help.
I think you'll find you'll get more help from these forums if you show you've made some effort to solve the problem before posting. Tell us what query you've tried and how it didn't meet expectations. SQL users new to SPL should see Splunk's SPL for SQL Users manual at https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SQLtoSplunk

IMO, any requirement that calls for "all available log sources" hasn't been thought through enough. It's a rare use case that needs every source; most need only specific sources.

We need to know more about the "report that shows Changes to System Sec Config events". Splunk tracks its own config changes in the _configtracker index, but that information does not include user info. If you are referring to configs for other systems, then please identify those systems. Of course, those systems must be reporting info to Splunk.

The first step in creating a report is to create a search for the data that goes into that report. Once that is producing the desired results, click the "Save As" menu and select Report. That is where you select the report format and schedule it.

The expiration time of a search/report must be set manually from the Jobs dashboard (Activity->Jobs). IIRC, the maximum TTL is seven days. A 30-day expiration is too long and may use up the users' disk quota.

You can use the IN operator in the search command to filter message IDs and users. SQL-style patterns are available using the like function within the where and eval commands. Use the table command to select fields for display and the stats command to group results.
index=foo VendorMsgID IN (4727, 4728, 4729, 4730, 4731, 4732, 4733, 4734, 4735, 4736, 4737, 4740, 4754, 4755, 4756, 4757, 4758, 4759, 4783, 4784, 4785, 4786, 4787, 4788, 4789, 4791, 631) AND NOT User IN (xxx, system, xxx, xxxx, xxxxx) AND User_Impacted != (res group name)
| where NOT (like(Host_Impacted, "%sc%") OR like(Host_Impacted, "%sd%") OR match(Host_Impacted, "^sc.+") OR match(Host_Impacted, "^sd.+"))
| table User, _time, Event, Group, oHost, Host_Impacted, oLogin, VendorMsgID, Domain_Impacted
| stats values(*) as * by User
Hi @PickleRick, thank you for your support. I tried three different configurations. At first, in a dedicated app:

#props.conf
[source::tcp:5514]
TRANSFORMS-00_hostname = set_hostname_fortinet
TRANSFORMS-10_sourcetype = set_sourcetype_infoblox
TRANSFORMS-11_sourcetype = set_sourcetype_juniper
TRANSFORMS-12_sourcetype = set_sourcetype_fortinet

#transforms.conf
######################### Sourcetype #######################
# infoblox:port
[set_sourcetype_infoblox]
REGEX = \<\d+\>\w+\s+\d+\s+\d+:\d+\d+:\d+\s+\w+-dns-\w+
FORMAT = sourcetype::infoblox:port
DEST_KEY = MetaData:Sourcetype

# Juniper
[set_sourcetype_juniper]
REGEX = ^\<\d+\>\d+\s+\d+-\d+-\d+\w+:\d+:\d+\.\d+\w(\+|-)\d+:\d+\s\w+-edget-fw
FORMAT = sourcetype::juniper
DEST_KEY = MetaData:Sourcetype

# Fortinet
[set_sourcetype_fortinet]
REGEX = ^\<\d+\>date\=\d+-\d+-\d+\s+time\=\d+:\d+:\d+\s+devname\=\"[^\"]+\"\s+devid
FORMAT = sourcetype::fgt_log
DEST_KEY = MetaData:Sourcetype

############################## hostname ############################
# Fortinet
[set_hostname_fortinet]
REGEX = devname\=\"([^\"]+)\"
FORMAT = host::$1

Then I tried (following your previous hint), in a dedicated app:

#props.conf
[source::tcp:5514]
TRANSFORMS-00_hostname = set_hostname_fortinet
TRANSFORMS-10_sourcetype = set_sourcetype_infoblox
TRANSFORMS-11_sourcetype = set_sourcetype_juniper
TRANSFORMS-12_sourcetype = set_sourcetype_fortinet
TRANSFORMS-50_drop_dead = drop_dead_infoblox
TRANSFORMS-51_drop_dead = drop_dead_juniper
TRANSFORMS-52_drop_dead = drop_dead_fortinet

#transforms.conf
######################### Sourcetype #######################
# infoblox:port
[set_sourcetype_infoblox]
REGEX = \<\d+\>\w+\s+\d+\s+\d+:\d+\d+:\d+\s+\w+-dns-\w+
CLONE_SOURCETYPE = infoblox:port

# Juniper
[set_sourcetype_juniper]
REGEX = ^\<\d+\>\d+\s+\d+-\d+-\d+\w+:\d+:\d+\.\d+\w(\+|-)\d+:\d+\s\w+-edget-fw
CLONE_SOURCETYPE = juniper

# Fortinet
[set_sourcetype_fortinet]
REGEX = ^\<\d+\>date\=\d+-\d+-\d+\s+time\=\d+:\d+:\d+\s+devname\=\"[^\"]+\"\s+devid
CLONE_SOURCETYPE = fgt_log

############################## hostname ############################
# Fortinet
[set_hostname_fortinet]
REGEX = devname\=\"([^\"]+)\"
FORMAT = host::$1

############################# original log removing ################
# infoblox:port
[drop_dead_infoblox]
REGEX = \<\d+\>\w+\s+\d+\s+\d+:\d+\d+:\d+\s+\w+-dns-\w+
FORMAT = nullQueue
DEST_KEY = queue

# Juniper
[drop_dead_juniper]
REGEX = ^\<\d+\>\d+\s+\d+-\d+-\d+\w+:\d+:\d+\.\d+\w(\+|-)\d+:\d+\s\w+-edget-fw
FORMAT = nullQueue
DEST_KEY = queue

# Fortinet
[drop_dead_fortinet]
REGEX = ^\<\d+\>date\=\d+-\d+-\d+\s+time\=\d+:\d+:\d+\s+devname\=\"[^\"]+\"\s+devid
FORMAT = nullQueue
DEST_KEY = queue

On the third try, in the app Splunk_TA_fortinet_fortigate I added the same transformation present in the Add-on, starting from the syslog sourcetype:

#props.conf
[syslog]
TRANSFORMS-force_sourcetype_fgt = force_sourcetype_fortigate
SHOULD_LINEMERGE = false
EVENT_BREAKER_ENABLE = true

and in the app Splunk_TA_juniper:

#props.conf
[syslog]
SHOULD_LINEMERGE = false
EVENT_BREAKER_ENABLE = true
TRANSFORMS-force_info_for_juniper = force_host_for_netscreen_firewall,force_sourcetype_for_netscreen_firewall,force_sourcetype_for_junos_idp_structured,force_sourcetype_for_junos_idp,force_sourcetype_for_junos_aamw,force_sourcetype_for_junos_secintel,force_sourcetype_for_junos_firewall_structured,force_sourcetype_for_junos_firewall,force_sourcetype_for_junos_snmp,force_sourcetype_for_junos_firewall_rpd

But in every case I still get only the first three overridden sourcetypes plus the original syslog sourcetype (syslog, infoblox:port, fgt_log, juniper), instead of all the other sourcetypes the Add-ons would normally produce. About your question: for the moment I'm still using Splunk as the syslog receiver, but I have it in mind to move to rsyslog. Ciao. Giuseppe
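For reference, the clone-then-drop pattern discussed in this thread can be sketched minimally like this (stanza names, the regex, and the target sourcetype are hypothetical; keying the props stanza on the original sourcetype keeps the drop from matching the clone, which gets a new sourcetype):

```
# props.conf (sketch; hypothetical names)
[syslog]
TRANSFORMS-10_clone = clone_to_vendor
TRANSFORMS-50_drop = drop_original

# transforms.conf
# Clone matching events into a vendor-specific sourcetype
[clone_to_vendor]
REGEX = <vendor-specific pattern>
CLONE_SOURCETYPE = vendor:log

# Send the original (still sourcetype=syslog) copy to the null queue
[drop_original]
REGEX = <same vendor-specific pattern>
FORMAT = nullQueue
DEST_KEY = queue
```

As noted above, this pattern grows ugly quickly as more vendors share one port, which is why SC4S or rsyslog in front of Splunk is usually preferred.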
Hi @yuanliu, this is truly excellent work. Thank you so much for your time and determination in finding the root cause of this behaviour.
We have already integrated Linux, Palo Alto, and SAP log sources. We are just looking to create Linux, Palo Alto, and SAP use cases based on the MITRE ATT&CK framework or other attack-pattern use cases, as we don't have that much knowledge to create SPL use cases.
Yes, it's possible, but don't do that. It tells Splunk to open every bucket with data from the last seven days and read all of the events from that time. It will be slow and wasteful of resources. Use tstats instead.
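For reference, a tstats sketch of the same daily count; tstats reads only index-time metadata, so it is far cheaper than scanning every raw event:

```
| tstats count where index=* earliest=-7d by _time span=1d
```

This produces one row per day, equivalent to the timechart in the question but without opening the raw event data.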
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Hello, is it possible to execute the following command to calculate events per day over one week?

index=* earliest=-7d | timechart span=1d count
I can take a look some other time. (It's very late here.) In the meantime, you can see my confirmation of this problem and how to work around it in general at https://community.splunk.com/t5/Splunk-Search/Do-you-lose-any-information-between-Chain-Searches-in-Dashboards/m-p/671359/highlight/true#M230087. (I also included a specific recommendation about coding. Additionally, I recommend that you report this as a bug and/or get support involved, even though I don't expect it to be fixed soon.)
This is an amazing find!  My tests show that indeed, when a chain search needs a field that the base search does not pass, it will fail in mysterious ways. In most applications, the base search is not as vanilla as index=abcd, so this behavior would not be revealed. But I consider this a bug because even if the requirement that the base search must contain provisions to pass the fields used by all chain searches is carefully documented, it is really counterintuitive for users, and a slip can affect results in such subtle ways that users may end up trusting bad data. The good news is that the DS team is aggressively trying to relieve user friction. The bad news is that this one is rather tricky, so I don't expect a speedy fix even if they accept it as a bug.

Here is the gist of the problem/behavior: to improve performance, the SPL compiler decides which field(s) to pass through a pipe by inspecting downstream searches. Because the base search and the chain search are completely separate as far as the compiler is concerned, only indexed fields and fields explicitly invoked in the base search are passed to chain searches. In your example, the base search index=_internal will only pass _time, sourcetype, source, host, etc. All search-time fields are omitted. When you change the base search to index=_internal useTypeahead=true, the compiler sees that useTypeahead is referenced and therefore passes it to the result cache.
Here is a simpler test dashboard to demonstrate (I use date_hour because it is 100% populated):

{
  "visualizations": {
    "viz_AD6BWNHC": { "type": "splunk.events", "dataSources": { "primary": "ds_4EfZYMc8" }, "title": "base1", "description": "index=_internal", "showProgressBar": false, "showLastUpdated": false },
    "viz_TrPHlPsH": { "type": "splunk.events", "dataSources": { "primary": "ds_561TjAWf" }, "showProgressBar": false, "showLastUpdated": false, "title": "base1 | table date_hour sourcetype _time", "description": "(bad)" },
    "viz_SiLJUCQc": { "type": "splunk.events", "dataSources": { "primary": "ds_FmGTHy8w" }, "title": "base2", "description": "index=_internal date_hour=*" },
    "viz_A0PjYfHd": { "type": "splunk.events", "dataSources": { "primary": "ds_feUCBRcX" }, "title": "base2 | table date_second sourcetype _time", "description": "(good)" }
  },
  "dataSources": {
    "ds_4EfZYMc8": { "type": "ds.search", "options": { "query": "index=_internal", "queryParameters": { "earliest": "-4h@m", "latest": "now" } }, "name": "base1" },
    "ds_561TjAWf": { "type": "ds.chain", "options": { "extend": "ds_4EfZYMc8", "query": "| table date_hour sourcetype _time" }, "name": "chain" },
    "ds_FmGTHy8w": { "type": "ds.search", "options": { "query": "index=_internal date_hour=*", "queryParameters": { "earliest": "-4h@m", "latest": "now" } }, "name": "base2" },
    "ds_feUCBRcX": { "type": "ds.chain", "options": { "extend": "ds_FmGTHy8w", "query": "| table date_hour sourcetype _time" }, "name": "chain1a" }
  },
  "defaults": {
    "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } }
  },
  "inputs": {},
  "layout": {
    "type": "grid",
    "options": { "width": 1440, "height": 960 },
    "structure": [
      { "item": "viz_AD6BWNHC", "type": "block", "position": { "x": 0, "y": 0, "w": 720, "h": 307 } },
      { "item": "viz_TrPHlPsH", "type": "block", "position": { "x": 0, "y": 307, "w": 720, "h": 266 } },
      { "item": "viz_SiLJUCQc", "type": "block", "position": { "x": 720, "y": 0, "w": 720, "h": 307 } },
      { "item": "viz_A0PjYfHd", "type": "block", "position": { "x": 720, "y": 307, "w": 720, "h": 266 } }
    ],
    "globalInputs": []
  },
  "description": "https://community.splunk.com/t5/Splunk-Search/Do-you-lose-any-information-between-Chain-Searches-in-Dashboards/m-p/671245#M230046",
  "title": "Chain search lose info test fresh"
}

This is the result (screenshot omitted): date_hour is null in the chain search that uses index=_internal as its base search.

One recommendation about your workaround: if your base search uses index=_internal useTypeahead=true instead of index=_internal | useTypeahead=true, the indexer will return far fewer events and the search will be much more efficient.

As to the bug/behavior, because the cause is inherent to the compiler, I imagine it is really difficult for a high-level application like a dashboard engine to influence. Nevertheless, I trust that the DS team will be grateful that you discovered this problem.
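In other words, a base search meant to feed these chain searches can both filter on and explicitly reference the search-time fields the chains need; a minimal sketch using the fields from the test above:

```
index=_internal date_hour=*
| fields _time, sourcetype, date_hour
```

The explicit fields command guarantees the search-time field survives into the result cache regardless of what the compiler infers.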
Hi @yuanliu, I think I have the solution; I have pasted my guesswork as a response to the original post. I have uploaded a dashboard with a working method, albeit perhaps not an optimal one: https://gist.github.com/niksheridan/d8377778e4c5f1ff3e2e49b0b9899185 I will try "collapsing" the first and second steps of the chain in order to actually get a pipeline working that can be further extended. I'd be very grateful for your feedback, if you could be so kind as to review this when you have the time. Thank you again for your help. Thanks, nik
Check your effective config with btool to see if you've successfully overridden the settings. But you may also be hitting some different issue.
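For example, a sketch of the btool check (substitute the conf file you are verifying; the stanza filter is optional):

```
# List the effective, merged props.conf settings,
# with --debug showing which file each setting comes from
$SPLUNK_HOME/bin/splunk btool props list --debug
```

Run this on the instance where the override is supposed to take effect; a setting attributed to a different app's file explains why your copy isn't winning.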
The thing I could suggest is enabling debug and trying to look into forwarder's logs but that's a long shot and I have really no concrete advice what to look for. Kinda like "exploratory surgery".
Depending on what you mean by webhook: as @_JP already pointed out, the HTTP Event Collector is one of the main and most often used input methods. You can also call Splunk using REST to perform various actions; see https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTlist (pushing events to HEC actually uses one of those REST endpoints).
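For illustration, a webhook-style push of one event to HEC might look like this (host and token are placeholders; 8088 is the default HEC port, and -k skips certificate verification for a self-signed test setup):

```
curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from a webhook", "sourcetype": "webhook:demo"}'
```

A successful request returns {"text":"Success","code":0}; anything else usually points at the token or the endpoint URL.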
Also, sometimes a change on the source (like a version upgrade) can result in a change in log format or level of detail even without changing the log level (i.e. a new software version logs additional fields in the events).
Sorting values in a multi-value field of IP addresses involves arranging the IPs in a specific order. Here's a simple way to do it:

1. Separate IP addresses: if your multi-value field contains multiple IPs in a single string, separate them into individual values.
2. Convert IPs to numeric format: convert each IP address to its numeric equivalent by treating each part of the IP as a number and combining them.
3. Sort numeric values: sort the numeric representations of the IPs in ascending or descending order, depending on your preference.
4. Convert back to IP format: once sorted, convert the numeric values back to IP address format.

Example: suppose you have the IP addresses "192.168.1.2", "10.0.0.1", and "172.16.0.5".

Separate: ["192.168.1.2", "10.0.0.1", "172.16.0.5"]
Convert: [3232235778, 167772161, 2886729733]
Sort: [167772161, 2886729733, 3232235778]
Convert back: ["10.0.0.1", "172.16.0.5", "192.168.1.2"]

You can use a programming language like Python or JavaScript for this task. Always consider the specific requirements of your project and the tools available in your development environment.
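The four steps above can be sketched in Python; this is an illustrative example in which the standard-library ipaddress module does the numeric conversion for us:

```python
import ipaddress

def sort_ips(ips):
    """Sort a list of IP address strings in numeric order."""
    # ip_address() parses each string; int() yields its numeric form,
    # which sorts correctly where a plain string sort would not
    # (e.g. "9.x" would otherwise sort after "10.x").
    numeric = sorted(int(ipaddress.ip_address(ip)) for ip in ips)
    # Convert the sorted integers back to dotted notation.
    return [str(ipaddress.ip_address(n)) for n in numeric]

print(sort_ips(["192.168.1.2", "10.0.0.1", "172.16.0.5"]))
# → ['10.0.0.1', '172.16.0.5', '192.168.1.2']
```

Within Splunk itself, a common SPL trick in the same spirit is to zero-pad each octet so that mvsort's lexical ordering matches numeric ordering.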