All Posts

Does the file ever change? If so, I would index the file and then create a scheduled search to update the lookup based on the indexed data.   If it never changes, just import the file one time with the Lookup Editor App.
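For what it's worth, the scheduled search in the first case could be as simple as something like this (index, sourcetype, field, and lookup names are placeholders, not from the original question):

index=lookup_staging sourcetype=my_csv earliest=-24h
| dedup id
| table id, name, value
| outputlookup my_lookup.csv

| outputlookup overwrites the lookup file on each run by default, so scheduling it to match how often the source file changes keeps the lookup current.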
@pck_npluyaud I personally prefer @PickleRick's option 2, and as @yuanliu mentioned, if it isn't working it's because the JSON isn't properly formatted. If the JSON isn't properly formatted and it's in-house, you can try to get it fixed; if it's a paid product, that's unfortunate and you can try to open a support ticket, but good luck. If you have to do a regex because you can't get the JSON fixed, go to regex101 and build your regex there. Make sure you are using the bare minimum of escapes ("\") and don't use any you don't have to. The props file handles things ever so slightly differently than the Search GUI, so they should both work with tiny tweaks, but the cleanest version is the one you want in your props file.
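For reference, a search-time regex extraction committed to props.conf looks roughly like this (the sourcetype, field name, and pattern are made-up examples, not taken from this thread):

[my_json_sourcetype]
# pull the value of "var2" out of the raw JSON text
EXTRACT-var2 = "var2"\s*:\s*"(?<var2>[^"]+)"

The same pattern can be prototyped with | rex in the Search GUI first; as noted above, the two handle escaping slightly differently, so test the final version from props.conf before relying on it.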
I personally like to put _time span=whatever, like you have in your first example, everywhere it will work (such as with timechart), since it works and it makes it clear what you are spanning. For the longest time I was not using timechart and span correctly until I learned you should put the span literally right next to the _time to make sure it is getting applied appropriately, so now I just do that everywhere. But to answer your real question, what is the technical difference... IDK.
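To make the placement concrete (the index, sourcetype, and field names in the second line are just examples), the span sits directly on _time in the by clause of tstats, while timechart takes it as its own argument:

| tstats count from datamodel=Endpoint.Processes where Processes.user=SYSTEM by _time span=1h Processes.dest

index=main sourcetype=access_combined | timechart span=1h count by host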
@paleewawa you should also accept this answer as a solution if it works for you.
Okay, this was an easy fix. Whitelist the email domain (for your Teams link) in Server Settings > Email settings. I successfully added mine after whitelisting. 
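If you'd rather manage the whitelist in configuration than in the UI, the same setting appears to live in alert_actions.conf under the email stanza (the domains below are placeholders; check the spec file for your Splunk version):

[email]
# comma-separated list of recipient domains allowed for email alerts
allowedDomainList = mycompany.com, example.teams.ms

Once the Teams channel's domain is on the list, its address should be accepted as an alert recipient.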
crossposting: Enable Veeam Splunk App Data Visibility Across Your Splunk Ecosystem | Veeam Community Resource Hub
Veeam has a really nice Veeam App for Splunk. It's actually one of the nicer apps, with easy data integration and pre-built dashboards that pretty much work out-of-the-box.

However, the Veeam data is really only usable within the Veeam App. If you are in a different App in Splunk and try to query the Veeam data, a lot of fields will be "missing". You can see here that I need to use 3 fields (EventGroup, ActivityType, and severity) to find the specific events I'm looking for, but only 1 of those fields is actually available in the _raw data.

OK... so why are these fields available in the Veeam App but not in any other App in Splunk, especially since they don't even actually exist? This is due to the "enrichment" the Veeam App is performing, translating things like "instanceId" into something human-readable and informative. For example, instanceId here is "41600", and when you query the Veeam events there is a lookup that references 41600 and returns additional information.

Great, so if this is available in the Veeam App, why don't I just do all my work there rather than trying to make this extra information available outside the Veeam App? The short answer is that I want to be able to work with more than one dataset at a time. The longer answer is that I have a custom "app" where I store all my SOC security detection queries. Splunk also has their Enterprise Security App, which basically does the same thing. What this allows is the creation of correlated searches, such as one search that picks up any "ransomware"-related event regardless of whether it comes from Veeam or AntiVirus or UEBA, etc. But if the Veeam data isn't usable outside of the Veeam App, you can't incorporate it into your standard SOC process.

What you need to do is make all the enrichment in the Veeam App (props, lookups, transforms, datamodels, etc.) readable from any App in Splunk, not just from the Veeam App. You can do all this from the Splunk GUI (you might need to be an Admin... not sure... I'm an Admin so I can do everything/whatever I want LOL):

Share the Data Model globally.
Share the enrichment ("props" & "transforms") globally.

You can see here before and after snips of the "export" config after I modified all the properties: first default.meta, then local.meta (which overrides default.meta and is created dynamically after the edit). A minimal sketch of that change follows.
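The sharing change ends up as an export setting in the app's metadata files; a rough before/after (the stanza names here are generic examples, not the Veeam App's actual contents) looks like this:

Before, in default.meta (objects visible only inside the app):

[props]
export = none

After, in local.meta (objects exported to all apps):

[props]
export = system

[transforms]
export = system

[lookups]
export = system

local.meta takes precedence over default.meta and is not overwritten by app upgrades, which is why the change shows up there after editing the sharing in the GUI.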
"Email recipients groups" | Ideas
https://ideas.splunk.com/ideas/APPSID-I-989
Well, crud... The PSC is something like 2.3 GB, so even if the entire app is not redistributed on every push, I suspect it severely affects the check and push process. No, it is a fresh, first-time install, so there are no inherited problems. As far as I can see, there are no errors or problems associated with the install itself, and MLTK is running smoothly. So this is another "improvement" to wish for, then: something like a ".gitignore" option where you can manually add apps to your cluster without having them "managed" by the SHD. My guess is that the only way around this would be a standalone SH where you'd basically only install PSC and MLTK and only use it for these purposes. In any case, thank you for your feedback and have a nice weekend.
I'm having a similar problem. When setting up an alert notification by email, the email address for the Teams channel is not being accepted. I'm still researching the issue.
Hi, for learning purposes, why can't we use a personal mail ID for a trial account? I tried creating one with Gmail but it was denied.
Hi, we have DB Connect connections & inputs created on a Splunk HF. We see that they sometimes have status=FAILED, and below is the error captured in the internal DB Connect logs.

Logs:
/opt/splunk/var/log/splunk/splunk_app_db_connect_job_metrics.log
/opt/splunk/var/log/splunk/splunk_app_db_connect_server.log

Error:
ERROR org.easybatch.core.job.BatchJob - Unable to write records
java.io.IOException: There are no Http Event Collectors available at this time.

Can someone help?
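One thing worth checking (a suggestion, not from the original post) is whether the HTTP Event Collector that DB Connect writes through is enabled and reachable on the HF; its health endpoint can be queried directly:

curl -k https://localhost:8088/services/collector/health

A healthy collector typically answers with {"text":"HEC is healthy","code":17}; a connection error or a disabled-token response usually means HEC needs to be enabled or reconfigured under Settings > Data inputs > HTTP Event Collector.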
Hello, I wish to know the functional difference (if any) between the following:

| tstats count FROM datamodel=Endpoint.Processes where Processes.user=SYSTEM by _time span=1h Processes.dest ...

And

| tstats count FROM datamodel=Endpoint.Processes where Processes.user=SYSTEM by Processes.dest ... | bin _time span=1h

I understand the function and that "| bin" would always be used for a non-tstats search, but within tstats is there any reason to place the "span" within the "by", or is it just cleaner/slightly faster? Thanks in advance!
Is it possible that your developers made a mistake? If the mock data accurately reflects the raw event structure, there are two errors:

1. The value of root.var2 is missing a quotation mark at the beginning (toto" instead of "toto").
2. The structure is missing the final closing bracket.

A corrected structure would be:

{
  "root": {
    "field1": "value1",
    "message": {
      "var1": 132,
      "var2": "toto",
      "var3": {},
      "var4": {"A": 1, "B": 2},
      "var5": {"C": {"D": 5}}
    }
  }
}

If the raw event has the correct structure, you don't need to do anything and Splunk will automatically extract the following:

root.field1 = value1
root.message.var1 = 132
root.message.var2 = toto
root.message.var4.A = 1
root.message.var4.B = 2
root.message.var5.C.D = 5

root.message.var3 will not show because its value is an empty JSON object.
Hi @nivets

If you're getting notables created then this is a big part of the battle, the other thing being the NEAP, as you've suggested. Which content pack are you using? Do you just have a single NEAP enabled? Please could you share a screenshot of the configuration?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
OK. So, no solution.

1. Indexed extractions => the JSON is too complex.
2. Automatic search-time KV-extraction => no, fields need to be parsed...
3. Manual use of the spath command => at search time... too late.

Well, thanks anyway.
As I always repeat - fiddling with regexes around structured data will only bring tears and won't give you the result you want (or it will for a short time, but the moment your data is reordered or reindented - which is perfectly OK with JSON - your solution will stop working). So there are three ways of handling JSON data with Splunk:

1. Indexed extractions
2. Automatic search-time KV-extraction
3. Manual use of the spath command

Each of those ways has its pros and cons and yields slightly different results.
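As a rough sketch of what each option looks like in practice (the sourcetype, index, and field names are placeholders and will need adapting to your data):

1. Indexed extractions - props.conf on the forwarder (or wherever the data is first parsed):

[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

2. Automatic search-time extraction - props.conf on the search head:

[my_json_sourcetype]
KV_MODE = json

3. Manual extraction at search time:

index=my_index sourcetype=my_json_sourcetype | spath input=_raw | table root.message.var1 root.message.var2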
Hello, I am new to the content pack and have started checking on service monitoring degradation for KPIs and Entities. I have created Services, KPIs, and Entities, and I can see that the correlation search finds notable events whenever any KPIs or Entities have high or critical values. NEAP policies are enabled to create episodes (using the NEAP policy from the content pack), but episodes are not getting created. Can someone help with how to troubleshoot the issue? Thanks.
Hi @danielbb

There was a very similar question the other day around this; please see my answer below (with a configuration sketch after it) or check out the original question at https://community.splunk.com/t5/Splunk-Enterprise-Security/Can-Splunk-read-a-CSV-file-and-automatically-upload-it-as-a/m-p/744948/highlight/true#M12497

The other option, as you mentioned, would be to use the REST API - there are some scripts at https://github.com/mthcht/lookup-editor_scripts#readme which aim to achieve this, if this is the route you wanted to go down.

@livehybrid wrote:

If you have a CSV on a forwarder that you want to become a lookup in Splunk, then the best way to achieve this is probably to monitor the file (using monitor:// in inputs.conf) and send it to a specific index on your Splunk indexers. Then, create a scheduled search which searches that index, retrieves the sent data, and outputs it to a lookup (using the | outputlookup command). How and when the CSV is updated may determine exactly what the resulting search looks like, but ultimately this should be a viable solution. There may be other solutions, but they would require significantly more engineering effort.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
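As a minimal sketch of the monitor half of that approach (the path, index, and sourcetype are placeholders, not from the thread), inputs.conf on the forwarder might contain:

[monitor:///opt/data/my_list.csv]
# send the CSV contents to a staging index on the indexers
index = csv_staging
sourcetype = my_csv
disabled = false

A scheduled search on the search head can then read the latest events from index=csv_staging and write them to the lookup with | outputlookup, along the lines of the scheduled-search example earlier in this digest.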