All Posts


Pretty sure your \s needs to be a newline, which is not necessarily the same thing as whitespace, like @PickleRick said. This regex will get your breaks and only leave the footer on the last event, and it will break the header into its own event, which you can just ignore. All of this holds as long as your data format doesn't change LOL: [\[\}]+([,\s\r\n]+){
Query1:
index=test-index "ERROR" Code=OPT OR Code=ONP
| bin _time span=1d
| stats count as TOTAL_ONIP1 by Code _time

Query2:
index=test-index "WARN" "User had issues with code" Code=OPT OR Code=ONP
| search code_ip IN(1001, 1002, 1003, 1004)
| bin _time span=1d
| stats count as TOTAL_ONIP2 by Code _time

Query3:
index=test-index "INFO" "POST" NOT "GET /authenticate/mmt"
| search code_data IN(iias, iklm, oilk)
| bin _time span=1d
| stats count as TOTAL_ONIP3 by Code _time

Combined query:
index=test-index "ERROR" Code=OPT OR Code=ONP
| bin _time span=1d
| stats count as TOTAL_ONIP1 by Code _time
| appendcols
    [| search index=test-index "WARN" "User had issues with code" Code=OPT OR Code=ONP
     | search code_ip IN(1001, 1002, 1003, 1004)
     | bin _time span=1d
     | stats count as TOTAL_ONIP2 by Code _time]
| appendcols
    [| search index=test-index "INFO" "POST" NOT "GET /authenticate/mmt" Code=OPT OR Code=ONP
     | search code_data IN(iias, iklm, oilk)
     | bin _time span=1d
     | stats count as TOTAL_ONIP3 by Code _time]
| eval Start_Date=strftime(_time, "%Y-%m-%d")
| table Start_Date Code TOTAL_ONIP1 TOTAL_ONIP2 TOTAL_ONIP3

Output for individual query1:
Start_Date  Code  TOTAL_ONIP1
2025-04-01  OPT   2
2025-04-02  OPT   4
2025-04-03  OPT   0
2025-04-01  ONP   1
2025-04-02  ONP   2
2025-04-03  ONP   3

Output for individual query2:
Start_Date  Code  TOTAL_ONIP2
2025-04-01  OPT   0
2025-04-02  OPT   0
2025-04-03  OPT   0
2025-04-01  ONP   4
2025-04-02  ONP   2
2025-04-03  ONP   3

Output for individual query3:
Start_Date  Code  TOTAL_ONIP3
2025-04-01  OPT   0
2025-04-02  OPT   0
2025-04-03  OPT   9
2025-04-01  ONP   0
2025-04-02  ONP   6
2025-04-03  ONP   8

Combined query output:
Start_Date  Code  TOTAL_ONIP1  TOTAL_ONIP2  TOTAL_ONIP3
2025-04-01  OPT   2            4            9
2025-04-02  OPT   4            2            6
2025-04-03  OPT   1            3            8
2025-04-01  ONP   2
2025-04-02  ONP   3
2025-04-03  ONP

When we combine the queries, the counts do not match the individual queries. For example, on April 1st for ONP, TOTAL_ONIP2 is 4, but in the combined output it shows null, and the value 4 ended up under OPT for April 1st.
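A possible fix (a sketch built from the queries above, untested against your data): appendcols joins rows purely by position, so when one sub-search has no result row for a given Code/date, everything below it shifts into the wrong row. Using append and then re-aggregating by Code and _time aligns the counts by key instead of by position:

```spl
index=test-index "ERROR" Code=OPT OR Code=ONP
| bin _time span=1d
| stats count as TOTAL_ONIP1 by Code _time
| append
    [ search index=test-index "WARN" "User had issues with code" Code=OPT OR Code=ONP
      | search code_ip IN(1001, 1002, 1003, 1004)
      | bin _time span=1d
      | stats count as TOTAL_ONIP2 by Code _time ]
| append
    [ search index=test-index "INFO" "POST" NOT "GET /authenticate/mmt"
      | search code_data IN(iias, iklm, oilk)
      | bin _time span=1d
      | stats count as TOTAL_ONIP3 by Code _time ]
| stats sum(TOTAL_ONIP1) as TOTAL_ONIP1 sum(TOTAL_ONIP2) as TOTAL_ONIP2 sum(TOTAL_ONIP3) as TOTAL_ONIP3 by Code _time
| eval Start_Date=strftime(_time, "%Y-%m-%d")
| table Start_Date Code TOTAL_ONIP1 TOTAL_ONIP2 TOTAL_ONIP3
```

You may also want fillnull to turn the missing combinations into 0 rather than blanks.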
The first example will produce a count of destinations, etc., for each hour of the search time window. Something like this:

_time  Processes.dest  count
12:00  foo             2
12:00  bar             1
13:00  foo             4
13:00  bar             2

The second example will produce counts by destination, etc. The counts will not be broken down by time.

Processes.dest  count
foo             6
bar             3

The bin command will have no effect because there is no _time field at that point. Putting span in the tstats command gives you control over the bin sizes. Without span, tstats will choose a span it thinks best fits the data.
Does the file ever change? If so, I would index the file and then create a scheduled search to update the lookup based on the indexed data.   If it never changes, just import the file one time with the Lookup Editor App.
@pck_npluyaud I personally prefer @PickleRick's option 2, and as @yuanliu mentioned, if it isn't working it's because the JSON isn't properly formatted. If the JSON isn't properly formatted and it's in-house, you can try to get it fixed; if it's a paid product, that sucks and you can try to open a support ticket, but good luck. If you have to do a regex because you can't get the JSON fixed... go to regex101 and build your regex there. Make sure you are using the bare minimum of escapes ("\") and don't use any if you don't have to. The props file handles things ever so slightly differently than the Search GUI, so they should both work with teeny tweaks, but the cleanest version is the one you want in your props file.
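For what it's worth, a minimal props.conf sketch covering both routes discussed above (the sourcetype name and the example extraction are made up for illustration; adapt them to your data):

```
# props.conf (sourcetype name is illustrative)
[my_json_sourcetype]
# If the JSON is valid, let Splunk parse it at search time:
KV_MODE = json

# If the JSON can't be fixed and you fall back to a regex built in
# regex101, a search-time extraction with a named capture group, e.g.:
# EXTRACT-myvars = "var1":\s*(?<var1>\d+)
```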
I personally like to put _time span=whatever, like you have in your first example, everywhere it will work (like with "timechart"), since it works and it makes it clear what you are spanning. For the longest time I was not using timechart and span correctly, until I learned you should put the span literally right next to the _time to make sure it is getting applied appropriately, so now I just do that everywhere. But to answer your real question... what is the technical difference... IDK.
@paleewawa you should also accept this answer as a solution if it works for you.
Okay, this was an easy fix. Whitelist the email domain (for your Teams link) in Server Settings > Email settings. I successfully added mine after whitelisting. 
crossposting: Enable Veeam Splunk App Data Visibility Across Your Splunk Ecosystem | Veeam Community Resource Hub
Veeam has a really nice Veeam App for Splunk. It's actually one of the nicer apps, with easy data integration and pre-built dashboards that pretty much work out of the box.

However, the Veeam data is really only usable within the Veeam App. If you are in a different app in Splunk and try to query the Veeam data, a lot of fields will be "missing". You can see here that I need to use 3 fields (EventGroup, ActivityType, and severity) to find the specific events I'm looking for, but only 1 of those fields is actually available in the _raw data.

OK... so why are these fields available in the Veeam App but not in any other app in Splunk, especially since they don't even actually exist? This is due to the "enrichment" the Veeam App is performing, translating things like "instanceId" into something human-readable and informative. For example, instanceId here is "41600", and when you query the Veeam events there is a lookup that references 41600 and returns additional information.

Great, so if this is available in the Veeam App, why don't I just do all my work there rather than trying to make this extra information available outside the Veeam App? The short answer is that I want to be able to work with more than one dataset at a time. The longer answer is that I have a custom "app" where I store all my SOC security detection queries. Splunk also has its Enterprise Security App, which basically does the same thing. This allows the creation of correlated searches, such as one search that picks up any "ransomware"-related event regardless of whether it comes from Veeam or antivirus or UEBA, etc. But if the Veeam data isn't usable outside the Veeam App, you can't incorporate it into your standard SOC process.

What you need to do is make all the enrichment in the Veeam App (props, lookups, transforms, data models, etc.) readable from any app in Splunk, not just from the Veeam App.
You can do all this from the Splunk GUI (you might need to be an admin... not sure... I'm an admin so I can do everything/whatever I want LOL).

Share the data model globally.

Share the enrichment ("props" & "transforms") globally.

You can see here before and after snips of the "export" config after I modified all the properties:
(default.meta)
(local.meta, which overrides default.meta, created dynamically after the edit)
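For reference, a sketch of what the resulting local.meta can look like (the stanzas below are the generic object types; the exact contents depend on which knowledge objects you shared, so verify against your own install):

```
# local.meta inside the Veeam app directory (illustrative)
[props]
export = system

[transforms]
export = system

[lookups]
export = system

[datamodels]
export = system
```

"export = system" is what makes the objects visible from every app rather than only the one that defines them.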
"Email recipients groups" | Ideas
https://ideas.splunk.com/ideas/APPSID-I-989
Well crud... The PSC is something like 2.3 GB, so even if the entire app is not redistributed on every push, I suspect it severely affects the check and push process. No, it is a fresh, first-time install, so no inherited problems. As far as I can see, there are no errors or problems associated with the install itself, and MLTK is running smoothly. So this is another "improvement" to wish for then, something like a ".gitignore" option where you can manually add apps to your cluster without having them "managed" by the SHD. My guess is that the only way around this would be a standalone SH where you'd basically only install PSC and MLTK and only use it for these purposes. In any case, thank you for your feedback and have a nice weekend.
I'm having a similar problem. When setting up an alert notification by email, the email address for the Teams channel is not being accepted. I'm still researching the issue.
Hi, for learning purposes, why can't we use a personal mail ID for a trial account? I tried creating one with Gmail but it was denied.
Hi, we have DB Connect connections & inputs created on a Splunk HF. We see that it sometimes has status=FAILED, and below is the error captured in the internal DB Connect logs.

Logs:
/opt/splunk/var/log/splunk/splunk_app_db_connect_job_metrics.log
/opt/splunk/var/log/splunk/splunk_app_db_connect_server.log

Error:
ERROR org.easybatch.core.job.BatchJob - Unable to write records
java.io.IOException: There are no Http Event Collectors available at this time.

Can someone help?
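That error typically means DB Connect cannot reach an enabled HEC endpoint on the HF. As a hedged sketch only (the token stanza name and value here are made up; DB Connect normally manages its own token), the HEC configuration it depends on lives in inputs.conf, and both the global [http] stanza and the token itself need to be enabled:

```
# inputs.conf on the HF (stanza/token names are illustrative)
[http]
disabled = 0

[http://dbx_hec_token]
disabled = 0
token = 00000000-0000-0000-0000-000000000000
```

Checking that HEC is enabled globally and that the DB Connect token has not been disabled or deleted is a reasonable first step.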
Hello, I wish to know the functional difference (if any) between the following:

| tstats count FROM datamodel=Endpoint.Processes where Processes.user=SYSTEM by _time span=1h Processes.dest ...

and

| tstats count FROM datamodel=Endpoint.Processes where Processes.user=SYSTEM by Processes.dest ...
| bin _time span=1h

I understand the function, and that "| bin" would always be used for a non-tstats search, but within tstats is there any reason to place the "span" within the "by", or is it just cleaner/slightly faster? Thanks in advance!
Is it possible that your developers made a mistake? If the mock data accurately reflects the raw event structure, there are two errors:

1. The value of root.message.var2 is missing a quotation mark at the beginning (toto" instead of "toto").
2. The structure is missing the final closing bracket.

A corrected structure would be:

{ "root": { "field1": "value1", "message": { "var1":132, "var2":"toto", "var3":{}, "var4":{"A":1,"B":2}, "var5":{"C":{"D":5}} } } }

If the raw event has the correct structure, you don't need to do anything; Splunk will automatically extract the following:

root.field1 = value1
root.message.var1 = 132
root.message.var2 = toto
root.message.var4.A = 1
root.message.var4.B = 2
root.message.var5.C.D = 5

root.message.var3 will not show because its value is an empty JSON object.
Hi @nivets  If you're getting notables created then this is a big part of the battle, the other thing being the NEAP as you've suggested.  Which content pack are you using? Do you just have a single NEAP enabled? Please could you share a screenshot of the configuration?   Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
OK. So, no solution.

1. Indexed extractions => the JSON is too complex.
2. Automatic search-time KV extraction => no, the fields need to be parsed...
3. Manual use of the spath command => at search time... too late.

Well, thanks anyway.