Since that is a Splunk-supported add-on, you can request enhancements at https://ideas.splunk.com.
Is it possible for the next version of the add-on to add MS Defender vulnerability API calls? Currently there are only "Microsoft Defender for incident" and "Microsoft Defender endpoint alert". We need another one added for "Microsoft Defender for Vulnerabilities". Here are the APIs and the permissions needed:

Collected data: Machine info | API call: GET https://api.securitycenter.microsoft.com/api/machines | Permission needed: Machine.Read.All
Collected data: Full export of vulnerabilities | API call: GET https://api.securitycenter.microsoft.com/api/machines/SoftwareVulnerabilitiesExport | Permission needed: Vulnerability.Read.All
Collected data: Delta export of vulnerabilities | API call: GET https://api.securitycenter.microsoft.com/api/machines/SoftwareVulnerabilityChangesByMachine | Permission needed: Vulnerability.Read.All
Collected data: Description of vulnerabilities | API call: POST https://api.security.microsoft.com/api/advancedhunting/run | Permission needed: AdvancedHunting.Read.All

https://github.com/thilles/TA-microsoft-365-defender-threat-vulnerability-add-on?tab=readme-ov-file#resources
The original question was posed in 2017. Now, in 2024, seven years later, it is still not very clear how one applies a saved extraction regex to an existing search to extract fields from the search, especially without access to the various server-side configuration files. Splunk has grown long in the tooth, dementia encroaching.

Reality: you probably can't do it simply. If you have a sourcetype X, the extractors you saved will only run against the base, plain data set sent as X, not against your search, and they run against the base sourcetype automatically. If it was going to work, it would already be working and you would already have your field.

If your search does any kind of transformation, for example pulling log fields out of JSON data using spath, or messing around with _raw or similar, the extractor you created isn't going to run against the resulting data set. I know, I've tried. The extractors get applied before that part of the search runs. See: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Knowledge/Searchtimeoperationssequence

You're going to have to go into Settings -> Fields -> Field Extractions, copy the regex created by the web extractor page into your search, and manually extract the field with the "rex" command. You may have to tweak it slightly, especially around quotes.

It's a little disingenuous of the Splunk web extraction generator to take the results of the current search as input and imply that a saved extractor will actually operate against such a search and pull fields out for you. It doesn't.
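To illustrate that workaround, here is a minimal sketch of pasting a saved extraction regex into the search itself. The index, sourcetype, JSON path, and regex below are hypothetical placeholders, not from the original post:

```
index=my_index sourcetype=my_sourcetype
| spath input=_raw path=payload output=payload
| rex field=payload "user=(?<username>\w+)"
| stats count by username
```

The key point is that the rex runs at this stage of the pipeline, after spath, which is exactly where an automatic saved extraction would not run.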
This worked. I was able to develop a data model that included the following as a constraint:   NOT (TERM(proc1) OR TERM(proc2) OR ... OR TERM(procn)) Thanks, Tom
And this rex doesn't produce any error
I re-checked by putting in the rex you've provided once again, without the equals (=) symbol, but surprisingly the error message still comes back with the words 'regex=' in it.
This regex works with the sample events and is much more efficient according to regex101.com. | rex "(?<mydatetime>[^,]+),severity=(?<severity>[^,]+),thread=(?<thread>[^,]+),logger=(?<logger>[^,]+),\{\},(?<logmsg>.*)"  
Again, what's with the = after the regex? Is this just a typo?
Assuming that your summary index has a single event for each host for each day that it has reported, you should be able to divide your count (from the stats command you shared) by 7 and multiply by 100 to get the percentage "uptime".
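A minimal sketch of that calculation, using the index and field names from the question (the daily binning and rounding are my own additions, so that multiple events per day still count as one day seen):

```
index=summary_index earliest=-7d@d latest=@d
| bin _time span=1d
| stats dc(_time) AS days_seen BY host_reported
| eval uptime_pct = round(days_seen / 7 * 100, 0)
```

For a host seen on 6 of the 7 days, days_seen would be 6 and uptime_pct would be 86.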
Hi team, I am following the instructions below to bring Genesys Cloud logs into Splunk: https://splunkbase.splunk.com/app/6552 Under the details and installation instructions of the app, I can't find the configuration, and it also did not prompt me for the input configuration.
I am trying to determine the percentage of time a host logs to Splunk within a summary index we created. We have an index called "summary_index" and a field called "host_reported" that shows if a host has been seen in the past hour. Here is the search I am using to see all hosts in the summary index that were seen within the last 24 hours: index=summary_index | stats count by host_reported What I am trying to do is develop a search that shows me what percentage of the time over the past 7 days each host has reported to this summary index. So, for example, if host A only reported to the summary index 6 of the 7 days, I want it to show its "uptime" was 86% for the past 7 days.
Yes indeed, it does solve the issue, but now there's a new one: Streamed search execute failed because: Error in 'rex' command: regex="(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>(.)*)" has exceeded the configured depth_limit, consider raising the value in limits.conf.
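If a more efficient regex alone doesn't resolve this, the limit named in the error can be raised on the search head. A hedged sketch of the relevant limits.conf stanza, assuming a self-managed deployment where you can edit $SPLUNK_HOME/etc/system/local/limits.conf (on Splunk Cloud this would require a support ticket):

```
[rex]
# Maximum backtracking depth the rex command's regex engine may use.
# The default is low by design; raise it cautiously, since very high
# values let pathological regexes consume significant CPU per event.
depth_limit = 10000
```

Changes to limits.conf generally require a restart or debug/refresh to take effect.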
Hello Luiz, in the dashboard you should see the option "Interpolate data gaps" in the left panel. Kindly select that.
Splunk support concluded it was an "as yet undiscovered software bug".
I think I've read this in its entirety four times now over the past week, and I am having difficulty understanding what the problem is. Let me walk through it and see if writing it down helps.

You work in IST, which is +10.5 hours from CST/CDT. You have an alert whose cron schedule says to fire at 1 PM (13:00) CDT. That's 11:30 PM (23:30) IST. You may have mistyped "11:00 PM" for that, and maybe that's the issue?

Disregarding the 11:00/11:30 question, the second thing you mentioned is that the alert didn't actually come until 11:44, which is a 14-minute delay. The search itself is lightweight; it should run practically instantly, so run time shouldn't be an issue. The most obvious reason for the 14-minute delay is that your server is too busy at 1 PM CDT to get this out any faster. You should check into that - there are a lot of resources available inside Splunk to see what might be going on, but my guess is that it's a busy time of day, coupled with possibly too many "heavy" searches that trigger then. You could also increase the priority of that search, though this doesn't address the core problem and may actually make things *worse* and not better. I mean, maybe better for this one search, and being so fast that's probably OK, but still, it's just trying to hide the bigger problem.

Anyway, hope that helps and happy Splunking! -Rich
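One way to see whether scheduler pressure explains a delay like this is to compare when the alert was scheduled against when it was actually dispatched. A sketch, where the savedsearch_name is a placeholder and I'm assuming the usual scheduler.log fields (scheduled_time, dispatch_time, run_time) are present in your _internal index:

```
index=_internal sourcetype=scheduler savedsearch_name="My Alert Name"
| eval delay_sec = dispatch_time - scheduled_time
| table _time savedsearch_name status scheduled_time dispatch_time run_time delay_sec
| sort - delay_sec
```

A consistently large delay_sec at the same time of day would point at scheduler contention rather than the search itself.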
You don't need the = after the rex   | rex "(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>.*)" Updated to remove brackets in the logmsg pattern  
Hi Team, our Splunk search heads are hosted in Splunk Cloud and managed by Support, and we are currently running the latest version (9.1.2308.203).

This pertains to the Max Lines setting in the Format section of the Search and Reporting app. Previously, Splunk defaulted to displaying 20 or more lines in search results. As an administrator responsible for extracting Splunk logs across various applications over the years, I never needed to expand brief search results to read all lines. However, in recent weeks, possibly following an upgrade of the search heads, I've observed that each time I open a new search window, or an existing tab times out and auto-refreshes, the Format > Max Lines option is reset to 5. Consequently, I find myself changing it after nearly every search, which has become cumbersome.

Kindly guide me on how to change the default value from 5 back to 20 in the Search and Reporting app on both search heads. This adjustment would help our customers and end-users, who find it cumbersome to modify the setting for each search.
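For reference, on a self-managed search head this kind of display default usually lives in ui-prefs.conf. The sketch below is my assumption about the relevant stanza and setting, and since these search heads are on Splunk Cloud, applying any such change would go through a support ticket rather than direct file edits:

```
# $SPLUNK_HOME/etc/apps/search/local/ui-prefs.conf
[search]
# Default number of lines shown per event in the Search & Reporting app
display.events.maxLines = 20
```

A per-user preference saved in the UI can override this, which may be why the value appears to reset on refresh.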
I have an issue with ES not showing all of the views depending on which user is logged in.  Is there a location for permissions of the views?  For example, if I am logged in as a Splunk Admin I can s... See more...
I have an issue with ES not showing all of the views, depending on which user is logged in. Is there a location for the permissions of the views? For example, if I am logged in as a Splunk admin I can see all of the views, but as an ESS admin I see fewer. Most important is that the Incident Review view is not there. When I go to Configure > All Configurations > General > Navigation as the ESS admin, all of the views are shown for me to move around and configure, yet the ribbon remains the same. Where should I look for what is different?
@gcusello  Indeed, I have applied the correct sourcetype there to ensure that events are appropriately divided. Nonetheless, the masking of passwords is not taking place as intended.
How do you know that xa goes with x1 and xc goes with x2? Literally, I'm looking at your example and I don't know how you would have come up with that correlation. Can you tell me how you'd do this manually? Unless we know this, I have no idea how we'd make it work programmatically.