All Posts


The Y argument can be anything valid in an eval expression. In other words, if | eval test=Y works, then | eval test=case(X, Y) should also work.
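For example, a minimal sketch assuming hypothetical numeric fields a and b exist in the events:

| eval test=case(x=="X", a+b, x=="Y", a*b)

Each Y here is an ordinary eval expression, not just a literal string or number.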
How do I get a count of rows that have a value greater than 0? Example below. The last column is what we are trying to generate.

Name   2024-02-06  2024-02-08  2024-02-13  2024-02-15  Count_Of_Rows_with_Data
Pablo       1           0           1           0           2
Eli         0           0           0           0           0
Jenna       1           0           0           0           1
Chad        1           0           5           0           2
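One way to produce that last column, sketched with foreach and assuming the date columns all match 2024-* and hold numeric values:

| foreach 2024-* [ eval Count_Of_Rows_with_Data = coalesce(Count_Of_Rows_with_Data, 0) + if('<<FIELD>>' > 0, 1, 0) ]

The single quotes around '<<FIELD>>' matter because the column names contain hyphens.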
Yes, I read the reply above and concur that this error occurs when the proper directory is not created; in our case it was "unknown" instead of the actual service name. Ultimately we upgraded from JDK 11 to JDK 21 and, like magic, it started working, so I imagine this was a bug in JDK 11.
Confirming this still works as of Splunk Cloud v9.0.2
I ran a | rest search to export the list of saved searches along with their cron schedules. The cron schedules are not showing the time in UTC. For example, the | rest output for a search shows a cron schedule of 10 14 * * *, but when I look at the Reports tab on the SHC and see the list of saved searches, the "Next Scheduled Time" column shows 2024-04-07 18:10:00 UTC.

My SHC and deployer Splunk servers are both set to UTC as the default system time. On the SHC UI, when I log in, my preferences are also set to view data in "default system time". I am physically located in the Eastern time zone. I am trying to see how to fix this so the | rest output of saved searches and their cron schedules is in UTC.
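A sketch of the kind of search involved, assuming the standard saved/searches endpoint (cron_schedule and next_scheduled_time are fields in that REST output; the poster's exact search isn't shown):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1
| table title cron_schedule next_scheduled_time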
Hi @Ryan.Paredez, I don't have any additional information. I've been digging through the script, but it doesn't look like it's an easy modification. Thanks for the support information, I'll reach out to them. Thanks, Bill
Hi @Marcie.Sirbaugh, Did you read @Sunil.Agarwal's reply above? Did that offer any insight into the next steps you can take to troubleshoot?
Hi @Bill.Fanning, Thanks for asking your question on the community. Did you find any new information or a solution to your question that you could share as a reply here? If not, you can reach out to AppDynamics Support: How do I submit a Support ticket? An FAQ
@ITWhisperer Thanks for your response. It's not a multivalued field, and I tried the regex, which isn't excluding the results either.
Is it possible to have an expression in the case function for argument Y? case(X, Y)

| eval test=case(x=="X", 'a+b')

Instead of a string or number, can the Y argument be an expression like field a + field b? Thanks
Thank you for the suggestion. While this is a great app, I wanted to see if there is any out-of-the-box functionality for the same, as this app is developed by a third-party developer.
Since that is a Splunk-supported add-on, you can request enhancements at https://ideas.splunk.com.
Is it possible for the next version of the add-on to add MS Defender vulnerability API calls? Currently there are only "Microsoft Defender for Incident" and "Microsoft Defender Endpoint Alert". We need another one added for "Microsoft Defender for Vulnerabilities". Here are the APIs and the permissions needed:

- Machine info: GET https://api.securitycenter.microsoft.com/api/machines (permission: Machine.Read.All)
- Full export of vulnerabilities: GET https://api.securitycenter.microsoft.com/api/machines/SoftwareVulnerabilitiesExport (permission: Vulnerability.Read.All)
- Delta export of vulnerabilities: GET https://api.securitycenter.microsoft.com/api/machines/SoftwareVulnerabilityChangesByMachine (permission: Vulnerability.Read.All)
- Description of vulnerabilities: POST https://api.security.microsoft.com/api/advancedhunting/run (permission: AdvancedHunting.Read.All)

https://github.com/thilles/TA-microsoft-365-defender-threat-vulnerability-add-on?tab=readme-ov-file#resources
The original question was posed in 2017. Now, in 2024, seven years later, it is still not very clear how one applies a saved extraction regex to an existing search to extract fields from the search, especially without access to the various server-side configuration files. Splunk has grown long in the tooth, dementia encroaching.

Reality: you probably can't do it simply. If you have a sourcetype X, the extractors you saved will only run against the base, plain data set sent as X, not against your search, and they run against the base sourcetype automatically. If it was going to work, it would already be working and you would already have your field.

If your search does any kind of transformation, for example pulling log fields out of JSON data using spath, or messing around with _raw, the extractor you created isn't going to run against the resulting data set. I know, I've tried. The extractors get applied before that part of the search runs. See: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Knowledge/Searchtimeoperationssequence

You're going to have to go into Settings -> Fields -> Field Extractions, copy/paste the regex created by the web extractor page into your search, and manually extract the field within your search using the rex command. You may have to tweak it slightly, especially for quotes.

It's a little disingenuous of the Splunk web extraction generator to take the results of the current search as the input and imply that a saved extractor will actually operate against such a search and pull fields out for you. It doesn't.
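A hypothetical sketch of that workaround, with an invented regex and field names standing in for whatever the extraction page generated:

index=main sourcetype=X
| spath
| rex field=log "user=(?<user>[^,]+)"

The rex runs at the point you put it in the pipeline, so it sees the data after spath has transformed it, which the saved extractor never will.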
This worked. I was able to develop a data model that included the following as a constraint:   NOT (TERM(proc1) OR TERM(proc2) OR ... OR TERM(procn)) Thanks, Tom
And this rex doesn't produce any error
I re-checked by putting in the rex you've provided once again, without the equals (=) symbol, but surprisingly the error message comes back with the words 'regex='
This regex works with the sample events and is much more efficient according to regex101.com. | rex "(?<mydatetime>[^,]+),severity=(?<severity>[^,]+),thread=(?<thread>[^,]+),logger=(?<logger>[^,]+),\{\},(?<logmsg>.*)"  
Again, what's with the = after the regex? Is this just a typo?
Assuming that your summary index has a single event for each host for each day that it has reported, then you should be able to divide your count (from the stats command you shared) by 7 and multiply by 100 to get the percentage "uptime"
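A sketch of that calculation, with a hypothetical summary index and source name standing in for the actual summary search:

index=summary source=daily_host_status earliest=-7d@d latest=@d
| stats count by host
| eval uptime_pct = round(count / 7 * 100, 1)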