All Posts

Was the root cause of this issue identified?
Hi all, I'm monitoring compliance data for the past 7 days using timechart. My current query displays the count of "comply" and "not comply" events for each day:

index=indexA | timechart span=1d count by audit

However, I'd like to visualize this data as percentages instead. Is it possible to modify the search to display the percentage of compliant and non-compliant events on top of each bar? Thanks in advance for your help!
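One possible approach (a sketch, assuming the field is literally named audit and holds the values "comply" and "not comply") is to compute per-day percentages with eventstats and reshape the result with xyseries:

index=indexA
| bin _time span=1d
| stats count by _time audit
| eventstats sum(count) as total by _time
| eval percent=round(count/total*100, 1)
| xyseries _time audit percent

This makes the bars themselves percentages; printing the value on top of each bar is then a chart option (e.g. charting.chart.showDataLabels in Simple XML) rather than part of the search.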
To lock a single dashboard down, you would want to create a new custom role that does not inherit from the user role, then grant that role read permission on that single dashboard. The user can then reach the dashboard via a direct link but cannot browse to it through the app. If they can view ES, they can view all of its dashboards (by default). You could go dashboard by dashboard and change the custom nav to reflect it, but if you want the user to see only that one part of ES, I'd recommend the method I laid out up top.
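For reference, that kind of per-dashboard permission ends up in the app's metadata/local.meta; a minimal sketch, with the view and role names as placeholders:

[views/my_locked_dashboard]
access = read : [ my_custom_role ], write : [ admin ]
export = none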
I know it's a little bit delayed, but due to the nature of the 0-day from February, you would need to know which method was being exploited (i.e., which characters the exact attack wanted to utilize). That said, you could write a detection that checks for Outlook.exe (and all versions of it) connecting over port 445. If you have Sysmon installed, you could use Event ID 3 events for this. Florian Roth also wrote a YARA rule that can detect emails containing that malicious Moniker link:

rule EXPL_CVE_2024_21413_Microsoft_Outlook_RCE_Feb24 {
    meta:
        description = "Detects emails that contain signs of a method to exploit CVE-2024-21413 in Microsoft Outlook"
        author = "X__Junior, Florian Roth"
        reference = "https://github.com/xaitax/CVE-2024-21413-Microsoft-Outlook-Remote-Code-Execution-Vulnerability/"
        date = "2024-02-17"
        modified = "2024-02-19"
        score = 75
    strings:
        $a1 = "Subject: "
        $a2 = "Received: "
        $xr1 = /file:\/\/\/\\\\[^"']{6,600}\.(docx|txt|pdf|xlsx|pptx|odt|etc|jpg|png|gif|bmp|tiff|svg|mp4|avi|mov|wmv|flv|mkv|mp3|wav|aac|flac|ogg|wma|exe|msi|bat|cmd|ps1|zip|rar|7z|targz|iso|dll|sys|ini|cfg|reg|html|css|java|py|c|cpp|db|sql|mdb|accdb|sqlite|eml|pst|ost|mbox|htm|php|asp|jsp|xml|ttf|otf|woff|woff2|rtf|chm|hta|js|lnk|vbe|vbs|wsf|xls|xlsm|xltm|xlt|doc|docm|dot|dotm)!/
    condition:
        filesize < 1000KB and all of ($a*) and 1 of ($xr*)
}

I hope that this late in the game you have patched all your Outlook installations, and would instead go for detecting whether any are unpatched.
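For the Sysmon route, a rough SPL starting point; the index, sourcetype, and field names below depend on your Sysmon add-on, so treat them as assumptions:

index=wineventlog sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=3 Image="*\\outlook.exe" DestinationPort=445
| stats count min(_time) as first_seen max(_time) as last_seen by host Image DestinationIp DestinationPort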
If you mean rendering it that way on the Incident Review page, that is because that part of the page isn't expecting HTML, only raw data. To achieve this, you would need custom .js along with a clone of the Incident Review page (possible on an on-prem instance; I believe you cannot do it on cloud) to basically remap those fields for all notables.
I'm looking to craft a query (a correlation search) that would trigger an alert when an internal system tries to access a malicious website. I would greatly appreciate any suggestions you may have. Thank you in advance for your help. Source=bluecoat
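If you maintain (or subscribe to) a list of known-bad destinations, one possible starting point is a lookup match; the lookup name malicious_domains and the field names below are hypothetical, so adjust them to your Bluecoat extractions:

index=proxy sourcetype=bluecoat
| lookup malicious_domains domain AS dest_host OUTPUT domain AS matched_domain
| where isnotnull(matched_domain)
| stats count min(_time) as first_seen by src_ip dest_host matched_domain

If you run Enterprise Security, also check the shipped "Threat Activity Detected" correlation search, which matches events against the threat intelligence collections out of the box.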
Are you asking if Splunk Observability Cloud supports Windows IIS telemetry logging, or if it supports logging for your specific PHP webshop? If the latter, then it would be helpful to know the name of the PHP webshop.
If I understand correctly, you have two different log types ABC and EFG in the same index, and you want to count how many success, fail, and error events occur, but only for correlation IDs that occur in both ABC and EFG? Assuming the field names are correct, your current query should count the success, fail, and error events from both, though it will also count events that occur in only one of the two types. It is not clear how you would like the details (json_ext of message) displayed alongside the counts. You could do stats ... by json_ext to see the counts by json_ext, but this would only be practical if the json_ext messages do not vary much.
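As a sketch of the "both types only" constraint (log_type, status, and correlation_id are assumed field names):

index=myindex log_type IN ("ABC", "EFG")
| stats dc(log_type) as types,
    count(eval(status="success")) as success,
    count(eval(status="fail")) as fail,
    count(eval(status="error")) as error
    by correlation_id
| where types=2

The where types=2 clause keeps only correlation IDs seen in both ABC and EFG.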
1. The best practice is to define as much as you can from the so-called magic eight (see the sketch below). It boosts ingestion performance and lets you avoid errors with line breaking or timestamp recognition.

2. Sourcetype is just a metadata value associated with an event. From a technical point of view, it doesn't have to be "defined" anywhere before a given value is set as the sourcetype. You could even create an index-time transform setting the sourcetype to a completely random value and Splunk would still work and process the events (although the effects might be far from desirable). It's the other way around: the value of the sourcetype metadata can, if configured properly, affect how Splunk processes events. BTW, sourcetype is just one of the metadata fields that comes into play when Splunk decides what to do with an event; the other ones are source and host.
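For illustration, a props.conf stanza covering the magic eight might look like this (the sourcetype name and values are placeholders, not a recommendation for any specific data):

[my:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)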
Since we are in the early stages of using Splunk Cloud, we don't define props.conf as part of the onboarding process; we introduce props for a given sourcetype only when its parsing goes wrong. For one such sourcetype, line breaking is off, yet when I look for the sourcetype on the indexers (via support) and on the on-prem HF where the data is ingested, I cannot find any props for it. So I wonder: is it possible for a particular source's sourcetype to have no props defined anywhere?
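One way to check (assuming you have CLI access on the HF) is btool, which prints every props setting that applies to a sourcetype and which app supplies it; my_sourcetype below is a placeholder:

$SPLUNK_HOME/bin/splunk btool props list my_sourcetype --debug

If nothing comes back, no props stanza exists on that instance and Splunk falls back to automatic line breaking and timestamp recognition, so yes, a sourcetype can exist with no props defined anywhere.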
But the aliases must be defined within an app. If that app is not exporting its objects, it might cause a problem. Anyway, global export is one thing (exporting globally lets you use the knowledge objects in other apps' scopes); the permissions assigned to a knowledge object are something else (you could export globally but grant access only to selected roles).
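As a sketch, global export lives in the owning app's metadata/local.meta; the stanza below (illustrative, not from any specific app) exports all props-based objects, which includes field aliases:

[props]
export = system

Individual objects can be scoped more narrowly, and access rules such as access = read : [ * ], write : [ admin ] control the role permissions separately.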
Ahh, indeed, I missed the "when indexing" part, but I'd assume it was due to a misunderstanding by @uagraw01 of how field extractions work - they indeed mostly work during the search phase, not while the events are being indexed. But in case it was really meant as "index-time aliases" - there is no such thing. Aliasing is always done at search time. But yes, you can specify multiple field aliases in one alias group (you can set it up in the GUI and check which conf file the server writes :-)).
The FIELDALIAS attribute creates aliases at search time rather than at index time as requested. IME, it's unusual to have a single FIELDALIAS attribute define more than one alias. Be sure the props.conf file has line-continuation characters (\) between the aliases if they span multiple lines, as shown in props.conf.spec. If that doesn't work, use a separate FIELDALIAS setting for each alias.
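For illustration (the sourcetype and field names are made up), both forms are valid:

[my_sourcetype]
# several aliases in one FIELDALIAS attribute
FIELDALIAS-net = src_ip AS source_address dest_ip AS destination_address

# or one attribute per alias
FIELDALIAS-src = src_ip AS source_address
FIELDALIAS-dest = dest_ip AS destination_address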
The best solution is to rewrite the playbooks, breaking them up into smaller playbooks. If that is not workable, the next best solution is to keep a 5.x SOAR environment for maintaining the playbooks. Other than that, you could use the dev tools in your browser to take performance measurements, then selectively disable things to see if performance improves, but this is hacky and may have side effects.
Are you getting the logs in via a Kaspersky app? If so, is it possible to set the app to debug mode, or perhaps do so on the Kaspersky side, so as to get more detailed log messages describing which component is not working?
I have a feeling that Splunk is automatically capping the number of rows when you use | timechart span=1s (this could produce 86400 rows per day), which would explain why your search works fine with 1-2 days but not with more than three. Maybe you could try binning _time to a 1s value and then doing stats on it:

index=test elb_status_code=200
| bin _time span=1s
| stats count as total by _time
| stats count as num_seconds by total
| sort 0 total

I am also curious how you got it to show values for a total of 0. The count() function does not do that by default.
@PickleRick Permissions are already set to global for the field alias.
Assuming your naming is OK, check the permissions.
Well, since you need to connect to an existing database, it must have been set up and be maintained by someone. That's the easiest way to find out - go and ask. You can find the most popular engines here: https://en.wikipedia.org/wiki/Relational_database#List_of_database_engines (among other sources).
@gcusello Thanks for your help in understanding the issue.

Case 1: There is actually no timestamp present in the provided CSV. The data in the snapshot comes from a sample I ingested from the dev machine via a UF, and even there I cannot see a "timestamp" field in the events.

Case 2: When I upload the CSV via data inputs and select the sourcetype "cmkcsv", the timestamp field does show up. But whatever settings I add under Advanced, the warning "failed to parse timestamp, defaulting to file modtime" never goes away.

[cmkcsv]
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = \n\W
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TRUNCATE = 200
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true
TIME_PREFIX = ^\w+\s*\w+,\s*\w+,\s*
MAX_TIMESTAMP_LOOKAHEAD = 20