Hi @BoldKnowsNothin , what do you mean by "reduce space"? Aliases are applied at search time, so they don't use any additional disk space. As for license usage, the number of aliases or data elaborations doesn't consume any additional license: licensing is based only on the volume of daily indexed logs. Ciao. Giuseppe
Hi @AMAN0113 .. please check these pages: https://docs.splunk.com/Documentation/Splunk/9.1.1/Security/limitfieldfiltering https://community.splunk.com/t5/Splunk-Search/How-to-restrict-search-access-to-certain-hosts-or-fields-on-a/m-p/192290
The following should do what you want:

| eval pwdExpire = if(type="staff", strftime(relative_time(_time, "+90d"), "%F %T"), strftime(relative_time(_time, "+180d"), "%F %T"))

You may need to adjust the time format (I've used %F %T) to suit your requirements.
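If it helps to sanity-check the logic outside Splunk, here is a minimal Python sketch of the same conditional expiry calculation. The 90/180-day offsets mirror the SPL above; the function name, the sample timestamp, and the "%Y-%m-%d %H:%M:%S" spelling of %F %T are my own illustrative choices.

```python
from datetime import datetime, timedelta

def pwd_expire(event_time: datetime, user_type: str) -> str:
    # staff passwords expire after 90 days, everyone else after 180,
    # mirroring if(type="staff", ..., ...) in the SPL above
    days = 90 if user_type == "staff" else 180
    # "%Y-%m-%d %H:%M:%S" is the portable spelling of "%F %T"
    return (event_time + timedelta(days=days)).strftime("%Y-%m-%d %H:%M:%S")

t = datetime(2023, 1, 1, 12, 0, 0)
print(pwd_expire(t, "staff"))    # -> 2023-04-01 12:00:00
print(pwd_expire(t, "student"))  # -> 2023-06-30 12:00:00
```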
Hi @BoldKnowsNothin ... yes, I got it. Please note that field names do not occupy much space. Say you have a CSV file with three fields:
field1,field2,field3longName
data1,data2,data3
and ten thousand records in that file. Even though the third field has the long name "field3longName", and even if you add an alias for it, the indexer stores the name only once, so the overhead is negligible. License usage is a different matter: if you don't need a field, say field2, you can drop it entirely during data onboarding, and that will save a lot of license. Hope that helps, thanks.
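To illustrate the license point: what counts against the license is the volume of raw data indexed, so dropping an unneeded column before onboarding is what actually saves license, not aliasing. A rough, hypothetical Python sketch of the size difference (the field and data values are the made-up ones from the example above):

```python
import csv, io

header = ["field1", "field2", "field3longName"]
rows = [["data1", "data2", "data3"]] * 3  # pretend these are many records

def csv_bytes(header, rows, keep):
    # serialize only the kept columns and measure the payload size
    idx = [header.index(f) for f in keep]
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow([header[i] for i in idx])
    for r in rows:
        w.writerow([r[i] for i in idx])
    return len(buf.getvalue().encode())

full = csv_bytes(header, rows, header)
trimmed = csv_bytes(header, rows, ["field1", "field3longName"])
print(full - trimmed)  # bytes saved per file by dropping field2
```

With thousands of records the per-row savings dominate the one-off header savings, which is the point of the answer above.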
Hi @inventsekar, this is a production Splunk deployment with a cluster of indexers and a cluster of search heads. I think the number of UFs is not important for this problem. Thanks
Is it possible to have the true and false parts of an if statement contain eval statements?

| eval pwdExpire=if(type="staff",
    | eval relative_time(_time, "+90day"),
    | eval relative_time(_time, "+180day"))

The desired result is: if type="staff", calculate pwdExpire as _time + 90 days; otherwise calculate pwdExpire as _time + 180 days. I will then format pwdExpire and display it in a table.
Hello inventsekar, Sir, all this is only to reduce our license usage. Currently we are afraid to exclude logs, so we are looking for something else to reduce it. Many thanks,
Hi, I want to restrict access to different teams based on hosts but don't want to do it by creating multiple indexes for this. The data would be present in one index and teams would be given access to this index, however they should be able to see only the data they own. Is there a way host-based restriction can be achieved?
>>> But does this field alias reduce space? Do you mean: after data onboarding (once the fields are indexed), will applying the field alias reduce the index size? As per my understanding it won't reduce the index size (even if it did, the reduction would be negligible).
I am using ITSI's KPI-based search for text log monitoring. When the text logs match the search criteria, the flow sends an alert via email. I would like to quote the contents of the text logs that matched the detection criteria in the body of the email. Is it possible to implement such a requirement with Splunk ITSI? If so, I would like to know the details of the implementation. If not, I would like to know why.
Hi All, Is there a way to retrieve a specific alert without using the short ID on the incident review page? I was thinking of using the "rule_id" field or "event_hash" of the alert, but wasn't able to pull the specific alert. Please suggest any alternate method other than using the short ID. Thanks.
We are currently using a regex pattern to match events against our raw data, and it works perfectly in the search app. The pattern we are using is: C:\\Windows\\system32\\cmd\.exe*C:\\ProgramData\\Symantec\\Symantec Endpoint Protection\\14\.3\.8289\.5000\.105\\Data\\Definitions\\WebExtDefs\\20230830\.063\\webextbridge\.exe* However, when we try to use this pattern in a lookup table, the events are not matched. This seems to be because of the wildcard in the middle of the pattern. Despite defining the field as WILDCARD(process) in the lookup definition, it still doesn't match the events. I'm wondering whether Splunk lookups support wildcards within strings, or only at the beginning and end of strings? Any insights or guidance on this matter would be greatly appreciated. Regards VK
Hi @lionkesler ... could you please share some more details: the full SPL search query, and whether the macros worked previously or you only recently found this issue?
Hi @SN1368 ... may I know some more details, please: is this a production Splunk or a testing/dev Splunk? Is it clustered or non-clustered? How many UFs do you have?
See if this helps. It uses actual times rather than relative ones, but the format is there.

index=_internal status=* earliest=-30m
``` Get the most recent status for each API every 5 minutes ```
| timechart span=5m latest(status) as status by API
``` Convert timestamp to time (HH:MM) ```
| eval _time=strftime(_time,"%H:%M")
``` Flip the display so time is across the top and API down the side ```
| transpose 0 header_field=_time column_name="API"
``` Fill in blank cells ```
| fillnull value="-"
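As a rough illustration of what the timechart-by-plus-transpose pair produces, here is a hypothetical Python sketch of the same pivot: latest status per API per 5-minute bucket, then APIs down the side, time buckets across the top, and "-" filling the gaps. The sample API names and timestamps are made up.

```python
# (epoch_seconds, api, status) samples; names and times are made up
events = [
    (0,   "/login",  200),
    (120, "/login",  500),
    (60,  "/orders", 200),
    (400, "/login",  200),
]

SPAN = 300  # 5-minute buckets, like span=5m

latest = {}  # (bucket, api) -> (time, status)
for t, api, status in events:
    bucket = t // SPAN * SPAN
    key = (bucket, api)
    if key not in latest or t > latest[key][0]:
        latest[key] = (t, status)

# transpose: one row per API, one column per bucket, "-" for gaps
buckets = sorted({b for b, _ in latest})
apis = sorted({a for _, a in latest})
table = {a: [latest.get((b, a), (None, "-"))[1] for b in buckets]
         for a in apis}
print(table)  # -> {'/login': [500, 200], '/orders': [200, '-']}
```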
Hi, we are logging api requests in Splunk.
I would like to create a sort of health-check table where every column represents the status code of the last API call in the previous 5 minutes, while each row is a different API.
Here is an example of what the output should be:
Any Idea how I could achieve that in Splunk?
Each row represents a different API ( request.url), while the status code is stored in response.status
Thank you
One of these should work, depending on which count must be greater than 2.

index=idx-sec-cloud sourcetype=rubrik:json NOT summary="*on demand backup*"
(custom_details.eventName="Snapshot.BackupFailed" NOT (custom_details.errorId="Oracle.RmanStatusDetailsEmpty"))
OR (custom_details.eventName="Snapshot.BackupFromLocationFailed" NOT (custom_details.errorId="Fileset.FailedDataThresholdNas" OR custom_details.errorId="Fileset.FailedFileThresholdNas" OR custom_details.errorId="Fileset.FailedToFindFilesNas"))
OR (custom_details.eventName="Vmware.VcenterRefreshFailed")
OR (custom_details.eventName="Hawkeye.IndexOperationOnLocationFailed")
OR (custom_details.eventName="Hawkeye.IndexRetryFailed")
OR (custom_details.eventName="Storage.SystemStorageThreshold")
OR (custom_details.eventName="ClusterOperation.DiskLost")
OR (custom_details.eventName="ClusterOperation.DiskUnhealthy")
OR (custom_details.eventName="Hardware.DimmError")
OR (custom_details.eventName="Hardware.PowerSupplyNeedsReplacement")
| eventstats count
| where (count > 2 AND 'custom_details.eventName'="Mssql.LogBackupFailed") OR count <= 2 OR 'custom_details.eventName'!="Mssql.LogBackupFailed"
| eventstats earliest(_time) AS early_time latest(_time) AS late_time

Or:

index=idx-sec-cloud sourcetype=rubrik:json NOT summary="*on demand backup*"
(custom_details.eventName="Snapshot.BackupFailed" NOT (custom_details.errorId="Oracle.RmanStatusDetailsEmpty"))
OR (custom_details.eventName="Snapshot.BackupFromLocationFailed" NOT (custom_details.errorId="Fileset.FailedDataThresholdNas" OR custom_details.errorId="Fileset.FailedFileThresholdNas" OR custom_details.errorId="Fileset.FailedToFindFilesNas"))
OR (custom_details.eventName="Vmware.VcenterRefreshFailed")
OR (custom_details.eventName="Hawkeye.IndexOperationOnLocationFailed")
OR (custom_details.eventName="Hawkeye.IndexRetryFailed")
OR (custom_details.eventName="Storage.SystemStorageThreshold")
OR (custom_details.eventName="ClusterOperation.DiskLost")
OR (custom_details.eventName="ClusterOperation.DiskUnhealthy")
OR (custom_details.eventName="Hardware.DimmError")
OR (custom_details.eventName="Hardware.PowerSupplyNeedsReplacement")
| eventstats count(eval('custom_details.eventName'="Mssql.LogBackupFailed")) as count
| where (count > 2 AND 'custom_details.eventName'="Mssql.LogBackupFailed") OR 'custom_details.eventName'!="Mssql.LogBackupFailed"
| eventstats earliest(_time) AS early_time latest(_time) AS late_time
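The intent of the second eventstats + where pair can be sanity-checked with a small Python sketch (event names borrowed from the search above, sample data made up): count the Mssql.LogBackupFailed events, keep them only when there are more than 2, and always keep the other event types.

```python
def filter_events(events):
    # like: | eventstats count(eval(eventName="Mssql.LogBackupFailed")) as count
    #       | where (count > 2 AND eventName="Mssql.LogBackupFailed")
    #             OR eventName!="Mssql.LogBackupFailed"
    n = sum(e == "Mssql.LogBackupFailed" for e in events)
    return [e for e in events
            if e != "Mssql.LogBackupFailed" or n > 2]

events = [
    "Mssql.LogBackupFailed",
    "Mssql.LogBackupFailed",
    "ClusterOperation.DiskLost",
]
print(filter_events(events))
# with only 2 Mssql failures they are suppressed; DiskLost is kept
```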