Hi @fahimeh , use the rule you need, e.g. if the hour cannot be 11 AM, you can insert `time_hour!=11` in your search. It depends on your requirements. Ciao. Giuseppe
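A minimal sketch, assuming your events carry Splunk's automatically extracted `date_hour` field (if `time_hour` is instead a calculated field in your environment, use that name):

```
index=your_index date_hour!=11
```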
We have a customer who currently gets IT mainframe insights via CA SAS/MICS, a costly platform. They are seeking a data-consumption solution integrated with the Splunk platform. We expect to ingest and host SMF data in IZPCA. What options does the customer have for consuming reports in Splunk via IBM Z Performance and Capacity Analytics?
Correct, it is not a supported DB driver; that is what I'm wondering about here: what attributes does the agent send so that Observability Cloud recognizes it as a database? For instance, I'm populating things like `db.name`, `db.statement`, and `db.system`, and I'm wondering whether any further values could be populated (either `sf_` values or OpenTelemetry semantic conventions) to get this to work the way I want.
Hi @PickleRick Thanks for the support. The reason for the | stats | chart is to get distinct users: if I don't do this, I get multiple entries per user for each URL. This ensures a user is counted only once per URL per week. I will try the suggestion. I recently moved from KQL to SPL and will try to figure out the format for timechart and fieldformat. Thank you!
Splunk REST commands return information about the current state of the service; they are not historical. Disk space used by search jobs is ephemeral. Once a job expires (usually in 10 minutes), its disk space is released, so a monthly total of disk usage is pretty meaningless. What problem are you trying to solve?
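If a point-in-time snapshot is enough, a sketch along these lines lists the disk currently used by each search job (exact fields exposed by the jobs endpoint may vary by version):

```
| rest /services/search/jobs
| table sid, author, diskUsage, ttl
| addcoltotals labelfield=sid label=TOTAL diskUsage
```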
Ah, right. I missed the configs from the UF. My bad, I could have explained this sooner. When you're using indexed extractions, the data is sent from the UF already parsed and is not processed any further on downstream components (with a possible exception of ingest actions). I suppose you want to get rid of the header line(s). You should rather use the parameters from https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf#Structured_Data_Header_Extraction_and_configuration for this, especially PREAMBLE_REGEX or FIELD_HEADER_REGEX.
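As a sketch (the sourcetype name and the regex are illustrative assumptions, not your actual config):

```
# props.conf on the UF - with INDEXED_EXTRACTIONS the parsing happens at the forwarder
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
# Skip preamble lines (here: lines starting with #) before the real header
PREAMBLE_REGEX = ^#.*
```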
Have you created an index named "cisco" before creating the inputs? You can't send events to a non-existent index. If you haven't, the events will end up either in a last-chance index (if you have one configured) or be discarded (and you'll get a warning about it in _internal).
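On a standalone instance you can create the index from the CLI or in indexes.conf (paths shown are the usual defaults; on an indexer cluster you'd push this from the cluster manager instead):

```
# CLI on a standalone indexer
splunk add index cisco

# or indexes.conf
[cisco]
homePath   = $SPLUNK_DB/cisco/db
coldPath   = $SPLUNK_DB/cisco/colddb
thawedPath = $SPLUNK_DB/cisco/thaweddb
```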
Too few words. Please describe what you mean by "attach". The more effort you put into a precise description of your problem, the higher the chance that someone will actually be able to help you.
First, I created a new UDP data input with a new index, "cisco", but when I searched index=cisco there were no events. Then I created a new UDP data input with the "main" index and it worked. But I don't like storing my switch events in the main index.
There are at least three different ways of "integrating" ES with third-party solutions. Details of implementing each of them will greatly depend on the particular use case and might involve some programming.

1) Use the external solution to search your Splunk ES installation and retrieve notables.

2) Use an alert action (or an adaptive response, in the case of ES) to push each notable separately to the external solution.

3) Use an additional alert to periodically export the list of new notables to the external solution (see the sketch below).

In cases 2 and 3 you need to have something developed (either use something already made, if an app for it exists, or write something from scratch) to push the data from Splunk to the third-party service.
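For option 3, a scheduled search along these lines could feed a webhook or script alert action; the `notable` macro ships with ES, but the field list shown is a typical assumption you'd adjust to your content:

```
`notable`
| table _time, rule_name, urgency, status_label, src, dest
```

You'd schedule it to run, e.g., every 15 minutes over the last 15 minutes, so each run exports only the new notables.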
"I only know in SPL we can't get result if write query with source in the first position" It is not true. If you don't specify index conditions explicitly, Splunk uses default indexes for your user'...
See more...
"I only know in SPL we can't get result if write query with source in the first position" It is not true. If you don't specify index conditions explicitly, Splunk uses default indexes for your user's role (which might be an empty set). Conditions in a search are _not_ positional. OK, having that out of the way... 1) metasearch is an old command, rarely used nowadays since most use cases can be more effectively covered with other methods. In your case it would be | tstats count where index=* source IN ("XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-System/Operational") 2) Well, do you _have_ any data of this kind? If you haven't ingested it from the endpoint, you can't search from it. That's what the search result tells you. (I assume you're searching over decently wide time range and you have access to relevant indexes)
1. The first pair of props/transforms relates to the Universal Forwarder. The second pair is deployed on the indexer cluster layer. 2. Yes, I see indexed extractions.
It's natural that old data gets rolled out of your index when you either reach retention limits or your index (or the whole volume) hits its size limit. So check your index and volume parameters and your index size usage.
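The relevant knobs live in indexes.conf; the values below are illustrative, not recommendations:

```
[your_index]
# Roll buckets to frozen (deleted by default) after 90 days
frozenTimePeriodInSecs = 7776000
# Cap the total index size at roughly 500 GB
maxTotalDataSizeMB = 512000
```

You can check current usage with `| dbinspect index=your_index` or in the Monitoring Console.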
1. Check the output of splunk list monitor and splunk list inputstatus. 2. Why use crcSalt? 3. Don't use KV_MODE=json when you're using INDEXED_EXTRACTIONS=json, and vice versa (see the sketch below; that's not connected to the problem at hand, but useful anyway).
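A sketch of point 3, with an assumed sourcetype name: JSON should be parsed once, either at index time or at search time, otherwise you get duplicated fields.

```
# props.conf on the UF: parse JSON at index time
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

# props.conf on the search head: disable search-time JSON parsing for that sourcetype
[my_json_sourcetype]
KV_MODE = none
```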
1. Don't put the "table" command in that place. It doesn't do anything useful there and (in a distributed setup) moves the processing to the SH layer, effectively losing the advantage of parallel stats processing on the indexers. 2. I can't quite grasp the point of that | stats | chart idea. First you count, then you count the counts. 3. There is a timechart command for time series (see the sketch below). 4. The overall idea with eval is OK, but I'd rather use fieldformat - this way you can freely sort on the actual underlying time data but still present it in a human-readable way.
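A sketch of point 3, using the field names from this thread and an assumed weekly span:

```
index=your_index
| timechart span=1w dc(user) BY url
```

And for point 4, fieldformat changes only the displayed value, so sorting still uses the underlying epoch time:

```
| stats latest(_time) AS last_seen BY user
| sort - last_seen
| fieldformat last_seen = strftime(last_seen, "%Y-%m-%d %H:%M")
```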
There is no datamodel for this, because datamodels abstract the event's conceptual side from the actual implementation. That's why your "event id being 39" is not a good condition for a CIM datamodel. You can of course build your own datamodel, but the question is what you would want to achieve with it. If you just want to find all events with this event id, you can do so using a normal event search (with some possible acceleration techniques).
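For example, assuming Windows events with the usual EventCode field (the index name is a placeholder):

```
index=wineventlog EventCode=39
```

If this needs to run often, it could be accelerated with, e.g., a scheduled search feeding a summary index.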