Hi @fahimeh, good for you, see you next time! Let me know if I can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @PickleRick Thanks for your reply, and for the knowledge that source can be used in the first position. Sorry, I didn't know that, because in many cases it was always solved when I added index=* before the source. With your query I only get a count of 0, so I think it's because my client doesn't ingest into the Endpoint. Thank you for your reply and your information. Danke
Hi @fahimeh, use the rule you need, e.g. if the hour cannot be 11 AM, you can insert time_hour!=11 in your search. It depends on your requirements. Ciao. Giuseppe
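As a hedged sketch of what that filter could look like in a full search (index and sourcetype names here are placeholders; note that Splunk's built-in hour-of-day field is normally called date_hour, so use whichever field actually exists in your events):

```
index=your_index sourcetype=your_sourcetype date_hour!=11
| stats count BY date_hour
```

This simply excludes all events whose extracted hour is 11 and counts the rest per hour.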
We have a customer who gets IT mainframe insights today via CA SAS/MICS, a costly platform. They are seeking a data-consumption solution integrated with the Splunk platform. We expect to ingest and host SMF in IZPCA. What are the customer's options for consuming reports in Splunk via IBM Z Performance and Capacity Analytics?
Correct, it is not a supported DB driver; that is what I'm wondering here: what attributes does the agent send to get Observability Cloud to recognize it as a database? For instance, I'm populating things like `db.name`, `db.statement`, and `db.system`, and wondering if any further values could be populated (either `sf_` values or OpenTelemetry semantic conventions) to get this to work the way I want.
Hi @PickleRick Thanks for the support. The reason for the | stats | chart is to make my data distinct by user. If I don't do this, I get multiple entries per user for each URL. This allows a user to hit only one URL per week and then counts them. I will try the suggestion. I recently moved from KQL to SPL and will try to figure out the format for timechart and fieldformat. Thank you!
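For the "distinct users per week" part, a hedged sketch in SPL (field names user and url, plus the index/sourcetype, are assumptions here; timechart's dc() does the per-bucket deduplication that the | stats | chart pair was doing):

```
index=your_index sourcetype=your_sourcetype
| timechart span=1w dc(user) AS unique_users
```

If you need the breakdown per URL as well, `| timechart span=1w dc(user) BY url` is the equivalent split form.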
Splunk REST commands return information about the current state of the service. They are not historical. Disk space used by search jobs is ephemeral. Once a job expires (usually in 10 minutes), its disk space is released, so a monthly total of disk usage is pretty meaningless. What problem are you trying to solve?
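To illustrate the "current state only" point, a sketch of what you *can* get from REST right now (this is a point-in-time snapshot of live jobs, not a history; diskUsage is the job property holding the dispatch directory size in bytes):

```
| rest /services/search/jobs
| stats sum(diskUsage) AS current_jobs_disk_bytes
```

Running this a minute later can give a completely different number, which is why summing it over a month doesn't mean anything.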
Ah, right. I missed the configs from the UF. My bad. Could have explained sooner. When you're using indexed extractions, the data is sent from the UF already parsed and is not processed anymore on downstream components (with a possible exception of index-time actions). I suppose you want to get rid of the header line(s). You should rather use the parameters from https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf#Structured_Data_Header_Extraction_and_configuration for this, especially PREAMBLE_REGEX or FIELD_HEADER_REGEX.
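A hedged sketch of what that could look like in props.conf on the UF (the sourcetype name and the regexes are placeholders — adjust them to your actual file layout):

```
# props.conf on the Universal Forwarder (hypothetical sourcetype name)
[my_structured_sourcetype]
INDEXED_EXTRACTIONS = csv
# Skip banner/comment lines that appear before the real header
PREAMBLE_REGEX = ^#
# Alternatively, point Splunk at the actual header line explicitly
# FIELD_HEADER_REGEX = ^HEADER:\s*(.*)$
```

Since indexed extractions happen on the UF, this stanza has to live on the forwarder, not on the indexers.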
Have you created an index named "cisco" before creating the inputs? You can't send events to a non-existent index. If you haven't, the events will either end up in a last-chance index (if you have one configured) or be discarded (and you'll get a warning about it in _internal).
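As a sketch (paths are the defaults; adjust for your environment), creating the index and optionally a last-chance catch-all in indexes.conf on the indexer would look roughly like:

```
# indexes.conf on the indexer – create the index before pointing inputs at it
[cisco]
homePath   = $SPLUNK_DB/cisco/db
coldPath   = $SPLUNK_DB/cisco/colddb
thawedPath = $SPLUNK_DB/cisco/thaweddb

# Optional: route events aimed at non-existent indexes here instead of dropping them
[default]
lastChanceIndex = main
```

You can also create the index from the UI (Settings > Indexes); the important thing is that it exists before events arrive.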
Too few words. Please describe what you mean by "attach". The more effort you put into a precise description of your problem, the higher the chance that someone will actually be able to help you.
First, I created a new UDP data input with a new index "cisco", but when I searched index=cisco there were no events. Then I created a new UDP data input with the "main" index and it worked. But I don't want to store my switch events in the main index.
There are at least three different ways of "integrating" ES with third-party solutions. Details of implementing each of them will greatly depend on the particular use case and might involve some programming.
1) Use the external solution to search your Splunk ES installation and retrieve notables.
2) Use an alert action (or adaptive response, in the case of ES) to push each notable separately to the external solution.
3) Use an additional alert to periodically export the list of new notables to the external solution.
In cases 2 and 3 you need to have something developed (either use something already made, if an app for it already exists, or write something from scratch) to push the data from Splunk to the third-party service.
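For options 1 and 3, a hedged sketch of the kind of search that would be run (either by the external tool via the REST search API, or as the periodic export alert). It assumes ES's standard notable index; the exact field names vary by ES version and your correlation searches, so treat them as placeholders:

```
index=notable earliest=-15m
| table _time, event_id, rule_name, urgency, status
```

Scheduled every 15 minutes with a matching time window, this gives the external solution a rolling feed of new notables.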
"I only know in SPL we can't get result if write query with source in the first position" — that is not true. If you don't specify index conditions explicitly, Splunk uses the default indexes for your user's role (which might be an empty set). Conditions in a search are _not_ positional.
OK, with that out of the way...
1) metasearch is an old command, rarely used nowadays, since most use cases can be covered more effectively with other methods. In your case it would be
| tstats count where index=* source IN ("XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-System/Operational")
2) Well, do you _have_ any data of this kind? If you haven't ingested it from the endpoint, you can't search it. That's what the search result tells you. (I assume you're searching over a decently wide time range and that you have access to the relevant indexes.)
1. The first pair of props/transforms relates to the Universal Forwarder; the second pair is put on the indexer cluster layer.
2. Yes, I see indexed extractions.
It's natural for old data to be rolled out of your index when you either reach retention limits or your index (or the whole volume) hits its size limits. So check your index and volume parameters and your index size usage.
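The two limits mentioned above map to indexes.conf settings along these lines (index name and values here are placeholders — check what your actual stanzas say):

```
# indexes.conf – whichever limit is hit first rolls data to frozen (deleted by default)
[your_index]
# Maximum age of data in seconds (example: 90 days)
frozenTimePeriodInSecs = 7776000
# Maximum total size of the index in MB
maxTotalDataSizeMB = 500000
```

If the index sits on a volume, the volume's maxVolumeDataSizeMB can trigger the same rollout even when the index itself is under its own limits.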
1. Check the output of splunk list monitor and splunk list inputstatus.
2. Why use crcSalt?
3. Don't use KV_MODE=json when you're using INDEXED_EXTRACTIONS=json, and vice versa. (That's not connected to the problem at hand, but useful anyway.)
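To illustrate point 3, a sketch of a props.conf pairing that avoids double extraction (the sourcetype name is a placeholder; remember that INDEXED_EXTRACTIONS takes effect on the UF, while KV_MODE takes effect on the search head):

```
# props.conf (hypothetical sourcetype) – pick one extraction mode, not both
[my_json_sourcetype]
# Index-time extraction, applied on the Universal Forwarder:
INDEXED_EXTRACTIONS = json
# On the search head, disable search-time JSON extraction for this sourcetype,
# otherwise every field comes out duplicated:
KV_MODE = none
```

With both settings enabled you'd typically see each field twice in search results, which is the symptom this rule prevents.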