All Posts

Ah, right. I missed the configs from the UF, my bad; I could have explained sooner. When you're using indexed extractions, the data is sent from the UF already parsed and is not processed again on downstream components (with a possible exception for ingest actions). I suppose you want to get rid of the header line(s). For that, you should use the structured-data parameters from https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf#Structured_Data_Header_Extraction_and_configuration, especially PREAMBLE_REGEX or FIELD_HEADER_REGEX.
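For example, a minimal props.conf sketch on the UF (the sourcetype name and the preamble pattern are assumptions for illustration; adjust to your data):

```
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
# Skip banner lines (here: lines starting with "#") that
# precede the real column header
PREAMBLE_REGEX = ^#
```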
Have you created an index named "cisco" before creating the inputs? You can't send events to a non-existent index. If you haven't, the events will either land in the last-chance index (if you have one configured) or be discarded (and you'll get a warning in _internal about it).
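If the index is missing, a sketch of creating it, either via the CLI on the indexer or with an equivalent indexes.conf stanza (the paths shown are the usual defaults):

```
# CLI:
#   splunk add index cisco
# or indexes.conf:
[cisco]
homePath   = $SPLUNK_DB/cisco/db
coldPath   = $SPLUNK_DB/cisco/colddb
thawedPath = $SPLUNK_DB/cisco/thaweddb
```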
Too few words. Please describe what you mean by "attach". The more effort you put into a precise description of your problem, the higher the chance that someone will actually be able to help you.
First, I created a new UDP data input with a new index "cisco", but when I searched index=cisco there were no events. Then I created a new UDP data input with the "main" index and it worked. But I don't want to store my switch events in the main index.
There are at least three different ways of "integrating" ES with third-party solutions. The details of implementing each of them depend greatly on the particular use case and might involve some programming:

1) Use the external solution to search your Splunk ES installation and retrieve notables.
2) Use an alert action (or an adaptive response in the case of ES) to push each notable separately to the external solution.
3) Use an additional alert to periodically export the list of new notables to the external solution.

In cases 2 and 3 you need to have something developed (either reuse an existing app, if one exists, or write something from scratch) to push the data from Splunk to the third-party service.
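For option 1, a minimal sketch of pulling notables over the Splunk REST API (the host, port 8089, and credentials are placeholders; index=notable is where ES stores notable events):

```
# Export notables from the last 15 minutes as JSON
# (replace splunk-host and the credentials with your own)
curl -k -u admin:changeme \
  https://splunk-host:8089/services/search/jobs/export \
  -d search='search index=notable earliest=-15m' \
  -d output_mode=json
```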
"I only know in SPL we can't get result if write query with source in the first position" - that is not true. If you don't specify index conditions explicitly, Splunk searches the default indexes for your user's role (which might be an empty set). Conditions in a search are _not_ positional.

OK, having that out of the way...

1) metasearch is an old command, rarely used nowadays, since most use cases can be covered more effectively with other methods. In your case it would be:

| tstats count where index=* source IN ("XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-System/Operational")

2) Well, do you _have_ any data of this kind? If you haven't ingested it from the endpoint, you can't search it. That's what the search result tells you. (I assume you're searching over a decently wide time range and have access to the relevant indexes.)
1. The first pair of props/transforms is deployed on the Universal Forwarder; the second pair is put on the indexer cluster layer.
2. Yes, I use indexed extractions.
1. Where do you put those props/transforms? 2. Do you use indexed extractions?
It's natural for old data to be rolled out of your index when you either reach retention limits or your index (or the whole volume) hits its size limit. So check your index and volume parameters and your index size usage.
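The relevant knobs live in indexes.conf; a sketch with assumed names and values:

```
[my_index]
# Retention: data older than ~90 days rolls to frozen
# (deleted, unless coldToFrozenDir/coldToFrozenScript is set)
frozenTimePeriodInSecs = 7776000
# Size cap for the whole index (~500 GB)
maxTotalDataSizeMB = 512000

[volume:primary]
path = /opt/splunk/var/lib/splunk
# Size cap for the whole volume (~1 TB); oldest buckets roll
# out first when it's reached
maxVolumeDataSizeMB = 1048576
```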
1. Check the output of splunk list monitor and splunk list inputstatus.
2. Why use crcSalt?
3. Don't use KV_MODE=json when you're using INDEXED_EXTRACTIONS=json, and vice versa. (That's not connected to the problem at hand, but useful anyway.)
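For point 3, a props.conf sketch of the pairing (the sourcetype name is an assumption): INDEXED_EXTRACTIONS parses the JSON once at index time on the UF, so search-time JSON parsing should be switched off to avoid duplicated fields:

```
[my_json_sourcetype]
# On the UF: parse the JSON once, at index time
INDEXED_EXTRACTIONS = json
# On the search head: don't parse it again at search time
KV_MODE = none
AUTO_KV_JSON = false
```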
1. Don't put the "table" command in that place. It doesn't do anything useful there and (in a distributed setup) moves the processing to the SH layer, effectively losing the advantage of parallel stats processing on the indexers.
2. I can't quite grasp the point of that | stats | chart idea. First you count, then you count the counts.
3. There is a timechart command for time series.
4. The overall idea with eval is OK, but I'd rather use fieldformat: this way you can freely sort on the actual underlying time data while presenting it in a human-readable way.
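For points 3 and 4, a sketch based on the fields from the original search (a plain time series would instead end with | timechart span=1w count by app):

```
index=db_it_network sourcetype=pan* (url_domain="www.perplexity.ai" OR app IN (claude-base, google-gemini*, openai*, bing-ai-base))
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| bin _time span=1w
| stats count by user, app, _time
| fieldformat _time = strftime(_time, "Week %U")
```

Since fieldformat only changes the display, sorting on _time still uses the underlying epoch value.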
There is no datamodel for this because datamodels abstract the event's conceptual side from the actual implementation. That's why "event id is 39" is not a good condition for a CIM datamodel. You can of course build your own datamodel, but the question is what you would want to achieve with it. If you just want to find all events with this event id, you can do so with a normal event search (with some possible acceleration techniques).
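If a plain event search is enough, a sketch (the index name is an assumption; point it at wherever your Windows events land):

```
index=wineventlog EventCode=39
| stats count by host, source
```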
I would like to create a search with data models where my event id is 39. However, there is no datamodel that fulfills my criteria. Does anyone know how to approach this?
Can you be more verbose? What do you mean by "I cannot store log in another index"? The TA itself shouldn't have anything to do with the indexes.
Hi @RezaET, the issue is probably the default search path: by default only the main index is in the default search path, and in apps the index isn't specified. You have two solutions: add the other indexes to the default search path, or add the index to all the searches in your app. Ciao. Giuseppe
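The first option can be sketched in authorize.conf (the role name is an assumption; srchIndexesDefault controls which indexes are searched when no index= is given):

```
[role_user]
# Allow and default-search the add-on's index alongside main
srchIndexesAllowed = main;cisco
srchIndexesDefault = main;cisco
```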
Hi Splunkers, I'm working on a React app for Splunk based on the documentation provided: https://splunkui.splunk.com/Packages/create/CreatingSplunkApps. I need to hide (or make configurable) the Splunk header bar when the React app is in use. I've come across several community posts suggesting that overriding the default view files might be necessary, but I'm unsure how to configure this within a React app. I'd appreciate any guidance on how to achieve this.
I installed the Cisco network add-on, but only the main index works and I cannot store logs in another index.
index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| table user, app, _time
| eval week_num = "Week Number" . strftime(_time, "%U")
| stats count by user app week_num
| chart count by app week_num
| sort app 0
According to Windows Export Certificate - Splunk Security Content, it uses a macro in the first query:

`certificateservices_lifecycle` EventCode=1007
| xmlkv UserData_Xml
| stats count min(_time) as firstTime max(_time) as lastTime by Computer, SubjectName, UserData_Xml
| rename Computer as dest
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `windows_export_certificate_filter`

And the `certificateservices_lifecycle` macro expands to:

(source=XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-System/Operational OR source=XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-User/Operational)

I only know that in SPL we can't get results if the query starts with source, so I added index=* before `certificateservices_lifecycle`, but unfortunately I don't get any results. Then I used metasearch to check whether the data is available at all:

First query: | metasearch index=* source IN ("XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-System/Operational")
Second query: | metasearch index=* source IN ("XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-User/Operational")

Both return 0 results. The question is: if I want to get data from source="XmlWinEventLog:Microsoft-Windows-CertificateServicesClient-Lifecycle-User/Operational", do I need to set it up on the endpoints, or can it be solved in Splunk?

Thanks
Hi @jaracan, I cannot provide it because it depends on the target platform, which I don't know: you have to create a script that calls the platform using its API, passing the correlation search data (or the results of a search on the notable index) to the script. Check if there's an app that already permits this export; for some platforms (e.g. Microsoft Defender) one has already been developed. Ciao. Giuseppe
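A minimal sketch of the mapping part of such a script, in Python since Splunk alert-action scripts are typically Python (the target schema field names on the right are made up; the actual HTTP call depends entirely on the platform's API):

```python
import json

def build_payload(notable: dict) -> dict:
    """Map fields of a Splunk ES notable to a hypothetical
    third-party incident schema (the keys "title", "severity",
    "source" and "raw" are assumptions, not a real API)."""
    return {
        "title": notable.get("rule_title", "Splunk notable"),
        "severity": notable.get("urgency", "medium"),
        "source": "splunk_es",
        # Keep the full notable around for troubleshooting
        "raw": json.dumps(notable),
    }

# Example: payload for one notable pulled from index=notable
payload = build_payload(
    {"rule_title": "Excessive failed logins", "urgency": "high"}
)
```

The script would then POST this payload to the target platform's endpoint with whatever authentication it requires.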