All Posts

I think there's a missing point here that I've adapted in my current solution. The primary key here is not only the job name but also the PID. The job could be executed multiple times during the day or the chosen time span, so the search key is talend_job and talend_pid in order to get a unique duration per execution.
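A minimal sketch of that idea, assuming talend_job and talend_pid are already extracted on the events (the output field names below are assumptions):

<your_search>
| stats earliest(_time) AS start latest(_time) AS end BY talend_job talend_pid
| eval duration=end-start
| table talend_job talend_pid start duration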
Hi @NightShark , check whether the knowledge objects that aren't visible are shared or private: admins can see everything, while other roles cannot see another user's private knowledge objects. Ciao. Giuseppe
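If it helps, here is a sketch of a REST search to list objects with their owner and sharing level; the endpoint is real, but the choice of saved searches as the object type is just an example:

| rest /servicesNS/-/-/saved/searches
| table title eai:acl.app eai:acl.owner eai:acl.sharing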
Hi @Bracha , you have to use the first solution: | inputlookup importers WHERE NOT [ | inputlookup importers_csv | fields interfaceName ] Pay attention that the field names are the same (interfaceName). Ciao. Giuseppe
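If the field names ever differ between the two lookups, a common adjustment (a sketch; the field name other_name is hypothetical) is to rename inside the subsearch so both sides use interfaceName:

| inputlookup importers
| search NOT [ | inputlookup importers_csv | rename other_name AS interfaceName | fields interfaceName ]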
Do you mean something like this? | makeresults | eval _raw="mule_330299_prod_App01_Clt1,91826354-d521-4a01-999f-35953d99b829,870a76ea-8033-443c-a312-834363u3d,2023-12-23T14:22:43.025Z mule_29999_dev_WebApp01_clt1,152g382226vi-44e6-9721-aa7c1ea1ec1b,26228e-28sgsbx-943b-58b20a5c74c6,2024-01-06T13:29:15.762867Z" | multikv noheader=t | rename Column_1 as nname | rename Column_2 as ID | rename Column_3 as app | rename Column_4 as time
Hi @marco_carolo , try to adapt something like this: <your_search> | rex "[^\[]*\[(?<extracted_pid>[^\]]*)\]\s*\[(?<extracted_job_name>[^\]]*)\]\s*\[(?<extracted_index>[^\]]+)\]\s*(?<msg>.*)" | stats earliest(_time) AS _time latest(_time) AS latest BY extracted_job_name | eval duration=latest-_time | timechart values(duration) AS duration BY extracted_job_name Ciao. Giuseppe
Hi @nehamvinchankar , please try the following regex: | rex "^(?<nname>[^,]+),(?<Id>[^,]+),(?<app>[^,]+),(?<Time>.*)" which you can test at https://regex101.com/r/Qd83YT/1 Otherwise, you could use the guided field extraction with separators, as sketched below. Ciao. Giuseppe
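As a sketch of the separator-based alternative, a search-time delimiter extraction could look like this (the sourcetype and stanza names are assumptions):

# props.conf
[your:sourcetype]
REPORT-mule_fields = mule_csv_fields

# transforms.conf
[mule_csv_fields]
DELIMS = ","
FIELDS = "nname","Id","app","Time"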
How do I extract fields from the event below? I want nname, ID, app and Time; here nname is mule_330299_prod_App01_Clt1, ID=91826354-d521-4a01-999f-35953d99b829, app=870a76ea-8033-443c-a312-834363u3d, Time=2023-12-23T14:22:43.025Z. CSV content:
nname,Id,app,Time
mule_330299_prod_App01_Clt1,91826354-d521-4a01-999f-35953d99b829,870a76ea-8033-443c-a312-834363u3d,2023-12-23T14:22:43.025Z
mule_29999_dev_WebApp01_clt1,152g382226vi-44e6-9721-aa7c1ea1ec1b,26228e-28sgsbx-943b-58b20a5c74c6,2024-01-06T13:29:15.762867Z
We have multiple lines like this in one event.
Hello. I am trying to route some events to a different index based on a field in the events. The events are JSON formatted. This is an example:
{
  "topic": "audits",
  "events": [
    {
      "admin_name": "john doe john.doe@juniper.net",
      "device_id": "00000000-0000-0000-1000-5c5b35xxxxxx",
      "id": "8e00dd48-b918-4d9b-xxxx-xxxxxxxxxxxx",
      "message": "Update Device \"Reception\"",
      "org_id": "2818e386-8dec-2562-xxxx-xxxxxxxxxxx",
      "site_id": "4ac1dcf4-9d8b-7211-xxxx-xxxxxxxxxxxx",
      "src_ip": "xx.xx.xx.xx",
      "timestamp": 1549047906.201053
    }
  ]
}
We receive the events on a heavy forwarder and forward them to an indexer. We want to send the events with the topic "audits" to a different index than the default one (imp_low). I have tried these settings on the heavy forwarder:
props.conf:
[_json-Mist_Juniper]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
pulldown_type = 1
TRANSFORMS-force_index = setindexHIGH
transforms.conf:
[setindexHIGH]
SOURCE_KEY = topic
REGEX = (audits)
DEST_KEY = _MetaData:Index
FORMAT = imp_high
But it is not working; all the events are going to the "imp_low" index. Thanks
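A commonly tried variant (a sketch, not a confirmed fix for this exact setup) is to let the transform match the topic in the raw JSON instead of using a SOURCE_KEY, since index-time REGEX transforms run against _raw by default:

# transforms.conf
[setindexHIGH]
REGEX = "topic"\s*:\s*"audits"
DEST_KEY = _MetaData:Index
FORMAT = imp_high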
Sorry, I didn't get it. One of the requests I had was: I want to see the delta across multiple executions of a particular job, in order to find out if the job is getting slower or faster. I thought of doing a timechart to find the execution delta time for each job, but I'm missing the correct way to group the timechart by the duration of each execution... @gcusello 
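One possible sketch of that comparison, assuming a duration is computed per execution keyed by talend_job and talend_pid as discussed above (the span and intermediate field names are assumptions):

<your_search>
| stats earliest(_time) AS start latest(_time) AS end BY talend_job talend_pid
| eval duration=end-start, _time=start
| sort talend_job _time
| streamstats current=f last(duration) AS prev_duration BY talend_job
| eval delta=duration-prev_duration
| timechart span=1d avg(delta) BY talend_job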
Hi @gcusello , thank you for your quick response. I have two lists: 1. importers - includes many importers; 2. importers_csv - contains some of the importers from the first list. I want a list which will contain the importers that are not in the CSV file. How do I do it?
Hi @marco_carolo , the easiest way is to follow the GUI for stacked chart creation. Ciao. Giuseppe
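For reference, a sketch of the Simple XML options the GUI sets when you choose a stacked column chart (the surrounding panel XML is omitted):

<option name="charting.chart">column</option>
<option name="charting.chart.stackMode">stacked</option>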
Hi @Bracha , let me understand: in the importers.csv file you have a list of interfaces, and you want to filter your results using that lookup, or you want to check whether they are present in the index? If you want to filter your results using the lookup, you can use a subsearch, paying attention that the field names in the main and sub search are the same (in your case interfaceName): index="------------" code=* [ |inputlookup importers.csv | fields interfaceName ] | stats values(interfaceName) as importers_csv  If instead you want to know whether there are interfaceNames in the lookup not present in the results of the main search, you have to run something like this: index="------------" code=* | stats count BY interfaceName | append [ | inputlookup importers.csv | eval count=0 | fields interfaceName count ] | stats sum(count) AS total BY interfaceName | eval status=if(total=0,"Not present","present") | table interfaceName status Ciao. Giuseppe
Hi, I have a dashboard that displays a CSV. I want to add a list of records that are not in the CSV, but the list I'm adding includes the records that are in the CSV. I want to create a list that does not include the records in the CSV. This search gets me the whole list:   index="------" interface="--" | stats values(interface) as importers   This search brings me the list from the CSV:   index="------------" code=* | search [ |inputlookup importers.csv | lookup importers.csv interfaceName OUTPUTNEW system environment timerange | stats values(interfaceName) as importers_csv ]   I want a search that brings me the list without the records in the CSV. Thanks  
index=abc | stats count by host | inputlookup append=t yourlookup | fillnull count | stats sum(count) as count by host | where count=0
I tried adding the entry to the processors, but no luck, nothing changed. Attaching the agent_config.yaml and the log details. Kindly check and help.
Thank you very much for your response, I will try it tomorrow.
I have a scenario where I want to compare events from index=abc host=_inventory with data from a lookup file that includes fields such as host, location, os, etc. The end goal is to point out servers that aren't being reported by Splunk. The structure of my Splunk events includes fields like location, tier, servers, and splunk_server. In the lookup file, I have fields like host, location, os, and more. I combined the two data sets; what is the search condition to find out which servers are being monitored? @ITWhisperer @PickleRick 
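A sketch of one way to surface hosts that exist in the inventory lookup but have no events, in the spirit of the answer above (the lookup name inventory.csv and the choice of enrichment fields are assumptions):

index=abc
| stats count BY host
| inputlookup append=t inventory.csv
| fillnull value=0 count
| stats sum(count) AS count values(location) AS location values(os) AS os BY host
| where count=0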
I need to drop EventCode 4634 and 4624 events with Logon_Type 3. How can I use the nullQueue option and write the correct REGEX in transforms.conf?
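A minimal sketch of that filter, assuming the classic (non-XML) WinEventLog rendering where the raw event contains "EventCode=..." and a "Logon Type:" line; the sourcetype stanza and regex would need to be adapted to the actual event format:

# props.conf
[WinEventLog:Security]
TRANSFORMS-drop_logon_type3 = drop_logon_type3

# transforms.conf
[drop_logon_type3]
REGEX = (?ms)EventCode=(4624|4634).*Logon Type:\s+3\b
DEST_KEY = queue
FORMAT = nullQueue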
Hello, it looks good, but once I click on one of the graphs it shows no results. Also, I want to visualize by Level as well.
While this is all fine and dandy (seriously, hats off; I hate PowerShell myself), you should _not_ touch $SPLUNK_HOME/bin. Overwriting Splunk-supplied files where there is no mechanism specifically provided for it (by config layering; remember that you should _not_ touch settings in default directories) is a very bad idea - it's not maintainable. Any UF upgrade will overwrite the changes you made, so it's not the proper way to introduce such changes. While modular inputs as such are not officially supported on the UF (most probably because modular inputs are typically associated with Python, which is not distributed with the UF installation), you might try to get away with defining your input separately in an app. But I won't guarantee it will work.
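Purely as an illustration of "defining your input separately in an app" (all names below are hypothetical, and whether the PowerShell input actually runs on your UF is not guaranteed), such an app might look like this:

# $SPLUNK_HOME/etc/apps/my_ps_input/default/inputs.conf
[powershell://my_collect]
script = . "$SplunkHome\etc\apps\my_ps_input\bin\collect.ps1"
schedule = */5 * * * *
sourcetype = my:ps:data
index = main
disabled = 0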