All Posts

Don't know what I was doing wrong yesterday; it must have been the end-of-day eyes. I did figure it out, but thanks for the answer! I accepted it and gave the karma. Have a great day!
If I am understanding your ask correctly, I think settings like this would do it.
Hello, I'm working on updating a dashboard panel to handle numerical values. I want the panel to show red anytime the count is not zero. This always works when the number is positive, but occasionally we have a negative number such as "-3". Is there a way to make negative values red as well? Basically, anything that isn't zero should be red. Thanks for the help!
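One hedged way to sketch this in Simple XML (assuming a table panel and that the column is literally named count — adjust to your panel): use a color expression that tests value != 0 instead of a range, so both positive and negative counts turn red.

```xml
<table>
  <search>...</search>
  <!-- color the "count" cell red whenever the value is non-zero, green when it is zero -->
  <format type="color" field="count">
    <colorPalette type="expression">if(value != 0, "#DC4E41", "#53A051")</colorPalette>
  </format>
</table>
```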
Hi at all, I need to create some Correlation Searches on Splunk audit events, but I didn't find any documentation about the events to search, e.g. I don't know how to identify creation of a new role... See more...
Hi at all, I need to create some Correlation Searches on Splunk audit events, but I didn't find any documentation about the events to search, e.g. I don't know how to identify creation of a new role or updates to an existing one, I found only action=edit_roles, but I can only know the associted user and not the changed role. Can anyone idicate an url to find Splunk audit information? Ciao. Giuseppe
max_match=0 — that's what I didn't include; I completely spaced that option. Thanks as always!
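For anyone landing here later: max_match=0 tells rex to keep extracting beyond the first match and return all matches as a multivalue field. A minimal sketch (the ip field name and pattern are illustrative, not from the thread):

```
... | rex max_match=0 field=_raw "(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
    | table ip
```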
I like this answer; unfortunately I am going to have to update the props for this, since as it is, the data is not being recognized as a valid XML object and therefore it doesn't work. Thanks for the assistance, I greatly appreciate your help!
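As a hedged sketch of the kind of props change being described (the sourcetype name my_xml_sourcetype is made up for the example): telling Splunk to parse the events as XML at search time.

```
# props.conf
[my_xml_sourcetype]
KV_MODE = xml
```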
If the data passes through an HF, then parsing (not pre-parsing) is done by the HF. Adding index-time extractions to the Cloud indexers will do nothing, so new extractions must be added to the HF. If the data does not pass through an HF, then index-time field extraction is done by the indexers.
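As a hedged illustration of where such an index-time extraction would live (the stanza, regex, and field names are invented for the example), the props.conf/transforms.conf pair sits on the HF:

```
# props.conf (on the HF)
[my_sourcetype]
TRANSFORMS-extract_session = extract_session_id

# transforms.conf (on the HF)
[extract_session_id]
REGEX = session=(\w+)
FORMAT = session_id::$1
WRITE_META = true
```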
Hi Splunkers, today I have a question related not to a "technical how": my doubt is about a "best practice".

Environment: a Splunk Cloud combo instance (Core + Enterprise Security) with some Heavy Forwarders.

Task: perform some field extractions.

Details: the add-ons for parsing are already installed and configured, so we don't have to create new ones; we simply need to enrich/expand the existing ones. Those add-ons are installed on both the cloud components and the HFs.

The point is this: since we already have add-ons for parsing, we could simply edit their props.conf and transforms.conf files; of course, since the add-ons are installed on both the cloud components and the HFs, we have to perform the changes on all of them. For example, editing the add-on only on the cloud components with the GUI Field Extraction implies that the new fields will be parsed at index time on them, because they will not be pre-parsed by the HFs. Plus, we know that we should create a copy of those files in the local folder, to avoid editing the default ones, etcetera.

But, at the same time, for our SOC we created a custom app used as a container to store all the customizations performed by/for them, following one of Splunk's best practices. We store reports, alerts, and so on there: by "store there" I mean that, when we create something and choose an app context, we set our custom SOC one. With this choice, we could simply perform a field extraction with the GUI and assign our custom app as the app context; of course, with this technique, the custom regexes are saved only on the cloud components and not on the HFs.

So, my question is: when we speak about field extraction, if we consider that the pre-parsing performed by the HFs is desired but NOT mandatory, what is the best choice? Maintain all field extractions in the add-ons, or split them between the OOT add-ons and our custom SOC app?
Usually you configure inputs in an app of your own. Inside this app there is an inputs.conf where you define the needed attributes like sourcetype, source, and the index to send events to. Have you already read https://docs.splunk.com/Documentation/Splunk/latest/Data/Getstartedwithgettingdatain ? If you are regularly indexing and adding new data sources, you should attend the System Admin and also the Data Admin courses to fully understand how this should be managed with Splunk.
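A hedged sketch of such an inputs.conf stanza (the path, index, and sourcetype names here are illustrative placeholders):

```
# inputs.conf in your own app
[monitor:///var/log/myapp/app.log]
index = myapp_idx
sourcetype = myapp:log
disabled = false
```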
Had the same error message with an ADFS server with encryption, and in my case this worked; I don't know if it is the correct approach. I added the encrypted private key to the signAuthnRequest certificate, which is this authentication.conf parameter:

[saml]
clientCert = cert_and_encrypted_private_key.pem

The password of the encrypted private key was configured in the sslPassword parameter of the same stanza. Now this parameter could be set to true:

signAuthnRequest = true

and I reloaded authentication to let the sslPassword be hashed. Worked for me.
Hi @usej - I'm a Community Moderator in the Splunk Community. This question was posted 5 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow the guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi team, in the attached screenshot a user with read-only access is able to use the delete option and also edit the query. Our requirement is that the user should not see the delete option (it should be hidden). Please help.
Any updates on this feature implementation?
Hello @VatsalJagani, as you said, there is no need to create the index on all heavy forwarders. But let me ask something: when I receive logs from the same new log source, how do I differentiate between the different types of logs coming from that one source?
Hello everyone, I am still relatively new to Splunk. I would like to add an additionalTooltipField to my maps visualization, so that when you hover over a marker point, more data details about the marker appear. I have formulated the following query:

source="NeueIP.csv" host="IP" sourcetype="csv"
| rename Breitengrad as latitude, L__ngengrad as longitude, Stadt as Stadt, Kurzbeschreibung as Beschreibung
| eval CPU_Auslastung = replace(CPU_Auslastung, "%","")
| eval CPU_Auslastung = tonumber(CPU_Auslastung)
| eval CPU_Color = case(CPU_Auslastung > 80.0, "#de1d20", CPU_Auslastung > 50.0, "#54afda", true(), "#4ade1d")
| table Stadt, latitude, longitude, Kurzbeschreibung, Langbeschreibung, CPU_Auslastung, CPU_Color
| eval _time = now()

And I tried to adjust some things in the source code so that the additionalTooltipField appears. This is my latest attempt:

"visualizations": {
  "viz_map_1": {
    "type": "splunk.map",
    "options": {
      "center": [50.35, 17.36],
      "zoom": 4,
      "layers": [
        {
          "type": "marker",
          "latitude": "> primary | seriesByName('latitude')",
          "longitude": "> primary | seriesByName('longitude')",
          "dataColors": "> primary | seriesByName(\"CPU_Auslastung\") | rangeValue(config)",
          "additionalTooltipFields": "> primary | seriesByName(\"Stadt\")",
          "markerOptions": {
            "additionalTooltipFields": ["Stadt", "Kurzbeschreibung"]
          },
          "hoverMarkerPanel": {
            "enabled": true,
            "fields": ["Stadt", "Kurzbeschreibung"]
          }
        }
      ]
    },

My sample data is as follows:

Stadt, Breitengrad, Längengrad, Kurzbeschreibung, Langbeschreibung, CPU_Auslastung
Berlin, 52.52, 13.405, BE, Hauptstadt Deutschlands, 45%
London, 51.5074, -0.1278, LDN, Hauptstadt des Vereinigten Königreichs, 65%
Paris, 48.8566, 2.3522, PAR, Hauptstadt Frankreichs, 78%

Is my plan possible? Thanks for your help in advance!
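For anyone reading along, a hedged, unverified sketch: as far as I know, in Dashboard Studio the marker layer's additionalTooltipFields option takes an array of data selections at the layer level (not a single string, and not inside markerOptions). Something along these lines, reusing the field names from the post above:

```
"layers": [
  {
    "type": "marker",
    "latitude": "> primary | seriesByName('latitude')",
    "longitude": "> primary | seriesByName('longitude')",
    "additionalTooltipFields": [
      "> primary | seriesByName('Stadt')",
      "> primary | seriesByName('Kurzbeschreibung')"
    ]
  }
]
```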
I have the same problem. The webhook works for a couple of days and then fails. Did the cron job to restart the inputs work successfully as a workaround?
Thank you for your help. Can you help with how to create my own lookup from the indexed IT? Thanks.
Hello. Thank you for the suggestion. I will look into it. I did not know I could embed reports.
(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]")
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(F19) as F19, values(FCO) as FCO by F62_2
| where F19!=036 AND FCO=01
| append [search eventtype=axs_event_txn_visa_rsp_formatting | rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"]
| stats values(F19) as F19, values(FCO) as FCO, values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp by F62_2
What I really want is this.

Query 1 and its output:

(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]")
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(F19) as F19, values(FCO) as FCO by F62_2
| where F19!=036 AND FCO=01

F62_2           F19  FCO
384011068172061 840  1
584011056069894 826  1

Query 2:

eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp by F62_2

What I really want is to take the output of query 1 and pass it as an input to query 2; the common field between the two queries is F62_2. If I run them separately the outputs are different, so basically the two queries should be combined, and when run it should take F62_2 from query 1 and produce values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp.
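One hedged way to sketch this (unverified against the actual data; event type names and rex patterns are copied from the queries above): search both event types in one pass and let stats merge the rows on the common field F62_2, then filter afterwards, since F19/FCO come only from the request events and txn_uid/txn_timestamp only from the response events.

```
((eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]") OR eventtype=axs_event_txn_visa_rsp_formatting)
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| stats values(F19) as F19, values(FCO) as FCO, values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp by F62_2
| where F19!=036 AND FCO=01
```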