All Posts

Hi Giuseppe, I am not talking about XML tags, but HTML tags. HTML tags are used to format the text and do not give any information about fields. Text between <b> and </b> will be formatted in bold, and <br> is a line break. I would like to remove these unnecessary characters from my inputs.

Ciao!
Tommaso
You could try to do it using the REST API, but I'd say it's not the best idea. If you enable too many searches, you're going to overload your servers. So it's best to enable only the ones you need, not all of them.
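If you do go the REST route, here is a minimal sketch; the hostname, credentials, and search name are placeholders, and you would loop this over the searches you actually want enabled:

# Enable one saved search by setting disabled=0 on its saved/searches entry
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches/My%20Alert -d disabled=0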
Hi there, sorry, I should also have added that I'm searching in Smart Mode; the results, though, are the same in Verbose Mode. I hadn't thought of doing a stats on the fields, but I can confirm that count(action) is still 0 and count(change_type) has a positive value.
I'm also experiencing the same issue. Does anyone have a solution to this?
Don't bother with the Interesting Fields sidebar. It contains only _extracted fields_ (so if you're searching in Fast Mode you'll get just the basic metadata fields or the ones explicitly used) which are present in at least 20% of the results, so this is not the way to verify whether a field is properly extracted. Also remember that when using Fast Mode, only the fields explicitly used are extracted. BTW, try your search with:

| stats count count(action) count(change_type)
Hi,

I appreciate that there are numerous questions on here for similar problems but, after reading quite a few of them, nothing seems to quite fit my scenario/issue.

I am trying to extract a field from an event and call it 'action'. The entry in props.conf looks like:

EXTRACT-pam_action = (Action\: (?P<action>\[[^:\]]+]) )

I know that the extraction is working, as there is a field alias later in props.conf:

FIELDALIAS-aob_gen_syslog_alias_32 = action AS change_type

When I run a basic generating search on the index & sourcetype, the field 'action' does not appear in the Interesting Fields but the 'change_type' alias does appear! The regex is fine, as I can create the 'action' field OK if I add the rex to the search. I have also added the exact same regex to the props.conf file but called the field 'action1', and that field is displayed OK.

Another test I tried is to create a field alias for the action1 field name called 'action':

FIELDALIAS-aob_gen_syslog_alias_30 = action1 AS action
FIELDALIAS-aob_gen_syslog_alias_32 = action1 AS change_type

'change_type' is visible but, again, 'action' is not. Finally, my search "index=my_index action=*" produces 0 results whereas "index=my_index change_type=*" produces an accurate output.

I have looked in the props and transforms configs across my search head and can't see anything that might be 'removing' my field extraction, but I guess my question is: how can I debug the creation (or not) of a field name? I have a deep suspicion that it is something to do with one of the Windows TA apps that we have installed, but am struggling to locate the offending configuration.

Many thanks for any help.
Mark
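One way to trace where a field definition is coming from (or being overridden) is btool on the search head; the sourcetype name below is a placeholder for your own:

splunk btool props list your_sourcetype --debug | grep -i action
splunk btool transforms list --debug | grep -i action

The --debug flag prints the file each setting comes from, which helps spot a conflicting EXTRACT or FIELDALIAS shipped by another TA.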
Hi,

Is there a way of bulk enabling alerts in Splunk Enterprise?

Thanks,
Joe
@splunkreal: Thanks, I tried the command but no luck.
@isoutamo: Thanks for the links you provided. I see that my old DS lists all clients contacting it; it is running 9.0.2, whereas the new one I am trying to set up is running 9.2.1. I see from the links that this is because of the version difference. However, I tried the steps provided in the link and still no luck.

I should also mention that I am configuring this DS to act as a log forwarder as well, so both setups are making use of the same Splunk service. Does this have any effect on the proper working of the Deployment Server? Apart from the steps in the above link, do you have any other suggestions?

Thanks in advance,
PNV
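To confirm which clients are actually phoning home to the new DS, you can ask the deployment server directly; this assumes you run it on the DS host with admin credentials:

splunk list deploy-clients

An empty list usually points at deploymentclient.conf on the forwarders (the targetUri setting) or at reachability of the management port 8089, rather than at the DS itself.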
You're pretty much there with the first method using the eval. It's a calculated field you need, not a field extraction or field transformation. Settings > Fields > Calculated Fields > Create New. Then set your scope for index/sourcetype:

Name: MacAddr
Eval Expression: replace(CL_MacAddr,"-",":")
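The same calculated field can live in props.conf if you prefer config files over the UI; the sourcetype stanza name here is a placeholder:

[your_sourcetype]
EVAL-MacAddr = replace(CL_MacAddr, "-", ":")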
Your sample event doesn't appear to have a comma terminating the user id so perhaps use this rex to extract it? | rex "User-(?<userid>[^, ]*)"
Hi @tommasoscarpa1,

if you remove the XML tags, how can you recognize the fields? Maybe you could use INDEXED_EXTRACTIONS = XML in your sourcetype definition to have all the fields extracted.

Ciao.
Giuseppe
Try this; not sure if it will work, but worth a try. See if the variable is pointing to this file, which contains the SSL config, libraries, etc.:

echo %OPENSSL_CONF%

Set it as below and try again:

set OPENSSL_CONF=c:\Program Files\Splunk\openssl.cnf
Hi,

I would like to remove every occurrence of a specific pattern from my _raw events. Specifically, in this case I am looking to delete these HTML tags: <b>, </b>, <br>

For example, I have this raw event:

<b>This</b> is an <b>example</b><br>of raw<br>event

And I would like to transform it like this:

This is an exampleof rawevent

I tried to create this transforms.conf:

[remove_html_tags]
REGEX = <\/?br?>
FORMAT =
DEST_KEY = _raw

And this props.conf:

[_sourcetype_]
TRANSFORMS-html_tags = remove_html_tags

But it doesn't work.

I also thought I could change the transforms.conf like this:

[remove_html_tags]
REGEX = (.*)<\/?br?>(.*)
FORMAT = $1$2
DEST_KEY = _raw

But it will stop after just one substitution, and the REPEAT_MATCH property is not suitable because the doc says: "NOTE: This setting is only valid for index-time field extractions. This setting is ignored if DEST_KEY is _raw." And I must set DEST_KEY = _raw.

Can you help me? Thank you in advance.
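For whole-event rewrites like this, SEDCMD in props.conf is often simpler than a transform, since it supports sed-style global replacement; a minimal sketch, assuming the same sourcetype stanza:

[_sourcetype_]
SEDCMD-remove_html_tags = s/<\/?br?>//g

SEDCMD runs at index time on _raw, and the trailing g makes it replace every occurrence rather than stopping at the first match.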
``` bucket time by day ```
| bin _time span=1d
``` find minimum for each host for each day ```
| stats min(free) AS min_free BY _time hostname
``` find lowest minimum for each day ```
| eventstats min(min_free) as lowest by _time
``` find host which has that minimum for each day ```
| eval min_host=if(min_free=lowest,hostname,null())
``` find the latest host which has the daily minimum ```
| eventstats latest(min_host) as latest_lowest
``` just keep that host ```
| where hostname==latest_lowest
``` switch to "chart" format ```
| xyseries _time hostname min_free
Hello splunkers! Has anyone had experience with getting data into Splunk from PAM (Privileged Access Management) systems? I want to integrate Splunk with Fudo PAM. Getting logs from Fudo to Splunk is not a problem at all; it's easily done over syslog. However, I don't know how to parse these logs. The syslog sourcetype doesn't properly parse the events; it misses a lot of useful information such as users, processes, actions taken, and accounts, basically almost everything except the IP of the node and the timestamp of the event.

Does anyone know if there is a good add-on for parsing logs from Fudo PAM? Or any other good way to parse its logs?

Thanks for taking the time to read and reply to my post.
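In the absence of a dedicated TA, a custom sourcetype is the usual fallback; the stanza below is a hypothetical sketch that assumes Fudo emits key=value pairs in the syslog payload (check a sample event first):

[fudo:pam]
KV_MODE = auto
SHOULD_LINEMERGE = false

If the payload is positional rather than key=value, you would need EXTRACT- regexes tailored to the actual format instead.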
Hi, anyone else with a suggestion? Thanks again, best regards Alex
Hi, I'm curious: can Splunk automatically turn off the screen or start a screen saver when you log out of the Splunk console or when your session expires? Is it possible to run this functionality without Phantom, e.g. using a bash script or PowerShell?
package org.example;

import com.splunk.HttpService;
import com.splunk.SSLSecurityProtocol;
import com.splunk.Service;
import com.splunk.ServiceArgs;

public class ActualSplunk {
    public static void main(String[] args) {
        // Create ServiceArgs object with connection parameters
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setUsername("providedvalidusername");
        loginArgs.setPassword("providedvalidpassword");
        loginArgs.setHost("hostname");
        loginArgs.setPort(8089);
        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);

        // Connect to Splunk
        Service service = Service.connect(loginArgs);

        // Check if connection is successful
        if (service != null) {
            System.out.println("Connected to Splunk!");
            // Perform operations with the 'service' object as needed
        } else {
            System.out.println("Failed to connect to Splunk.");
        }

        // Close the connection when done
        if (service != null) {
            service.logout(); // Logout from the service
            // service.close(); // Close the service connection
        }
    }
}

When I run the above code to connect to my local Splunk, it works fine with my local Splunk credentials. But when I tried the same code in my VM with the actual Splunk Cloud host, username, and password to get the logs, it throws an exception: "java.lang.RuntimeException: An established connection was aborted by your host machine".
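Before debugging the Java side, it's worth confirming that the management port is even reachable from the VM; a quick check against the REST API (placeholder host and credentials, and note that Splunk Cloud typically restricts port 8089 access unless it has been opened for your stack):

curl -k -u youruser:yourpassword https://yourstack.splunkcloud.com:8089/services/server/info

If this also fails, the problem is network/firewall access to the management port rather than the SDK code.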
Hi @dungnq,

is it mandatory for your ingestion? This is the reason for the double ingestion: the logs arrive from different files. Without crcSalt = <SOURCE>, Splunk doesn't index a log twice.

Ciao.
Giuseppe
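For reference, crcSalt is set per monitor stanza in inputs.conf; the path below is a placeholder:

[monitor:///var/log/myapp/*.log]
crcSalt = <SOURCE>

With <SOURCE> in the salt, files with identical starting content but different paths are treated as distinct, which is exactly what causes re-indexing when logs rotate or are copied.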