All Posts

@isoutamo : Thanks for the links you provided. I see that my old DS lists all contacting clients; it is running 9.0.2, whereas the new one I am trying to set up is running 9.2.1. I see from the links that this is because of the version difference. However, I tried the steps provided in the link and still had no luck. I should also mention that I am configuring this DS to act as a log forwarder as well, so both of these setups make use of the same Splunk service. Does this have any effect on the proper working of the Deployment Server? Do you have any comments? Apart from the steps in the above link, do you have any other suggestions? Thanks in advance, Regards, PNV
You're pretty much there with the first method using the eval. It's a calculated field you need, not a field extraction or field transformation. Settings > Fields > Calculated Fields > Create New. Then set your scope for index/sourcetype:
Name: MacAddr
Eval Expression: replace(CL_MacAddr, "-", ":")
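The same calculated field can also be defined directly in props.conf on the search head; a minimal sketch, assuming a placeholder sourcetype name of my_sourcetype:
[my_sourcetype]
# calculated field: copy CL_MacAddr, replacing hyphens with colons
EVAL-MacAddr = replace(CL_MacAddr, "-", ":")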
Your sample event doesn't appear to have a comma terminating the user id, so perhaps use this rex to extract it: | rex "User-(?<userid>[^, ]*)"
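For reference, a minimal search showing that rex in context; the index and sourcetype names are placeholders, not from the original post:
index=your_index sourcetype=your_sourcetype
| rex "User-(?<userid>[^, ]*)"
| table _time userid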
Hi @tommasoscarpa1, if you remove the XML tags, how can you recognize the fields? Maybe you could use INDEXED_EXTRACTIONS = XML in your sourcetype definition to have all the fields extracted. Ciao. Giuseppe
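A related search-time option is automatic XML field extraction via KV_MODE; a minimal props.conf sketch, assuming a placeholder sourcetype name of my_xml_sourcetype:
[my_xml_sourcetype]
# extract fields from the XML elements automatically at search time
KV_MODE = xml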
Try this, not sure if it will work, but worth a try. See if the variable is pointing to this file, which contains the SSL config / libraries etc.:
echo %OPENSSL_CONF%
Set it as below and try again:
set OPENSSL_CONF=c:\Program Files\Splunk\openssl.cnf
Hi,
I would like to remove every occurrence of a specific pattern from my _raw events. Specifically, in this case I am looking to delete these HTML tags: <b>, </b>, <br>
For example, I have this raw event:
<b>This</b> is an <b>example</b><br>of raw<br>event
and I would like to transform it like this:
This is an exampleof rawevent
I tried to create this transforms.conf:
[remove_html_tags]
REGEX = <\/?br?>
FORMAT =
DEST_KEY = _raw
and this props.conf:
[_sourcetype_]
TRANSFORMS-html_tags = remove_html_tags
but it doesn't work.
I also thought I could change the transforms.conf like this:
[remove_html_tags]
REGEX = (.*)<\/?br?>(.*)
FORMAT = $1$2
DEST_KEY = _raw
but it will stop after just one substitution, and the REPEAT_MATCH property is not suitable because the doc says:
NOTE: This setting is only valid for index-time field extractions. This setting is ignored if DEST_KEY is _raw.
and I must set DEST_KEY = _raw.
Can you help me? Thank you in advance.
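One commonly used approach for this kind of index-time rewriting of _raw is SEDCMD in props.conf; a minimal sketch, assuming the same [_sourcetype_] stanza name used above (verify the regex against your real events before relying on it):
[_sourcetype_]
# strip every <b>, </b>, <br> and </br> tag from _raw at index time
SEDCMD-remove_html_tags = s/<\/?br?>//g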
``` bucket time by day ```
| bin _time span=1d
``` find minimum for each host for each day ```
| stats min(free) AS min_free BY _time hostname
``` find lowest minimum for each day ```
| eventstats min(min_free) as lowest by _time
``` find host which has that minimum for each day ```
| eval min_host=if(min_free=lowest,hostname,null())
``` find the latest host which has the daily minimum ```
| eventstats latest(min_host) as latest_lowest
``` just keep that host ```
| where hostname==latest_lowest
``` switch to "chart" format ```
| xyseries _time hostname min_free
Hello splunkers! Has anyone had experience with getting data into Splunk from PAM (Privileged Access Management) systems? I want to integrate Splunk with Fudo PAM. Getting the logs from Fudo to Splunk is not a problem at all; it's easily done over syslog. However, I don't know how to parse these logs. The syslog sourcetype doesn't properly parse the events; it misses a lot of useful information such as users, processes, actions taken, accounts, basically almost everything except the IP of the node and the timestamp of the event. Does anyone know if there is a good add-on for parsing logs from Fudo PAM? Or any other good way to parse its logs? Thanks for taking the time to read and reply to my post.
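As a hypothetical starting point only (the sourcetype name fudo:pam and the assumption that the events carry key=value pairs are mine, not from Fudo documentation), a custom sourcetype with automatic key/value extraction might look like this in props.conf:
[fudo:pam]
# try automatic key=value extraction at search time
KV_MODE = auto
# if the fields are not key=value, explicit extractions can be tried instead, e.g.:
# EXTRACT-user = user=(?<user>\S+)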
Hi, anyone else with a suggestion? Thanks again, best regards Alex
Hi, I'm curious: can Splunk automatically turn off the screen or start a screen saver when you log out of the Splunk console or when your session expires? Is it possible to run this functionality without Phantom, e.g. using a bash script or PowerShell?
package org.example;

import com.splunk.HttpService;
import com.splunk.SSLSecurityProtocol;
import com.splunk.Service;
import com.splunk.ServiceArgs;

public class ActualSplunk {
    public static void main(String[] args) {
        // Create ServiceArgs object with connection parameters
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setUsername("providedvalidusername");
        loginArgs.setPassword("providedvalidpassword");
        loginArgs.setHost("hostname");
        loginArgs.setPort(8089);
        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);

        // Connect to Splunk
        Service service = Service.connect(loginArgs);

        // Check if connection is successful
        if (service != null) {
            System.out.println("Connected to Splunk!");
            // Perform operations with the 'service' object as needed
        } else {
            System.out.println("Failed to connect to Splunk.");
        }

        // Close the connection when done
        if (service != null) {
            service.logout(); // Logout from the service
            // service.close(); // Close the service connection
        }
    }
}

When I run the above code to connect to my local Splunk, it works fine with my local Splunk credentials. But when I try the same code in my VM with the actual Splunk Cloud host, username and password to connect to Splunk and get the logs, it throws an exception: "java.lang.RuntimeException: An established connection was aborted by your host machine".
Hi @dungnq, is it mandatory for your ingestion? This is the reason for the double ingestion: logs arrive from different files. Without crcSalt = <SOURCE>, Splunk doesn't index a log twice. Ciao. Giuseppe
I went through the documentation to check whether there is any specific way to upgrade the RSA SecurID add-on. I didn't find anything specific, so I followed the common add-on upgrade process. As we have a distributed environment, I followed the steps below:
1. Put the add-on in /opt/splunk/apps through the DS UI.
2. Take a backup of the existing add-on.
3. Copy the newly downloaded add-on from /opt/splunk/apps to /opt/splunk/etc/deployment_apps with exactly the same add-on name.
4. Reload the DS.
And it was updated.
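A rough shell sketch of steps 2-4, assuming a placeholder add-on directory name of Splunk_TA_rsa-securid and the paths named above (adjust to your environment; on many installs the deployment directory is $SPLUNK_HOME/etc/deployment-apps):
# 2. back up the existing add-on
cp -rp /opt/splunk/etc/deployment_apps/Splunk_TA_rsa-securid /tmp/Splunk_TA_rsa-securid.bak
# 3. copy the new version into place under exactly the same name
cp -rp /opt/splunk/apps/Splunk_TA_rsa-securid /opt/splunk/etc/deployment_apps/
# 4. reload the deployment server so clients pick up the change
/opt/splunk/bin/splunk reload deploy-server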
Hi @gcusello, thank you for your response. Currently I have configured the crcSalt parameter in the inputs.conf file:
[monitor:///opt/IBM/WebSphere/AppServer/profiles/APP/test.log*]
index = app
sourcetype = tws:aws:testdev:app
disabled = false
crcSalt = <SOURCE>
Hi @jacknguyen, in this case (frozen buckets), you can use the above filesystem for frozen buckets, but not for hot, warm or cold buckets. Ciao. Giuseppe
Hi @dungnq, are you using crcSalt = <SOURCE> in your inputs.conf? Ciao. Giuseppe
Hi team, I encountered a problem when retrieving data from rotated log files: duplicate events. For example, an event in file test.log.1 has already been retrieved; when it rotates to test.log.2, Splunk retrieves it again. How do I configure Splunk to only retrieve the latest events and not events that have been rotated to another file?
=====
Log4j app information:
log4j.appender.file.File=test.log
log4j.appender.file.MaxFileSize=10000KB
log4j.appender.file.MaxBackupIndex=99
=====
Splunk inputs.conf information:
[monitor:///opt/IBM/WebSphere/AppServer/profiles/APP/test.log*]
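For comparison, a minimal inputs.conf sketch that monitors only the active file rather than the rotated copies; the index and sourcetype are taken from the related post above, and whether this fits depends on how quickly the file rotates:
[monitor:///opt/IBM/WebSphere/AppServer/profiles/APP/test.log]
index = app
sourcetype = tws:aws:testdev:app
disabled = false
# note: no crcSalt = <SOURCE>, so the default CRC check can recognize content Splunk has already read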
Well, I mean for frozen logs: if I save them on one indexer and use it like a backup for the Splunk indexer cluster, is that OK? Or can each indexer only use its own frozen buckets? Sorry, the question was not clear.
It's... a bit more complicated. Both kvstore and csv-based lookups are performed internally by Splunk. There are some differences though - see the details here - https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/ It gets more complicated if you want to use a lookup  early on in the search pipeline when the processing is still being done on indexers - depending on the particular collection's configuration the data might either be replicated as a part of knowledge bundle in csv form to indexers or the search might be forced to the SH tier (losing the benefits of distributed processing).
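To illustrate the replication point, a minimal sketch of a KV store lookup definition; the names my_collection and my_kv_lookup and the field list are hypothetical, not from the post:
collections.conf:
[my_collection]
# replicate the collection to indexers (shipped as CSV in the knowledge bundle) so it can run in distributed search
replicate = true
transforms.conf:
[my_kv_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, host, owner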