All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Depends on your environment. If you have an all-in-one installation, the easiest method would be to go to settings->indexes
Hello, I need urgent help. I am using the REST API Modular Input and I am not able to set the parameters for event breaking. Below is a sample log:

{ "User" : [ { "record_id" : "2", "email_address" : "dsfsdf@dfdf.net", "email_address_id" : "", "email_type" : "", "email_creation_date" : "", "email_last_update_date" : "2024-08-23T05:28:43.091+00:00", "user_id" : "54216542", "username" : "Audit.Test1", "suspended" : false, "person_id" : "", "credentials_email_sent" : "", "user_guid" : "21SD6F546S2SD5F46", "user_creation_date" : "2024-08-23T05:28:42.000+00:00", "user_last_update_date" : "2024-08-23T05:28:44.000+00:00" }, { "record_id" : "3", "email_address" : "XDCFSD@dfdf.net", "email_address_id" : "", "email_type" : "", "email_creation_date" : "", "email_last_update_date" : "2024-08-28T06:42:43.736+00:00", "user_id" : "300000019394603", "username" : "Assessment.Integration", "suspended" : false, "person_id" : "", "credentials_email_sent" : "", "user_guid" : "21SD6F546S2SD5F46545SDS45S", "user_creation_date" : "2024-08-28T06:42:43.000+00:00", "user_last_update_date" : "2024-08-28T06:42:47.000+00:00" }, { "record_id" : "1", "email_address" : "dfds@dfwsfe.com", "email_address_id" : "", "email_type" : "", "email_creation_date" : "", "email_last_update_date" : "2024-08-06T13:27:34.085+00:00", "user_id" : "5612156498213", "username" : "dfsv", "suspended" : false, "person_id" : "56121564963", "credentials_email_sent" : "", "user_guid" : "D564FSD2F8WEGV216S", "user_creation_date" : "2024-08-06T13:29:00.000+00:00", "user_last_update_date" : "2024-08-06T13:29:47.224+00:00" } ]}
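One way to break a JSON array like this into one event per user record is a LINE_BREAKER on the object boundary in props.conf. The sourcetype name below is a placeholder and the regexes assume the exact array layout shown above; treat this as a sketch to adapt, not a tested config:

```
[my_rest_api_sourcetype]
SHOULD_LINEMERGE = false
# Break between "}, {" so each user object becomes its own event
LINE_BREAKER = \}(\s*,\s*)\{
TRUNCATE = 0
KV_MODE = json
# Optionally strip the leading {"User":[ wrapper and the trailing ]}
# so every event is a clean, self-contained JSON object
SEDCMD-strip_head = s/^\{\s*"User"\s*:\s*\[//
SEDCMD-strip_tail = s/\]\s*\}$//
```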
Please execute and share the output. splunk btool list authentication --debug  
Hi @Alex_Rus, what's the result of running from cmd: dir C:\MyFolder\MyFolder1\* ? If you get no results, maybe the path isn't correct, or maybe there's another issue: could the data be the same as the data from another input? If they are the same, even if they come from a different file, Splunk by default doesn't index a log twice. Ciao. Giuseppe
Please set testmode=true in your collect command and please post the outcome. 
The problem is that the data from the hosts that write their logs to a mounted disk does not reach Splunk.
Hi, thanks for your help. I tried the following configuration in my transforms.conf:

[remove_logoff]
INGEST_EVAL = queue=if(match(_raw,"EventCode=4634") AND match(_raw,"Security\sID:[\s]+.*\$"), "nullQueue", queue)

props.conf:

[WinEventLog]
TRANSFORMS-remove_computer_logoff = remove_logoff

But after I run the query, I still get the unwanted logs. I tried the same match in search as well, to check that the regexes were right, and everything seems fine:

index=* sourcetype=WinEventLog
| eval result=if(match(_raw,"EventCode=4634") AND match(_raw,"Security\sID:[\s]+.*\$"), "Filter", "No need to filter this log")
| stats count by host, result

Am I missing something?

P.S. I cannot do a blacklist directly on the hosts.
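If the INGEST_EVAL route keeps passing events through, an alternative worth trying is the classic REGEX-based nullQueue transform. The regex below simply mirrors the two match() conditions from the post above and is untested against real events; also note that either approach must run where parsing happens (an indexer or heavy forwarder), not on a universal forwarder:

```
[remove_logoff]
# (?ms) lets . cross newlines in multi-line Windows events;
# the trailing \$ matches the literal $ of machine accounts
REGEX = (?ms)EventCode=4634.*Security\sID:[\s]+.*\$
DEST_KEY = queue
FORMAT = nullQueue
```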
Hi @Alex_Rus , What's the problem? you can have two different stanzas for your two different inputs with the same other parameters. Ciao. Giuseppe  
Yes, it is a typo; in my inputs.conf I got it right.
Hi @Hiroshi, the issue should be solved, but the URL has changed: https://splunk.my.site.com Ciao. Giuseppe
Hi @Alex_Rus, I don't know if it's a typo, but you have to use backslashes in Windows paths:

[monitor://C:\MyFolder\MyFolder1\*]
disabled = 0
index = MyIndex1
sourcetype = MySourcetype1

[monitor://C:\Program Files\Microsoft\Exchange Server\...\*]
disabled = 0
index = MyIndex1
sourcetype = MySourcetype1#

Ciao. Giuseppe
Hi, richgalloway! Thank you for your answer.  I wrote this information in response to the previous question from Giuseppe.
Hi, Giuseppe! Thank you for your answer. Let me explain the situation. The application is configured to collect logs from four hosts. On two of them the data is collected in the internal storage C:\Program Files\Microsoft\Exchange Server\... and the data comes from these hosts correctly. On the other two hosts the data is collected in a folder that is moved to a separate disk, C:\MyFolder\MyFolder1\*. My stanzas look like:

[monitor://C:/MyFolder\MyFolder1/*]
disabled = 0
index = MyIndex1
sourcetype = MySourcetype1

[monitor://C:/Program Files/Microsoft/Exchange Server/.../*]
disabled = 0
index = MyIndex1
sourcetype = MySourcetype1#
@KendallW

INFO ThruputProcessor [2963 parsing] - Current data throughput (5125 kb/s) has reached maxKBps. As a result, data forwarding may be throttled. Consider increasing the value of maxKBps in limits.conf.

We will try increasing the limits.
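For reference, the setting the log message points at lives in limits.conf on the forwarder. The value below is only an example (the default on a universal forwarder is much lower; 0 means unlimited, which is usually not what you want on shared links):

```
# limits.conf on the forwarder
[thruput]
maxKBps = 10240
```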
@Hiroshi We are able to access the partner support portal now. Please check: go to the partner portal at https://splunk.my.site.com/partner/s/ and open "My Cases". Karma points are appreciated!
Hi, I have a requirement where I have a table on my dashboard, created using Dashboard Studio. I need to redirect to another dashboard when a Column A cell is clicked. Also, when a user clicks a Column C cell, the user should be redirected to a URL. How can we achieve this linking of a dashboard and a URL in the same table, based on the column clicked?
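As a hedged sketch of where this is configured: Dashboard Studio drilldowns live in the dashboard definition JSON as eventHandlers on the visualization, with drilldown.customUrl and drilldown.linkToDashboard as the two relevant handler types. The viz/datasource IDs, URL, and token name below are placeholders; per-column routing (one handler for Column A, another for Column C) may not be directly supported on a single table and may need a token-based workaround depending on your Splunk version:

```
"viz_table1": {
  "type": "splunk.table",
  "dataSources": { "primary": "ds_search1" },
  "eventHandlers": [
    {
      "type": "drilldown.customUrl",
      "options": {
        "url": "https://example.com/lookup?id=$row.ColumnC.value$",
        "newTab": true
      }
    }
  ]
}
```

The dashboard-link counterpart uses "type": "drilldown.linkToDashboard" with options naming the target app and dashboard.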
Hello @ITGSOC, yes, you can migrate a Splunk Enterprise server from a virtual machine (VM) to a physical server. Before starting the migration, make sure to take a complete backup of your Splunk data, configurations, and any custom settings. Ensure that the physical server meets the hardware requirements for running Splunk and that the operating system is compatible with the version of Splunk you're using. Transfer your configuration files and data from the virtual machine to the physical server. This typically includes the etc directory within your Splunk installation ($SPLUNK_HOME/etc). Be sure to copy over apps and any custom configurations.

Refer to this: https://community.splunk.com/t5/Deployment-Architecture/What-is-the-process-to-move-an-infrastructure-from-virtual/m-p/110175
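The copy-the-etc-directory step can be sketched in shell. The paths here are scratch directories standing in for $SPLUNK_HOME on each machine so the sketch runs end to end; on real hosts you would stop Splunk first and move the archive between servers with scp:

```shell
# Scratch stand-ins for $SPLUNK_HOME on the VM and on the physical server
OLD_SPLUNK_HOME=$(mktemp -d)
NEW_SPLUNK_HOME=$(mktemp -d)

# Fake a small config tree so the sketch is runnable here
mkdir -p "$OLD_SPLUNK_HOME/etc/apps/my_app/local"
printf '[monitor:///var/log/app.log]\nindex = main\n' \
  > "$OLD_SPLUNK_HOME/etc/apps/my_app/local/inputs.conf"

# 1. (On the VM) stop Splunk so files are consistent, then archive etc:
#    $OLD_SPLUNK_HOME/bin/splunk stop
ARCHIVE=$(mktemp)
tar -czf "$ARCHIVE" -C "$OLD_SPLUNK_HOME" etc

# 2. Copy the archive across, e.g.:
#    scp "$ARCHIVE" admin@physical-server:/tmp/splunk_etc.tar.gz

# 3. (On the physical server) extract into the fresh installation
#    and start Splunk:
tar -xzf "$ARCHIVE" -C "$NEW_SPLUNK_HOME"
#    $NEW_SPLUNK_HOME/bin/splunk start

ls "$NEW_SPLUNK_HOME/etc/apps/my_app/local/inputs.conf"
```

Indexed data (the var/lib buckets) is a separate, larger copy and is covered in the linked thread.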
Can I migrate the Splunk Enterprise server from virtual machine to physical server?
Hi. I'm trying to monitor MSK metrics with a CloudWatch input. There is no AWS/Kafka entry in the Namespace list, so I just typed it in and set the dimension value as `[{}]`. But I can't get any metrics from the CloudWatch input. Please help me! I'm using Add-on version 7.0.0.
Yes, the SSH2 message is key. The actual solution kind of depends on your exact use case/requirement. If you don't particularly care whether the user had multiple failures, transaction will do just fine. Assuming your sessionID is unique for each connection and that you don't care whether the attempted user name is the same, simply add startswith and endswith.

index=honeypot sourcetype=honeypotLogs
| rex "\s(?<action>Connected) to (?<IP>\S+)"
| rex "\sUser \"(?<user>\S+)\" (?<action>logged in)"
| rex "\sSSH2_MSG_(?<ssh2_msg_type>\w+)"
| rex ": (?<ssh2_message>.+)"
| rex field=ssh2_message "user: (?<user>\S+)"
| transaction sessionID startswith=ssh2_msg_type=USERAUTH_FAILURE endswith=ssh2_msg_type=USERAUTH_SUCCESS

The above maybe goes a little overboard in extraction, but usually these semantic elements can be of interest. If you care about the attempted user name, you can add user to transaction. If you care about multiple failed attempts, streamstats could be a better approach. The following is an extended emulation; it shows that transaction will only pick up sessions with at least one USERAUTH_FAILURE, and that transaction will only include the last event with USERAUTH_FAILURE.
| makeresults format=csv data="_raw [02] Tue 27Aug24 15:20:56 - (143323) Connected to 1.2.3.4 [30] Tue 27Aug24 15:20:56 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob [31] Tue 27Aug24 15:20:56 - (143323) SSH2_MSG_USERAUTH_FAILURE [30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob [31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_FAILURE [30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob [02] Tue 27Aug24 15:20:57 - (143323) User \"bob\" logged in [31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_SUCCESS: successful login [02] Tue 27Aug24 15:20:58 - (143523) Connected to 1.2.3.4 [30] Tue 27Aug24 15:20:58 - (143523) SSH2_MSG_USERAUTH_REQUEST: user: alice [02] Tue 27Aug24 15:20:58 - (143523) User \"alice\" logged in [31] Tue 27Aug24 15:20:58 - (143523) SSH2_MSG_USERAUTH_SUCCESS: successful login" | rex "^(\S+\s+){2}(?<_time>\S+\s+\S+) - \((?<sessionID>\d+)" | eval _time = strptime(_time, "%d%b%y %T") | reverse ``` the above emulates index=honeypot sourcetype=honeypotLogs ``` Play with the emulation and compare with real data.
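As a sketch of the streamstats alternative mentioned above (untested; it reuses the sessionID and ssh2_msg_type extractions from the earlier search and keeps each successful login together with the count of failures that preceded it in the same session):

```
index=honeypot sourcetype=honeypotLogs
| rex "\sSSH2_MSG_(?<ssh2_msg_type>\w+)"
| sort 0 _time
| streamstats count(eval(ssh2_msg_type=="USERAUTH_FAILURE")) as failed_attempts by sessionID
| where ssh2_msg_type=="USERAUTH_SUCCESS" AND failed_attempts > 0
```

Unlike transaction, this retains the number of failed attempts per session as a field you can threshold or chart.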